prompt | response |
---|---|
<p>I used the following guide to set up my chaostoolkit cluster: <a href="https://chaostoolkit.org/deployment/k8s/operator/" rel="nofollow noreferrer">https://chaostoolkit.org/deployment/k8s/operator/</a></p>
<p>I am attempting to kill a pod using Kubernetes, however I get the following error:</p>
<pre><code>HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb\" cannot list resource \"pods\" in API group \"\" in the namespace \"task-dispatcher\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
</code></pre>
<p>I set my serviceAccountName to a service account with an RBAC role that I created, but for some reason Kubernetes defaults to "system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb".</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-run
data:
  experiment.yaml: |
    ---
    version: 1.0.0
    title: Terminate Pod Experiment
    description: If a pod gets terminated, a new one should be created in its place in a reasonable amount of time.
    tags: ["kubernetes"]
    secrets:
      k8s:
        KUBERNETES_CONTEXT: "docker-desktop"
    method:
    - type: action
      name: terminate-k8s-pod
      provider:
        type: python
        module: chaosk8s.pod.actions
        func: terminate_pods
        arguments:
          label_selector: ''
          name_pattern: my-release-rabbitmq-[0-9]$
          rand: true
          ns: default
---
apiVersion: chaostoolkit.org/v1
kind: ChaosToolkitExperiment
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-crd
spec:
  serviceAccountName: test-user
  automountServiceAccountToken: false
  pod:
    image: chaostoolkit/chaostoolkit:full
    imagePullPolicy: IfNotPresent
    experiment:
      configMapName: my-chaos-exp
      configMapExperimentFileName: experiment.yaml
    restartPolicy: Never
</code></pre>
| <p>The error you shared shows the experiment running under the default service account generated by the operator ("chaostoolkit-b3af262edb" in the "chaostoolkit-run" namespace). It looks like the role associated with it does not have the proper permissions.</p>
<p>The service account "test-user", which is used in the ChaosToolkitExperiment definition, should be bound to a role that allows listing and deleting pods in the target namespace.</p>
<p>Please specify a service account that has the proper role access.</p>
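<p>For reference, a minimal sketch of the kind of Role and RoleBinding that service account would need. The names here are made up, and it assumes the experiment targets the "task-dispatcher" namespace mentioned in the error while "test-user" lives in "chaostoolkit-run"; adjust to your setup:</p>
<pre><code># Role in the namespace the experiment acts on
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: chaostoolkit-pod-killer
  namespace: task-dispatcher
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
# Bind it to the service account the experiment pod runs as
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chaostoolkit-pod-killer-binding
  namespace: task-dispatcher
subjects:
- kind: ServiceAccount
  name: test-user
  namespace: chaostoolkit-run
roleRef:
  kind: Role
  name: chaostoolkit-pod-killer
  apiGroup: rbac.authorization.k8s.io
</code></pre>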
|
<p>I have a google kubernetes cluster running and I am trying to manually scale some pods with the python-client kubernetes SDK. I use the following command on my terminal to get my google account credentials:</p>
<pre><code>gcloud auth login
</code></pre>
<p>Next, I connect to my cluster using the default command to get locally my kube-config:</p>
<pre><code>gcloud container clusters get-credentials ${clusterName} --zone ${zoneName} --project ${projectName}
</code></pre>
<p>Using the python SDK I load my configuration:</p>
<pre><code>from kubernetes import client, config
import kubernetes.client
config.load_kube_config()
v1 = client.CoreV1Api()
api = client.CustomObjectsApi()
k8s_apps_v1 = client.AppsV1Api()
</code></pre>
<p>With this code I have my cluster info and I can scale my pods as needed. This works for around 30-45 mins and after that when I try to make API requests to scale the pods in my cluster I get a response with the following error:</p>
<pre><code>kubernetes.client.exceptions.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Audit-Id': '697f82b7-4db9-46c3-b873-cef49a45bb19', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Tue, 31 May 2022 01:20:53 GMT', 'Content-Length': '129'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
</code></pre>
<p>Why do I get unauthorized and can't make API calls anymore, and how can I fix this?</p>
| <p>To resolve it, you should refresh the token before calling the API. This <a href="https://github.com/kubernetes-client/python-base/blob/474e9fb32293fa05098e920967bb0e0645182d5b/config/kube_config.py#L625" rel="nofollow noreferrer">doc</a> is useful to check if the token expired, the function <code>load_gcp_token</code> refreshes the GCP token only if it expires.</p>
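<p>As a rough sketch (not the library's official pattern): <code>load_kube_config</code> and <code>patch_namespaced_deployment_scale</code> are real client calls, but the retry helper and names below are assumptions. The idea is simply to reload the kubeconfig, which lets the GCP auth provider refresh an expired token, and rebuild the client when a 401 shows up:</p>
<pre><code>from kubernetes import client, config
from kubernetes.client.exceptions import ApiException

def get_apps_api():
    # Re-reads ~/.kube/config, which gives the gcp auth provider a chance
    # to refresh its token if it has expired
    config.load_kube_config()
    return client.AppsV1Api()

def scale_deployment(name, namespace, replicas):
    # Hypothetical helper: retry once with freshly loaded credentials on a 401
    try:
        api = get_apps_api()
        return api.patch_namespaced_deployment_scale(
            name, namespace, {"spec": {"replicas": replicas}})
    except ApiException as e:
        if e.status == 401:
            api = get_apps_api()  # reload config and retry once
            return api.patch_namespaced_deployment_scale(
                name, namespace, {"spec": {"replicas": replicas}})
        raise
</code></pre>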
|
<p>I want to create a cluster of RESTful web APIs in AWS EKS and be able to access them through a single IP (allowing kubernetes to load balance requests to each). I have followed the procedure explained the this <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types" rel="nofollow noreferrer">link</a> and have set up an example <code>nginx</code> deployment as shown in the following image:</p>
<p><a href="https://i.stack.imgur.com/tPQgu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tPQgu.png" alt="enter image description here" /></a></p>
<p>The problem is that when I access the example <code>nginx</code> deployment via <code>172.31.22.183</code> it works just fine, but when I try to use the cluster IP <code>10.100.145.181</code> it does not yield any response in such a way that it seems to be unreachable.</p>
<p>What's the purpose of that cluster ip then and how can I use it to achieve what I need?</p>
| <p><code>ClusterIP</code> is an IP that is only accessible inside the cluster. You cannot hit it from outside the cluster unless you use <code>kubectl port-forward</code>.</p>
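<p>For example (the service name and ports below are placeholders, not taken from your setup):</p>
<pre><code># Forward local port 8080 to port 80 of the service's ClusterIP
kubectl port-forward svc/my-nginx 8080:80
# then open http://localhost:8080 from your machine
</code></pre>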
|
<p>I have a NGINX Ingress controller and multiple ingress resources attached to each pod in GKE. I want to know if I can attach more than one ingress resource to a single pod.</p>
<p>For example I have a Java application running on a pod and I have attached an ingress resource to it. The application can be accessed at <code>example.dev.com</code> and it already has a ssl cert attached to it which I don't want to touch. I want the same application to be accessed via <code>example.com</code> by attaching a new Ingress resource and applying a new SSL certificate and hostname <code>example.com</code> .
Is it possible ?
Any leads will be appreciated.</p>
| <p>Yes you can do it.</p>
<blockquote>
<p>For example I have a Java application running on a pod and I have
attached an ingress resource to it. The application can be accessed at
example.dev.com and it already has a ssl cert attached to it which I
don't want to touch. I want the same application to be accessed via
example.com by attaching a new Ingress resource and applying a new SSL
certificate and hostname example.com . Is it possible ? Any leads will
be appreciated.</p>
</blockquote>
<p>Create another Ingress with the <strong>SSL</strong> cert stored in a secret, and attach that secret to the new Ingress.</p>
<p>Make sure you give the new Ingress a different name so it won't overwrite the existing one.</p>
<p>You can keep both configs inside a single Ingress or create separate Ingress resources:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  tls:
  - hosts:
      - example.com
      - example.dev.com
    secretName: testsecret-tls
  rules:
  - host: "example.dev.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/foo"
        backend:
          service:
            name: service1
            port:
              number: 80
</code></pre>
<p>If you have a single wildcard <strong>SSL</strong> certificate:</p>
<pre><code>tls:
- hosts:
    - example.dev.com
    - example.com
  secretName: testsecret-tls
</code></pre>
<p>If you don't want both together, just create one Ingress for <code>example.com</code> as you want:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - hosts:
      - example.com
    secretName: testsecret-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
</code></pre>
<p>Ref :<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
|
<p>Here, I have a python file with the name main.py that I want to execute.</p>
<p>The main.py file contains:</p>
<pre><code>import time
import sys

def hello():
    for i in range(10):
        print("\n"*4, "Hello from AMAN!", "\n"*4)
        time.sleep(2)
    sys.exit(0)

if __name__ == "__main__":
    hello()
</code></pre>
<p>and I have created a Docker image with the following Dockerfile:</p>
<pre><code>FROM python:3.7
RUN mkdir /app
WORKDIR /app
ADD . /app/
RUN pip install -r requirements.txt
CMD ["python", "/app/main.py"]
</code></pre>
<p>I want to know whether there is any way to run the python-app image on an AWS EKS cluster in pods and get the output <strong>"Hello from AMAN!"</strong> <strong>(whatever this code prints)</strong> back to my system / print it in the console.</p>
<p>Since I am not using Flask I cannot see the application in a web browser, so this app will run in pods, but I am not able to figure out how to get the output back to the terminal/SSH or in the form of some file.</p>
<p>Please help me find a way to get the output of the running app back <strong>without using any web framework</strong> like Flask/Node etc.</p>
| <p>You can use the command</p>
<pre><code>kubectl logs <POD name>
</code></pre>
<p>It will show you the output.</p>
<p>If the pod is running on EKS, you can get the logs in different ways: write them to a file, push them to AWS CloudWatch or other external logging systems like ELK, or check them with kubectl.</p>
<p>Kubernetes by default stores whatever containers write to <strong>stdout</strong> and <strong>stderr</strong>.</p>
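<p>As a sketch of the full flow (the pod name and image reference are placeholders, not from the question):</p>
<pre><code># Run the image as a bare pod on the cluster (image path is a placeholder)
kubectl run python-app --image=your-registry/python-app:latest --restart=Never

# Stream whatever the script prints to stdout back to your terminal
kubectl logs -f python-app

# Or save the output to a file on your machine
kubectl logs python-app > output.txt
</code></pre>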
|
<p>I am new to Elasticsearch. I have a 3 data node and 3 master node Elasticsearch cluster deployed in Kubernetes. It was working well until recently, when there was a large intake of data. Now I am at a stage where I need to set the index refresh interval to 120s to allow optimal usage of the cluster. I am able to do it at the individual index level <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html#reset-index-setting" rel="nofollow noreferrer">using this method</a>, but I am not able to do it at the cluster level. I have a process that creates a new index every day, and I do not have more than 15 indexes in total at any point of time in the cluster. So, currently, I am doing it manually using the method mentioned above / the Kibana UI. I tried to do this a couple of ways; both failed.</p>
<ol>
<li>Used the settings PUT method to force the index settings at the global level and it gives an error of no requests in the range</li>
</ol>
<p><code>PUT /_cluster/settings -d { "index" : { "refresh_interval" : "120s" } }</code></p>
<ol start="2">
<li>I used the elasticsearch yaml to set this value for the data node and the data node fails to come up.</li>
</ol>
<p>elasticsearch yaml</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elastic-cluster
  namespace: elastic-system
spec:
  version: 7.6.2
  nodeSets:
  - name: master
    count: 3
    config:
      node.master: true
      node.data: false
      node.ingest: false
      node.ml: false
      node.store.allow_mmap: false
    podTemplate:
      ...
  - name: data1
    count: 3
    config:
      node.master: false
      node.data: true
      node.ingest: true
      node.ml: false
      node.store.allow_mmap: false
      index.refresh_interval: 120s # I added it here
    podTemplate:
      ...
</code></pre>
<p>There is a third way, through the Kibana UI -> Settings -> Elasticsearch -> Index Management -> Index Templates. But there is no index template for me to start with. Nevertheless, Elasticsearch creates an index daily with the date, so I do not want to mess with the existing template.</p>
<p>Can anyone suggest a better way to do this?</p>
| <p>The recommended way is to create an index template that applies default settings when an index is created.
<a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html</a></p>
<p>Old indexes are not affected by this template - only new ones.</p>
<p>If you don't want to create a template, you have to set the refresh interval manually.</p>
<p>You can try setting index settings with a wildcard like this:
<code>PUT /my-index-2022-*/_settings</code></p>
<p><code>/_cluster/settings</code> has no relation to index settings - it's configuration settings for the cluster, so don't try to do index operations with that URL.</p>
<p>The same applies to the YAML file - the configuration there has no relationship with the index settings.</p>
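<p>For instance, a sketch of a legacy index template (the <code>_template</code> API available in 7.6; newer versions use <code>_index_template</code>) that would apply the refresh interval to every newly created daily index, assuming they share a common name prefix:</p>
<pre><code>PUT _template/daily-refresh
{
  "index_patterns": ["my-index-*"],
  "settings": {
    "index": {
      "refresh_interval": "120s"
    }
  }
}
</code></pre>
<p>And the wildcard settings update mentioned above would take a body like:</p>
<pre><code>PUT /my-index-2022-*/_settings
{
  "index": {
    "refresh_interval": "120s"
  }
}
</code></pre>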
|
<p>I am trying to start a pod in privileged mode using the following manifest but it doesn't work for me.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ftp
spec:
  privileged: true
  hostNetwork: true
  containers:
  - name: ftp
    image: 84d3f7ba5876/ftp-run
</code></pre>
| <p><strong><code>privileged: true</code></strong> needs to be in <strong><code>securityContext</code></strong> in the spec section of the pod template.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ftp
spec:
  hostNetwork: true
  containers:
  - name: ftp
    image: 84d3f7ba5876/ftp-run
    securityContext:
      privileged: true
</code></pre>
<p>You can refer to this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context" rel="nofollow noreferrer">doc</a> for detailed information for privileged mode</p>
|
<p>I want to create a cluster of RESTful web APIs in AWS EKS and be able to access them through a single IP (allowing kubernetes to load balance requests to each). I have followed the procedure explained the this <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types" rel="nofollow noreferrer">link</a> and have set up an example <code>nginx</code> deployment as shown in the following image:</p>
<p><a href="https://i.stack.imgur.com/tPQgu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tPQgu.png" alt="enter image description here" /></a></p>
<p>The problem is that when I access the example <code>nginx</code> deployment via <code>172.31.22.183</code> it works just fine, but when I try to use the cluster IP <code>10.100.145.181</code> it does not yield any response in such a way that it seems to be unreachable.</p>
<p>What's the purpose of that cluster ip then and how can I use it to achieve what I need?</p>
| <blockquote>
<p>What's the purpose of that cluster ip then and how can I use it to
achieve what I need?</p>
</blockquote>
<p><code>ClusterIP</code> is a local IP that is used internally in the cluster; you can use it to access the application from inside the cluster.</p>
<p>The endpoint IP that you got, however, might be external, so you can use it to access the application from outside.</p>
<blockquote>
<p>AWS EKS and be able to access them through a single IP (allowing
kubernetes to load balance requests to each)</p>
</blockquote>
<p>For this, the best practice is to use an ingress, API gateway or service mesh.</p>
<p>An ingress is a single point where all your requests come in; it load balances and forwards the traffic internally inside the cluster.</p>
<p>Consider an ingress to be like a load balancer: a single point of entry into the cluster.</p>
<p><strong>Ingress</strong> : <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p>AWS Example : <a href="https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/</a></p>
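<p>As a rough sketch, a single Ingress in front of the example deployment might look like this (the host and service name here are assumptions, not taken from your setup):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deployment   # the ClusterIP service in front of your pods
            port:
              number: 80
</code></pre>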
|
<p>I logged in using docker login command in my machine.</p>
<p>then I tried to run the <code>kubectl</code> command to apply a <code>yaml</code> file:</p>
<p><code>kubectl apply -f manifests/1_helloworld_deploy.yaml</code></p>
<p>but this failed with error :</p>
<blockquote>
<p>Warning Failed 20s (x2 over 35s) kubelet Failed to pull image "nginx:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code <a href="https://registry-1.docker.io/v2/library/nginx/manifests/sha256:89ea560b277f54022cf0b2e718d83a9377095333f8890e31835f615922071ddc" rel="nofollow noreferrer">https://registry-1.docker.io/v2/library/nginx/manifests/sha256:89ea560b277f54022cf0b2e718d83a9377095333f8890e31835f615922071ddc</a>: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: <a href="https://www.docker.com/increase-rate-limit" rel="nofollow noreferrer">https://www.docker.com/increase-rate-limit</a></p>
</blockquote>
<p>Now, I already logged in using the docker username and account, still I'm getting a pull rate error.</p>
<p>What should I do to make this working for <code>.yaml</code> file also?</p>
| <ul>
<li>When you login to docker on a machine , its credentials will be saved in a file named <code>~/.docker/config.json</code></li>
<li>We need to explicitly instruct Kubernetes to use these credentials while pulling images.</li>
<li>For that we need to create a secret with the contents of <code>~/.docker/config.json</code> and reference it as <code>imagePullSecrets</code> in the yaml file.</li>
</ul>
<p>Following is a sample procedure :</p>
<pre><code>kubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=/root/.docker/config.json
</code></pre>
<p>In the pod spec, reference it as an image pull secret as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: boo
spec:
  containers:
  - name: boo
    image: busybox
  imagePullSecrets:
  - name: my-secret
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">Detailed Documentation</a></p>
|
<p>I'm trying to upgrade some GKE clusters from 1.21 to 1.22 and I'm getting some warnings about deprecated APIs. I am also running Istio 1.12.1 in my cluster.</p>
<p>One of them is causing me some concerns:</p>
<p><code>/apis/extensions/v1beta1/ingresses</code></p>
<p>I was surprised to see this warning because we are up to date with our deployments. We don't use Ingresses.</p>
<p>Further deep diving, I got the below details:</p>
<pre><code>$ kubectl get --raw /apis/extensions/v1beta1/ingresses | jq
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
{
  "kind": "IngressList",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "resourceVersion": "191638911"
  },
  "items": []
}
</code></pre>
<p>It seems there is an IngressList that calls the old API. I tried deleting it:</p>
<pre><code>$ kubectl delete --raw /apis/extensions/v1beta1/ingresses
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
</code></pre>
<p>Neither able to delete it, nor able to upgrade.</p>
<p>Any suggestions would be really helpful.</p>
<p>[Update]: My GKE cluster got updated to <code>1.21.11-gke.1900</code> and after that the warning messages are gone.</p>
| <p>We also upgraded the cluster/node version from 1.21 to 1.22 directly from GCP, which successfully upgraded both the nodes and the cluster.</p>
<p>Even after upgrading we still get the IngressList from</p>
<pre><code>/apis/extensions/v1beta1/ingresses
</code></pre>
<p>We are going to upgrade our cluster version from 1.22 to 1.23 tomorrow and will update you soon.</p>
|
<p>I'm using Fluent-Bit to ship kubernetes container logs into cloudwatch. <a href="https://github.com/fluent/fluent-bit-kubernetes-logging/blob/master/output/elasticsearch/fluent-bit-configmap.yaml" rel="nofollow noreferrer">This config</a> is working fine. Instead of <code>output-elasticsearch.conf</code> I have following:</p>
<pre><code>output-cloudwatch.conf: |
  [OUTPUT]
      Name              cloudwatch_logs
      Match             *
      region            us-east-1
      log_group_name    /aws/eks/eks-cluster-1234/containers
      log_stream_prefix <kubernetes-namespace>
      auto_create_group On
</code></pre>
<p>How can I grab the <strong>kubernetes namespace</strong> value for this config? So our cloudwatch logs will be little bit organized.</p>
<p>Thank you.</p>
| <p>I had the same issue and I used this raw file to extract the necessary application.* inputs and filters that allow you to use $(tag[0]) as a log stream key.</p>
<p><a href="https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit-compatible.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit-compatible.yaml</a></p>
<p>So once you enable the Kubernetes filter and the necessary filters, you will be able to set your log_stream_name in the [OUTPUT] section: $(kubernetes['container_name'])</p>
<p>Or, in your case, log_stream_prefix: $(kubernetes['namespace'])</p>
<p>Hope this helps.</p>
<p>Edit:</p>
<p>I should also mention that in order to use the tags you need the fluent-bit CloudWatch plugin.
I assumed by default that you are using a fluent-bit image with the latest CloudWatch plugin already in it.
In case this does not work and it turns out you do not have the plugin, here is the link to the fluent-bit image that has it included, from their official ECR repository:</p>
<p><a href="https://gallery.ecr.aws/aws-observability/aws-for-fluent-bit" rel="nofollow noreferrer">https://gallery.ecr.aws/aws-observability/aws-for-fluent-bit</a></p>
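<p>To illustrate, with the Kubernetes filter enabled and the aws-for-fluent-bit <code>cloudwatch</code> (Go) output plugin, the [OUTPUT] section might end up roughly like this. The record keys (for example <code>kubernetes['namespace_name']</code>) depend on how your filter is configured, so treat it as a sketch rather than a drop-in config:</p>
<pre><code>[OUTPUT]
    Name              cloudwatch
    Match             *
    region            us-east-1
    log_group_name    /aws/eks/eks-cluster-1234/containers
    # one log stream per namespace/container (keys are assumptions)
    log_stream_name   $(kubernetes['namespace_name'])-$(kubernetes['container_name'])
    auto_create_group true
</code></pre>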
|
<p>I want my backend service, which is deployed on Kubernetes, to be accessible through an ingress with the path /sso-dev/. For that I have deployed my service to the Kubernetes cluster; the deployment, service and ingress manifests are below. But while accessing the ingress load balancer API with the path /sso-dev/ it throws a "response 404 (backend NotFound), service rules for the path non-existent" error.</p>
<p>I just need help accessing the backend service, which works fine via the Kubernetes load balancer IP.</p>
<p>Here is my ingress configuration:</p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"k8s-be-30969--6d0e236a1c7d6409":"HEALTHY","k8s1-6d0e236a-default-sso-dev-service-80-849fdb46":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s2-fr-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/target-proxy: k8s2-tp-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/url-map: k8s2-um-uwdva40x-default-my-ingress-h98d0sfl
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/backend-protocol":"HTTP","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"sso-dev-service","port":{"number":80}}},"path":"/sso-dev/*","pathType":"ImplementationSpecific"}]}}]}}
nginx.ingress.kubernetes.io/backend-protocol: HTTP
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2022-06-22T12:30:49Z"
finalizers:
- networking.gke.io/ingress-finalizer-V2
generation: 1
managedFields:
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:nginx.ingress.kubernetes.io/backend-protocol: {}
f:nginx.ingress.kubernetes.io/rewrite-target: {}
f:spec:
f:rules: {}
manager: kubectl-client-side-apply
operation: Update
time: "2022-06-22T12:30:49Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:ingress.kubernetes.io/backends: {}
f:ingress.kubernetes.io/forwarding-rule: {}
f:ingress.kubernetes.io/target-proxy: {}
f:ingress.kubernetes.io/url-map: {}
f:finalizers:
.: {}
v:"networking.gke.io/ingress-finalizer-V2": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:32:13Z"
name: my-ingress
namespace: default
resourceVersion: "13073497"
uid: 253e067f-0711-4d24-a706-497692dae4d9
spec:
rules:
- http:
paths:
- backend:
service:
name: sso-dev-service
port:
number: 80
path: /sso-dev/*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- ip: 34.111.49.35
</code></pre>
<p>Deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-06-22T08:52:11Z"
generation: 1
labels:
app: sso-dev
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:containers:
k:{"name":"cent-sha256-1"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:52:11Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T11:51:22Z"
name: sso-dev
namespace: default
resourceVersion: "13051665"
uid: c8732885-b7d8-450c-86c4-19769638eb2a
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: sso-dev
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: sso-dev
spec:
containers:
- image: us-east4-docker.pkg.dev/centegycloud-351515/sso/cent@sha256:64b50553219db358945bf3cd6eb865dd47d0d45664464a9c334602c438bbaed9
imagePullPolicy: IfNotPresent
name: cent-sha256-1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-22T08:52:11Z"
lastUpdateTime: "2022-06-22T08:52:25Z"
message: ReplicaSet "sso-dev-8566f4bc55" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2022-06-22T11:51:22Z"
lastUpdateTime: "2022-06-22T11:51:22Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
readyReplicas: 3
replicas: 3
updatedReplicas: 3
</code></pre>
<p>Service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-6d0e236a-default-sso-dev-service-80-849fdb46"},"zones":["us-central1-c"]}'
creationTimestamp: "2022-06-22T08:53:32Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: sso-dev
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:allocateLoadBalancerNodePorts: {}
f:externalTrafficPolicy: {}
f:internalTrafficPolicy: {}
f:ports:
.: {}
k:{"port":80,"protocol":"TCP"}:
.: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector: {}
f:sessionAffinity: {}
f:type: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:53:32Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"service.kubernetes.io/load-balancer-cleanup": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T08:53:58Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cloud.google.com/neg-status: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:30:49Z"
name: sso-dev-service
namespace: default
resourceVersion: "13071362"
uid: 03b0cbe6-1ed8-4441-b2c5-93ae5803a582
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.32.6.103
clusterIPs:
- 10.32.6.103
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30584
port: 80
protocol: TCP
targetPort: 8080
selector:
app: sso-dev
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 104.197.93.226
</code></pre>
<p><a href="https://i.stack.imgur.com/zEh4m.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zEh4m.png" alt="Load Balancer" /></a></p>
| <p>You need to change the pathType to Prefix as follows, in your ingress:</p>
<pre><code>pathType: Prefix
</code></pre>
<p>Because I noted that you are using <code>pathType: ImplementationSpecific</code>. With this value, the matching depends on the <code>IngressClass</code>, so I think for your case <code>pathType: Prefix</code> should be more helpful. Additionally, you can find more information about the ingress path types supported in Kubernetes in this <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">link</a>.</p>
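<p>A sketch of how that rule might look after the change (note the trailing wildcard is dropped, since with Prefix the path is already matched as a prefix):</p>
<pre><code>rules:
- http:
    paths:
    - path: /sso-dev
      pathType: Prefix
      backend:
        service:
          name: sso-dev-service
          port:
            number: 80
</code></pre>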
|
<p>I have a scenario and was wondering the best way to structure it with Kustomize.</p>
<p>Say I have multiple environments: <code>dev</code>, <code>qa</code>, <code>prod</code></p>
<p>and say I have multiple DCs: <code>OnPrem</code>, <code>AWS</code>, <code>GCP</code></p>
<p>Let's say each DC above has a <code>dev</code>, <code>qa</code>, <code>prod</code> environment.</p>
<p>I have data that is per environment but also per DC. For example, apply this setting to the dev overlays, but apply these other settings if it's AWS.</p>
<p>Is there a way to easily do this without duplication? An example may be: if it's AWS, I want to run an additional container in my pod, and if it's prod I want extra replicas. If it's GCP, I want a different image, but if it's prod, I still want extra replicas.</p>
<p>The below example will have a lot of duplication. I've read you can do multiple bases. Maybe it makes sense to have an <code>AWS</code>, <code>GCP</code>, <code>OnPrem</code> base and then have <code>dev</code>, <code>qa</code>, <code>prod</code> overlays and have multiple Kustomize files for each?</p>
<p>ie</p>
<pre><code>├── base
│   ├── guestbook-ui-deployment.yaml
│   ├── guestbook-ui-svc.yaml
│   └── kustomization.yaml
└── overlay
    ├── dev
    │   ├── aws
    │   │   ├── guestbook-ui-deployment.yaml
    │   │   └── kustomization.yaml
    │   └── gcp
    │       ├── guestbook-ui-deployment.yaml
    │       └── kustomization.yaml
    └── qa
        ├── aws
        │   ├── guestbook-ui-deployment.yaml
        │   └── kustomization.yaml
        └── gcp
            ├── guestbook-ui-deployment.yaml
            └── kustomization.yaml
</code></pre>
| <p>I recommend having an overlay for each combination you want to build. e.g:</p>
<pre><code>└── overlays
    ├── aws-dev
    ├── aws-qa
    └── gcp-dev
</code></pre>
<p>Then you can structure in different ways, such as using components:</p>
<pre><code>└── components
    ├── environments
    │   ├── dev
    │   └── qa
    └── providers
        ├── aws
        └── gcp
</code></pre>
<p>This makes sense because you usually don't create all combinations of possible environments, but only some that make sense to you.</p>
<p>More documentation: <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md</a></p>
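<p>A sketch of what one overlay's kustomization.yaml could look like under this layout (the relative paths are assumed from the trees above):</p>
<pre><code># overlays/aws-dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
components:
  - ../../components/providers/aws
  - ../../components/environments/dev
</code></pre>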
|
<p>I want to deploy an ASP.NET application with MongoDB on Kubernetes locally. Currently the application is working fine with the <code>docker-compose</code>.</p>
<p>In the case of Kubernetes YAML file, I have created a <code>StatefulSet</code> for the MongoDB and a Deployment for the ASP.NET app and also, I have created their respected Services and ConfigMap too. I have attached the complete code below.</p>
<p>I can even see that all the data is loaded into the Mongo database when I use the Mongo-Express deployment, so I am sure that the MongoDB StatefulSet is working fine. Now the only concern is that the .NET app is throwing an exception called "Resource temporarily unavailable".</p>
<p>About the issue: the build works fine when performing docker-compose up, but in the case of the Kubernetes cluster deployment it's throwing this exception:</p>
<blockquote>
<p>fail:<br />
Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware[1]</p>
<p>An unhandled exception has occurred while executing the request.</p>
<p>System.TimeoutException: A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "Automatic", Type : "Unknown", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/mongo:27017" }", EndPoint: "Unspecified/mongo:27017", ReasonChanged: "Heartbeat", State: "Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.</p>
<p>System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (00000001, 11): Resource temporarily unavailable</p>
<p>at System.Net.Dns.InternalGetHostByName(String hostName)<br />
at System.Net.Dns.ResolveCallback(Object context)<br />
--- End of stack trace from previous location where exception was thrown ---<br />
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw(Exception source)<br />
at System.Net.Dns.HostResolutionEndHelper(IAsyncResult asyncResult)<br />
at System.Net.Dns.EndGetHostAddresses(IAsyncResult asyncResult)<br />
at System.Net.Dns.<>c.b__25_1(IAsyncResult asyncResult)<br />
at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)<br />
--- End of stack trace from previous location where exception was thrown ---<br />
at MongoDB.Driver.Core.Connections.TcpStreamFactory.ResolveEndPointsAsync(EndPoint initial)<br />
at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)<br />
at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)<br />
--- End of inner exception stack trace ---<br />
at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)<br />
at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)
at MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)", LastHeartbeatTimestamp: "2022-06-26T16:04:05.7393346Z", LastUpdateTimestamp: "2022-06-26T16:04:05.7393356Z" }] }.
at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)
at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)
at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChanged(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)
at MongoDB.Driver.Core.Clusters.Cluster.SelectServer(IServerSelector selector, CancellationToken cancellationToken)
at MongoDB.Driver.MongoClient.AreSessionsSupportedAfterServerSelection(CancellationToken cancellationToken)
at MongoDB.Driver.MongoClient.AreSessionsSupported(CancellationToken cancellationToken)
at MongoDB.Driver.MongoClient.StartImplicitSession(CancellationToken cancellationToken)
at MongoDB.Driver.OperationExecutor.StartImplicitSession(CancellationToken cancellationToken)
at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)
at MongoDB.Driver.MongoCollectionImpl`1.FindSync[TProjection](FilterDefinition`1 filter, FindOptions`2 options, CancellationToken cancellationToken) at MongoDB.Driver.FindFluent`2.ToCursor(CancellationToken cancellationToken)
at MongoDB.Driver.IAsyncCursorSourceExtensions.FirstOrDefault[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)</p>
</blockquote>
| <p>Pretty sure you need to change the hostname in the MongoDB connection string. Check out the DNS for services section: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
<p>In your case it would be <code>mongodb-service.default.svc.cluster.local</code> if you've deployed everything into the default namespace.</p>
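<p>For example, if the connection string lives in appsettings.json, it might end up looking roughly like this; the section and database names are made up for illustration, only the hostname follows from the answer above:</p>
<pre><code>{
  "MongoDbSettings": {
    "ConnectionString": "mongodb://mongodb-service.default.svc.cluster.local:27017",
    "DatabaseName": "mydb"
  }
}
</code></pre>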
|
<p>I use Windows and I've just downloaded kubectl using link from this instruction: <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/</a></p>
<p>But when I check the version I see a warning:</p>
<pre><code>C:\>kubectl version
WARNING: This version information is deprecated and will be replaced
with the output from kubectl version --short.
Use --output=yaml|json to get the full version.
Client Version: version.Info{
Major:"1",
Minor:"24",
GitVersion:"v1.24.0",
GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0",
GitTreeState:"clean",
BuildDate:"2022-05-03T13:46:05Z",
GoVersion:"go1.18.1",
Compiler:"gc", Platform:"windows/amd64"
}
Kustomize Version: v4.5.4
Server Version: version.Info{
Major:"1",
Minor:"20",
GitVersion:"v1.20.7",
GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2",
GitTreeState:"clean",
BuildDate:"2021-05-12T12:32:49Z",
GoVersion:"go1.15.12",
Compiler:"gc",
Platform:"linux/amd64"
}
WARNING: version difference between client (1.24) and server (1.20)
exceeds the supported minor version skew of +/-1
</code></pre>
<ul>
<li>Should I worry about it?</li>
<li>If so, how can I fix it?</li>
<li>What is the <code>server</code> version?</li>
<li>Is it somehow related to <code>~/.kube/config</code> file?</li>
</ul>
| <p>Server version is the version of Kubernetes the control plane is running.</p>
<p>Client version is the version of your <code>kubectl</code>.</p>
<p>The recommendation is that you keep the two as close as possible to prevent discrepancies between API versions.</p>
<p>For example, if you did a <code>kubectl create --dry-run=client</code> with a 1.24.0 kubectl, it would produce output that is valid for 1.24.0 kubernetes, but the apis it refers to may not be recognised by a 1.20.7 cluster.</p>
<p>Your server is 1.20.7 and your client is 1.24.0. To remove this warning, you need to downgrade your kubectl to 1.20.7 while interacting with this server, or upgrade your cluster to 1.24.0.</p>
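<p>For example, on Windows you could fetch a matching 1.20.7 client with something like the following; the URL follows the pattern used in the official install docs, so double-check the exact version you need:</p>
<pre><code>curl.exe -LO "https://dl.k8s.io/release/v1.20.7/bin/windows/amd64/kubectl.exe"
</code></pre>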
|
<pre><code>{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"ResourceDeploymentFailure","message":"The resource provision operation did not complete within the allowed timeout period."}]}.
</code></pre>
<p>I get this error message whenever I try to deploy my AKS Cluster, no matter if I deploy it through Terraform, The azure portal or Azure CLI.</p>
<p>The config I use is :</p>
<pre><code>az aks create --name Aks-moduleTf --max-count 1 --min-count 1 --network-plugin azure --vnet-subnet-id /subscriptions/<SUBID>/resourceGroups/MyResources/providers/Microsoft.Network/virtualNetworks/MyVnet/subnets/Mysubnet --node-count 1 --node-vm-size Standard_B2s --dns-service-ip X.X.X.X --resource-group MyResources --generate-ssh-keys --enable-cluster-autoscaler --service-cidr X.X.X.X/X
</code></pre>
<p>Thank you for your help.</p>
| <p>The error you are getting is because of an issue with the NSGs (ACLs) of the subnet, which are restricting the traffic flow to the Azure management network that the AKS creation needs.</p>
<p>These NSGs are associated with the subnet in the VNet that you are trying to create the AKS in.</p>
<blockquote>
<p>Apparently, when we created a new AKS resource with all the default options, using a new subnet with no NSGs, it worked.</p>
</blockquote>
<p><strong>Az CLI code</strong></p>
<pre><code>az aks create --resource-group v-rXXXXXtree --name Aks-moduleTf --max-count 1 --min-count 1 --network-plugin azure --vnet-subnet-id /subscriptions/b83cXXXXXXXXXXXXX074c23f/resourceGroups/v-rXXXXXXXXXe/providers/Microsoft.Network/virtualNetworks/Vnet1/subnets/Subnet1 --node-count 1 --node-vm-size Standard_B2s --dns-service-ip 10.2.0.10 --service-cidr 10.2.0.0/24 --generate-ssh-keys --enable-cluster-autoscaler
</code></pre>
<p><strong>Solution</strong>: If you are creating the Azure resource with an existing <code>vnet/subnet</code>, you need to set the subnet's NSG to None (i.e. disable it).</p>
<p><a href="https://i.stack.imgur.com/f1nLq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f1nLq.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/zLrp3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zLrp3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/dDrVt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dDrVt.png" alt="enter image description here" /></a></p>
<p><strong>Reference: You can check this <a href="https://social.msdn.microsoft.com/Forums/en-US/df3400a1-b31a-4159-a242-5f03000c04d2/appsrvenv-creation-times-out-quotthe-resource-provision-operation-did-not-complete-within-the?forum=windowsazurewebsitespreview" rel="nofollow noreferrer">link</a>, where a user faced this issue, went to the Microsoft support team, and found the issue was with the NSG.</strong></p>
|
<p>I did kubeadm init on one machine. I followed all the instructions on networking etc. and ended up with this:</p>
<p><code>kubectl get nodes</code>:</p>
<pre class="lang-bash prettyprint-override"><code>NAME STATUS ROLES AGE VERSION
slchvdvcybld001 Ready control-plane 140m v1.24.2
slchvdvcydtb001 Ready <none> 136m v1.24.2
slchvdvcytst001 Ready <none> 137m v1.24.2
</code></pre>
<p>As you can see, no nodes are Master or worker or similar.</p>
<p>I don't have any special setup; all I did was install it and run init.</p>
<p>There are no errors in logs file. Dashboard is in GREEN and everything is in green.</p>
<p>These are versions of kubectl and so on:</p>
<pre class="lang-bash prettyprint-override"><code>Client Version: v1.24.2
Kustomize Version: v4.5.4
Server Version: v1.24.2
</code></pre>
| <p>Labelling of the master node is deprecated. That's why, when using <code>kubectl get nodes</code>, it shows the role as "control-plane" instead of "control-plane,master".</p>
<p>More details are in the following link:
Kubeadm: <a href="http://git.k8s.io/enhancements/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md" rel="nofollow noreferrer">http://git.k8s.io/enhancements/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md</a></p>
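<p>If you just want something to show up in the ROLES column for the worker nodes (kubectl derives that column from node-role.kubernetes.io/* labels), you can add a label yourself; this is purely cosmetic:</p>
<pre><code># label one of the worker nodes from your output with a "worker" role
kubectl label node slchvdvcydtb001 node-role.kubernetes.io/worker=worker
</code></pre>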
|
<p>I'm trying to get all the external IPs that the pods from Kubernetes can use. Is it possible to check this in the console?</p>
| <p>Pods have no external IP as the nodes are responsible for communication with the Internet. You can check this diagram for more details[1].</p>
<p>It seems what you're referring to here is the internal IP address range that the pods can use.</p>
<p>You can get this information by navigating to <code>β°</code> > <code>Kubernetes Engine</code> > <code>Clusters</code>.</p>
<p>Click the name of your cluster, then scroll to "Networking". It will show you the "Cluster pod address range (default)". You can check this documentation[2] for more details.</p>
<p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#pods" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#pods</a></p>
<p>[2] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ip-allocation" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ip-allocation</a></p>
|
<p>I have a multi-tenant Kubernetes cluster. On it I have an nginx reverse proxy with load balancer and the domain <code>*.example.com</code> points to its IP.</p>
<p>Now, several namespaces are essentially grouped together as project A and project B (according to the different users).</p>
<p>How can I ensure that any service in a namespace with label <code>project=a</code> can have any domain like <code>my-service.project-a.example.com</code>, but not something like <code>my-service.project-b.example.com</code> or <code>my-service.example.com</code>? Please keep in mind that I use NetworkPolicies to isolate the communication between the different projects, though communication with the nginx namespace and the reverse proxy is always possible.</p>
<p>Any ideas would be very welcome.</p>
<hr />
<p><strong>EDIT:</strong></p>
<p>I made some progress, as I have been deploying Gatekeeper to my GKE clusters via Helm charts. Then I was trying to ensure that only Ingress hosts of the form "*.project-name.example.com" should be allowed. For this, I have different namespaces that each have labels "project=a" or similar, and each of these should only be allowed to use ingress hosts of the form "*.a.example.com". Hence I need that project label information for the respective namespaces. I wanted to deploy the following resources:</p>
<pre><code>apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredingress
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredIngress
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredingress

        operations := {"CREATE", "UPDATE"}
        ns := input.review.object.metadata.namespace

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.request.kind.kind == "Ingress"
          not data.kubernetes.namespaces[ns].labels.project
          msg := sprintf("Ingress denied as namespace '%v' is missing 'project' label", [ns])
        }

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.request.kind.kind == "Ingress"
          operations[input.request.operation]
          host := input.request.object.spec.rules[_].host
          project := data.kubernetes.namespaces[ns].labels.project
          not fqdn_matches(host, project)
          msg := sprintf("invalid ingress host %v, has to be of the form *.%v.example.com", [host, project])
        }

        fqdn_matches(str, pattern) {
          str_parts := split(str, ".")
          count(str_parts) == 4
          str_parts[1] == pattern
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredIngress
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Ingress"]
---
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
</code></pre>
<p>However, when I try to set up everything in the cluster I keep getting:</p>
<pre><code>kubectl apply -f constraint_template.yaml
Error from server: error when creating "constraint_template.yaml": admission webhook "validation.gatekeeper.sh" denied the request: invalid ConstraintTemplate: invalid data references: check refs failed on module {template}: errors (2):
disallowed ref data.kubernetes.namespaces[ns].labels.project
disallowed ref data.kubernetes.namespaces[ns].labels.project
</code></pre>
<p>Do you know how to fix that and what I did wrong? Also, in case you happen to know a better approach, just let me know.</p>
| <p>As an alternative to the other answer, you may use a validating webhook to enforce rules based on any parameter present in the request, for example name, namespace, annotations, spec, etc.</p>
<p>The validating webhook can be a service running in the cluster or external to the cluster. This service essentially makes a decision based on the logic we put in it. For every request sent by a user, the API server sends an admission review request to the webhook, and the webhook either approves or rejects the review.</p>
<p>You can read more about it <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-admission-webhooks-on-the-fly" rel="nofollow noreferrer">here</a>, more descriptive post by me <a href="https://technekey.com/what-is-validation-webhook-in-kubernetes/" rel="nofollow noreferrer">here</a>.</p>
|
<p>NOTE: I tried to include screenshots but stackoverflow does not allow me to add images with preview so I included them as links.</p>
<p>I deployed a web app on AWS using kOps.
I have two nodes and set up a Network Load Balancer.</p>
<p><a href="https://i.stack.imgur.com/DBOsC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DBOsC.png" alt="enter image description here" /></a>
The target group of the NLB has two nodes (each node is an instance made from the same template).</p>
<p><a href="https://i.stack.imgur.com/0jwuk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0jwuk.png" alt="enter image description here" /></a>
The load balancer actually seems to be working after checking the ingress-nginx-controller logs.
The requests are being distributed over the pods correctly, and I can access the service via the ingress external address.
But when I go to AWS Console / Target Group, one of the two nodes is marked as unhealthy and I am concerned about that.</p>
<p>Nodes are running correctly.
<a href="https://i.stack.imgur.com/cUBxa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cUBxa.png" alt="enter image description here" /></a></p>
<p>I tried to exec sh into the nginx-controller pod and curl both nodes with their internal IP addresses.
For the healthy node, I get an nginx response, and for the unhealthy node, it times out.
I do not know how nginx ended up running on one of the nodes and not on the other one.</p>
<p>Could anybody let me know the possible reasons?</p>
| <p>I had exactly the same problem before and this should be documented somewhere on AWS or Kubernetes. The answer is copied from <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-unhealthy-worker-node-nginx/" rel="nofollow noreferrer">AWS Premium Support</a></p>
<h3>Short description</h3>
<p>The NGINX Ingress Controller sets the <code>spec.externalTrafficPolicy</code> option to <code>Local</code> to preserve the client IP. Also, requests aren't routed to unhealthy worker nodes. The following troubleshooting implies that you don't need to maintain the cluster IP address or preserve the client IP address.</p>
<h3>Resolution</h3>
<p>If you check the ingress controller service you will see the <code>External Traffic Policy</code> field set to <code>Local</code>.</p>
<pre><code>$ kubectl -n ingress-nginx describe svc ingress-nginx-controller
Output:
Name: ingress-nginx-controller
Namespace: ingress-nginx
...
External Traffic Policy: Local
...
</code></pre>
<p>This Local setting drops packets that are sent to Kubernetes nodes that aren't running instances of the NGINX Ingress Controller. Assign NGINX pods (from the Kubernetes website) to the nodes that you want to schedule the NGINX Ingress Controller on.</p>
<p>Update the <code>spec.externalTrafficPolicy</code> option to <code>Cluster</code>:</p>
<pre><code>$ kubectl -n ingress-nginx patch service ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
Output:
service/ingress-nginx-controller patched
</code></pre>
<p>By default, NodePort services perform source address translation (from the Kubernetes website). For NGINX, this means that the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request. If you set the value of the externalTrafficPolicy field in the ingress-nginx service specification to Cluster, then you can't maintain the source IP address.</p>
|
<p>I installed nginx ingress with this yaml file:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>When deployed, I can see that the endpoints/external IPs are by default all the IPs of my nodes:
<a href="https://i.stack.imgur.com/Cdtuj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cdtuj.png" alt="enter image description here" /></a></p>
<p>but I only want one external IP through which my applications can be accessed.</p>
<p>I tried bind-address (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#bind-address" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#bind-address</a>) in a configuration file and applied it, but it doesn't work. My ConfigMap file:</p>
<pre><code>apiVersion: v1
data:
  bind-address: "192.168.30.16"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
</code></pre>
<p>I tried <code>kubectl edit svc/ingress-nginx-controller -n ingress-nginx</code> to edit the svc adding externalIPs but it still doesn't work.</p>
<p><a href="https://i.stack.imgur.com/RQKdy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RQKdy.png" alt="enter image description here" /></a></p>
<p>The only thing the nginx ingress documentation mentions is <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#external-ips" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#external-ips</a>, but I tried editing the svc; after I changed it, it was set to a single IP, but later it re-added the IPs again. It seems like there is an automatic update mechanism for external IPs in ingress-nginx?</p>
<p>Is there any way to set the nginx ingress external IP to only one of the node IPs? I'm running out of options for googling this. Hope someone can help me.</p>
| <blockquote>
<p>but I only want one external IP through which my applications can be accessed</p>
</blockquote>
<p>If you wish to "control" who can access your service(s), and from which IP/subnet/namespace etc., you should use a <code>NetworkPolicy</code>.</p>
<hr />
<p><a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a></p>
<blockquote>
<p>The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:</p>
<ol>
<li>Other pods that are allowed (exception: a pod cannot block access to itself)</li>
<li>Namespaces that are allowed.</li>
<li>IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)</li>
</ol>
<p>When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.</p>
<p>Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: default
spec:
podSelector:
matchLabels:
role: db
policyTypes:
- Ingress
- Egress
ingress:
- from:
- ipBlock:
cidr: 172.17.0.0/16
except:
- 172.17.1.0/24
- namespaceSelector:
matchLabels:
project: myproject
- podSelector:
matchLabels:
role: frontend
ports:
- protocol: TCP
port: 6379
egress:
- to:
- ipBlock:
cidr: 10.0.0.0/24
ports:
- protocol: TCP
port: 5978
</code></pre>
<hr />
<p><a href="https://i.stack.imgur.com/OufJL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OufJL.jpg" alt="enter image description here" /></a></p>
|
<p>I have ingress controller nginx with basic_auth</p>
<p>In my yaml I have:</p>
<pre><code>annotations:
  nginx.ingress.kubernetes.io/auth-type: basic
  nginx.ingress.kubernetes.io/auth-secret: basic-auth
  nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
</code></pre>
<p>Everything works fine, but I need to exclude some IP addresses from basic_auth, and I can't find a solution for it.</p>
<p>Does ingress offer such a capability?</p>
| <p>You want to use a custom server snippet with allow/deny. I find it a more elegant solution than the <code>if</code> recommendation in the comment above.</p>
<pre><code> annotations:
nginx.ingress.kubernetes.io/server-snippet: |
satisfy any;
allow 66.220.144.0/20;
deny all;
</code></pre>
<p>To give credit where credit is due: This answer is based on <a href="https://serverfault.com/questions/242218/how-to-disable-http-basic-auth-in-nginx-for-a-specific-ip-range">https://serverfault.com/questions/242218/how-to-disable-http-basic-auth-in-nginx-for-a-specific-ip-range</a> .</p>
|
<p>Is there a way to tell cassandra to: "listen to the port 7000 for intra node connection but when you try to connect to other cassandra hosts use another port like 443?"</p>
<p>Basically same as Elasticsearch where you have <code>http.port</code> and <code>http.publish_port</code> <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#http-settings" rel="nofollow noreferrer">Elasticsearch configuration</a></p>
<p>Our scenario is:</p>
<ul>
<li>we have a cassandra cluster deployed in Openshift using the bitnami helm chart</li>
<li>now we want to deploy other cassandra ring/dc in another Openshift cluster</li>
<li>then we want to connect together the 2 cassandra rings deployed in 2 different Openshift clusters</li>
<li>we have a restriction: we have to use an openshift route to expose the traffic outside an openshift cluster, and the openshift route works only on port 443 but I cannot configure the port 443 in docker/cassandra since that is a privileged port</li>
</ul>
| <p>Cassandra nodes use the gossip protocol for internode communications and by default uses port <code>7000</code>:</p>
<pre><code>storage_port: 7000
</code></pre>
<p>This port is exclusively used for <strong>private communications</strong> between nodes which is typically configured on a private address/network (<code>listen_address</code>).</p>
<p>Apps/clients use a completely separate port (CQL client port <code>9042</code>) to connect to the cluster which is usually on a completely separate <strong>public network</strong> (<code>rpc_address</code>):</p>
<pre><code>native_transport_port: 9042
</code></pre>
<p>You can configure whatever port you want/need in <code>conf/cassandra.yaml</code>. Cheers!</p>
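<p>For illustration, these are the relevant knobs in <code>conf/cassandra.yaml</code>; the values below are placeholders, not a recommendation for the OpenShift setup described above:</p>
<pre><code># conf/cassandra.yaml -- illustrative values only
storage_port: 7000               # plain internode (gossip) traffic
ssl_storage_port: 7001           # encrypted internode traffic
native_transport_port: 9042      # CQL client connections
listen_address: 10.0.0.5         # private address used for internode traffic
broadcast_address: 203.0.113.10  # address advertised to nodes in the other DC/ring
rpc_address: 0.0.0.0             # client-facing address
</code></pre>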
|
<p>We deploy some applications in different namespaces with node selectors on different nodes, such as Dev & QA, in one k8s cluster. The issue is that an application deployed on the dev node is also accessible using the qa IP or qa node name, which shouldn't be the case, because the application is exposed using NodePort and runs only in the dev environment.</p>
<p>Also, if I deploy a qa app, it is likewise exposed using the dev node name or IP.</p>
<p>For example, let's say I have a dev.com node & a qa.com node, and I deploy app.dev.com on 3000. It should open using dev.com:3000 inside or outside of the cluster, but it also opens using qa.com:3000, which is not correct in my view.</p>
| <p>Use an Ingress & a LoadBalancer service instead of NodePort; that should fix your issue.</p>
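<p>A minimal sketch of such a Service (names, namespace and ports are placeholders for your app):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-dev
  namespace: dev
spec:
  type: LoadBalancer
  selector:
    app: app-dev
  ports:
  - port: 80
    targetPort: 3000
</code></pre>
<p>Traffic then enters only through the load balancer address (or through an Ingress host rule), instead of being opened on every node's IP the way NodePort does.</p>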
|
<p>I am installing the linkerd helm version with flux and cert-manager for TLS rotation.</p>
<p>cert-manager uses its default config, so there isn't much to say there.</p>
<p>flux and linkerd with this config:</p>
<p>release.yaml</p>
<pre><code>apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: linkerd
namespace: linkerd
spec:
interval: 5m
values:
identity.issuer.scheme: kubernetes.io/tls
installNamespace: false
valuesFrom:
- kind: Secret
name: linkerd-trust-anchor
valuesKey: tls.crt
targetPath: identityTrustAnchorsPEM
chart:
spec:
chart: linkerd2
version: "2.11.2"
sourceRef:
kind: HelmRepository
name: linkerd
namespace: linkerd
interval: 1m
</code></pre>
<p>source.yaml</p>
<pre><code>---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
name: linkerd
namespace: linkerd
spec:
interval: 5m0s
url: https://helm.linkerd.io/stable
</code></pre>
<p>linkerd-trust-anchor.yaml</p>
<pre><code>apiVersion: v1
data:
tls.crt: base64encoded
tls.key: base64encoded
kind: Secret
metadata:
name: linkerd-trust-anchor
namespace: linkerd
type: kubernetes.io/tls
</code></pre>
<p>which was created with:</p>
<pre><code>step certificate create root.linkerd.cluster.local ca.crt ca.key \
--profile root-ca --no-password --insecure
</code></pre>
<p>issuer.yaml</p>
<pre><code>---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: linkerd-trust-anchor
namespace: linkerd
spec:
ca:
secretName: linkerd-trust-anchor
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: linkerd-identity-issuer
namespace: linkerd
spec:
secretName: linkerd-identity-issuer
duration: 48h
renewBefore: 25h
issuerRef:
name: linkerd-trust-anchor
kind: Issuer
commonName: identity.linkerd.cluster.local
dnsNames:
- identity.linkerd.cluster.local
isCA: true
privateKey:
algorithm: ECDSA
usages:
- cert sign
- crl sign
- server auth
- client auth
</code></pre>
<p>Now when it comes time to reconcile, I get this error in the HelmRelease:</p>
<pre><code>Helm install failed: execution error at (linkerd2/templates/identity.yaml:19:21): Please provide the identity issuer certificate
</code></pre>
<p>However, doing it manually works perfectly:</p>
<pre><code>helm install linkerd2 \
--set-file identityTrustAnchorsPEM=ca.crt \
--set identity.issuer.scheme=kubernetes.io/tls \
--set installNamespace=false linkerd/linkerd2 \
-n linkerd
</code></pre>
<p>It also works if I have the same setup but without cert-manager and with the certificates declared manually (with a different secret name, as linkerd will create it on its own), like this:</p>
<pre><code>valuesFrom:
- kind: Secret
name: linkerd-trust-anchor
valuesKey: tls.crt
targetPath: identityTrustAnchorsPEM
- kind: Secret
name: linkerd-identity-issuer-2
valuesKey: tls.crt
targetPath: identity.issuer.tls.crtPEM
- kind: Secret
name: linkerd-identity-issuer-2
valuesKey: tls.key
targetPath: identity.issuer.tls.keyPEM
</code></pre>
<p>Am I missing something?</p>
| <p>The problem lies here:</p>
<pre><code>values:
identity.issuer.scheme: kubernetes.io/tls
</code></pre>
<p>It should be:</p>
<pre><code>values:
identity:
issuer:
scheme: kubernetes.io/tls
</code></pre>
<p>Otherwise, helm won't recognize it and linkerd will assume the scheme is linkerd.io/tls, which doesn't match the structure of a kubernetes.io/tls secret.</p>
|
<p>How can I define my networkpolicy so that both pods (test-server and test-server2) that are in the same namespace are reachable from outside the cluster but cannot reach each other?</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-server-7555d49f48-sfzv9 1/1 Running 0 63m
test-server2-55c9cc78d4-knn59 1/1 Running 0 100m
</code></pre>
<pre><code># test: deny all ingress traffic
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all-ingress
spec:
podSelector: {}
policyTypes:
- Ingress
</code></pre>
<pre><code># test: allow ingress traffic for test-server service
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-test-server-ingress
spec:
podSelector:
matchLabels:
app: test-server
policyTypes:
- Ingress
ingress:
- {}
---
# test: allow ingress traffic for test-server2 service
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-test-server2-ingress
spec:
podSelector:
matchLabels:
app: test-server2
policyTypes:
- Ingress
ingress:
- {}
</code></pre>
<p>Using this approach both services can be accessed externally, but you can also jump from one service to the other.</p>
| <blockquote>
<p>How can I define my <code>network policy</code> so that both pods (test-server and test-server2) that are in the same namespace are reachable from outside the cluster but cannot reach each other?</p>
</blockquote>
<p>Your <code>NetworkPolicy</code> should be something similar to this one, based upon your settings</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: codewizard-block-policy
namespace: codewizard
spec:
# You can also add podSelection to
# be more specific.... (up to you)
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
- from:
# Block all traffic from the same subnet (10.10.10.10)
# Or change the rule to only block a given IP and not a subnet
- ipBlock:
cidr: 10.10.10.10/32
except:
- 172.17.0.0/16
# Add allow ip from your LoadBalancer IP
# Same thing for out going traffic
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 172.17.0.0/16
</code></pre>
<hr />
<ul>
<li>Another solution might be to use <code>Ingress</code> with the following annotation:
<code>ingress.kubernetes.io/whitelist-source-range: "x.x.x.x/xx"</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: whitelist
annotations:
ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24"
spec:
rules:
- host: whitelist.test.net
http:
paths:
- path: /
backend:
serviceName: webserver
servicePort: 80
</code></pre>
|
<p>I developed a service which listens on 0.0.0.0:8080. When I run the app locally it works and I can connect with a browser. I pushed it as an image to Docker Hub. From this image I created a pod and a service in my minikube cluster. With the command "minikube service --url" I get a URL like 192.168.49.2:30965, but I can't connect to this URL.</p>
<p>I tried connecting with curl and with a browser. In the browser I got</p>
<blockquote>
<p>"ERR_CONNECTION_TIME_OUT".</p>
</blockquote>
<p>Curl in the shell got an error too.</p>
<p>When I use the command "minikube service --url" I get this message:</p>
<pre><code>! Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 2.3171367s
* Restarting the docker service may improve performance.
http://192.168.49.2:30965
* Starting tunnel for service <name>.
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
</code></pre>
| <p>You would need, as <a href="https://stackoverflow.com/q/71175415/6309">illustrated here</a> to:</p>
<ul>
<li>create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> and apply it</li>
<li>then expose the port:<br />
<code>kubectl expose -f laravel-deployment.yaml --port=8000 --target-port=8000</code></li>
<li>And port-forward it:<br />
<code>kubectl port-forward <backend-deployment-xxxx-xxxx> 8000:8000</code></li>
</ul>
<p>That last step would allow you to access http://localhost:8000, instead of relying on the dynamic minikube IP.</p>
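<p>A minimal sketch adapted to the asker's app listening on 8080 (the image name is a placeholder):</p>
<pre><code># deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: <dockerhub-user>/<image>:latest
        ports:
        - containerPort: 8080
</code></pre>
<pre><code>kubectl apply -f deployment.yaml
kubectl expose deployment myapp --port=8080 --target-port=8080
kubectl port-forward deployment/myapp 8080:8080
</code></pre>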
|
<p>We have a data visualization server hosted in Kubernetes pods. The dashboards in that data viz are displayed in the browser of different monitors/terminals for near-real time operational reporting. Sometimes the pods fail, and when they come alive again, the browser redirects to Single Sign-On page instead of going to the dashboard the URL is originally configured to.</p>
<p>The servers are hosted in, I would presume, a replica set. There are two pods that exist as far as I can tell.</p>
<p>I was granted privilege on using kubectl to solve this problem, but still quite new with the whole Kubernetes thing. Using kubectl, how do I simulate pod failure/restart for testing purposes? Since the pods are in duplicate, shutting one of them will only redirect the traffic to the other pod. How to make both pods fail/restart at the same time? (I guess doing <code>kubectl delete pod</code> on both pods will do, but I want to make sure k8s will respawn the pods automatically, and not delete them forever).</p>
| <p>If I understand the use case correctly, you might want to use the <code>kubectl scale</code> command. This gives you the flexibility to set the replica count anywhere from zero to N by running a simple <code>kubectl scale</code> command. See the examples below. Also, if you are using a Deployment, you can just do <code>kubectl delete pod</code>; the deployment controller will spawn a new one to satisfy the replica count.</p>
<pre><code>kubectl scale deployment/<DEPLOYMENT-NAME> --replicas=<DESIRED-NUMBER-OF-REPLICA>
</code></pre>
<p><strong>short example:</strong></p>
<pre><code>kubectl scale deployment/deployment-web --replicas=0
deployment.apps/deployment-web scaled
</code></pre>
<p><strong>Long Example</strong>:</p>
<p>// create a deployment called, <code>deployment-web</code> with two replicas.</p>
<pre><code>kubectl create deployment deployment-web --image=nginx --replicas 2
deployment.apps/deployment-web created
</code></pre>
<p>// verify that both replicas are up</p>
<pre><code>kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
deployment-web 2/2 2 2 13s
</code></pre>
<p>// expose the deployment with a service [OPTIONAL-STEP, ONLY FOR EXPLANATION]</p>
<pre><code>kubectl expose deployment deployment-web --port 80
service/deployment-web exposed
</code></pre>
<p>//verify that the service is created</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deployment-web ClusterIP 10.233.24.174 <none> 80/TCP 5s
</code></pre>
<p>// dump the list of end-points for that service, there would be one for each replica. Notice the two IPs in the 2nd column.</p>
<pre><code>kubectl get ep
NAME ENDPOINTS AGE
deployment-web 10.233.111.6:80,10.233.115.9:80 12s
</code></pre>
<p>//scale down to 1 replica for the deployment</p>
<pre><code>kubectl scale --current-replicas=2 --replicas=1 deployment/deployment-web
deployment.apps/deployment-web scaled
</code></pre>
<p>// Notice the endpoint is reduced from 2 to 1.</p>
<pre><code>kubectl get ep
NAME ENDPOINTS AGE
deployment-web 10.233.115.9:80 43s
</code></pre>
<p>// also note that there is only one pod remaining</p>
<pre><code>kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-web-64c769b44-qh2qf 1/1 Running 0 105s
</code></pre>
<p>// scale down to zero replica</p>
<pre><code>kubectl scale --current-replicas=1 --replicas=0 deployment/deployment-web
deployment.apps/deployment-web scaled
</code></pre>
<p>// The endpoint list is empty</p>
<pre><code>kubectl get ep
NAME ENDPOINTS AGE
deployment-web <none> 9m4s
</code></pre>
<p>//Also, both pods are gone</p>
<pre><code>kubectl get pod
No resources found in default namespace.
</code></pre>
<p>// When you are done with testing. restore the replicas</p>
<pre><code>kubectl scale --current-replicas=0 --replicas=2 deployment/deployment-web
deployment.apps/deployment-web scaled
</code></pre>
<p>//endpoints and pods are restored back</p>
<pre><code>kubectl get ep
NAME ENDPOINTS AGE
deployment-web 10.233.111.8:80,10.233.115.11:80 10m
foo-svc 10.233.115.6:80 50m
kubernetes 192.168.22.9:6443 6d23h
kubectl get pod -l app=deployment-web
NAME READY STATUS RESTARTS AGE
deployment-web-64c769b44-b72k5 1/1 Running 0 8s
deployment-web-64c769b44-mt2dd 1/1 Running 0 8s
</code></pre>
|
<p>I am getting the below error when installing the latest stable Rancher Desktop in my Virtual Machine.</p>
<p>Could someone please help?</p>
<p><strong>Error:</strong></p>
<blockquote>
<p>Error: wsl.exe exited with code 4294967295</p>
</blockquote>
<p><strong>Command:</strong></p>
<pre><code>wsl --distribution rancher-desktop --exec mkdir -p /mnt/wsl/rancher-desktop/run/data
</code></pre>
<p><strong>Logs:</strong></p>
<blockquote>
<p>2022-02-02T09:58:39.490Z: Running command wsl --distribution
rancher-desktop --exec wslpath -a -u
C:\Users\VIVEK~1.NUN\AppData\Local\Temp\rd-distro-gGd3SG\distro.tar...
2022-02-02T09:58:40.641Z: Running command wsl --distribution
rancher-desktop --exec tar -cf
/mnt/c/Users/VIVEK~1.NUN/AppData/Local/Temp/rd-distro-gGd3SG/distro.tar
-C / /bin/busybox /bin/mount /bin/sh /lib /etc/wsl.conf /etc/passwd /etc/rancher /var/lib... 2022-02-02T09:58:42.628Z: Running command wsl
--import rancher-desktop-data C:\Users\Vivek.Nuna\AppData\Local\rancher-desktop\distro-data
C:\Users\VIVEK~1.NUN\AppData\Local\Temp\rd-distro-gGd3SG\distro.tar
--version 2... 2022-02-02T09:58:44.025Z: Running command wsl --distribution rancher-desktop-data --exec /bin/busybox [ ! -d /etc/rancher ]... 2022-02-02T09:58:44.025Z: Running command wsl
--distribution rancher-desktop-data --exec /bin/busybox [ ! -d /var/lib ]... 2022-02-02T10:03:54.533Z: Running command wsl
--terminate rancher-desktop... 2022-02-02T10:03:54.534Z: Running command wsl --terminate rancher-desktop-data...
2022-02-02T10:03:54.971Z: Running command wsl --distribution
rancher-desktop --exec mkdir -p /mnt/wsl/rancher-desktop/run/data...
2022-02-02T10:04:03.418Z: WSL: executing: mkdir -p
/mnt/wsl/rancher-desktop/run/data: Error: wsl.exe exited with code
4294967295</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/xdyQ6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xdyQ6.png" alt="enter image description here" /></a></p>
| <p>I met the same issue in Windows 10.</p>
<p>Below solution helped me:</p>
<h2>1. Quit Rancher Desktop</h2>
<p><a href="https://i.stack.imgur.com/U8kpt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/U8kpt.png" alt="Quit Rancher" /></a></p>
<h2>2. Run below command in Windows command line:</h2>
<pre class="lang-bash prettyprint-override"><code>wsl --update
</code></pre>
<h2>3. After update completed, open Rancher Desktop again.</h2>
<p>Rancher Desktop works well now.</p>
<p><a href="https://i.stack.imgur.com/t7bWi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/t7bWi.png" alt="Rancher Desktop works well now" /></a></p>
<p>After completed installing Rancher desktop, you can use the <code>docker</code> and <code>kubectl</code> commands in Windows command line successfully.</p>
<h2>References:</h2>
<p><a href="https://github.com/rancher-sandbox/rancher-desktop/issues/1328#issuecomment-1022190843" rel="noreferrer">Error: wsl.exe exited with code 4294967295 #1328 - github</a></p>
|
<p>I am running a Kubernetes cluster in AWS EKS and I set up the autoscaler. I tested the autoscaler and it worked as when the number of pods in a node exceeded 110 then new nodes were automatically added to the cluster and the pending pods entered running state.</p>
<p>After that, I deleted the deployment. It's been about 10 minutes and I see that all new nodes created by the autoscaler are already there and in ready state!</p>
<p>How long does it take for Kubernetes to delete them automatically? Does it down-scale the cluster automatically at all?</p>
| <p>Although scaling down is a slow process, the default scan interval is <strong>10 seconds</strong> if you are using the cluster autoscaler to scale the nodes in EKS.</p>
<p>You can check the status of the autoscaler, and its scale-down decisions, via its status <strong>configmap</strong>.</p>
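<p>For example, assuming the standard cluster-autoscaler deployment in <code>kube-system</code>:</p>
<pre><code># the autoscaler records its health and the reasons blocking scale-down here
kubectl -n kube-system describe configmap cluster-autoscaler-status
</code></pre>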
<p>There is also a possibility that the new nodes have some system pods running, and because of that the autoscaler is not able to scale those nodes down, or a PDB (PodDisruptionBudget) is set for the deployments.</p>
<p>Another blocker is when a pod has the annotation <code>"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"</code>.</p>
<p>Read more about EKS scaling : <a href="https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html</a></p>
|
<pre><code>apiVersion: projectcalico.org/v3
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
</code></pre>
<p>By using <code>kubectl get networkpolicy</code>, I can see only the policies created by <code>networking.k8s.io/v1</code> and not those created by <code>projectcalico.org/v3</code>. Any suggestion how to see the latter ones?</p>
| <p><code>kubectl get XXX</code> does not display all the resources in the cluster; in your case you cannot see the CRD-based resources.</p>
<ul>
<li>You can find your object with <code>kubectl get crds</code></li>
<li>Then <code>kubectl get <crd name> -A</code></li>
</ul>
<hr />
<p>In your case it would be:</p>
<pre class="lang-bash prettyprint-override"><code># Get all the CRD from the desired type
kubectl get projectcalico.org/v3 -A
# Now grab the desired name and do whatever you want with it
kubectl describe <CRD>/<resource name> -n <namespace>
</code></pre>
<hr />
<p><a href="https://i.stack.imgur.com/Sop3X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Sop3X.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/BF4Nq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BF4Nq.png" alt="enter image description here" /></a></p>
|
<p>I have a kubeadm cluster and I am trying to deploy two static websites using ingress (after installing MetalLB and the nginx-ingress controller).</p>
<p>After deploying the ingress, I find that the ingress doesn't load the website assets (only the HTML file loads).
Any help is appreciated!</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress-test
spec:
ingressClassName: nginx
rules:
- host: k8s.example.k8s
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: app1-svc
port:
number: 80
- path: /app2
pathType: Prefix
backend:
service:
name: app2-svc
port:
number: 80
</code></pre>
| <p>Make sure your routing is working properly with the ingress.</p>
<p>Example where the CSS assets are not 404:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /?(.*)
backend:
serviceName: service
servicePort: 3000
- path: /api/?(.*)
backend:
serviceName: service
servicePort: 5000
</code></pre>
<p>Read more at : <a href="https://github.com/kubernetes/ingress-nginx/issues/2557" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/2557</a></p>
|
<p>From "Extending kubectl with plugins":</p>
<blockquote>
<p>It is currently not possible to create plugins that overwrite existing
<code>kubectl</code> commands. [...] Due to this limitation, it is also not
possible to use plugins to add new subcommands to existing <code>kubectl</code>
commands. For example, adding a subcommand <code>kubectl create foo</code> by
naming your plugin <code>kubectl-create-foo</code> will cause that plugin to be
ignored.</p>
<p>-- <a href="https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/#limitations" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/#limitations</a></p>
</blockquote>
<p>Is there another way to extend <code>kubectl create</code>?</p>
| <p>It does not look like it; in the source code, all subcommands are currently registered explicitly (<a href="https://github.com/kubernetes/kubernetes/blob/db8d77cfeb0593b0accc17804df43834cc7f9917/staging/src/k8s.io/kubectl/pkg/cmd/create/create.go#L142-L160" rel="nofollow noreferrer">cf.</a>):</p>
<pre class="lang-golang prettyprint-override"><code> // create subcommands
cmd.AddCommand(NewCmdCreateNamespace(f, ioStreams))
cmd.AddCommand(NewCmdCreateQuota(f, ioStreams))
cmd.AddCommand(NewCmdCreateSecret(f, ioStreams))
cmd.AddCommand(NewCmdCreateConfigMap(f, ioStreams))
cmd.AddCommand(NewCmdCreateServiceAccount(f, ioStreams))
cmd.AddCommand(NewCmdCreateService(f, ioStreams))
cmd.AddCommand(NewCmdCreateDeployment(f, ioStreams))
cmd.AddCommand(NewCmdCreateClusterRole(f, ioStreams))
cmd.AddCommand(NewCmdCreateClusterRoleBinding(f, ioStreams))
cmd.AddCommand(NewCmdCreateRole(f, ioStreams))
cmd.AddCommand(NewCmdCreateRoleBinding(f, ioStreams))
cmd.AddCommand(NewCmdCreatePodDisruptionBudget(f, ioStreams))
cmd.AddCommand(NewCmdCreatePriorityClass(f, ioStreams))
cmd.AddCommand(NewCmdCreateJob(f, ioStreams))
cmd.AddCommand(NewCmdCreateCronJob(f, ioStreams))
cmd.AddCommand(NewCmdCreateIngress(f, ioStreams))
cmd.AddCommand(NewCmdCreateToken(f, ioStreams))
return cmd
</code></pre>
|
<p>We are running a terraform to create a GKE Cluster and using the below to create a local kubeconfig file after the creation of the cluster.</p>
<pre><code>module "gke_auth" {
source = "terraform-google-modules/kubernetes-engine/google//modules/auth"
depends_on = [module.gke]
project_id = var.project_id
location = module.gke.location
cluster_name = module.gke.name
}
resource "local_file" "kubeconfig" {
content = module.gke_auth.kubeconfig_raw
filename = "kubeconfig"
}
</code></pre>
<p>After that we would like to continue and deploy Istio and other deployments on the cluster, and to connect to the cluster we refer to the kubeconfig file as below.</p>
<pre><code>provider "helm" {
kubernetes {
config_path = "kubeconfig"
}
}
provider "kubernetes" {
config_path = "kubeconfig"
}
</code></pre>
<p>But as soon as we run the apply command, the warning below is shown.</p>
<pre><code>Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"],
on main.tf line 42, in provider "kubernetes":
42: provider "kubernetes" {
'config_path' refers to an invalid path: "kubeconfig": stat kubeconfig: no such file or directory
</code></pre>
<p>This is because initially the file is not there, but it will eventually be created once the cluster is created. The problem is that after applying the template the session state is not refreshed automatically, and even though the kubeconfig file has been created, it throws the error below and exits the execution.</p>
<pre><code>Error: Post "http://localhost/api/v1/namespaces": dial tcp 127.0.0.1:80: connect: connection refused
Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"],
on main.tf line 42, in provider "kubernetes":
42: provider "kubernetes" {
'config_path' refers to an invalid path: ".kubeconfig": stat .kubeconfig: no such file or directory
</code></pre>
<p>Please suggest how to make this work?</p>
| <p>We addressed this issue with the setup below.</p>
<p>For the first module, we added an output block.</p>
<pre><code>module "gke_auth" {
source = "terraform-google-modules/kubernetes-engine/google//modules/auth"
depends_on = [module.gke]
project_id = var.project_id
location = module.gke.location
cluster_name = module.gke.name
}
resource "local_file" "kubeconfig" {
content = module.gke_auth.kubeconfig_raw
filename = "kubeconfig"
}
output "kubeconfig_file" {
value = "${path.cwd}/kubeconfig"
}
</code></pre>
<p>For the second module we made below changes:</p>
<pre><code>data "terraform_remote_state" "kubeconfig_file" {
backend = "local"
config = {
path = "${path.module}/../dirA/terraform.tfstate"
}
}
provider "helm" {
kubernetes {
config_path = "${data.terraform_remote_state.kubeconfig_file.outputs.kubeconfig_file}"
}
}
provider "kubernetes" {
config_path = "${data.terraform_remote_state.kubeconfig_file.outputs.kubeconfig_file}"
}
</code></pre>
<p><em>Note: In a similar way we can access variables from a different module or stack in a different directory.</em></p>
|
<p>I have a pod with multiple init containers and one main container. One of the init containers creates a sh file with some export commands like:</p>
<pre><code>export Foo=Bar
</code></pre>
<p>I want to source the file so it creates the env variable like this:</p>
<pre><code> containers:
- name: test
command:
- "bash"
- "-c"
args:
- "source /path/to/file"
</code></pre>
<p>It doesn't create the env variable. But if I run the source command directly in the container it works. What is the best way to do this using the command option in the pod definition?</p>
| <p>If you are looking to create the <code>sh</code> file in the init container with the variables and then use it in the "main container", here is a quick example:</p>
<p><strong>manifest</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
labels:
name: mypod
spec:
initContainers:
- name: my-init-container
image: alpine:latest
command: ["sh", "-c", "echo export Foo=bar > /shared/script.sh && chmod +x /shared/script.sh"]
volumeMounts:
- name: shared
mountPath: /shared
containers:
- name: mycontainer
image: mycustomimage
resources:
limits:
memory: "32Mi"
cpu: "100m"
volumeMounts:
- name: shared
mountPath: /shared
volumes:
- name: shared
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM alpine:latest
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD ...
</code></pre>
<p><strong>entrypoint.sh</strong></p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/sh
. /shared/script.sh
env
exec "$@"
</code></pre>
<p><strong>logs</strong></p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl logs pod/mypod
<...>
Foo=bar
<...>
</code></pre>
<p>As you can see, we created a script file in the init container with the <code>Foo=bar</code> variable and sourced the file in the "main container"; the script is available through the <code>shared</code> volume mounted in both containers.</p>
<p>In most situations we use ConfigMaps/Secrets/vaults and inject those as variables into the containers, as the other answers mentioned. I recommend checking whether those can solve your problem first.</p>
|
<p>I have a simple web application deployed to a Kubernetes cluster (EKS) with an <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller" rel="nofollow noreferrer">aws load balancer controller</a> ingress.</p>
<p><a href="https://i.stack.imgur.com/MmFF9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MmFF9.png" alt="schema" /></a></p>
<p>When accessing the app in the intended way on the ALB endpoint, the performance is very poor (2-3x worse than a regular deployment on a bare-metal instance). The benchmark was done with <a href="https://github.com/rakyll/hey" rel="nofollow noreferrer">Hey</a>:</p>
<pre><code>$ hey -t 30 -z 1m https://k8s-default-ingre-fdeb4c8b98-1975505070.us-east-1.elb.amazonaws.com/
# 5-10 reqs/s
$ hey -t 30 -z 1m http://172.16.3.37/ # from another pod accessing directly by its IP
# 20-30 reqs/s
</code></pre>
<p>If accessed from the same/another pod, or from a different instance (node) when exposed as a <code>NodePort</code>, the performance stays the same, so I'm assuming something is wrong with the ingress/ALB.</p>
<p>How to identify bottleneck and debug such kind of an issue?</p>
<p>Here's my config</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/healthcheck-path: "/healthz/"
alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1::certificate/cert"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- path: /
pathType: Prefix
backend:
service:
name: app
port:
number: 80
</code></pre>
| <ol>
<li>Try and run kubernetes networking benchmark on both the clusters (bare-metal and EKS) -
<a href="https://github.com/InfraBuilder/k8s-bench-suite" rel="nofollow noreferrer">https://github.com/InfraBuilder/k8s-bench-suite</a></li>
</ol>
<p>This shows what the pod-to-pod and pod-to-service latencies are on both clusters. If they are drastically different, the cause is most probably the underlying network.</p>
<ol start="2">
<li>Run iperf3 benchmark between nodes -
<a href="https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-linux-ec2/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-linux-ec2/</a></li>
</ol>
<p>This would probably give you an idea of how nodes spread far apart in the EKS cluster impact latencies, uneven networking bandwidth, etc.</p>
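<p>A rough sketch of such an iperf3 run between two nodes (the IP is a placeholder):</p>
<pre><code># on the first node
iperf3 -s
# on the second node: 10 parallel streams for 30 seconds against the first node's IP
iperf3 -c 10.0.1.23 -P 10 -t 30
</code></pre>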
<p>If the latencies are somewhat similar, you might want to dig into your application code using something like Jaeger or an application profiler to get a breakdown of the latencies.</p>
|
<p>I have this</p>
<pre><code>kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.24) and server (1.18) exceeds the supported minor version skew of +/-1
</code></pre>
<p>I upgraded my client version using official <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="nofollow noreferrer">install kubectl</a> docs.</p>
<p>I installed kubeadm with snap</p>
<pre><code>kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-17T22:34:44Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>How to upgrade server?</p>
| <p>As you asked how to upgrade K8s:
this document contains a step-by-step process to <a href="https://www.golinuxcloud.com/kubernetes-upgrade-version/" rel="nofollow noreferrer">upgrade Kubernetes</a>.</p>
<p>When your cluster is running version 1.18.6, you can upgrade to 1.18.p where p >= 7 and to 1.19.x (whatever the value of x), but not to 1.20.x. If you plan to upgrade from 1.18.x to 1.20.x directly, the <code>kubeadm upgrade plan</code> command will fail, so to overcome that you first have to upgrade from 1.18.x to 1.19.x and then from 1.19.x to 1.20.x.
After that, keep upgrading the cluster one minor version at a time, from <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">1.19.x to 1.20.x, then to 1.21, 1.22 and 1.23</a>.</p>
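<p>As a rough sketch, one hop (for example 1.18.x to 1.19.x) with kubeadm on a Debian/Ubuntu control-plane node looks roughly like the following; the package versions are placeholders, and the same procedure is repeated for each further minor version and for the worker nodes:</p>
<pre><code># upgrade kubeadm first, then the control plane
apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.19.16-00
kubeadm upgrade plan
kubeadm upgrade apply v1.19.16
# then upgrade kubelet/kubectl on the node and restart the kubelet
apt-get install -y --allow-change-held-packages kubelet=1.19.16-00 kubectl=1.19.16-00
systemctl daemon-reload && systemctl restart kubelet
</code></pre>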
|
<p>I am trying to run kubectl commands in offline mode, but it keeps saying:</p>
<pre><code> kubectl cordon hhpsoscr0001
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>I fail to understand what can be the issue. Can anyone help me on this?</p>
| <p>Please execute <code>kubectl get svc</code> to see if you get a ClusterIP type output.
If you don't, please configure your kubeconfig properly, as @David Maze suggested earlier.</p>
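<p>For example, assuming a kubeadm-style cluster where the admin kubeconfig sits at the usual location (adjust the path to wherever your cluster's kubeconfig actually lives):</p>
<pre><code>export KUBECONFIG=/etc/kubernetes/admin.conf   # or: $HOME/.kube/config
kubectl get svc
</code></pre>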
|
<h2>Issue Description</h2>
<p>I am getting an error in fluent-bit basically saying it cant resolve host</p>
<pre><code>getaddrinfo(host='<My Elastic Cloud Instance>.aws.elastic-cloud.com:9243', err=4): Domain name not found
</code></pre>
<p>I suspect it has something to do with the port getting appended in the DNS lookup, but I can't seem to see any settings that join the two together in my configurations.</p>
<p>I have verified, using a dnsutils pod in the same namespace, that I am able to resolve the host correctly.</p>
<h2>Info that may be helpful</h2>
<p>Config Map output-elasticsearch.conf</p>
<pre><code>[OUTPUT]
Name es
Match *
Host ${CLOUD_ELASTICSEARCH_HOST}
Port ${CLOUD_ELASTICSEARCH_PORT}
Cloud_ID ${CLOUD_ELASTICSEARCH_ID}
Cloud_Auth ${CLOUD_ELASTICSEARCH_USER}:${CLOUD_ELASTICSEARCH_PASSWORD}
Logstash_Format On
Logstash_Prefix kube1
Replace_Dots On
Retry_Limit False
tls On
tls.verify Off
</code></pre>
<p>elasticsearch-configmap</p>
<pre><code>data:
CLOUD_ELASTICSEARCH_HOST: <MyCloudId>.aws.elastic-cloud.com
CLOUD_ELASTICSEARCH_ID: >-
elastic-security-deployment:<Bunch of Random Bits>
CLOUD_ELASTICSEARCH_PORT: '9243'
</code></pre>
<p>env portion of my daemonset</p>
<pre><code> env:
- name: CLOUD_ELASTICSEARCH_HOST
valueFrom:
configMapKeyRef:
name: elasticsearch-configmap
key: CLOUD_ELASTICSEARCH_HOST
- name: CLOUD_ELASTICSEARCH_PORT
valueFrom:
configMapKeyRef:
name: elasticsearch-configmap
key: CLOUD_ELASTICSEARCH_PORT
- name: CLOUD_ELASTICSEARCH_ID
valueFrom:
configMapKeyRef:
name: elasticsearch-configmap
key: CLOUD_ELASTICSEARCH_ID
- name: CLOUD_ELASTICSEARCH_USER
valueFrom:
secretKeyRef:
name: elasticsearch-secret
key: CLOUD_ELASTICSEARCH_USER
- name: CLOUD_ELASTICSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: elasticsearch-secret
key: CLOUD_ELASTICSEARCH_PASSWORD
- name: FLUENT_ELASTICSEARCH_HOST
value: elasticsearch
- name: FLUENT_ELASTICSEARCH_PORT
value: '9200'
</code></pre>
| <p>Also, if you are using Elastic Cloud, try to decode the value of the ${CLOUD_ELASTICSEARCH_ID} variable, remove the <code>:443</code>, and encode it again.</p>
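<p>A rough sketch of that edit, assuming the usual <code><deployment-name>:<base64 payload></code> Cloud ID layout (adjust to what you actually see when decoding):</p>
<pre><code># decode the payload part of the Cloud ID
echo "$CLOUD_ELASTICSEARCH_ID" | cut -d: -f2 | base64 -d
# prints something like: <region-host>:443$<es-uuid>$<kibana-uuid>
# drop the ":443" from the host part and re-encode the string
printf '%s' '<region-host>$<es-uuid>$<kibana-uuid>' | base64 -w 0
</code></pre>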
<p>I was getting this error and it was solved after doing this.</p>
|
<h1>Goal</h1>
<p>Build a CI/CD pipeline multiple GitLab repositories with a certain project structure can make use of. For this, a Docker container with Python code is built and subsequently securely pushed to Google Cloud's Container Registry.</p>
<h1>Set up</h1>
<ul>
<li>KubernetesExecutor is installed on Kubernetes Engine using the <a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">Helm chart</a> as provided by GitLab.</li>
<li>The base image for the build process (<code>runners.image</code> in the <code>values.yaml</code>) is a custom one as this helps automatically containerising the provided repository. <strong>The reason this is worth mentioning is that this is from the <em>same</em> private repository as where the image should be pushed to.</strong></li>
<li>Right now, building the container from the repository runs successfully (see code below).</li>
</ul>
<h1>Problem</h1>
<p>How can I <em>push</em> the image to the Container Registry <em>without</em> adding a service account key to a Docker image (otherwise, please convince me this isn't bad practice)?</p>
<h1>Code</h1>
<h2>.gitlab-ci.yml</h2>
<pre class="lang-yaml prettyprint-override"><code>services:
- docker:19.03.1-dind
stages:
- build
build:
stage: build
script:
- docker build -t ${CONTAINER_REGISTRY}/pyton-container-test:latest .
# This line is where I'd need to use `docker login`, I guess.
- docker push ${CONTAINER_REGISTRY}/python-container-test:latest
</code></pre>
<h2>values.yaml (Helm)</h2>
<p>It's worth mentioning that the following environment variables are set by the GitLab Runner:</p>
<pre class="lang-yaml prettyprint-override"><code>runners:
env:
DOCKER_DRIVER: overlay2
DOCKER_HOST: tcp://localhost:2375
DOCKER_TLS_CERTDIR: ""
CONTAINER_REGISTRY: eu.gcr.io/<project_id>
</code></pre>
<h1>Direction of solution</h1>
<p>I think I should be able to mount a secret from the Kubernetes cluster to the GitLab Runner build pod, but I can't seem to find a way to do that. Then, I should be able to add the following line into <code>.gitlab-ci.yml</code>:</p>
<pre class="lang-sh prettyprint-override"><code>cat mounted_secret.json | docker login -u _json_key --password-stdin https://eu.gcr.io
</code></pre>
<p>Setting up <code>config.toml</code> to use a <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#secret-volumes" rel="nofollow noreferrer">secret volume</a> should work. However, with a Helm chart this doesn't seem possible yet.</p>
<h1>Notes</h1>
<ul>
<li>It <em>is</em> possible to set protected environment variables in GitLab CI, but I'd rather not, as they're harder to maintain.</li>
<li>I've investigated <a href="https://stackoverflow.com/questions/52474255/gitlab-runner-image-with-gcp-credentials">this</a> answer, but this says I need to add a key to my Docker image.</li>
<li>Looked into the <a href="https://docs.gitlab.com/runner/configuration/advanced-configuration.html#using-a-private-container-registry" rel="nofollow noreferrer">GitLab documentation</a> on using a private container registry, but don't seem to get much further with that.</li>
<li>A similar problem would occur when, for example, it must connect to a database during the build process.</li>
</ul>
| <p>You can add a <code>DOCKER_AUTH_CONFIG</code> CI/CD variable with the auth values from <code>~/.docker/config.json</code>. The variable will look like this:</p>
<pre><code>{
"auths": {
"northamerica-northeast1-docker.pkg.dev": {
"auth": "{JSON key here}"
},
"us.gcr.io": {
"auth": "{JSON key here}"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.13 (linux)"
}
}
</code></pre>
<p>This way, the next time your GitLab runner tries to pull a docker image from the private repo, it will be able to do so.</p>
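<p>If it helps, the <code>auth</code> value is just a base64-encoded <code>username:password</code> pair; for GCR with a service-account key the username is <code>_json_key</code> and the password is the raw key file (the file name below is a placeholder):</p>
<pre><code>printf '%s' "_json_key:$(cat gcp-sa-key.json)" | base64 -w 0
</code></pre>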
|
<p>I am trying to setup EFK (ElasticSearch 8, FluentD and Kibana) stack on K8S cluster (on-premises)</p>
<p>I followed this <a href="https://phoenixnap.com/kb/elasticsearch-helm-chart" rel="nofollow noreferrer">link</a> to install elasticsearch and installed it using helm charts and followed this <a href="https://medium.com/kubernetes-tutorials/cluster-level-logging-in-kubernetes-with-fluentd-e59aa2b6093a" rel="nofollow noreferrer">link</a> to install fluentd</p>
<p><strong>Output of fluentd and elasticsearch pods</strong></p>
<pre><code>[root@ctrl01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 136m
[root@ctrl01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
fluentd-cnb7p 1/1 Running 0 107m
fluentd-dbxjk 1/1 Running 0 107m
</code></pre>
<p>However, the log kept filling up with the following Elasticsearch-related warning messages:</p>
<pre><code>2021-10-18 12:13:12 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2021-10-18 12:13:42 +0000 error_class="Elasticsearch::Transport::Transport::Errors::BadRequest" error="[400] {\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"Action/metadata line [1] contains an unknown parameter [_type]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"Action/metadata line [1] contains an unknown parameter [_type]\"},\"status\":400}" plugin_id="out_es"
2021-10-18 12:13:12 +0000 [warn]: suppressed same stacktrace
</code></pre>
<p><strong>Conf file (tailored output)</strong></p>
<pre><code>2021-10-18 12:09:10 +0000 [info]: using configuration file: <ROOT>
<match fluent.**>
@type null
</match>
<source>
@type tail
@id in_tail_container_logs
path /var/log/containers/*.log
pos_file /var/log/fluentd-containers.log.pos
tag kubernetes.*
read_from_head true
format json
time_format %Y-%m-%dT%H:%M:%S.%NZ
</source>
<source>
@type tail
@id in_tail_minion
path /var/log/salt/minion
pos_file /var/log/fluentd-salt.pos
tag salt
format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
time_format %Y-%m-%d %H:%M:%S
</source>
</code></pre>
<p>I am not sure which <code>'type'</code> field it refers to. I am unable to find an example of the <code>match</code> and <code>source</code> directives for Elasticsearch 8 to compare against.</p>
<p>It seems the <code>type</code> field is <strong>not</strong> supported from ES 8 onwards, but I am not sure about that. Kindly let me know the reason for the error.</p>
| <p>I faced similar errors when I tried to use Elasticsearch 8.2.3 with Fluent Bit 1.9.5. I could see logs being sent to Elasticsearch, but could not see any data in the Kibana web page, so I could not create indices, and I saw the above error in the fluent-bit pod logs. I followed <a href="https://github.com/fluent/fluent-bit/issues/5138" rel="noreferrer">this github issue</a> and added <strong>Suppress_Type_Name On</strong> under the outputs: section in my fluent-bit helm chart values.yaml file, and it worked fine after that.</p>
<pre><code> [OUTPUT]
Name es
Match *
Host {{ .Values.global.backend.es.host }}
Port {{ .Values.global.backend.es.port }}
Logstash_Format Off
Retry_Limit False
Type _doc
Time_Key @timestamp
Replace_Dots On
Suppress_Type_Name On
Index {{ .Values.global.backend.es.index }}
{{ .Values.extraEntries.output }}
</code></pre>
|
<p>I'm trying to run a MERN application in Kubernetes (<a href="https://github.com/ibrahima92/fullstack-typescript-mern-todo/" rel="nofollow noreferrer">https://github.com/ibrahima92/fullstack-typescript-mern-todo/</a>). I have a client and a server container, and I need to replace the client URL path in the backend, so I defined variables in the backend code, but they are not replaced with the values of the variables from the manifest files. The variables exist inside the container, but the backend does not use them.
I tried options such as ${FRONT_URL}, ${process.env.FRONT_URL}, and process.env.FRONT_URL. If I directly insert the URL of the service with the port number into the backend code, then everything works. How do I correctly define variables in a container?</p>
<p><strong>I need to replace http://localhost:${PORT} with the URL of the service from K8s, and I need to do the same thing with ${MONGO_URL}.</strong></p>
<pre><code>import express, { Express } from 'express'
import mongoose from 'mongoose'
import cors from 'cors'
import todoRoutes from './routes'
const app: Express = express()
const PORT: string | number = process.env.PORT || 4000
app.use(cors())
app.use(todoRoutes)
const uri: string = `mongodb://${MONGO_URL}?retryWrites=true&w=majority`
const options = { useNewUrlParser: true, useUnifiedTopology: true }
mongoose.set('useFindAndModify', false)
mongoose
.connect(uri, options)
.then(() =>
app.listen(PORT, () =>
console.log(`Server running on http://localhost:${PORT}`)
)
)
.catch((error) => {
throw error
})
</code></pre>
<p><strong>Manifest</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: todo-server-app-deploy
spec:
replicas: 1
selector:
matchLabels:
app: todo-server-app
template:
metadata:
labels:
app: todo-server-app
spec:
containers:
- image: repo/todo-server-app:24
name: container1
ports:
- containerPort: 4000
env:
- name: FRONT_URL
value: a1ecab155236d4c7fba8b0c6a1b6ad2b-549550669.us-east-1.elb.amazonaws.com:80
- name: MONGO_URL
value: todo-mongo-service:27017
imagePullPolicy: IfNotPresent
</code></pre>
| <p>You can create a <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">config map</a> to give your container runtime variables, or alternatively build your own docker image using the <code>ENV</code> instruction.</p>
<p>You can also achieve that using kustomization.</p>
<ul>
<li>kustomization.yml</li>
</ul>
<pre><code>secretGenerator:
  - name: my-secret
    behavior: create
    env: .env
</code></pre>
<ul>
<li>Deployment.yml</li>
</ul>
<pre><code>
envFrom:
  - secretRef:
      name: my-secret
</code></pre>
|
<p>I want to know the Object(<code>Deployment/Statefulset/..</code>) which is using a secret. Is there a way to find this out from a secret? Is there a tool in Kubernetes community to do this?</p>
| <p>Seems like there is nothing built in, but you can use <code>kubectl</code> in conjunction with <code>jq</code> to figure it out. Here is an example for <code>deployments</code> that mount the secret as a volume:</p>
<pre><code>kubectl get deployment -o json | jq '.items[] | select(.spec.template.spec.volumes[]? | .secret.secretName=="<secret name>") | .metadata.name'
</code></pre>
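<p>If the secret is consumed through environment variables rather than a volume, a similar query (an untested sketch along the same lines) against <code>envFrom</code> would be:</p>
<pre><code>kubectl get deployment -o json | jq '.items[] | select(.spec.template.spec.containers[].envFrom[]?.secretRef.name=="<secret name>") | .metadata.name'
</code></pre>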
|
<p>I am having an issue when trying to setup <strong>Virtual Nodes</strong> for <strong>Azure Kubernetes cluster</strong> using <strong>Terraform</strong>.</p>
<p>When I check the pod for the aci-connector-linux, I get the below error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 41m (x50 over 4h26m) kubelet Container image "mcr.microsoft.com/oss/virtual-kubelet/virtual-kubelet:1.4.1" already present on machine
Warning BackOff 68s (x1222 over 4h26m) kubelet Back-off restarting failed container
</code></pre>
<p>I've also granted the System Assigned identity of the Azure Kubernetes Cluster the required contributor role using the documentation here - <a href="https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/kubernetes/aci_connector_linux/main.tf" rel="nofollow noreferrer">https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/kubernetes/aci_connector_linux/main.tf</a> but I'm still getting CrashLoopBackOff status error.</p>
| <p>I finally fixed it.</p>
<p>The issue was caused by the Outdated documentation for <code>aci-connector-linux</code> here - <a href="https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/kubernetes/aci_connector_linux/main.tf" rel="nofollow noreferrer">https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/examples/kubernetes/aci_connector_linux/main.tf</a> which assigns the role to the Managed identity of the Azure Kubernetes cluster</p>
<p><strong>Here's how I fixed it</strong>:</p>
<p>Azure Kubernetes Service creates Node resource group which is separate from the resource group for the Kubernetes Cluster. Within the Node resource group, AKS creates a Managed Identity for the <code>aci-connector-linux</code>. The name of the Node resource group is usually <code>MC_<KubernetesResourceGroupName_KubernetesServiceName-KubernetesResourceGroupLocation></code>, so if your <strong>KubernetesResourceGroupName</strong> is <code>MyResourceGroup</code> and if the <strong>KubernetesServiceName</strong> is <code>my-test-cluster</code> and if the <strong>KubernetesResourceGroupLocation</strong> <code>westeurope</code>, then the Node resource group will be <code>MC_MyResourceGroup_my-test-cluster_westeurope</code>. You can view the resources in the Azure Portal under Resource Groups.</p>
<p>Next, you can view the root cause of the issue by viewing the logs of the <code>aci-connector-linux</code> pod using the command:</p>
<pre><code>kubectl logs aci-connector-linux-577bf54d75-qm9kl -n kube-system
</code></pre>
<p>And you will an output like this:</p>
<blockquote>
<p>time="2022-06-29T15:23:38Z" level=fatal msg="error initializing provider azure: error setting up network profile: error while looking up subnet: api call to <a href="https://management.azure.com/subscriptions/0237fb7-7530-43ba-96ae-927yhfad80d1/resourcegroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/k8s-aci-node-pool-subnet?api-version=2018-08-01" rel="nofollow noreferrer">https://management.azure.com/subscriptions/0237fb7-7530-43ba-96ae-927yhfad80d1/resourcegroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/k8s-aci-node-pool-subnet?api-version=2018-08-01</a>: got HTTP response status code 403 error code "AuthorizationFailed": The client '560df3e9b-9f64-4faf-aa7c-6tdg779f81c7' with object id '560df3e9b-9f64-4faf-aa7c-6tdg779f81c7' does not have authorization to perform action 'Microsoft.Network/virtualNetworks/subnets/read' over scope '/subscriptions/0237fb7-7530-43ba-96ae-927yhfad80d1/resourcegroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/k8s-aci-node-pool-subnet' or the scope is invalid. If access was recently granted, please refresh your credentials."</p>
</blockquote>
<p>You can fix this in <strong>Terraform</strong> using the code below:</p>
<pre><code># Get subnet ID
data "azurerm_subnet" "k8s_aci" {
name = "k8s-aci-node-pool-uat-subnet"
virtual_network_name = "sparkle-uat-vnet"
resource_group_name = data.azurerm_resource_group.main.name
}
# Get the Identity of a service principal
data "azuread_service_principal" "aks_aci_identity" {
display_name = "aciconnectorlinux-${var.kubernetes_cluster_name}"
depends_on = [module.kubernetes_service_uat]
}
# Assign role to aci identity
module "role_assignment_aci_nodepool_subnet" {
source = "../../../modules/azure/role-assignment"
role_assignment_scope = data.azurerm_subnet.k8s_aci.id
role_definition_name = var.role_definition_name.net-contrib
role_assignment_principal_id = data.azuread_service_principal.aks_aci_identity.id
}
</code></pre>
<p>You can also achieve this using the Azure CLI command below:</p>
<pre><code>az role assignment create --assignee <Object (principal) ID> --role "Network Contributor" --scope <subnet-id>
</code></pre>
<p><strong>Note</strong>: The <strong>Object (principal) ID</strong> is the ID that you obtained in the error message.</p>
<p>An example is this:</p>
<pre><code>az role assignment create --assignee 560df3e9b-9f64-4faf-aa7c-6tdg779f81c7 --role "Network Contributor" --scope /subscriptions/0237fb7-7530-43ba-96ae-927yhfad80d1/resourcegroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/k8s-aci-node-pool-subnet
</code></pre>
<p><strong>Resources</strong>:</p>
<p><a href="https://github.com/hashicorp/terraform-provider-azurerm/issues/9733" rel="nofollow noreferrer">Aci connector linux should export the identity associated to its addon</a></p>
<p><a href="https://github.com/Azure/AKS/issues/1894" rel="nofollow noreferrer">Using Terraform to create an AKS cluster with "SystemAssigned" identity and aci_connector_linux profile enabled does not result in a creation of a virtual node</a></p>
<p><a href="https://cloud.netapp.com/blog/azure-cvo-blg-azure-kubernetes-service-tutorial-integrate-aks-with-aci" rel="nofollow noreferrer">Azure Kubernetes Service Tutorial: How to Integrate AKS with Azure Container Instances</a></p>
<p><a href="https://docs.microfocus.com/doc/SMAX/2021.05/AKSFailToPatchLB" rel="nofollow noreferrer">Fail to configure a load balancer (AKS)</a></p>
|
<p>I am trying to deploy an application with k3s Kubernetes. Currently I have two master nodes behind a load-balancer, and I have some issues connecting worker nodes to them. All nodes and the load-balancer run in separate VMs.</p>
<p>The load balancer is a nginx server with the following configuration.</p>
<pre><code>load_module /usr/lib/nginx/modules/ngx_stream_module.so;
events {}
stream {
upstream k3s_servers {
server {master_node1_ip}:6443;
server {master_node2_ip}:6443;
}
server {
listen 6443;
proxy_pass k3s_servers;
}
}
</code></pre>
<p>The master nodes connect through the load-balancer, and seemingly it works as expected:</p>
<pre><code>ubuntu@ip-172-31-20-78:/$ sudo k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-33-183 Ready control-plane,master 81m v1.20.2+k3s1
ip-172-31-20-78 Ready control-plane,master 81m v1.20.2+k3s1
</code></pre>
<p>However, the worker nodes yield an error about the SSL certificate:</p>
<pre><code>sudo systemctl status k3s-agent
β k3s-agent.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s-agent.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-01-24 15:54:10 UTC; 19min ago
Docs: https://k3s.io
Process: 3065 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 3066 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 3067 (k3s-agent)
Tasks: 6
Memory: 167.3M
CGroup: /system.slice/k3s-agent.service
ββ3067 /usr/local/bin/k3s agent
Jan 24 16:12:23 ip-172-31-27-179 k3s[3311]: time="2021-01-24T16:34:02.483557102Z" level=info msg="Running load balancer 127.0.0.1:39357 -> [104.248.34.
Jan 24 16:12:23 ip-172-31-27-179 k3s[3067]: time="2021-01-24T16:12:23.313819380Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:339
</code></pre>
<blockquote>
<p>level=error msg="failed to get CA certs: Get "https://127.0.0.1:39357/cacerts": EOF"</p>
</blockquote>
<p>if I try to change K3S_URL in <code>/etc/systemd/system/k3s-agent.service.env</code> to use <code>http</code>, I get an error saying that only https is accepted.</p>
| <p>Using the IP Address instead of the hostname in <code>k3s-agent.service.env</code> works for me. Not really a solution as much as a workaround.</p>
<p><code>/etc/systemd/system/k3s-agent.service.env</code></p>
<pre><code>K3S_TOKEN='<token>'
K3S_URL='https://192.168.xxx.xxx:6443'
</code></pre>
|
<p>I am using a GKE cluster with Ubuntu nodes & containerd as the CRI. I am not able to see any pause containers the way we used to see them in docker. Do they no longer exist in containerd? Has anything changed that I am not aware of?</p>
<p>cluk_ti4@gke-cluster-1-default-pool-b897ab15-7kzn:~$ sudo -i</p>
<pre><code>root@gke-cluster-1-default-pool-b897ab15-7kzn:~# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

root@gke-cluster-1-default-pool-b897ab15-7kzn:~# crictl ps
CONTAINER       IMAGE           CREATED        STATE     NAME                   ATTEMPT   POD ID
65b544036d815   cb2388d1f0a57   25 hours ago   Running   data-db                0         868cf5dd712a3
42afee3f328b1   0e403e3816e89   25 hours ago   Running   dbcontainer            0         1e67b12c7ddbf
944ac9c2334e2   295c7be079025   25 hours ago   Running   nginx                  0         9bc0d4292190b
99aff9af2f0c8   0e403e3816e89   25 hours ago   Running   redis                  0         d5cda32e41f0f
43af76f1b819e   6266988902813   4 days ago     Running   prometheus-to-sd       0         43441f62220af
17d024b959956   d204263033d6e   4 days ago     Running   sidecar                0         43441f62220af
e417d5e3b723f   ffd5a31c75009   4 days ago     Running   dnsmasq                0         43441f62220af
d1c035046787f   1434d0253770f   4 days ago     Running   konnectivity-agent     0         b19697ac36cf1
596e5a51c5fb8   1434d0253770f   4 days ago     Running   konnectivity-agent     0         d0d35f65b0a3d
9231a60426be9   98b27a8d721c5   4 days ago     Running   gce-pd-driver          0         e17e59a9486f6
63b943910b402   a26d732ed0895   4 days ago     Running   gke-metrics-agent      0         7052db1775ea5
cca9f35cec83d   5440bb4e13af5   4 days ago     Running   kubedns                0         43441f62220af
83570807e719c   ff9d4d52a7759   4 days ago     Running   fluentbit-gke          0         7bf90df2dc604
a0444e1f50435   8ee6ce05080ec   4 days ago     Running   csi-driver-registrar   0         e17e59a9486f6
16ec942baf944   294aee909773c   4 days ago     Running   fluentbit              0         7bf90df2dc604
1cb29a3a26896   217b1e208caea   4 days ago     Running   kube-proxy             0         3923908ea54d7
</code></pre>
<p>root@gke-cluster-1-default-pool-b897ab15-7kzn:~# crictl ps | grep -i pause</p>
<p>root@gke-cluster-1-default-pool-b897ab15-7kzn:~#</p>
| <p><a href="https://i.stack.imgur.com/fjhVr.png" rel="nofollow noreferrer">enter image description here</a></p>
<pre><code>ctr -n k8s.io c ls
</code></pre>
<p>you can do like this</p>
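<p>If you only want to see the sandbox (pause) containers, a quick check is to filter the <code>ctr</code> output for the pause image, or to list the sandboxes through <code>crictl</code>. These commands are a sketch; the exact pause image name on GKE may differ (e.g. <code>gke.gcr.io/pause</code>):</p>
<pre class="lang-bash prettyprint-override"><code># sandbox (pause) containers as containerd sees them
ctr -n k8s.io containers ls | grep -i pause

# crictl hides sandboxes from `crictl ps`, but lists them as pods
crictl pods
</code></pre>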
|
<p>I'm deploying my application in the cloud, inside a cluster on 03 pods:</p>
<ol>
<li>one pod: backend - Quarkus</li>
<li>one pod: frontend - Angular</li>
<li>one pod: DB - Postgres</li>
</ol>
<p><strong>The backend has 03 endpoints:</strong></p>
<ol>
<li>One endpoint: GraphQL</li>
<li>Two endpoints: Rest</li>
</ol>
<p><strong>The pods are exposed:</strong></p>
<ol>
<li>backend: ClusterIp</li>
<li>DB: ClusterIp</li>
<li>frontend: NodePort</li>
</ol>
<p><strong>I have an Nginx web server & 02 ingress manifests; one for the backend and a second for the frontend:</strong></p>
<p>1- backend-ingress:</p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: mcs-thirdparty-back-ingress
namespace: namespace
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
ingressClassName: nginx-internal
rules:
- host: backend.exemple.com
http:
paths:
- path: /
backend:
service:
name: mcs-thirdparty-backend
port:
number: 8080
pathType: Prefix
</code></pre>
<p>2- frontend-ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: mcs-thirdparty-ingress
namespace: namespace
spec:
ingressClassName: nginx-internal
rules:
- host: bilels.exemple.com
http:
paths:
- path: /
backend:
service:
name: mcs-thirdparty-frontend
port:
number: 80
pathType: Prefix
</code></pre>
<p>For the GraphQL endpoint/request, the frontend can correctly communicate with the backend and fetches the required data.
When I run the POST request to fetch the accessToken from the server (REST endpoint), I receive a 404 error code.</p>
<p><a href="https://i.stack.imgur.com/rp4ey.jpg" rel="nofollow noreferrer">The error screenshot is here</a></p>
<p>I tried several changes in the backend-ingress manifest, but I always get 404:
<code>- path: /(.*)</code>
<code>- path: /*</code>
<code>- path: /response</code></p>
| <p>I think I managed to find another diagnostic method, with the help of Ryan Dawson.
I port-forwarded the backend pod and sent the request locally; it returned a 500 error code, meaning the request did not match the API requirements: in the frontend I was sending the wrong content type.</p>
<p>--> So the ingress configuration was already in good shape.</p>
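<p>For reference, the check looked roughly like this (a sketch: the REST path and payload are illustrative, the service name and port are taken from the manifests above):</p>
<pre class="lang-bash prettyprint-override"><code># forward the backend service to localhost, bypassing the ingress entirely
kubectl port-forward -n namespace svc/mcs-thirdparty-backend 8080:8080

# in a second terminal, call the REST endpoint directly and inspect the status code
curl -v -X POST http://localhost:8080/response \
  -H "Content-Type: application/json" \
  -d '{"example": "payload"}'
</code></pre>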
|
<pre><code>lifecycle:
preStop:
exec:
command: ["sh", "-c", "curl -v -X PUT -d '\"SHUTTING_DOWN\"' http://localhost:8080/v1/info/state"]
</code></pre>
<p>I am expecting this will produce a curl url like</p>
<pre><code>curl -v -X PUT -d '"SHUTTING_DOWN"' http://localhost:8080/v1/info/state
</code></pre>
<p>How ever I am getting with extra single quotes surrounded ''"SHUTTING_DOWN"''</p>
<pre><code>curl -v -X PUT -d ''"SHUTTING_DOWN"'' http://localhost:8080/v1/info/state
</code></pre>
<p>Any pointers, where am I going wrong?</p>
| <p>I'd suggest getting rid of as many layers of quotes as you can. In the original example you have a layer of quotes from YAML, plus a layer of quotes from the <code>sh -c</code> wrapper. Since you need the HTTP PUT body itself to have both single and double quotes β you need to send the string <code>'"SHUTTING_DOWN"'</code> with both kinds of quotes over the wire β getting rid of as much quoting as you can is helpful.</p>
<p>In both the <a href="https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_02" rel="nofollow noreferrer">shell</a> and <a href="https://yaml.org/spec/1.2.2/#73-flow-scalar-styles" rel="nofollow noreferrer">YAML</a>, the two kinds of quotes behave differently. Backslash escaping only works in double-quoted strings and so you probably need that at the outer layer; then you need single quotes inside the double quotes; and then you need backslash-escaped double quotes inside that.</p>
<p>In YAML specifically the quotes around strings are usually optional, unless they're required to disambiguate things (forcing <code>'true'</code> or <code>'12345'</code>) to be strings. This lets you get rid of one layer of quoting. You also may find this slightly clearer if you use YAML <em>block style</em> with one list item on a line.</p>
<pre class="lang-yaml prettyprint-override"><code>command:
- /bin/sh
- -c
- curl -v -X PUT -d "'\"SHUTTING_DOWN\"'" http://localhost:8080/v1/info/state
</code></pre>
<p>I might even go one step further here, though. You're not using environment variable expansion, multiple commands, or anything else that requires a shell. That means you don't need the <code>sh -c</code> wrapper. If you remove this, then the only layer of quoting you need is YAML quoting; you don't need to worry about embedding a shell-escaped string inside a YAML-quoted string.</p>
<p>You do need to make sure the quotes are handled correctly. If the string begins with a <code>'</code> or <code>"</code> then YAML will parse it as a quoted string, and if not then <a href="https://yaml.org/spec/1.2.2/#733-plain-style" rel="nofollow noreferrer">there are no escaping options in an unquoted string</a>. So again you probably need to put the whole thing in a double-quoted string and backslash-escape the double quotes that are part of the value.</p>
<p>Remember that each word needs to go into a separate YAML list item. <code>curl</code> like many commands will let you combine options and arguments, so you can have <code>-XPUT</code> as a single argument or <code>-X</code> and <code>PUT</code> as two separate arguments, but <code>-X PUT</code> as a single word will include the space as part of that word and confuse things.</p>
<pre class="lang-yaml prettyprint-override"><code>command:
- curl
- -v
- -X
- PUT
- -d
- "'\"SHUTTING_DOWN\"'"
- http://localhost:8080/v1/info/state
</code></pre>
|
<p>I'm new to nextflow. We would like to build our workflow using nextflow and have nextflow deploy the workflow to a large mulit-institution Kubernetes cluster that we use.</p>
<p>In this cluster we don't have admin permission, we have a namespace we work in. Also, pods in our cluster have limited resources, but jobs have unlimited resources.</p>
<p>Looking at <a href="https://www.nextflow.io/docs/latest/kubernetes.html" rel="nofollow noreferrer">the documentation</a> for Nextflow + Kubernetes, it says that the workflow runs under a Kubernetes pod, which raises red flags for me because of the limitation on pods in our cluster.</p>
<p>Is there a way to execute Nextflow workflows as Kubernetes jobs instead of pods? What are my options in this area?</p>
| <p>There might be a new feature with nextflow 22.04. Quoting Ben Sherman's <a href="https://github.com/seqeralabs/nf-k8s-best-practices/blob/master/feature-evaluation.md" rel="nofollow noreferrer">nextflow k8s best practices</a> page:</p>
<blockquote>
<p>In Nextflow v22.04 and later, the <code>k8s</code> executor can be configured to use Jobs instead of Pods directly.</p>
</blockquote>
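<p>A minimal sketch of what that could look like in <code>nextflow.config</code>, assuming the <code>k8s.computeResourceType</code> option that page describes (verify the option name against the docs for your Nextflow version; the namespace and service account are placeholders):</p>
<pre><code>// nextflow.config (sketch)
process.executor = 'k8s'

k8s {
    namespace           = 'my-namespace'
    serviceAccount      = 'my-service-account'
    computeResourceType = 'Job'   // submit tasks as Jobs instead of bare Pods
}
</code></pre>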
|
<p>Given a bash function in .bashrc such as</p>
<pre><code>kgp () {
kubectl get po -n $1 $2
}
</code></pre>
<p>Is it possible to have kubectl auto complete work for k8s resources such as namespaces/pods? As an example if I use</p>
<pre><code>kubectl get po -n nsprefix podprefix
</code></pre>
<p>I can tab auto complete the prefix. Whereas with the positional parameters when I call</p>
<pre><code>kgp nsprefix podprefix
</code></pre>
<p>I have to type out the entire resource name.</p>
| <p>Yes, that's because bash-completion only understands <em>known commands</em>, not aliases or new functions that you have made up. You will experience the same thing with a trivial example of <code>alias whee=/bin/ls</code> and then <code>whee <TAB></code> will do nothing because it doesn't "recurse" into that alias, and <em>for sure</em> does not attempt to call your function in order to find out what arguments it could possibly accept. That could potentially be catastrophic</p>
<p>You're welcome to create a new <a href="https://www.gnu.org/software/bash/manual/html_node/Programmable-Completion.html#Programmable-Completion" rel="nofollow noreferrer"><code>complete</code></a> handler for your custom <code>kgp</code>, but that's the only way you'll get the desired behavior</p>
<pre class="lang-bash prettyprint-override"><code>_kgp_completer() {
local cur prev words cword
COMPREPLY=()
_get_comp_words_by_ref -n : cur prev words cword
if [[ $cword == 1 ]] && [[ -z "$cur" ]]; then
COMPREPLY=( $(echo ns1 ns2 ns3) )
elif [[ $cword == 2 ]] && [[ -z "$cur" ]]; then
COMPREPLY=( $(echo pod1 pod2 pod3) )
fi
echo "DEBUG: cur=$cur prev=$prev words=$words cword=$cword COMPREPLY=${COMPREPLY[@]}" >&2
}
complete -F _kgp_completer kgp
</code></pre>
|
<p>In the YAML file I have configured auto-scaling, and in this block I've set the minimum and maximum replica counts, but a few deployments still only have 1 pod.
I know this might be due to traffic: the deployments having 1 pod might have less traffic.
Here's the relevant part of the YAML file:</p>
<pre><code> autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 3
targetCPUUtilizationPercentage: 70
targetMemoryUtilizationPercentage: 80
</code></pre>
<p>But, Is there any command through which I can see the limit of my replicas?</p>
| <p>You can use:</p>
<pre><code>kubectl get hpa -A -w
</code></pre>
<p>With the <code>-A</code> option it lists the HPAs from all <code>namespaces</code>; if you want a specific <code>namespace</code>, use the <code>-n</code> option instead. The <code>-w</code> argument is for watch, so you get a continuously refreshing view of your HPA resources, including the MINPODS and MAXPODS columns.</p>
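<p>The MINPODS and MAXPODS columns show the configured replica limits, for example (illustrative output):</p>
<pre><code>$ kubectl get hpa -A
NAMESPACE   NAME     REFERENCE           TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
default     my-app   Deployment/my-app   41%/70%, 35%/80%   1         3         1          12d
</code></pre>
<p>For the full spec and recent scaling events of a single autoscaler, you can also run <code>kubectl describe hpa my-app -n default</code>.</p>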
|
<p>I run <code>kubent</code> and identify an outdated apiVersion</p>
<pre><code>> kubent
...
KIND NAMESPACE NAME API_VERSION REPLACE_WITH (SINCE)
PodDisruptionBudget mynamespace mypdb policy/v1beta1 policy/v1 (1.21.0)
</code></pre>
<p>I try to patch the resource in place but that doesn't seem to work:</p>
<pre><code>kubectl patch PodDisruptionBudget mypdb --namespace mynamespace -p'{"apiVersion":"policy/v1"}'
poddisruptionbudget.policy/mypdb patched (no change)
</code></pre>
<p>Running <code>kubent</code> still shows it's outdated.</p>
<p>Why doesn't patch work for updating apiVersion? I need to do this for many resources in many namespaces, so I want to script it out.</p>
<p>Also, when I run <code>kubectl edit PodDisruptionBudget mypdb --namespace mynamespace</code> it shows the apiVersion is the updated one ("policy/v1"), but <code>kubent</code> still shows it as outdated (policy/v1beta1).</p>
<h2>Edit</h2>
<p>Per the suggested answer I did this, but it did not work. It applied without error, but running kubent again still shows the resources outdated:</p>
<pre><code>kubectl get PodDisruptionBudget \
-A \
-o yaml > updated.yaml \
&& kubectl apply -f updated.yaml
</code></pre>
<h2>Edit 2</h2>
<p>Maybe kubent is not reporting the apiVersion correctly, because after running apply, if I run <code>kubectl get poddisruptionbudget.v1.policy -A</code> it returns the same resources that <code>kubent</code> says are using outdated versions.</p>
| <p>Here is a solution for you.</p>
<pre class="lang-bash prettyprint-override"><code>kubent
### output:
(minikube:default) [23:23:23] [~/repositories/KubernetesLabs] git(master) kubent
11:23PM INF >>> Kube No Trouble `kubent` <<<
11:23PM INF version 0.5.1 (git sha a762ff3c6b5622650b86dc982652843cc2bd123c)
11:23PM INF Initializing collectors and retrieving data
11:23PM INF Target K8s version is 1.23.3
11:23PM INF Retrieved 52 resources from collector name=Cluster
11:23PM INF Retrieved 0 resources from collector name="Helm v2"
11:23PM INF Retrieved 9 resources from collector name="Helm v3"
11:23PM INF Loaded ruleset name=custom.rego.tmpl
11:23PM INF Loaded ruleset name=deprecated-1-16.rego
11:23PM INF Loaded ruleset name=deprecated-1-22.rego
11:23PM INF Loaded ruleset name=deprecated-1-25.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.25 <<<
------------------------------------------------------------------------------------------
KIND NAMESPACE NAME API_VERSION REPLACE_WITH (SINCE)
PodDisruptionBudget istio-system istio-ingressgateway policy/v1beta1 policy/v1 (1.21.0)
PodDisruptionBudget istio-system istiod policy/v1beta1 policy/v1 (1.21.0)
</code></pre>
<p><a href="https://i.stack.imgur.com/CBq5n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CBq5n.png" alt="enter image description here" /></a></p>
<hr />
<h1>How to update API?</h1>
<ul>
<li>The trick is this: when you "get" the resource, K8S will "update" the <code>apiVersion</code> and display the correct one.</li>
<li>Then you can apply the "updated" API back.</li>
<li>This allows you to write a script which does it for you, as asked.</li>
</ul>
<pre class="lang-bash prettyprint-override"><code># Get the list of outdated resources in a json format
kubent -o json > outdated.json
# Check the output
cat outdated.json
# Grab the desired resource name(s) from the json and loop over them ==> script
</code></pre>
<h3>Update script:</h3>
<ul>
<li>The content of the updated script will be:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code># The content of your script will be
# (loop over the list and run the following command):
- Get the updated API using kubectl get
- Save the updated content to file
- apply the changes
kubectl get <resourceType> \
<resourceName> \
-n <namespace> \
-o yaml > newUpdatedApi.yaml \
&& kubectl apply -f newUpdatedApi.yaml
</code></pre>
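<p>A concrete sketch of such a loop (it assumes <code>jq</code> is installed; the <code>Kind</code>/<code>Namespace</code>/<code>Name</code> key names are an assumption based on kubent's table columns, so verify them against your own <code>outdated.json</code>):</p>
<pre class="lang-bash prettyprint-override"><code># dump everything kubent flagged, then re-get and re-apply each resource
kubent -o json > outdated.json

jq -r '.[] | "\(.Kind) \(.Namespace) \(.Name)"' outdated.json |
while read -r kind ns name; do
  kubectl get "$kind" "$name" -n "$ns" -o yaml > "updated-${ns}-${name}.yaml"
  kubectl apply -f "updated-${ns}-${name}.yaml"
done
</code></pre>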
<h3>Demo:</h3>
<pre class="lang-bash prettyprint-override"><code>
# Print out the outdated resources
kubent
# Get the updated apiVersion and save to file
# also apply the changes
kubectl get PodDisruptionBudget \
istiod \
-n istio-system \
-o yaml > updated.yaml \
&& kubectl apply -f updated.yaml
# Check to verify that the updated.yaml indeed have the desired apiVersion
head -1 updated.yaml
# Verify that the "patch" is made
kubent
</code></pre>
<p><a href="https://i.stack.imgur.com/Hdq6Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hdq6Y.png" alt="enter image description here" /></a></p>
|
<p>I am currently implementing a CI Pipeline using Tekton. I was wondering if there is a way to use some kind of <code>valueFromEnv</code> for pipeline params.</p>
<p>For example, to authenticate a Task for SonarQube analysis against my company's Sonar host I need the login token, which I would rather insert via a reference to a secret than pass directly.</p>
<p>As I am relatively new to tekton I am unsure if I just haven't grasped the tekton way of doing this. Two possibilities that crossed my mind were:</p>
<ol>
<li>A "Pre-Task" which reads the env in it's step definition and publishes it as a result (which then can be used as param to the next Task)</li>
<li>Mounting the secret as a file for the Task to load the secret (e.g. by <code>cat</code>ting it)</li>
</ol>
<p>Neither of those ideas feels like the right way to do it, but maybe I am wrong here.</p>
<p>Any help is appreciated!</p>
| <p>Your first idea is not impossible, but in my eyes ugly as well. You can set the desired ENV in your image via the Dockerfile and use it later in the task:</p>
<p>Dockerfile (example):</p>
<pre><code>FROM gradle:7.4-jdk11
USER root
RUN apt-get update && apt-get install -y npm
ENV YOUR_VARIABLE_KEY="any VALUE"
</code></pre>
<p>afterwards you can just use it in script tasks like:</p>
<pre><code>echo $YOUR_VARIABLE_KEY
</code></pre>
<p><strong>RECOMMENDED (for Openshift)</strong></p>
<p>The cleaner way is to define it as a Secret (key/value) or as a SealedSecret (Opaque).</p>
<p>This can be done directly within the namespace in the OpenShift UI or as code.</p>
<p>Next step is to "bind" it in your task:</p>
<pre><code>spec:
description: |-
any
params:
- name: any-secret-name
default: "any-secret"
type: string
stepTemplate:
name: ""
resources:
limits:
cpu: 1500m
memory: 4Gi
requests:
cpu: 250m
memory: 500Mi
steps:
- image: $(params.BUILDER_IMAGE)
name: posting
resources:
limits:
cpu: 1500m
memory: 4Gi
requests:
cpu: 250m
memory: 500Mi
env:
- name: YOU_NAME_IT
valueFrom:
secretKeyRef:
name: $(params.any-secret-name)
key: "any-secret-key"
script: |
#!/usr/bin/env sh
set -eu
set +x
echo $YOU_NAME_IT
set -x
</code></pre>
<p><strong>BEWARE!!!</strong> If you run it that way, nothing is logged; if you leave out <code>set +x</code> before and <code>set -x</code> after the <code>echo</code>, the secret value is logged.</p>
<p>Now, I saw you may not be working in OpenShift; here is the Kubernetes page: <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a> => "Using Secrets as environment variables" (this is close to your first idea, and the whole page is a good cookbook).</p>
|
<p>The goal is to monitor the flowable project deployed on Kubernetes using Prometheus/Grafana</p>
<p>Install kube-prometheus-stack using helm charts:</p>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
</code></pre>
<p>It's successfully deployed, and we are able to start monitoring the other resources inside our Kubernetes cluster using Prometheus/Grafana.</p>
<p>Next, Flowable is running as a pod, and I want to get the Flowable pod metrics into Prometheus and build a dashboard on top of them.</p>
<p>Any suggestions on how to achieve the monitoring for a Flowable application running as a pod inside Kubernetes?</p>
| <p>Flowable (as a Spring Boot application) uses Micrometer, which will provide metrics in Prometheus format as soon as you add the <code>micrometer-registry-prometheus</code> dependency. The endpoint is then <code>actuator/prometheus</code>.</p>
<p>Creating your own Prometheus metric is actually not that difficult. You can create a bean implementing <code>FlowableEventListener</code> and <code>MeterBinder</code> and then listen to the FlowableEngineEventType <code>PROCESS_COMPLETED</code> to increase a Micrometer <code>Counter</code> every time a process gets completed.</p>
<p>Register your counter to the <code>MeterRegistry</code> in the bindTo() method and the metric should be available over the prometheus endpoint. No need for a dedicated exporter pod.</p>
|
<p>I was passing one of the sample tests for CKA and one question says this:</p>
<p>"Configure a LivenessProbe which simply runs <code>true</code>"</p>
<p>This is while creating simple nginx pod(s) in the general question, then they ask that as one of the items. What does that mean and how to do it?</p>
| <p><code>...Configure a LivenessProbe which simply runs true...while creating simple nginx pod...</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- image: nginx:alpine
name: nginx
ports:
- containerPort: 80
livenessProbe:
exec:
command: ["true"]
</code></pre>
<p><code>true</code> is a command that returns exit code zero. In this case it means the probe simply returns no error. Alternatively, you can probe nginx with: <code>command: ["ash","-c","nc -z localhost 80"]</code>.</p>
|
<p>I'm facing this error in Kubernetes: <em>0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable:</em> <em>}, that the pod didn't tolerate.</em> My application server is down.</p>
<p>First, I just added one file in a DaemonSet; due to memory allocation (we have only one node), all pods failed to be scheduled and are stuck in the <strong>Pending</strong> state (everything clashes and stays Pending). If I delete all deployments and run any new deployment, it also shows the <strong>Pending</strong> condition. Please help me sort out this issue. I also tried the taint commands, but they don't work either.</p>
<p>Can I add a node to the existing cluster, or should I revoke the instance? Thanks in advance.</p>
| <p>You need to configure autoscaling for the cluster (it doesn't work by default):
<a href="https://docs.aws.amazon.com/eks/latest/userguide/create-managed-node-group.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/create-managed-node-group.html</a>
Or you can manually change the desired size of the node group.
Also, make sure that your deployment has resource requests that fit your nodes.</p>
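<p>To confirm what is going on and to scale manually, something along these lines can help (a sketch; cluster, node group, pod and namespace names are placeholders):</p>
<pre class="lang-bash prettyprint-override"><code># check why the node is unreachable and what is tainting it
kubectl get nodes
kubectl describe node <node-name> | grep -i -A3 taints

# see the scheduler's reason for a stuck pod
kubectl describe pod <pending-pod> -n <namespace>

# manually grow an EKS managed node group
eksctl scale nodegroup --cluster <cluster-name> --name <nodegroup-name> --nodes 2
</code></pre>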
|
<p>We are using the Helm provider in Terraform to provision the Istio ingress gateway, which in the backend uses this <a href="https://artifacthub.io/packages/helm/istio-official/gateway" rel="nofollow noreferrer">chart</a>.</p>
<p>Below is the Terraform code snippet to provision it. Please help with overriding the default chart values to create an internal load balancer instead of the default external one. We are aware that it can be done by updating the annotation in the manifest file, but we are not sure how to do the same in the Terraform code snippet.</p>
<pre><code>terraform {
required_providers {
helm = {
source = "hashicorp/helm"
version = ">= 1.0.0"
}
}
}
provider "helm" {
kubernetes {
config_path = "${var.kubeconfig_file}"
}
}
resource "helm_release" "istio-ingress" {
repository = local.istio_charts_url
chart = "gateway"
name = "istio-ingress-gateway"
namespace = kubernetes_namespace.istio_system.metadata.0.name
version = ">= 1.12.1"
timeout = 500
cleanup_on_fail = true
force_update = false
depends_on = [helm_release.istiod]
}
</code></pre>
| <p>You can either use the <code>set</code> argument block (<a href="https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#set" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#set</a>) or the <code>values</code> argument (<a href="https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#values" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#values</a>) of the resource <code>helm_release</code> to override the default settings from the original <code>values.yaml</code></p>
<p>Should look something like:</p>
<pre><code>resource "helm_release" "istio-ingress" {
repository = local.istio_charts_url
chart = "gateway"
name = "istio-ingress-gateway"
namespace = kubernetes_namespace.istio_system.metadata.0.name
version = ">= 1.12.1"
timeout = 500
cleanup_on_fail = true
force_update = false
depends_on = [helm_release.istiod]
set {
name = "serviceAnnotations.cloud.google.com/load-balancer-type"
value = "internal"
}
}
</code></pre>
<p>or</p>
<pre><code>resource "helm_release" "istio-ingress" {
repository = local.istio_charts_url
chart = "gateway"
name = "istio-ingress-gateway"
namespace = kubernetes_namespace.istio_system.metadata.0.name
version = ">= 1.12.1"
timeout = 500
cleanup_on_fail = true
force_update = false
depends_on = [helm_release.istiod]
values = [
file("${path.module}/custom-values.yaml")
]
}
</code></pre>
<p>and place a <code>custom-values.yaml</code> file in your code.</p>
<p>This article explains it pretty nicely:
<a href="https://getbetterdevops.io/terraform-with-helm/" rel="nofollow noreferrer">https://getbetterdevops.io/terraform-with-helm/</a></p>
<p>Hope this helps!</p>
|
<pre><code>apiVersion: v1
kind: Pod
spec:
serviceAccountName: <service account name in kubernetes>
containers:
- name: spark-submit
image: <image arn>
command:
- cat
tty: true
- name: kube-tools
image: <image arn>
command:
- cat
tty: true
</code></pre>
<p>This is part of the YAML configuration of the Kubernetes agent in a Jenkinsfile. What does the <strong>- cat tty: true</strong> mean?
I suppose <code>tty: true</code> means activating the terminal.</p>
| <p>Please note that these are two different settings.
With <code>tty: true</code> you enable a TTY which is required for execution of a shell.</p>
<p>The second is <code>cat</code> as the argument for the container <code>command</code>. It means that the <code>cat</code> executable is run without further arguments.
If you run <code>cat</code> this way on a terminal yourself, you will notice that it does nothing but block and wait for input. The reason is that the <code>cat</code> command ("concatenate" in short) reads input from a file or stdin and prints it. If there is nothing, it just waits. (See the manpage here as well: <a href="https://man7.org/linux/man-pages/man1/cat.1.html" rel="nofollow noreferrer">https://man7.org/linux/man-pages/man1/cat.1.html</a> )</p>
<p>That is often the sole idea: start a container or Pod and keep it running. It will be terminated by other means.
You can think of it like <code>sleep infinity</code>.</p>
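<p>If you do not actually need an interactive shell in that container, a sketch of the same keep-alive without <code>cat</code> (assuming the image ships a <code>sleep</code> that accepts <code>infinity</code>, as GNU coreutils does):</p>
<pre class="lang-yaml prettyprint-override"><code>  - name: kube-tools
    image: <image arn>
    command: ["sleep", "infinity"]
</code></pre>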
|
<p>I am working on integrating a few Laravel PHP applications into a new Kubernetes architecture, and still struggling on how I can run <code>php artisan schedule:run</code> in a nice manner.</p>
<p>In the official Laravel manual, we are advised to set up the cronjob like this.</p>
<pre><code>* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
</code></pre>
<p>ref. <a href="https://readouble.com/laravel/5.7/en/scheduling.html" rel="nofollow noreferrer">https://readouble.com/laravel/5.7/en/scheduling.html</a></p>
<p><strong>Cronjob</strong></p>
<p>Initially, I came up with the idea of using a CronJob in Kubernetes; it works fine for now, but I started to worry about the current architecture.</p>
<p>(One deployment for the web service, and one CronJob for the task scheduling.)</p>
<pre><code>---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: cron
namespace: my-laravel-app
spec:
concurrencyPolicy: Replace
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- image: my_laravel_app_image:latest
name: cron
command: ["php", "artisan", "schedule:run"]
imagePullPolicy: Always
envFrom:
- configMapRef:
name: laravel-app-config
- secretRef:
name: laravel-app-secret
restartPolicy: Never
</code></pre>
<p>However, since I use <code>concurrencyPolicy: Replace</code> here the pod itself might be terminated even the job is still running (for more than 1 minute). To avoid this issue, I could use the default value <code>concurrencyPolicy: Allow</code> but it introduces another issue - Even I set <code>failedJobsHistoryLimit</code> at 1 and <code>successfulJobsHistoryLimit</code> at 1 the pod associated with the job are not properly terminated in the current running in-house Kubernetes cluster, and it reaches quota limit.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/test-cronjob-a 0/1 Completed 0 4m30s
pod/test-cronjob-b 0/1 Completed 0 3m30s
pod/test-cronjob-c 0/1 Completed 0 2m29s
pod/test-cronjob-d 0/1 Completed 0 88s
pod/test-cronjob-e 0/1 Completed 0 28s
</code></pre>
<p>ref. <a href="https://github.com/kubernetes/kubernetes/issues/74741#issuecomment-712707783" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/74741#issuecomment-712707783</a></p>
<p>Also, I feel it's a bit tricky to configure the monitoring and logging stack for those one-off jobs.</p>
<p><strong>Deployment</strong></p>
<p>Instead of using a CronJob, I'm thinking of deploying the scheduler as another pod that has the cron setting inside the container, using a Deployment resource.</p>
<p>(One deployment for the web service, and one deployment for the task scheduling.)</p>
<p><strong>I just wonder how you guys normally work around this issue in a scalable manner.</strong></p>
| <p>Instead of running <code>schedule:run</code> use K8s kind: Deployment (not Job) and run scheduler as a daemon:</p>
<pre><code>php artisan schedule:work
</code></pre>
<p><a href="https://laravel.com/docs/9.x/scheduling#running-the-scheduler-locally" rel="nofollow noreferrer">https://laravel.com/docs/9.x/scheduling#running-the-scheduler-locally</a></p>
<p>More details:</p>
<p>See: <a href="https://medium.com/@agungdarmanto/how-to-run-a-laravel-application-into-kubernetes-a6d0111dc98d" rel="nofollow noreferrer">https://medium.com/@agungdarmanto/how-to-run-a-laravel-application-into-kubernetes-a6d0111dc98d</a></p>
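<p>A sketch of what that Deployment could look like, reusing the image, ConfigMap and Secret from your CronJob (a single replica so only one scheduler evaluates the schedule):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-scheduler
  namespace: my-laravel-app
spec:
  replicas: 1                      # one scheduler avoids duplicate runs
  selector:
    matchLabels:
      app: laravel-scheduler
  template:
    metadata:
      labels:
        app: laravel-scheduler
    spec:
      containers:
        - name: scheduler
          image: my_laravel_app_image:latest
          imagePullPolicy: Always
          command: ["php", "artisan", "schedule:work"]
          envFrom:
            - configMapRef:
                name: laravel-app-config
            - secretRef:
                name: laravel-app-secret
</code></pre>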
|
<p>I am trying to use google cloud for my pubsub event driven application.
Currently, I am setting up Cloud Run for Anthos following the below tutorials</p>
<ul>
<li><a href="https://codelabs.developers.google.com/codelabs/cloud-run-events-anthos#7" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/cloud-run-events-anthos#7</a></li>
<li><a href="https://cloud.google.com/anthos/run/archive/docs/events/cluster-configuration" rel="nofollow noreferrer">https://cloud.google.com/anthos/run/archive/docs/events/cluster-configuration</a></li>
</ul>
<p>I have created the GKE clusters. It is successful and is up and running.</p>
<p>However, I am getting the below error when I try to create event broker.</p>
<p><strong>$ gcloud beta events brokers create default --namespace default</strong></p>
<pre><code>X Creating Broker... BrokerCell cloud-run-events/default is not ready
- Creating Broker...
Failed.
ERROR: gcloud crashed (TransportError): HTTPSConnectionPool(host='oauth2.googleapis.com', port=443): Max retries exceeded with url: /token (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
gcloud's default CA certificates failed to verify your connection, which can happen if you are behind a proxy or firewall.
To use a custom CA certificates file, please run the following command:
gcloud config set core/custom_ca_certs_file /path/to/ca_certs
</code></pre>
<p>However, When I rerun the command, it shows broker already exists</p>
<p><strong>$ gcloud beta events brokers create default --namespace default</strong></p>
<pre><code>ERROR: (gcloud.beta.events.brokers.create) Broker [default] already exists.
</code></pre>
<p>Checking the status of broker, it shows <strong>BrokerCellNotReady</strong></p>
<p><strong>$ kubectl get broker -n default</strong></p>
<pre><code>NAME URL AGE READY REASON
default http://default-brokercell-ingress.cloud-run-events.svc.cluster.local/default/default 39m Unknown BrokerCellNotReady
</code></pre>
<p>And I am getting status pending for default-brokercell-fanout pod.</p>
<p><strong>$ kubectl get pods -n cloud-run-events</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
controller-648c495796-b5ccb 1/1 Running 0 105m
default-brokercell-fanout-855494bb9b-2c7zv 0/1 Pending 0 100m
default-brokercell-ingress-5f8cdc6467-wwq42 1/1 Running 0 100m
default-brokercell-retry-6f4f9696d6-tg898 1/1 Running 0 100m
webhook-85f7bc69b4-qrpck 1/1 Running 0 109m
</code></pre>
<p>I couldn't find any discussion related to this error.
Please give me some ideas to resolve this issue.</p>
| <p>I encountered the same issue. The reason might be that the given cluster setup does not have enough CPU resources.</p>
<p>You can check it by:</p>
<pre><code>kubectl describe pod/default-brokercell-retry-6f4f9696d6-tg898 -n cloud-run-events
</code></pre>
<p>If the output is
<a href="https://i.stack.imgur.com/DeIul.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DeIul.png" alt="enter image description here" /></a>
then that's the reason.</p>
<p>Once you know the root cause, you can fix it in various ways, e.g., by enabling auto-scaling in your node pool.</p>
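<p>For example, on GKE you could either grow the pool or turn on autoscaling (names and numbers are placeholders):</p>
<pre class="lang-bash prettyprint-override"><code># add nodes to the existing pool
gcloud container clusters resize CLUSTER_NAME --node-pool default-pool \
  --num-nodes 3 --zone ZONE

# or let GKE scale the pool automatically
gcloud container clusters update CLUSTER_NAME --node-pool default-pool \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 --zone ZONE
</code></pre>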
|
<p>We are using a Node.js application along with Kubernetes with auto-scaling.
My requirement is to log a few things with respect to the current instance (pod, IP, or anything unique for that instance).</p>
<p>Is there any way in the node js and typescript to get such identifiers in the backend application?</p>
| <p>The hostname will contain the unique (per namespace) pod name.
So just get the current hostname of the backend and you have a unique identifier.</p>
<p>For example</p>
<pre><code>const os = require("os");
const hostName = os.hostname();
</code></pre>
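<p>If you also need the pod IP (or prefer to be explicit instead of relying on the hostname), you can inject both through the Downward API and read them from <code>process.env</code>; a small sketch for the container spec:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
</code></pre>
<p>In the application these are then available as <code>process.env.POD_NAME</code> and <code>process.env.POD_IP</code>.</p>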
|
<p>Now I have Pods as Kubernetes structs with the help of the command</p>
<pre><code>pods , err := clientset.CoreV1().Pods("namespace_String").List(context.TODO(), metav1.ListOptions{})
</code></pre>
<p>Now how do I get them as individual YAML files?
Which command should I use?</p>
<pre><code>for i , pod := range pods.Items{
if i==0{
        t := reflect.TypeOf(&pod)
for j := 0; j<t.NumMethod(); j++{
m := t.Method(j)
fmt.Println(m.Name)
}
}
}
</code></pre>
<p>This snippet will print the list of methods on the pod item. Which one should I use?</p>
<p>Thanks for the answer</p>
| <p>The <code>yaml</code> is just a representation of the Pod object in the kubernetes internal storage in etcd. With your <code>client-go</code> what you have got is the <code>Pod</code> instance, of the type <code>v1.Pod</code>. So you should be able to work with this object itself and get whatever you want, for example <code>p.Labels()</code> etc. But if for some reason, you are insisting on getting a yaml, you can do that via:</p>
<pre><code>import (
"sigs.k8s.io/yaml"
)
b, err := yaml.Marshal(pod)
if err != nil {
// handle err
}
log.Printf("Yaml of the pod is: %q", string(b))
</code></pre>
<p>Note that the <code>yaml</code> library used here does not come from the <code>client-go</code> library. The documentation for the <code>yaml</code> library can be found at: <a href="https://pkg.go.dev/sigs.k8s.io/yaml#Marshal" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/yaml#Marshal</a></p>
<p>Instead of <code>yaml</code> if you want to use <code>json</code>, you can simply use the <code>Marshal</code> function <a href="https://pkg.go.dev/k8s.io/apiserver/pkg/apis/example/v1#Pod.Marshal" rel="nofollow noreferrer">https://pkg.go.dev/k8s.io/apiserver/pkg/apis/example/v1#Pod.Marshal</a> provided by the <code>v1.Pod</code> struct itself, like any other Go object.</p>
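<p>To turn that into one file per pod, a small sketch of the loop (it assumes the <code>fmt</code>, <code>log</code>, <code>os</code> and <code>sigs.k8s.io/yaml</code> imports, and Go 1.16+ for <code>os.WriteFile</code>):</p>
<pre><code>for _, pod := range pods.Items {
    b, err := yaml.Marshal(pod)
    if err != nil {
        log.Fatalf("marshalling pod %s: %v", pod.Name, err)
    }
    // one file per pod, named <namespace>_<name>.yaml
    fname := fmt.Sprintf("%s_%s.yaml", pod.Namespace, pod.Name)
    if err := os.WriteFile(fname, b, 0o644); err != nil {
        log.Fatalf("writing %s: %v", fname, err)
    }
}
</code></pre>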
|
<p>I am trying to create a k8s pod with a docker container image from a private insecure registry. With the latest K8s, I get ErrImagePull as it complains of http vs https for the insecure registry.</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7s default-scheduler Successfully assigned imagename to xxxx
  Normal   Pulling    7s    kubelet            Pulling image "registry:5000/imagename:v1"
  Warning  Failed     6s    kubelet            Failed to pull image "registry:5000/imagename:v1": rpc error: code = Unknown desc = failed to pull and unpack image "registry:5000/imagename:v1": failed to resolve reference "registry:5000/imagename:v1": failed to do request: Head "https://registry:5000/v2/imagename/manifests/v1": http: server gave HTTP response to HTTPS client
  Warning  Failed     6s    kubelet            Error: ErrImagePull
  Normal   BackOff    6s    kubelet            Back-off pulling image "registry:5000/imagename:v1"
Warning Failed 6s kubelet Error: ImagePullBackOff
</code></pre>
<p>Before the CRI changes for K8s (i.e. <a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/</a>), this has worked for me when I used to have insecure registry configuration in /etc/docker/daemon.json, however with the new changes in K8s, I am trying to understand what is the right configuration needed here.</p>
<p>On the same node, I am able to pull the image from the insecure registry successfully with "docker pull imagename" (since I have the /etc/docker/daemon.json configuration for the insecure registry), and I have also verified it with the containerd command "ctr -i pull --plain-http imagename".</p>
<p>What configuration is needed for this to work in a pod.yaml for me to pull this image via "kubectl create -f pod.yaml"? It's just a simple pod.yaml with the image, nothing fancy.</p>
<p>I saw a post on creating secret key for private registry (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a>), but that requires registry authentication token to create a key. I just tried using /etc/docker/daemon.json to create a regcred, but when I used it in imagePullSecrets in pod.yaml, k8s was still complaining of the same http vs https error.</p>
<p>My /etc/docker/daemon.json</p>
<pre><code>{
"insecure-registries": ["registry:5000"]
}
</code></pre>
<p>I have a new install of K8s, and containerd is the CRI.</p>
<p>Thank you for your help.</p>
| <p>I faced a similar problem recently about not being able to pull images from an insecure private docker registry using containerd only. I will post my solution here in case it works for your question too. Steps below show the details of how I solved it on Ubuntu Server 20.04 LTS:</p>
<pre><code>$ containerd --version
containerd containerd.io 1.6.4 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
</code></pre>
<p>insecure private docker registry running at 17.5.20.23:5000</p>
<p>The file <code>/etc/containerd/config.toml</code> gets created automatically when you install docker using <code>.deb</code> packages in ubuntu looks as follows:</p>
<pre><code># Copyright 2018-2022 Docker Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#disabled_plugins = ["cri"]
#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0
#[grpc]
# address = "/run/containerd/containerd.sock"
# uid = 0
# gid = 0
#[debug]
# address = "/run/containerd/debug.sock"
# uid = 0
# gid = 0
# level = "info"
</code></pre>
<p>In my first few attempts I was editing this file (which is created automatically) by simply adding the appropriate lines mentioned at <a href="https://stackoverflow.com/questions/65681045/adding-insecure-registry-in-containerd">Adding insecure registry in containerd</a> at the end of the file and restarting containerd. This made the file look as follows:</p>
<pre><code># Copyright 2018-2022 Docker Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#disabled_plugins = ["cri"]
#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0
#[grpc]
# address = "/run/containerd/containerd.sock"
# uid = 0
# gid = 0
#[debug]
# address = "/run/containerd/debug.sock"
# uid = 0
# gid = 0
# level = "info"
[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."17.5.20.23:5000"]
endpoint = ["http://17.5.20.23:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."17.5.20.23:5000".tls]
insecure_skip_verify = true
</code></pre>
<p>This did not work for me. To know why, I checked the configurations with which containerd was running (after <code>/etc/containerd/config.toml</code> was edited) using:</p>
<pre><code>$ sudo containerd config dump
</code></pre>
<p>The output of the above command is shown below:</p>
<pre><code>disabled_plugins = []
imports = ["/etc/containerd/config.toml"]
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.service.v1.tasks-service"]
rdt_config_file = ""
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
upperdir_label = false
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
</code></pre>
<p>In the above output I noticed that the configurations I was trying to add by editing the <code>/etc/containerd/config.toml</code> were actually not there. So somehow containerd was not accepting the added configurations. To fix this I decided to start from scratch by generating a full configuration file and editing it appropriately (according to instructions at <a href="https://stackoverflow.com/questions/65681045/adding-insecure-registry-in-containerd">Adding insecure registry in containerd</a>).</p>
<p>First took a backup of the current containerd configuration file:</p>
<pre><code>$ sudo su
$ cd /etc/containerd/
$ mv config.toml config_bkup.toml
</code></pre>
<p>Then generated a fresh full configuration file:</p>
<pre><code>$ containerd config default > config.toml
</code></pre>
<p>This generated a file that looked as follows:</p>
<pre><code>disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = false
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
enable_unprivileged_icmp = false
enable_unprivileged_ports = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "k8s.gcr.io/pause:3.6"
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
ip_pref = ""
max_conf_num = 1
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
ignore_rdt_not_enabled_errors = false
no_pivot = false
snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.service.v1.tasks-service"]
rdt_config_file = ""
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
upperdir_label = false
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
</code></pre>
<p>Then edited the above file to look as follows (the edited lines have been appended with the comment '# edited line'):</p>
<pre><code>disabled_plugins = []
imports = ["/etc/containerd/config.toml"] # edited line
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = false
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
enable_unprivileged_icmp = false
enable_unprivileged_ports = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "17.5.20.23:5000/pause-amd64:3.0" #edited line
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
ip_pref = ""
max_conf_num = 1
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
ignore_rdt_not_enabled_errors = false
no_pivot = false
snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = true # edited line
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."17.5.20.23:5000"] # edited line
[plugins."io.containerd.grpc.v1.cri".registry.configs."17.5.20.23:5000".tls] # edited line
ca_file = "" # edited line
cert_file = "" # edited line
insecure_skip_verify = true # edited line
key_file = "" # edited line
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."17.5.20.23:5000"] # edited line
endpoint = ["http://17.5.20.23:5000"] # edited line
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.service.v1.tasks-service"]
rdt_config_file = ""
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
upperdir_label = false
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
</code></pre>
<p>Then I restarted containerd</p>
<pre><code>$ systemctl restart containerd
</code></pre>
<p>Finally I tried pulling an image from the private registry using <code>crictl</code> which pulled it successfully:</p>
<pre><code>$ crictl -r unix:///var/run/containerd/containerd.sock pull 17.5.20.23:5000/nginx:latest
Image is up to date for sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
</code></pre>
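<p>As a quick end-to-end check (the pod name below is just illustrative), scheduling a pod that pulls from the private registry should now work without an <code>ImagePullBackOff</code>:</p>
<pre><code>kubectl run registry-test --image=17.5.20.23:5000/nginx:latest --restart=Never
kubectl get pod registry-test -o wide
</code></pre>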
|
<p>I have set up the <code>Nginx-Ingress</code> controller as per the documentation (<a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">Installation guide</a>) and followed the steps using the example provided. When I try to access the service using the <code>curl</code> command, I get a <code>400</code> Bad Request. When I look at the logs of the <code>nginx-ingress</code> pod, I am not seeing any error. I have attached the logs for reference. I am finding it difficult to troubleshoot the issue. Where could the problem be?</p>
<pre><code>fetch the pods from the nginx-ingress namespace
$ kubectl get po -n nginx-ingress
NAME READY STATUS RESTARTS AGE
coffee-7c45f487fd-965dq 1/1 Running 0 46m
coffee-7c45f487fd-bncz5 1/1 Running 0 46m
nginx-ingress-7f4b784f79-7k4q6 1/1 Running 0 48m
tea-7769bdf646-g559m 1/1 Running 0 46m
tea-7769bdf646-hlr5j 1/1 Running 0 46m
tea-7769bdf646-p5hp8 1/1 Running 0 46m
making the request. I have set up the DNS record in the /etc/hosts file
$ curl -vv http://cafe.example.com/coffee
GET /coffee HTTP/1.1
> Host: cafe.example.com
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Server: nginx/1.17.10
< Date: Mon, 11 May 2020 17:36:31 GMT
< Content-Type: text/html
< Content-Length: 158
< Connection: close
checking the logs after the curl request
$ kubectl logs -n nginx-ingress nginx-ingress-7f4b784f79-7k4q6
100.96.1.1 - - [11/May/2020:17:31:48 +0000] "PROXY TCP4 172.20.61.112 172.20.61.112 8340 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:31:51 +0000] "PROXY TCP4 172.20.81.142 172.20.81.142 40392 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:31:58 +0000] "PROXY TCP4 172.20.61.112 172.20.61.112 8348 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:01 +0000] "PROXY TCP4 172.20.81.142 172.20.81.142 40408 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:08 +0000] "PROXY TCP4 172.20.61.112 172.20.61.112 8360 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:11 +0000] "PROXY TCP4 172.20.81.142 172.20.81.142 40414 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:18 +0000] "PROXY TCP4 3.6.94.242 172.20.81.142 35790 80" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:18 +0000] "PROXY TCP4 172.20.61.112 172.20.61.112 8366 32579" 400 158 "-" "-" "-"
100.96.1.1 - - [11/May/2020:17:32:21 +0000] "PROXY TCP4 172.20.81.142 172.20.81.142 40422 32579" 400 158 "-" "-" "-"
</code></pre>
| <p>I was able to resolve the issue after adding the following annotations.</p>
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
</code></pre>
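<p>If the Ingress resource already exists, a sketch of applying the same annotations in place (the ingress name <code>cafe-ingress</code> is an assumption, replace it with yours):</p>
<pre><code>kubectl annotate ingress cafe-ingress \
  nginx.ingress.kubernetes.io/backend-protocol=HTTPS \
  nginx.ingress.kubernetes.io/ssl-redirect="true" \
  nginx.ingress.kubernetes.io/force-ssl-redirect="true" \
  --overwrite
</code></pre>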
|
<p>Trying to update the resources of my Deployment using <code>kubectl patch</code> command:</p>
<pre><code>kubectl patch statefulset test -n test --patch '{"spec": {"template": {"spec": {"containers": [{"resources": [{"limits": [{"cpu": "4000m","memory": "16Gi"}]},{"requests": [{"cpu": "3000m","memory": "13Gi"}]}]}]}}}}'
</code></pre>
<p>But getting the below error:</p>
<blockquote>
<p><strong>Error from server: map: map[resources:[map[limits:[map[cpu:4000m memory:16Gi]]] map[requests:[map[cpu:3000m memory:13Gi]]]]] does not contain declared merge key: name</strong></p>
</blockquote>
| <p>It needs to know which container you want to patch in the statefulset. You indicate this by including the name of the container.</p>
<p>Also, the json structure of your resources field is incorrect. See the example below for a complete working example:</p>
<p>(replace <strong>???</strong> with the name of the container you want patched)</p>
<pre><code>kubectl patch statefulset test -n test --patch '{"spec": {"template": {"spec": {"containers": [{"name": "???", "resources": {"limits": {"cpu": "4000m","memory": "16Gi"},"requests": {"cpu": "3000m","memory": "13Gi"}}}]}}}}'
</code></pre>
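<p>If the inline JSON gets hard to read, the same patch can also be kept in a file and applied with <code>--patch-file</code> (a sketch, assuming a reasonably recent kubectl; again replace <strong>???</strong> with the container name):</p>
<pre><code># patch.yaml
spec:
  template:
    spec:
      containers:
      - name: ???
        resources:
          limits:
            cpu: 4000m
            memory: 16Gi
          requests:
            cpu: 3000m
            memory: 13Gi
</code></pre>
<p>Then run <code>kubectl patch statefulset test -n test --patch-file patch.yaml</code>.</p>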
|
<p>I just created a new AKS cluster that has to replace an old cluster. The new cluster is now ready to replace the old one, except for one crucial thing, it's outbound ip address. The address of the old cluster must be used so that our existing DNS records do not have to change.</p>
<p><strong>How do I change the public IP address of the Azure load balancer (that is used by the nginx ingress controller) of the new cluster to the one used by the old cluster?</strong>
The old cluster is still running, I want to switch it off / delete it when the new cluster is available. Some down time needed to switch the ip address is acceptable.</p>
<p>I think that the ip first has to be deleted from the load balancer's Frontend IP configuration of the old cluster and can then be added to the Frontend IP configuration of the load balancer used in the new cluster. But I need to know exactly how to do this and what else need to be done if needed (maybe adding a backend pool?)</p>
<p><strong>Update</strong></p>
<p>During the installation of the new cluster I already added the public ip address of the load balancer of the old cluster in the yaml of the new ingress-nginx-controller.
The nginx controller load balancer in the new cluster is in the state <em>Pending</em> and continuously generating events with the message "Ensuring Load Balancer". Could it be as simple as assigning another IP address to the ingress-nginx-controller load balancer in the old cluster, so that the IP can be used in the new cluster?</p>
| <p>You have to create a static public IP address for the AKS cluster. Once you delete the old cluster, the public IP address and load balancer associated with it will be deleted as well. You can check and try this documentation[1] for a detailed guide.</p>
<p>[1] <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/static-ip</a></p>
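<p>A minimal sketch of how this usually looks (resource group and names are placeholders): reserve a static public IP with the Azure CLI, then point the ingress-nginx controller Service at it via <code>loadBalancerIP</code> and, if the IP lives outside the node resource group, the resource-group annotation:</p>
<pre><code>az network public-ip create \
  --resource-group myResourceGroup \
  --name myAKSPublicIP \
  --sku Standard \
  --allocation-method static

# in the ingress-nginx controller Service:
#   metadata:
#     annotations:
#       service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
#   spec:
#     loadBalancerIP: <the reserved IP>
</code></pre>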
|
<p>I am creating k8s cluster from digital ocean but every time I am getting same warning after I create cluster and open that cluster in lens ID.</p>
<p>Here is the screenshot of warning:</p>
<p><a href="https://i.stack.imgur.com/6lZvD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6lZvD.png" alt="screenshot" /></a></p>
<p>I tried every solution I found, but I still can't get rid of the error.</p>
| <p>Check first if <a href="https://github.com/k3s-io/k3s/issues/1857#issuecomment-950154218" rel="noreferrer"><code>k3s-io/k3s</code> issue 1857</a> could help:</p>
<blockquote>
<p>I was getting the same error when I installed kubernetes cluster via <code>kubeadm</code>.</p>
<p>After reading all the comments on the subject, I thought that the problem might be caused by <code>containerd</code> and the following two commands solved my problem, maybe it can help:</p>
<pre class="lang-bash prettyprint-override"><code>systemctl restart containerd
systemctl restart kubelet
</code></pre>
</blockquote>
<p>And:</p>
<blockquote>
<p>This will need to be fixed upstream. I suspect it will be fixed when we upgrade to containerd v1.6 with the cri-api v1 changes</p>
</blockquote>
<p>So checking the <code>containerd</code> version can be a clue.</p>
|
<p>I have a K8s job that spins up a pod with two containers. Those containers are client and a server. After the client did all it needs to, it sends a special stop signal to the service after which the service exits; then client exits. The job succeeds.</p>
<p>The client and the service containers use <a href="https://www.eclipse.org/jetty/documentation/jetty-9/index.html" rel="nofollow noreferrer">jetty</a> (see "Startup / Shutdown Command Line"), which has that signaling capability. I am looking for something more portable. It would be nice, to be able to send a SIGTERM from the client to the service, then the client would not need to use <code>jetty</code> for signaling. Is there a way to send SIGTERM from the client container to the server container. The client and the server processes are PID 1 in their respective containers.</p>
| <p>Yes, enable <code>shareProcessNamespace</code> on the pod, for example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: app
spec:
shareProcessNamespace: true
</code></pre>
<p>Your containers can now send signals to one another. They will no longer be PID 1, though.</p>
<p>Here are the docs that explain it all in detail:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/</a></p>
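<p>With the shared namespace in place, a minimal sketch of the client side (assuming a shell and <code>pgrep</code> are available in the client image, and that the server process can be matched by name):</p>
<pre><code># run inside the client container
kill -TERM "$(pgrep -f server-process-name)"
</code></pre>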
|
<p>I have a container where I used a bitnami/kubectl image.<br />
Now I want to run a few kubectl commands inside that container.</p>
<p>How is the kubectl container made aware of my kubeconfig file?<br />
I know that I can mount the local kubeconfig file into containers and use it.</p>
<p>But is there any other possible way to access the kubeconfig without mounting it as a volume?</p>
<p>I went through the documentation of RBAC in Kubernetes.<br />
Is configuring a Role and RoleBinding alone enough to run kubectl apply and kubectl delete commands successfully, even without mounting a kubeconfig file?</p>
<p>It would be really helpful if someone helps me with this.<br />
Thanks in advance!</p>
| <blockquote>
<p>Now I want to run a few kubectl commands inside that container.</p>
</blockquote>
<ul>
<li>Why do you need it inside the container?</li>
</ul>
<p><code>kubectl</code> is your CLI for "communicating" with the cluster: the commands are sent to the <code>kube-api</code> server, where they are parsed, validated (for example by an <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer"><code>Admission controller</code></a>) and then applied.</p>
<p>It is not clear why you need to run kubectl commands inside the container, since <code>kubectl</code> uses your <code>kubeconfig</code> file for the communication (it reads the certificate paths or embedded certificate data from it) and will be able to connect to your cluster.</p>
<hr />
<h3>How to run K8S API in your container?</h3>
<ul>
<li><p>The appropriate solution is to run an API query inside your container.</p>
</li>
<li><p>Every pod stores internally the <code>Token</code> & <code>ServiceAccount</code> which will allow you to query the API</p>
</li>
<li><p>Use the following script I'm using to query the API
<a href="https://github.com/nirgeier/KubernetesLabs/blob/master/Labs/21-KubeAPI/api_query.sh" rel="nofollow noreferrer">https://github.com/nirgeier/KubernetesLabs/blob/master/Labs/21-KubeAPI/api_query.sh</a></p>
</li>
</ul>
<pre class="lang-bash prettyprint-override"><code> #!/bin/sh
#################################
## Access the internal K8S API ##
#################################
# Point to the internal API server hostname
API_SERVER_URL=https://kubernetes.default.svc
# Path to ServiceAccount token
# The service account is mapped by the K8S API server in the pods
SERVICE_ACCOUNT_FOLDER=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace if required
# NAMESPACE=$(cat ${SERVICE_ACCOUNT_FOLDER}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICE_ACCOUNT_FOLDER}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICE_ACCOUNT_FOLDER}/ca.crt
# Explore the API with TOKEN and the Certificate
curl -X GET \
--cacert ${CACERT} \
--header "Authorization: Bearer ${TOKEN}" \
${API_SERVER_URL}/api
</code></pre>
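<p>If you do want plain <code>kubectl apply</code>/<code>kubectl delete</code> from inside the pod, a Role and RoleBinding for the pod's ServiceAccount are the missing piece; a minimal sketch (names and namespace are placeholders):</p>
<pre><code>kubectl create serviceaccount app-deployer -n my-namespace
kubectl create role deploy-manager -n my-namespace \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments
kubectl create rolebinding deploy-manager-binding -n my-namespace \
  --role=deploy-manager \
  --serviceaccount=my-namespace:app-deployer
</code></pre>
<p>When the pod runs with that ServiceAccount and the token is mounted, recent kubectl versions typically fall back to the in-cluster configuration automatically, so no kubeconfig mount should be needed.</p>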
|
<h1>Problem encountered</h1>
<p>When deploying a cluster with <strong>Kubespray</strong>, <strong>CRI-O</strong> and <strong>Cilium</strong> I get an error about having multiple CRI socket to choose from.</p>
<p><em>Full error</em></p>
<pre><code>fatal: [p3kubemaster1]: FAILED! => {"changed": true, "cmd": " mkdir -p /etc/kubernetes/external_kubeconfig && /usr/local/bin/kubeadm init phase kubeconfig admin --kubeconfig-dir /etc/kubernetes/external_kubeconfig --cert-dir /etc/kubernetes/ssl --apiserver-advertise-address 10.10.3.15 --apiserver-bind-port 6443 >/dev/null && cat /etc/kubernetes/external_kubeconfig/admin.conf && rm -rf /etc/kubernetes/external_kubeconfig ", "delta": "0:00:00.028808", "end": "2019-09-02 13:01:11.472480", "msg": "non-zero return code", "rc": 1, "start": "2019-09-02 13:01:11.443672", "stderr": "Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock", "stderr_lines": ["Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock"], "stdout": "", "stdout_lines": []}
</code></pre>
<p><em>Interesting part</em></p>
<pre><code>kubeadm init phase kubeconfig admin --kubeconfig-dir /etc/kubernetes/external_kubeconfig [...] >/dev/null,"stderr": "Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock"}
</code></pre>
<hr>
<h1>What I've tried</h1>
<ul>
<li>1) I've tried to set the <code>--cri-socket</code> flag inside <code>/var/lib/kubelet/kubeadm-flags.env</code>: </li>
</ul>
<pre><code>KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --cri-socket=/var/run/crio/crio.sock"
</code></pre>
<p>=> Makes no difference</p>
<ul>
<li>2) I've checked <code>/etc/kubernetes/kubeadm-config.yaml</code> but it already contains the following section :</li>
</ul>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.10.3.15
bindPort: 6443
certificateKey: 9063a1ccc9c5e926e02f245c06b8d9f2ff3xxxxxxxxxxxx
nodeRegistration:
name: p3kubemaster1
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
criSocket: /var/run/crio/crio.sock
</code></pre>
<p>=> Its already ending with the <code>criSocket</code> flag, so nothing to do...</p>
<ul>
<li>3) Tried to edit the ansible script to add the <code>--cri-socket</code> to the existing command but it fails with <code>Unknow command --cri-socket</code></li>
</ul>
<p>Existing : </p>
<pre><code>{% if kubeadm_version is version('v1.14.0', '>=') %}
init phase
</code></pre>
<p>Tried : </p>
<pre><code>{% if kubeadm_version is version('v1.14.0', '>=') %}
init phase --cri-socket /var/run/crio/crio.sock
</code></pre>
<hr>
<h1>Theories</h1>
<p>It seems that the problem comes from the command <code>kubeadm init phase</code>, which is not compatible with the <code>--cri-socket</code> flag... (see point 3) </p>
<p>Even though the correct socket is set (see point 2) using the config file, the <code>kubeadm init phase</code> is not using it. </p>
<p>Any ideas would be appreciated ;-)<br>
thx </p>
| <p>This worked for me for multiple cri sockets</p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
</code></pre>
<p>Image pull command before initialization for multiple cri:</p>
<pre><code>kubeadm config images pull --cri-socket=unix:///var/run/cri-dockerd.sock
</code></pre>
<hr />
<p>You can choose cri socket path from the following table. See original documentation <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime" rel="noreferrer">here</a></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Runtime</th>
<th>Path to Unix domain socket</th>
</tr>
</thead>
<tbody>
<tr>
<td>containerd</td>
<td><code>unix:///var/run/containerd/containerd.sock</code></td>
</tr>
<tr>
<td>CRI-O</td>
<td><code>unix:///var/run/crio/crio.sock</code></td>
</tr>
<tr>
<td>Docker Engine (using cri-dockerd)</td>
<td><code>unix:///var/run/cri-dockerd.sock</code></td>
</tr>
</tbody>
</table>
</div> |
<p>I am trying to mount a remote webdav:</p>
<p><code>sudo mount -t davfs https://files.isric.org/soilgrids/latest/data/ ~/webdav</code></p>
<p>But I only get the following error: <code>/sbin/mount.davfs: mounting failed; the server does not support WebDAV</code></p>
<p>This server is a <a href="https://github.com/mar10/wsgidav" rel="nofollow noreferrer">wsgidav</a> running in a kubernetes cluster.</p>
<p>Same problem with nautilus, using <code>gvfsd-dav</code> to debug the problem as indicated <a href="https://forum.manjaro.org/t/webdav-folder-will-not-mount-in-nautilus/109948/4" rel="nofollow noreferrer">here</a>. I've the following HTTP requests/responses from the server:</p>
<pre><code>/usr/libexec/gvfsd-dav ssl=true user=anonymous host=files.isric.org prefix=/soilgrids/latest/data/
dav: setting 'ssl' to 'true'
dav: setting 'user' to 'anonymous'
dav: setting 'host' to 'files.isric.org'
dav: setting 'prefix' to '/soilgrids/latest/data/'
dav: Added new job source 0x556590e0c1a0 (GVfsBackendDav)
dav: Queued new job 0x556590e0a380 (GVfsJobMount)
dav: + mount
> OPTIONS /soilgrids/latest/data HTTP/1.1
> Soup-Debug-Timestamp: 1657091152
> Soup-Debug: SoupSession 1 (0x556590e0c100), SoupMessage 1 (0x7fa1b40060e0), SoupSocket 1 (0x7fa1b43440e0)
> Host: files.isric.org
> Accept-Encoding: gzip, deflate
> User-Agent: gvfs/1.48.2
> Accept-Language: en-us, en;q=0.9
> Connection: Keep-Alive
< HTTP/1.1 204 No Content
< Soup-Debug-Timestamp: 1657091152
< Soup-Debug: SoupMessage 1 (0x7fa1b40060e0)
< Date: Wed, 06 Jul 2022 07:05:52 GMT
< Connection: keep-alive
< Strict-Transport-Security: max-age=15724800; includeSubDomains
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Methods: GET, OPTIONS, HEAD
< Access-Control-Allow-Headers: Content-Type, Accept-Ranges, Content-Range, Range, Content-Encoding, Content-Length, Access-Control-Allow-Origin
< Access-Control-Max-Age: 1728000
< Content-Length: 0
<
dav: send_reply(0x556590e0a380), failed=1 (Not a WebDAV enabled share)
dav: Mount failed: Not a WebDAV enabled share
</code></pre>
<p>The server HTTP response is <code>HTTP/1.1 204 No Content</code></p>
| <p>The problem lies in the k8s ingress, which has:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, OPTIONS, HEAD"
nginx.ingress.kubernetes.io/cors-allow-headers: "Content-Type, Accept-Ranges, Content-Range, Range, Content-Encoding, Content-Length, Access-Control-Allow-Origin"
</code></pre>
<p>The <code>HTTP/1.1 204 No Content</code> response is the correct <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204" rel="nofollow noreferrer">pre-flight CORS</a> response <strong>that is provided by the ingress</strong>; this is <strong>not the expected WebDAV response</strong>, and the first request was not even reaching the pod running wsgidav.</p>
<p>Solution: <strong>disable CORS support on the ingress</strong>, and then things are OK:</p>
<pre><code>/usr/libexec/gvfsd-dav ssl=true user=anonymous host=files.isric.org prefix=/soilgrids/latest/data/
> OPTIONS /soilgrids/latest/data HTTP/1.1
> Soup-Debug-Timestamp: 1657095042
> Soup-Debug: SoupSession 1 (0x55758f924100), SoupMessage 1 (0x7f01180060d0), SoupSocket 1 (0x7f0118342110)
> Host: dev-files.isric.org
> Accept-Encoding: gzip, deflate
> User-Agent: gvfs/1.48.2
> Accept-Language: en-us, en;q=0.9
> Connection: Keep-Alive
< HTTP/1.1 200 OK
< Soup-Debug-Timestamp: 1657095042
< Soup-Debug: SoupMessage 1 (0x7f01180060d0)
< Date: Wed, 06 Jul 2022 08:10:42 GMT
< Content-Type: text/html
< Content-Length: 0
< Connection: keep-alive
< DAV: 1
< Allow: OPTIONS, HEAD, GET, PROPFIND
< MS-Author-Via: DAV
< Strict-Transport-Security: max-age=15724800; includeSubDomains
<
</code></pre>
|
<p>When trying to use scaled objects it fails with the error</p>
<blockquote>
<p>Failed to create the scaledobject 'azure-monitor-scaler'. Error: (400) : ScaledObject is currently not yet supported in the portal.</p>
</blockquote>
<p>I am using the following code as per their documentation. Still, it seems it is not supported by the Azure portal.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: azure-monitor-secrets
data:
activeDirectoryClientId: test
activeDirectoryClientPassword: test123
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
name: azure-monitor-trigger-auth
spec:
secretTargetRef:
- parameter: activeDirectoryClientId
name: azure-monitor-secrets
key: activeDirectoryClientId
- parameter: activeDirectoryClientPassword
name: azure-monitor-secrets
key: activeDirectoryClientPassword
# or Pod Identity, kind: Secret is not required in case of pod Identity
podIdentity:
provider: azure
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: azure-monitor-scaler
spec:
scaleTargetRef:
name: sample-dep
minReplicaCount: 1
maxReplicaCount: 10
triggers:
- type: azure-monitor
metadata:
resourceURI: Microsoft.Network/applicationgateways/ag
tenantId: 22323-2321-2232-1212
subscriptionId: 2323232323232323
resourceGroupName: sample-rd
metricName: AvgRequestCountPerHealthyHost
metricFilter: BackendSettingsPool eq 'pool'
metricAggregationInterval: "0:0:10"
metricAggregationType: Average
targetValue: "10"
authenticationRef:
name: azure-monitor-trigger-auth
</code></pre>
| <p>It appears that the LA scaler is broken by changes in keda version 2.7.0.</p>
<p>You can try running an older version of KEDA with the LA scaler; it should work for you.</p>
<p>You can do it by running the following command: <code>helm install keda kedacore/keda --version 2.0.0 --namespace keda</code></p>
|
<p>I am trying to deploy an admission controller / mutating webhook</p>
<p>Image: <a href="https://hub.docker.com/layers/247126140/aagashe/label-webhook/1.2.0/images/sha256-acfe141ca782eb8699a3656a77df49a558a1b09989762dbf263a66732fd00910?context=repo" rel="nofollow noreferrer">https://hub.docker.com/layers/247126140/aagashe/label-webhook/1.2.0/images/sha256-acfe141ca782eb8699a3656a77df49a558a1b09989762dbf263a66732fd00910?context=repo</a></p>
<p>Steps are executed in the below order.</p>
<ol>
<li>Created the ca-csr.json and ca-config.json as per below
<strong>ca-config.json</strong></li>
</ol>
<pre><code>{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"default": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "175200h"
}
}
}
}
</code></pre>
<p><strong>ca-csr.json</strong></p>
<pre><code>{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"default": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "175200h"
}
}
}
}
</code></pre>
<p>Create a docker container and run commands one after the other as below:</p>
<pre><code>docker run -it --rm -v ${PWD}:/work -w /work debian bash
apt-get update && apt-get install -y curl &&
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o /usr/local/bin/cfssl && \
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o /usr/local/bin/cfssljson && \
chmod +x /usr/local/bin/cfssl && \
chmod +x /usr/local/bin/cfssljson
cfssl gencert -initca ca-csr.json | cfssljson -bare /tmp/ca
cfssl gencert \
-ca=/tmp/ca.pem \
-ca-key=/tmp/ca-key.pem \
-config=ca-config.json \
-hostname="label-webhook,label-webhook.default.svc.cluster.local,label-webhook.default.svc,localhost,127.0.0.1" \
-profile=default \
ca-csr.json | cfssljson -bare /tmp/label-webhook
ca_pem_b64="$(openssl base64 -A <"/tmp/ca.pem")"
ls -alrth /tmp/
total 32K
drwxr-xr-x 1 root root 4.0K Jul 5 05:07 ..
-rw-r--r-- 1 root root 2.0K Jul 5 05:13 ca.pem
-rw-r--r-- 1 root root 1.8K Jul 5 05:13 ca.csr
-rw------- 1 root root 3.2K Jul 5 05:13 ca-key.pem
-rw-r--r-- 1 root root 2.2K Jul 5 05:17 label-webhook.pem
-rw-r--r-- 1 root root 1.9K Jul 5 05:17 label-webhook.csr
-rw------- 1 root root 3.2K Jul 5 05:17 label-webhook-key.pem
drwxrwxrwt 1 root root 4.0K Jul 5 05:17 .
cp -apvf /tmp/* .
'/tmp/ca-key.pem' -> './ca-key.pem'
'/tmp/ca.csr' -> './ca.csr'
'/tmp/ca.pem' -> './ca.pem'
'/tmp/label-webhook-key.pem' -> './label-webhook-key.pem'
'/tmp/label-webhook.csr' -> './label-webhook.csr'
'/tmp/label-webhook.pem' -> './label-webhook.pem'
pwd
/work
export ca_pem_b64="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZqakNDQTNhZ0F3SUJBZ0lVVVVCSHcvTUlPak5IVjE1ZHBhMytFb0RtTlE4d0RRWUpLb1pJaHZjTkFRRU4KQlFBd1h6RUxNQWtHQTFVRUJoTUNRVlV4RlRBVEJnTlZCQWdUREdOaFlYTXRaR1YyTFdOaFl6RVNNQkFHQTFVRQpCeE1KVFdWc1ltOTFjbTVsTVF3d0NnWURWUVFLRXdOUVYwTXhGekFWQmdOVkJBc1REa05OVXlCWGIzSnJjM1J5ClpXRnRNQjRYRFRJeU1EY3dOVEExTURnd01Gb1hEVEkzTURjd05EQTFNRGd3TUZvd1h6RUxNQWtHQTFVRUJoTUMKUVZVeEZUQVRCZ05WQkFnVERHTmhZWE10WkdWMkxXTmhZekVTTUJBR0ExVUVCeE1KVFdWc1ltOTFjbTVsTVF3dwpDZ1lEVlFRS0V3TlFWME14RnpBVkJnTlZCQXNURGtOTlV5QlhiM0pyYzNSeVpXRnRNSUlDSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FnOEFNSUlDQ2dLQ0FnRUF1Vmxyd3lLSE5QMVllZUY5MktZMG02YXc0VUhBMEtac0JyNUkKeEZaWnVtM3ZzSHV3eXFBa3BjZHpibWhqSmVGcTZXYXJXUUNSTGxoU1ZRaVcxUnJkOXpxMWVndVZRYjJmN0w1cApkbGFteGZ4UGhSc3RodTZscXVCOC9XbWo3RVVEVnBMMkx3bHJNUm1tOWhrYWxSSUN6cXRLa1Y2MDFJMG9KMEd6ClN4SUFPSnRBS3VxamtuTWtnOTNTVit0WEdVamxLOTFzbGZ3V2Z5UUtjVVZWU1dxUVZiUEdxcjFIblZzeU5TcGYKTERFZGRFRVBNSUZLM3U2eWg3M3R3ME1SR3RyQ0RWSEdyR2xtd0xrZDZENjhzdHJCWWhEdnVVU2NRMG5Wb2VxaQowbVRESENCQ0x3UVptd2piR1UyYzhrMklVMGRkaGUvM2dYb2ErZkxSL3c4RHFPUldFKzVPREFxNFp1aklRQ01WCkdWSVJzdERwRmZscFdvQ0t1RnFDMUk2bFJPcFVJYi9ER0xyV29oeWdRYmxmcFlZd0JkbWtqVWhXaHpOL0N4MTcKeDR2WFM3a0NjVDJDVDZqR0NoUVlZTGRPL2lsTCtFMEhJWE9oRUVWbVZhaTcrUW5qRXVmeTEyUGlHQUEyWnc2dwp6NmpYVjJab1NXQUgxZ0xGSTYxTGRNQTE1Y084RTJERkFHMXdOUmM0TndJYUNmejNQMDRBUzFwbk5yRW5xNE1XCkVqM2ZUSGU4MWlRTTBuMnZ6VlltUDVBcEFwa2JNeUQrRU9ENWxnWXlFa1dTNVpON2RlVWZ5QURZSVQvMFR0USsKQTFzbk94K1RnT0lnTGxnY0xrMWllVnhHNHBLOTJqTWpWMjBGb0RDUmM1SHZCWHZrMWYvSWN2VDhDOENDRXJISwpJWkptdGFrQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0VHTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3CkhRWURWUjBPQkJZRUZQMjJrRm4rZWlOcFJHMkU0VkhoVGlRdFo0TmlNQTBHQ1NxR1NJYjNEUUVCRFFVQUE0SUMKQVFBTlRHSEhCODFwaWxwVnBsamZvVjY3ZTlhWTJFaUNudkRRSmdTWTBnZ0JOY3ZzblVMaFRpKytWZ25qZ0Q5YwpCOGMvQkU1QU0vWGdWVHE3UXpiUS92REhLbE4xbjRMbXdzWWxJc1RTWGhDZCtCTFlLeGEyTlJsVXZHR3h2OWZFCnZTVVpvcDk4MEtiMExlQU5lZ0FuOHlldnRTZ2hRdC9GNkcrVENOWk5GS25ZZFFKenp2ejFXNk1VOURPL0J4cGMKVWovTTZSMFhaeHdJOE5hR281MGRQUzZTVFNTcUdCQ3VIbUEyRDRrUCtWdHZIdVZoS2Izd3pXMVVPL1dCcTBGLwpKU3o2and4c05OUU8vOVN4SXNNOVRMWFY5UjkvNThSTEl1Y3ZObDFUODd2dzd5ZGp0S0c3YUR3N1lxSXplODN0ClF1WW1NQlY3Y0k2STdSRi9RVHhLVUdGbXJ6K3lDTHZzNjViVjJPdThxUm5ocUhTV3kwbkNjakYwR2h6L09hblIKdDFNWWNKTytpQzJBR09adVlGRnJsbUk0cWlCUHBJc204YmxDVGRoT1FhLzI2RTJWQzJXQk9SQmVrU2VWY3ZzUgpQaXFWMkRzV2I3ODc5UzEwWS9lOVQ2WUhIc3Z4TDVjZnhibVBsRDF2dlR0TmI2TjJiYTYyWmZVVlEvU3E3ZmEwClhKbUtpQ2pLbU9oMVhKRm5ZRmpRb25kMUFSNUVzR0drblI5NW5QN0x5ejd5RmpHMjFndkJHSU0xbHV0alg5aW8KVkdpMjlHdFA4THVQait6TDNsMElUTEZqb0RCOVBiNXFFbjR4MGpqMHlHc09kdFQ0ZGYvSGVja2ZHV0xmNkZEawp5ZmNuMTlRWDB0NXl6YklZVG9qRFV3VXlEUFZDYW44Y0JkdWdCNGptZkNjV2pRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
helm upgrade --install rel-label-webhook chart --namespace mutatingwebhook --create-namespace --set secret.cert=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.csr | base64 -w0) --set secret.key=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.pem | base64 -w0) --set secret.cabundle=$(echo "${ca_pem_b64}"|base64 -w0)
</code></pre>
<p>I get an error like the one below when I check the pod status and logs:</p>
<pre><code>k get all
NAME READY STATUS RESTARTS AGE
pod/rel-label-webhook-5575b849dc-d62np 0/1 CrashLoopBackOff 2 (20s ago) 48s
pod/rel-label-webhook-5575b849dc-gg94h 0/1 Error 3 (35s ago) 63s
pod/rel-label-webhook-5575b849dc-zcvc9 0/1 CrashLoopBackOff 2 (19s ago) 48s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/rel-label-webhook ClusterIP 10.0.135.138 <none> 8001/TCP 63s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/rel-label-webhook 0/3 3 0 64s
NAME DESIRED CURRENT READY AGE
replicaset.apps/rel-label-webhook-5575b849dc 3 3 0 64s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/rel-label-webhook Deployment/rel-label-webhook <unknown>/80%, <unknown>/80% 3 8 3 64s
k logs -f pod/rel-label-webhook-5575b849dc-gg94h
time="2022-07-05T13:37:45Z" level=info msg="listening on :8001"
error: error serving webhook: tls: failed to find "CERTIFICATE" PEM block in certificate input after skipping PEM blocks of the following types: [CERTIFICATE REQUEST]
</code></pre>
<p>What am I doing wrong here?</p>
<p>P.S:</p>
<p><strong>Edit 1.</strong></p>
<p>I tried it as per larsks' answer, but now I am getting a new error!</p>
<p><strong>Command</strong></p>
<pre><code>azureuser@ubuntuvm:~/container-label-webhook$ helm upgrade --install rel-label-webhook chart --namespace mutatingwebhook --create-namespace --set secret.cert=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.pem | base64 -w0) --set secret.key=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.pem | base64 -w0) --set secret.cabundle="echo "${ca_pem_b64}"|base64 -w0"
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>azureuser@ubuntuvm:~/container-label-webhook$ k logs -f pod/rel-label-webhook-5575b849dc-44xrn
time="2022-07-06T02:41:12Z" level=info msg="listening on :8001"
error: error serving webhook: tls: found a certificate rather than a key in the PEM for the private key
</code></pre>
| <p>The error seems pretty clear: the code is looking for a <code>CERTIFICATE</code> block in a PEM-encoded file, but it is only find a <code>CERTIFICATE REQUEST</code> block. It looks like you're passing a certificate signing request (csr) where the code expects to find an actual SSL certificate. And in fact, looking at your <code>helm upgrade</code> command, that's exactly what you're doing:</p>
<pre><code>helm upgrade ... \
--set secret.cert=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.csr | base64 -w0) ...
</code></pre>
<p>You should use <code>label-webhook.pem</code> here instead of <code>label-webhook.csr</code>.</p>
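<p>For the follow-up error in the edit ("found a certificate rather than a key"), the private key has to come from <code>label-webhook-key.pem</code>, not from the certificate again. A sketch of a corrected invocation (note that <code>ca_pem_b64</code> is already base64-encoded, so depending on what the chart expects it may not need a second encoding):</p>
<pre><code>helm upgrade --install rel-label-webhook chart --namespace mutatingwebhook --create-namespace \
  --set secret.cert=$(base64 -w0 < kubernetes/admissioncontrollers/introduction/tls/label-webhook.pem) \
  --set secret.key=$(base64 -w0 < kubernetes/admissioncontrollers/introduction/tls/label-webhook-key.pem) \
  --set secret.cabundle="${ca_pem_b64}"
</code></pre>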
|
<p>In this case, I have multiple Kubernetes clusters and want to work on different clusters at the same time. (Will keep it as 2 clusters to make it simple)</p>
<p>As described in <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">Kubernetes documentation</a> I have configured two clusters (will call them dc1-main and dc2-main)</p>
<p>I'm logging into a node where kubectl is, with an application support user (e.g. appuser)</p>
<p>At the same time on two sessions to the management server I logged in with appuser.</p>
<p>In this case, I want to use kubectl to manage one context on each session.</p>
<p>But, if I set the active context as below, both sessions to the server reflect get the change as both are referring to the the same config file (which has both contexts)</p>
<pre><code>kubectl config use-context dc1-main
</code></pre>
<p>The other option in the document is to pass the context to the command as an argument, which makes the command quite complicated.</p>
<pre><code>kubectl --context="dc2-main" get nodes
</code></pre>
<p>I'm looking for an easy way to quickly switch the context without affecting the other session, most likely via an environment variable. Not so sure if this is the easiest way, though.</p>
<p>I went through the kubectl project on GitHub and found that a change for something similar to this, involving environment variables, was requested a long time ago.</p>
<p>Any better suggestions?</p>
| <p>The standard Kubernetes client libraries support a <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#the-kubeconfig-environment-variable" rel="noreferrer"><code>$KUBECONFIG</code> environment variable</a>. This means that pretty much every tool supports it, including Helm and any locally-built tools you have. You can set this to a path to a cluster-specific configuration. Since it's an environment variable, each shell will have its own copy of it.</p>
<pre class="lang-bash prettyprint-override"><code>export KUBECONFIG="$HOME/.kube/dc1-main.config"
kubectl get nodes
</code></pre>
<p>In your shell dotfiles, you can write a simple shell function to set this</p>
<pre class="lang-bash prettyprint-override"><code>kubecfg() {
export KUBECONFIG="$HOME/.kube/$1.config"
}
</code></pre>
<p>In my use I only have one context (user/host/credentials) in each kubeconfig file, so I pretty much never use the <code>kubectl config</code> family of commands. This does mean that, however you set up the kubeconfig file initially, you either need to repeat those steps for each cluster or split out your existing kubeconfig file by hand (it's YAML so it's fairly doable).</p>
<pre class="lang-bash prettyprint-override"><code># specifically for Amazon Elastic Kubernetes Service
kubecfg dc1-main
aws eks update-kubeconfig --name dc1-main ...
kubecfg dc2-main
aws eks update-kubeconfig --name dc2-main ...
</code></pre>
<p>Tools that want to write the configuration also use this variable, which for me mostly comes up if I want to recreate my minikube environment. You may find it useful to <code>chmod 0400 "$KUBECONFIG"</code> to protect these files once you've created them.</p>
|
<p>My aspired tekton usecase is simple:</p>
<ul>
<li>successful pipelineruns should be removed after x days</li>
<li>failed pipelineruns shouldn't be removed automatically.</li>
</ul>
<p>I plan to do the cleanup in an initial cleanup-task. That seems better to me than annotation- or cronjob-approaches. As long as nothing new is built, nothing has to be deleted.</p>
<p>Direct approaches:</p>
<ul>
<li><strong>Failed</strong>: <code>tkn delete</code> doesn't seem very helpful because it doesn't discriminate between successful or not.</li>
<li><strong>Failed</strong>: <code>oc delete --field-selector ...</code> doesn't contain the well hidden but highly expressive field <code>status.conditions[0].type==Succeeded</code></li>
</ul>
<p>Indirect approaches (first filtering a list of podnames and then delete them - not elegant at all):</p>
<ul>
<li><strong>Failed</strong>: Filtering output with <code>-o=jsonpath...</code> seems costly and the condition-array seems to break the statement, so that (why ever?!) everything is returned... not viable</li>
<li>My last attempt is <code>tkn pipelineruns list --show-managed-fields</code> and parse this with sed/awk... which is gross... but at least it does what I want it to do... and quite efficiently at that. But it might result as brittle when the design of the output is going to change in future releases...</li>
</ul>
<p>Do you have any better, more elegant approaches?
Thanks a lot!</p>
| <p>Until a better solution is there, I'll post my current solution (and its drawbacks):</p>
<p>Our cleanup-task is now built around the following solution, evaluating the table returned by <code>tkn pipelineruns list</code>:</p>
<pre><code>tkn pipelineruns list --show-managed-fields -n e-dodo-tmgr --label tekton.dev/pipeline=deploy-pipeline | awk '$6~/Succeeded/ && $3~/day|week|month/ {print $1}'
</code></pre>
<p>Advantages:</p>
<ul>
<li>It does what it should without extensive calls or additional calculation.</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li>Time is limited to "older than an hour / a day / a week ..." But that's acceptable, since only successful builds are concerned.</li>
<li>I guess the design is quite brittle, because with changes in the tkn-Client the format of the table might change which implies that awk will pick the wrong columns, or similar pattern-probs.</li>
</ul>
<p>All in all I hope the solution will hold until there are some more helpful client-features that make the desired info directly filterable. Actually I'd hope for something like <code>tkn pipelineruns delete --state successful --period P1D</code>.
The notation for the time period is from ISO8601.</p>
|
<p>In a Kubernetes cluster on Oracle cloud, I have a pod with an Apache server.</p>
<p>This pod needs a persistent volume so I used a persistentVolumeClaim and the cloud provider is able to automatically create an associated volume (Oracle Block Volume).</p>
<p>The access mode used by the PVC is readWriteOnce and therefore the volume created has the same access mode.</p>
<p>Everything work great.</p>
<p>Now I want to backup this volume using borg backup and borgmatic by starting a new pod regularly with a cronJob.</p>
<p>This backup pod needs to mount the volume in read only.</p>
<p>Question:</p>
<ul>
<li>Can I use the previously defined PVC?</li>
<li>Do I need to create a new PVC with readOnly access mode?</li>
</ul>
| <p>As per documentation:</p>
<blockquote>
<p>ReadWriteOnce:
the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.</p>
</blockquote>
<p>That means if you make a strict rule for deploying your pods to the same node, you can use the same PVC, here's the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">INSTRUCTION</a></p>
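<p>A minimal sketch of the backup pod's spec reusing the existing claim read-only (names and image are placeholders); the same-node scheduling rule mentioned above is what makes this valid for a ReadWriteOnce volume:</p>
<pre><code>spec:
  volumes:
    - name: apache-data
      persistentVolumeClaim:
        claimName: <existing-claim>
        readOnly: true
  containers:
    - name: borgmatic
      image: <your-borgmatic-image>
      volumeMounts:
        - name: apache-data
          mountPath: /source
          readOnly: true
</code></pre>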
|
<p>My AKS cluster and storage account are in the same Region: East US 2.
I have created a secret:</p>
<pre><code>$ kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=fastorage --from-literal=azurestorageaccountkey='OWd7e9Ug'
secret/fa-fileshare-secret created
</code></pre>
<p>In that storage account I have file share: <code>containershare</code></p>
<p>I have checked the secret's configuration, and the values match for the account name and key (as suggested in similar questions, which did not help me).
I think the VNETs for the storage account and the AKS cluster are different, and the subscription and resource group are different as well (if relevant).</p>
<p>When I try to execute deployment for my app, I am getting:</p>
<pre><code> Mounting arguments: -t cifs -o actimeo=30,mfsymlinks,file_mode=0777,dir_mode=0777,
<masked> //fastorage.file.core.windows.net/containershare
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/#fa-fileshare-secret#containershare#ads-volume#default/globalmount
Output: mount error(13): Permission denied
</code></pre>
<p>In <code>deployment.yaml</code> definition:</p>
<pre><code>........
volumes:
- name: ads-volume
azureFile:
secretName: fa-fileshare-secret
shareName: containershare
readOnly: false
............
</code></pre>
<p>What could be the problem (since a different region and wrong credentials are not the issue)? I am accessing the cluster through kubectl from a remote Windows machine.</p>
| <p>Thank you <a href="https://stackoverflow.com/users/12990185/andreys">AndreyS</a> for confirming that you resolved your issue. Here are a few more details that can help identify the cause of your issue.</p>
<p>As per the Microsoft <a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#akssubnetnotallowed" rel="nofollow noreferrer">documentation</a>, here are the possible causes for the error <strong><code>Mount error(13): Permission denied</code></strong>:</p>
<blockquote>
<ul>
<li><a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#secretnotusecorrectstorageaccountkey" rel="nofollow noreferrer">Cause 1: Kubernetes secret doesn't reference the correct storage account name or
key</a></li>
<li><a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#akssubnetnotallowed" rel="nofollow noreferrer">Cause 2: AKS's VNET and subnet aren't allowed for the storage account</a></li>
<li><a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#aksnotawareprivateipaddress" rel="nofollow noreferrer">Cause 3: Connectivity is via a private link but nodes and the private endpoint are in different
VNETs</a></li>
</ul>
</blockquote>
<p>To mount the storage file share from the AKS cluster (pod), you should ideally deploy both resources in the same resource group and region, and make sure both resources are in the same VNET. If they are not, and access in the storage account's networking settings is set to "Selected networks", you have to allow access from your AKS VNET: check whether the VNET and subnet of the AKS cluster have been added.</p>
<p><a href="https://i.stack.imgur.com/ubEdj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ubEdj.png" alt="enter image description here" /></a></p>
<p>It may take a few moments for the changes to take effect. After the VNET and subnet are added, check whether the pod status changes from ContainerCreating to Running and whether the file share is mounted as well.</p>
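<p>A sketch of adding the AKS subnet to the storage account's network rules with the Azure CLI (resource group, VNET and subnet names are placeholders); the subnet may also need the <code>Microsoft.Storage</code> service endpoint enabled:</p>
<pre><code>az storage account network-rule add \
  --resource-group myStorageResourceGroup \
  --account-name fastorage \
  --subnet $(az network vnet subnet show \
      --resource-group myAksVnetResourceGroup \
      --vnet-name myAksVnet \
      --name myAksSubnet \
      --query id -o tsv)
</code></pre>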
|
<p>I'm trying to use client-go informers to get the replica count on deployments. Whenever autoscaling changes the number of replicas, I need to retrieve this in order to handle some other logic. I was previously using the Watch() function, but there are a few inconsistencies with timeouts and connection drops.</p>
<p>The following code shows an example of the implementation:</p>
<pre><code>labelOptions := informers.WithTweakListOptions(func(opts *v1.ListOptions) {
opts.FieldSelector = "metadata.name=" + name
})
factory := informers.NewSharedInformerFactoryWithOptions(clientSet, 2*time.Second, informers.WithNamespace(namespace), labelOptions)
informer := factory.Apps().V1().Deployments().Informer()
// Using the channels and goroutines below didn't show changes:
stopper := make(chan struct{})
defer close(stopper)
//go func() {
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
mObj, ok := obj.(*appsv1.Deployment)
if !ok {
panic(spew.Sdump("informer returned invalid type", mObj))
}
replicas := int(*mObj.Spec.Replicas)
logger.Infof("updating replicas to %d", replicas)
sendUpdates() // use updates elsewhere
},
UpdateFunc: func(oldObj, newObj interface{}) {
old, ok := oldObj.(*appsv1.Deployment)
if !ok {
panic(spew.Sdump("informer returned invalid type", old))
}
newDeployment, ok := newObj.(*appsv1.Deployment)
if !ok {
panic(spew.Sdump("informer returned invalid type", newDeployment))
}
oldReplicas := int(*old.Spec.Replicas)
newReplicas := int(*newDeployment.Spec.Replicas)
if oldReplicas != newReplicas {
sendUpdates()
}
},
})
//factory.Start(wait.NeverStop)
//factory.WaitForCacheSync(wait.NeverStop)
informer.Run(stopper)
</code></pre>
<p>When Kubernetes autoscales or I change the Deployment's replicas manually, I get <code>deployment.apps/app scaled</code>, but it doesn't get caught by the Informer. Nothing gets printed in the logs and it enters a crash loop with no error message.</p>
<p>I used the following resources:</p>
<ul>
<li><a href="https://dev.to/davidsbond/go-creating-dynamic-kubernetes-informers-1npi" rel="nofollow noreferrer">https://dev.to/davidsbond/go-creating-dynamic-kubernetes-informers-1npi</a></li>
<li><a href="https://stackoverflow.com/questions/53200785/whats-the-best-way-to-get-notified-when-kubernetes-deployments-change-using-the">What's the best way to get notified when kubernetes Deployments change using the k8s.io/client-go library?</a></li>
<li><a href="https://stackoverflow.com/questions/47205516/kubernetes-filter-objects-in-informer">kubernetes filter objects in Informer</a></li>
</ul>
| <p>A few things to note:</p>
<ul>
<li>Before calling <code>informerFactory.Start()</code>, ensure that the Informer is created directly (<code>informer := factory.Apps().V1().Deployments().Informer()</code>), or <code>Start()</code> won't start anything.</li>
<li>Using a goroutine to start the SharedInformerFactory is meaningless because <code>informerFactory.Start()</code> uses one internally.</li>
<li>It will also prevent the <code>informerFactory.WaitForCacheSync()</code> method from working, resulting in it getting the wrong data for started informers.</li>
</ul>
<pre><code>labelOptions := informers.WithTweakListOptions(func(opts *v1.ListOptions) {
opts.FieldSelector = "metadata.name=" + name
})
factory := informers.NewSharedInformerFactoryWithOptions(clientSet, 2*time.Second, informers.WithNamespace(namespace), labelOptions)
informer := factory.Apps().V1().Deployments().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
mObj, ok := obj.(*appsv1.Deployment)
if !ok {
doSomething()
}
replicas := int(*mObj.Spec.Replicas)
doSomething()
},
UpdateFunc: func(oldObj, newObj interface{}) {
old, ok := oldObj.(*appsv1.Deployment)
if !ok {
doSomething()
}
newDeployment, ok := newObj.(*appsv1.Deployment)
if !ok {
doSomething()
}
oldReplicas := int(*old.Spec.Replicas)
newReplicas := int(*newDeployment.Spec.Replicas)
if oldReplicas != newReplicas {
doSomething()
}
},
})
// Initializes all active informers and starts the internal goroutine
factory.Start(wait.NeverStop)
factory.WaitForCacheSync(wait.NeverStop)
</code></pre>
|
<p>How can I tell with <code>kubectl</code> how much <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#local-ephemeral-storage" rel="noreferrer">ephemeral storage</a> a pod is currently using?</p>
<p>In a Kubernetes pod spec, I can specify resource requests and limits for CPU, memory, and ephemeral storage:</p>
<pre><code>resources:
requests:
memory: "60Mi"
cpu: "70m"
ephemeral-storage: "2Gi"
limits:
memory: "65Mi"
cpu: "75m"
ephemeral-storage: "4Gi"
</code></pre>
<p>However, to set good requests and limits on ephemeral storage, I need to know what this value actually is for a running pod, which I can't figure out. I can get CPU and memory usage using <code>kubectl top pod</code>, but, from what I can tell, <a href="https://github.com/kubernetes/kubernetes/blob/30a06af453059ffe636f483b7db914d51757139a/pkg/kubelet/eviction/eviction_manager.go#L503" rel="noreferrer">ephemeral storage usage is only actually calculated when making an actual eviction decision</a>.</p>
| <p>You can do this through the raw command.</p>
<p><code>kubectl get --raw "/api/v1/nodes/(your-node-name)/proxy/stats/summary"</code></p>
<p>There is also this</p>
<p><code>kubectl get --raw "/api/v1/nodes/(your-node-name)/proxy/metrics/cadvisor"</code></p>
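<p>To pull just the per-pod ephemeral-storage numbers out of the summary, something like this can work (a sketch; it assumes <code>jq</code> is installed and that your kubelet reports the <code>ephemeral-storage</code> section in the summary):</p>
<pre><code>kubectl get --raw "/api/v1/nodes/(your-node-name)/proxy/stats/summary" \
  | jq '.pods[] | {pod: .podRef.name, usedBytes: .["ephemeral-storage"].usedBytes}'
</code></pre>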
<p>EDIT:</p>
<p>I've created a Prometheus exporter for this now.</p>
<p><a href="https://github.com/jmcgrath207/k8s-ephemeral-storage-metrics" rel="noreferrer">https://github.com/jmcgrath207/k8s-ephemeral-storage-metrics</a></p>
|
<p>I'm running a mongodb instance as a kubernetes pod in a single node cluster (bare metal ubuntu machine). The volume is configured <code>ReadWriteOnce</code> as the mongodb pod is accessed only by pods in one node.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy:
type: Recreate
template:
metadata:
labels:
app: mongo
spec:
hostname: mongo
volumes:
- name: data
persistentVolumeClaim:
claimName: data
- name: restore
persistentVolumeClaim:
claimName: restore
containers:
- name: mongo
image: mongo:4.4.14
args: ["--auth"]
imagePullPolicy: IfNotPresent
ports:
- containerPort: 27017
volumeMounts:
- mountPath: /data/db
name: data
- mountPath: /restore
name: restore
</code></pre>
<p>But from time to time I cannot run commands like inserting documents into a non-existing collection or running mongodump.
Then I do get the error <code>MongoServerError: 1: Operation not permitted</code>. This is caused by a <code>chown</code> problem: <code>ls -ld /data/db/</code> is returning</p>
<pre><code>drwxr-sr-x 4 nobody 4294967294 16384 Jun 28 18:19 /data/db/
</code></pre>
<p>I can fix the problem by running</p>
<pre><code>chown mongodb:mongodb /data/db
</code></pre>
<p>But after some time it changes again, so the same problem happens again and I have to rerun the <code>chown mongodb:mongodb /data/db</code></p>
<p>I tried to set</p>
<pre><code>securityContext:
runAsUser: 1000
fsGroup: 2000
</code></pre>
<p>But then the mongodb pod is failing:</p>
<pre><code>{"t":{"$date":"2022-07-03T10:09:24.379+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-07-03T10:09:24.383+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongodb"}}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.14","gitVersion":"0b0843af97c3ec9d2c0995152d96d2aad725aab7","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"security":{"authorization":"enabled"}}}}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db"}}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
</code></pre>
<p>I also tried</p>
<pre><code> initContainers:
- name: mongodb-init
image: mongo:4.4.14
command: ["sh", "-c", "chown 999:999 /data/db"]
volumeMounts:
- name: data
mountPath: /data/db
</code></pre>
<p>But after some time I again get:</p>
<pre><code>drwxr-sr-x 4 nobody 4294967294 16384 Jun 28 18:19 /data/db/
</code></pre>
<p>I do not understand what is causing this behaviour.</p>
| <p>If you set <code>securityContext</code> then according to the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000.</p>
</blockquote>
<p>so there is no risk of a mismatch between the different processes of the container.</p>
<p>Perhaps you could give (although not recommended) root user access to the pod by setting <code>runAsUser</code> to <code>0</code>? Just to see if security context can fix the issue. It should.</p>
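<p>If staying non-root is a requirement, a sketch that may work with the official mongo image (which the initContainer in the question already assumes runs as uid/gid 999) is to align the pod-level securityContext with that id, so the volume ownership stays consistent with the mongod process:</p>
<pre><code>securityContext:
  runAsUser: 999
  runAsGroup: 999
  fsGroup: 999
</code></pre>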
|
<p>I experienced slow response of any kubectl command at my test environment.
I don't have this issue in the production and at minikube in my dev.
For example I run:</p>
<pre><code>kubectl get deployments
</code></pre>
<p>And I have to wait up to a minute. If I break with <em>Ctrl + C</em> and start again, sometimes it responds immediately. And when I repeat the command, it is delayed again.</p>
<p>Do you have any tips on where I should dig in?
Could it be a memory issue? Or rather a network issue?
How can I monitor them?
I'm in an SSH session to one of the masters.</p>
| <p>I also had slow responses to kubectl commands in one of my testing environments. I checked the API response timings by using</p>
<pre><code>kubectl get deployments -v=6
</code></pre>
<p>and encountered the following error:</p>
<pre><code>Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
</code></pre>
<p>This error is caused by the metrics server being down for some reason, or by it having been deleted. We can check the list of API services using <code>kubectl get apiservice</code>. If we want to get rid of this error, we can simply remove the apiservice using <code>kubectl delete apiservice</code> as follows:</p>
<pre><code>$ kubectl delete apiservice v1beta1.metrics.k8s.io
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted
</code></pre>
<p><a href="https://pet2cattle.com/2021/05/unable-to-retrieve-the-complete-list-of-server-apis" rel="nofollow noreferrer">This blog</a> supports the same idea.</p>
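<p>To confirm which aggregated API is unavailable before deleting anything, the AVAILABLE column of the apiservice list is the thing to check; anything not showing True there is unreachable:</p>
<pre><code>kubectl get apiservice | grep -v True
</code></pre>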
|
<p>Envoy is using all the memory and the pods are getting evicted. Is there a way to set limit to how much memory envoy proxy can use in the envoy configuration file?</p>
| <p>You can probably do that by configuring the <code>overload-manager</code> in the <code>bootstrap</code> configuration for Envoy. Here's a <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/operations/overload_manager/overload_manager#overload-manager" rel="nofollow noreferrer">documentation</a> link for more details. It is done simply by adding <code>overload-manager</code> section as follows:</p>
<pre><code>overload_manager:
refresh_interval: 0.25s
resource_monitors:
- name: "envoy.resource_monitors.fixed_heap"
typed_config:
"@type": type.googleapis.com/envoy.extensions.resource_monitors.fixed_heap.v3.FixedHeapConfig
# TODO: Tune for your system.
max_heap_size_bytes: 2147483648 # 2 GiB <==== fix this!
actions:
- name: "envoy.overload_actions.shrink_heap"
triggers:
- name: "envoy.resource_monitors.fixed_heap"
threshold:
value: 0.95
- name: "envoy.overload_actions.stop_accepting_requests"
triggers:
- name: "envoy.resource_monitors.fixed_heap"
threshold:
value: 0.98
</code></pre>
|
<p>I was trying to run my pod as non root and also grant it some <a href="https://linux.die.net/man/7/capabilities" rel="nofollow noreferrer">capabilities</a>.<br />
This is my config:</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: container-name
securityContext:
capabilities:
add: ["SETUID", "SYS_TIME"]
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1001
</code></pre>
<p>when I deploy my pod and connect to it I run <code>ps aux</code> and see:</p>
<pre><code>PID USER TIME COMMAND
1 root 0:32 node bla.js
205 root 0:00 /bin/bash
212 root 0:00 ps aux
</code></pre>
<p>I then do <code>cat /proc/1/status</code> and see:</p>
<pre><code>CapPrm: 0000000000000000
CapEff: 0000000000000000
</code></pre>
<p>Which means I have no capabilities for this container's process.<br />
The thing is that if I remove the <code>runAsNonRoot: true</code> flag from the <code>securityContext</code> I can see I <strong>do</strong> have multiple capabilities.<br />
Is there a way to run a pod as a non-root and still add some capabilities?</p>
| <p>This is the expected behavior. The capabilities are meant to divide the privileges traditionally associated with the superuser (root) into distinct units; a non-root user cannot enable/disable such capabilities, as that could create a security breach.</p>
<p>The <code>capabilities</code> feature in the <code>SecurityContext</code> key is designed to manage (either to limit or to expand) the Linux capabilities for the container's context; in a pod run as a root this means that the capabilities are inherited by the processes since these are owned by the root user; however, if the pod is run as a non-root user, it does not matter if the context has those capabilities enabled because the Linux Kernel will not allow a non-root user to set capabilities to a process.</p>
<p>This point can be illustrated very easily. If you run your container with the key <code>runAsNonRoot</code> set to <code>true</code> and add the capabilities as you did in the manifest shared, and then you exec into the Pod, you should be able to see those capabilities added to the context with the command:</p>
<pre><code>$ capsh --print
Current: = cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_sys_time,cap_mknod,cap_audit_write,cap_setfcap+i
Bounding set =cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_sys_time,cap_mknod,cap_audit_write,cap_setfcap
</code></pre>
<p>But you will see the <code>CapPrm</code> or <code>CapEff</code> set to x0 in any process run by the user 1001:</p>
<pre><code>$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1001 1 0.0 0.0 4340 760 ? Ss 14:57 0:00 /bin/sh -c node server.js
1001 7 0.0 0.5 772128 22376 ? Sl 14:57 0:00 node server.js
1001 21 0.0 0.0 4340 720 pts/0 Ss 14:59 0:00 sh
1001 28 0.0 0.0 17504 2096 pts/0 R+ 15:02 0:00 ps aux
$ grep Cap proc/1/status
CapInh: 00000000aa0425fb
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 00000000aa0425fb
CapAmb: 0000000000000000
</code></pre>
|
<p>I need to run sudo minikube start --memory=4096 --cpus=2 for my use case. However, it is already running with 1 CPU. Somehow I started it without creating an entry in ~/.kube/config,
so "kubectl config delete-cluster minikube" returns "error: cannot delete cluster minikube, not in /home/ec2-user/.kube/config".</p>
<p>Trying to work around this, so I could delete my cluster and start a larger one, I ran "kubectl config set current-context minikube", which created this entry in .kube/config (output of cat .kube/config):</p>
<pre><code>apiVersion: v1
clusters: null
contexts: null
current-context: minikube
kind: Config
preferences: {}
users: null
</code></pre>
<p>Anyone know how to delete my current cluster, so I can create a bigger one with more memory and cpus?</p>
| <p>To change cpus or memory you want to get the profile name like mentioned before:</p>
<pre><code>minikube profile list
</code></pre>
<p>And then delete the profile you want to change, in my case it was minikube:</p>
<pre><code>minikube delete --profile minikube
</code></pre>
<p>Then you can start minikube again creating new profile:</p>
<pre><code>minikube start --vm-driver=virtualbox --cpus 8 --memory 16g
</code></pre>
|
<p>I'd like to launch a clustered Socket IO application in Kubernetes. When I create a service (whether NodePort or LoadBalancer) the client application keeps getting disconnected and it reconnects again with the following logs:</p>
<pre><code>undefined
oah4g28zZCw36g1MAAAm
undefined
undefined
oac4g28zZCw36g1MFAAAx
undefined
</code></pre>
<p>and this happens rapidly.</p>
<p>However, when I connect to a single Pod directly, the problem goes away and the connection becomes stable.</p>
<p>How I am creating the service is by the following command:</p>
<pre><code>kubectl expose deployment xxx --type=LoadBalancer --port=80 --target-port=3000
</code></pre>
<p>I know that something such as a KeepAlive or Timeout configuration is missing in the service, but how can I add those or better said properly configure the service for Socket IO?</p>
| <p>You can use <code>sessionAffinity: ClientIP</code>, which makes the Kubernetes Service keep routing a given client to the same Pod.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: example
spec:
selector:
app: example
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 1800
</code></pre>
<p>just for ref : <a href="https://stackoverflow.com/questions/41124840/does-the-ws-websocket-server-library-requires-sticky-session-when-it-is-used-beh">Does the ws websocket server library requires sticky session when it is used behind a load balancer?</a></p>
|
<p>I'm trying to upgrade a GKE cluster from 1.21 to 1.22 and I'm getting some warnings about deprecated APIs. I am also running Istio 1.12.1 in my cluster.</p>
<p>One of them is causing me some concerns:</p>
<p><code>/apis/extensions/v1beta1/ingresses</code></p>
<p>I was surprised to see this warning because we are up to date with our deployments. We don't use Ingresses.</p>
<p>Further deep diving, I got the below details:</p>
<pre><code>β kubectl get --raw /apis/extensions/v1beta1/ingresses | jq
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
{
"kind": "IngressList",
"apiVersion": "extensions/v1beta1",
"metadata": {
"resourceVersion": "191638911"
},
"items": []
}
</code></pre>
<p>It seems it is an IngressList that calls the old API. I tried deleting it:</p>
<pre><code>β kubectl delete --raw /apis/extensions/v1beta1/ingresses
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
</code></pre>
<p>Neither able to delete it, nor able to upgrade.</p>
<p>Any suggestions would be really helpful.</p>
<p>[Update]: My GKE cluster got updated to <code>1.21.11-gke.1900</code> and after that the warning messages are gone.</p>
| <p>In our case, the use of an old version of <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> seems to have caused the calls to the deprecated beta API. Some days after updating kube-state-metrics, the deprecated API calls stopped.</p>
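<p>To check which kube-state-metrics version you are running, something like this should print the image tag (the namespace here is an assumption — adjust it to wherever you deployed it):</p>
<pre><code>kubectl get deployment kube-state-metrics -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
</code></pre>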
|
<p>I have Gitlab Kubernetes integration in my <code>Project 1</code> and I am able to access the <code>kube-context</code> within that project's pipelines without any issues.</p>
<p>I have another project, <code>Project 2</code> where I want to use the same cluster that I integrated in my <code>Project 1</code>.</p>
<p>This i my agent config file:</p>
<pre><code># .gitlab/agents/my-agent/config.yaml
ci_access:
projects:
- id: group/project-2
</code></pre>
<p>When I try to add a Kubernetes cluster in my <code>Project 2</code>, I am expecting to see the cluster name that I set up for <code>Project 1</code> in the dropdown, but I don't see it:</p>
<p><a href="https://i.stack.imgur.com/LYZfr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LYZfr.png" alt="gitlab-agent" /></a></p>
| <p>Have you ensured that the agent configuration file <code>config.yaml</code> is present in your <code>project2</code> at the specified directory?</p>
<p>If you have, you should be able to choose the agent from the list and setup Gitlab with it. Having the configuration file in both projects is <a href="https://docs.gitlab.com/ee/user/clusters/agent/install/#create-an-agent-configuration-file" rel="nofollow noreferrer">necessary</a> when you configure multiple projects to use the same k8s cluster.</p>
|
<p>I have created an EKS cluster with the AWS ALB Ingress Controller and also created an ingress for it. But after creating the ingress, I am getting a <strong>404</strong> when accessing some files, e.g. main.css. Following is my ingress code.</p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: instance
kubernetes.io/ingress.class: alb
name: my-site
namespace: my-site
spec:
rules:
-
host: my-site.com
http:
paths:
-
backend:
service:
name: my-site-front-end-service
port:
number: 80
path: /
pathType: Exact
-
backend:
service:
name: my-site-backend-service
port:
number: 3000
path: /marketing/
pathType: Prefix
</code></pre>
<p>My frontend application needs to be available at <code>HTTP://my-site.com</code> and the backend service at <code>HTTP://my-site.com/marketing/</code>. But now I am getting 404 for some of the .css files. The URLs for the files are like HTTP://my-site.com/main.css and HTTP://my-site.com//main-5a17625f9c68d9c60c4c.js. What is wrong with my ingress configuration?</p>
<p>When I update the path type to Prefix in the frontend-service section, all the calls are forwarded to the frontend, hence the calls to the backend service don't work.</p>
| <p>Your ingress path <code>/</code> uses <code>pathType: Exact</code>. Therefore, the requests matching <code>/*</code> will <em>not be matched</em>. You can change this to type <code>Prefix</code> and it should work.</p>
<p>Additionally, consider moving your files away from the root of the path to somewhere like <code>/files/</code> and then use <code>/files</code> as a <code>prefix</code> type ingress route. This will make sure the <code>/</code> path remains <code>exactmatch</code> so as not to confuse it with other paths.</p>
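<p>As a rough sketch (keeping the rest of your manifest as it is), the rules could look like this — the <code>/marketing</code> rule is listed first so the more specific path should still take priority over the catch-all <code>/</code>:</p>
<pre><code>      paths:
      -
        backend:
          service:
            name: my-site-backend-service
            port:
              number: 3000
        path: /marketing
        pathType: Prefix
      -
        backend:
          service:
            name: my-site-front-end-service
            port:
              number: 80
        path: /
        pathType: Prefix
</code></pre>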
|
<p>Hey guys, I'm trying to set up Datadog as a custom metrics provider for my Kubernetes HPA using the official guide:</p>
<p><a href="https://docs.datadoghq.com/agent/cluster_agent/external_metrics/?tab=helm" rel="nofollow noreferrer">https://docs.datadoghq.com/agent/cluster_agent/external_metrics/?tab=helm</a></p>
<p>I'm running on <strong>EKS 1.18</strong> & Datadog Cluster Agent (<strong>v1.10.0</strong>).
The problem is that I can't get the external metrics for my HPA:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: hibob-hpa
spec:
minReplicas: 1
maxReplicas: 5
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: something
metrics:
- type: External
external:
      metricName: kubernetes_state.container.cpu_limit
metricSelector:
matchLabels:
          pod: something-54c4bd4db7-pm9q5
targetAverageValue: 9
</code></pre>
<p>horizontal-pod-autoscaler unable to get external metric:</p>
<pre><code>canary/nginx.net.request_per_s/&LabelSelector{MatchLabels:map[string]string{kube_app_name: nginx,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server is currently unable to handle the request (get nginx.net.request_per_s.external.metrics.k8s.io)
</code></pre>
<p>These are the errors I'm getting inside the cluster-agent:</p>
<pre><code>datadog-cluster-agent-585897dc8d-x8l82 cluster-agent 2021-08-20 06:46:14 UTC | CLUSTER | ERROR | (pkg/clusteragent/externalmetrics/metrics_retriever.go:77 in retrieveMetricsValues) | Unable to fetch external metrics: [Error while executing metric query avg:nginx.net.request_per_s{kubea_app_name:ingress-nginx}.rollup(30): API error 403 Forbidden: {"status":********@datadoghq.com"}, strconv.Atoi: parsing "": invalid syntax]
</code></pre>
<pre><code># datadog-cluster-agent status
Getting the status from the agent.
2021-08-19 15:28:21 UTC | CLUSTER | WARN | (pkg/util/log/log.go:541 in func1) | Agent configuration relax permissions constraint on the secret backend cmd, Group can read and exec
===============================
Datadog Cluster Agent (v1.10.0)
===============================
Status date: 2021-08-19 15:28:21.519850 UTC
Agent start: 2021-08-19 12:11:44.266244 UTC
Pid: 1
Go Version: go1.14.12
Build arch: amd64
Agent flavor: cluster_agent
Check Runners: 4
Log Level: INFO
Paths
=====
Config File: /etc/datadog-agent/datadog-cluster.yaml
conf.d: /etc/datadog-agent/conf.d
Clocks
======
System UTC time: 2021-08-19 15:28:21.519850 UTC
Hostnames
=========
ec2-hostname: ip-10-30-162-8.eu-west-1.compute.internal
hostname: i-00d0458844a597dec
instance-id: i-00d0458844a597dec
socket-fqdn: datadog-cluster-agent-585897dc8d-x8l82
socket-hostname: datadog-cluster-agent-585897dc8d-x8l82
hostname provider: aws
unused hostname providers:
configuration/environment: hostname is empty
gce: unable to retrieve hostname from GCE: status code 404 trying to GET http://169.254.169.254/computeMetadata/v1/instance/hostname
Metadata
========
Leader Election
===============
Leader Election Status: Running
Leader Name is: datadog-cluster-agent-585897dc8d-x8l82
Last Acquisition of the lease: Thu, 19 Aug 2021 12:13:14 UTC
Renewed leadership: Thu, 19 Aug 2021 15:28:07 UTC
Number of leader transitions: 17 transitions
Custom Metrics Server
=====================
External metrics provider uses DatadogMetric - Check status directly from Kubernetes with: `kubectl get datadogmetric`
Admission Controller
====================
Disabled: The admission controller is not enabled on the Cluster Agent
=========
Collector
=========
Running Checks
==============
kubernetes_apiserver
--------------------
Instance ID: kubernetes_apiserver [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/kubernetes_apiserver.d/conf.yaml.default
Total Runs: 787
Metric Samples: Last Run: 0, Total: 0
Events: Last Run: 0, Total: 660
Service Checks: Last Run: 3, Total: 2,343
Average Execution Time : 1.898s
Last Execution Date : 2021-08-19 15:28:17.000000 UTC
Last Successful Execution Date : 2021-08-19 15:28:17.000000 UTC
=========
Forwarder
=========
Transactions
============
Deployments: 350
Dropped: 0
DroppedOnInput: 0
Nodes: 497
Pods: 3
ReplicaSets: 576
Requeued: 0
Retried: 0
RetryQueueSize: 0
Services: 263
Transaction Successes
=====================
Total number: 3442
Successes By Endpoint:
check_run_v1: 786
intake: 181
orchestrator: 1,689
series_v1: 786
==========
Endpoints
==========
https://app.datadoghq.eu - API Key ending with:
- f295b
=====================
Orchestrator Explorer
=====================
ClusterID: f7b4f97a-3cf2-11ea-aaa8-0a158f39909c
ClusterName: production
ContainerScrubbing: Enabled
======================
Orchestrator Endpoints
======================
===============
Forwarder Stats
===============
Pods: 3
Deployments: 350
ReplicaSets: 576
Services: 263
Nodes: 497
===========
Cache Stats
===========
Elements in the cache: 393
Pods:
Last Run: (Hits: 0 Miss: 0) | Total: (Hits: 7 Miss: 5)
Deployments:
Last Run: (Hits: 36 Miss: 1) | Total: (Hits: 40846 Miss: 2444)
ReplicaSets:
Last Run: (Hits: 297 Miss: 1) | Total: (Hits: 328997 Miss: 19441)
Services:
Last Run: (Hits: 44 Miss: 0) | Total: (Hits: 49520 Miss: 2919)
Nodes:
      Last Run: (Hits: 9 Miss: 0) | Total: (Hits: 10171 Miss: 755)
</code></pre>
<p>And this is what I get from the DatadogMetric:</p>
<pre><code>Name: dcaautogen-2f116f4425658dca91a33dd22a3d943bae5b74
Namespace: datadog
Labels: <none>
Annotations: <none>
API Version: datadoghq.com/v1alpha1
Kind: DatadogMetric
Metadata:
Creation Timestamp: 2021-08-19T15:14:14Z
Generation: 1
Managed Fields:
API Version: datadoghq.com/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:spec:
f:status:
.:
f:autoscalerReferences:
f:conditions:
.:
k:{"type":"Active"}:
.:
f:lastTransitionTime:
f:lastUpdateTime:
f:status:
f:type:
k:{"type":"Error"}:
.:
f:lastTransitionTime:
f:lastUpdateTime:
f:message:
f:reason:
f:status:
f:type:
k:{"type":"Updated"}:
.:
f:lastTransitionTime:
f:lastUpdateTime:
f:status:
f:type:
k:{"type":"Valid"}:
.:
f:lastTransitionTime:
f:lastUpdateTime:
f:status:
f:type:
f:currentValue:
Manager: datadog-cluster-agent
Operation: Update
Time: 2021-08-19T15:14:44Z
Resource Version: 164942235
Self Link: /apis/datadoghq.com/v1alpha1/namespaces/datadog/datadogmetrics/dcaautogen-2f116f4425658dca91a33dd22a3d943bae5b74
UID: 6e9919eb-19ca-4131-b079-4a8a9ac577bb
Spec:
External Metric Name: nginx.net.request_per_s
Query: avg:nginx.net.request_per_s{kube_app_name:nginx}.rollup(30)
Status:
Autoscaler References: canary/hibob-hpa
Conditions:
Last Transition Time: 2021-08-19T15:14:14Z
Last Update Time: 2021-08-19T15:53:14Z
Status: True
Type: Active
Last Transition Time: 2021-08-19T15:14:14Z
Last Update Time: 2021-08-19T15:53:14Z
Status: False
Type: Valid
Last Transition Time: 2021-08-19T15:14:14Z
Last Update Time: 2021-08-19T15:53:14Z
Status: True
Type: Updated
Last Transition Time: 2021-08-19T15:14:44Z
Last Update Time: 2021-08-19T15:53:14Z
Message: Global error (all queries) from backend
Reason: Unable to fetch data from Datadog
Status: True
Type: Error
Current Value: 0
Events: <none>
</code></pre>
<p>This is my cluster agent deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "18"
meta.helm.sh/release-name: datadog
meta.helm.sh/release-namespace: datadog
creationTimestamp: "2021-02-05T07:36:39Z"
generation: 18
labels:
app.kubernetes.io/instance: datadog
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: datadog
app.kubernetes.io/version: "7"
helm.sh/chart: datadog-2.7.0
name: datadog-cluster-agent
namespace: datadog
resourceVersion: "164881216"
selfLink: /apis/apps/v1/namespaces/datadog/deployments/datadog-cluster-agent
uid: ec52bb4b-62af-4007-9bab-d5d16c48e02c
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: datadog-cluster-agent
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
ad.datadoghq.com/cluster-agent.check_names: '["prometheus"]'
ad.datadoghq.com/cluster-agent.init_configs: '[{}]'
ad.datadoghq.com/cluster-agent.instances: |
[{
"prometheus_url": "http://%%host%%:5000/metrics",
"namespace": "datadog.cluster_agent",
"metrics": [
"go_goroutines", "go_memstats_*", "process_*",
"api_requests",
"datadog_requests", "external_metrics", "rate_limit_queries_*",
"cluster_checks_*"
]
}]
checksum/api_key: something
checksum/application_key: something
checksum/clusteragent_token: something
checksum/install_info: something
creationTimestamp: null
labels:
app: datadog-cluster-agent
name: datadog-cluster-agent
spec:
containers:
- env:
- name: DD_HEALTH_PORT
value: "5555"
- name: DD_API_KEY
valueFrom:
secretKeyRef:
key: api-key
name: datadog
optional: true
- name: DD_APP_KEY
valueFrom:
secretKeyRef:
key: app-key
name: datadog-appkey
- name: DD_EXTERNAL_METRICS_PROVIDER_ENABLED
value: "true"
- name: DD_EXTERNAL_METRICS_PROVIDER_PORT
value: "8443"
- name: DD_EXTERNAL_METRICS_PROVIDER_WPA_CONTROLLER
value: "false"
- name: DD_EXTERNAL_METRICS_PROVIDER_USE_DATADOGMETRIC_CRD
value: "true"
- name: DD_EXTERNAL_METRICS_AGGREGATOR
value: avg
- name: DD_CLUSTER_NAME
value: production
- name: DD_SITE
value: datadoghq.eu
- name: DD_LOG_LEVEL
value: INFO
- name: DD_LEADER_ELECTION
value: "true"
- name: DD_COLLECT_KUBERNETES_EVENTS
value: "true"
- name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
value: datadog-cluster-agent
- name: DD_CLUSTER_AGENT_AUTH_TOKEN
valueFrom:
secretKeyRef:
key: token
name: datadog-cluster-agent
- name: DD_KUBE_RESOURCES_NAMESPACE
value: datadog
- name: DD_ORCHESTRATOR_EXPLORER_ENABLED
value: "true"
- name: DD_ORCHESTRATOR_EXPLORER_CONTAINER_SCRUBBING_ENABLED
value: "true"
- name: DD_COMPLIANCE_CONFIG_ENABLED
value: "false"
image: gcr.io/datadoghq/cluster-agent:1.10.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 6
httpGet:
path: /live
port: 5555
scheme: HTTP
initialDelaySeconds: 15
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 5
name: cluster-agent
ports:
- containerPort: 5005
name: agentport
protocol: TCP
- containerPort: 8443
name: metricsapi
protocol: TCP
readinessProbe:
failureThreshold: 6
httpGet:
path: /ready
port: 5555
scheme: HTTP
initialDelaySeconds: 15
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 5
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/datadog-agent/install_info
name: installinfo
readOnly: true
subPath: install_info
dnsConfig:
options:
- name: ndots
value: "3"
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: datadog-cluster-agent
serviceAccountName: datadog-cluster-agent
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: datadog-installinfo
name: installinfo
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2021-05-13T15:46:33Z"
lastUpdateTime: "2021-05-13T15:46:33Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-02-05T07:36:39Z"
lastUpdateTime: "2021-08-19T12:12:06Z"
message: ReplicaSet "datadog-cluster-agent-585897dc8d" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 18
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
| <p>For the record, I got this sorted.</p>
<p>According to the Helm default values file, you must set the app key in order to use the metrics provider:</p>
<pre><code> # datadog.appKey -- Datadog APP key required to use metricsProvider
## If you are using clusterAgent.metricsProvider.enabled = true, you must set
## a Datadog application key for read access to your metrics.
appKey: # <DATADOG_APP_KEY>
</code></pre>
<p>I guess this is a lack of information in the docs and also a check that is missing at the cluster-agent startup. Going to open an issue about it.</p>
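<p>If you installed via Helm, the key can be injected roughly like this (the release name, chart and namespace here are assumptions based on a typical install — adjust to yours):</p>
<pre><code>helm upgrade datadog datadog/datadog \
  --namespace datadog \
  --reuse-values \
  --set datadog.appKey=<DATADOG_APP_KEY>
</code></pre>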
|
<p>I accidentally deleted the kube-proxy DaemonSet (which runs the kube-proxy pods in my cluster) using the command <code>kubectl delete -n kube-system daemonset kube-proxy</code>. What is the best way to restore it?
<a href="https://i.stack.imgur.com/AChcS.png" rel="nofollow noreferrer">That's how it should look</a></p>
| <p>Kubernetes allows you to <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy/" rel="nofollow noreferrer">reinstall kube-proxy</a> by running the following command, which installs the kube-proxy addon components via the API server:</p>
<pre><code>$ kubeadm init phase addon kube-proxy --kubeconfig ~/.kube/config --apiserver-advertise-address <API_SERVER_ADVERTISE_ADDRESS>
</code></pre>
<p>This will generate output like:</p>
<pre><code>[addons] Applied essential addon: kube-proxy
</code></pre>
<p><code>--apiserver-advertise-address</code> is the IP address the API server will advertise it's listening on. If it is not set, the default network interface will be used.</p>
<p>Hence kube-proxy will be reinstalled in the cluster by creating a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> and launching the pods.</p>
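<p>Afterwards you can verify that the DaemonSet and its pods are back (the <code>k8s-app=kube-proxy</code> label is what kubeadm normally applies):</p>
<pre><code>kubectl get daemonset kube-proxy -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide
</code></pre>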
|
<p>I'm running the redis chart (<a href="https://artifacthub.io/packages/helm/bitnami/redis/15.7.0" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/redis/15.7.0</a>) as a dependency of a custom chart. I enabled sentinel, so the pods run two containers (redis and sentinel).
I'm using the default values for the chart and I defined 4 replicas. The cluster has 10 nodes and I noticed that three of the redis-sentinel pods run on a single node and only one runs on another node:</p>
<pre><code>myapp-redis-node-0 2/2 Running 8d ip k8s-appname-ctw9v
myapp-redis-node-1 2/2 Running 34d ip k8s-appname-ctw9v
myapp-redis-node-2 2/2 Running 34d ip k8s-appname-ctw9v
myapp-redis-node-3 2/2 Running 34d ip k8s-appname-crm3k
</code></pre>
<p>This is the affinity section for the pod's:</p>
<pre><code>spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/component: node
app.kubernetes.io/instance: myapp
app.kubernetes.io/name: redis
namespaces:
- test
topologyKey: kubernetes.io/hostname
weight: 1
</code></pre>
<p>How I can do to have each pod on diferent nodes?</p>
<p>Thanks!</p>
| <p>You need to update the <code>podAntiAffinity</code> section of the pod template to add a certain k/v pair. This will ensure that for a node, if a pod with that k/v pair already exists, the scheduler will attempt to schedule the pod on another node that doesn't have a pod with that k/v pair. I say attempt, because preferred anti-affinity rules are <code>soft</code> rules and if there are no nodes available, a pod will be scheduled on a node that might possibly violate the anti-affinity. Details <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#types-of-inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">here</a>.</p>
<p>Try patching the template as:</p>
<pre><code>spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: <ADD_LABEL_HERE>
              operator: In
              values:
              - <ADD_VALUE_HERE>
          # topologyKey is required for a podAffinityTerm; hostname spreads the pods across nodes
          topologyKey: kubernetes.io/hostname
|
<p>I'm developing CRDs for Kubernetes, using VS Code as my IDE. I want autocompletion and IntelliSense in the IDE.</p>
<p>It needs a JSON schema to do so. I have a huge number of CRDs to support, so I want an easy way to convert CRDs to JSON schema.</p>
| <p>You can have CRD IntelliSense thanks to <a href="https://github.com/vscode-kubernetes-tools/vscode-kubernetes-tools/" rel="nofollow noreferrer">vscode-kubernetes-tools</a> extension. It will fetch CRD from the active cluster. Here is the relevant merged <a href="https://github.com/vscode-kubernetes-tools/vscode-kubernetes-tools/pull/924" rel="nofollow noreferrer">pull request</a> for feature details.</p>
|
<p>I am trying to set up Postgres cluster using CrunchyData Postgres operator. I am facing issue where the pod <code>backrest-shared-repo</code> is always in <code>Pending</code> state.</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-ha-1 Pending 12m
postgres-ha-1-pgbr-repo Pending 12m
</code></pre>
<p>While debugging I found that the <code>PersistentVolumeClaim</code> is also in the Pending state. The events of the PVC are below:</p>
<pre><code>no persistent volumes available for this claim and no storage class is set
</code></pre>
<p><strong>PVC.yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: "2022-07-08T10:28:48Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
pg-cluster: postgres-ha-1
vendor: crunchydata
name: postgres-ha-1-pgbr-repo
namespace: pgo
resourceVersion: "1786569"
uid: 6f80d516-320c-490e-ad6a-83400ea998a4
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 3G
volumeMode: Filesystem
status:
phase: Pending
</code></pre>
<p>Below is the storage configuration in <code>postgres-operator.yml</code>:</p>
<pre><code>backrest_storage: "hostpathstorage"
backup_storage: "hostpathstorage"
primary_storage: "hostpathstorage"
replica_storage: "hostpathstorage"
pgadmin_storage: "hostpathstorage"
wal_storage: ""
storage1_name: "default"
storage1_access_mode: "ReadWriteOnce"
storage1_size: "1G"
storage1_type: "dynamic"
storage2_name: "hostpathstorage"
storage2_access_mode: "ReadWriteMany"
storage2_size: "3G"
storage2_type: "create"
storage3_name: "nfsstorage"
storage3_access_mode: "ReadWriteMany"
storage3_size: "1G"
storage3_type: "create"
storage3_supplemental_groups: "65534"
storage4_name: "nfsstoragered"
storage4_access_mode: "ReadWriteMany"
storage4_size: "1G"
storage4_match_labels: "crunchyzone=red"
storage4_type: "create"
storage4_supplemental_groups: "65534"
storage5_name: "storageos"
storage5_access_mode: "ReadWriteOnce"
storage5_size: "5Gi"
storage5_type: "dynamic"
storage5_class: "fast"
storage6_name: "primarysite"
storage6_access_mode: "ReadWriteOnce"
storage6_size: "4G"
storage6_type: "dynamic"
storage6_class: "primarysite"
storage7_name: "alternatesite"
storage7_access_mode: "ReadWriteOnce"
storage7_size: "4G"
storage7_type: "dynamic"
storage7_class: "alternatesite"
storage8_name: "gce"
storage8_access_mode: "ReadWriteOnce"
storage8_size: "300M"
storage8_type: "dynamic"
storage8_class: "standard"
storage9_name: "rook"
storage9_access_mode: "ReadWriteOnce"
storage9_size: "1Gi"
storage9_type: "dynamic"
storage9_class: "rook-ceph-block"
</code></pre>
<p>Can anyone help me in solving the issue?</p>
| <p>You need to create a <code>PersistentVolume</code> in the cluster before you can use it with a <code>PersistentVolumeClaim</code>. The error simply means you do not have any PVs that can be matched with your PVC.</p>
<p>Here's the official <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">guide</a> on how to create <code>PersistentVolumes</code>. Just ensure that the specifications you set on the <code>PersistentVolume</code> match the <code>PersistentVolumeClaim</code>, otherwise it will not be bound.</p>
<p>You can use a <code>hostPath</code> type PV, which will simply create a directory on your worker node and use it to store data. This will prove functional correctness. Later you can perhaps move towards a more central solution with a proper shared volume backend (details in the docs <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">here</a>).</p>
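<p>As a minimal sketch, a <code>hostPath</code> PersistentVolume that should match the pending claim above could look like this (the path is an assumption, and <code>ReadWriteMany</code> on a hostPath only really makes sense as a single-node proof of concept):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-ha-1-pgbr-repo-pv
spec:
  capacity:
    storage: 3G
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/postgres-ha-1-pgbr-repo   # assumption: any directory on the worker node
</code></pre>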
|
<p>I have a deployment with the volumes and limits presented below.
The problem is that Kubernetes rejects creating the pod with this error:</p>
<pre><code> pods "app-app-96d5dc969-2g6zp" is forbidden:
exceeded quota: general-resourcequota, requested: limits.ephemeral-storage=1280Mi,
used: limits.ephemeral-storage=0, limited: limits.ephemeral-storage=1Gi
</code></pre>
<p>As I understand it, nodes have a 1Gi limit for ephemeral-storage, but where does 1280Mi come from?
Is it correct that Kubernetes allocates some amount of storage for each volume?</p>
<pre><code>...
spec:
containers:
resources:
limits:
cpu: 1
memory: 3Gi
ephemeral-storage: "1Gi"
requests:
cpu: 1
memory: 3Gi
ephemeral-storage: "1Gi"
volumeMounts:
- name: app-logs
mountPath: /app/log
- name: app-tmp
mountPath: /tmp
- name: app-backups
mountPath: /app/backups
- name: app-logback
mountPath: /app/config/logback.xml
subPath: logback.xml
- name: app-mdc
mountPath: /app/config/mdc.properties
subPath: mdc.properties
volumes:
- name: app-logs
emptyDir: {}
- name: app-tmp
emptyDir: {}
- name: app-backups
emptyDir: {}
- name: app-logback
configMap:
name: "{{ include "app.fullname" . }}-app-logback"
- name: app-mdc
configMap:
name: "{{ include "app.fullname" . }}-app-mdc"
</code></pre>
<p>Resource quotes for namespace:</p>
<pre><code>kubectl describe quota
Name: general-resourcequota
Namespace: app
Resource Used Hard
-------- ---- ----
configmaps 5 15
limits.cpu 0 4
limits.ephemeral-storage 0 1Gi
limits.memory 0 8Gi
pods 0 10
requests.cpu 0 2
requests.memory 0 4Gi
requests.storage 0 50Gi
services 1 20
services.loadbalancers 1 5
services.nodeports 2 5
</code></pre>
| <p>Your namespace has a quota that caps ephemeral storage at 1Gi:</p>
<p><code>limits.ephemeral-storage 0 1Gi</code></p>
<p>The message says that with your deployment the namespace would exceed that limit, reaching 1280Mi (1.25Gi).</p>
<p>Reduce your limit to 700Mi to stay within the 1Gi quota and your pod will be scheduled accordingly. Note that the quota aggregates resource consumption across the namespace, not on a per-pod basis.</p>
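<p>For example, a sketch of the adjusted block (700Mi is just a value that leaves headroom under the 1Gi quota):</p>
<pre><code>resources:
  limits:
    cpu: 1
    memory: 3Gi
    ephemeral-storage: "700Mi"
  requests:
    cpu: 1
    memory: 3Gi
    ephemeral-storage: "700Mi"
</code></pre>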
|
<p>Currently I use <strong>Traefik IngressRoute</strong> to expose the Traefik dashboard. I am using this configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: traefik-dashboard
namespace: my-namespace
spec:
routes:
- match: Host(`traefik.example.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
kind: Rule
services:
- name: api@internal
kind: TraefikService
middlewares:
- name: traefik-dashboard-https-redirect
- name: traefik-dashboard-basic-auth
tls:
certResolver: le
</code></pre>
<p>and it works fine.</p>
<p>However I would like to expose it with a native <strong>Kubernetes Ingress</strong>. I can't find any resource which shows how to access <code>api@internal</code> from an Ingress. Is it even possible?</p>
| <p>It is not possible to reference api@internal from an Ingress.</p>
<p>There is a workaround I think, which could be:</p>
<ul>
<li>expose the api as insecure, it exposes the dashboard by default on an entrypoint called traefik on port 8080.</li>
<li>update the entrypoint manually in the static conf: <code>entrypoints.traefik.address=<what-you-want></code></li>
<li>create a service pointing to the traefik entrypoint (port 8080 by default).</li>
<li>create an ingress pointing to the service</li>
</ul>
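<p>A hedged sketch of the last two steps — the names, namespace, selector and host below are assumptions, and it expects the API to be exposed insecurely so the dashboard is served on the <code>traefik</code> entrypoint (port 8080 by default):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard
  namespace: traefik                  # assumption: namespace where Traefik runs
spec:
  selector:
    app.kubernetes.io/name: traefik   # assumption: labels of your Traefik pods
  ports:
  - name: dashboard
    port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  rules:
  - host: traefik.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: traefik-dashboard
            port:
              number: 8080
</code></pre>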
|
<p>I'm trying to build a Kubernetes job on the fly by using the Kubernetes client in C# (<a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">https://github.com/kubernetes-client/csharp</a>). I get an error when the job is trying to pull the image from the repo.</p>
<p>The image I'm trying to attach to the job is situated in the local docker repo. Deploying the job to the namespace is no problem; this works just fine, but during the build is throws an error in Lens (see image).</p>
<p><a href="https://i.stack.imgur.com/zjErV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zjErV.png" alt="enter image description here" /></a></p>
<p>The code for building the job:</p>
<pre><code> var job = new V1Job
{
ApiVersion = "batch/v1",
Kind = "Job",
Metadata = new V1ObjectMeta
{
Name = name,
Labels = new Dictionary<string, string>(),
},
Spec = new V1JobSpec
{
BackoffLimit = backoffLimit,
TtlSecondsAfterFinished = 0,
Template = new V1PodTemplateSpec
{
Spec = new V1PodSpec
{
Tolerations = new List<V1Toleration>(),
Volumes = new List<V1Volume>
{
new V1Volume
{
Name = "podinfo",
DownwardAPI = new V1DownwardAPIVolumeSource
{
Items = new V1DownwardAPIVolumeFile[]
{
new V1DownwardAPIVolumeFile { Path = "namespace", FieldRef = new V1ObjectFieldSelector("metadata.namespace") },
new V1DownwardAPIVolumeFile { Path = "name", FieldRef = new V1ObjectFieldSelector("metadata.name") },
},
},
},
},
Containers = new[]
{
new V1Container
{
Name = "tapereader-job-x-1",
Image = "tapereader_sample_calculation",
Resources = new V1ResourceRequirements
{
Limits = new Dictionary<string, ResourceQuantity>
{
{ "cpu", new ResourceQuantity("4") },
{ "memory", new ResourceQuantity("4G") },
},
Requests = new Dictionary<string, ResourceQuantity>
{
{ "cpu", new ResourceQuantity("0.5") },
{ "memory", new ResourceQuantity("2G") },
},
},
VolumeMounts = new List<V1VolumeMount>
{
new V1VolumeMount { Name = "podinfo", MountPath = "/etc/podinfo", ReadOnlyProperty = true },
},
Env = new List<V1EnvVar>(),
},
},
RestartPolicy = "Never",
},
},
},
};
await Client.CreateNamespacedJobAsync(job, "local-tapereader");
</code></pre>
<p>The container is ok, it is present in Docker Desktop (local repo) and I can build & run it without any problems - it also executes the way it should in Docker desktop.</p>
<p><a href="https://i.stack.imgur.com/BDiDk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BDiDk.png" alt="enter image description here" /></a></p>
<p>The k8s client creates the pod & job successfully but I get the following error in Lens:
<a href="https://i.stack.imgur.com/vZcHm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vZcHm.png" alt="enter image description here" /></a></p>
<p>So basically, it states that access was denied? How can I overcome this issue?</p>
<p>I already tried to add creds but this doesn't work</p>
<blockquote>
<p>kubectl create secret generic regcred
--from-file=.dockerconfigjson=pathto.docker\config.json --type=kubernetes.io/dockerconfigjson</p>
</blockquote>
<p>UPDATE:</p>
<p>I actually ran the following, like zero0 suggested:</p>
<blockquote>
<p>kubectl create secret generic regcred
--from-file=.dockerconfigjson=C:\Users\<USER_NAME>\.docker\config.json --type=kubernetes.io/dockerconfigjson</p>
</blockquote>
| <p>Found the solution. The image resides in the local repo of Docker Desktop. Because of this the image doesn't have to be pulled. To avoid the image pull, the parameter <strong>ImagePullPolicy</strong> of the Container object should be equal to <strong>"Never"</strong>.</p>
<pre><code>new V1Container
{
ImagePullPolicy = "Never",
Name = name,
Image = image,
...
}
</code></pre>
|
<p>I'm trying to deploy a simple nginx ingress object in my local kubernetes (Docker-Desktop) but I noticed that the class name is showing as "none" when I do:</p>
<pre><code>β localk8 k get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-resource none * localhost 80 15m
</code></pre>
<p>My ingress definition file is defined as:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /hello
pathType: Prefix
backend:
service:
name: hello-app
port:
number: 8080
</code></pre>
<p>I'm also running on a M1 macbook if that makes any difference. Thanks all for the help!</p>
| <p>I am not sure what version of kubernetes you're using, but that annotation was <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation" rel="noreferrer">deprecated in 1.18</a>.</p>
<p>Also, you need to <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation" rel="noreferrer">define an <code>IngressClass</code> resource that describes the class</a> and then reference it with the new field <code>ingressClassName</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: nginx
spec:
controller: example.com/ingress-controller
</code></pre>
<p>Then in your ingress resource add the new field under spec and reference the class name:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  # use the new ingressClassName field
ingressClassName: nginx
rules:
- http:
paths:
- path: /hello
pathType: Prefix
backend:
service:
name: hello-app
port:
number: 8080
</code></pre>
|
<p>Well, I'm using <code>minikube v1.26.0</code> and <code>kubectl</code> to manage <code>Kubernetes</code> on my local machine. When I try to create a <code>ConfigMap</code> with <code>kubectl apply -f ConfigMapFile.yaml</code>, I get the error <code>no matches for kind "configmap" in version "apps/v1"</code>.</p>
<p>ConfigMapFile.yaml</p>
<pre><code>apiVersion: apps/v1
kind: ConfigMap
metadata:
name: test-config-map
data:
.........
</code></pre>
<p>It seems like <code>ConfigMap</code> is not allowed or is deprecated in the Kubernetes <code>apps/v1</code> API group, but I cannot find any solution or tips that would help me with this problem.</p>
| <p>You need to use <code>apiVersion: v1</code> for a ConfigMap. You can also check the API version of any resource using:</p>
<pre><code>kubectl api-resources |grep -i configmap
configmaps cm v1 true ConfigMap
</code></pre>
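<p>So your manifest only needs the core API group, for example (the data key here is just a placeholder):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config-map
data:
  example-key: "example value"
</code></pre>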
|
<p>I have put Neo4j Community Edition (v3.2.2) into Kubernetes. But the Community Edition does not support hot backup, so I have to shut it down to back up/restore the data.</p>
<p>I tried:</p>
<pre><code>kubectl exec neo4j-0 /var/lib/neo4j/bin/neo4j stop
</code></pre>
<p>but it shows:</p>
<pre><code>Neo4j not running
</code></pre>
<p>also tried:</p>
<pre><code>kubectl exec -it neo4j-0 bash
/var/lib/neo4j/bin/neo4j stop
</code></pre>
<p>but still can't stop the neo4j in container</p>
<pre><code>Neo4j not running
</code></pre>
<p>Does any body have a solution?</p>
| <p>For now, <code>offlineMaintenanceModeEnabled</code> is available for Neo4j's Helm chart.</p>
<p><a href="https://neo4j.com/docs/operations-manual/current/kubernetes/maintenance/#put-offline-mode" rel="nofollow noreferrer">https://neo4j.com/docs/operations-manual/current/kubernetes/maintenance/#put-offline-mode</a></p>
<p>You can control the offline maintenance mode by setting <code>neo4j.offlineMaintenanceModeEnabled</code> in Helm's <code>values.yaml</code> to <code>true</code> and <code>false</code>.</p>
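<p>For example, roughly like this with Helm (the release and chart names here are assumptions — adjust them to your install):</p>
<pre><code># put the instance into offline maintenance mode
helm upgrade my-neo4j-release neo4j/neo4j --reuse-values \
  --set neo4j.offlineMaintenanceModeEnabled=true

# ... run your backup/restore ...

# bring it back online
helm upgrade my-neo4j-release neo4j/neo4j --reuse-values \
  --set neo4j.offlineMaintenanceModeEnabled=false
</code></pre>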
|
<p>I have setup a kubernetes cluster with kubeadm with a 3 node vagrant setup. I have installed ArgoCD and when I use vagrant ssh into the kubemaster vm, I can run:</p>
<pre><code>kubectl port-forward svc/argocd-server -n argocd 8080:443
</code></pre>
<p>And I can curl it in the ssh session successfully with:</p>
<pre><code>curl -k https://localhost:8080
</code></pre>
<p>I have a static ip for the nodes with the master being 192.168.56.2, and a port forward set for that vm</p>
<pre><code>config.vm.define "kubemaster" do |node|
...
node.vm.network :private_network, ip: 192.168.56.2
node.vm.network "forwarded_port", guest: 8080, host: 8080
...
end
</code></pre>
<p>On the host I try to access ArgoCD UI in browser with:</p>
<pre><code>https://localhost:8080
https://192.168.56.2:8080
</code></pre>
<p>And I get connection refused</p>
<p>What am I missing?</p>
<p><strong>Edit:</strong></p>
<p>The nodes are running Ubuntu 22 and ufw is not enabled.
I'm running on a Mac.</p>
| <p>It turns out I needed to add the address flag to the port forwarding command</p>
<pre><code>// from
kubectl port-forward svc/argocd-server -n argocd 8080:443
// to
kubectl port-forward --address 0.0.0.0 svc/argocd-server -n argocd 8080:443
</code></pre>
|