Columns: Question (string, 65-39.6k chars), QuestionAuthor (string, 3-30 chars), Answer (string, 38-29.1k chars), AnswerAuthor (string, 3-30 chars)
<p>I have been trying to run a Python Django application on Kubernetes without success. The application runs fine in Docker.</p> <p>This is the YAML Deployment for Kubernetes:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: &quot;1&quot; creationTimestamp: &quot;2022-02-06T14:48:45Z&quot; generation: 1 labels: app: keyvault name: keyvault namespace: default resourceVersion: &quot;520&quot; uid: ccf0e490-517f-4102-b282-2dcd71008948 spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: keyvault strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: app: keyvault spec: containers: - image: david900412/keyvault_web:latest imagePullPolicy: Always name: keyvault-web-5wrph resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: conditions: - lastTransitionTime: &quot;2022-02-06T14:48:45Z&quot; lastUpdateTime: &quot;2022-02-06T14:48:45Z&quot; message: Deployment does not have minimum availability. reason: MinimumReplicasUnavailable status: &quot;False&quot; type: Available - lastTransitionTime: &quot;2022-02-06T14:48:45Z&quot; lastUpdateTime: &quot;2022-02-06T14:48:46Z&quot; message: ReplicaSet &quot;keyvault-6944b7b468&quot; is progressing. reason: ReplicaSetUpdated status: &quot;True&quot; type: Progressing observedGeneration: 1 replicas: 1 unavailableReplicas: 1 updatedReplicas: 1 </code></pre> <p>This is the <code>docker compose</code> file I'm using to run the image in Docker:</p> <pre><code>version: &quot;3.9&quot; services: web: build: . command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/code ports: - &quot;8000:8000&quot; </code></pre> <p>This is the Dockerfile I'm using to run the image in Docker:</p> <pre><code>FROM python:3.9 WORKDIR /code COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY . /code/ </code></pre> <p><code>kubectl describe pod</code> output:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 51s default-scheduler Successfully assigned default/keyvault-6944b7b468-frss4 to minikube Normal Pulled 37s kubelet Successfully pulled image &quot;david900412/keyvault_web:latest&quot; in 12.5095594s Normal Pulled 33s kubelet Successfully pulled image &quot;david900412/keyvault_web:latest&quot; in 434.2995ms Normal Pulling 17s (x3 over 49s) kubelet Pulling image &quot;david900412/keyvault_web:latest&quot; Normal Created 16s (x3 over 35s) kubelet Created container keyvault-web-5wrph Normal Started 16s (x3 over 35s) kubelet Started container keyvault-web-5wrph Normal Pulled 16s kubelet Successfully pulled image &quot;david900412/keyvault_web:latest&quot; in 395.5345ms Warning BackOff 5s (x4 over 33s) kubelet Back-off restarting failed container </code></pre> <p><code>kubectl logs pod</code> does not show anything :(</p> <p>Thanks for your help.</p>
David
<p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p> <p>Based on the comments, the solution should be as shown below.</p> <ol> <li>Remove the <code>volumes</code> definition from the Compose file:</li> </ol> <pre><code>version: &quot;3.9&quot; services: web: build: . command: python manage.py runserver 0.0.0.0:8000 ports: - &quot;8000:8000&quot; </code></pre> <ol start="2"> <li>Specify the startup command with <code>CMD</code> in the Dockerfile (binding to <code>0.0.0.0:8000</code> so the server is reachable from outside the container, matching the Compose command):</li> </ol> <pre><code> FROM python:3.9 WORKDIR /code COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY . /code/ CMD [&quot;python3&quot;,&quot;manage.py&quot;,&quot;runserver&quot;,&quot;0.0.0.0:8000&quot;] </code></pre> <p>Then translate the Docker Compose file to Kubernetes resources. This can be done using <a href="https://kompose.io/" rel="nofollow noreferrer">Kompose</a> or another suitable tool.</p>
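<p>For illustration only (this is a sketch, not part of the original answer): after the Compose file is translated, the resulting Kubernetes objects for this service would look roughly like the following. The image, label and port are taken from the question; everything else is assumed.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: keyvault
  labels:
    app: keyvault
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keyvault
  template:
    metadata:
      labels:
        app: keyvault
    spec:
      containers:
        - name: keyvault-web
          image: david900412/keyvault_web:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8000   # matches the runserver 0.0.0.0:8000 command
---
apiVersion: v1
kind: Service
metadata:
  name: keyvault
spec:
  selector:
    app: keyvault
  ports:
    - port: 8000
      targetPort: 8000
</code></pre>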
Andrew Skorkin
<p>We are using the Job <code>ttlSecondsAfterFinished</code> attribute to automatically clean up finished jobs. When we had a very small number of jobs (10-50), the jobs (and their pods) would get cleaned up approximately 60 seconds after completion. However, now that we have ~5000 jobs running on our cluster, it takes 30+ minutes for a Job object to get cleaned up after completion.</p> <p>This is a problem because although the Jobs are just sitting there, not consuming resources, we do use a <code>ResourceQuota</code> (selector count/jobs.batch) to control our workload, and those completed jobs are taking up space in the <code>ResourceQuota</code>.</p> <p>I know that jobs only get marked for deletion once the TTL has passed, and are not guaranteed to be deleted immediately then, but 30 minutes is a very long time. What could be causing this long delay?</p> <p>Our logs indicate that our k8s API servers are not under heavy load, and that API response times are reasonable.</p>
C_Z_
<p><strong>Solution 1</strong></p> <p>How do you use the Job <code>ttlSecondsAfterFinished</code>? You can set <code>.spec.ttlSecondsAfterFinished</code> to the value you need. Below is an example from the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">official documentation</a>:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi-with-ttl spec: ttlSecondsAfterFinished: 100 template: spec: containers: - name: pi image: perl command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;] restartPolicy: Never </code></pre> <p>And please note this:</p> <blockquote> <p>Note that the TTL period, e.g. .spec.ttlSecondsAfterFinished field of Jobs, can be modified after the job is created or has finished. However, once the Job becomes eligible to be deleted (when the TTL has expired), the system won't guarantee that the Jobs will be kept, even if an update to extend the TTL returns a successful API response.</p> </blockquote> <p>For more information: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/#updating-ttl-seconds" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/#updating-ttl-seconds</a></p> <p><strong>Solution 2</strong></p> <p>As mentioned in the comments, you can try tuning <code>kube-controller-manager</code> and increase the number of TTL-after-finished controller workers that are allowed to sync concurrently by using the following <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">flag</a>:</p> <p><code>kube-controller-manager --concurrent-ttl-after-finished-syncs int32 Default: 5</code></p>
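<p>As an illustration of Solution 2 only (a sketch under the assumption of a kubeadm-style control plane where <code>kube-controller-manager</code> runs as a static pod; managed clusters usually don't expose this flag), raising the worker count would mean editing the static pod manifest on each control-plane node and letting the kubelet restart the component:</p> <pre><code># /etc/kubernetes/manifests/kube-controller-manager.yaml  (default kubeadm path - assumption)
spec:
  containers:
    - name: kube-controller-manager
      command:
        - kube-controller-manager
        # ... keep the existing flags ...
        - --concurrent-ttl-after-finished-syncs=20   # default is 5
</code></pre>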
Bazhikov
<p>My team doesn't have all our code in the same place locally but we are all working on the same service. This service depends on a few other libraries and during the development it would be great to have the live version of those libs mounted into the pod for faster iteration.</p> <p>How the path becomes dynamic doesn't matter, env var, config map, weird volume mount sorcery, etc...</p> <p>My current approach uses helm to template out the yaml. I was hoping to be able to do something like this:</p> <pre><code> volumes: - name: my-lib hostPath: path: $CODE_PATH/my_lib volumeMounts: - name: my-lib mountPath: /tmp/my_lib </code></pre> <p>Where my team members could externally define <code>CODE_PATH</code>, allowing them to point to where they keep their code. I'm not attached to a method for doing this. Currently, mine doesn't work anyway. I just need to be able to mount a host directory into a pod without statically defining an absolute path right in the yaml.</p>
LISTERINE
<p>According to the official documentation there are two ways to achieve this, but in both of them the <code>hostPath</code> has to be defined.</p> <p>The first approach is to use a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volume, which mounts a directory from the node's local disk into the pod; with Docker Desktop you also need to share that path under Preferences-&gt;File Sharing.</p> <p>The second approach is to use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage" rel="nofollow noreferrer">PersistentVolume</a>, where a cluster administrator creates the volumes and pods access them through <code>PersistentVolumeClaims</code>, a level of abstraction between the volume and its storage mechanism.</p>
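<p>Since the question already templates the manifests with Helm, one hedged way to make the path dynamic (a sketch; the <code>codePath</code> value and chart layout are assumptions, not from the original post) is to template the <code>hostPath</code> and let each team member pass their own path at install time:</p> <pre><code># values.yaml (hypothetical key)
codePath: /default/path/to/code

# templates/deployment.yaml - relevant fragment only
      volumes:
        - name: my-lib
          hostPath:
            path: {{ .Values.codePath }}/my_lib
      containers:
        - name: app
          volumeMounts:
            - name: my-lib
              mountPath: /tmp/my_lib
</code></pre> <p>Each developer can then run something like <code>helm upgrade --install myapp ./chart --set codePath=$CODE_PATH</code>, which keeps the absolute path out of the committed YAML.</p>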
Jakub Siemaszko
<p>I have a spring-boot postgres setup that I am trying to containerize and deploy in minikube. My pods and services show that they are up.</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE server-deployment-5bc57dcd4f-zrwzs 1/1 Running 0 14m postgres-7f887f4d7d-5b8v5 1/1 Running 0 25m </code></pre> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE server-service NodePort 10.100.21.232 &lt;none&gt; 8080:31457/TCP 15m postgres ClusterIP 10.97.19.125 &lt;none&gt; 5432/TCP 26m </code></pre> <pre><code>$ minikube service list |-------------|------------------|--------------|-----------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |-------------|------------------|--------------|-----------------------------| | default | kubernetes | No node port | | kube-system | kube-dns | No node port | | custom | server-service | http/8080 | http://192.168.59.106:31457 | | custom | postgres | No node port | |-------------|------------------|--------------|-----------------------------| </code></pre> <p>But when I try to hit any of my endpoints using postman, I get:</p> <pre><code>Could not send request. Error: connect ECONNREFUSED 192.168.59.106:31457 </code></pre> <p>I don't know where I am going wrong. I tried deploying the individual containers directly in docker (I had to modify some of the <code>application.properties</code> to get the rest server talking to the db container) and that works without a problem so clearly my server side code should not be a problem.</p> <p>Here is the yml for the rest-server:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: server-deployment namespace: custom spec: selector: matchLabels: app: server-deployment template: metadata: name: server-deployment labels: app: server-deployment spec: containers: - name: server-deployment image: aruns1494/rest-server-k8s:latest env: - name: POSTGRES_USER valueFrom: configMapKeyRef: name: postgres-config key: postgres_user - name: POSTGRES_PASSWORD valueFrom: configMapKeyRef: name: postgres-config key: postgres_password - name: POSTGRES_SERVICE valueFrom: configMapKeyRef: name: postgres-config key: postgres_service --- apiVersion: v1 kind: Service metadata: name: server-service namespace: custom spec: selector: name: server-deployment ports: - name: http port: 8080 type: NodePort </code></pre> <p>I have not changed the spring boot's default port so I expect it to work on 8080. I tried connecting to that URL through chrome and Firefox and I get the same error message. I expect it to fall back to a default error message page when I try to hit the <code>/</code> endpoint.</p> <p>I did look up several online articles but none of them seem to help. I am also attaching my kube-system pods if that helps:</p> <pre><code>$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-78fcd69978-x6mv6 1/1 Running 0 39m etcd-minikube 1/1 Running 0 40m kube-apiserver-minikube 1/1 Running 0 40m kube-controller-manager-minikube 1/1 Running 0 40m kube-proxy-dnr8p 1/1 Running 0 39m kube-scheduler-minikube 1/1 Running 0 40m storage-provisioner 1/1 Running 1 (39m ago) 40m </code></pre>
whiplash
<p>My suggestion is to check that the Deployment and the Service use the same labels and selectors, because right now the pod label in the Deployment config is <code>app: server-deployment</code>, while the selector in the Service config is <code>name: server-deployment</code>.</p> <p>If we want to keep the <code>name: server-deployment</code> selector for the Service, then we need to update the Deployment as shown below (<code>matchLabels</code> and <code>labels</code> fields):</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: server-deployment namespace: custom spec: selector: matchLabels: name: server-deployment template: metadata: name: server-deployment labels: name: server-deployment spec: containers: ... </code></pre>
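<p>Alternatively (a sketch, assuming you would rather keep the Deployment's existing <code>app: server-deployment</code> label), the Service selector can be changed instead; adding an explicit <code>targetPort</code> also makes it clear which container port the traffic should reach:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: server-service
  namespace: custom
spec:
  type: NodePort
  selector:
    app: server-deployment    # must match the pod template labels
  ports:
    - name: http
      port: 8080
      targetPort: 8080        # Spring Boot default port, as in the question
</code></pre>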
Andrew Skorkin
<p>In my Kubernetes cluster, Rancher never creates Persistent Volumes after creating a Persistent Volume Claim and applying a Pod.</p> <p><strong>Solution/Work around</strong> available under second update.</p> <p>The cluster has been installed with Kubespray. The configuration for local path provisioning in <code>inventory/myclster/group_vars/k8s-cluster/addons.yml</code>:</p> <pre><code># Rancher Local Path Provisioner local_path_provisioner_enabled: true # local_path_provisioner_namespace: &quot;local-path-storage&quot; # local_path_provisioner_storage_class: &quot;local-path&quot; # local_path_provisioner_reclaim_policy: Delete # local_path_provisioner_claim_root: /opt/local-path-provisioner/ # local_path_provisioner_debug: false # local_path_provisioner_image_repo: &quot;rancher/local-path-provisioner&quot; # local_path_provisioner_image_tag: &quot;v0.0.14&quot; # local_path_provisioner_helper_image_repo: &quot;busybox&quot; # local_path_provisioner_helper_image_tag: &quot;latest&quot; # Local volume provisioner deployment local_volume_provisioner_enabled: false # local_volume_provisioner_namespace: kube-system # local_volume_provisioner_nodelabels: # - kubernetes.io/hostname # - topology.kubernetes.io/region # - topology.kubernetes.io/zone # local_volume_provisioner_storage_classes: # local-storage: # host_dir: /mnt/disks # mount_dir: /mnt/disks # volume_mode: Filesystem # fs_type: ext4 # fast-disks: # host_dir: /mnt/fast-disks # mount_dir: /mnt/fast-disks # block_cleaner_command: # - &quot;/scripts/shred.sh&quot; # - &quot;2&quot; # volume_mode: Filesystem # fs_type: ext4 </code></pre> <p>Steps to recreate the problem:</p> <p>Create PVC:<br /> <code>kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc.yaml</code></p> <p>Result: Created PVC, name <code>local-path-pvc</code>, status Pending, storage class <code>local-path</code></p> <p>Create Pod<br /> <code>kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod.yaml</code></p> <p>Result:<br /> Created Pod, name <code>create-pvc-123</code>, status Waiting:ContainerCreating.</p> <p>Describing Pod with <code>kubectl</code>:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling &lt;unknown&gt; error while running &quot;VolumeBinding&quot; prebind plugin for pod &quot;create-pvc-123&quot;: Failed to bind volumes: timed out waiting for the condition </code></pre> <p>I have tried different charts, and in all cases no Persistent Volume has been created. The ServiceAccount local-path-provisioner-service-account exists. The Deployment of the local path provisioner has one Pod.</p> <p><strong>UPDATE</strong><br /> On the server the logs contains several errors, <code>sudo journalctl -xeu kubelet | grep 'fail'</code>:</p> <pre><code>... Oct 12 16:53:36 node1 kubelet[274306]: E1012 16:53:36.000246 274306 nestedpendingoperations.go:301] Operation for &quot;{volumeName:kubernetes.io/configmap/71b44438-fadb-4859-a788-8d911dfab2db-script podName:71b44438-fadb-4859-a788-8d911dfab2db nodeName:}&quot; failed. No retries permitted until 2020-10-12 16:54:40.000164134 +0200 CEST m=+9380.643933974 (durationBeforeRetry 1m4s). 
Error: &quot;MountVolume.SetUp failed for volume \&quot;script\&quot; (UniqueName: \&quot;kubernetes.io/configmap/71b44438-fadb-4859-a788-8d911dfab2db-script\&quot;) pod \&quot;create-pvc-80d115d9-98fd-4fcd-9e41-55b74f809efb\&quot; (UID: \&quot;71b44438-fadb-4859-a788-8d911dfab2db\&quot;) : configmap references non-existent config key: setup&quot; Oct 12 16:53:36 node1 kubelet[274306]: E1012 16:53:36.404015 274306 nestedpendingoperations.go:301] Operation for &quot;{volumeName:kubernetes.io/configmap/424b196e-5132-479a-8b95-63e41e0ea124-script podName:424b196e-5132-479a-8b95-63e41e0ea124 nodeName:}&quot; failed. No retries permitted until 2020-10-12 16:54:40.403980548 +0200 CEST m=+9381.047750378 (durationBeforeRetry 1m4s). Error: &quot;MountVolume.SetUp failed for volume \&quot;script\&quot; (UniqueName: \&quot;kubernetes.io/configmap/424b196e-5132-479a-8b95-63e41e0ea124-script\&quot;) pod \&quot;create-pvc-3b132d90-8812-4391-bc29-966ee47bee0d\&quot; (UID: \&quot;424b196e-5132-479a-8b95-63e41e0ea124\&quot;) : configmap references non-existent config key: setup&quot; Oct 12 16:54:40 node1 kubelet[274306]: E1012 16:54:40.464999 274306 nestedpendingoperations.go:301] Operation for &quot;{volumeName:kubernetes.io/configmap/424b196e-5132-479a-8b95-63e41e0ea124-script podName:424b196e-5132-479a-8b95-63e41e0ea124 nodeName:}&quot; failed. No retries permitted until 2020-10-12 16:56:42.464936126 +0200 CEST m=+9503.108706016 (durationBeforeRetry 2m2s). Error: &quot;MountVolume.SetUp failed for volume \&quot;script\&quot; (UniqueName: \&quot;kubernetes.io/configmap/424b196e-5132-479a-8b95-63e41e0ea124-script\&quot;) pod \&quot;create-pvc-3b132d90-8812-4391-bc29-966ee47bee0d\&quot; (UID: \&quot;424b196e-5132-479a-8b95-63e41e0ea124\&quot;) : configmap references non-existent config key: setup&quot; </code></pre> <p><strong>UPDATE - solution?</strong><br /> I changed the ConfigMap 'local-path-config' as described in the <a href="https://github.com/rancher/local-path-provisioner#configuration" rel="nofollow noreferrer">docs</a>.<br /> However, the <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes-apps/external_provisioner/local_path_provisioner/templates/local-path-storage-cm.yml.j2" rel="nofollow noreferrer">jinja template</a> in Kubespray lacks the properties 'setup' and 'teardown' in the configuration.</p> <p>When I added de <code>setup</code> and <code>teardown</code> properties Kubernetes created the PV and the Pod started.</p> <p><strong>What's the reason Kubespray doesn't provide these properties in the template?</strong></p>
erwineberhard
<p>The default version that Kubespray uses is <code>local_path_provisioner_image_tag: &quot;v0.0.14&quot;</code>. I suppose the source for the template is <a href="https://github.com/rancher/local-path-provisioner/blob/v0.0.14/deploy/example-config.yaml" rel="nofollow noreferrer">https://github.com/rancher/local-path-provisioner/blob/v0.0.14/deploy/example-config.yaml</a>. It doesn't have the <code>setup</code> and <code>teardown</code> properties; they were introduced in v0.0.15.</p>
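<p>For illustration only (a sketch: the exact script bodies, and the arguments or environment variables they receive, differ between provisioner versions, so copy them from the <code>example-config.yaml</code> that matches the image tag you actually run), the patched ConfigMap needs <code>setup</code> and <code>teardown</code> keys next to <code>config.json</code>:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |
    # keep the nodePathMap JSON generated by Kubespray here (placeholder)
  setup: |
    #!/bin/sh
    # script that creates the volume directory on the node;
    # copy the body from the version-matched example-config.yaml
  teardown: |
    #!/bin/sh
    # script that removes the volume directory on the node;
    # copy the body from the version-matched example-config.yaml
</code></pre>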
Dmitry S
<p>I am working with ingress-nginx in kubernetes to set up a server. The issue is that the paths are not routing at all and I get a 404 error from the nginx server on any request I make. Below is my code for ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-srv annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: # defaultBackend: # service: # name: auth-srv # port: # number: 3000 rules: - host: app.dev - http: paths: - pathType: Prefix path: /api/auth/?(.*) backend: service: name: auth-srv port: number: 3000 - path: /api/coms/?(.*) pathType: Prefix backend: service: name: coms-srv port: number: 3000 </code></pre> <p>If I uncomment the default backend service I get a response but as soon as I remove it I get the 404 nginx error. So I know its connecting to the services I set.</p> <p>I don't know where I'm going wrong how to go about fixing this as I'm copying straight from the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">docs</a>. Any help or insight would be great. Thank you in advance!</p> <p><strong>Edit 1:</strong> I removed the regex from the path and commented out the /api/auth path so no requests should be going to the auth-srv. For some reason, all requests route to the auth-srv even though there is no mapping to it. NOTE: Both the auth and coms pods/services are running in the background, just ingress-nginx still isn't routing properly.</p>
Gerry Saporito
<p>So the reason this wasn't routing properly was this part:</p> <pre><code> - host: app.dev - http: </code></pre> <p>The &quot;-&quot; in front of the &quot;http&quot; made the controller treat it as its own rule, so the following paths ended up with a host of &quot;*&quot;. After I removed the &quot;-&quot; in front of the &quot;http&quot;, the rules were set to the proper host of app.dev and it started routing accordingly.</p> <p>Thank you for your help everyone! What a long day it has been :')</p>
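<p>For reference, the corrected <code>rules</code> block (derived from the manifest in the question) looks like this: <code>http</code> now sits under the <code>app.dev</code> host entry instead of starting a new list item.</p> <pre><code>spec:
  rules:
    - host: app.dev
      http:
        paths:
          - path: /api/auth/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
          - path: /api/coms/?(.*)
            pathType: Prefix
            backend:
              service:
                name: coms-srv
                port:
                  number: 3000
</code></pre>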
Gerry Saporito
<p>I'm not sure if this is the right place to ask, but I need some clarification. I have a Kubernetes cluster, and since the frontend runs in the client's web browser, I am wondering: am I able to expose the API only internally and still make HTTP requests to it from the client, or am I only able to expose the service using a NodePort, Ingress, or load balancer, which exposes it to the internet?</p> <p>Thanks in advance for the feedback.</p>
Conor Donohoe
<p>You can expose it to the frontend via an ingress and also (at the same time) internally for other services/pods/containers you have running inside the cluster; it all depends on how you configure it.</p> <p>Assuming you only want it to run internally, all you have to do is not create an ingress. If you want to expose it, then do create the ingress. In both cases you should always create the 'service', as that's what exposes your pods to the rest of the cluster (and, via an ingress, outside it).</p> <p>Service: <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p> <p>Ingress: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p> <p>Hope that clarifies it!</p>
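<p>As a minimal illustration (all names here are hypothetical), an internal-only API just needs a ClusterIP Service; other pods can reach it at <code>http://my-api.my-namespace.svc.cluster.local</code>, but a browser on the internet cannot until you add an Ingress, NodePort or LoadBalancer in front of it:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-api             # hypothetical name
  namespace: my-namespace  # hypothetical namespace
spec:
  type: ClusterIP          # the default: reachable only inside the cluster
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080     # the container port of the API
</code></pre>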
DevRacker
<p>Major features of service meshes are</p> <ol> <li>Service Discovery</li> <li>Configuration management</li> </ol> <p>Both of them are already provided by Kubernetes.<br /> <strong>Why do we need a service mesh then?</strong></p> <p>*I understand that for more complex tasks, e.g. zoning, security, complex load balancing and routing, a service mesh is the right tool.</p>
whowhenhow
<p>In short, a service mesh such as Istio makes it easier to establish and manage communication between services (microservices), especially when you have a large number of them, and it also provides security and other features. If you have just a couple of services, you might not need one.</p>
Howie S. Nguyen
<p>I have deployed a multiconnect setup of the WhatsApp Business API client in Production Kubernetes enviroment, using the documentation for Minikube <a href="https://developers.facebook.com/docs/whatsapp/on-premises/get-started/installation/dev-multiconnect-on-minikube#step-2--get-the-whatsapp-business-api-client-configuration-files" rel="nofollow noreferrer">Developer Setup: Multiconnect on Minikube</a> as referece.</p> <p>But when doing the first login, in order to get the auth token, i get the following error on Postman:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;meta&quot;: { &quot;version&quot;: &quot;v2.37.1&quot;, &quot;api_status&quot;: &quot;stable&quot; }, &quot;errors&quot;: [ { &quot;code&quot;: 1006, &quot;title&quot;: &quot;Resource not found&quot;, &quot;details&quot;: &quot;URL path not found&quot; } ] } </code></pre> <p>All the containers are running:</p> <pre class="lang-json prettyprint-override"><code>NAME READY STATUS RESTARTS AGE mysql-dev-6cdc47979f-6f6t5 1/1 Running 0 2d23h whatsapp-coreapp-deployment-7bb4c6b8bc-qw946 1/1 Running 0 24m whatsapp-coreapp-deployment-7bb4c6b8bc-zkj5z 1/1 Running 0 24m whatsapp-master-deployment-84ffbdd48d-4rw8w 1/1 Running 0 24m whatsapp-master-deployment-84ffbdd48d-zwvlq 1/1 Running 0 24m whatsapp-web-deployment-74b99f4579-s44lp 1/1 Running 1 25m whatsapp-web-deployment-74b99f4579-sn55t 1/1 Running 0 25m </code></pre> <p>And the given error happens on every call on Postman, not only when logging in (check health, get users, login, login admin, etc), all of them gives the same error:</p> <pre class="lang-json prettyprint-override"><code>&quot;code&quot;: 1006, &quot;title&quot;: &quot;Resource not found&quot;, &quot;details&quot;: &quot;URL path not found&quot; </code></pre> <p>I've checked the container logs and i'ts returning 404 for every path called:</p> <pre class="lang-json prettyprint-override"><code>[2021-12-20 12:40:57.546610] app.INFO: [dd610cd0d21e431fafafc737c323565e] Response: {&quot;meta&quot;:{&quot;version&quot;:&quot;v2.37.1&quot;,&quot;api_status&quot;:&quot;stable&quot;},&quot;errors&quot;:[{&quot;code&quot;:1006,&quot;title&quot;:&quot;Resource not found&quot;,&quot;details&quot;:&quot;URL path not found&quot;}]} [] [2021-12-20 12:40:57.548893] app.INFO: [dd610cd0d21e431fafafc737c323565e] Request POST_//v1/users/login returns 404 in 530.65 ms [] [] [2021-12-20 12:45:18.556704] app.ERROR: [4018a09ea0084f9fa82f843905de2b00] Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for &quot;GET //v1/stats/app&quot; (uncaught exception) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php line 2 {&quot;exception&quot;:&quot;[object] (Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException(code: 0): No route found for \&quot;GET //v1/stats/app\&quot; at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php:2, Symfony\\Component\\Routing\\Exception\\ResourceNotFoundException(code: 0): No routes found for \&quot;//v1/stats/app\&quot;. 
at /var/www/html/vendor/symfony/routing/Matcher/UrlMatcher.php:2)&quot;} [] [2021-12-20 12:45:18.557154] app.INFO: [4018a09ea0084f9fa82f843905de2b00] Response: {&quot;meta&quot;:{&quot;version&quot;:&quot;v2.37.1&quot;,&quot;api_status&quot;:&quot;stable&quot;},&quot;errors&quot;:[{&quot;code&quot;:1006,&quot;title&quot;:&quot;Resource not found&quot;,&quot;details&quot;:&quot;URL path not found&quot;}]} [] [2021-12-20 12:45:18.557462] app.INFO: [4018a09ea0084f9fa82f843905de2b00] Request GET_//v1/stats/app returns 404 in 84.57 ms [] [] [2021-12-20 12:52:11.890507] app.ERROR: [5a84217237cc49e8bb9df953ac32c799] Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for &quot;GET /auth/v1/login/&quot; (uncaught exception) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php line 2 {&quot;exception&quot;:&quot;[object] (Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException(code: 0): No route found for \&quot;GET /auth/v1/login/\&quot; at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php:2, Symfony\\Component\\Routing\\Exception\\ResourceNotFoundException(code: 0): No routes found for \&quot;/auth/v1/login/\&quot;. at /var/www/html/vendor/symfony/routing/Matcher/UrlMatcher.php:2)&quot;} [] [2021-12-20 12:52:11.890825] app.INFO: [5a84217237cc49e8bb9df953ac32c799] Response: {&quot;meta&quot;:{&quot;version&quot;:&quot;v2.37.1&quot;,&quot;api_status&quot;:&quot;stable&quot;},&quot;errors&quot;:[{&quot;code&quot;:1006,&quot;title&quot;:&quot;Resource not found&quot;,&quot;details&quot;:&quot;URL path not found&quot;}]} [] [2021-12-20 12:52:11.891043] app.INFO: [5a84217237cc49e8bb9df953ac32c799] Request GET_/auth/v1/login/ returns 404 in 84.07 ms [] [] [2021-12-20 12:52:12.088612] app.ERROR: [2b26c43f700640f190977bb797ec4448] Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for &quot;GET /favicon.ico&quot; (from &quot;https://192.168.88.80:31599/auth/v1/login/&quot;) (uncaught exception) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php line 2 {&quot;exception&quot;:&quot;[object] (Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException(code: 0): No route found for \&quot;GET /favicon.ico\&quot; (from \&quot;https://192.168.88.80:31599/auth/v1/login/\&quot;) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php:2, Symfony\\Component\\Routing\\Exception\\ResourceNotFoundException(code: 0): No routes found for \&quot;/favicon.ico\&quot;. at /var/www/html/vendor/symfony/routing/Matcher/UrlMatcher.php:2)&quot;} [] [2021-12-20 12:52:12.088863] app.INFO: [2b26c43f700640f190977bb797ec4448] Response: {&quot;meta&quot;:{&quot;version&quot;:&quot;v2.37.1&quot;,&quot;api_status&quot;:&quot;stable&quot;},&quot;errors&quot;:[{&quot;code&quot;:1006,&quot;title&quot;:&quot;Resource not found&quot;,&quot;details&quot;:&quot;URL path not found&quot;}]} [] [2021-12-20 12:52:12.089117] app.INFO: [2b26c43f700640f190977bb797ec4448] Request GET_/favicon.ico returns 404 in 82.22 ms [] [] </code></pre> <p>Edit: Here's also the logs for the second replica of the webapp deployment</p> <pre class="lang-json prettyprint-override"><code>Web server started Starting web monitor loop ... ==&gt; /var/log/lighttpd/error.log &lt;== 2021-12-20 12:38:05: (server.c.1488) server started (lighttpd/1.4.55) tail: cannot open '/var/log/whatsapp/web.log' for reading: No such file or directory Setting up watches. Watches established. 
tail: '/var/log/whatsapp/web.log' has appeared; following new file [2021-12-20 12:52:24.295383] app.ERROR: [7c73b15c0a6c488fb5ac7703a4b337ec] Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for &quot;GET /teste/&quot; (uncaught exception) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php line 2 {&quot;exception&quot;:&quot;[object] (Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException(code: 0): No route found for \&quot;GET /teste/\&quot; at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php:2, Symfony\\Component\\Routing\\Exception\\ResourceNotFoundException(code: 0): No routes found for \&quot;/teste/\&quot;. at /var/www/html/vendor/symfony/routing/Matcher/UrlMatcher.php:2)&quot;} [] [2021-12-20 12:52:24.302930] app.INFO: [7c73b15c0a6c488fb5ac7703a4b337ec] Response: {&quot;meta&quot;:{&quot;version&quot;:&quot;v2.37.1&quot;,&quot;api_status&quot;:&quot;stable&quot;},&quot;errors&quot;:[{&quot;code&quot;:1006,&quot;title&quot;:&quot;Resource not found&quot;,&quot;details&quot;:&quot;URL path not found&quot;}]} [] [2021-12-20 12:52:24.307525] app.INFO: [7c73b15c0a6c488fb5ac7703a4b337ec] Request GET_/teste/ returns 404 in 201.72 ms [] [] [2021-12-21 11:04:28.642518] app.ERROR: [7181dfea9e7b4e51adb41fc41571253f] Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for &quot;POST //v1/users/login&quot; (uncaught exception) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php line 2 {&quot;exception&quot;:&quot;[object] (Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException(code: 0): No route found for \&quot;POST //v1/users/login\&quot; at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php:2, Symfony\\Component\\Routing\\Exception\\ResourceNotFoundException(code: 0): No routes found for \&quot;//v1/users/login\&quot;. at /var/www/html/vendor/symfony/routing/Matcher/UrlMatcher.php:2)&quot;} [] [2021-12-21 11:04:28.644938] app.INFO: [7181dfea9e7b4e51adb41fc41571253f] Response: {&quot;meta&quot;:{&quot;version&quot;:&quot;v2.37.1&quot;,&quot;api_status&quot;:&quot;stable&quot;},&quot;errors&quot;:[{&quot;code&quot;:1006,&quot;title&quot;:&quot;Resource not found&quot;,&quot;details&quot;:&quot;URL path not found&quot;}]} [] [2021-12-21 11:04:28.645501] app.INFO: [7181dfea9e7b4e51adb41fc41571253f] Request POST_//v1/users/login returns 404 in 87.15 ms [] [] [2021-12-21 11:05:29.180215] app.ERROR: [8ce236970e404d7b90d86ad53e774105] Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for &quot;GET /auth/v1/login/&quot; (uncaught exception) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php line 2 {&quot;exception&quot;:&quot;[object] (Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException(code: 0): No route found for \&quot;GET /auth/v1/login/\&quot; at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php:2, Symfony\\Component\\Routing\\Exception\\ResourceNotFoundException(code: 0): No routes found for \&quot;/auth/v1/login/\&quot;. 
at /var/www/html/vendor/symfony/routing/Matcher/UrlMatcher.php:2)&quot;} [] [2021-12-21 11:05:29.180746] app.INFO: [8ce236970e404d7b90d86ad53e774105] Response: {&quot;meta&quot;:{&quot;version&quot;:&quot;v2.37.1&quot;,&quot;api_status&quot;:&quot;stable&quot;},&quot;errors&quot;:[{&quot;code&quot;:1006,&quot;title&quot;:&quot;Resource not found&quot;,&quot;details&quot;:&quot;URL path not found&quot;}]} [] [2021-12-21 11:05:29.181257] app.INFO: [8ce236970e404d7b90d86ad53e774105] Request GET_/auth/v1/login/ returns 404 in 26.28 ms [] [] [2021-12-21 11:05:29.332427] app.ERROR: [feb8c3253624422383421e253fc8ce73] Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for &quot;GET /favicon.ico&quot; (from &quot;https://192.168.88.80:31599/auth/v1/login/&quot;) (uncaught exception) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php line 2 {&quot;exception&quot;:&quot;[object] (Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException(code: 0): No route found for \&quot;GET /favicon.ico\&quot; (from \&quot;https://192.168.88.80:31599/auth/v1/login/\&quot;) at /var/www/html/vendor/symfony/http-kernel/EventListener/RouterListener.php:2, Symfony\\Component\\Routing\\Exception\\ResourceNotFoundException(code: 0): No routes found for \&quot;/favicon.ico\&quot;. at /var/www/html/vendor/symfony/routing/Matcher/UrlMatcher.php:2)&quot;} [] [2021-12-21 11:05:29.332971] app.INFO: [feb8c3253624422383421e253fc8ce73] Response: {&quot;meta&quot;:{&quot;version&quot;:&quot;v2.37.1&quot;,&quot;api_status&quot;:&quot;stable&quot;},&quot;errors&quot;:[{&quot;code&quot;:1006,&quot;title&quot;:&quot;Resource not found&quot;,&quot;details&quot;:&quot;URL path not found&quot;}]} [] [2021-12-21 11:05:29.333465] app.INFO: [feb8c3253624422383421e253fc8ce73] Request GET_/favicon.ico returns 404 in 21.29 ms [] [] </code></pre> <p>And here is the service that got created for webapp:</p> <pre class="lang-json prettyprint-override"><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE whatsapp-web-service NodePort 10.110.67.166 &lt;none&gt; 443:31599/TCP 22h </code></pre> <p>What could this be?</p>
ReaperClown
<p>You need to generate an admin auth token to call the APIs. I did the following steps and it worked for me. Note: I used the built-in single-node Kubernetes cluster in Docker Desktop for a POC on this issue.</p> <pre><code>1. Download the Postman collection from this link: https://github.com/fbsamples/WhatsApp-Business-API-Postman-Collection 2. Follow the steps in this document: https://developers.facebook.com/docs/whatsapp/on-premises/get-started/postman </code></pre> <p>Do let me know if it worked for you as well.</p>
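<p>For reference, a hedged sketch of the login call itself (the host and port come from the question's logs; the credentials and exact payload depend on your WhatsApp Business API version, so verify against the on-premises docs). Note that the failing requests in the question are logged as <code>//v1/users/login</code> with a double slash, which usually indicates a trailing slash in the base URL configured in Postman:</p> <pre><code># Placeholder credentials - the on-premises API uses HTTP Basic auth for the first login.
curl -k -X POST "https://192.168.88.80:31599/v1/users/login" \
  -u "admin:your-password" \
  -H "Content-Type: application/json"

# A successful response contains users[0].token, used afterwards as
#   Authorization: Bearer &lt;token&gt;
# on the other API calls.
</code></pre>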
Prince Arora
<p>We're building out a release pipeline in Azure DevOps which pushes to a Kubernetes cluster. The first step in the pipeline is to run an Azure CLI script which sets up all the resources - this is an idempotent script so we can run it each time we run the release pipeline. Our intention is to have a standardised release pipeline which we can run against several clusters, existing and new.</p> <p>The final step in the pipeline is to run the Kubectl task with the <code>apply</code> command.</p> <p>However, this pipeline task requires specifying in advance (at the time of building the pipeline) the names of the resource group and cluster against which it should be executed. But the point of the idempotent script in the first step is to ensure that the resources exist, and to create them if they don't.</p> <p>So there's the possibility that neither the resource group nor the cluster will exist before the pipeline is run.</p> <p>How can I achieve this in a DevOps pipeline if the Kubectl task requires a resource group and a cluster to be specified at design time?</p> <p><a href="https://i.stack.imgur.com/uTNyV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uTNyV.png" alt="enter image description here" /></a></p>
awj
<p>This <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes?view=azure-devops" rel="nofollow noreferrer">Kubectl task</a> works with the service connection type <strong>Azure Resource Manager</strong>, and it requires you to select the Resource group field and the Kubernetes cluster field after you select the Azure subscription, as below. <a href="https://i.stack.imgur.com/QLOqP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QLOqP.png" alt="enter image description here" /></a></p> <p>After testing, we found that these 2 fields support variables. Thus you can use variables in these 2 fields and use a <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/powershell?view=azure-devops" rel="nofollow noreferrer">PowerShell task</a> to set the variable values before this <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes?view=azure-devops" rel="nofollow noreferrer">Kubectl task</a> runs. See <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&amp;tabs=yaml%2Cbatch#set-variables-in-scripts" rel="nofollow noreferrer">Set variables in scripts</a> for details.</p>
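<p>A hedged sketch of that idea in YAML pipeline syntax (for a classic release pipeline the equivalent is to type the variable macros into the task's UI fields). The <code>Kubernetes@1</code> input names are shown as I understand them, so verify them against the task reference for your agent version; the resource names and service connection are placeholders:</p> <pre><code>steps:
  - powershell: |
      # These values could just as well come from the output of the idempotent Azure CLI step.
      Write-Host "##vso[task.setvariable variable=resourceGroup]my-rg"
      Write-Host "##vso[task.setvariable variable=clusterName]my-aks"
    displayName: Resolve cluster details

  - task: Kubernetes@1
    displayName: kubectl apply
    inputs:
      connectionType: Azure Resource Manager
      azureSubscriptionEndpoint: my-azure-service-connection   # placeholder
      azureResourceGroup: $(resourceGroup)
      kubernetesCluster: $(clusterName)
      command: apply
      arguments: -f manifests/
</code></pre>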
Edward Han-MSFT
<p>A node on my 5-node cluster had memory usage peak at ~90% last night. Looking around with <code>kubectl</code>, a single pod (in a 1-replica deployment) was the culprit of the high memory usage and was evicted.</p> <p>However, logs show that the pod was evicted about 10 times (AGE corresponds to around the time when memory usage peaked, all evictions on the same node)</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE example-namespace example-deployment-84f8d7b6d9-2qtwr 0/1 Evicted 0 14h example-namespace example-deployment-84f8d7b6d9-6k2pn 0/1 Evicted 0 14h example-namespace example-deployment-84f8d7b6d9-7sbw5 0/1 Evicted 0 14h example-namespace example-deployment-84f8d7b6d9-8kcbg 0/1 Evicted 0 14h example-namespace example-deployment-84f8d7b6d9-9fw2f 0/1 Evicted 0 14h example-namespace example-deployment-84f8d7b6d9-bgrvv 0/1 Evicted 0 14h ... </code></pre> <p>node memory usage graph: <a href="https://i.stack.imgur.com/EEUNi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EEUNi.png" alt="mem_usg_graph" /></a></p> <pre><code>Status: Failed Reason: Evicted Message: Pod The node had condition: [MemoryPressure]. </code></pre> <p>My question is to do with how or why this situation would happen, and/or what steps can I take to debug and figure out why the pod was repeatedly evicted? The pod uses an in-memory database so it makes sense that after some time it eats up a lot of memory, but it's memory usage on boot shouldn't be abnormal at all.</p> <p>My intuition would have been that the high memory usage pod gets evicted, deployment replaces the pod, new pod isn't using that much memory, all is fine. But the eviction happened many times, which doesn't make sense to me.</p>
Atte
<p>The simplest steps are to run the following commands to debug and read the logs from the specific Pod.</p> <p>Look at the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods" rel="nofollow noreferrer">Pod's states and last restarts</a>:</p> <pre><code>kubectl describe pods ${POD_NAME} </code></pre> <p>Look for its node name and run the same for the node:</p> <pre><code>kubectl describe node ${NODE_NAME} </code></pre> <p>You will see some information in the <code>Conditions</code> section.</p> <p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs" rel="nofollow noreferrer">Examine pod logs</a>:</p> <pre><code>kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} </code></pre> <p>If you want to rerun your pod and watch the logs directly, rerun it and run:</p> <pre><code>kubectl logs ${POD_NAME} -f </code></pre> <p>More info on the <code>kubectl logs</code> command and its flags is available <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">here</a>.</p>
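<p>Beyond debugging, one common way to make this scenario more predictable (a general sketch, not something stated in the original answer; names and sizes are placeholders) is to give the memory-hungry container explicit requests and limits. The scheduler then reserves that memory on the node, and if the container exceeds its limit it is OOM-killed and restarted in place, rather than repeatedly pushing the whole node into <code>MemoryPressure</code> and triggering evictions:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: in-memory-db            # placeholder name
          image: example/image:latest   # placeholder image
          resources:
            requests:
              memory: "2Gi"   # what the scheduler reserves on the node
            limits:
              memory: "4Gi"   # hard cap; exceeding it OOM-kills this container only
</code></pre>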
Bazhikov
<p>I am trying to set up an AWS cluster. I installed kubectl and configured the AWS CLI with credentials. When I try to display pods or run any kubectl command, I get this error:</p> <pre><code>revaa@revaa-Lenovo-E41-25:~$ kubectl get pod Unable to connect to the server: dial tcp 10.0.12.77:443: i/o timeout </code></pre> <p>How can I resolve this?</p>
Sathya
<p>The IP used is private, so it cannot be reached from outside AWS. Change your cluster's API server endpoint access to public. That resolved it.</p>
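<p>For illustration (a sketch with placeholder names and region; double-check the flags against the EKS documentation, and consider restricting <code>publicAccessCidrs</code> if you do open the endpoint), enabling public API endpoint access and refreshing the kubeconfig can look like this:</p> <pre><code># Enable public (and keep private) API endpoint access - placeholder cluster name and region.
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true

# Once the update finishes, point kubectl at the cluster again and retry.
aws eks update-kubeconfig --region us-east-1 --name my-cluster
kubectl get pods
</code></pre>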
Sathya
<p>I am running Spark Submit through a pod within cluster on Kubernetes with the following script:</p> <pre><code>/opt/spark/bin/spark-submit --master k8s://someCluster --deploy-mode cluster --name someName --class some.class --conf spark.driver.userClassPathFirst=true --conf spark.kubernetes.namespace=someNamespace --conf spark.kubernetes.container.image=someImage --conf spark.kubernetes.container.image.pullSecrets=image-pull-secret --conf spark.kubernetes.container.image.pullPolicy=Always --conf spark.kubernetes.authenticate.submission.oauthTokenFile=/var/run/secrets/kubernetes.io/serviceaccount/token --conf spark.kubernetes.authenticate.driver.serviceAccountName=someServiceAccount --conf spark.driver.port=7078 --conf spark.blockManager.port=7079 local:////someApp.jar </code></pre> <p>The script runs fine and the driver pod starts along with the auto-generated service, with ports 7078, 7079, and 4040 plus selector that matches the label that was added to the driver pod. However, the svc has no endpoints.</p> <p>The executor then starts but never succeeds due to the following error below:</p> <pre><code>Exception in thread &quot;main&quot; java.lang.reflect.UndeclaredThrowableException at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1748) at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:61) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:283) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:272) at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala) Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:302) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$3(CoarseGrainedExecutorBackend.scala:303) at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.java:23) at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877) at scala.collection.immutable.Range.foreach(Range.scala:158) at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$1(CoarseGrainedExecutorBackend.scala:301) at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:62) at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:61) at java.base/java.security.AccessController.doPrivileged(Native Method) at java.base/javax.security.auth.Subject.doAs(Subject.java:423) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ... 
4 more Caused by: java.io.IOException: Failed to connect to drivername-svc.namespace.svc:7078 at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253) at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195) at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204) at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202) at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: java.net.UnknownHostException: drivername-svc.namespace.svc at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298) at java.base/java.net.InetAddress.getByName(InetAddress.java:1248) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:156) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:153) at java.base/java.security.AccessController.doPrivileged(Native Method) at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:153) at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:41) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:61) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:53) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:55) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:31) at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:106) at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200) at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:984) at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417) at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) </code></pre> <p>I also have a network policy which exposes the ports 7078 and 7079 through ingress and egress. Not sure what else I am missing.</p>
iAmHereForHelp
<p>It turned out the endpoint wasn't added to the service because the driver pod has multiple containers and one of them terminates early, which leaves the pod &quot;not ready&quot;; hence the service does not register the driver pod as an endpoint. Since the service has no ready endpoints, the executor pods trying to communicate with it fail with the <code>UnknownHostException</code> shown above.</p>
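<p>As a general illustration of that mechanism only (this is not the fix the author applied, and whether it is appropriate depends on why the extra container exits), a headless Service can be made to publish DNS records even for not-ready pods via <code>publishNotReadyAddresses</code>; the cleaner alternative is usually to make every container in the driver pod pass readiness, e.g. by fixing or removing the short-lived sidecar. The names and selector label below are assumptions based on the error message:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: drivername-svc            # taken from the error message; adjust to your driver service
  namespace: namespace
spec:
  clusterIP: None                 # headless, like the Spark-generated driver service
  publishNotReadyAddresses: true  # return DNS records even while the pod is not Ready
  selector:
    spark-role: driver            # assumption - match whatever labels your driver pod carries
  ports:
    - name: driver-rpc-port
      port: 7078
    - name: blockmanager
      port: 7079
</code></pre>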
iAmHereForHelp
<p>The application was using the Docker CLI to build and then push an image to Azure Container Registry. This used to work fine on Kubernetes using a Python module and <code>docker.sock</code>, but since the cluster was upgraded the Docker daemon is gone. I'm guessing the Kubernetes backend no longer uses Docker or has it installed. Also, since Docker support is going away in Kubernetes (I think it said 1.24), I want to get away from relying on Docker for the build.</p> <p>So the application, when it was working, was a Python application running in a Docker container. It would take the Dockerfile, build it, and push the image to Azure Container Registry. There are files that get copied into the image via the Dockerfile, and they all exist in the same directory as the Dockerfile.</p> <p>Does anyone know of different methods to achieve this?</p> <p>I've been looking at Azure ACR Tasks but I'm not really sure how all the files get copied over to a task, and I have not been able to find any examples.</p>
Kyle N
<p>I can confirm that running an Azure ACR Task (Multi-Task or Quick Task) will copy the files over when the command is executed. We're using <a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-tasks-overview#task-scenarios" rel="nofollow noreferrer">Azure ACR Quick Tasks</a> to achieve something similar. If you're just trying to do the equivalent of <code>docker build</code> and <code>docker push</code>, Quick Tasks should work fine for you too.</p> <p>For simplicity I'm gonna list the example for a Quick Task because that's what I've used mostly. Try the following steps from your local machine to see how it works. Same steps should also work from any other environment provided the machine is authenticated properly.</p> <p>First make sure you are in the Dockerfile directory and then:</p> <ol> <li>Authenticate to the Azure CLI using <code>az login</code></li> <li>Authenticate to your ACR using <code>az acr login --name myacr</code>.</li> <li>Replace the values accordingly and run <code>az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .</code></li> <li>Your terminal should already show all of the steps that the Dockerfile is executing. Alternatively you can head over to your ACR and look under <code>services&gt;tasks&gt;runs</code>. You should see every line of the Docker build task appear there.</li> </ol> <p>Note: If you're running this task in an automated fashion and also require access to internal/private resources during the image build, you should consider creating a <a href="https://learn.microsoft.com/en-us/azure/container-registry/tasks-agent-pools" rel="nofollow noreferrer">Dedicated Agent Pool</a> and deploying it in your VNET/SNET, instead of using the shared/public Agent Pools.</p> <p>In my case, I'm using terraform to run the <code>az acr build</code> command and you can see the Dockerfile executes the <code>COPY</code> commands without any issues.</p> <p><a href="https://i.stack.imgur.com/u4Z2H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u4Z2H.png" alt="enter image description here" /></a></p>
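<p>If you later move from Quick Tasks to a Multi-step Task, a minimal task file might look roughly like the sketch below (the image name is a placeholder and the exact syntax should be checked against the ACR Tasks YAML reference); the build context, including the files sitting next to the Dockerfile, is uploaded when the task runs, just as with <code>az acr build</code>:</p> <pre><code># acr-task.yaml - minimal multi-step ACR task (placeholder image name)
version: v1.1.0
steps:
  - build: -t $Registry/myimage:$ID -f Dockerfile .
  - push: ["$Registry/myimage:$ID"]
</code></pre> <p>You can try it ad hoc from the directory containing the Dockerfile with something like <code>az acr run --registry myacr -f acr-task.yaml .</code>, or register it as a task with <code>az acr task create</code> pointing at your source location.</p>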
Ked Mardemootoo
<p>I am trying to patch and clear Node conditions on a worker node in an OpenShift and/or Kubernetes cluster. Patching isn't working, so I'm trying workarounds, maybe even updating the key in etcd.</p> <p>The main problem is that I created new node conditions and then removed them, but they are not removed from the list, even though they no longer exist and are not being updated by the controller.</p> <pre><code>$ oc describe node node1.example.com Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- ExampleToRemove False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 </code></pre>
Sam
<p>Posting the answer from the comments as Community Wiki.</p> <p>I found the fix: you can edit whatever you want in the node description by updating the etcd key <code>/kubernetes.io/minions/&lt;node-name&gt;</code>.</p> <p><strong>Edit:</strong> I finally found a way to patch and update a Node condition's type and status, add new conditions, or even delete them.</p> <p>Example:</p> <pre><code>curl -k -H &quot;Authorization: Bearer $TOKEN&quot; -H &quot;Content-Type: application/json-patch+json&quot; -X PATCH https://APISERVER:6443/api/v1/nodes/NAME-OF-NODE-Update-Condition/status --data '[{ &quot;op&quot;: &quot;remove&quot;, &quot;path&quot;: &quot;/status/conditions/2&quot;}]' </code></pre> <p>Note: each condition has an index number, so find out the index of the condition you want to change and target it in <code>/status/conditions/&lt;index&gt;</code>.</p>
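<p>A small helper for finding that index (a sketch; it assumes <code>kubectl</code> access to the cluster and <code>jq</code> installed locally):</p> <pre><code># Print each condition together with its array index, so you know which
# /status/conditions/&lt;N&gt; to target in the JSON patch above.
kubectl get node NAME-OF-NODE -o json \
  | jq -r '.status.conditions | to_entries[] | "\(.key)\t\(.value.type)\t\(.value.status)"'
</code></pre>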
JaysonM
<p>I've recently been learning about Kubernetes and I have deployed a service which contains a .NET Core 7 REST API into a pod. This pod/service is configured with its ClusterIP and with the NGINX port as well.</p> <p>Whenever I want to access the static files, which are stored under the path shown in these screenshots: <a href="https://i.stack.imgur.com/DZwZz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DZwZz.png" alt="first folder which is &quot;app&quot;" /></a> <a href="https://i.stack.imgur.com/JTD9q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JTD9q.png" alt="wwwroot folder where the static files are" /></a></p> <p>The path will be &quot;app&gt;wwwroot&gt;icons&gt;svg&gt;anyimagesname.svg&quot;.</p> <p>My localhost address for 127.0.0.1 has been configured to have a DNS of <a href="http://www.chorebear.com" rel="nofollow noreferrer">www.chorebear.com</a>, hence the complete URL to access the static file will be <a href="http://chorebear.com/icons/svg/afghanistan_adobe_express.xml" rel="nofollow noreferrer">http://chorebear.com/icons/svg/afghanistan_adobe_express.xml</a></p> <p>But every time I hit this URL, it says &quot;NGINX 404 Not Found&quot;. Is there any issue with the service itself? Any help would be appreciated. Thanks!</p> <p><a href="https://i.stack.imgur.com/Mh5De.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mh5De.png" alt="Not found 404 Error" /></a></p> <pre><code>var builder = WebApplication.CreateBuilder(args); // Add services to the container. var sqlConnectionBuilder = new SqlConnectionStringBuilder(); if (builder.Environment.IsDevelopment()) { sqlConnectionBuilder.ConnectionString = builder.Configuration.GetConnectionString(&quot;Chorebear_Connection&quot;); sqlConnectionBuilder.UserID = builder.Configuration[&quot;UserId&quot;]; sqlConnectionBuilder.Password = builder.Configuration[&quot;Password&quot;]; } else if (builder.Environment.IsProduction()) { sqlConnectionBuilder.ConnectionString = builder.Configuration.GetConnectionString(&quot;Chorebear_Connection&quot;); // sqlConnectionBuilder.UserID = builder.Configuration[&quot;UserId&quot;]; // sqlConnectionBuilder.Password = builder.Configuration[&quot;Password&quot;]; } builder.Services.AddControllers(); // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); builder.Services.AddDbContext&lt;ChoreBearContext&gt;(options =&gt; options.UseSqlServer(sqlConnectionBuilder.ConnectionString)); //Registering services and classes which are dependency injection builder.Services.AddAutoMapper(AppDomain.CurrentDomain.GetAssemblies()); builder.Services.AddIdentity&lt;User, UserRole&gt;().AddDefaultTokenProviders(); builder.Services.Configure&lt;IdentityOptions&gt;(options =&gt; { options.User.RequireUniqueEmail = true; options.Password.RequiredLength = 8; options.SignIn.RequireConfirmedEmail = true; }); builder.Services.AddScoped&lt;IUserStore&lt;User&gt;, UserStore&gt;(); builder.Services.AddTransient&lt;IRoleStore&lt;UserRole&gt;, RoleStore&gt;(); builder.Services.AddScoped&lt;ICategoryOfTaskRepo, CategoryOfTaskRepo&gt;(); builder.Services.AddScoped&lt;ITaskRepo, TaskRepo&gt;(); builder.Services.AddScoped&lt;IAccountRepo, AccountRepo&gt;(); //Registering utility classes which are dependency injection builder.Services.AddScoped&lt;IEmailUtility, EmailUtility&gt;(); builder.Services.AddScoped&lt;IPasswordHasher&lt;User&gt;, PasswordHasher&lt;User&gt;&gt;();
builder.Services.AddControllers(); // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); var app = builder.Build(); // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseHttpsRedirection(); app.UseAuthorization(); app.MapControllers(); app.UseStaticFiles(); PrepDb.PrepPopulation(app, builder.Environment.IsProduction()); app.Run(); </code></pre>
Adrian Joseph
<p>I've solved the issue on my own. Let me explain step by step how I found the bug and how I solved it.</p> <p>Currently, my Kubernetes architecture contains the following:</p> <ol> <li>.NET Core REST API service [With a ClusterIP service exposed to port 80] in a pod</li> <li>Node Port Service for .NET Core REST API [only for development purposes]</li> <li>Ingress NGINX Service which is configured to connect or rewrite the ClusterIP of .NET Core REST API to a domain URL instead of &quot;localhost&quot; IP address. [Used as production standard environment]</li> <li>MS SQL Server Instance Database service [With a ClusterIP service exposed to port 1433 and LoadBalancer exposed to 1433 to allow the service to be accessed by Microsoft SQL Server Management Studio]</li> <li>Persistent Volume Claim to persist data saved in the MS SQL Server Instance Database service directly onto the hardware of the host (in this case my PC)</li> </ol> <p>Now, my .NET Core REST API had an issue with, first of all, the position of the <code>app.UseStaticFiles()</code> middleware. It was previously placed right before <code>app.Run()</code>, so when an HTTP request was made to the REST API service in Kubernetes, the static files I saved under the <code>www/icons/svg</code> folder were handled too late in the request pipeline. So a quick shoutout to @GuruStron for spotting this and providing the links to the documentation which helped me understand it. I'll provide the links below:</p> <ol> <li><a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?view=aspnetcore-7.0" rel="nofollow noreferrer">ASP.NET Core Middleware</a></li> <li><a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/static-files?view=aspnetcore-7.0" rel="nofollow noreferrer">Static Files in ASP.NET Core</a></li> </ol> <p>But that's not the full solution to the problem. The problem lies in the Ingress Service, in this case my <code>ingress-depl.yaml</code> file. Let me show you the previous YAML file and the updated file.</p> <p><em><strong>Previous Ingress Service YAML file</strong></em></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-srv annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' spec: rules: - host: chorebear.com http: paths: - path: /api pathType: Prefix backend: service: name: chorebear-clusterip-srv port: number: 80 </code></pre> <p><em><strong>Updated Ingress YAML file</strong></em></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-srv annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' spec: rules: - host: chorebear.com http: paths: - path: / pathType: Prefix backend: service: name: chorebear-clusterip-srv port: number: 80 </code></pre> <p>The difference between the two files is the configuration property <code>path: /api</code>. This property rewrites the URL domain for the .NET Core REST API pod container to &quot;http://chorebear.com/api&quot;.
Any HTTP request to any endpoint will then look like this --&gt; <a href="http://chorebear.com/api/%5Bany-endpoint-you-create-in-your-controller%5D" rel="nofollow noreferrer">http://chorebear.com/api/[any-endpoint-you-create-in-your-controller]</a></p> <p>Now, since my controllers in my REST API contain <code>[Route(&quot;api/[controller]&quot;)]</code>, the &quot;api&quot; portion of the route is picked up by the <code>path: /api</code> rule of the Ingress YAML file. This allows me to call any endpoint created in my controllers on the domain URL in my YAML file, but the static files cannot be served on that same rewritten URL domain (&quot;chorebear.com/api&quot;), simply because the <code>app.UseStaticFiles</code> middleware does not expect the &quot;api&quot; prefix in the request path and nothing is configured/set in the middleware itself to handle it, hence the ultimate dead end of the NGINX &quot;404 Not Found&quot; error.</p> <p>To solve it, I simply changed <code>path: /api</code> to <code>path: /</code> in my Ingress Service YAML file, and it works. For the endpoints I define in my controllers, I can still call them as usual with the &quot;api&quot; prefix, like <a href="http://chorebear.com/api/%5Bany-endpoint-you-create-in-your-controller%5D" rel="nofollow noreferrer">http://chorebear.com/api/[any-endpoint-you-create-in-your-controller]</a>, since my controllers are decorated with <code>[Route(&quot;api/[controller]&quot;)]</code>, which already includes the &quot;api&quot; part. So now my static files are served and accessible on the domain URL set in my Ingress Service YAML file: <a href="http://chorebear.com/icons/svg/afghanistan_adobe_express.xml" rel="nofollow noreferrer">http://chorebear.com/icons/svg/afghanistan_adobe_express.xml</a></p> <p><a href="https://i.stack.imgur.com/cctV4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cctV4.png" alt="The static files now being served under the Ingress Service Domain URL" /></a></p> <p><a href="https://i.stack.imgur.com/9DGJX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9DGJX.png" alt="The API HTTP Request now being served under the Ingress Service Domain URL" /></a></p> <p>I hope this helps!</p>
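<p>As a side note, if you ever need to keep <code>path: /api</code> for the controllers, an alternative that should also work is adding a second rule that routes only the static-content prefix to the same service. The sketch below assumes all static files are exposed under <code>/icons</code>; that path is illustrative, not taken from the setup above:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: chorebear.com
      http:
        paths:
          # API endpoints keep their explicit prefix
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: chorebear-clusterip-srv
                port:
                  number: 80
          # Static files served by UseStaticFiles() from wwwroot/icons
          - path: /icons
            pathType: Prefix
            backend:
              service:
                name: chorebear-clusterip-srv
                port:
                  number: 80
</code></pre>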
Adrian Joseph
<p>I am currently studying distributed systems and have seen that many businesses rely on the sidecar proxy pattern for their services. For example, I know a company that uses an nginx proxy for authentication of their services and for roles and permissions, instead of including this business logic within their services.</p> <p>Another one makes use of a cloud-sql-proxy on GKE to use the Cloud SQL offering that comes with Google Cloud. So on top of deploying their services in a container which runs in a pod, there is a proxy just for communicating with the database.</p> <p>There is also Istio, a service mesh solution which can be deployed as a sidecar proxy in a pod.</p> <p><strong>I am pretty sure there are other commonly known use-cases where this pattern is used, but at some point how many sidecar proxies are too many? How heavy is it on the pod running it, and what is the complexity that comes with using 2, 3, or even 4 sidecar proxies on top of your service container?</strong></p>
Razine Bensari
<p>I recommend you define what you really need and continue your research from there, since this topic is too broad and doesn't have one correct answer.</p> <p>Due to this, I decided to post a community wiki answer. Feel free to expand it.</p> <p>There can be various reasons for running some containers in one pod. According to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods" rel="nofollow noreferrer">Kubernetes documentation</a>:</p> <blockquote> <p>A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service—for example, one container serving data stored in a shared volume to the public, while a separate <em>sidecar</em> container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.</p> </blockquote> <p>In its simplest form, a sidecar container can be used to add functionality to a primary application that might otherwise be difficult to improve.</p> <p><strong>Advantages of using sidecar containers</strong></p> <ul> <li>a sidecar container is independent from its primary application in terms of runtime environment and programming language;</li> <li>there is no significant latency during communication between the primary application and the sidecar container;</li> <li>the sidecar pattern entails designing modular containers. A modular container can be plugged into more than one place with minimal modifications, since you don't need to write configuration code inside each application.</li> </ul> <p><strong>Notes regarding usage of sidecar containers</strong></p> <ul> <li><p>consider making a small sidecar container that doesn't consume many resources. The strong point of sidecar containers lies in their ability to be small and pluggable. If the sidecar container logic is getting more complex and/or becoming more tightly coupled with the main application container, it may be better integrated into the main application’s code instead.</p> </li> <li><p>to ensure that any number of sidecar containers can work successfully with the main application, it's necessary to sum up all the resource requests/limits when defining resource limits for the pod, because all the containers run in parallel. The whole functionality works only if both types of containers are running successfully, and most of the time these sidecar containers are simple and small, consuming fewer resources than the main container.</p> </li> </ul>
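<p>To make the resource note above concrete, here is a minimal, hypothetical pod sketch with one main container and one sidecar; all names and images are placeholders. The scheduler needs a node that can satisfy the sum of both requests (750m CPU / 640Mi memory in this example):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: main-app              # primary application container
      image: example/app:1.0
      resources:
        requests:
          cpu: &quot;500m&quot;
          memory: &quot;512Mi&quot;
    - name: logging-sidecar       # small helper container
      image: example/log-shipper:1.0
      resources:
        requests:
          cpu: &quot;250m&quot;
          memory: &quot;128Mi&quot;
</code></pre>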
Andrew Skorkin
<p>I'm trying to deploy the ELK stack to my developing kubernetes cluster. It seems that I do everything as described in the tutorials, however, the pods keep failing with Java errors (see below). I will describe the whole process from installing the cluster until the error happens.</p> <p>Step 1: Installing the cluster</p> <pre><code># Apply sysctl params without reboot cat &lt;&lt;EOF | sudo tee /etc/modules-load.d/containerd.conf overlay br_netfilter EOF sudo modprobe overlay sudo modprobe br_netfilter # Setup required sysctl params, these persist across reboots. cat &lt;&lt;EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf net.bridge.bridge-nf-call-iptables = 1 net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-ip6tables = 1 EOF sudo sysctl --system #update and install apt https stuff sudo apt-get update sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release # add docker repo for containerd and install it curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg echo \ &quot;deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable&quot; | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null sudo apt-get update sudo apt-get install -y containerd.io # copy config sudo mkdir -p /etc/containerd containerd config default | sudo tee /etc/containerd/config.toml sudo systemctl restart containerd cat &lt;&lt;EOF | sudo tee /etc/modules-load.d/k8s.conf br_netfilter EOF cat &lt;&lt;EOF | sudo tee /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 // somewhat redundant net.bridge.bridge-nf-call-iptables = 1 // somewhat redundant EOF sudo sysctl --system #install kubernetes binaries sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg echo &quot;deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main&quot; | sudo tee /etc/apt/sources.list.d/kubernetes.list sudo apt-get update sudo apt-get install -y kubelet kubeadm kubectl sudo apt-mark hold kubelet kubeadm kubectl #disable swap and comment swap in fstab sudo swapoff -v /dev/mapper/main-swap sudo nano /etc/fstab #init cluster sudo kubeadm init --pod-network-cidr=192.168.0.0/16 #make user to kubectl admin mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config #install calico kubectl apply -f kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml #untaint master node that pods can run on it kubectl taint nodes --all node-role.kubernetes.io/master- #install helm curl https://baltocdn.com/helm/signing.asc | sudo apt-key add - sudo apt-get install apt-transport-https --yes echo &quot;deb https://baltocdn.com/helm/stable/debian/ all main&quot; | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list sudo apt-get update sudo apt-get install helm </code></pre> <p>Step 2: Install ECK (<a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-install-helm.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-install-helm.html</a>) and elasticsearch (<a href="https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md#installing" rel="nofollow 
noreferrer">https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md#installing</a>)</p> <pre><code># add helm repo helm repo add elastic https://helm.elastic.co helm repo update # install eck #### ommited as suggested in comment section!!!! helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace helm install elasticsearch elastic/elasticsearch </code></pre> <p>Step 3: Add PersistentVolume</p> <pre><code>--- apiVersion: v1 kind: PersistentVolume metadata: name: elk-data1 labels: type: local spec: capacity: storage: 30Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/mnt/data1&quot; --- apiVersion: v1 kind: PersistentVolume metadata: name: elk-data2 labels: type: local spec: capacity: storage: 30Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/mnt/data2&quot; --- apiVersion: v1 kind: PersistentVolume metadata: name: elk-data3 labels: type: local spec: capacity: storage: 30Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/mnt/data3&quot; </code></pre> <p>apply it</p> <pre><code>sudo mkdir /mnt/data1 sudo mkdir /mnt/data2 sudo mkdir /mnt/data3 kubectl apply -f storage.yaml </code></pre> <p>Now the pods (or at least one) sould run. But I keep getting STATUS CrashLoopBackOff with java errors in the log.</p> <pre><code>kubectl get pv,pvc,pods NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/elk-data1 30Gi RWO Retain Bound default/elasticsearch-master-elasticsearch-master-1 140m persistentvolume/elk-data2 30Gi RWO Retain Bound default/elasticsearch-master-elasticsearch-master-2 140m persistentvolume/elk-data3 30Gi RWO Retain Bound default/elasticsearch-master-elasticsearch-master-0 140m NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0 Bound elk-data3 30Gi RWO 141m persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1 Bound elk-data1 30Gi RWO 141m persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 Bound elk-data2 30Gi RWO 141m NAME READY STATUS RESTARTS AGE pod/elasticsearch-master-0 0/1 CrashLoopBackOff 32 141m pod/elasticsearch-master-1 0/1 Pending 0 141m pod/elasticsearch-master-2 0/1 Pending 0 141m </code></pre> <p>Logs and Error:</p> <pre><code>kubectl logs pod/elasticsearch-master-2 Exception in thread &quot;main&quot; java.lang.InternalError: java.lang.reflect.InvocationTargetException at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:65) at java.base/jdk.internal.platform.Container.metrics(Container.java:43) at jdk.management/com.sun.management.internal.OperatingSystemImpl.&lt;init&gt;(OperatingSystemImpl.java:48) at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl.getOperatingSystemMXBean(PlatformMBeanProviderImpl.java:279) at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl$3.nameToMBeanMap(PlatformMBeanProviderImpl.java:198) at java.management/java.lang.management.ManagementFactory.lambda$getPlatformMBeanServer$0(ManagementFactory.java:487) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base/java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1766) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at 
java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at java.management/java.lang.management.ManagementFactory.getPlatformMBeanServer(ManagementFactory.java:488) at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140) at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:558) at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:263) at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:207) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:220) at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:197) at org.elasticsearch.common.logging.LogConfigurator.configureStatusLogger(LogConfigurator.java:248) at org.elasticsearch.common.logging.LogConfigurator.configureWithoutConfig(LogConfigurator.java:95) at org.elasticsearch.cli.CommandLoggingConfigurator.configureLoggingWithoutConfig(CommandLoggingConfigurator.java:29) at org.elasticsearch.cli.Command.main(Command.java:76) at org.elasticsearch.common.settings.KeyStoreCli.main(KeyStoreCli.java:32) Caused by: java.lang.reflect.InvocationTargetException at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:61) ... 26 more Caused by: java.lang.ExceptionInInitializerError at java.base/jdk.internal.platform.CgroupSubsystemFactory.create(CgroupSubsystemFactory.java:107) at java.base/jdk.internal.platform.CgroupMetrics.getInstance(CgroupMetrics.java:167) ... 
31 more Caused by: java.lang.NullPointerException at java.base/java.util.Objects.requireNonNull(Objects.java:208) at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:260) at java.base/java.nio.file.Path.of(Path.java:147) at java.base/java.nio.file.Paths.get(Paths.java:69) at java.base/jdk.internal.platform.CgroupUtil.lambda$readStringValue$1(CgroupUtil.java:66) at java.base/java.security.AccessController.doPrivileged(AccessController.java:554) at java.base/jdk.internal.platform.CgroupUtil.readStringValue(CgroupUtil.java:68) at java.base/jdk.internal.platform.CgroupSubsystemController.getStringValue(CgroupSubsystemController.java:65) at java.base/jdk.internal.platform.CgroupSubsystemController.getLongValue(CgroupSubsystemController.java:124) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getLongValue(CgroupV1Subsystem.java:272) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getHierarchical(CgroupV1Subsystem.java:218) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.setPath(CgroupV1Subsystem.java:201) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.setSubSystemControllerPath(CgroupV1Subsystem.java:173) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.lambda$initSubSystem$5(CgroupV1Subsystem.java:113) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133) at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.initSubSystem(CgroupV1Subsystem.java:113) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.&lt;clinit&gt;(CgroupV1Subsystem.java:47) ... 
33 more Exception in thread &quot;main&quot; java.lang.InternalError: java.lang.reflect.InvocationTargetException at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:65) at java.base/jdk.internal.platform.Container.metrics(Container.java:43) at jdk.management/com.sun.management.internal.OperatingSystemImpl.&lt;init&gt;(OperatingSystemImpl.java:48) at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl.getOperatingSystemMXBean(PlatformMBeanProviderImpl.java:279) at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl$3.nameToMBeanMap(PlatformMBeanProviderImpl.java:198) at java.management/sun.management.spi.PlatformMBeanProvider$PlatformComponent.getMBeans(PlatformMBeanProvider.java:195) at java.management/java.lang.management.ManagementFactory.getPlatformMXBean(ManagementFactory.java:686) at java.management/java.lang.management.ManagementFactory.getOperatingSystemMXBean(ManagementFactory.java:388) at org.elasticsearch.tools.launchers.DefaultSystemMemoryInfo.&lt;init&gt;(DefaultSystemMemoryInfo.java:28) at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:125) at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:86) Caused by: java.lang.reflect.InvocationTargetException at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:61) ... 10 more Caused by: java.lang.ExceptionInInitializerError at java.base/jdk.internal.platform.CgroupSubsystemFactory.create(CgroupSubsystemFactory.java:107) at java.base/jdk.internal.platform.CgroupMetrics.getInstance(CgroupMetrics.java:167) ... 
15 more Caused by: java.lang.NullPointerException at java.base/java.util.Objects.requireNonNull(Objects.java:208) at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:260) at java.base/java.nio.file.Path.of(Path.java:147) at java.base/java.nio.file.Paths.get(Paths.java:69) at java.base/jdk.internal.platform.CgroupUtil.lambda$readStringValue$1(CgroupUtil.java:66) at java.base/java.security.AccessController.doPrivileged(AccessController.java:554) at java.base/jdk.internal.platform.CgroupUtil.readStringValue(CgroupUtil.java:68) at java.base/jdk.internal.platform.CgroupSubsystemController.getStringValue(CgroupSubsystemController.java:65) at java.base/jdk.internal.platform.CgroupSubsystemController.getLongValue(CgroupSubsystemController.java:124) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getLongValue(CgroupV1Subsystem.java:272) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getHierarchical(CgroupV1Subsystem.java:218) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.setPath(CgroupV1Subsystem.java:201) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.setSubSystemControllerPath(CgroupV1Subsystem.java:173) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.lambda$initSubSystem$5(CgroupV1Subsystem.java:113) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133) at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.initSubSystem(CgroupV1Subsystem.java:113) at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.&lt;clinit&gt;(CgroupV1Subsystem.java:47) ... 17 more </code></pre> <p>values.yaml from helm chart</p> <pre><code>--- clusterName: &quot;elasticsearch&quot; nodeGroup: &quot;master&quot; # The service that non master groups will try to connect to when joining the cluster # This should be set to clusterName + &quot;-&quot; + nodeGroup for your master group masterService: &quot;&quot; # Elasticsearch roles that will be applied to this nodeGroup # These will be set as environment variables. E.g. node.master=true roles: master: &quot;true&quot; ingest: &quot;true&quot; data: &quot;true&quot; remote_cluster_client: &quot;true&quot; ml: &quot;true&quot; replicas: 3 minimumMasterNodes: 2 esMajorVersion: &quot;&quot; # Allows you to add any config files in /usr/share/elasticsearch/config/ # such as elasticsearch.yml and log4j2.properties esConfig: {} # elasticsearch.yml: | # key: # nestedkey: value # log4j2.properties: | # key = value # Extra environment variables to append to this nodeGroup # This will be appended to the current 'env:' key. 
You can use any of the kubernetes env # syntax here extraEnvs: [] # - name: MY_ENVIRONMENT_VAR # value: the_value_goes_here # Allows you to load environment variables from kubernetes secret or config map envFrom: [] # - secretRef: # name: env-secret # - configMapRef: # name: config-map # A list of secrets and their paths to mount inside the pod # This is useful for mounting certificates for security and for mounting # the X-Pack license secretMounts: [] # - name: elastic-certificates # secretName: elastic-certificates # path: /usr/share/elasticsearch/config/certs # defaultMode: 0755 hostAliases: [] #- ip: &quot;127.0.0.1&quot; # hostnames: # - &quot;foo.local&quot; # - &quot;bar.local&quot; image: &quot;docker.elastic.co/elasticsearch/elasticsearch&quot; imageTag: &quot;7.12.1&quot; imagePullPolicy: &quot;IfNotPresent&quot; podAnnotations: {} # iam.amazonaws.com/role: es-cluster # additionals labels labels: {} esJavaOpts: &quot;-Xmx1g -Xms1g&quot; resources: requests: cpu: &quot;1000m&quot; memory: &quot;2Gi&quot; limits: cpu: &quot;1000m&quot; memory: &quot;2Gi&quot; initResources: {} # limits: # cpu: &quot;25m&quot; # # memory: &quot;128Mi&quot; # requests: # cpu: &quot;25m&quot; # memory: &quot;128Mi&quot; sidecarResources: {} # limits: # cpu: &quot;25m&quot; # # memory: &quot;128Mi&quot; # requests: # cpu: &quot;25m&quot; # memory: &quot;128Mi&quot; networkHost: &quot;0.0.0.0&quot; volumeClaimTemplate: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 30Gi rbac: create: false serviceAccountAnnotations: {} serviceAccountName: &quot;&quot; podSecurityPolicy: create: false name: &quot;&quot; spec: privileged: true fsGroup: rule: RunAsAny runAsUser: rule: RunAsAny seLinux: rule: RunAsAny supplementalGroups: rule: RunAsAny volumes: - secret - configMap - persistentVolumeClaim - emptyDir persistence: enabled: true labels: # Add default labels for the volumeClaimTemplate of the StatefulSet enabled: false annotations: {} extraVolumes: [] # - name: extras # emptyDir: {} extraVolumeMounts: [] # - name: extras # mountPath: /usr/share/extras # readOnly: true extraContainers: [] # - name: do-something # image: busybox # command: ['do', 'something'] extraInitContainers: [] # - name: do-something # image: busybox # command: ['do', 'something'] # This is the PriorityClass settings as defined in # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass priorityClassName: &quot;&quot; # By default this will make sure two pods don't end up on the same node # Changing this to a region would allow you to spread pods across regions antiAffinityTopologyKey: &quot;kubernetes.io/hostname&quot; # Hard means that by default pods will only be scheduled if there are enough nodes for them # and that they will never end up on the same node. Setting this to soft will do this &quot;best effort&quot; antiAffinity: &quot;hard&quot; # This is the node affinity settings as defined in # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature nodeAffinity: {} # The default is to deploy all pods serially. By setting this to parallel all pods are started at # the same time when bootstrapping the cluster podManagementPolicy: &quot;Parallel&quot; # The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when # there are many services in the current namespace. # If you experience slow pod startups you probably want to set this to `false`. 
enableServiceLinks: true protocol: http httpPort: 9200 transportPort: 9300 service: labels: {} labelsHeadless: {} type: ClusterIP nodePort: &quot;&quot; annotations: {} httpPortName: http transportPortName: transport loadBalancerIP: &quot;&quot; loadBalancerSourceRanges: [] externalTrafficPolicy: &quot;&quot; updateStrategy: RollingUpdate # This is the max unavailable setting for the pod disruption budget # The default value of 1 will make sure that kubernetes won't allow more than 1 # of your pods to be unavailable during maintenance maxUnavailable: 1 podSecurityContext: fsGroup: 1000 runAsUser: 1000 securityContext: capabilities: drop: - ALL # readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 # How long to wait for elasticsearch to stop gracefully terminationGracePeriod: 120 sysctlVmMaxMapCount: 262144 readinessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 3 timeoutSeconds: 5 # https://www.elastic.co/guide/en/elasticsearch/reference/7.12/cluster-health.html#request-params wait_for_status clusterHealthCheckParams: &quot;wait_for_status=green&amp;timeout=1s&quot; ## Use an alternate scheduler. ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ ## schedulerName: &quot;&quot; imagePullSecrets: [] nodeSelector: {} tolerations: [] # Enabling this will publically expose your Elasticsearch instance. # Only enable this if you have security enabled on your cluster ingress: enabled: false annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: &quot;true&quot; hosts: - host: chart-example.local paths: - path: / tls: [] # - secretName: chart-example-tls # hosts: # - chart-example.local nameOverride: &quot;&quot; fullnameOverride: &quot;&quot; # https://github.com/elastic/helm-charts/issues/63 masterTerminationFix: false lifecycle: {} # preStop: # exec: # command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;echo Hello from the postStart handler &gt; /usr/share/message&quot;] # postStart: # exec: # command: # - bash # - -c # - | # #!/bin/bash # # Add a template to adjust number of shards/replicas # TEMPLATE_NAME=my_template # INDEX_PATTERN=&quot;logstash-*&quot; # SHARD_COUNT=8 # REPLICA_COUNT=1 # ES_URL=http://localhost:9200 # while [[ &quot;$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)&quot; != &quot;200&quot; ]]; do sleep 1; done # curl -XPUT &quot;$ES_URL/_template/$TEMPLATE_NAME&quot; -H 'Content-Type: application/json' -d'{&quot;index_patterns&quot;:['\&quot;&quot;$INDEX_PATTERN&quot;\&quot;'],&quot;settings&quot;:{&quot;number_of_shards&quot;:'$SHARD_COUNT',&quot;number_of_replicas&quot;:'$REPLICA_COUNT'}}' sysctlInitContainer: enabled: true keystore: [] networkPolicy: ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now. ## In order for a Pod to access Elasticsearch, it needs to have the following label: ## {{ template &quot;uname&quot; . }}-client: &quot;true&quot; ## Example for default configuration to access HTTP port: ## elasticsearch-master-http-client: &quot;true&quot; ## Example for default configuration to access transport port: ## elasticsearch-master-transport-client: &quot;true&quot; http: enabled: false ## if explicitNamespacesSelector is not set or set to {}, only client Pods being in the networkPolicy's namespace ## and matching all criteria can reach the DB. 
## But sometimes, we want the Pods to be accessible to clients from other namespaces, in this case, we can use this ## parameter to select these namespaces ## # explicitNamespacesSelector: # # Accept from namespaces with all those different rules (only from whitelisted Pods) # matchLabels: # role: frontend # matchExpressions: # - {key: role, operator: In, values: [frontend]} ## Additional NetworkPolicy Ingress &quot;from&quot; rules to set. Note that all rules are OR-ed. ## # additionalRules: # - podSelector: # matchLabels: # role: frontend # - podSelector: # matchExpressions: # - key: role # operator: In # values: # - frontend transport: ## Note that all Elasticsearch Pods can talks to themselves using transport port even if enabled. enabled: false # explicitNamespacesSelector: # matchLabels: # role: frontend # matchExpressions: # - {key: role, operator: In, values: [frontend]} # additionalRules: # - podSelector: # matchLabels: # role: frontend # - podSelector: # matchExpressions: # - key: role # operator: In # values: # - frontend # Deprecated # please use the above podSecurityContext.fsGroup instead fsGroup: &quot;&quot; </code></pre>
I. Shm
<p>What you are experiencing is not an issue related to Elasticsearch. It is a problem resulting from the cgroup configuration for the version of containerd you are using. I haven't unpacked the specifics, but the exception in the Elasticsearch logs relates to the JDK failing when attempting to retrieve the required cgroup information.</p> <p>I had the same issue and resolved it by executing the following steps, before installing Kubernetes, to install a later version of containerd and configure it to use cgroups with systemd:</p> <ol> <li>Add the GPG key for the official Docker repository.</li> </ol> <pre><code>curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - </code></pre> <ol start="2"> <li>Add the Docker repository to APT sources.</li> </ol> <pre><code>sudo add-apt-repository &quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable&quot; </code></pre> <ol start="3"> <li>Install the latest containerd.io package instead of the containerd package from Ubuntu.</li> </ol> <pre><code>apt-get -y install containerd.io </code></pre> <ol start="4"> <li>Generate the default containerd configuration.</li> </ol> <pre><code>containerd config default &gt; /etc/containerd/config.toml </code></pre> <ol start="5"> <li>Configure containerd to use systemd to manage the cgroups.</li> </ol> <pre><code> [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc] runtime_type = &quot;io.containerd.runc.v2&quot; runtime_engine = &quot;&quot; runtime_root = &quot;&quot; privileged_without_host_devices = false base_runtime_spec = &quot;&quot; [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc.options] SystemdCgroup = true </code></pre> <ol start="6"> <li>Restart the containerd service.</li> </ol> <pre><code>systemctl restart containerd </code></pre>
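<p>As a complementary check (not one of the steps above, and only a sketch under the assumption that you bootstrap with kubeadm): the kubelet should generally use the same systemd cgroup driver as containerd. With kubeadm this can be done by passing a configuration file at init time; the API versions below may differ depending on your kubeadm release:</p>
<pre><code># kubeadm-config.yaml (sketch; apiVersion values depend on your kubeadm release)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
</code></pre>
<p>and then running <code>sudo kubeadm init --config kubeadm-config.yaml</code> instead of passing <code>--pod-network-cidr</code> on the command line.</p>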
Marcus Portmann
<p>I have a web app in a Docker container that I want to deploy in Google Cloud, and I am following this documentation to deploy my app: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#exposing_the_sample_app_to_the_internet" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#exposing_the_sample_app_to_the_internet</a>.</p> <p>The particular step that exposes the app using a load balancer works: it gives a public IP and I am able to access my app as well. But the problem is that I want to use a domain for this app with HTTPS. The load balancer that got deployed is a TCP load balancer.</p> <p>Playing around with the load balancer settings in the console, I see the HTTPS load balancer has an option to attach an SSL certificate, but I couldn't find a way to expose my app through this HTTPS load balancer. Are there any step-by-step tutorials for doing this? Is this possible with a TCP load balancer too, i.e. an HTTPS web app?</p>
user3279954
<p>For detailed step-by-step documentation, you can refer to this link: <br/> <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress" rel="nofollow noreferrer">Configuring Ingress for external load balancing</a><br/></p> <p>In summary, the steps are: <br/></p> <ol> <li>Creating a Deployment <br/></li> <li>Creating a Service <br/></li> <li>Creating an Ingress <br/></li> <li>Testing the external HTTP(S) load balancer.</li> </ol> <p>See the links below for further reference; they will help you with the other areas: SSL certificates, GKE load balancing, and TCP configuration.</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">Using Google-managed SSL certificates</a> <br/> For SSL certificates with Kubernetes Engine, see <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">HTTP(S) Load Balancing with Ingress</a> <br/> <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters" rel="nofollow noreferrer">Configuring TCP/UDP Load Balancing</a></p>
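<p>For reference, a minimal sketch of what steps 2-4 can look like on GKE with a Google-managed certificate; the domain, service name, and port below are placeholders, not taken from your setup:</p>
<pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: hello-app-cert
spec:
  domains:
    - hello.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app-ingress
  annotations:
    kubernetes.io/ingress.class: &quot;gce&quot;
    networking.gke.io/managed-certificates: hello-app-cert
spec:
  defaultBackend:
    service:
      name: hello-app        # must be a NodePort (or NEG-backed) Service
      port:
        number: 8080
</code></pre>
<p>Note that the Service referenced by a GCE Ingress must be of type NodePort (or use container-native load balancing), and the DNS record for the domain has to point at the Ingress IP before the managed certificate becomes active.</p>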
JaysonM
<p>I'm running a Ubuntu container with SQL Server in my local Kubernetes environment with Docker Desktop on a Windows laptop. Now I'm trying to mount a local folder (<code>C:\data\sql</code>) that contains database files into the pod. For this, I configured a persistent volume and persistent volume claim in Kubernetes, but it doesn't seem to mount correctly. I don't see errors or anything, but when I go into the container using <code>docker exec -it</code> and inspect the data folder, it's empty. I expect the files from the local folder to appear in the mounted folder 'data', but that's not the case.</p> <p>Is something wrongly configured in the PV, PVC or pod?</p> <p>Here are my yaml files:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: dev-customer-db-pv labels: type: local app: customer-db chart: customer-db-0.1.0 release: dev heritage: Helm spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce hostPath: path: /C/data/sql </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: dev-customer-db-pvc labels: app: customer-db chart: customer-db-0.1.0 release: dev heritage: Helm spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mi </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: dev-customer-db labels: ufo: dev-customer-db-config app: customer-db chart: customer-db-0.1.0 release: dev heritage: Helm spec: selector: matchLabels: app: customer-db release: dev replicas: 1 template: metadata: labels: app: customer-db release: dev spec: volumes: - name: dev-customer-db-pv persistentVolumeClaim: claimName: dev-customer-db-pvc containers: - name: customer-db image: &quot;mcr.microsoft.com/mssql/server:2019-latest&quot; imagePullPolicy: IfNotPresent volumeMounts: - name: dev-customer-db-pv mountPath: /data envFrom: - configMapRef: name: dev-customer-db-config - secretRef: name: dev-customer-db-secrets </code></pre> <p>At first, I was trying to define a volume in the pod without PV and PVC, but then I got access denied errors when I tried to read files from the mounted data folder.</p> <pre class="lang-yaml prettyprint-override"><code>spec: volumes: - name: dev-customer-db-data hostPath: path: C/data/sql containers: ... volumeMounts: - name: dev-customer-db-data mountPath: data </code></pre> <p>I've also tried to Helm install with <code>--set volumePermissions.enabled=true</code> but this didn't solve the access denied errors.</p>
ngruson
<p>Based on this info from <a href="https://github.com/docker/for-win/issues/5325#issuecomment-567481915" rel="noreferrer">GitHub for Docker</a>, there is no support for hostPath volumes in WSL 2.</p> <p>Thus, <strong>the following workaround can be used</strong>.</p> <p>We just need to prepend <code>/run/desktop/mnt/host</code> to the initial path on the host, <code>/c/data/sql</code>. There is no need for a PersistentVolume and PersistentVolumeClaim in this case - just remove them.</p> <p>I changed <code>spec.volumes</code> for the Deployment according to the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath-configuration-example" rel="noreferrer">hostPath configuration example on the Kubernetes site</a>:</p> <pre><code>volumes: - name: dev-customer-db-pv hostPath: path: /run/desktop/mnt/host/c/data/sql type: Directory </code></pre> <p>After applying these changes, the files can be found in the <code>data</code> folder in the pod, since <code>mountPath: /data</code> is set.</p>
Andrew Skorkin
<p>I'm using 0.14.2 Terraform version and 1.3.2 Helm provider.</p> <p>I have a terraform task where I get an output and then I use in a helm chart. So far so good. In the task where I execute de helm deploy I set the var I have to use:</p> <pre><code>resource &quot;helm_release&quot; &quot;kong-deploy&quot; { for_each = local.ob chart = &quot;./helm-charts/kong&quot; name = &quot;kong&quot; namespace = each.value create_namespace = true version = &quot;platform-2.10&quot; timeout = 30 values = [file(&quot;./helm-values/${local.environment}/kong/kong-${local.environment}-${each.value}.yaml&quot;)] set { name = &quot;WORKER_NODE&quot; value = aws_eks_node_group.managed_workers[each.value].node_group_name type = &quot;string&quot; } } </code></pre> <p>The tree directory is like that and I have to use the WORKER_NODE var in the postgres subchart.</p> <pre><code>├── charts │   └── postgres │   ├── Chart.yaml │   ├── templates │   │   ├── deployment.yaml │   │   ├── env.yaml │   │   └── service.yaml │   └── values.yaml ├── Chart.yaml ├── files │   └── purgeKongService.sh ├── templates │   ├── configmap.yaml │   ├── deployment.yaml │   ├── env.yaml │   ├── ingress.yaml │   └── service.yaml └── values.yaml </code></pre> <p>I tried to use this var like the other charts, but with no success:</p> <pre><code>nodeSelector: eks.amazonaws.com/nodegroup: &quot;{{ .Values.WORKER_NODE }}&quot; </code></pre> <p>HOw can I pass this var to a subchart?</p>
hmar
<p>If I'm understanding this right, you want to access the parent chart's values inside the subchart. To do that you can set it as a global value or define the subchart values separately. In this case I would set it as a global value:</p> <p><code>values.yaml</code>:</p> <pre><code> global: WORKER_NODE: &lt;default value&gt; </code></pre> <p>From there, in your TF code you can set it via <code>global.WORKER_NODE</code> to pass it to both the <code>kong</code> chart and the <code>postgres</code> subchart.</p> <p>Also, it's best practice to use camelCase when naming chart values. So instead of <code>WORKER_NODE</code>, you should use <code>workerNode</code>.</p> <p>Best Practices: <a href="https://helm.sh/docs/chart_best_practices/values/" rel="nofollow noreferrer">https://helm.sh/docs/chart_best_practices/values/</a></p> <p>Subcharts and Globals: <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#global-chart-values" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#global-chart-values</a></p>
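<p>Putting it together, a minimal sketch of the subchart side (keeping the original <code>WORKER_NODE</code> name for clarity) could look like this; in the Terraform code the <code>set</code> block name would then be <code>global.WORKER_NODE</code> instead of <code>WORKER_NODE</code>:</p>
<pre><code># charts/postgres/templates/deployment.yaml (snippet)
nodeSelector:
  eks.amazonaws.com/nodegroup: {{ .Values.global.WORKER_NODE | quote }}
</code></pre>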
cdecoux
<p>I am trying to setup an LXC container (debian) as a Kubernetes node. I am so far that the only thing in the way is the kubeadm init script...</p> <pre class="lang-sh prettyprint-override"><code>error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: &quot;configs&quot;, output: &quot;modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.4.44-2-pve/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/5.4.44-2-pve\n&quot;, err: exit status 1 [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher </code></pre> <p>After some research I figured out that I probably need to add the following: <code>linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay</code> But adding this to <code>/etc/pve/lxc/107.conf</code> doesn't do anything.</p> <p><strong>Does anybody have a clue how to add the linux kernel modules?</strong></p>
kevingoos
<p>To allow loading any modules with modprobe inside a privileged Proxmox LXC container, you need to add these options to the container config (e.g. <code>/etc/pve/lxc/107.conf</code>):</p> <pre><code>lxc.apparmor.profile: unconfined lxc.cgroup.devices.allow: a lxc.cap.drop: lxc.mount.auto: proc:rw sys:rw lxc.mount.entry: /lib/modules lib/modules none bind 0 0 </code></pre> <p>Before that, you must first create the <code>/lib/modules</code> folder inside the container.</p>
Ruslan Ryngach
<p>Is there some way to force pod creation order in Kubernetes?</p> <p>I have a scenario where Kubernetes is selecting a node pool with few resources; the first pod to be deployed consumes very few resources, but the next one consumes a lot of resources and the deployment fails.</p> <p>So I was wondering if there is a way to instruct Kubernetes to deploy the resource-hungry pod first and then the small ones.</p>
Rodrigo
<p>You can use a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">Node Selector</a> in your Pod's specification; you just need <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels" rel="nofollow noreferrer">Node Labels</a> for that.</p> <p>Another option is to use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/" rel="nofollow noreferrer">Node Affinity</a>. You just need to have the <strong>Kubernetes cluster</strong> and the K8s command line (<code>kubectl</code>) ready. The steps for that are:</p> <ul> <li>Add a label to the node.</li> <li>Schedule a Pod using required node affinity, or</li> <li>Schedule a Pod using preferred node affinity.</li> </ul> <p>Visit the official documentation linked above for detailed instructions and manifest examples, and see the official K8s documentation about <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Assigning Pods to Nodes</a>.</p> <p>Plus, you can also set up the Pod initialization in a specific order. See this <a href="https://stackoverflow.com/questions/56935239/how-to-configure-pod-initialization-in-a-specific-order-in-kubernetes">thread</a> for the proper instructions.</p>
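<p>For illustration, a minimal sketch of both approaches, assuming the larger node pool is labelled <code>pool=large</code> (the label name is purely illustrative):</p>
<pre><code># Option 1: nodeSelector in the resource-heavy pod's spec
spec:
  nodeSelector:
    pool: large

# Option 2: preferred node affinity (best effort) in the pod's spec
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: pool
                operator: In
                values:
                  - large
</code></pre>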
Nestor Daniel Ortega Perez
<p>What would be the best way to set up a <a href="https://cloud.google.com/monitoring/docs" rel="nofollow noreferrer">GCP monitoring alert policy</a> for a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob</a> failing? I haven't been able to find any good examples out there.</p> <p>Right now, I have an OK solution based on monitoring logs in the Pod with <code>ERROR</code> severity. I've found this to be quite flaky, however. Sometimes a job will fail for some ephemeral reason outside my control (e.g., an external server returning a temporary 500) and on the next retry, the job runs successfully.</p> <p>What I really need is an alert that is only triggered when a CronJob is in a persistent failed state. That is, Kubernetes has tried rerunning the whole thing, multiple times, and it's still failing. Ideally, it could also handle situations where the Pod wasn't able to come up either (e.g., downloading the image failed).</p> <p>Any ideas here?</p> <p>Thanks.</p>
joeltine
<p>First of all, confirm the <strong>GKE</strong> version that you are running. The following commands will help you identify the <a href="https://cloud.google.com/kubernetes-engine/versioning#use_to_check_versions" rel="nofollow noreferrer"><strong>GKE</strong> default version</a> and the available versions too:</p> <p><strong>Default version.</strong></p> <pre><code>gcloud container get-server-config --flatten=&quot;channels&quot; --filter=&quot;channels.channel=RAPID&quot; \ --format=&quot;yaml(channels.channel,channels.defaultVersion)&quot; </code></pre> <p><strong>Available versions.</strong></p> <pre><code>gcloud container get-server-config --flatten=&quot;channels&quot; --filter=&quot;channels.channel=RAPID&quot; \ --format=&quot;yaml(channels.channel,channels.validVersions)&quot; </code></pre> <p>Now that you know your <strong>GKE</strong> version: since what you want is an alert that is only triggered when a <strong>CronJob</strong> is in a persistent failed state, <a href="https://cloud.google.com/stackdriver/docs/solutions/gke/managing-metrics#workload-metrics" rel="nofollow noreferrer">GKE Workload Metrics</a> used to be the <strong>GCP</strong> solution for this, providing a fully managed and highly configurable way to send <strong>all Prometheus-compatible metrics</strong> emitted by <strong>GKE workloads</strong> (such as a <strong>CronJob</strong> or a Deployment for an application) to <strong>Cloud Monitoring</strong>. But, as it is now deprecated in <strong>GKE 1.24</strong> and was replaced with <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus" rel="nofollow noreferrer">Google Cloud Managed Service for Prometheus</a>, the latter is the best option you have inside <strong>GCP</strong>, as it lets you monitor and alert on your workloads, using <strong>Prometheus</strong>, without having to manually manage and operate <strong>Prometheus</strong> at scale.</p> <p>Plus, you have two options from outside GCP: <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> itself and <a href="https://rancher.com/docs/rancher/v2.5/en/best-practices/rancher-managed/monitoring/#prometheus-push-gateway" rel="nofollow noreferrer">Rancher's Prometheus Push Gateway</a>.</p> <p>Finally, just FYI, this can be done manually by querying the job, checking its start time and comparing that to the current time, like this, with bash:</p> <pre><code>START_TIME=$(kubectl -n=your-namespace get job your-job-name -o json | jq '.status.startTime') echo $START_TIME </code></pre> <p>Or you can get the job’s current status as a JSON blob, as follows:</p> <pre><code>kubectl -n=your-namespace get job your-job-name -o json | jq '.status' </code></pre> <p>You can see the following <a href="https://stackoverflow.com/questions/57959635/monitor-cronjob-running-on-gke">thread</a> for more reference too.</p> <p>Taking the <strong>“Failed”</strong> state as the central point of your requirement, setting up a bash script with <code>kubectl</code> to send an email when you see a job in the <strong>“Failed”</strong> state can be useful.
Here I will share some examples with you (note that <code>myjob</code> and <code>email@address</code> are placeholders):</p> <pre><code>while true; do if kubectl get jobs myjob -o jsonpath='{.status.conditions[?(@.type==&quot;Failed&quot;)].status}' | grep -q True; then mail email@address -s jobfailed; else sleep 1; fi; done </code></pre> <p><strong>For newer K8s:</strong></p> <pre><code>while true; do kubectl wait --for=condition=failed job/myjob; mail email@address -s jobfailed; done </code></pre>
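<p>As a complementary note (the CronJob spec itself was not shared, so the manifest below is only a sketch with placeholder names): the &quot;Failed&quot; condition that the scripts above watch for is only set once the Job has exhausted its retries, which is controlled by <code>backoffLimit</code>:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob              # placeholder name
spec:
  schedule: &quot;0 * * * *&quot;
  jobTemplate:
    spec:
      backoffLimit: 3           # the Job is marked Failed only after 3 failed retries
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: example/task:latest
</code></pre>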
Nestor Daniel Ortega Perez
<p>An upgrade of our Azure AKS - Kubernetes environment to Kubernetes version 1.19.3 forced me to also upgrade my Nginx helm.sh/chart to nginx-ingress-0.7.1. As a result I was forced to change the API version definition to networking.k8s.io/v1 since my DevOps pipeline failed accordingly (a warning for old API resulting in an error). However, now I have the problem that my session affinity annotation is ignored and no session cookies are set in the response.</p> <p>I am desperately changing names, trying different unrelated blog posts to somehow fix the issue.</p> <p>Any help would be really appreciated.</p> <p>My current nginx yaml (I have removed status/managed fields tags to enhance readability):</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: nginx-ingress-infra-nginx-ingress namespace: ingress-infra labels: app.kubernetes.io/instance: nginx-ingress-infra app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: nginx-ingress-infra-nginx-ingress helm.sh/chart: nginx-ingress-0.7.1 annotations: deployment.kubernetes.io/revision: '1' meta.helm.sh/release-name: nginx-ingress-infra meta.helm.sh/release-namespace: ingress-infra spec: replicas: 2 selector: matchLabels: app: nginx-ingress-infra-nginx-ingress template: metadata: creationTimestamp: null labels: app: nginx-ingress-infra-nginx-ingress annotations: prometheus.io/port: '9113' prometheus.io/scrape: 'true' spec: containers: - name: nginx-ingress-infra-nginx-ingress image: 'nginx/nginx-ingress:1.9.1' args: - '-nginx-plus=false' - '-nginx-reload-timeout=0' - '-enable-app-protect=false' - &gt;- -nginx-configmaps=$(POD_NAMESPACE)/nginx-ingress-infra-nginx-ingress - &gt;- -default-server-tls-secret=$(POD_NAMESPACE)/nginx-ingress-infra-nginx-ingress-default-server-secret - '-ingress-class=infra' - '-health-status=false' - '-health-status-uri=/nginx-health' - '-nginx-debug=false' - '-v=1' - '-nginx-status=true' - '-nginx-status-port=8080' - '-nginx-status-allow-cidrs=127.0.0.1' - '-report-ingress-status' - '-external-service=nginx-ingress-infra-nginx-ingress' - '-enable-leader-election=true' - &gt;- -leader-election-lock-name=nginx-ingress-infra-nginx-ingress-leader-election - '-enable-prometheus-metrics=true' - '-prometheus-metrics-listen-port=9113' - '-enable-custom-resources=true' - '-enable-tls-passthrough=false' - '-enable-snippets=false' - '-ready-status=true' - '-ready-status-port=8081' - '-enable-latency-metrics=false' </code></pre> <p>My ingress configuration of the service name &quot;account&quot;:</p> <pre><code>kind: Ingress apiVersion: networking.k8s.io/v1beta1 metadata: name: account namespace: infra resourceVersion: '194790' labels: app.kubernetes.io/managed-by: Helm annotations: kubernetes.io/ingress.class: infra meta.helm.sh/release-name: infra meta.helm.sh/release-namespace: infra nginx.ingress.kubernetes.io/affinity: cookie nginx.ingress.kubernetes.io/proxy-buffer-size: 128k nginx.ingress.kubernetes.io/proxy-buffering: 'on' nginx.ingress.kubernetes.io/proxy-buffers-number: '4' spec: tls: - hosts: - account.infra.mydomain.com secretName: my-default-cert **this is a self-signed certificate with cn=account.infra.mydomain.com rules: - host: account.infra.mydomain.com http: paths: - path: / pathType: Prefix backend: serviceName: account servicePort: 80 status: loadBalancer: ingress: - ip: 123.123.123.123 **redacted** </code></pre> <p>My account service yaml</p> <pre><code>kind: Service apiVersion: v1 metadata: name: account namespace: infra labels: app.kubernetes.io/instance: infra 
app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: account app.kubernetes.io/version: latest helm.sh/chart: account-0.1.0 annotations: meta.helm.sh/release-name: infra meta.helm.sh/release-namespace: infra spec: ports: - name: http protocol: TCP port: 80 targetPort: 80 selector: app.kubernetes.io/instance: infra app.kubernetes.io/name: account clusterIP: 10.0.242.212 type: ClusterIP sessionAffinity: ClientIP **just tried to add this setting to the service, but does not work either** sessionAffinityConfig: clientIP: timeoutSeconds: 10800 status: loadBalancer: {} </code></pre>
Drain
<p>Ok, the issue was not related to any configuration shown above. The debug logs of the nginx pods were full of error messages regarding the kube-control namespaces. I removed the Nginx Helm chart completely and used the repository suggested by Microsoft:</p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls</a></p> <pre><code># Create a namespace for your ingress resources
kubectl create namespace ingress-basic

# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-basic \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector.&quot;beta\.kubernetes\.io/os&quot;=linux \
    --set defaultBackend.nodeSelector.&quot;beta\.kubernetes\.io/os&quot;=linux
</code></pre>
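<p>With the community <code>ingress-nginx</code> controller in place, the cookie-based session affinity annotations from the original Ingress should then be honoured. A minimal sketch of the Ingress (host, service and secret names are taken from the question; the cookie name and max-age are assumptions):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: account
  namespace: infra
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot;
    nginx.ingress.kubernetes.io/session-cookie-name: &quot;route&quot;
    nginx.ingress.kubernetes.io/session-cookie-max-age: &quot;172800&quot;
spec:
  tls:
  - hosts:
    - account.infra.mydomain.com
    secretName: my-default-cert
  rules:
  - host: account.infra.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: account
            port:
              number: 80
</code></pre>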
Drain
<p>I have 2 services in kubernetes, one is mtls, the other is tls. I'm trying to configure an ingress for them. I want to configure the ssl passthrough for the mtls service but leave the tls service without ssl-passthrough, it doesn't need client certificate.</p> <p>I configured 2 ingress at the same hostname, with two different yaml file. One with passthrough, the other without passthrough.</p> <p>The current behavior is if I create the mtls ingress first, the tls one will not work, the https request that I send to tls one will always route to the mtls service. Then returns 404. But, if I configure the tls ingress first, then the mtls one. Then the tls one will work, but the mtls one will be failed for certificate issue.</p> <p>I'm not sure if the ssl passthrough annotation is configured at host level? Or can I make it work at each path level? The mtls ingress.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: ingress.kubernetes.io/ssl-passthrough: &quot;true&quot; nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot; kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/backend-protocol: HTTPS nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; name: mtls-ingress spec: rules: - host: app.abc.com http: paths: - backend: service: name: mtls-service port: number: 8081 path: /mtls-api pathType: Prefix tls: - hosts: - app.abc.com secretName: tls-nginx-mtls </code></pre> <p>Then the tls ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/backend-protocol: HTTPS nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; name: tls-ingress spec: rules: - host: app.abc.com http: paths: - backend: service: name: tls-service port: number: 8080 path: /tls-api pathType: Prefix tls: - hosts: - app.abc.com secretName: tls-nginx-tls </code></pre> <p>It's like the two ingress override each other, only the first annotation works. It looks like passthrough is configured for the host but not the ingress or path. Have no idea. Please help. Thanks.</p>
user2857793
<p>You want to use 2 services on the same host with the annotation <code>nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot;</code> for one of them.</p> <p>This will not work because with SSL Passthrough the proxy doesn't know the path to where route the traffic.</p> <p>From <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="nofollow noreferrer">the NGINX Ingress Controller User Guide</a>:</p> <blockquote> <p>The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication.</p> <p>Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.</p> </blockquote> <p>The solution is to use subdomains for your services, not paths.</p> <p>Additionally, some links from GitHub about this problem:</p> <p><a href="https://github.com/kubernetes/ingress-nginx/issues/5257" rel="nofollow noreferrer">Multiple Ingress backends ignored when SSL Passthrough is enabled</a></p> <p><a href="https://github.com/kubernetes/ingress-nginx/issues/6188" rel="nofollow noreferrer"> Ignoring SSL Passthrough for location &quot;/*&quot; in server &quot;example.com&quot;</a></p> <p><a href="https://github.com/kubernetes/ingress-nginx/issues/2132" rel="nofollow noreferrer">Path based routing only works with base path </a></p> <p>and from <a href="https://serverfault.com/questions/840000/nginx-ssl-pass-through-based-on-uri-path">serverfault</a> about NginX workflow for SSL Passthrough.</p>
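<p>For illustration, a sketch of the subdomain approach (the <code>mtls.abc.com</code> hostname is an assumption): the passthrough service gets its own host so the controller can route on SNI, while the plain TLS service keeps path-based routing. Note that the ingress-nginx controller must also be started with the <code>--enable-ssl-passthrough</code> flag for passthrough to work at all.</p> <pre><code># mTLS service on its own hostname; TLS (and the client certificate) is
# terminated by the backend, so no tls section or extra annotations here
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot;
spec:
  rules:
  - host: mtls.abc.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mtls-service
            port:
              number: 8081
---
# plain TLS service stays on the original hostname, terminated by NGINX
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  tls:
  - hosts:
    - app.abc.com
    secretName: tls-nginx-tls
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tls-api
        pathType: Prefix
        backend:
          service:
            name: tls-service
            port:
              number: 8080
</code></pre>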
Andrew Skorkin
<p>I've made a deployment in GKE with a readiness probe. My container is coming up but it seems the readiness probe is having some difficulty. When I try to describe the pod I see that there are many probe warnings but it's not clear what the warning is.</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 43m default-scheduler Successfully assigned default/tripvector-7996675758-7vjxf to gke-tripvector2-default-pool-78cf58d9-5dgs Normal LoadBalancerNegNotReady 43m neg-readiness-reflector Waiting for pod to become healthy in at least one of the NEG(s): [k8s1-07274a01-default-tripvector-np-60000-a912870e] Normal Pulling 43m kubelet Pulling image &quot;us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector&quot; Normal Pulled 43m kubelet Successfully pulled image &quot;us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector&quot; in 888.583654ms Normal Created 43m kubelet Created container tripvector Normal Started 43m kubelet Started container tripvector Normal LoadBalancerNegTimeout 32m neg-readiness-reflector Timeout waiting for pod to become healthy in at least one of the NEG(s): [k8s1-07274a01-default-tripvector-np-60000-a912870e]. Marking condition &quot;cloud.google.com/load-balancer-neg-ready&quot; to True. Warning ProbeWarning 3m1s (x238 over 42m) kubelet Readiness probe warning: </code></pre> <p>I've tried examining events with <code>kubectl get events</code> but that also doesn't provide extra details on the probe warning:</p> <pre><code> ❯❯❯ k get events LAST SEEN TYPE REASON OBJECT MESSAGE 43m Normal LoadBalancerNegNotReady pod/tripvector-7996675758-7vjxf Waiting for pod to become healthy in at least one of the NEG(s): [k8s1-07274a01-default-tripvector-np-60000-a912870e] 43m Normal Scheduled pod/tripvector-7996675758-7vjxf Successfully assigned default/tripvector-7996675758-7vjxf to gke-tripvector2-default-pool-78cf58d9-5dgs 43m Normal Pulling pod/tripvector-7996675758-7vjxf Pulling image &quot;us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector&quot; 43m Normal Pulled pod/tripvector-7996675758-7vjxf Successfully pulled image &quot;us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector&quot; in 888.583654ms 43m Normal Created pod/tripvector-7996675758-7vjxf Created container tripvector 43m Normal Started pod/tripvector-7996675758-7vjxf Started container tripvector 3m38s Warning ProbeWarning pod/tripvector-7996675758-7vjxf Readiness probe warning: </code></pre> <p>How can I get more details on how/why this readiness probe is giving off warnings?</p> <p>EDIT: the logs of the pod are mostly empty as well (<code>klf</code> is an alias I have to kubectl logs):</p> <pre><code> ❯❯❯ klf tripvector-6f4d4c86c5-dn55c (node:1) [DEP0131] DeprecationWarning: The legacy HTTP parser is deprecated. {&quot;line&quot;:&quot;87&quot;,&quot;file&quot;:&quot;percolate_synced-cron.js&quot;,&quot;message&quot;:&quot;SyncedCron: Scheduled \&quot;Refetch expired Google Places\&quot; next run @Tue Mar 22 2022 17:47:53 GMT+0000 (Coordinated Universal Time)&quot;,&quot;time&quot;:{&quot;$date&quot;:1647971273653},&quot;level&quot;:&quot;info&quot;} </code></pre>
Paymahn Moghadasian
<p>Regarding the error in the logs <em><strong>“DeprecationWarning: The legacy HTTP parser is deprecated.”</strong></em>, it is due to the legacy HTTP parser being deprecated with the pending <em><strong>End-of-Life of Node.js 10.x</strong></em>. It will now warn on use, but otherwise continue to function and may be removed in a future <strong>Node.js 12.x</strong> release. Use this URL for more reference <a href="https://nodejs.org/ja/blog/release/v12.22.0/#:%7E:text=The%20legacy%20HTTP%20parser%20is%20runtime%20deprecated&amp;text=x%20" rel="nofollow noreferrer">Node v12.22.0 (LTS)</a>.</p> <p>On the other hand, about the <code>kubelet</code>’s <em><strong>“ProbeWarning”</strong></em> reason in the warning events on your container, <em><strong>Health check (liveness &amp; readiness)</strong></em> probes using an <em><strong>HTTPGetAction</strong></em> will no longer follow redirects to different host-names from the original probe request. Instead, these non-local redirects will be treated as a <em><strong>Success</strong></em> (the documented behavior). In this case an event with reason <strong>&quot;ProbeWarning&quot;</strong> will be generated, indicating that the redirect was ignored. If you were previously relying on the redirect to run health checks against different endpoints, you will need to perform the healthcheck logic outside the <em><strong>Kubelet</strong></em>, for instance by proxying the external endpoint rather than redirecting to it. You can verify the detailed root cause of this in the following <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md#urgent-upgrade-notes" rel="nofollow noreferrer">Kubernetes 1.14 release notes</a>. There is no way to see more detailed information about the <strong>“ProbeWarning”</strong> in the <em><strong>Events’</strong></em> table. Use the following URLs as a reference too <a href="https://phabricator.wikimedia.org/T294072" rel="nofollow noreferrer">Kubernetes emitting ProbeWarning</a>, <a href="https://github.com/kubernetes/kubernetes/issues/103877" rel="nofollow noreferrer">Confusing/incomplete readiness probe warning</a> and <a href="https://github.com/kubernetes/kubernetes/pull/103967" rel="nofollow noreferrer">Add probe warning message body</a>.</p>
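<p>If your probe path returns a redirect (for example to a login page or another hostname), one way to avoid the warning is to point the probe at an endpoint that answers directly. A sketch, assuming the application exposes a dedicated health endpoint such as <code>/healthz</code> on port 8080:</p> <pre><code>readinessProbe:
  httpGet:
    path: /healthz      # must answer 2xx itself, without redirecting
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
</code></pre>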
Nestor Daniel Ortega Perez
<p>We run Airflow on K8s on DigitalOcean using Helm Chart. Tasks are written using <code>airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator</code>. On a regular basis we see that pods are failing with the following messages (but the issue is not constant, so the majority of time it works fine):</p> <p>We are using <code>KubernetesExecutor</code>.</p> <p>Helm Chart info - <code>airflow-stable/airflow version 7.16.0</code></p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned airflow/... to k8s-sai-pool-8c13r-9b9-8br6p Normal Pulling 18s (x2 over 31s) kubelet Pulling image &quot;registry.digitalocean.com/...:455eebb0&quot; Warning Failed 18s (x2 over 30s) kubelet Failed to pull image &quot;registry.digitalocean.com/...:455eebb0&quot;: rpc error: code = Unknown desc = Error response from daemon: Get https://{URL}: unauthorized: authentication required Warning Failed 18s (x2 over 30s) kubelet Error: ErrImagePull Normal SandboxChanged 18s (x7 over 30s) kubelet Pod sandbox changed, it will be killed and re-created. Normal BackOff 16s (x6 over 29s) kubelet Back-off pulling image &quot;registry.digitalocean.com/...:455eebb0&quot; Warning Failed 16s (x6 over 29s) kubelet Error: ImagePullBackOff </code></pre>
Stephen L.
<p>Those are most likely just DigitalOcean Registry issues; I suggest trying Docker Hub instead.</p>
yurets.pro
<p>I have .Net Core 3.1 web api which is deployed to AKS. I have created Azure App Insights instance to write the logs. I followed <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcore6" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcore6</a> to configure the .Net application.</p> <p>Added Microsoft.ApplicationInsights Nuget package</p> <p>Added connection string in appsettings</p> <p>Added services.AddApplicationInsightsTelemetry(); in startup.cs</p> <p>Running api from local pc I can see telemetry being logged in Visual Studio output.</p> <p>But when I deployed to Azure nothing is flowing into App Insights. Absolutely nothing.</p> <p>I am new to this and checked pod logs but dint find anything in it. The connection string is correct.</p> <p>From my local pc I tried to write to Actual App Insights. But although I can see telemetry in Visual Studio nothing is going to Azure. I am assuming because &quot;Accept data ingestion from public networks not connected through a Private Link Scope&quot; is false for the App Insight instance.So this is also not helping me to debug.I cannot change this setting.</p> <p>The Azure account is linked to On Premise network.</p> <p>Can someone point to me what could be the issue</p>
rakesh raman
<p>Microsoft.ApplicationInsights.AspNetCore was not working when configured with the connection string. When I switched to the instrumentation key, the logs started flowing. Odd, since Microsoft's recommendation is to use the connection string.</p>
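<p>For reference, a minimal sketch of the configuration that ended up working (the key value is a placeholder); <code>services.AddApplicationInsightsTelemetry();</code> picks it up from <code>appsettings.json</code>:</p> <pre><code>{
  &quot;ApplicationInsights&quot;: {
    &quot;InstrumentationKey&quot;: &quot;00000000-0000-0000-0000-000000000000&quot;
  }
}
</code></pre>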
rakesh raman
<p>I have grafana Dashboards, Pods drop down coming None within namespace, however we have pods running in namespace and pulling data prometheus.</p> <p>Screenshot:</p> <p><a href="https://i.stack.imgur.com/6BUzx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6BUzx.png" alt="enter image description here" /></a></p> <p>Query:</p> <pre><code> &quot;datasource&quot;: &quot;Prometheus&quot;, &quot;definition&quot;: &quot;&quot;, &quot;description&quot;: null, &quot;error&quot;: null, &quot;hide&quot;: 0, &quot;includeAll&quot;: false, &quot;label&quot;: &quot;Pod&quot;, &quot;multi&quot;: false, &quot;name&quot;: &quot;pod&quot;, &quot;options&quot;: [], &quot;query&quot;: { &quot;query&quot;: &quot;query_result(sum(container_memory_working_set_bytes{namespace=\&quot;$namespace\&quot;}) by (pod_name))&quot;, &quot;refId&quot;: &quot;Prometheus-pod-Variable-Query&quot; }, &quot;refresh&quot;: 1, &quot;regex&quot;: &quot;/pod_name=\\\&quot;(.*?)(\\\&quot;)/&quot;, &quot;skipUrlSync&quot;: false, &quot;sort&quot;: 0, &quot;tagValuesQuery&quot;: &quot;&quot;, &quot;tags&quot;: [], &quot;tagsQuery&quot;: &quot;&quot;, &quot;type&quot;: &quot;query&quot;, &quot;useTags&quot;: false </code></pre> <p>I am imported json code: <a href="https://grafana.com/grafana/dashboards/6879" rel="nofollow noreferrer">https://grafana.com/grafana/dashboards/6879</a></p>
Vidya
<p>Edit your dashboard's JSON:<br /> Rename &quot;pod_name&quot; to &quot;pod&quot; in the two places it appears (and save).</p> <p>It looks like this Grafana dashboard was created against an older Kubernetes version, and the metric label names have since changed.</p> <p>These older dashboards will probably also need similar edits, changing &quot;container_name&quot; to &quot;container&quot;.</p>
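<p>After the edit, the variable definition should look roughly like this (only the query and the regex change):</p> <pre><code>&quot;query&quot;: {
  &quot;query&quot;: &quot;query_result(sum(container_memory_working_set_bytes{namespace=\&quot;$namespace\&quot;}) by (pod))&quot;,
  &quot;refId&quot;: &quot;Prometheus-pod-Variable-Query&quot;
},
&quot;regex&quot;: &quot;/pod=\\\&quot;(.*?)(\\\&quot;)/&quot;,
</code></pre>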
siwasaki
<p>I'm trying to connect an external webflow page to our kubernetes cluster ingress on GCP GKE. Specifically, I want everything at <a href="http://www.domain.com" rel="nofollow noreferrer">www.domain.com</a> to go to the external webflow service, and everything at <a href="http://www.domain.com/app" rel="nofollow noreferrer">www.domain.com/app</a> to go to our local service in the cluster.</p> <p>I've seen this question <a href="https://stackoverflow.com/questions/65919773/ingress-nginx-proxy-to-outside-website-webflow-hosted">Ingress Nginx Proxy to Outside Website (Webflow hosted)</a> and followed it, but I couldn't get it working. I keep getting an error <code>Translation failed: invalid ingress spec: service &quot;default/external-service&quot; is type &quot;ExternalName&quot;, expected &quot;NodePort&quot; or &quot;LoadBalancer&quot;; service &quot;default/external-service&quot; is type &quot;ExternalName&quot;, expected &quot;NodePort&quot; or &quot;LoadBalancer&quot;</code></p> <p>Here's my setup</p> <p>External Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: external-service namespace: default spec: externalName: participant-homepage-9f8712.webflow.io ports: - port: 443 protocol: TCP targetPort: 443 sessionAffinity: None type: ExternalName status: loadBalancer: {} </code></pre> <p>Ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.gcp.kubernetes.io/pre-shared-cert: _, ingress.kubernetes.io/backends: _, ingress.kubernetes.io/forwarding-rule: _, ingress.kubernetes.io/https-forwarding-rule: _, ingress.kubernetes.io/https-target-proxy: _, ingress.kubernetes.io/preserve-host: &quot;false&quot; ingress.kubernetes.io/secure-backends: &quot;true&quot; ingress.kubernetes.io/ssl-cert: _, ingress.kubernetes.io/static-ip: _, ingress.kubernetes.io/target-proxy: _, ingress.kubernetes.io/url-map: _, networking.gke.io/managed-certificates: _, nginx.ingress.kubernetes.io/backend-protocol: HTTPS nginx.ingress.kubernetes.io/server-snippet: | proxy_ssl_name participant-homepage-9f8712.webflow.io; proxy_ssl_server_name on; nginx.ingress.kubernetes.io/upstream-vhost: participant-homepage-9f8712.webflow.io name: my-ingress namespace: default spec: backend: serviceName: external-service servicePort: 443 rules: - host: www.honeybeehub.xyz http: paths: - backend: serviceName: app-service servicePort: 80 path: /app/* pathType: ImplementationSpecific - backend: serviceName: external-service servicePort: 443 path: /* pathType: ImplementationSpecific status: loadBalancer: ingress: - ip: _._._._ </code></pre> <p>Any help would be greatly appreciated. Thank you!</p>
wei
<p>The reason why the steps on the question you quoted <a href="https://stackoverflow.com/questions/65919773/ingress-nginx-proxy-to-outside-website-webflow-hosted">Ingress Nginx Proxy to Outside Website (Webflow hosted)</a> are not working, is because that question focuses on EKS (Amazon Elastic Kubernetes Service). ExternalName Services are not supported in GCE Ingress, as you can see in the following <a href="https://stackoverflow.com/questions/53107348/error-creating-ingress-path-with-gce-externalname">question</a>. What I can recommend to you is to post it as a Feature Request on the <a href="https://www.google.com/url?q=https://cloud.google.com/support/docs/issue-trackers&amp;sa=D&amp;source=docs&amp;ust=1636750079733000&amp;usg=AOvVaw2ocSL1lAPc3gUomJhKCRJm" rel="nofollow noreferrer">Google's Issue tracker</a></p>
Nestor Daniel Ortega Perez
<p>As mentioned <a href="https://github.com/kubernetes/kube-state-metrics/issues/536#issue-358940186" rel="nofollow noreferrer">here</a>: &quot;Currently <code>namespace</code>, <code>pod</code> are default labels provided in the metrics.&quot;</p> <hr /> <p><code>kubectl -n mynamespace get pods --show-labels</code> show the label values that are defined in deployment yaml for Kubernetes</p> <hr /> <p>Goal is to use default label(<code>namespace</code> &amp; <code>pod</code> provided by kubernetes) values through Grafana dashboard's promQL, that prometheus monitor.</p> <pre><code>sum(container_memory_working_set_bytes{namespace=&quot;mynamespace&quot;,pod=~&quot;unknown&quot;}) by (pod) </code></pre> <hr /> <p>How to view the values of default label <code>pod</code> using <code>kubectl</code>?</p>
overexchange
<p>According to the link that you shared, <code>{namespace}</code> and <code>{pod}</code> are default labels provided in the metrics, they are referring to the exposed metrics included in the <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics (KSM) service.</a></p> <p>kube-state-metrics (KSM) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. The exposed metrics are detailed <a href="https://github.com/kubernetes/kube-state-metrics/tree/master/docs" rel="nofollow noreferrer">in this document</a>.</p> <p>In the following links, you can find the related metric for <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md" rel="nofollow noreferrer">Pods</a> and <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/namespace-metrics.md" rel="nofollow noreferrer">namespace</a>.</p> <p>Speaking about the default labels for pods, you need to create a <a href="https://kubernetes.io/blog/2021/06/21/writing-a-controller-for-pod-labels/" rel="nofollow noreferrer">Pod label controller</a> or indicate the label in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/pod#pod-templates" rel="nofollow noreferrer">Pod Template</a>.</p> <p>If you don't explicitly specify labels for the controller, Kubernetes will use the pod template label as the default label for the controller itself. The pod selector will also default to pod template labels if unspecified.</p> <p>If you want to know more about best practices for labels, please <a href="https://www.replex.io/blog/9-additional-best-practices-for-working-with-kubernetes-labels-and-label-selectors" rel="nofollow noreferrer">follow this link</a>. If you want to know more about Labels and selector, <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">follow this link</a>. More about <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates" rel="nofollow noreferrer">Pod Template here</a>.</p>
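<p>To answer the <code>kubectl</code> part directly: the <code>pod</code> and <code>namespace</code> labels on those metrics are simply the pod name and namespace, not Kubernetes labels you define yourself. A quick sketch to compare both (the namespace name is a placeholder):</p> <pre><code># pod names in the namespace (these values appear as the pod=&quot;...&quot; metric label)
kubectl get pods -n mynamespace -o custom-columns=NAMESPACE:.metadata.namespace,POD:.metadata.name

# the Kubernetes labels attached to each pod, for comparison
kubectl get pods -n mynamespace --show-labels
</code></pre>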
Ismael Clemente Aguirre
<p>I am trying to understand the difference between Nginx ingress controller</p> <pre><code>kind:service </code></pre> <p>vs</p> <pre><code>kind: Ingress </code></pre> <p>vs</p> <pre><code>kind: configMap </code></pre> <p>in Kubernetes but a little unclear. Is kind: Service same as Kind: Ingress in Service and Ingress?</p>
Jainam Shah
<p><strong>kind</strong> represents the type of Kubernetes objects to be created while using the <strong>yaml</strong> file.</p> <blockquote> <p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/" rel="nofollow noreferrer">Kubernetes objects</a> are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:</p> <ul> <li>What containerized applications are running (and on which nodes)</li> <li>The resources available to those applications</li> <li>The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance</li> </ul> <p><a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMap</a> Object: A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> Object: An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> Object: In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).</p> </blockquote>
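<p>A short side-by-side sketch may make the distinction concrete; the <code>kind</code> field is what tells the API server which object type the manifest describes (all names are placeholders):</p> <pre><code>apiVersion: v1
kind: Service            # exposes a set of Pods behind a stable address
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress            # routes external HTTP(S) traffic to Services
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
---
apiVersion: v1
kind: ConfigMap          # stores non-confidential configuration data
metadata:
  name: my-config
data:
  LOG_LEVEL: info
</code></pre>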
Ismael Clemente Aguirre
<p>I created PV as follows:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: foo-pv spec: storageClassName: &quot;my-storage&quot; claimRef: name: foo-pvc namespace: foo </code></pre> <p>Why we need to give storageClassName in PV? When Storage class creates PV, why to give storageClassName in PV?</p> <p>Can someone help me to understand this?</p>
Vasu Youth
<p>According to Kubernetes Official documentation:</p> <p><strong>Why we need to give storageClassName in PV?</strong></p> <blockquote> <p>Each <em>StorageClass</em> contains the fields <em>provisioner, parameters, and reclaimPolicy</em>, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.</p> <p>The name of a StorageClass object is significant, and is how users can request a particular class. Administrators set the name and other parameters of a class when first creating <em>StorageClass</em> objects, and the objects cannot be updated once they are created.</p> </blockquote> <p><strong>When Storage class creates PV, why to give storageClassName in PV?</strong></p> <blockquote> <p>A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using <em>Storage Classes</em>. It is a resource in the cluster just like a node is a cluster resource.</p> <p>Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the <em>StorageClass</em> resource.</p> </blockquote> <p>If you wish to know more about Storage class resources, please follow <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">this link</a>, or <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">this one</a> to know more about Persistent Volumes.</p>
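<p>A short sketch of how <code>storageClassName</code> ties the objects together; the class name matches your example, while the provisioner, capacity and hostPath are illustrative assumptions:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage
provisioner: kubernetes.io/no-provisioner   # static provisioning for this sketch
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  storageClassName: my-storage     # must match the class the PVC requests
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  storageClassName: my-storage     # only PVs of this class can satisfy the claim
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre>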
Ismael Clemente Aguirre
<p>I have a cli app written in NodeJS [not by me].</p> <p>I want to deploy this on a k8s cluster like I have done many times with web servers.</p> <p>I have not deployed something like this before, so I am in a kind of a loss.</p> <p>I have worked with dockerized cli apps [like Terraform] before, and i know how to use them in a CICD.</p> <p>But how should I deploy them in a pod so they are always available for usage from another app in the cluster?</p> <p>Or is there a completely different approach that I need to consider?</p> <p>#EDIT#</p> <p>I am using this in the end of my Dockerfile ..</p> <pre><code># the main executable ENTRYPOINT [&quot;sleep&quot;, &quot;infinity&quot;] # a default command CMD [&quot;mycli help&quot;] </code></pre> <p>That way the pod does not restart and the cli inside is waiting for commands like <code>mycli do this</code></p> <p>Is it a <code>hacky</code> way that is frowned upon or a legit solution?</p>
Kostas Demiris
<p>Your edit is one solution, another one if you do not want or cannot change the Docker image is to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">Define a Command for a Container</a> to loop infinitely, this would achieve the same as the Dockerfile ENTRYPOINT but without having to rebuild the image.</p> <p>Here's an example of such implementation:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: command-demo labels: purpose: demonstrate-command spec: containers: - name: command-demo-container image: debian command: [&quot;/bin/sh&quot;, &quot;-ec&quot;, &quot;while :; do echo '.'; sleep 5 ; done&quot;] restartPolicy: OnFailure </code></pre> <p>As for your question about if this is a legit solution, this is hard to answer; I would say it depends on what your application is designed to do. Kubernetes Pods are designed to be ephemeral, so a good solution would be one that is running until the job is completed; for a web server, for example, the job is never completed because it should be constantly listening to requests.</p>
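<p>With either approach the container just idles, and the CLI can then be invoked on demand; assuming the pod above runs your CLI image (rather than plain debian) and uses the <code>mycli</code> name from the question, that would look like:</p> <pre><code>kubectl exec -it command-demo -- mycli do this
</code></pre>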
Gabriel Robledo Ahumada
<p>This is my configMap. I'm trying to specify [mysqld] config, but when I use this file alone with</p> <pre><code>helm upgrade -i eramba bitnami/mariadb --set auth.rootPassword=eramba,auth.database=erambadb,initdbScriptsConfigMap=eramba,volumePermissions.enabled=true,primary.persistence.existingClaim=eramba-storage --namespace eramba-1 --set mariadb.volumePermissions.enabled=true </code></pre> <p>I don't see the specified configurations in my db pod; however, I do see the c2.8.1.sql file applied tho.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: eramba namespace: eramba-1 data: my.cnf: |- [mysqld] max_connections = 2000 sql_mode=&quot;&quot; max_allowed_packet=&quot;128000000&quot; innodb_lock_wait_timeout=&quot;200&quot; c2.8.1.sql: | CREATE DATABASE IF NOT EXISTS erambadb; #create user 'erambauser'@'eramba-mariadb' identified by 'erambapassword'; #grant all on erambadb.* to 'erambauser'@'eramba-mariadb'; #flush privileges; USE erambadb; # # SQL Export # Created by Querious (201067) # Created: 22 October 2019 at 17:39:48 CEST # Encoding: Unicode (UTF-8) # SET @PREVIOUS_FOREIGN_KEY_CHECKS = @@FOREIGN_KEY_CHECKS; SET FOREIGN_KEY_CHECKS = 0; ..... </code></pre>
Bryan
<p>If you look at <a href="https://github.com/bitnami/charts/blob/master/bitnami/mariadb/values.yaml" rel="nofollow noreferrer">values.yaml</a> file for MariaDB helm chart, you can see 3 types of ConfigMap:</p> <ul> <li>initdbScriptsConfigMap - to supply Init scripts to be run at first boot of DB instance</li> <li>primary.existingConfigmap - to control MariaDB Primary instance configuration</li> <li>secondary.existingConfigmap - to control MariaDB Secondary instance configuration</li> </ul> <p>Thus, each of them is intended for the specific purpose and it is not a good idea to mix these settings in one ConfigMap.</p> <p>I recommend you to create new ConfigMap eramba2 for custom <code>my.cnf</code> with all necessary values (not only new) as below.</p> <pre><code> apiVersion: v1 kind: ConfigMap metadata: name: eramba2 namespace: eramba-1 data: my.cnf: |- [mysqld] skip-name-resolve explicit_defaults_for_timestamp max_connections = 2000 sql_mode=&quot;&quot; innodb_lock_wait_timeout=&quot;200&quot; basedir=/opt/bitnami/mariadb plugin_dir=/opt/bitnami/mariadb/plugin port=3306 socket=/opt/bitnami/mariadb/tmp/mysql.sock tmpdir=/opt/bitnami/mariadb/tmp max_allowed_packet=128000000 bind-address=:: pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid log-error=/opt/bitnami/mariadb/logs/mysqld.log character-set-server=UTF8 collation-server=utf8_general_ci [client] port=3306 socket=/opt/bitnami/mariadb/tmp/mysql.sock default-character-set=UTF8 plugin_dir=/opt/bitnami/mariadb/plugin [manager] port=3306 socket=/opt/bitnami/mariadb/tmp/mysql.sock pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid </code></pre> <p>Create eramba2 ConfigMap:</p> <pre><code>kubectl create -f eramba2.yaml </code></pre> <p>And then create MariaDB with helm using new ConfigMap eramba2:</p> <pre><code>helm upgrade -i eramba bitnami/mariadb --set auth.rootPassword=eramba,auth.database=erambadb,initdbScriptsConfigMap=eramba,volumePermissions.enabled=true,primary.persistence.existingClaim=eramba-storage,mariadb.volumePermissions.enabled=true,primary.existingConfigmap=eramba2 --namespace eramba-1 </code></pre> <p>Connect to pod:</p> <pre><code>kubectl exec -it eramba-mariadb-0 -- /bin/bash </code></pre> <p>Check my.cnf file:</p> <pre><code>cat /opt/bitnami/mariadb/conf/my.cnf </code></pre>
Andrew Skorkin
<p>I have an application that is supposed to expose 2 x ports and the application does not have the default healthcheck endpoint of <code>/</code> that returns <code>200</code>, so at the moment, I supply a custom healthcheck endpoint just for 1 port. I haven't exposed the other port yet as I don't know how to provide another custom healthcheck endpoint for the same application.</p> <p>This is how my Terraform configuration looks like.</p> <pre><code>resource &quot;kubernetes_deployment&quot; &quot;core&quot; { metadata { name = &quot;core&quot; labels = { app = &quot;core&quot; } } spec { replicas = 1 selector { match_labels = { app = &quot;core&quot; } } template { metadata { labels = { app = &quot;core&quot; } } spec { container { name = &quot;core&quot; image = &quot;asia.gcr.io/admin/core:${var.app_version}&quot; port { container_port = 8069 } readiness_probe { http_get { path = &quot;/web/database/selector&quot; port = &quot;8069&quot; } initial_delay_seconds = 15 period_seconds = 30 } image_pull_policy = &quot;IfNotPresent&quot; } } } } } resource &quot;kubernetes_service&quot; &quot;core_service&quot; { metadata { name = &quot;core-service&quot; } spec { type = &quot;NodePort&quot; selector = { app = &quot;core&quot; } port { port = 8080 protocol = &quot;TCP&quot; target_port = &quot;8069&quot; } } } </code></pre> <p>How do I tell GKE to expose the other port (8072) and use a custom healthcheck endpoint for both ports?</p>
billydh
<p>There are a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#configuring_ingress_features" rel="nofollow noreferrer">GKE Ingress feature</a> called <code>FrontendConfig</code> and <code>BackendConfig</code> <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">custom resource definitions (CRDs)</a> that allow you to further customize the load balancer, you can use a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#unique_backendconfig_per_service_port" rel="nofollow noreferrer">Unique BackendConfig per Service port</a> to specify a custom <code>BackendConfig</code> for a specific port or ports of a Service or <code>MultiClusterService</code>, using a key that matches the port's name or port's number. The Ingress controller uses the specific <code>BackendConfig</code> when it creates a load balancer backend service for a referenced Service port</p> <p>When using a <code>BackendConfig</code> to provide a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health" rel="nofollow noreferrer">custom load balancer health check</a>, the port number you use for the load balancer's health check can differ from the Service's <code>spec.ports[].port</code> number, here's an example of the service and the custom health check:</p> <p>Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: cloud.google.com/backend-config: '{&quot;ports&quot;: { &quot;service-reference-a&quot;:&quot;backendconfig-reference-a&quot;, &quot;service-reference-b&quot;:&quot;backendconfig-reference-b&quot; }}' spec: ports: - name: port-name-1 port: port-number-1 protocol: TCP targetPort: 50000 - name: port-name-2 port: port-number-2 protocol: TCP targetPort: 8080 </code></pre> <p>Custom Health Check:</p> <pre><code>apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: healthCheck: checkIntervalSec: interval timeoutSec: timeout healthyThreshold: health-threshold unhealthyThreshold: unhealthy-threshold type: protocol requestPath: path port: port </code></pre>
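<p>Translated back to the Terraform resource from the question, exposing the second port and pointing each Service port at its own <code>BackendConfig</code> might look like the sketch below. The port names, BackendConfig names and the 8072 target are assumptions; the BackendConfig objects themselves (with their custom <code>healthCheck</code> paths) still have to be created separately, for example with <code>kubectl</code> or a <code>kubernetes_manifest</code> resource, and the Deployment would also need a second <code>port { container_port = 8072 }</code> block.</p> <pre><code>resource &quot;kubernetes_service&quot; &quot;core_service&quot; {
  metadata {
    name = &quot;core-service&quot;
    annotations = {
      &quot;cloud.google.com/backend-config&quot; = jsonencode({
        ports = {
          web   = &quot;core-web-backendconfig&quot;
          admin = &quot;core-admin-backendconfig&quot;
        }
      })
    }
  }

  spec {
    type = &quot;NodePort&quot;
    selector = {
      app = &quot;core&quot;
    }

    port {
      name        = &quot;web&quot;
      port        = 8080
      protocol    = &quot;TCP&quot;
      target_port = &quot;8069&quot;
    }

    port {
      name        = &quot;admin&quot;
      port        = 8081
      protocol    = &quot;TCP&quot;
      target_port = &quot;8072&quot;
    }
  }
}
</code></pre>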
arcabah
<p>I have twenty different repositories for the twenty separate services to be deployed on an OKD cluster. I want to create separate eventListener for each repository/service. The first eventListener and all it's related components (Deployment, Pod, Service) are successfully created and the eventListener status becomes active, but on deploying more eventListener, eventListener is created but none of the related resources like (Deployment, Pod, and Service) are created and the status is Empty. Here I am sharing the eventListener YAML file.</p> <pre><code>--- apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: el-name namespace: namespace-name spec: serviceAccountName: pipeline triggers: - triggerRef: trigger-reference-name </code></pre>
Ray
<p>I fixed it by updating my Trigger code, to filter the repository names, so that just the pushed repository can start the pipeline.</p> <pre><code> --- apiVersion: triggers.tekton.dev/v1beta1 kind: Trigger metadata: name: trigger-name namespace: namespace-name spec: nodeSelector: node-role.kubernetes.io/worker-serial=worker2 spec: serviceAccountName: pipeline interceptors: - ref: name: &quot;cel&quot; params: - name: &quot;overlays&quot; value: - key: X-Hub-Signature expression: &quot;1234567&quot; - name: &quot;filter&quot; value: &quot;header.match('X-Event-Key', 'repo:push') &amp;&amp; body.push.changes[0].new.name=='dev'&quot; - name: &quot;filter&quot; value: &quot;header.match('X-Event-Key', 'repo:push') &amp;&amp; body.repository.full_name=='org-name/service-name'&quot; bindings: - ref: binding-sample template: ref: tt-sample </code></pre> <p>Thanks @SYN for some useful hints.</p>
Ray
<p>I have 2 pods that are meant to send logs to Elastic search. Logs in /var/log/messages get sent but some reason service_name.log doesn't get sent - I think it is due to the configuration for Elastic search. There is a .conf file in these 2 pods that handle the connection to Elastic Search.</p> <p>I want to make changes to test if this is indeed the issue. I am not sure if the changes take effect as soon as I edit the file. Is there a way to restart/update the pod without losing changes I might make to this file?</p>
HollowDev
<p>To store non-confidential data as a configuration file in a volume, you could use <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMaps</a>.</p> <p>Here is an example of a Pod that mounts a ConfigMap in a volume:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: &quot;/etc/foo&quot; readOnly: true volumes: - name: foo configMap: name: myconfigmap </code></pre>
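<p>If the <code>.conf</code> file already exists in your repository or on disk, it can be turned into a ConfigMap directly, and a rollout restart then applies the change to the pods (the file, ConfigMap and Deployment names below are placeholders):</p> <pre><code># create the ConfigMap from the existing config file
kubectl create configmap myconfigmap --from-file=my-es-shipper.conf

# after editing the ConfigMap, restart the workload so the pods pick up the new file
kubectl rollout restart deployment/my-log-shipper
</code></pre>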
Gabriel Robledo Ahumada
<p>I am new to movetokube tool. I am struggling to understand how the move2kube collect command works. The <a href="https://move2kube.konveyor.io/" rel="nofollow noreferrer">web site</a> doesn't have any documentation on this command which is very surprising. I want to get all the applications installed in the Cloud Foundary cluster and I hope move2kube does this through move2kube collect command (or not?). I am not sure sure whether I have to execute the move2kube command on the Cloud Foundary cluster or K8 cluster. Please help!</p> <p>I am executing the following move2kube command on a CF cluster</p> <blockquote> <p>move2kube collect</p> </blockquote> <p>I see the following error</p> <pre><code>INFO[0000] Begin collection INFO[0000] [*collector.ClusterCollector] Begin collection WARN[0001] Error while fetching storage classes using command [/usr/local/bin/kubectl get sc -o yaml] ERRO[0001] API request for server-group list failed WARN[0001] Failed to retrieve preferred group information from cluster WARN[0001] Failed to collect using the API. Error: &quot;Get \&quot;https://cluster02.someserver.com:8443/api?timeout=32s\&quot;: failed to refresh token: oauth2: cannot fetch token: 401 \nResponse: {\&quot;error\&quot;:\&quot;unauthorized\&quot;,\&quot;error_description\&quot;:\&quot;Bad credentials\&quot;}&quot; . Falling back to using the CLI. ERRO[0001] Error while running kubectl api-resources: exit status 1 WARN[0001] Failed to collect using the CLI. Error: &quot;exit status 1&quot; WARN[0001] [*collector.ClusterCollector] failed. Error: &quot;exit status 1&quot; INFO[0001] [*collector.ImagesCollector] Begin collection INFO[0001] [*collector.ImagesCollector] Done INFO[0001] [*collector.CFContainerTypesCollector] Begin collection WARN[0002] Error while getting buildpacks : exit status 1 WARN[0002] Unable to collect buildpacks from cf instance : exit status 1 ERRO[0002] exit status 1 WARN[0002] Unable to find used buildpacks : exit status 1 INFO[0004] [*collector.CFContainerTypesCollector] Done INFO[0004] [*collector.CfAppsCollector] Begin collection ERRO[0004] exit status 1 WARN[0004] [*collector.CfAppsCollector] failed. Error: &quot;exit status 1&quot; INFO[0004] Collection done INFO[0004] Collect Output in [/home/mytest/move2kube/samples/m2k_collect]. Copy this directory into the source directory to be used for planning. </code></pre>
KurioZ7
<p>From <a href="https://github.com/konveyor/move2kube/blob/main/USAGE.md#usage" rel="nofollow noreferrer">the move2kube GitHub page</a>:</p> <blockquote> <p><strong>Usage</strong></p> <p><strong>One step Simple approach</strong></p> <p><code>move2kube transform -s src</code></p> <p><strong>Two step involved approach</strong></p> <ol> <li><em>Plan</em> : Place source code in a directory say <code>src</code> and generate a plan. For example, you can use the <code>samples</code> directory. <code>move2kube plan -s src</code></li> <li><em>Transform</em> : In the same directory, invoke the below command. <code>move2kube transform</code></li> </ol> <p>Note: If information about any runtime instance say cloud foundry or kubernetes cluster needs to be collected use <code>move2kube collect</code>. You can place the collected data in the <code>src</code> directory used in the plan.</p> </blockquote> <p>And from <a href="https://ashokponkumar.medium.com/introducing-konveyor-move2kube-f3b28e78cd22" rel="nofollow noreferrer">the article <strong>Introducing Konveyor Move2Kube</strong> on Medium</a>:</p> <blockquote> <p><strong>Move2Kube Usage</strong></p> <p>Move2Kube takes as input the source artifacts and outputs the target deployment artifacts.</p> <p>Move2Kube accomplishes the above activities using a 3 step approach of</p> <ol> <li><em>Collect</em> : If runtime inspection is required, <code>move2kube collect</code> will analyse your runtime environment such as cloud foundry or kubernetes, extract the required metadata and output them as yaml files in <code>m2k_collect</code> folder.</li> </ol> <p>...</p> </blockquote>
Andrew Skorkin
<p>When I try to create ingress controller on my local machine(mac) using DockerDesktop, I get this error:</p> <pre><code>error when creating ingress/default.yaml&quot;: ingresses.networking.k8s.io &quot;default-test&quot; is forbidden: Internal error occurred: 2 default IngressClasses were found, only 1 allowed </code></pre> <p>And another error that I faced today is when I create NodePort service and exposed it on port 30200 for example, and when I try to connect to localhost:30200, I get this error:</p> <pre><code>curl: (7) Failed to connect to localhost port 30036: Connection refused </code></pre> <p>The only way to connect to the service is using port forwarding.</p>
Petar Petrov
<p>It seems like you have multiple default ingress classes. Check it by running:</p> <pre><code>$ kubectl get ingressclass </code></pre> <p>If you do not specify a particular ingress class when creating your Ingress resource, Kubernetes falls back on the default ingress class. However, because you have two default ingress classes, that fallback fails. For more info please refer to the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers" rel="nofollow noreferrer">docs</a>. You may want to explicitly set an ingress class on your Ingress, or make sure your cluster has only a single default ingress class.</p>
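<p>One of the classes can be un-marked as default by falsifying its <code>is-default-class</code> annotation; a sketch, assuming the extra default class is called <code>nginx</code>. Alternatively, set <code>spec.ingressClassName</code> explicitly in your Ingress manifest.</p> <pre><code>kubectl patch ingressclass nginx \
  -p '{&quot;metadata&quot;: {&quot;annotations&quot;: {&quot;ingressclass.kubernetes.io/is-default-class&quot;: &quot;false&quot;}}}'
</code></pre>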
yezper
<p>I am creating a ClusterIssuer and a Certificate. However, there is <em><strong>no</strong></em> <code>tls.crt</code> on the secret! What I am doing wrong?</p> <p>The clusterissuer looks like is running fine, but neither the keys has the crt</p> <pre><code>apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-myapp-clusterissuer namespace: cert-manager spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: wildcard-myapp-com solvers: - dns01: cloudDNS: serviceAccountSecretRef: name: clouddns-service-account key: dns-service-account.json project: app selector: dnsNames: - '*.myapp.com' - myapp.com --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: myapp-com-tls namespace: cert-manager spec: secretName: myapp-com-tls issuerRef: name: letsencrypt-myapp-issuer kind: ClusterIssuer commonName: '*.myapp.com' dnsNames: - 'myapp.com' - '*.myapp.com' </code></pre> <p><a href="https://i.stack.imgur.com/s65zp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s65zp.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/5ZlNZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5ZlNZ.png" alt="enter image description here" /></a></p>
Rodrigo
<p>With the information provided it is very hard to troubleshoot this, you could be hitting this <a href="https://github.com/cert-manager/cert-manager/issues/2111" rel="nofollow noreferrer">bug</a>.</p> <p>You can start troubleshooting this kind of issues by following this procedure:</p> <ol> <li>Get the certificate request name:</li> </ol> <pre><code>kubectl -n &lt;namespace&gt; describe certificate myapp-com-tls ... Created new CertificateRequest resource &quot;myapp-com-tls-xxxxxxx&quot; </code></pre> <ol start="2"> <li>The request will generate an order, get the order name with the command:</li> </ol> <pre><code>kubectl -n &lt;namespace&gt; describe certificaterequests myapp-com-tls-xxxxxxx … Created Order resource &lt;namespace&gt;/myapp-com-tls-xxxxxxx-xxxxx </code></pre> <ol start="3"> <li>The order will generate a challenge resource, get that with:</li> </ol> <pre><code>kubectl -n &lt;namespace&gt; describe order myapp-com-tls-xxxxxxx-xxxxx … Created Challenge resource &quot;myapp-com-tls-xxxxxxx-xxxxx-xxxxx&quot; for domain &quot;yourdomain.com&quot; </code></pre> <ol start="4"> <li>Finally, with the challenge name, you can get the status of the validation for you certificate:</li> </ol> <pre><code>kubectl -n &lt;namespace&gt; describe challenges myapp-com-tls-xxxxxxx-xxxxx-xxxxx ... Reason: Successfully authorized domain ... Normal Started 2m45s cert-manager Challenge scheduled for processing Normal Presented 2m45s cert-manager Presented challenge using http-01 challenge mechanism Normal DomainVerified 2m22s cert-manager Domain &quot;yourdomain.com&quot; verified with &quot;http-01&quot; validation </code></pre> <p>If the status of the challenge is other than <code>DomainVerified</code>, then something went wrong while requesting the certificate from let's encrypt and will see a reason in the output.</p>
Gabriel Robledo Ahumada
<p>I'm new to Kubernetes and trying to point all requests to the domain to another local service.</p> <p>Both applications are running in the same cluster under a different namespace</p> <p>Example domains <code>a.domain.com</code> hosting first app <code>b.domain.com</code> hosting the second app</p> <p>When I do a <code>curl</code> request from the first app to the second app (<code>b.domain.com</code>). it travels through the internet to the second app.</p> <p>Usually what I could do is in <code>/etc/hosts</code> point <code>b.domain.com</code> to localhost.</p> <p>What do we do in this case in Kubernetes?</p> <p>I was looking into Network Policies but I'm not sure if it correct approach.</p> <p>Also As I understood we could just call <code>service name.namespace:port</code> from the first app. But I would like to keep the full URL.</p> <p>Let me know if you need more details to help me solve this.</p>
SergkeiM
<p>The way to do it is by using the <a href="https://gateway-api.sigs.k8s.io" rel="nofollow noreferrer">Kubernetes Gateway API</a>. Now, it is true that you can deploy your own implementation since this is an Open Source project, but there are a lot of solutions already using it and it would be much easier to learn how to implement those instead.</p> <p>For what you want, <a href="https://istio.io" rel="nofollow noreferrer">Istio</a> would fit your needs. If your cluster is hosted in a Cloud environment, you can take a look at <a href="https://cloud.google.com/anthos" rel="nofollow noreferrer">Anthos</a>, which is the managed version of Istio.</p> <p>Finally, take a look at the blog <a href="https://cloud.google.com/blog/products/networking/welcome-to-the-service-mesh-era-introducing-a-new-istio-blog-post-series" rel="nofollow noreferrer">Welcome to the service mesh era</a>, since the traffic management between services is one of the elements of the service mesh paradigm, among others like monitoring, logging, etc.</p>
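<p>As a concrete illustration of the traffic-management piece, once a mesh such as Istio is installed, a VirtualService can keep calls to <code>b.domain.com</code> inside the cluster: the sidecar intercepts the outbound request and routes it to the local Service instead of sending it out to the internet. The host, namespace and service names below are assumptions:</p> <pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: b-domain-internal
  namespace: second-app-namespace
spec:
  hosts:
  - b.domain.com
  http:
  - route:
    - destination:
        host: second-app.second-app-namespace.svc.cluster.local
        port:
          number: 80
</code></pre>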
Gabriel Robledo Ahumada
<p>In my POD, I wanted to restrict ALL my containers to read-only file systems with <em><strong>securityContext: readOnlyRootFilesystem: true</strong></em><br /> example (note: yaml reduced for brevity)</p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: run: server123 name: server123 spec: securityContext: readOnlyRootFilesystem: true containers: - image: server1-image name: server1 - image: server2-image name: server2 - image: server3-image name: server3 </code></pre> <p>this will result in:</p> <blockquote> <p>error: error validating &quot;server123.yaml&quot;: error validating data: ValidationError(Pod.spec.securityContext): unknown field &quot;readOnlyRootFilesystem&quot; in io.k8s.api.core.v1.PodSecurityContext; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p>instead I have to configure as:</p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: run: server123 name: server123 spec: containers: - image: server1-image name: server1 securityContext: readOnlyRootFilesystem: true - image: server2-image name: server2 securityContext: readOnlyRootFilesystem: true - image: server3-image name: server3 securityContext: readOnlyRootFilesystem: true </code></pre> <p>Is there a way to set this security restriction ONCE for all containers? If not why not?</p>
siwasaki
<p>In Kubernetes, you can configure <em><strong>securityContext</strong></em> at the pod and/or container level; containers inherit the pod-level settings but can override them in their own.</p> <p>The configuration options for pods and containers do not, however, overlap - you can only set specific ones at each level:<br /> Container level: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#securitycontext-v1-core" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#securitycontext-v1-core</a><br /> Pod level: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#podsecuritycontext-v1-core" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#podsecuritycontext-v1-core</a></p> <p>It's not documented clearly what can be inherited and what cannot (and why!); you have to read through both lists and compare. I would assume that a Pod's securityContext would allow, say, <em><strong>readOnlyRootFilesystem: true</strong></em> and various <em><strong>capabilities</strong></em> to be set once and not have to be replicated in each underlying container's securityContext, but <strong>PodSecurityContext</strong> does not allow this!</p> <p>This would be particularly useful when (re)configuring various workloads to adhere to PodSecurityPolicies.</p> <p>I wonder why a Pod's <em><strong>securityContext</strong></em> configuration is labelled as such, and not instead as <em><strong>podSecurityContext</strong></em>, which is what it actually represents.</p>
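<p>In practice this means splitting the settings: the fields accepted by <code>PodSecurityContext</code> can be set once at the pod level, while <code>readOnlyRootFilesystem</code> still has to be repeated per container, e.g.:</p> <pre><code>spec:
  securityContext:            # pod level: inherited by all containers
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - image: server1-image
    name: server1
    securityContext:          # container level only
      readOnlyRootFilesystem: true
  - image: server2-image
    name: server2
    securityContext:
      readOnlyRootFilesystem: true
  - image: server3-image
    name: server3
    securityContext:
      readOnlyRootFilesystem: true
</code></pre>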
siwasaki
<p>Small Kubernetes API question please.</p> <p>(This is not helm related btw)</p> <p>I am just running a basic <code>kubectl get --raw &quot;/apis/custom.metrics.k8s.io/v1beta1&quot;</code></p> <p>However, I got the following as result: <code>Error from server (NotFound): the server could not find the requested resource</code></p> <p>I am a bit confused here, hope this technical question is not too much of a trouble.</p> <p>What does it even mean?</p> <p>Is it because I failed to create something? (I never created this &quot;custom.metrics.k8s.io&quot; myself)</p> <p>Maybe some kind of credential issues?</p> <p>How can I root cause, troubleshoot and fix this please?</p> <p>Thank you!</p>
PatPanda
<p>You need to register the custom metrics API with an <code>APIService</code> object, for example:</p> <pre><code>apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: prometheus-adapter
    namespace: monitoring
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
</code></pre> <p>This works with kube-prometheus. You may have to change <code>spec.service.name</code> and <code>spec.service.namespace</code> to whatever you use as a monitoring service.</p> <p>I found this <a href="https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/deploy/manifests/custom-metrics-apiservice.yaml" rel="nofollow noreferrer">here</a>, but had to change the version of <code>apiregistration.k8s.io</code> from <code>v1beta1</code> to <code>v1</code>.</p>
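<p>Once the APIService exists and the backing adapter pods are healthy, the registration can be checked with:</p> <pre><code>kubectl get apiservice v1beta1.custom.metrics.k8s.io
kubectl get --raw &quot;/apis/custom.metrics.k8s.io/v1beta1&quot; | jq .
</code></pre>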
leonnicolas
<p>I have a k8s cronjob run my docker image <code>transaction-service</code>.</p> <p>It starts and gets its job done successfully. When it's over, I expect the pod to terminate but... <code>istio-proxy</code> still lingers there:</p> <p><a href="https://i.stack.imgur.com/NdnGQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NdnGQ.png" alt="containers" /></a></p> <p>And that results in:</p> <p><a href="https://i.stack.imgur.com/qehiJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qehiJ.png" alt="unready pod" /></a></p> <p>Nothing too crazy, but I'd like to fix it.</p> <p>I know I should call <code>curl -X POST http://localhost:15000/quitquitquit</code></p> <p>But I don't know where and how. I need to call that quitquitquit URL only when transaction-service is in a completed state. I read about <code>preStop</code> lifecycle hook, but I think I need more of a <code>postStop</code> one. Any suggestions?</p>
Fabio B.
<p>You have a few options here:</p> <ol> <li>On your job/cronjob spec, add the following lines and your job immediately after:</li> </ol> <pre><code>command: [&quot;/bin/bash&quot;, &quot;-c&quot;] args: - | trap &quot;curl --max-time 2 -s -f -XPOST http://127.0.0.1:15020/quitquitquit&quot; EXIT while ! curl -s -f http://127.0.0.1:15020/healthz/ready; do sleep 1; done echo &quot;Ready!&quot; &lt; your job &gt; </code></pre> <ol start="2"> <li>Disable Istio injection at the Pod level in your Job/Cronjob definition:</li> </ol> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: ... spec: ... jobTemplate: spec: template: metadata: annotations: # disable istio on the pod due to this issue: # https://github.com/istio/istio/issues/11659 sidecar.istio.io/inject: &quot;false&quot; </code></pre> <p>Note: The annotation should be on the Pod's template, not on the Job's template.</p>
Gabriel Robledo Ahumada
<p>I have the below pipeline. </p> <pre><code>pipeline { agent any environment { PROJECT_ID = "*****" IMAGE = "gcr.io/$PROJECT_ID/node-app" BRANCH_NAME_NORMALIZED = "${BRANCH_NAME.toLowerCase().replace(" / ", " _ ")}" } stages { stage('Build') { steps { sh ' docker build -t ${IMAGE}:${BRANCH_NAME_NORMALIZED} . ' } } stage('Push') { steps { withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) { sh("gcloud auth activate-service-account --key-file=${GC_KEY}") } sh ' gcloud auth configure-docker ' sh ' docker push $IMAGE:${BRANCH_NAME_NORMALIZED} ' } } stage('Deploy') { steps { withDockerContainer(image: "gcr.io/google.com/cloudsdktool/cloud-sdk", toolName: 'latest') { withCredentials([file(credentialsId: 'jenkins_secret', variable: 'GC_KEY')]) { sh("gcloud auth activate-service-account --key-file=${GC_KEY}") sh("gcloud container clusters get-credentials k8s --region us-central1 --project ${DEV_PROJECT}") sh("kubectl get pods") } } } } } } </code></pre> <p>In Deploy stage it gives the following error : </p> <blockquote> <p>gcloud auth activate-service-account --key-file=**** WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2020.02.05]: Permission denied.</p> <p>Please verify that you have permissions to write to the parent directory.) ERROR: (gcloud.auth.activate-service-account) Could not create directory [/.config/gcloud]: Permission denied. Please verify that you have permissions to write to the parent directory.</p> </blockquote> <p>I can't understand where this command wants to create a directory, docker container or in Host machine? Have you got any similar problem ? </p>
Amir Damirov
<p>You can set where gcloud stores it's configs using the environment variable CLOUDSDK_CONFIG</p> <pre><code>environment { CLOUDSDK_CONFIG = &quot;${env.WORKSPACE}&quot; } </code></pre> <p>I had the same problem and that worked for me.</p>
schulz
<p>How to access Kubernetes worker node labels from the container/pod running in the cluster? Labels are set on the worker node as the yaml output of this kubectl command launched against this Azure AKS worker node shows :</p> <pre><code>$ kubectl get nodes aks-agentpool-39829229-vmss000000 -o yaml apiVersion: v1 kind: Node metadata: annotations: node.alpha.kubernetes.io/ttl: &quot;0&quot; volumes.kubernetes.io/controller-managed-attach-detach: &quot;true&quot; creationTimestamp: &quot;2021-10-15T16:09:20Z&quot; labels: agentpool: agentpool beta.kubernetes.io/arch: amd64 beta.kubernetes.io/instance-type: Standard_DS2_v2 beta.kubernetes.io/os: linux failure-domain.beta.kubernetes.io/region: eastus failure-domain.beta.kubernetes.io/zone: eastus-1 kubernetes.azure.com/agentpool: agentpool kubernetes.azure.com/cluster: xxxx kubernetes.azure.com/mode: system kubernetes.azure.com/node-image-version: AKSUbuntu-1804gen2containerd-2021.10.02 kubernetes.azure.com/os-sku: Ubuntu kubernetes.azure.com/role: agent kubernetes.azure.com/storageprofile: managed kubernetes.azure.com/storagetier: Premium_LRS kubernetes.io/arch: amd64 kubernetes.io/hostname: aks-agentpool-39829229-vmss000000 kubernetes.io/os: linux kubernetes.io/role: agent node-role.kubernetes.io/agent: &quot;&quot; node.kubernetes.io/instance-type: Standard_DS2_v2 storageprofile: managed storagetier: Premium_LRS topology.kubernetes.io/region: eastus topology.kubernetes.io/zone: eastus-1 name: aks-agentpool-39829229-vmss000000 resourceVersion: &quot;233717&quot; selfLink: /api/v1/nodes/aks-agentpool-39829229-vmss000000 uid: 0241eb22-4d1b-4d65-870f-fcc51dac1c70 </code></pre> <p>Note: The pod/Container that I have is running with non-root access and it doesn't have a privileged user.</p> <p>Is there a way to access these labels from the worker node itself ?</p>
user3435964
<p>In the AKS cluster,</p> <ol> <li><p>Create a namespace like:</p> <pre><code>kubectl create ns get-labels </code></pre> </li> <li><p>Create a Service Account in the namespace like:</p> <pre><code>kubectl create sa get-labels -n get-labels </code></pre> </li> <li><p>Create a ClusterRole like:</p> <pre><code>kubectl create clusterrole get-labels-clusterrole --resource=nodes --verb=get,list </code></pre> </li> <li><p>Create a ClusterRoleBinding like:</p> <pre><code>kubectl create clusterrolebinding get-labels-rolebinding -n get-labels --clusterrole get-labels-clusterrole --serviceaccount get-labels:get-labels </code></pre> </li> <li><p>Run a pod in the namespace you created like:</p> <pre><code>cat &lt;&lt; EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: get-labels namespace: get-labels spec: serviceAccountName: get-labels containers: - image: centos:7 name: get-labels command: - /bin/bash - -c - tail -f /dev/null EOF </code></pre> </li> <li><p>Execute a shell in the running container like:</p> <pre><code>kubectl exec -it get-labels -n get-labels -- bash </code></pre> </li> <li><p>Install the <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a> tool in the container:</p> <pre><code>yum install epel-release -y &amp;&amp; yum update -y &amp;&amp; yum install jq -y </code></pre> </li> <li><p>Set up shell variables:</p> <pre><code># API Server Address APISERVER=https://kubernetes.default.svc # Path to ServiceAccount token SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount # Read this Pod's namespace NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) # Read the ServiceAccount bearer token TOKEN=$(cat ${SERVICEACCOUNT}/token) # Reference the internal certificate authority (CA) CACERT=${SERVICEACCOUNT}/ca.crt </code></pre> </li> <li><p>If you want to get a list of all nodes and their corresponding labels, then use the following command:</p> <pre><code>curl --cacert ${CACERT} --header &quot;Authorization: Bearer ${TOKEN}&quot; -X GET ${APISERVER}/api/v1/nodes | jq '.items[].metadata | {name,labels}' </code></pre> <p>else, if you want the labels corresponding to a particular node then use:</p> <pre><code>curl --cacert ${CACERT} --header &quot;Authorization: Bearer ${TOKEN}&quot; -X GET ${APISERVER}/api/v1/nodes/&lt;nodename&gt; | jq '.metadata.labels' </code></pre> </li> </ol> <p>Please replace <code>&lt;nodename&gt;</code> with the name of the intended node.</p> <p><strong>N.B.</strong> You can choose to include the installation of the <code>jq</code> tool in the <a href="https://docs.docker.com/engine/reference/builder/" rel="nofollow noreferrer">Dockerfile</a> from which your container image is built and make use of <a href="https://docs.docker.com/compose/env-file/" rel="nofollow noreferrer">environment variables</a> for the shell variables. We have used neither in this answer in order to explain the working of this method.</p>
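<p>As a side note (this is an assumption about your image, not part of the original setup): if <code>kubectl</code> happens to be available in the container, the same ServiceAccount and ClusterRole let you skip the raw <code>curl</code> calls entirely, since kubectl picks up the mounted in-cluster token automatically:</p> <pre><code># Run from inside the pod; in-cluster credentials are used automatically
kubectl get nodes --show-labels

# Labels of one specific node (replace &lt;nodename&gt;)
kubectl get node &lt;nodename&gt; -o jsonpath='{.metadata.labels}'
</code></pre>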
Srijit_Bose-MSFT
<p>I am trying to set up a Kubernetes cluster, but apparently the nfs-client-provisioner has issues with the newer versions of Kubernetes. Therefore I need to install the latest version of 1.19.</p> <p>I am creating the cluster via kubeadm and I am using CRI-O as the runtime. I am also running the whole thing on Ubuntu 20.04. I know that I need to install version 1.19.7 of kubeadm, kubelet and kubectl, but what about CRI-O?</p>
zozo6015
<p>As per the official Kubernetes documentation, the CRI-O major and minor version needs to match your Kubernetes major and minor version.</p> <p><a href="https://v1-19.docs.kubernetes.io/docs/setup/production-environment/container-runtimes/#cri-o" rel="nofollow noreferrer">https://v1-19.docs.kubernetes.io/docs/setup/production-environment/container-runtimes/#cri-o</a></p> <p>So CRI-O 1.19 should be compatible with the Kubernetes 1.19.7 version you want to install.</p>
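<p>For completeness, a hedged sketch of pinning the Kubernetes packages to 1.19.7 on Ubuntu 20.04 and checking which CRI-O version is installed (the exact <code>-00</code> package suffix is an assumption based on the usual naming of the apt packages):</p> <pre><code># Pin kubeadm/kubelet/kubectl to the 1.19 series
sudo apt-get update
sudo apt-get install -y kubelet=1.19.7-00 kubeadm=1.19.7-00 kubectl=1.19.7-00
sudo apt-mark hold kubelet kubeadm kubectl

# Verify that the container runtime is on the matching 1.19 minor version
crio --version
</code></pre>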
e2bias
<p>I am trying to create a controller that can create and delete Pods via API calls using <code>operator-sdk</code> with <code>Go</code>. The controller should be able to accept a <code>POST</code> call with information such as <code>{imageTag:&quot;&quot;, namespace:&quot;&quot;}</code> to setup a Pod that can return a <code>podId</code>, and also be able to delete a Pod via API call using <code>podId</code>.</p> <p>I have reviewed some tutorials, but I am unclear on how the Go operator can intercept API calls. Is this possible? Any help on this matter would be greatly appreciated. Thanks.</p>
Mahesh
<p>I found that a Kubernetes client and a Kubernetes operator are two different concepts. I ended up creating a Kubernetes Go client, with which I was able to achieve my goal.</p> <p>You can refer to the library here: <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go</a></p>
Mahesh
<p>I granted read access to pod</p> <p>Sample:</p> <pre><code>kubectl create serviceaccount sa1 kubectl create role pod-reader --verb=get --resource=pods kubectl create rolebinding sa1-binding --serviceaccount=default:sa1 --role=pod-reader </code></pre> <p>Is there any way to restrict this access to selected pods on the basis of metadata or labels?</p>
Dev
<p>As far as I'm aware, it's not possible to limit roles to certain labels. Actually there was an issue related to this opened here: <a href="https://github.com/kubernetes/kubernetes/issues/44703" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/44703</a></p> <p>With RBAC you're specifying access to resources, which are part of API groups and you select what verbs can be executed - that's all.</p>
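<p>If restricting by pod name (rather than by label) would be enough for your use case, RBAC does support <code>resourceNames</code>; a minimal sketch with hypothetical pod names:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;]
  resourceNames: [&quot;pod-a&quot;, &quot;pod-b&quot;]   # hypothetical pod names
  verbs: [&quot;get&quot;]
</code></pre> <p>Note that <code>resourceNames</code> only applies to verbs that address a single object (such as <code>get</code> or <code>delete</code>); it cannot restrict <code>list</code> or <code>watch</code>.</p>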
theUndying
<p>I have run a basic example project and can confirm it is running, but I cannot identify its URL?</p> <p>Kubectl describe service - gives me</p> <pre><code>NAME READY STATUS RESTARTS AGE frontend-6c8b5cc5b-v9jlb 1/1 Running 0 26s PS D:\git\helm3\lab1_kubectl_version1\yaml&gt; kubectl describe service Name: frontend Namespace: default Labels: name=frontend Annotations: &lt;none&gt; Selector: app=frontend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.108.59.44 IPs: 10.108.59.44 Port: &lt;unset&gt; 80/TCP TargetPort: 4200/TCP Endpoints: 10.1.0.38:4200 Session Affinity: None Events: &lt;none&gt; Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: &lt;none&gt; </code></pre> <p>Should I be able to hit this locally or not? The demo suggests yes but no URL is given and anything I attempt fails.</p>
Aaron Gibson
<p>From outside the cluster there is no way to connect to your service, since its type is set to <code>ClusterIP</code>. If you want to expose your service directly, you should set its type to either <code>LoadBalancer</code> or <code>NodePort</code>. For more info about these types check this <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">link</a>.</p> <p>However, your service has an internal URL (which works within the cluster; for example, if you exec into a pod and curl that URL, you will get a response) and that is: <code>&lt;your service&gt;.&lt;your namespace&gt;.svc.cluster.local</code></p> <p>Instead of <code>&lt;your service&gt;</code> type the name of the service, and instead of <code>&lt;your namespace&gt;</code> the namespace in which that service resides. The rest of the URL is the same for all services.</p>
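<p>If you only need to reach the service from your own machine without changing its type, <code>kubectl port-forward</code> is another option; a quick sketch using the <code>frontend</code> service from the question:</p> <pre><code>kubectl port-forward service/frontend 8080:80
# then, in a browser or another terminal:
curl http://localhost:8080/
</code></pre>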
Johnny9
<p>I have 2 services, let's call them <strong>Service 1</strong> and <strong>Service 2</strong>.</p> <p>Service 1 is a Kubernetes Cluster, while Service 2 has been deployed on a single VM. Both services have separate Global Load Balancers set up.</p> <p>Now, I would like to set up IAP for the service deployed on Kubernetes cluster (service 1), but there are hooks from service 2 that are not able to connect to service 1 behind IAP.</p> <p>The idea is to set up Internal Load Balancers for each Service and add DNS entry with this internal IP address, to allow those hooks to work.</p> <p>However I am thinking if there are any other possible ways to do it?</p> <p>TLDR: Service 1 should be behind IAP, however Service 2 can't talk to Service 1 behind IAP, so there is a need for a workaround to omit IAP for this one connection only.</p>
bugZ
<p>There is no way to implement this the way you described it. When a service is fronted with IAP, all calls (from outside and inside the VPC) have to be authenticated with IAP.</p> <p>The solution is to deploy a set of Internal Load Balancers in front of Service 1 and use those from Service 2 (via Cloud DNS to resolve the load balancer IP, for example).</p> <p>This solution is also better in general: having Service 2 call Service 1 via IAP (i.e. via the external load balancer) means you will pay for egress traffic, because you are technically sending traffic to the internet even though both apps are on Google Cloud. With an internal load balancer you only pay for zonal traffic, or nothing if both services are in the same zone, and you also get lower latency.</p>
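<p>As a rough illustration only (the service name, selector and ports are placeholders, not taken from your setup), an internal load balancer on GKE can be requested with a Service annotation like the one below; on older GKE versions the annotation is <code>cloud.google.com/load-balancer-type: &quot;Internal&quot;</code> instead:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service1-internal            # placeholder name
  annotations:
    networking.gke.io/load-balancer-type: &quot;Internal&quot;
spec:
  type: LoadBalancer
  selector:
    app: service1                    # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
</code></pre>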
boredabdel
<p>I have setup a new subnet in my shared VPC for GKE autopilot as the following:</p> <pre><code>node ip: 10.11.1.0/24 first secondary ip: 10.11.2.0/24 second secondary ip: 10.11.3.0/24 </code></pre> <p>I tried to test it by running simple nginx images with 30 replicas.</p> <p>based on my understanding:</p> <pre><code>I have 256 possible node IP I have 256 possible pod IP I have 256 possible service IP </code></pre> <p>after deploying, somehow my k8s are stuck with only 2 pods deployed and running. the rest is just in pending state with error code:<code>IP_SPACE_EXHAUSTED</code></p> <p>my question is how come? I still have plenty IP address, this is fresh deployed kubernetes cluster.</p>
V Adhi Pragantha
<p>Pod CIDR ranges in Autopilot clusters:</p> <p>The default settings for Autopilot cluster CIDR sizes are as follows:</p> <ul> <li>Subnetwork range: /23</li> <li>Secondary IP address range for Pods: /17</li> <li>Secondary IP address range for Services: /22</li> </ul> <p>Autopilot has a maximum of 32 Pods per node; you may check this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr#cidr_settings_for_clusters" rel="nofollow noreferrer">link</a>. Because of that per-node maximum, GKE reserves a /26 (64 addresses) from the Pod secondary range for every node, so a /24 Pod range only leaves room for four nodes' worth of Pod addresses. That is most likely why you see <code>IP_SPACE_EXHAUSTED</code> even though the range looks far from full.</p> <p>The maximum number of nodes in an Autopilot cluster is pre-configured and immutable; you may check this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr#cidr_settings_for_clusters:%7E:text=The%20steps%20on%20this%20page%20do%20not%20apply%20to%20Autopilot%20clusters%20because%20the%20maximum%20number%20of%20nodes%20is%20pre%2Dconfigured%20and%20immutable." rel="nofollow noreferrer">link</a>.</p>
Reid123
<p>I have a docker container which runs a basic front end angular app. I have verified it runs with no issues and I can successfully access the web app in the browser with <code>docker run -p 5901:80 formbuilder-stand-alone-form</code>.</p> <p>I am able to successfully deploy it with minikube and kubernetes on my cloud dev server</p> <pre><code>apiVersion: v1 kind: Service metadata: name: stand-alone-service spec: selector: app: stand-alone-form ports: - protocol: TCP port: 5901 targetPort: 80 type: LoadBalancer --- apiVersion: apps/v1 kind: Deployment metadata: name: stand-alone-form-app labels: app: stand-alone-form spec: replicas: 1 selector: matchLabels: app: stand-alone-form template: metadata: labels: app: stand-alone-form spec: containers: - name: stand-alone-form-pod image: formbuilder-stand-alone-form imagePullPolicy: Never ports: - containerPort: 80 </code></pre> <pre><code>one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main) % kubectl get pods NAME READY STATUS RESTARTS AGE stand-alone-form-app-6d4669f569-vsffc 1/1 Running 0 6s one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main) % kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE stand-alone-form-app 1/1 1 1 8s one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main) % kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 5d7h stand-alone-service LoadBalancer 10.96.197.197 &lt;pending&gt; 5901:30443/TCP 21s </code></pre> <p>However, I am not able to access it with the url:</p> <pre><code>one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh % minikube service stand-alone-service |-----------|---------------------|-------------|---------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |-----------|---------------------|-------------|---------------------------| | default | stand-alone-service | 5901 | http://192.168.49.2:30443 | |-----------|---------------------|-------------|---------------------------| </code></pre> <p>In this example, <code>http://192.168.49.2:30443/</code> gives me a dead web page.</p> <p>I disabled all my iptables for troubleshooting.</p> <p>Any idea how to access the front end web app? 
I was thinking I might have the selectors wrong but sure.</p> <p><strong>UPDATE</strong>: Here is the requested new outputs:</p> <pre><code>one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main) % kubectl describe service stand-alone-service Name: stand-alone-service Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=stand-alone-form Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.96.197.197 IPs: 10.96.197.197 LoadBalancer Ingress: 10.96.197.197 Port: &lt;unset&gt; 5901/TCP TargetPort: 80/TCP NodePort: &lt;unset&gt; 30443/TCP Endpoints: 172.17.0.2:80 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <pre><code>one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main) % minikube tunnel Password: Status: machine: minikube pid: 237498 route: 10.96.0.0/12 -&gt; 192.168.49.2 minikube: Running services: [stand-alone-service] errors: minikube: no errors router: no errors loadbalancer emulator: no errors </code></pre> <p><strong>Note</strong>: I noticed with the tunnel I do have a external IP for the loadbalancer now:</p> <pre><code>one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main) % kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 5d11h stand-alone-service LoadBalancer 10.98.162.179 10.98.162.179 5901:31596/TCP 3m10s </code></pre>
dman
<p>It looks like your LoadBalancer hasn't quite resolved correctly, as the External-IP is still marked as <code>&lt;pending&gt;</code></p> <p>According to Minikube, this happens when the tunnel is missing: <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#check-external-ip" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/#check-external-ip</a></p> <p>Have you tried running <code>minikube tunnel</code> in a separate command window?</p> <ul> <li><a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel</a></li> <li><a href="https://minikube.sigs.k8s.io/docs/commands/tunnel/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/commands/tunnel/</a></li> </ul>
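<p>Once <code>minikube tunnel</code> is running in its own terminal and the service shows an external IP (10.98.162.179 in your updated output), the app should be reachable on the service port itself rather than the NodePort; roughly:</p> <pre><code>kubectl get service stand-alone-service
# EXTERNAL-IP should no longer be &lt;pending&gt;
curl http://10.98.162.179:5901/
</code></pre>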
TheQueenIsDead
<p>I've used <a href="https://kompose.io/" rel="nofollow noreferrer">Kompose</a> to translate the following docker-compose to Kubernetes:</p> <pre><code>--- version: '3' services: freqtrade: image: mllamaza/mycoolimg:latest restart: unless-stopped container_name: mycoolimg volumes: - &quot;./user_data:/freqtrade/user_data&quot; ports: - &quot;8080:8080&quot; command: &gt; start --logfile /data/logs/records.log </code></pre> <p>If I run <code>docker-compose up -d</code> on it, it works perfectly fine. However, when running the equivalent under Kubernetes, the pod is not able to make any external HTTP/S call throwing this error:</p> <pre><code>urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='mywebsite.com', port=443): Max retries exceeded with url: /my/cool/url/ (Caused by NewConnectionError('&lt;urllib3.connection.HTTPSConnection object at 0x7f95197d2a30&gt;: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')) </code></pre> <p>Additionally, the image has a frontend webpage too that can be accessed from <code>http://0.0.0.0:8080</code>.</p> <p>I use Minikube, and <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">their documentation</a> stays that:</p> <blockquote> <p>Services of type <strong>LoadBalancer</strong> can be exposed via the <code>minikube tunnel</code> command. It must be run in a separate terminal window to keep the LoadBalancer running.</p> </blockquote> <p>That's exactly what I did, that command shows no errors:</p> <pre><code>❯ minikube tunnel [sudo] password for mllamaza: Status: machine: minikube pid: 1513359 route: 10.96.0.0/12 -&gt; 192.168.49.2 minikube: Running services: [] errors: minikube: no errors router: no errors load balancer emulator: no errors </code></pre> <p>But, as you can see, the pod has failed because it cannot access external IP's (I checked the logs), and the service/mycoolimg does not have an external IP configured as shown in the documentation:</p> <pre><code>❯ k get all NAME READY STATUS RESTARTS AGE pod/mycoolimg-868cdd75bf-krgp6 0/1 CrashLoopBackOff 2 (15s ago) 47s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mycoolimg ClusterIP 10.105.7.210 &lt;none&gt; 8080/TCP 47s service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 2d13h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mycoolimg 0/1 1 0 47s NAME DESIRED CURRENT READY AGE replicaset.apps/mycoolimg-868cdd75bf 1 1 0 47s </code></pre> <p>What am I missing? 
Is this a Kompose conversion issue, and Minikube specific configuration or am I missing some Kubernetes step?</p> <p>This is the service output:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: kompose convert --volumes hostPath -o ./deployment kompose.version: 1.26.0 (40646f47) creationTimestamp: null labels: io.kompose.service: mycoolimg name: mycoolimg spec: ports: - name: &quot;8080&quot; port: 8080 targetPort: 8080 selector: io.kompose.service: mycoolimg status: loadBalancer: {} </code></pre> <p>And this the deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: kompose convert --volumes hostPath -o ./deployment kompose.version: 1.26.0 (40646f47) creationTimestamp: null labels: io.kompose.service: mycoolimg name: mycoolimg spec: replicas: 1 selector: matchLabels: io.kompose.service: mycoolimg strategy: type: Recreate template: metadata: annotations: kompose.cmd: kompose convert --volumes hostPath -o ./deployment kompose.version: 1.26.0 (40646f47) creationTimestamp: null labels: io.kompose.service: mycoolimg spec: containers: - args: - start - --logfile - /data/logs/records.log image: mllamaza/mycoolimg:latest name: mycoolimg ports: - containerPort: 8080 resources: {} volumeMounts: - mountPath: /data name: mycoolimg-hostpath0 restartPolicy: Always volumes: - hostPath: path: /udata name: mycoolimg-hostpath0 status: {} </code></pre>
mllamazares
<p>The first thing you should be looking at is the <code>CrashLoopBackOff</code> error on your pod, this is an indication that something is going on in that container which is crashing your pod, you can find a very good article here on how to debug this error <a href="https://releasehub.com/blog/kubernetes-how-to-debug-crashloopbackoff-in-a-container" rel="nofollow noreferrer">1</a>.</p> <p>Based on the information and code provided the problem seems to be with the application itself; more precisely with the way Docker and Kubernetes handle entrypoints and commands, perhaps an entrypoint was passed as command to Kubernetes or vice versa?</p> <p>Came to that conclusion after running a pod successfully by replicating your environment but with a different image and taking out the start command:</p> <pre><code>--- version: '3' services: freqtrade: image: expressjs restart: unless-stopped container_name: mycoolimg volumes: - &quot;./user_data:/freqtrade/user_data&quot; ports: - &quot;8080:8080&quot; # command: &gt; # start # --logfile /data/logs/records.log </code></pre> <p>using kompose to convert to kubernetes with command <code>kompose convert --volumes hostPath</code> I get the following output:</p> <pre><code>WARN Restart policy 'unless-stopped' in service freqtrade is not supported, convert it to 'always' INFO Kubernetes file &quot;freqtrade-service.yaml&quot; created INFO Kubernetes file &quot;freqtrade-deployment.yaml&quot; created </code></pre> <p>Applying the deployment with command <code>kubectl apply -f freqtrade-deployment.yaml</code> and I can see the pod running:</p> <pre><code>NAME READY STATUS RESTARTS AGE freqtrade-86cd7d4469-dkhmw 1/1 Running 0 7s </code></pre> <p>Note: Depending on the method you are using to push/pull your images in minikube <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env" rel="nofollow noreferrer">2</a>, you may need to add <code>imagePullPolicy: Never</code> under your containers spec:</p> <pre><code> spec: containers: - image: expressjs imagePullPolicy: Never name: mycoolimg ports: - containerPort: 8080 </code></pre>
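<p>One detail worth double-checking in the generated manifest (stated here only as a reminder of how the fields map, not as a claim about your image): Docker's <code>ENTRYPOINT</code> corresponds to the Kubernetes <code>command</code> field, while Docker's <code>CMD</code> and Compose's <code>command:</code> correspond to <code>args</code>. If the image's entrypoint expects <code>start ...</code> as its arguments, the Kompose output is fine as-is; if the image has no such entrypoint, those values belong elsewhere:</p> <pre><code>containers:
- name: mycoolimg
  image: mllamaza/mycoolimg:latest
  # ENTRYPOINT of the image    -&gt; command (leave unset to keep the image's entrypoint)
  # CMD / compose &quot;command:&quot;   -&gt; args
  args:
    - start
    - --logfile
    - /data/logs/records.log
</code></pre>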
Gabriel Robledo Ahumada
<h2>Background</h2> <p>On my Kubernetes cluster, I have installed Zookeeper &amp; Kafka using Confluent Operator. I have confirmed that they are configured properly by creating and publishing to a Kafka topic.</p> <h2>Problem</h2> <p>When I use the zookeeper-shell, the command: <code>get /brokers/ids</code> returns null, while I'm expecting something like [0, 1, 2]</p> <h2>Details</h2> <p>I am using the zookeeper-shell from within the Kubernetes cluster, and am connecting without difficulty with the following command: <code>~/bin/zookeeper-shell zookeeper:2181</code></p> <p>The following commands output the correct (non-null) response: <code>get /zookeeper/config</code>, <code>get /cluster/id</code></p> <p>But the following commands return null: <code>get /brokers/ids</code>, <code>get /brokers/topics</code></p> <h2>More Info</h2> <p>The fact that I know the brokers are working makes this strange. It could be a security issue, but it seems strange that the other requests would work in that case. Finally, Confluent Control Center is directly telling me that broker.id 1 is the controller, which implies that this data is somehow retrievable. Any help here would be appreciated. Thank you.</p>
Chandler
<p>Solved. In the brokers' server.properties, zookeeper.connect was set to localhost:PORT instead of zookeeper:PORT</p>
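<p>In other words, the relevant line should point at the Zookeeper service name rather than localhost (a sketch assuming the <code>zookeeper:2181</code> address from the question):</p> <pre><code># server.properties
zookeeper.connect=zookeeper:2181
</code></pre>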
Chandler
<p>I have created a Cloud Armor security policy with Terraform, and I have a Load Balancer that has been created via Kubernetes Ingress. I want to attach the Cloud Armor policy to the Load balancer via Terraform.</p> <p>According to the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_backend_service#security_policy" rel="nofollow noreferrer">Terraform documentation</a>, a Cloud Armor policy must be attached via <code>google_compute_backend_service</code>.</p> <p>My load balancer is created using <code>kubernetes_ingress</code>, which doesn't allow for a cloud armor policy to be added.</p> <p>Within the GCP console, I can manually add the Load Balancer target to the Cloud Armor policy. Does anyone know of a workaround to achieve this behavior in Terraform?</p> <p>For reference, the resources I have created are: google_compute_security_policy &amp; kubernetes_ingress</p>
fuzzi
<p>The issue you will have if you try to attach a Cloud Armor policy created via Terraform to a backend generated via Ingress is that the backend itself is managed by the Ingress controller, and its name is somewhat unpredictable (you cannot know what the backend name is going to look like until you deploy the Ingress). So doing this via Terraform will be an issue and I would not recommend it.</p> <p>Instead you can use a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#cloud_armor" rel="nofollow noreferrer">BackendConfig</a> object to attach the policy to the backend.</p> <p>The recommended way is to do things in this order:</p> <ul> <li>Create the Cloud Armor policy using Terraform</li> <li>Create a BackendConfig object that points to the policy</li> <li>Annotate the Service behind your Ingress with the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#associating_backendconfig_with_your_ingress" rel="nofollow noreferrer">BackendConfig</a> object</li> <li>Deploy the Ingress</li> </ul> <p>After this you can keep changing and adapting your Cloud Armor policies using Terraform without having to touch the Ingress again. We have a fully documented tutorial <a href="https://github.com/GoogleCloudPlatform/gke-networking-recipes/tree/master/ingress/single-cluster/ingress-cloudarmor" rel="nofollow noreferrer">here</a>.</p>
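<p>For illustration, a minimal sketch of what the BackendConfig and the Service annotation could look like (the names are placeholders; the policy name should match the one managed by Terraform):</p> <pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig            # hypothetical name
spec:
  securityPolicy:
    name: my-cloud-armor-policy     # the Cloud Armor policy created via Terraform
---
apiVersion: v1
kind: Service
metadata:
  name: my-service                  # the Service referenced by the Ingress
  annotations:
    cloud.google.com/backend-config: '{&quot;default&quot;: &quot;my-backendconfig&quot;}'
spec:
  selector:
    app: my-app                     # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
</code></pre>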
boredabdel
<h2>Overview</h2> <p>Kubernetes scheduling errs on the side of 'not shuffling things around once scheduled and happy' which can lead to quite the level of imbalance in terms of CPU, Memory, and container count distribution. It can also mean that <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#known-limitations" rel="nofollow noreferrer">sometimes Affinity and Topology rules may not be enforced</a> / as the state of affair changes:</p> <h6>With regards to topology spread constraints introduced in v1.19 (stable)</h6> <blockquote> <p>There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution.</p> </blockquote> <h2>Context</h2> <p>We are currently making use of <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">pod topology spread contraints</a>, and they are pretty superb, aside from the fact that they only seem to handle skew during scheduling, and not execution (unlike the ability to differentiate with <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#concepts" rel="nofollow noreferrer">Taints and Tolerations</a>).</p> <p>For features such as <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">Node affinity</a>, we're currently waiting on the ability to add <code>RequiredDuringExecution</code> requirements as opposed to <code>ScheduledDuringExecution</code> requirements</p> <h2>Question</h2> <p><em><strong>My question is</strong></em>, is there a native way to make Kubernetes re-evaluate and attempt to enforce topology spread skew when a new fault domain (topology) is added, without writing my own scheduler?</p> <p>Or do I need to wait for Kubernetes to advance a few more releases? ;-) (I'm hoping someone may have a smart answer with regards to combining affinity / topology constraints)</p>
TheQueenIsDead
<p>After more research I'm fairly certain that using an outside tool like <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">Descheduler</a> is the best way currently.</p> <p>There doesn't seem to be a combination of Taints, Affinity rules, or Topology constraints that can work together to achieve the re-evaluation of topology rules during execution.</p> <p>Descheduler allows you to kill off certain workloads based on user requirements, and lets the default <code>kube-scheduler</code> reschedule the killed pods. It can be installed easily with manifests or Helm and run on a schedule. It can even be triggered manually when the topology changes, which is what I think we will implement to suit our needs.</p> <p>This will be the best means of achieving our goal while waiting for <code>RequiredDuringExecution</code> rules to mature across all feature offerings.</p> <p>Given our topology rules mark each node as a topological zone, using a <a href="https://github.com/kubernetes-sigs/descheduler/blob/master/examples/low-node-utilization.yml" rel="nofollow noreferrer">Low Node Utilization strategy</a> to spread workloads across new hosts as they appear will be what we go with.</p> <pre><code>apiVersion: &quot;descheduler/v1alpha1&quot; kind: &quot;DeschedulerPolicy&quot; strategies: &quot;LowNodeUtilization&quot;: enabled: true params: nodeResourceUtilizationThresholds: thresholds: &quot;memory&quot;: 20 targetThresholds: &quot;memory&quot;: 70 </code></pre>
TheQueenIsDead
<p>A couple of weeks ago i published similar question regarding a Kubernetes deployment that uses Key Vault (with User Assigned Managed identity method). The issue was resolved but when trying to implemente everything from scratch something makes not sense to me.</p> <p>Basically i am getting this error regarding mounting volume:</p> <pre><code>Volumes: sonar-data-new: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: sonar-data-new ReadOnly: false sonar-extensions-new2: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: sonar-extensions-new2 ReadOnly: false secrets-store-inline: Type: CSI (a Container Storage Interface (CSI) volume source) Driver: secrets-store.csi.k8s.io FSType: ReadOnly: true VolumeAttributes: secretProviderClass=azure-kv-provider default-token-zwxzg: Type: Secret (a volume populated by a Secret) SecretName: default-token-zwxzg Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s default-scheduler Successfully assigned default/sonarqube-d44d498f8-46mpz to aks-agentpool-35716862-vmss000000 Warning FailedMount 3s (x5 over 11s) kubelet MountVolume.SetUp failed for volume &quot;secrets-store-inline&quot; : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/sonarqube-d44d498f8-46mpz, err: rpc error: code = Unknown desc = failed to mountobjects, error: failed to get objectType:secret, objectName:username, objectVersion:: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://SonarQubeHelm.vault.azure.net/secrets/username/?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {&quot;error&quot;:&quot;invalid_request&quot;,&quot;error_description&quot;:&quot;Identity not found&quot;} </code></pre> <p>This is my <strong>secret-class.yml</strong> file (name of the keyvault is correct). 
Also <strong>xxx-xxxx-xxx-xxx-xxxxx4b5ec83</strong> is the <strong>objectID</strong> of the AKS managed identity (<strong>SonarQubeHelm-agentpool</strong>)</p> <pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1 kind: SecretProviderClass metadata: name: azure-kv-provider spec: provider: azure secretObjects: - data: - key: username objectName: username - key: password objectName: password secretName: test-secret type: Opaque parameters: usePodIdentity: &quot;false&quot; useVMManagedIdentity: &quot;true&quot; userAssignedIdentityID: &quot;xxx-xxxx-xxx-xxx-xxxxx4b5ec83&quot; keyvaultName: &quot;SonarQubeHelm&quot; cloudName: &quot;&quot; objects: | array: - | objectName: username objectType: secret objectAlias: username objectVersion: &quot;&quot; - | objectName: password objectType: secret objectAlias: password objectVersion: &quot;&quot; resourceGroup: &quot;rg-LD-sandbox&quot; subscriptionId: &quot;xxxx&quot; tenantId: &quot;yyyy&quot; </code></pre> <p>and this is my <strong>deployment.yml</strong> file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: sonarqube name: sonarqube spec: selector: matchLabels: app: sonarqube replicas: 1 template: metadata: labels: app: sonarqube spec: containers: - name: sonarqube image: sonarqube:8.9-developer resources: requests: cpu: 500m memory: 1024Mi limits: cpu: 2000m memory: 4096Mi volumeMounts: - mountPath: &quot;/mnt/&quot; name: secrets-store-inline - mountPath: &quot;/opt/sonarqube/data/&quot; name: sonar-data-new - mountPath: &quot;/opt/sonarqube/extensions/plugins/&quot; name: sonar-extensions-new2 env: - name: &quot;SONARQUBE_JDBC_USERNAME&quot; valueFrom: secretKeyRef: name: test-secret key: username - name: &quot;SONARQUBE_JDBC_PASSWORD&quot; valueFrom: secretKeyRef: name: test-secret key: password - name: &quot;SONARQUBE_JDBC_URL&quot; valueFrom: configMapKeyRef: name: sonar-config key: url ports: - containerPort: 9000 protocol: TCP volumes: - name: sonar-data-new persistentVolumeClaim: claimName: sonar-data-new - name: sonar-extensions-new2 persistentVolumeClaim: claimName: sonar-extensions-new2 - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: &quot;azure-kv-provider&quot; </code></pre> <p>I assigned proper permissions to the AKS managed identity to get access to the keyvault (<strong>xxx-xxxx-xxx-xxx-xxxxx4b5ec83</strong> is the <strong>objectID</strong> of the AKS managed identity - <strong>SonarQubeHelm-agentpool</strong>)</p> <pre><code>xxxx@Azure:~/clouddrive/kubernetes/sonarqubekeyvault$ az role assignment list --assignee xxx-xxxx-xxx-xxx-xxxxx4b5ec83 --all [ { &quot;canDelegate&quot;: null, &quot;condition&quot;: null, &quot;conditionVersion&quot;: null, &quot;description&quot;: null, &quot;id&quot;: &quot;/subscriptions/xxxx-xxx-xxx-xxx-xxxe22e8804e/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm/providers/Microsoft.Authorization/roleAssignments/xxxx-x-xx-xx-xx86584218f&quot;, &quot;name&quot;: &quot;xxxx-xx-x-x-xx86584218f&quot;, &quot;principalId&quot;: &quot;xxx-xxx-x-xxx-xx3a4b5ec83&quot;, &quot;principalName&quot;: &quot;xxxx-xxxx-xxx-xxx-xx79a3906b8&quot;, &quot;principalType&quot;: &quot;ServicePrincipal&quot;, &quot;resourceGroup&quot;: &quot;rg-LD-sandbox&quot;, &quot;roleDefinitionId&quot;: &quot;/subscriptions/xxxx-xxxx-xxxx-xxx-0e1e22e8804e/providers/Microsoft.Authorization/roleDefinitions/xxx-xxxx-xxx-xxxx-xxxfe8e74483&quot;, &quot;roleDefinitionName&quot;: &quot;Key Vault 
Administrator&quot;, &quot;scope&quot;: &quot;/subscriptions/xxxx-xxx-xxx-xxxx-xxx2e8804e/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm&quot;, &quot;type&quot;: &quot;Microsoft.Authorization/roleAssignments&quot; }, { &quot;canDelegate&quot;: null, &quot;condition&quot;: null, &quot;conditionVersion&quot;: null, &quot;description&quot;: null, &quot;id&quot;: &quot;/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxe22e8804e/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm/providers/Microsoft.Authorization/roleAssignments/xxxx-xxxx-xxxx-xxxx-xxx5137f480&quot;, &quot;name&quot;: &quot;xxxx-xxxx-xxxx-xxxx-xx5137f480&quot;, &quot;principalId&quot;: &quot;xxxx-xxxx-xxxx-xxxx-xx3a4b5ec83&quot;, &quot;principalName&quot;: &quot;xxxx-xxxx-xxxx-xxxx-xx79a3906b8&quot;, &quot;principalType&quot;: &quot;ServicePrincipal&quot;, &quot;resourceGroup&quot;: &quot;rg-LD-sandbox&quot;, &quot;roleDefinitionId&quot;: &quot;/subscriptions/xxxx-xxxx-xxxx-xxxx-0e1e22e8804e/providers/Microsoft.Authorization/roleDefinitions/xxxx-xxxx-xxxx-xxxx-xx2c155cd7&quot;, &quot;roleDefinitionName&quot;: &quot;Key Vault Secrets Officer&quot;, &quot;scope&quot;: &quot;/subscriptions/xxxx/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm&quot;, &quot;type&quot;: &quot;Microsoft.Authorization/roleAssignments&quot; } ] </code></pre> <p>This is the info about my Key Vault.</p> <p><strong>az keyvault show --name SonarQubeHelm</strong></p> <pre><code>{ &quot;id&quot;: &quot;/subscriptions/xxxx-xxxxx-xxxx-xxxx-xxxxxxxx804e/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm&quot;, &quot;location&quot;: &quot;xxxxx&quot;, &quot;name&quot;: &quot;SonarQubeHelm&quot;, &quot;properties&quot;: { &quot;accessPolicies&quot;: [ { &quot;applicationId&quot;: null, &quot;objectId&quot;: &quot;xxxx-xxx-xxxx-xxxx-xxxxa4b5ec83&quot;, &quot;permissions&quot;: { &quot;certificates&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Update&quot;, &quot;Create&quot;, &quot;Import&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot;, &quot;ManageContacts&quot;, &quot;ManageIssuers&quot;, &quot;GetIssuers&quot;, &quot;ListIssuers&quot;, &quot;SetIssuers&quot;, &quot;DeleteIssuers&quot;, &quot;Purge&quot; ], &quot;keys&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Update&quot;, &quot;Create&quot;, &quot;Import&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot;, &quot;Decrypt&quot;, &quot;Encrypt&quot;, &quot;UnwrapKey&quot;, &quot;WrapKey&quot;, &quot;Verify&quot;, &quot;Sign&quot;, &quot;Purge&quot; ], &quot;secrets&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Set&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot;, &quot;Purge&quot; ], &quot;storage&quot;: null }, &quot;tenantId&quot;: &quot;xxxx-xxxx-xxxx-xxxx-xxxxxdb8c610&quot; }, { &quot;applicationId&quot;: null, &quot;objectId&quot;: &quot;xxxx-xxxx-xxxx-xxxx-xxxx531f67f8&quot;, &quot;permissions&quot;: { &quot;certificates&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Update&quot;, &quot;Create&quot;, &quot;Import&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot;, &quot;ManageContacts&quot;, &quot;ManageIssuers&quot;, &quot;GetIssuers&quot;, &quot;ListIssuers&quot;, &quot;SetIssuers&quot;, &quot;DeleteIssuers&quot;, &quot;Purge&quot; ], &quot;keys&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Update&quot;, &quot;Create&quot;, 
&quot;Import&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot;, &quot;Decrypt&quot;, &quot;Encrypt&quot;, &quot;UnwrapKey&quot;, &quot;WrapKey&quot;, &quot;Verify&quot;, &quot;Sign&quot;, &quot;Purge&quot; ], &quot;secrets&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Set&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot;, &quot;Purge&quot; ], &quot;storage&quot;: null }, &quot;tenantId&quot;: &quot;xxxxx-xxxxxx-xxx-xxx8db8c610&quot; }, { &quot;applicationId&quot;: null, &quot;objectId&quot;: &quot;xxx-xxxx-xxxx-xxx-xxxx0df6af9&quot;, &quot;permissions&quot;: { &quot;certificates&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Update&quot;, &quot;Create&quot;, &quot;Import&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot;, &quot;ManageContacts&quot;, &quot;ManageIssuers&quot;, &quot;GetIssuers&quot;, &quot;ListIssuers&quot;, &quot;SetIssuers&quot;, &quot;DeleteIssuers&quot; ], &quot;keys&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Update&quot;, &quot;Create&quot;, &quot;Import&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot; ], &quot;secrets&quot;: [ &quot;Get&quot;, &quot;List&quot;, &quot;Set&quot;, &quot;Delete&quot;, &quot;Recover&quot;, &quot;Backup&quot;, &quot;Restore&quot; ], &quot;storage&quot;: null }, &quot;tenantId&quot;: &quot;xxx-xxx-xxx-xxx-xxx8db8c610&quot; } ], &quot;createMode&quot;: null, &quot;enablePurgeProtection&quot;: null, &quot;enableRbacAuthorization&quot;: false, &quot;enableSoftDelete&quot;: true, &quot;enabledForDeployment&quot;: false, &quot;enabledForDiskEncryption&quot;: false, &quot;enabledForTemplateDeployment&quot;: false, &quot;hsmPoolResourceId&quot;: null, &quot;networkAcls&quot;: null, &quot;privateEndpointConnections&quot;: null, &quot;provisioningState&quot;: &quot;Succeeded&quot;, &quot;sku&quot;: { &quot;family&quot;: &quot;A&quot;, &quot;name&quot;: &quot;Standard&quot; }, &quot;softDeleteRetentionInDays&quot;: 90, &quot;tenantId&quot;: &quot;xxx-xxx-xxx-xxx-xxx68db8c610&quot;, &quot;vaultUri&quot;: &quot;https://sonarqubehelm.vault.azure.net/&quot; }, &quot;resourceGroup&quot;: &quot;rg-LD-sandbox&quot;, &quot;systemData&quot;: null, &quot;tags&quot;: {}, &quot;type&quot;: &quot;Microsoft.KeyVault/vaults&quot; } </code></pre> <p>This is the <strong>CSI pod</strong> running at the moment:</p> <pre><code>NAME READY STATUS RESTARTS AGE csi-secrets-store-provider-azure-1632148185-hggl4 1/1 Running 0 5h4m ingress-nginx-controller-65c4f84996-99pkh 1/1 Running 0 5h49m secrets-store-csi-driver-xsx2r 3/3 Running 0 5h4m sonarqube-d44d498f8-46mpz 0/1 ContainerCreating 0 26m </code></pre> <p>I used this <strong>CSI driver</strong></p> <p><em>helm install csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --set secrets-store-csi-driver.syncSecret.enabled=true --generate-name</em></p> <p>To assign proper permissions to the AKS managed identity i followed (just in case i used clientID as well, but it does not work). 
However, from the previous commands permissions to managed identity seems correct.</p> <pre><code>export KUBE_ID=$(az aks show -g &lt;resource group&gt; -n &lt;aks cluster name&gt; --query identityProfile.kubeletidentity.objectId -o tsv) export AKV_ID=$(az keyvault show -g &lt;resource group&gt; -n &lt;akv name&gt; --query id -o tsv) az role assignment create --assignee $KUBE_ID --role &quot;Key Vault Secrets Officer&quot; --scope $AKV_ID </code></pre> <p>In <strong>Access Policies</strong> in the <strong>key vault</strong> &quot;SonarQubeHelm&quot; these are the applications added with all the permission: <strong>SonarQubeHelm-agentpool</strong> and <strong>SonarQubeHelm</strong></p> <p>The key values in <strong>Secrets</strong> (inside Key Vault) are <strong>username</strong> and <strong>password</strong>.</p> <p>Everything is on the same region, resource group and namespace (default) and I am working with 1 node cluster.</p> <p>Any idea about this error?</p> <p>Thanks in advance!</p>
X T
<p>After doing some tests, it seems that the process that I was following was correct. Most probably, I was using <code>principalId</code> instead of <code>clientId</code> in role assignment for the AKS managed identity.</p> <p>Key points for someone else that is facing similar issues:</p> <ol> <li><p>Check what the managed identity created automatically by AKS is. Check for the <code>clientId</code>; e.g.,</p> <pre><code>az vmss identity show -g MC_rg-LX-sandbox_SonarQubeHelm_southcentralus -n aks-agentpool-xxxxx62-vmss -o yaml </code></pre> <blockquote> <p><em>Remember that <code>MC_**</code> is the resource group that AKS creates automatically to keep AKS resources.</em> VMSS is the Virtual Machine Scale Set; you can get it under the same resource group.</p> </blockquote> </li> <li><p>Check if it has correct permissions to access the Key Vault that you created: e.g., (where <em>xxxx-xxxx-xxx-xxx-xx79a3906b8</em> is the managed identity <code>clientId</code>):</p> <pre><code>az role assignment list --assignee xxxx-xxxx-xxx-xxx-xx79a3906b8 --all </code></pre> <p>It should have:</p> <pre><code>&quot;roleDefinitionName&quot;: &quot;Key Vault Administrator&quot; </code></pre> </li> <li><p>If it doesn't have correct permissions, assign them:</p> <pre><code>export KUBE_ID=$(az aks show -g &lt;resource group&gt; -n &lt;aks cluster name&gt; --query identityProfile.kubeletidentity.clientId -o tsv) export AKV_ID=$(az keyvault show -g &lt;resource group&gt; -n &lt;akv name&gt; --query id -o tsv) az role assignment create --assignee $KUBE_ID --role &quot;Key Vault Secrets Officer&quot; --scope $AKV_ID </code></pre> </li> </ol> <p>The role assignments for the managed identity in AKS works for <code>clientId</code>.</p>
X T
<p>How can I install Moloch via Helm (or another method) on a Kubernetes cluster?</p> <p>Steps so far:</p> <pre><code>1. $git clone https://github.com/sealingtech/EDCOP-MOLOCH 2. $cd EDCOP-MOLOCH 3. $helm install moloch moloch/ --values moloch/values.yaml 4. $helm list (ok) 5. $kubectl get po (pending status) </code></pre> <p>Result:</p> <pre><code>$kubectl describe pod moloch-moloch-capture-0 Warning FailedScheduling 7m58s default-scheduler 0/1 nodes are available: ****1 node(s) didn't match Pod's node affinity.**** </code></pre> <p><a href="https://i.stack.imgur.com/2zfhR.png" rel="nofollow noreferrer">enter image description here</a></p>
John
<p>I solved it, so the Helm commands above are correct. After that you may hit another error, as I described here: (<a href="https://stackoverflow.com/questions/65998472/error-secret-passive-interface-not-found/66008205#66008205">Error: secret &quot;passive-interface&quot; not found</a>), so you can work through it step by step.</p> <p>The problem here was that I was using only one node, so the pod's node affinity could not be satisfied. Once you use two nodes, or adjust the node affinity configuration in the YAML files, the pods can be scheduled.</p>
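<p>If you hit the same affinity error, a quick way to compare the scheduling constraints the chart rendered against the labels your node actually has (generic kubectl commands, pod name taken from the question):</p> <pre><code># Show the nodeSelector / affinity the capture pod was rendered with
kubectl get pod moloch-moloch-capture-0 -o jsonpath='{.spec.nodeSelector}{&quot;\n&quot;}{.spec.affinity}{&quot;\n&quot;}'

# Compare with the labels actually present on your node(s)
kubectl get nodes --show-labels
</code></pre>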
John
<p>I'm trying to set up an ingress for my Google GKE cluster. I have tested it locally on Minikube and it works as I expect.</p> <p>When I hit the domain with the prefix /test-1 or /test-2, the request is rewritten to the root (/) of my service.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: api-test-domain-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: tls: - hosts: - api.test-domain.com secretName: tls-secret rules: - host: api.test-domain.com http: paths: - path: /test-1(/|$)(.*) pathType: Prefix backend: service: name: test-1-port-forwarding port: number: 8080 - path: /test-2(/|$)(.*) pathType: Prefix backend: service: name: test-2-port-forwarding port: number: 8081 </code></pre> <p>The issue is that when I put it into my Kubernetes cluster on Google (GKE), I get this error:</p> <blockquote> <p>Translation failed: invalid ingress spec: failed to validate prefix path /test-1(/|$)(.*) due to invalid wildcard; failed to validate prefix path /test-2(/|$)(.*) due to invalid wildcard</p> </blockquote> <p>I have been trying for hours to get this working and to understand what's going on, without any result, so I really hope someone here can explain what I did wrong and what I should change to resolve my problem.</p>
ParisNakitaKejser
<p>GKE Built-in Ingress supports wildcard but with some <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#multiple_backend_services" rel="nofollow noreferrer">conditions</a>. From the doc:</p> <pre><code>The only supported wildcard character for the path field of an Ingress is the * character. The * character must follow a forward slash (/) and must be the last character in the pattern. For example, /*, /foo/*, and /foo/bar/* are valid patterns, but *, /foo/bar*, and /foo/*/bar are not. </code></pre> <p>If you want to use NGINX you will have to deploy it, GKE doesn't ship with NGINX out of the box. Keep in mind that this is something you will have to maintain and take care of yourself. It's a valid choice to make if the GKE default ingress doesn't support what you need to do (like headers re-write for example) but just be aware of the fact that it's an extra piece of software.</p>
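<p>If you would rather stay on the built-in GKE Ingress instead of deploying NGINX, a hedged sketch of what the paths could look like is below. Note that the GCE ingress only accepts the trailing <code>*</code> wildcard together with <code>pathType: ImplementationSpecific</code>, and it does not rewrite the path, so your services will receive the full <code>/test-1/...</code> prefix:</p> <pre><code>rules:
- host: api.test-domain.com
  http:
    paths:
    - path: /test-1/*
      pathType: ImplementationSpecific
      backend:
        service:
          name: test-1-port-forwarding
          port:
            number: 8080
    - path: /test-2/*
      pathType: ImplementationSpecific
      backend:
        service:
          name: test-2-port-forwarding
          port:
            number: 8081
</code></pre>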
boredabdel
<p>There are a few processes I'm struggling to wrap my brain around when it comes to multi-stage <code>Dockerfile</code>.</p> <p>Using this as an example, I have a couple questions below it:</p> <pre><code># Dockerfile # Uses multi-stage builds requiring Docker 17.05 or higher # See https://docs.docker.com/develop/develop-images/multistage-build/ # Creating a python base with shared environment variables FROM python:3.8.1-slim as python-base ENV PYTHONUNBUFFERED=1 \ PYTHONDONTWRITEBYTECODE=1 \ PIP_NO_CACHE_DIR=off \ PIP_DISABLE_PIP_VERSION_CHECK=on \ PIP_DEFAULT_TIMEOUT=100 \ POETRY_HOME=&quot;/opt/poetry&quot; \ POETRY_VIRTUALENVS_IN_PROJECT=true \ POETRY_NO_INTERACTION=1 \ PYSETUP_PATH=&quot;/opt/pysetup&quot; \ VENV_PATH=&quot;/opt/pysetup/.venv&quot; ENV PATH=&quot;$POETRY_HOME/bin:$VENV_PATH/bin:$PATH&quot; # builder-base is used to build dependencies FROM python-base as builder-base RUN apt-get update \ &amp;&amp; apt-get install --no-install-recommends -y \ curl \ build-essential # Install Poetry - respects $POETRY_VERSION &amp; $POETRY_HOME ENV POETRY_VERSION=1.0.5 RUN curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python # We copy our Python requirements here to cache them # and install only runtime deps using poetry WORKDIR $PYSETUP_PATH COPY ./poetry.lock ./pyproject.toml ./ RUN poetry install --no-dev # respects # 'development' stage installs all dev deps and can be used to develop code. # For example using docker-compose to mount local volume under /app FROM python-base as development ENV FASTAPI_ENV=development # Copying poetry and venv into image COPY --from=builder-base $POETRY_HOME $POETRY_HOME COPY --from=builder-base $PYSETUP_PATH $PYSETUP_PATH # Copying in our entrypoint COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh RUN chmod +x /docker-entrypoint.sh # venv already has runtime deps installed we get a quicker install WORKDIR $PYSETUP_PATH RUN poetry install WORKDIR /app COPY . . EXPOSE 8000 ENTRYPOINT /docker-entrypoint.sh $0 $@ CMD [&quot;uvicorn&quot;, &quot;--reload&quot;, &quot;--host=0.0.0.0&quot;, &quot;--port=8000&quot;, &quot;main:app&quot;] # 'lint' stage runs black and isort # running in check mode means build will fail if any linting errors occur FROM development AS lint RUN black --config ./pyproject.toml --check app tests RUN isort --settings-path ./pyproject.toml --recursive --check-only CMD [&quot;tail&quot;, &quot;-f&quot;, &quot;/dev/null&quot;] # 'test' stage runs our unit tests with pytest and # coverage. Build will fail if test coverage is under 95% FROM development AS test RUN coverage run --rcfile ./pyproject.toml -m pytest ./tests RUN coverage report --fail-under 95 # 'production' stage uses the clean 'python-base' stage and copyies # in only our runtime deps that were installed in the 'builder-base' FROM python-base as production ENV FASTAPI_ENV=production COPY --from=builder-base $VENV_PATH $VENV_PATH COPY ./docker/gunicorn_conf.py /gunicorn_conf.py COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh RUN chmod +x /docker-entrypoint.sh COPY ./app /app WORKDIR /app ENTRYPOINT /docker-entrypoint.sh $0 $@ CMD [ &quot;gunicorn&quot;, &quot;--worker-class uvicorn.workers.UvicornWorker&quot;, &quot;--config /gunicorn_conf.py&quot;, &quot;main:app&quot;] </code></pre> <p>The questions I have:</p> <ol> <li><p>Are you <code>docker build ...</code> this entire image and then just <code>docker run ... 
--target=&lt;stage&gt;</code> to run a specific stage (<code>development</code>, <code>test</code>, <code>lint</code>, <code>production</code>, etc.) or are you only building and running the specific stages you need (e.g. <code>docker build ... -t test --target=test &amp;&amp; docker run test ...</code>)?</p> <p>I want to say it isn't the former because you end up with a bloated image with build kits and what not... correct?</p> </li> <li><p>When it comes to local Kubernetes development (<code>minikube</code>, <code>skaffold</code>, <code>devspace</code>, etc.) and running unit tests, are you supposed referring to these stages in the <code>Dockerfile</code> (<code>devspace</code> Hooks or something) or using native test tools in the container (e.g. <code>npm test</code>, <code>./manage.py test</code>, etc.)?</p> </li> </ol> <p>Thanks for clearing this questions up.</p>
cjones
<p>To answer from a less DevSpace-y persepctive and a more general Docker-y one (With no disrespect to Lukas!):</p> <h2>Question 1</h2> <h3>Breakdown</h3> <blockquote> <p>❌ Are you docker build ... this entire image and then just docker run ... --target= to run a specific stage</p> </blockquote> <p>You're close in your understanding and managed to outline the approach in your second part of the query:</p> <blockquote> <p>✅ or are you only building and running the specific stages you need (e.g. docker build ... -t test --target=test &amp;&amp; docker run test ...)?</p> </blockquote> <p>The <code>--target</code> option is not present in the <code>docker run</code> command, which can be seen when calling <code>docker run --help</code>.</p> <blockquote> <p>I want to say it isn't the former because you end up with a bloated image with build kits and what not... correct?</p> </blockquote> <p>Yes, it's impossible to do it the first way, as when <code>--target</code> is not specified, then only the final stage is incorporated into your image. This is a great benefit as it cuts down the final size of your container, while allowing you to use multiple directives.</p> <h3>Details and Examples</h3> <p>It is a flag that you can pass in at <em>build time</em> so that you can choose which layers to build specifically. It's a pretty helpful directive that can be used in a few different ways. There's a decent blog post <a href="https://www.docker.com/blog/advanced-dockerfiles-faster-builds-and-smaller-images-using-buildkit-and-multistage-builds/" rel="nofollow noreferrer">here</a> talking about the the new features that came out with multi-stage builds (<code>--target</code> is one of them)</p> <p>For example, I've had a decent amount of success building projects in CI utilising different stages and targets, the following is pseudo-code, but hopefully the context is applied</p> <pre><code># Dockerfile FROM python as base FROM base as dependencies COPY requirements.txt . RUN pip install -r requirements.txt FROM dependencies as test COPY src/ src/ COPY test/ test/ FROM dependencies as publish COPY src/ src/ CMD ... </code></pre> <p>A Dockerfile like this would enable you to do something like this in your CI workflow, once again, pseudo-code-esque</p> <pre><code>docker build . -t my-app:unit-test --target test docker run my-app:unit-test pyunit ... docker build . -t my-app:latest docker push ... </code></pre> <p>In some scenarios, it can be quite advantageous to have this fine grained control over what gets built when, and it's quite the boon to be able to run those images that comprise of only a few stages without having built the entire app.</p> <p>The key here, is that there's no expectation that you need to use <code>--target</code>, but it <em>can</em> be used to solve particular problems.</p> <h2>Question 2</h2> <blockquote> <p>When it comes to local Kubernetes development (minikube, skaffold, devspace, etc.) and running unit tests, are you supposed referring to these stages in the Dockerfile (devspace Hooks or something) or using native test tools in the container (e.g. npm test, ./manage.py test, etc.)?</p> </blockquote> <p>Lukas covers a devspace specific approach very well, but ultimately you can test however you like. Using devspace to make it easier to run (and remember to run) tests certainly sounds like a good idea. 
Whatever tool you use to enable an easier workflow will likely still use <code>npm test</code> etc. under the hood.</p> <p>If you wish to call <code>npm test</code> outside of a container that's fine, and if you wish to call it in a container, that's also fine. The solution to your problem will always change depending on your landscape. CI/CD helps to standardise on external factors and provides a uniform means to ensure testing is performed and deployments are auditable.</p> <p>Hope that helps in any way, shape or form 👍</p>
TheQueenIsDead
<p>The application is deployed on Kubernetes using a StatefulSet because it is stateful in nature. Around 250+ pods are running, and HPA has been implemented on it too, which can scale up to 400 pods.</p> <p>When a new deployment occurs, it takes a long time (~10-15m) to update all pods in a rolling-update fashion.</p> <p><strong>Problem:</strong> End users get responses from two versions of the pods until all pods are replaced with the new revision.</p> <p>I have been searching for an architecture where the overall deployment time can be reduced. The best possible solution seems to be a <code>BLUE/GREEN</code> strategy, but it has a bunch of impact on integrated services like monitoring, logging, telemetry, etc. because of the two naming conventions.</p> <p>Ideally I am looking for a solution like <code>maxSurge</code> for Deployments, in which new pods are created first and traffic is then shifted to them, but a StatefulSet does not support maxSurge with the RollingUpdate strategy &amp; the controller will delete and recreate each Pod in the StatefulSet based on its ordinal index, from bigger to smaller.</p>
Ashish Kumar
<p>The solution is to do a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset#partitioning_rolling_updates" rel="nofollow noreferrer">partitioning rolling update</a> along with a <a href="https://cloud.google.com/architecture/implementing-deployment-and-testing-strategies-on-gke#perform_a_canary_test" rel="nofollow noreferrer">canary deployment</a>.</p> <p>Let’s suppose we have the statefulset workload defined by the following yaml file:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx version: &quot;1.20&quot; spec: ports: - port: 80 name: web clusterIP: None selector: app: nginx version: &quot;1.20&quot; --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: selector: matchLabels: app: nginx # Label selector that determines which Pods belong to the StatefulSet # Must match spec: template: metadata: labels serviceName: &quot;nginx&quot; replicas: 3 template: metadata: labels: app: nginx # Pod template's label selector version: &quot;1.20&quot; spec: terminationGracePeriodSeconds: 10 containers: - name: nginx image: nginx:1.20 ports: - containerPort: 80 name: web volumeMounts: - name: www mountPath: /usr/share/nginx/html volumeClaimTemplates: - metadata: name: www spec: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 1Gi </code></pre> <p>You could patch the statefulset to create a partition, and change the image and version label for the remaining pods: (In this case, since there are only 3 pods, the last one will be the one that will change its image.)</p> <pre><code>$ kubectl patch statefulset web -p '{&quot;spec&quot;:{&quot;updateStrategy&quot;:{&quot;type&quot;:&quot;RollingUpdate&quot;,&quot;rollingUpdate&quot;:{&quot;partition&quot;:2}}}}' $ kubectl patch statefulset web --type='json' -p='[{&quot;op&quot;: &quot;replace&quot;, &quot;path&quot;: &quot;/spec/template/spec/containers/0/image&quot;, &quot;value&quot;:&quot;nginx:1.21&quot;}]' $ kubectl patch statefulset web --type='json' -p='[{&quot;op&quot;: &quot;replace&quot;, &quot;path&quot;: &quot;/spec/template/metadata/labels/version&quot;, &quot;value&quot;:&quot;1.21&quot;}]' </code></pre> <p>At this point, you have a pod with the new image and version label ready to use, but since the version label is different, the traffic is still going to the other two pods. 
If you change the version in the yaml file and apply the new configuration, the rollout will be transparent, since there is already a pod ready to migrate the traffic:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx version: &quot;1.21&quot; spec: ports: - port: 80 name: web clusterIP: None selector: app: nginx version: &quot;1.21&quot; --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: selector: matchLabels: app: nginx # Label selector that determines which Pods belong to the StatefulSet # Must match spec: template: metadata: labels serviceName: &quot;nginx&quot; replicas: 3 template: metadata: labels: app: nginx # Pod template's label selector version: &quot;1.21&quot; spec: terminationGracePeriodSeconds: 10 containers: - name: nginx image: nginx:1.21 ports: - containerPort: 80 name: web volumeMounts: - name: www mountPath: /usr/share/nginx/html volumeClaimTemplates: - metadata: name: www spec: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 1Gi </code></pre> <pre><code>$ kubectl apply -f file-name.yaml </code></pre> <p>Once traffic is migrated to the pod containing the new image and version label, you should patch again the statefulset and remove the partition with the command <code>kubectl patch statefulset web -p '{&quot;spec&quot;:{&quot;updateStrategy&quot;:{&quot;type&quot;:&quot;RollingUpdate&quot;,&quot;rollingUpdate&quot;:{&quot;partition&quot;:0}}}}'</code></p> <p>Note: You will need to be very careful with the size of the partition, since the remaining pods will handle the whole traffic for some time.</p>
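<p>To watch the rollout and confirm which pods are already serving the new version, the following commands can help (they reuse the names from the example above):</p> <pre><code>kubectl rollout status statefulset/web
kubectl get pods -l app=nginx -L version   # shows the version label of each pod
</code></pre>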
Gabriel Robledo Ahumada
<p>For context, I am bringing up Kafka and Zookeeper locally on an Ubuntu machine using Kubernetes, through Helm:</p> <pre><code> - name: kafka version: 12.7.3 repository: https://charts.bitnami.com/bitnami </code></pre> <p>I've looked at existing questions for this error, but none seem to be related to my issue exactly. For these existing questions, I see that the issue seems to involve docker networks, or communication. However, on my local setup, I can see that Kafka <em>can</em> communicate to Zookeeper successfully and initiate a TCP connection. I saw the following <code>tshark</code> logs, where <code>.83</code> is Kafka and <code>.80</code> is Zookeeper:</p> <pre><code> 57 118.532604170 192.168.83.83 → 192.168.83.80 TCP 74 44978 → 2181 [SYN] Seq=0 Win=64800 Len=0 MSS=1440 SACK_PERM=1 TSval=3500466016 TSecr=0 WS=128 58 118.532617080 192.168.83.80 → 192.168.83.83 TCP 74 2181 → 44978 [SYN, ACK] Seq=0 Ack=1 Win=64260 Len=0 MSS=1440 SACK_PERM=1 TSval=1996498322 TSecr=3500466016 WS=128 59 118.532633329 192.168.83.83 → 192.168.83.80 TCP 66 44978 → 2181 [ACK] Seq=1 Ack=1 Win=64896 Len=0 TSval=3500466016 TSecr=1996498322 60 118.535617526 192.168.83.83 → 192.168.83.80 TCP 115 44978 → 2181 [PSH, ACK] Seq=1 Ack=1 Win=64896 Len=49 TSval=3500466019 TSecr=1996498322 61 118.535644624 192.168.83.80 → 192.168.83.83 TCP 66 2181 → 44978 [ACK] Seq=1 Ack=50 Win=64256 Len=0 TSval=1996498325 TSecr=3500466019 62 118.537006985 192.168.83.80 → 192.168.83.83 TCP 107 2181 → 44978 [PSH, ACK] Seq=1 Ack=50 Win=64256 Len=41 TSval=1996498326 TSecr=3500466019 63 118.537047974 192.168.83.83 → 192.168.83.80 TCP 66 44978 → 2181 [ACK] Seq=50 Ack=42 Win=64896 Len=0 TSval=3500466020 TSecr=1996498326 64 118.540259005 192.168.83.83 → 192.168.83.80 TCP 78 44978 → 2181 [PSH, ACK] Seq=50 Ack=42 Win=64896 Len=12 TSval=3500466024 TSecr=1996498326 65 118.540263332 192.168.83.80 → 192.168.83.83 TCP 66 2181 → 44978 [ACK] Seq=42 Ack=62 Win=64256 Len=0 TSval=1996498330 TSecr=3500466024 66 118.541564514 192.168.83.80 → 192.168.83.83 SMPP 86 Bind_receiver[Malformed Packet] 67 118.541607278 192.168.83.83 → 192.168.83.80 TCP 66 44978 → 2181 [ACK] Seq=62 Ack=62 Win=64896 Len=0 TSval=3500466025 TSecr=1996498331 68 118.541999795 192.168.83.80 → 192.168.83.83 TCP 66 2181 → 44978 [FIN, ACK] Seq=62 Ack=62 Win=64256 Len=0 TSval=1996498331 TSecr=3500466025 69 118.542214437 192.168.83.83 → 192.168.83.80 TCP 66 44978 → 2181 [FIN, ACK] Seq=62 Ack=63 Win=64896 Len=0 TSval=3500466026 TSecr=1996498331 </code></pre> <p>Despite this, it seems like I am still seeing the following errors on the Kafka logs:</p> <pre><code>[2021-01-29 19:17:49,922] INFO Session: 0x1000031e07c0011 closed (org.apache.zookeeper.ZooKeeper) [2021-01-29 19:17:49,922] INFO EventThread shut down for session: 0x1000031e07c0011 (org.apache.zookeeper.ClientCnxn) [2021-01-29 19:17:49,925] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient) [2021-01-29 19:17:49,928] ERROR Fatal error during KafkaServer startup. 
Prepare to shutdown (kafka.server.KafkaServer) kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:262) at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:258) at kafka.zookeeper.ZooKeeperClient.&lt;init&gt;(ZooKeeperClient.scala:119) at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1881) at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:441) at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:466) at kafka.server.KafkaServer.startup(KafkaServer.scala:233) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44) at kafka.Kafka$.main(Kafka.scala:82) at kafka.Kafka.main(Kafka.scala) </code></pre> <p>I've tried a few things:</p> <ol> <li>As mentioned above, I saw that IP/TCP traffic between Kafka and Zookeeper did seem to be working successfully, so I don't believe it's an underlying routing issue.</li> <li>This is sort of implied by (1), but I looked at the <code>iptables</code> rules in the <code>nat</code> table, and the rules seem to be correct. The <code>zookeeper</code> service is correctly NAT'd to the <code>zookeeper</code> pod IP.</li> <li>I've manually tried running debugging commands from within the Kafka pod to confirm once again if it could make an end to end connection to Zookeeper. The following seemed to work: <code>echo mntr | nc 10.96.85.98 2181</code>.</li> <li>I don't have any firewalls running to my knowledge. It is entirely possible there is something within <code>iptables</code> that is preventing another layer from working, but this is what I hope to get some clarity on.</li> </ol>
jackson4123
<p>I have this working now. It appears to be because I repeatedly brought the cluster down and up and didn't properly clear the networking state, which probably led to some sort of black-holing somewhere.</p> <p>It may be overkill, but what I ended up doing was simply flushing the <code>iptables</code> rules and restarting all relevant services like <code>docker</code> which required special <code>iptables</code> rules. Now that the cluster is working, I don't envision repeatedly re-creating the cluster.</p>
jackson4123
<p>I was just checking the network driver used for <code>google kubernetes engine</code>. It seems <code>calico</code> is the default GKE driver for network policy.</p> <pre><code> networkPolicyConfig: {} clusterIpv4Cidr: 172.31.92.0/22 createTime: '2022-01-18T19:41:27+00:00' -- networkPolicy: enabled: true provider: CALICO </code></pre> <p>Is it possible to change <code>calico</code> and replace with some other <code>networking addon</code> for <code>gke</code> ?</p>
Zama Ques
<p>Calico is only used for Network Policies in GKE. By default GKE uses a Google network plugin. You also have the option to enable Dataplane <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/dataplane-v2" rel="nofollow noreferrer">V2</a>, which is eBPF-based.</p> <p>In both cases the plugins are managed by Google and you cannot change them.</p>
boredabdel
<p>I see this in Kubernetes doc,</p> <blockquote> <p>In Kubernetes, controllers are control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state.</p> </blockquote> <p>Also this,</p> <blockquote> <p>The Deployment controller and Job controller are examples of controllers that come as part of Kubernetes itself (&quot;built-in&quot; controllers).</p> </blockquote> <p>But, I couldn't find how does the control loop work. Does it check the current state of the cluster every few seconds? If yes, what is the default value?</p> <p>I also found something interesting here,</p> <p><a href="https://stackoverflow.com/questions/55453072/what-is-the-deployment-controller-sync-period-for-kube-controller-manager">What is the deployment controller sync period for kube-controller-manager?</a></p>
karthikeayan
<p>I would like to start explaining that the <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager" rel="nofollow noreferrer">kube-controller-manager</a> is a collection of individual control processes tied together to reduce complexity.</p> <p>Being said that, the control process responsible for monitoring the node's health and a few other parameters is the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller" rel="nofollow noreferrer">Node Controller</a>, and it does that by reading the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#heartbeats" rel="nofollow noreferrer">Heartbeats</a> sent by the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">Kubelet</a> agent in the nodes.</p> <p>According to the Kubernete's documentation:</p> <blockquote> <p>For nodes there are two forms of heartbeats:</p> <ul> <li>updates to the <code>.status</code> of a Node</li> <li><a href="https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/lease-v1/" rel="nofollow noreferrer">Lease</a> objects within the <code>kube-node-lease</code> namespace. Each Node has an associated Lease object.</li> </ul> <p>Compared to updates to <code>.status</code> of a Node, a Lease is a lightweight resource. Using Leases for heartbeats reduces the performance impact of these updates for large clusters.</p> <p>The kubelet is responsible for creating and updating the <code>.status</code> of Nodes, and for updating their related Leases.</p> <ul> <li>The kubelet updates the node's <code>.status</code> either when there is change in status or if there has been no update for a configured interval. The default interval for <code>.status</code> updates to Nodes is 5 minutes, which is much longer than the 40 second default timeout for unreachable nodes.</li> <li>The kubelet creates and then updates its Lease object every 10 seconds (the default update interval). Lease updates occur independently from updates to the Node's <code>.status</code>. If the Lease update fails, the kubelet retries, using exponential backoff that starts at 200 milliseconds and capped at 7 seconds.</li> </ul> </blockquote> <p>As for the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/" rel="nofollow noreferrer">Kubernetes Objects</a> running in the nodes:</p> <blockquote> <p>Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:</p> <ul> <li>What containerized applications are running (and on which nodes)</li> <li>The resources available to those applications</li> <li>The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance</li> </ul> <p>A Kubernetes object is a &quot;record of intent&quot;--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's desired state.</p> </blockquote> <p>Depending on the Kubernetes Object, the controller mechanism is responsible for maintaining its desired state. 
The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment Object</a> for example, uses the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">Replica Set</a> underneath to maintain the desired described state of the Pods; while the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Statefulset Object</a> uses its own Controller for the same purpose.</p> <p>To see a complete list of Kubernetes Objects managed by your cluster, you can run the command: <code>kubectl api-resources</code></p>
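<p>If you want to see the heartbeat mechanism described above in your own cluster, you can look at the Lease objects and the node status directly (the node name is a placeholder):</p> <pre><code>kubectl get leases -n kube-node-lease
kubectl describe node &lt;node-name&gt;
</code></pre>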
Gabriel Robledo Ahumada
<p>I am trying to install Kubernetes in Mac. I followed these instructions - <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a> (for MacOs)</p> <p>Followed all the 5 steps mentioned in that link</p> <pre><code>1. curl -LO &quot;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl&quot; 2.curl -LO &quot;https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256&quot; echo &quot;$(&lt;kubectl.sha256) kubectl&quot; | shasum -a 256 --check output: kubectl: OK 3. chmod +x ./kubectl 4. sudo mv ./kubectl /usr/local/bin/kubectl &amp;&amp; \ sudo chown root: /usr/local/bin/kubectl 5. kubectl version --client </code></pre> <p>Apparently, when I executed this kubectl version --client</p> <p><code>zsh: bad CPU type in executable: kubectl</code></p> <p>I tried to switch the shell from zsh to sh, bash but nothing helped</p>
VJohn
<p>For Mac M1 - install Rosetta: <code>softwareupdate --install-rosetta</code></p> <p>Working on my M1 Big Sur 11.5.1.</p> <p>For more info, have a look at this link: <a href="https://support.apple.com/en-gb/HT211861" rel="nofollow noreferrer">Rosetta</a></p> <p>Also check this <a href="https://apple.stackexchange.com/questions/408375/zsh-bad-cpu-type-in-executable?newreg=943f42fc6a254d34bb2729742d135920">answer</a>.</p>
Optimist Rohit
<p>So I have a bunch of services running in a cluster, all exposed via <code>HTTP</code> only ingress object, example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 name: some-ingress spec: ingressClassName: nginx rules: - http: paths: - backend: service: name: some-svc port: number: 80 path: /some-svc(/|$)(.*) pathType: Prefix </code></pre> <p>They are accessed by <code>http://&lt;CLUSTER_EXTERNAL_IP&gt;/some-svc</code>, and it works ofc.</p> <p>Now I want to create an additional ingress object for every service which will force <code>SSL</code> connections and allow the use of a domain instead of an IP address.</p> <p>The problem is that the newer <code>SSL</code> ingresses always return <code>404</code> while testing the connection.</p> <p>The manifests are as follows:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: &quot;some-ingress-ssl&quot; annotations: ingress.kubernetes.io/ssl-redirect: &quot;true&quot; ingress.kubernetes.io/app-root: &quot;/some-svc&quot; spec: tls: - hosts: - foobar.com secretName: foobar-tls rules: - host: foobar.com http: paths: - path: /some-svc(/|$)(.*) pathType: Prefix backend: service: name: some-svc port: number: 80 </code></pre> <p>tests (foobar.com point to CLUSTER_EXTERNAL_IP):</p> <pre><code>&gt; curl -I http://&lt;CLUSTER_EXTERNAL_IP&gt;/some-svc HTTP/1.1 200 OK &gt; curl -I https://foobar.com/some-svc HTTP/2 404 </code></pre> <p>Is it possible to have both ingresses simultaneously? (one enforcing <code>SSL</code>, the other not) If so what am I doing wrong here?</p>
wiktor
<p>Figured out I was missing this annotation:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$2 </code></pre> <p>in the <code>SSL</code> ingress...</p> <p>It works like a charm now; maybe someone will find this useful.</p>
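<p>For anyone hitting the same thing, here is a sketch of what the corrected <code>SSL</code> ingress could look like with that annotation in place (names, host and secret are taken from the question; <code>ingressClassName: nginx</code> is added to match the plain HTTP ingress):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: &quot;some-ingress-ssl&quot;
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    ingress.kubernetes.io/ssl-redirect: &quot;true&quot;
    ingress.kubernetes.io/app-root: &quot;/some-svc&quot;
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - foobar.com
      secretName: foobar-tls
  rules:
    - host: foobar.com
      http:
        paths:
          - path: /some-svc(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: some-svc
                port:
                  number: 80
</code></pre>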
wiktor
<p>I have a GKE cluster of 5 nodes in the same zone. I'm trying to deploy an Elasticsearch statefulset of 3 nodes on the <strong>kube-system namespace</strong>, but every time I do the statefulset gets deleted and the pods get into the <strong>Terminating</strong> state immediately after the creation of the second pod.</p> <p>I tried to check the <strong>pod logs</strong> and to <strong>describe</strong> the pod for any information but found nothing useful.</p> <p>I even checked the GKE cluster logs where I detected the deletion request log but with no extra information of who is initiating it or why is it happening.</p> <p>When I changed the namespace to default everything was fine and the pods were in the ready state.</p> <p>Below is the manifest file I'm using for this deployment.</p> <pre><code># RBAC authn and authz apiVersion: v1 kind: ServiceAccount metadata: name: elasticsearch-logging namespace: kube-system labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: &quot;true&quot; # addonmanager.kubernetes.io/mode: Reconcile --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: elasticsearch-logging labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: &quot;true&quot; # addonmanager.kubernetes.io/mode: Reconcile rules: - apiGroups: - &quot;&quot; resources: - &quot;services&quot; - &quot;namespaces&quot; - &quot;endpoints&quot; verbs: - &quot;get&quot; --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: kube-system name: elasticsearch-logging labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: &quot;true&quot; # addonmanager.kubernetes.io/mode: Reconcile subjects: - kind: ServiceAccount name: elasticsearch-logging namespace: kube-system apiGroup: &quot;&quot; roleRef: kind: ClusterRole name: elasticsearch-logging apiGroup: &quot;&quot; --- # Elasticsearch deployment itself apiVersion: apps/v1 kind: StatefulSet metadata: name: elasticsearch-logging namespace: kube-system labels: k8s-app: elasticsearch-logging version: 7.16.2 kubernetes.io/cluster-service: &quot;true&quot; # addonmanager.kubernetes.io/mode: Reconcile spec: serviceName: elasticsearch-logging replicas: 2 updateStrategy: type: RollingUpdate selector: matchLabels: k8s-app: elasticsearch-logging version: 7.16.2 template: metadata: labels: k8s-app: elasticsearch-logging version: 7.16.2 kubernetes.io/cluster-service: &quot;true&quot; spec: serviceAccountName: elasticsearch-logging containers: - image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2 name: elasticsearch-logging resources: # need more cpu upon initialization, therefore burstable class limits: cpu: 1000m requests: cpu: 100m ports: - containerPort: 9200 name: db protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: elasticsearch-logging mountPath: /data env: #Added by Nour - name: discovery.seed_hosts value: elasticsearch-master-headless - name: &quot;NAMESPACE&quot; valueFrom: fieldRef: fieldPath: metadata.namespace volumes: - name: elasticsearch-logging # emptyDir: {} # Elasticsearch requires vm.max_map_count to be at least 262144. # If your OS already sets up this number to a higher value, feel free # to remove this init container. 
initContainers: - image: alpine:3.6 command: [&quot;/sbin/sysctl&quot;, &quot;-w&quot;, &quot;vm.max_map_count=262144&quot;] name: elasticsearch-logging-init securityContext: privileged: true volumeClaimTemplates: - metadata: name: elasticsearch-logging spec: storageClassName: &quot;standard&quot; accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 30Gi --- apiVersion: v1 kind: Service metadata: name: elasticsearch-logging namespace: kube-system labels: k8s-app: elasticsearch-logging kubernetes.io/cluster-service: &quot;true&quot; # addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: &quot;Elasticsearch&quot; spec: type: NodePort ports: - port: 9200 protocol: TCP targetPort: db nodePort: 31335 selector: k8s-app: elasticsearch-logging #Added by Nour --- apiVersion: v1 kind: Service metadata: labels: app: elasticsearch-master name: elasticsearch-master namespace: kube-system spec: ports: - name: http port: 9200 protocol: TCP targetPort: 9200 - name: transport port: 9300 protocol: TCP targetPort: 9300 selector: app: elasticsearch-master sessionAffinity: None type: ClusterIP --- apiVersion: v1 kind: Service metadata: labels: app: elasticsearch-master name: elasticsearch-master-headless namespace: kube-system spec: ports: - name: http port: 9200 protocol: TCP targetPort: 9200 - name: transport port: 9300 protocol: TCP targetPort: 9300 clusterIP: None selector: app: elasticsearch-master </code></pre> <p>Below are the available namespaces</p> <pre><code>$ kubectl get ns NAME STATUS AGE default Active 4d15h kube-node-lease Active 4d15h kube-public Active 4d15h kube-system Active 4d15h </code></pre> <p>Am I using any old API version that might cause the issue?</p> <p>Thank you.</p>
nour
<p>To close, I think it would make sense to paste the final answer here:</p> <blockquote> <p>I understand your curiosity. I guess GCP just started preventing people from deploying stuff to the kube-system namespace, as it has the risk of messing with GKE. I never tried to deploy stuff to the kube-system namespace before, so I'm not sure if it was always like this or if we just changed it. Overall I recommend avoiding deploying stuff into the kube-system namespace in GKE.</p> </blockquote>
boredabdel
<p>Good afternoon, colleagues. Please tell me: I set up the k8s+vault integration according to the instructions: <a href="https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar" rel="nofollow noreferrer">https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar</a></p> <p>But I have a test and a production Kubernetes cluster and only one Vault. Is it possible to integrate one Vault with multiple Kubernetes clusters?</p> <p>I found the parameter authPath: &quot;auth/kubernetes&quot;; maybe for the second cluster I could make it authPath: &quot;auth/kubernetes2&quot;, etc.?</p>
Andrew Kaa
<p>It's possible. Here is what needs to be done:</p> <pre><code>helm install vault hashicorp/vault \ --set &quot;injector.externalVaultAddr=http://external-vault:8200&quot; --set &quot;authPath=auth/kubernetesnew&quot; vault auth enable -path kubernetesnew kubernetes .. vault write auth/kubernetesnew/role/k8s-name-role \ bound_service_account_names=k8s-vault-sa \ bound_service_account_namespaces=k8s-vault-namespace \ ttl=24h </code></pre>
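<p>For reference, a rough sketch of configuring the Kubernetes auth method at the new path — the host, JWT and CA values below are placeholders that would come from the second cluster, not values from the original setup:</p> <pre><code>vault write auth/kubernetesnew/config \
    kubernetes_host=&quot;https://&lt;second-cluster-api-server&gt;:443&quot; \
    token_reviewer_jwt=&quot;&lt;service-account-jwt-from-second-cluster&gt;&quot; \
    kubernetes_ca_cert=@ca.crt
</code></pre>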
Andrew Kaa
<p>I have got the following template for a job:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: &quot;gpujob&quot; spec: completions: 1 backoffLimit: 0 ttlSecondsAfterFinished: 600000 template: metadata: name: batch spec: volumes: - name: data persistentVolumeClaim: claimName: &quot;test&quot; containers: - name: myhub image: smat-jupyterlab env: - name: JUPYTERHUB_COOKIE_SECRET value: &quot;sdadasdasda&quot; resources: requests: memory: 500Gi limits: nvidia.com/gpu: 1 command: [&quot;/bin/bash&quot;, &quot;/usr/local/bin/jobscript.sh&quot;, smat-job] volumeMounts: - name: data mountPath: /data restartPolicy: Never nodeSelector: dso-node-role: &quot;inference&quot; </code></pre> <p>As you can see, I claim a lot of memory for the job. My Question is: Does the failed pod free the claimed resources, as soon as it is on a failed state? Due to regulations, I have to keep pods for one week in the cluster, otherwise I would just set a very low <code>ttlSecondsAfterFinished</code>. I read a lot of contradicting stuff in articles, but found nothing in the official docs.</p> <p><a href="https://i.stack.imgur.com/mxRRB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mxRRB.png" alt="enter image description here" /></a></p> <p><strong>TDLR: Does a failed Pod free claimed resources of a cluster? If no, what is a good way to do it?</strong></p>
Data Mastery
<p>Yes, a failed or completed job will produce a container in <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-state-terminated" rel="nofollow noreferrer">Terminated</a> state, and therefore the resources allocated to it are freed.</p> <p>You can easily confirm this by using the command:</p> <pre><code>kubectl top pod </code></pre> <p>You should not see any pod associated with the failed job consuming resources.</p>
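<p>If you want to confirm it from the scheduler's point of view as well, you can check the node's allocated resources; pods in a Failed (or Succeeded) phase no longer count toward the requests shown there (the node name is a placeholder):</p> <pre><code>kubectl describe node &lt;node-name&gt; | grep -A 10 &quot;Allocated resources&quot;
</code></pre>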
Gabriel Robledo Ahumada
<p>I have built my docker image in Docker desktop but I do not know how to config so that terraform kubernetes can refer to local image? (it stuck while creating the pod)</p> <p>Here is my tf file look like</p> <pre><code>.... provider &quot;kubernetes&quot; { config_path = &quot;~/.kube/config&quot; } resource &quot;kubernetes_pod&quot; &quot;test&quot; { metadata { name = &quot;backend-api&quot; labels = { app = &quot;MyNodeJsApp&quot; } } spec { container { image = &quot;backendnodejs:0.0.1&quot; name = &quot;backendnodejs-container&quot; # I think it keep pulling from Docker Hub port { container_port = 5000 } } } } resource &quot;kubernetes_service&quot; &quot;test&quot; { metadata { name = &quot;backendnodejs-service&quot; } spec { selector = { app = kubernetes_pod.test.metadata.0.labels.app } port { port = 5000 target_port = 5000 } type = &quot;LoadBalancer&quot; } } </code></pre>
Babyface_Developer
<p>So after hours of researching how to deploy to minikube (installed from the minikube website, not Docker Desktop Kubernetes), I found out that minikube runs as a container itself in Docker Desktop, which is why you cannot use images from Docker Desktop to deploy into minikube.</p> <p>Here are the links that address this:</p> <ul> <li><a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/" rel="nofollow noreferrer">Pushing images from minikube</a></li> <li><a href="https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube">How to use local docker images with Minikube?</a></li> </ul> <p>So here are the things you will need before using terraform to deploy to minikube:</p> <ol> <li><p>Pushing directly to the in-cluster Docker daemon (docker-env)</p> <ul> <li><strong>Windows</strong> <ul> <li>Powershell <pre><code>PS&gt; &amp; minikube -p minikube docker-env --shell powershell | Invoke-Expression </code></pre> </li> <li>CMD <pre><code>CMD&gt; @FOR /f &quot;tokens=*&quot; %i IN ('minikube -p minikube docker-env --shell cmd') DO @%i </code></pre> </li> </ul> </li> <li><strong>Linux/MacOS</strong> <pre><code>&gt; eval $(minikube docker-env) </code></pre> </li> </ul> </li> <li><p>Build the docker image again (in the same terminal where the command above was entered)</p> <pre><code>docker build -t your_image_tag your_docker_file </code></pre> </li> <li><p>Run the normal terraform file (same terminal)</p> </li> </ol> <p><a href="https://shashanksrivastava.medium.com/how-to-set-up-minikube-to-use-your-local-docker-registry-10a5b564883" rel="nofollow noreferrer">This link</a> also explains the same as above.</p>
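<p>One additional point that may help (this is an assumption about your setup, not something from the links above): when the image only exists in the local Docker daemon, it is worth setting the pull policy in the Terraform container block so Kubernetes doesn't try to pull it from Docker Hub:</p> <pre><code>container {
  image             = &quot;backendnodejs:0.0.1&quot;
  name              = &quot;backendnodejs-container&quot;
  # use the locally built image only; &quot;IfNotPresent&quot; also works
  image_pull_policy = &quot;Never&quot;

  port {
    container_port = 5000
  }
}
</code></pre>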
Babyface_Developer
<p>We're looking to migrate our standard GKE cluster to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview" rel="nofollow noreferrer">autopilot</a>, but several pods that we have doesn't require much CPU/memory after they started up. However, during startup, we would like some pods to get more CPU so that they start faster. For example, consider this pod:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: my-pod labels: name: my-pod spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: name: my-pod template: metadata: labels: name: my-pod version: &quot;1.0.0&quot; spec: containers: - name: my-pod image: &quot;europe-west1-docker.pkg.dev/my-repo/my-pod:1.0.0&quot; resources: requests: memory: &quot;1024Mi&quot; cpu: &quot;500m&quot; ... </code></pre> <p>This pod could take 2-3 minutes to start up. If we change CPU to 2000m, it starts <em>much</em> faster. But this pod is long-running, and it's unnecessary for us to pay for 2000m when 500m (or less) is required once it's up and running. I'm looking for something similar to <a href="https://cloud.google.com/run/docs/configuring/cpu#startup-boost" rel="nofollow noreferrer">Cloud Run Startup Boost</a> but for GKE Autopilot.</p> <p>Does something like this exist?</p>
Johan
<p>Unfortunately no. Kubernetes itself doesn't have this feature.</p> <p>Maybe you should look for ways to optimise the startup time of the container itself.</p>
boredabdel
<p>I'm using Grafana based on the helm chart. At the moment I have all the configuration as code: the main configuration is placed into the <code>values.yaml</code> as part of the <code>grafana.ini</code> values, the dashboards and datasources are placed into configmaps (one per datasource or dashboard), and the sidecar container is in charge of picking them up based on the labels.</p> <p>Now I want to use apps, and the first app I'm trying is the Cloudflare app from <a href="https://grafana.com/grafana/plugins/cloudflare-app" rel="noreferrer">here</a>. The app is installed correctly using the plugins section in the chart <code>values.yaml</code>, but I don't see any documentation on how to pass the email and token of the Cloudflare API via configMap or JSON.</p> <p>Is it possible? Or do I have to configure it manually inside the app settings?</p>
wolmi
<p>To update this answer: this plugin began supporting API tokens in December 2020. In order to have the Grafana provisioner pick up your token, if you're using an API token instead of the email/API key, you must specify:</p> <pre><code> jsonData: bearerSet: true secureJsonData: bearer: &quot;your-api-token&quot; </code></pre>
paladin-devops
<p>My colleague and I have an issue: whenever I type</p> <pre class="lang-sh prettyprint-override"><code>helm install mystuff-nginx ingress-nginx/ingress-nginx --version 3.26.0 </code></pre> <p>I successfully deploy nginx in version <code>3.26.0</code>; however, when he runs the same command on his laptop, just with a different name <code>mystuff-nginx-1</code>, he installs the latest version <code>4.0.1</code>. Any idea what's going on? We have helm, gcloud and kubectl in the same versions, and even redownloaded the binary.</p> <p>And we know the version is available:</p> <pre class="lang-sh prettyprint-override"><code>MacBook-Pro-2% helm search repo -l ingress-nginx/ingress-nginx ingress-nginx/ingress-nginx 4.0.2 1.0.1 Ingress controller for Kubernetes using NGINX a... ingress-nginx/ingress-nginx 4.0.1 1.0.0 Ingress controller for Kubernetes using NGINX a... ingress-nginx/ingress-nginx 3.37.0 0.49.1 Ingress controller for Kubernetes using NGINX a... ingress-nginx/ingress-nginx 3.36.0 0.49.0 Ingress controller for Kubernetes using NGINX a... ingress-nginx/ingress-nginx 3.35.0 0.48.1 Ingress controller for Kubernetes using NGINX a... ... ingress-nginx/ingress-nginx 3.26.0 0.44.0 Ingress controller for Kubernetes using NGINX a... </code></pre>
CptDolphin
<p>According to Helm's documentation:</p> <pre><code>helm install [NAME] [CHART] [flags] </code></pre> <p>The version flag:</p> <pre><code>--version string specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used </code></pre> <p>So, you can try:</p> <pre><code>$ helm install nginx-ingress ingress-nginx/ingress-nginx --version &quot;3.26.0&quot; </code></pre> <p>helm list:</p> <pre><code>NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nginx-ingress default 1 2021-09-24 09:44:54.261772858 -0300 -03 deployed ingress-nginx-3.26.0 0.44.0 </code></pre> <p>I used the helm version v3.5.4 and a k3d cluster to test.</p>
Emidio Neto
<p><a href="https://i.stack.imgur.com/rcm8B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rcm8B.png" alt="enter image description here" /></a></p> <p>The picture above shows the list of all kubernetes pods I need to save to a text file (or multiple text files).</p> <p>I need a command which:</p> <ol> <li><p>stores multiple pod logs into text files (or on single text file) - so far I have this command which stores one pod into one text file but this is not enough since I will have to spell out each pod name individually for every pod:</p> <p>$ kubectl logs ipt-prodcat-db-kp-kkng2 -n ho-it-sst4-i-ie-enf &gt; latest.txt</p> </li> <li><p>I then need the command to send these files into a python script where it will check for various strings - so far this works but if this could be included with the above command then that would be extremely useful:</p> <p>python CheckLogs.py latest.txt latest2.txt</p> </li> </ol> <p>Is it possible to do either (1) or both (1) and (2) in a single command?</p>
marcz2007
<p>The simplest solution is to create a shell script that does exactly what you are looking for:</p> <pre><code>#!/bin/sh FILE=&quot;text1.txt&quot; for p in $(kubectl get pods -o jsonpath=&quot;{.items[*].metadata.name}&quot;); do kubectl logs $p &gt;&gt; $FILE done </code></pre> <p>With this script you will get the logs of all the pods in your namespace in a FILE. You can even add <code>python CheckLogs.py latest.txt</code></p>
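<p>If you prefer one file per pod and want to feed them all to the Python script in one go (part 2 of the question), a variation could look like this — the namespace is taken from your example and <code>CheckLogs.py</code> from your command:</p> <pre><code>#!/bin/sh
NAMESPACE=&quot;ho-it-sst4-i-ie-enf&quot;
FILES=&quot;&quot;

for p in $(kubectl get pods -n &quot;$NAMESPACE&quot; -o jsonpath=&quot;{.items[*].metadata.name}&quot;); do
  # one log file per pod
  kubectl logs &quot;$p&quot; -n &quot;$NAMESPACE&quot; &gt; &quot;$p.txt&quot;
  FILES=&quot;$FILES $p.txt&quot;
done

# pass all generated files to the checker script
python CheckLogs.py $FILES
</code></pre>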
Alonso Valdivia
<p>I have the following two secrets for two different docker registries:</p> <p>secret1-registry.yaml:</p> <pre><code>apiVersion: v1 data: .dockerconfigjson: somevalue1 kind: Secret metadata: name: metadata1 type: kubernetes.io/dockerconfigjson </code></pre> <p>secret2-registry.yaml:</p> <pre><code>apiVersion: v1 data: .dockerconfigjson: somevalue2 kind: Secret metadata: name: metadata2 type: kubernetes.io/dockerconfigjson </code></pre> <p>Is it possible to combine the two secrets?</p>
Marc
<p>If you want to combine them manually, the <code>.dockerconfigjson</code> field should be a base64-encoded representation of the combined Docker configuration JSON. To create the base64-encoded data, you can use a tool like <code>echo -n '&lt;json_data&gt;' | base64 -w 0</code>.</p> <pre><code>apiVersion: v1 data: .dockerconfigjson: combined_base64_encoded_data kind: Secret metadata: name: combined-docker-secrets type: kubernetes.io/dockerconfigjson </code></pre> <p>You can also use kubectl commands to create the combined secret, assuming you have the content of somevalue1 and somevalue2 as actual Docker configuration JSONs.</p> <pre><code># Base64 encode the combined Docker configuration JSON combined_data=$(echo -n '{&quot;auths&quot;:{&quot;registry1&quot;:{&quot;auth&quot;:&quot;somevalue1&quot;},&quot;registry2&quot;:{&quot;auth&quot;:&quot;somevalue2&quot;}}}' | base64 -w 0) # Create the combined secret kubectl create secret generic combined-docker-secrets \ --from-literal=.dockerconfigjson=$combined_data \ --type=kubernetes.io/dockerconfigjson </code></pre>
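<p>If you would rather build the combined JSON from the two existing secrets instead of typing it by hand, a sketch using <code>jq</code> (assuming it is installed) could be:</p> <pre><code># decode both existing secrets
kubectl get secret metadata1 -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d &gt; cfg1.json
kubectl get secret metadata2 -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d &gt; cfg2.json

# merge their auths maps into one config
jq -s '{auths: (.[0].auths + .[1].auths)}' cfg1.json cfg2.json &gt; combined.json

# create the combined secret from the merged file
kubectl create secret generic combined-docker-secrets \
  --from-file=.dockerconfigjson=combined.json \
  --type=kubernetes.io/dockerconfigjson
</code></pre>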
Nazrul Chowdhury
<p>I am trying to test an outbound connection from within an Amazon Linux 2 container that is running in Kubernetes. I have a service set up and I am able to telnet to that service through a VPN. But I want to test a connection coming out from that container. Is there a way that this can be done? I have tried ping, etc., but the commands all say &quot;command not found&quot;.</p> <p>Is there any command I can run that can test an outbound connection?</p>
Gene Smith
<p>Please provide more context. What exact image are you running? When debugging connectivity of kubernetes pods and services, you can exec into the pod with</p> <pre><code>kubectl exec -it &lt;pod_name&gt; -n &lt;namespace&gt; -- &lt;bash|ash|sh&gt; </code></pre> <p>Once you gain access to the pod and can emulate a shell inside, you can update + upgrade the runtime with the package manager (apt, yum, depends on the distro).</p> <p>After upgrading, you can install <strong>curl</strong> and try to curl an external site.</p>
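<p>For an Amazon Linux 2 based container specifically (as in the question), the package manager is <code>yum</code>; a rough sketch, assuming the container has enough privileges to install packages:</p> <pre><code>kubectl exec -it &lt;pod_name&gt; -n &lt;namespace&gt; -- bash

# inside the pod:
yum install -y curl bind-utils
curl -sv https://www.example.com   # test an outbound HTTP(S) connection
nslookup example.com               # test DNS resolution
</code></pre>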
Pavol Krajkovič
<p>I installed GitLab runner via <a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">HelmChart</a> on my <code>Kubernetes</code> cluster</p> <p>While installing via helm I used config <code>values.yaml</code></p> <p>But my Runner stucks every time at <code>docker login</code> command, without <code>docker login</code> working good</p> <p>I have no idea what is wrong :( <strong>Any help appreciated!</strong></p> <p><strong>Error:</strong> <code>write tcp 10.244.0.44:50882-&gt;188.72.88.34:443: use of closed network connection</code></p> <p><a href="https://i.stack.imgur.com/vVOn4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vVOn4.png" alt="enter image description here" /></a></p> <p><code>.gitlab-ci.yaml</code> file</p> <pre><code>build docker image: stage: build image: docker:latest services: - name: docker:dind entrypoint: [&quot;env&quot;, &quot;-u&quot;, &quot;DOCKER_HOST&quot;] command: [&quot;dockerd-entrypoint.sh&quot;] variables: DOCKER_HOST: tcp://localhost:2375/ DOCKER_DRIVER: overlay2 DOCKER_TLS_CERTDIR: &quot;&quot; before_script: - mkdir -p $HOME/.docker - echo passwd| docker login -u user https://registry.labs.com --password-stdin script: - docker images - docker ps - docker pull registry.labs.com/jappweek:a_zh - docker build -t &quot;$CI_REGISTRY&quot;/&quot;$CI_REGISTRY_IMAGE&quot;:1.8 . - docker push &quot;$CI_REGISTRY&quot;/&quot;$CI_REGISTRY_IMAGE&quot;:1.8 tags: - k8s </code></pre> <p><code>values.yaml</code> file</p> <pre><code>image: registry: registry.gitlab.com #image: gitlab/gitlab-runner:v13.0.0 image: gitlab-org/gitlab-runner # tag: alpine-v11.6.0 imagePullPolicy: IfNotPresent gitlabUrl: https://gitlab.somebars.com runnerRegistrationToken: &quot;GR1348941a7jJ4WF7999yxsya9Arsd929g&quot; terminationGracePeriodSeconds: 3600 # concurrent: 10 checkInterval: 30 sessionServer: enabled: false ## For RBAC support: rbac: create: true rules: - resources: [&quot;configmaps&quot;, &quot;pods&quot;, &quot;pods/attach&quot;, &quot;secrets&quot;, &quot;services&quot;] verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;patch&quot;, &quot;update&quot;, &quot;delete&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;pods/exec&quot;] verbs: [&quot;create&quot;, &quot;patch&quot;, &quot;delete&quot;] clusterWideAccess: false podSecurityPolicy: enabled: false resourceNames: - gitlab-runner metrics: enabled: false portName: metrics port: 9252 serviceMonitor: enabled: false service: enabled: false type: ClusterIP runners: config: | [[runners]] [runners.kubernetes] namespace = &quot;{{.Release.Namespace}}&quot; image = &quot;ubuntu:16.04&quot; privileged: true cache: {} builds: {} services: {} helpers: {} securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: false runAsNonRoot: true privileged: false capabilities: drop: [&quot;ALL&quot;] podSecurityContext: runAsUser: 100 # runAsGroup: 65533 fsGroup: 65533 resources: {} affinity: {} nodeSelector: {} tolerations: [] hostAliases: [] podAnnotations: {} podLabels: {} priorityClassName: &quot;&quot; secrets: [] configMaps: {} volumeMounts: [] volumes: [] </code></pre>
Shukurillo Baikhanov
<p>I bypassed <code>docker login</code> by importing the <code>$HOME/.docker/config.json</code> file, which stores the <code>auth token</code>, from my host machine into GitLab CI:</p> <pre><code> before_script: - mkdir -p $HOME/.docker - echo $DOCKER_AUTH_CONFIG &gt; $HOME/.docker/config.json </code></pre> <p><code>$DOCKER_AUTH_CONFIG</code> is the content of <code>$HOME/.docker/config.json</code>.</p> <p>That's all, no <code>docker login</code> required.</p>
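<p>For reference, the value stored in the <code>$DOCKER_AUTH_CONFIG</code> CI/CD variable is just the usual Docker <code>config.json</code> structure; a sketch with the registry from the question and a placeholder token:</p> <pre><code>{
  &quot;auths&quot;: {
    &quot;registry.labs.com&quot;: {
      &quot;auth&quot;: &quot;&lt;base64 of user:password&gt;&quot;
    }
  }
}
</code></pre>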
Shukurillo Baikhanov
<p>how to set image name/tag for container images specified in CRDs through the <code>kustomization.yaml</code> using the images field?</p> <p>The <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/images/" rel="nofollow noreferrer">images</a> field works well when the container images are specified in either <code>Deployment</code> or <code>StatefulSet</code>, but not transform a CRD resource from:</p> <pre><code>apiVersion: foo.example.com/v1alpha1 kind: Application spec: image: xxx </code></pre> <p>To:</p> <pre><code>apiVersion: foo.example.com/v1alpha1 kind: Application spec: image: new-image:tag </code></pre>
shawnzhu
<p>Your task can be solved easily using <code>yq</code>. The command depends on the <code>yq</code> implementation you are using:</p> <h3><a href="https://mikefarah.gitbook.io/yq/" rel="nofollow noreferrer">mikefarah/yq - version 4</a></h3> <p><code>IMAGE=&quot;new-image:tag&quot; yq e '.spec.image = strenv(IMAGE)'</code></p> <h3><a href="https://github.com/kislyuk/yq" rel="nofollow noreferrer">kislyuk/yq</a></h3> <p><code>yq -y --arg IMAGE &quot;new-image:tag&quot; '.spec.image |= $IMAGE'</code></p>
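<p>As a usage example, to update a file in place with the mikefarah v4 syntax (<code>application.yaml</code> is just a placeholder file name here):</p> <pre><code>IMAGE=&quot;new-image:tag&quot; yq e '.spec.image = strenv(IMAGE)' -i application.yaml
</code></pre>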
jpseng
<p>I have two separate clusters (Application and DB) in the same namespace: a StatefulSet for the DB cluster and a Deployment for the Application cluster. For internal communication I have configured a Headless Service. When I ping from a pod in the application cluster to the service, it works (it works the other way round too - DB pod to service works). But sometimes, for example if I continuously execute the ping command like 3 times, the third time it gives an error - <strong>&quot;ping: : Temporary failure in name resolution&quot;</strong>. Why is this happening?</p>
Dusty
<p>As far as I know this is usually a name resolution error and shows that your DNS server cannot resolve the domain names into their respective IP addresses. This can present a grave challenge, as you will not be able to update, upgrade, or even install any software packages on your Linux system. Here I have listed a few reasons:</p> <p><strong>1. Missing or wrongly configured resolv.conf file</strong></p> <p>The /etc/resolv.conf file is the resolver configuration file in Linux systems. It contains the DNS entries that help your Linux system resolve domain names into IP addresses.</p> <p>If this file is not present, or it is there but you are still having the name resolution error, create one and append the Google public DNS server as <code>nameserver 8.8.8.8</code></p> <p>Save the changes and restart the systemd-resolved service as shown:</p> <p><code>$ sudo systemctl restart systemd-resolved.service</code></p> <p>It’s also prudent to check the status of the resolver and ensure that it is active and running as expected:</p> <p><code>$ sudo systemctl status systemd-resolved.service</code></p> <p><strong>2. Due to Firewall Restrictions</strong></p> <p>If by some chance the first solution did not work for you, firewall restrictions could be preventing you from successfully performing DNS queries. Check your firewall and confirm if port 53 (used for DNS – Domain Name Resolution) and port 43 are open. If the ports are blocked, open them as follows.</p> <p>For the UFW firewall (Ubuntu / Debian and Mint), to open ports 53 &amp; 43 run the commands below:</p> <pre><code>$ sudo ufw allow 53/tcp $ sudo ufw allow 43/tcp $ sudo ufw reload </code></pre> <p>For firewalld (RHEL / CentOS / Fedora), i.e. Red Hat based systems such as CentOS, invoke the commands below:</p> <pre><code>$ sudo firewall-cmd --add-port=53/tcp --permanent $ sudo firewall-cmd --add-port=43/tcp --permanent $ sudo firewall-cmd --reload </code></pre> <p>I hope that you now have an idea about the ‘temporary failure in name resolution‘ error. I also found a similar git issue, hope that helps:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/6667" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/6667</a></p>
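<p>Since the error in the question happens between pods inside a cluster, it is also worth checking the in-cluster DNS path before touching the host configuration — a few generic commands (pod, service and namespace names are placeholders):</p> <pre><code># which resolver the application pod is actually using
kubectl exec -it &lt;app-pod&gt; -- cat /etc/resolv.conf

# resolve the headless service from inside the pod
kubectl exec -it &lt;app-pod&gt; -- nslookup &lt;headless-service&gt;.&lt;namespace&gt;.svc.cluster.local

# check that the cluster DNS pods are healthy
kubectl get pods -n kube-system -l k8s-app=kube-dns
</code></pre>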
Aditya Ramakrishnan
<p>Kubernetes not able to find metric-server api.I am using Kubernetes with Docker on Mac. I was trying to do HPA from <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="noreferrer">following example</a>. However, when I execute command <code>kubectl get hpa</code>, My target still was unknown. Then I tried, <code>kubectl describe hpa</code>. Which gave me error like below:</p> <pre><code> Name: php-apache Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; CreationTimestamp: Sun, 07 Oct 2018 12:36:31 -0700 Reference: Deployment/php-apache Metrics: ( current / target ) resource cpu on pods (as a percentage of request): &lt;unknown&gt; / 5% Min replicas: 1 Max replicas: 10 Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedComputeMetricsReplicas 1h (x34 over 5h) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API Warning FailedGetResourceMetric 1m (x42 over 5h) horizontal-pod-autoscaler unable to get metrics for resource cpu: no metrics returned from resource metrics API </code></pre> <p>I am using <a href="https://github.com/kubernetes-incubator/metrics-server" rel="noreferrer">metrics-server</a> as suggested in Kubernetes documentation. I also tried doing same just using Minikube. But that also didn't work.</p> <p>Running <code>kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes</code> outputs :</p> <pre><code>{ &quot;kind&quot;: &quot;NodeMetricsList&quot;, &quot;apiVersion&quot;: &quot;metrics.k8s.io/v1beta1&quot;, &quot;metadata&quot;: { &quot;selfLink&quot;: &quot;/apis/metrics.k8s.io/v1beta1/nodes&quot; }, &quot;items&quot;: [] } </code></pre>
Vivek
<p>Use the official metrics server - <a href="https://github.com/kubernetes-sigs/metrics-server" rel="noreferrer">https://github.com/kubernetes-sigs/metrics-server</a></p> <p>If you use one master node, run this command to create the <code>metrics-server</code>:</p> <pre><code>kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml </code></pre> <p>If you have an HA (high availability) cluster, use this:</p> <pre><code>kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml </code></pre> <p>Then you can use <code>kubectl top nodes</code> or <code>kubectl top pods -A</code> and get something like:</p> <pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% </code></pre>
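<p>On Docker Desktop and Minikube the metrics-server often stays unready because it cannot verify the kubelet's self-signed certificate. If that is your case (an assumption — check the metrics-server pod logs first), adding the <code>--kubelet-insecure-tls</code> flag is a common workaround:</p> <pre><code>kubectl patch deployment metrics-server -n kube-system --type='json' \
  -p='[{&quot;op&quot;:&quot;add&quot;,&quot;path&quot;:&quot;/spec/template/spec/containers/0/args/-&quot;,&quot;value&quot;:&quot;--kubelet-insecure-tls&quot;}]'
</code></pre>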
Akhil Manepalli
<p>Forgive me for asking a stupid question but I can't seem to find anywhere in the Kubernetes API reference how to query logs via the REST API if there's more than one container running inside the pod?</p> <p><code>cURL -k -H Authorization: Bearer my-super-secret-token https://kubernetes/api/v1/namespaces/default/pods/my-app-1/log</code></p> <p>Returns:</p> <blockquote> <p>{&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;a container name must be specified for pod my-app-1, choose one of: [nginx php-fpm]&quot;,&quot;reason&quot;:&quot;BadRequest&quot;,&quot;code&quot;:400}</p> </blockquote> <p>I tried:</p> <p><code>cURL -k -H Authorization: Bearer my-super-secret-token https://kubernetes/api/v1/namespaces/default/pods/my-app-1/nginx/log</code></p> <p>and it results in an error that the resource can't be found.</p> <p>How do I specify the container name when making an HTTP request to the API?</p>
Joel
<p>Figured it out - I needed to add <strong>container</strong> using a query parameter:</p> <p><em>?container=nginx</em></p> <p><strong>Working Example:</strong></p> <p><code>cURL -k -H Authorization: Bearer my-super-secret-token https://kubernetes/api/v1/namespaces/default/pods/my-app-1/log?container=nginx</code></p>
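<p>The same endpoint accepts a few other useful query parameters as well, for example:</p> <pre><code>.../pods/my-app-1/log?container=nginx&amp;tailLines=100     # only the last 100 lines
.../pods/my-app-1/log?container=nginx&amp;previous=true     # logs of the previous (crashed) container
.../pods/my-app-1/log?container=nginx&amp;follow=true       # stream the logs
</code></pre>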
Joel
<p>I try to mount a linux directory as a shared directory for multiple containers in minikube.<br /> Here is my config:</p> <p><code>minikube start --insecure-registry=&quot;myregistry.com:5000&quot; --mount --mount-string=&quot;/tmp/myapp/k8s/:/data/myapp/share/&quot;</code></p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: manual provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer --- apiVersion: v1 kind: PersistentVolume metadata: name: myapp-share-storage spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteMany local: path: &quot;/data/myapp/share/&quot; nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - minikube --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myapp-share-claim spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 10Gi --- apiVersion: apps/v1 kind: Deployment metadata: labels: io.kompose.service: myapp-server name: myapp-server spec: selector: matchLabels: io.kompose.service: myapp-server template: metadata: labels: io.kompose.service: myapp-server spec: containers: - name: myapp-server image: myregistry.com:5000/server-myapp:alpine ports: - containerPort: 80 resources: {} volumeMounts: - mountPath: /data/myapp/share name: myapp-share env: - name: storage__root_directory value: /data/myapp/share volumes: - name: myapp-share persistentVolumeClaim: claimName: myapp-share-claim status: {} </code></pre> <p>It works with pitfalls: Statefulset are not supported, they bring deadlock errors :</p> <ul> <li>pending PVC: waiting for first consumer to be created before binding</li> <li>pending POD: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind</li> </ul> <p>Another option is to use minikube persistentvolumeclaim without persistentvolume (it will be created automatically). However:</p> <ul> <li>The volume is created in /tmp (ex: /tmp/hostpath-provisioner/default/myapp-share-claim)</li> <li>Minikube doesn't honor mount request</li> </ul> <p>How can I make it just work?</p>
Kiruahxh
<p>Using your yaml file I've managed to create the volumes and deploy it without issue, but I had to use the command <code>minikube mount /mydir/:/data/myapp/share/</code> after starting minikube, since <code>--mount --mount-string=&quot;/mydir/:/data/myapp/share/&quot;</code> wasn't working.</p>
EudaldGM
<p>I'm following this <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">Link</a> to install <code>nginx-ingress-controller</code> on my bare metal server <code>Kubernetes-v.1.19.16</code></p> <p>The below commands i have executed as part of installation.</p> <pre><code>$ git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v2.4.0 $ cd kubernetes-ingress/deployments $ kubectl apply -f common/ns-and-sa.yaml $ kubectl apply -f rbac/rbac.yaml $ kubectl apply -f rbac/ap-rbac.yaml $ kubectl apply -f rbac/apdos-rbac.yaml $ kubectl apply -f common/default-server-secret.yaml $ kubectl apply -f common/nginx-config.yaml $ kubectl apply -f common/ingress-class.yaml $ kubectl apply -f daemon-set/nginx-ingress.yaml </code></pre> <p>I have followed <code>DaemonSet</code> method.</p> <pre><code>$ kubectl get all -n nginx-ingress NAME READY STATUS RESTARTS AGE pod/nginx-ingress-bcrk5 0/1 Running 0 19m pod/nginx-ingress-ndpfz 0/1 Running 0 19m pod/nginx-ingress-nvp98 0/1 Running 0 19m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/nginx-ingress 3 3 0 3 0 &lt;none&gt; 19m </code></pre> <p>For all three <code>nginx-ingress</code> pods same error it shown.</p> <pre><code>$ kubectl describe pods nginx-ingress-bcrk5 -n nginx-ingress Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 38m default-scheduler Successfully assigned nginx-ingress/nginx-ingress-bcrk5 to node-4 Normal Pulling 38m kubelet Pulling image &quot;nginx/nginx-ingress:2.4.0&quot; Normal Pulled 37m kubelet Successfully pulled image &quot;nginx/nginx-ingress:2.4.0&quot; in 19.603066401s Normal Created 37m kubelet Created container nginx-ingress Normal Started 37m kubelet Started container nginx-ingress Warning Unhealthy 3m13s (x2081 over 37m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <pre><code>$ kubectl logs -l app=nginx-ingress -n nginx-ingress E1007 03:18:37.278678 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.VirtualServer: failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org) W1007 03:18:55.714313 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org) E1007 03:18:55.714361 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org) W1007 03:19:00.542294 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org) E1007 03:19:00.542340 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1alpha1.TransportServer: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org) </code></pre> <p>Still <code>READY</code> and <code>UP-TO-DATE</code> state showing <code>0</code>, Ideally it show <code>3</code> in both the categories. Please let me know what i'm missing here as part of installation?</p> <p>Any help is appreciated.</p>
user4948798
<p>I'd recommend installing it using <code>helm</code></p> <p>See <a href="https://github.com/nginxinc/kubernetes-ingress/tree/main/deployments/helm-chart" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/tree/main/deployments/helm-chart</a></p> <pre class="lang-bash prettyprint-override"><code>helm repo add nginx-stable https://helm.nginx.com/stable helm install nginx-ingress nginx-stable/nginx-ingress \ --namespace $NAMESPACE \ --version $VERSION </code></pre> <p>You can look for versions compatibles with your Kubernetes cluster version using:</p> <pre class="lang-bash prettyprint-override"><code>helm search repo nginx-stable/nginx-ingress --versions </code></pre> <p>When installation is well finished, you should see ingress-controller service that holds an <code>$EXTERNAL-IP</code></p> <pre class="lang-bash prettyprint-override"><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.0.XXX.XXX XX.XXX.XXX.XX 80:30578/TCP,443:31874/TCP 548d </code></pre>
Reda E.
<p>I am trying to add ELK to my project which is running on Kubernetes. I want to go through Filebeat -&gt; Logstash, then Elasticsearch. I prepared my filebeat.yml file, and in my company Filebeat is configured as an agent in the cluster, which I don't really know what it means. I want to know how to configure Filebeat in this case. Is it just a matter of adding the file to the project so it will be taken into consideration once the pod starts, or how does it work?</p>
fbm fatma
<p>You can configure the Filebeat in some ways.</p> <p>1 - You can configure it using the DeamonSet, meaning each node of your Kubernetes architecture will have one POD of Filebeat. Usually, in this architecture, you'll need to use only one filebeat.yaml configuration file and set the inputs, filters, outputs (output to Logstash, Elasticsearch, etc.), etc. In this case, your filebeat will need root access inside your cluster.</p> <p>2 - Using Filebeat as a Sidecar with your application k8s resource. You can configure an emptyDir in the Deployment/StatefulSet, share it with the Filebeat Sidecar, and set the Filebeat to monitor this directory.</p>
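<p>As a rough sketch of option 1 (DaemonSet), the core of the <code>filebeat.yml</code> could look like this — the Logstash host/port and the use of autodiscover hints are assumptions you would adapt to your company's setup:</p> <pre><code>filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log

output.logstash:
  hosts: [&quot;logstash.elk.svc.cluster.local:5044&quot;]
</code></pre>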
Marcos Rosse
<p>I deployed a web application on a Kubernetes cluster. This application consists of multiple nodes, and each node has multiple pods. How can I do a performance measurement of the whole application? I can see some metrics results in Prometheus/Grafana, but these results are for each node/pod, not for the whole application. I am just trying to understand the bigger picture. For example, I see that application data is stored in Redis (a pod), but is it enough to look into only that pod to measure latency?</p>
user19717254
<p>Every kubelet has <code>cAdvisor</code> (<a href="https://github.com/google/cadvisor" rel="nofollow noreferrer">link</a>) integrated into the binary. <code>cAdvisor</code> provides container users an understanding of the resource usage and performance characteristics of their running containers. Combine <code>cAdvisor</code> with an additional exporter like <code>JMX exporter</code> (for java, <a href="https://github.com/prometheus/jmx_exporter" rel="nofollow noreferrer">link</a>), <code>BlackBox exporter</code> for probe-ing urls (response time, monitor http codes, etc. <a href="https://github.com/prometheus/blackbox_exporter" rel="nofollow noreferrer">link</a>). There are also frameworks, that provide metrics such as Java Springboot on path <code>/actuator/prometheus</code> and you can scrape these metrics. There are many different exporters (<a href="https://github.com/prometheus?q=exporter&amp;type=all&amp;language=&amp;sort=" rel="nofollow noreferrer">link</a>), with each doing something else. When you gather all these metrics, you can have a bigger overview about the state of your application. Couple this with handmade Grafana dashboards and Alerting (<code>AlertManager</code> e.g.) and you can monitor almost everything about your application.</p> <p>As per the prometheus/grafana stack, I guess what you are talking about is the <code>kube-prom-stack</code> with default dashboards already implemented into them.</p>
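<p>Since you mention the kube-prom-stack: a common way to get application-level metrics (for example a Spring Boot <code>/actuator/prometheus</code> endpoint) into it is a ServiceMonitor. A minimal sketch — the labels, port name and release name are assumptions and must match your Service and your Prometheus <code>serviceMonitorSelector</code>:</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    release: kube-prometheus-stack   # assumed selector label of your Prometheus
spec:
  selector:
    matchLabels:
      app: my-app                    # label on the Service exposing your app
  endpoints:
    - port: http                     # named port on that Service
      path: /actuator/prometheus
      interval: 30s
</code></pre>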
Pavol Krajkovič
<p>I am using the following chart to deploy a Cassandra cluster to my gke cluster. <a href="https://github.com/k8ssandra/k8ssandra/tree/main/charts/k8ssandra" rel="nofollow noreferrer">https://github.com/k8ssandra/k8ssandra/tree/main/charts/k8ssandra</a></p> <p>However, the statefulset stuck in state 1/2 (the cassandra container status is always unhealthy)</p> <p>Here's my values.yaml</p> <pre><code>cassandra: auth: superuser: secret: cassandra-admin-secret clusterName: cassandra-cluster version: &quot;4.0.0&quot; cassandraLibDirVolume: storageClass: standard size: 5Gi allowMultipleNodesPerWorker: true resources: requests: cpu: 500m memory: 2Gi limits: cpu: 500m memory: 2Gi datacenters: - name: dc1 size: 1 racks: - name: default stargate: enabled: true replicas: 1 heapMB: 256 cpuReqMillicores: 200 cpuLimMillicores: 500 kube-prometheus-stack: enabled: False </code></pre> <pre><code>NAME READY STATUS RESTARTS AGE cassandra-cluster-dc1-default-sts-0 1/2 Running 0 77m </code></pre> <p>And then I describe the pod</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 2m11s (x478 over 81m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500 </code></pre> <p>Finally, I print the log of the <strong>cassandra</strong> container.</p> <pre><code>INFO [nioEventLoopGroup-2-2] 2022-04-20 11:09:35,711 Cli.java:617 - address=/10.12.11.58:51000 url=/api/v0/metadata/endpoints status=500 Internal Server Error INFO [nioEventLoopGroup-3-14] 2022-04-20 11:09:37,718 UnixSocketCQLAccess.java:88 - Cannot create Driver CQLSession as the driver socket has not been created. This should resolve once Cassandra has started and created the socket at /tmp/cassandra.sock INFO [nioEventLoopGroup-2-1] 2022-04-20 11:09:37,720 Cli.java:617 - address=/10.12.11.58:51132 url=/api/v0/metadata/endpoints status=500 Internal Server Error INFO [nioEventLoopGroup-3-15] 2022-04-20 11:09:37,750 UnixSocketCQLAccess.java:88 - Cannot create Driver CQLSession as the driver socket has not been created. This should resolve once Cassandra has started and created the socket at /tmp/cassandra.sock INFO [nioEventLoopGroup-2-2] 2022-04-20 11:09:37,750 Cli.java:617 - address=/10.12.11.1:48478 url=/api/v0/probes/readiness status=500 Internal Server Error INFO [nioEventLoopGroup-3-16] 2022-04-20 11:09:39,741 UnixSocketCQLAccess.java:88 - Cannot create Driver CQLSession as the driver socket has not been created. This should resolve once Cassandra has started and created the socket at /tmp/cassandra.sock </code></pre> <p>and the logs of <strong>server-system-logger</strong></p> <pre><code>tail: cannot open '/var/log/cassandra/system.log' for reading: No such file or directory </code></pre> <p>How can I solve this problem? Thanks.</p>
user13118342
<p>The message in the <code>cassandra</code> container says it should resolve itself once Cassandra is up and running, which is correct.</p> <p>Similarly, no logs are available in the <code>server-system-logger</code> container until Cassandra has started and, more precisely, not until the logging framework has initialized.</p>
John Sanda
<p>I installed Kong (Kong proxy + Kong ingress controller) on a Kubernetes/KubeSphere cluster with an Istio mesh inside, and I added the needed annotations and ingress types, so I am able to access only the Kong proxy at the node's exposed IP and port, but I am unable to add rules, access the Admin GUI, or do any kind of configuration. For every request I make to my Kong endpoint, like</p> <pre><code>curl -i -X GET http://10.233.124.79:8000/rules
</code></pre> <p>or any other kind of request to the proxy, I get the same response:</p> <blockquote> <pre><code>Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 0
Server: kong/2.2.0
{&quot;message&quot;:&quot;no Route matched with those values&quot;}
</code></pre> </blockquote> <p>I am not able to invoke the Admin API either; its pod container is only listening on 127.0.0.1. My environment variables for the kong-proxy pod are:</p> <pre><code>KONG_PROXY_LISTEN              0.0.0.0:8000, 0.0.0.0:8443 ssl http2
KONG_PORT_MAPS                 80:8000, 443:8443
KONG_ADMIN_LISTEN              127.0.0.1:8444 ssl
KONG_STATUS_LISTEN             0.0.0.0:8100
KONG_DATABASE                  off
KONG_NGINX_WORKER_PROCESSES    2
KONG_ADMIN_ACCESS_LOG          /dev/stdout
KONG_ADMIN_ERROR_LOG           /dev/stderr
KONG_PROXY_ERROR_LOG           /dev/stderr
</code></pre> <p>And the environment variables for the ingress-controller: CONTROLLER_KONG_ADMIN_URL <a href="https://127.0.0.1:8444" rel="nofollow noreferrer">https://127.0.0.1:8444</a>, CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY true, CONTROLLER_PUBLISH_SERVICE kong/kong-proxy.</p> <p>So how can I expose the Admin GUI over the mesh via the NodePort, and how can I invoke the Admin API to add rules, etc.?</p>
Concepts make sense
<p>Yes, first you should add rules.</p> <p>You can directly add routes (Ingresses) in KubeSphere. See the <a href="https://kubesphere.com.cn/forum/d/3850-kubespherekong" rel="nofollow noreferrer">documentation</a> for more info.</p>
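<p>If you prefer plain manifests over the KubeSphere UI, the Kong ingress controller picks up standard Ingress resources. A minimal sketch (the host, service name and port below are placeholders for illustration) could look like this; once a matching rule exists, the proxy stops answering with &quot;no Route matched with those values&quot; for that host/path:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: kong    # or spec.ingressClassName: kong on newer clusters
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
</code></pre>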
Zack
<p>This is an easy to run version of the code I wrote to do port-forwarding via client-go. There are hardcoded pod name, namespace, and port. You can change them with the one you have running.</p> <pre class="lang-golang prettyprint-override"><code>package main import ( "flag" "net/http" "os" "path/filepath" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" "k8s.io/client-go/tools/portforward" "k8s.io/client-go/transport/spdy" ) func main() { stopCh := make(&lt;-chan struct{}) readyCh := make(chan struct{}) var kubeconfig *string if home := "/home/gianarb"; home != "" { kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file") } else { kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file") } flag.Parse() // use the current context in kubeconfig config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig) if err != nil { panic(err.Error()) } // create the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } reqURL := clientset.RESTClient().Post(). Resource("pods"). Namespace("default"). Name("test"). SubResource("portforward").URL() transport, upgrader, err := spdy.RoundTripperFor(config) if err != nil { panic(err) } dialer := spdy.NewDialer(upgrader, &amp;http.Client{Transport: transport}, http.MethodPost, reqURL) fw, err := portforward.New(dialer, []string{"9999:9999"}, stopCh, readyCh, os.Stdout, os.Stdout) if err != nil { panic(err) } if err := fw.ForwardPorts(); err != nil { panic(err) } } </code></pre> <p>Version golang 1.13:</p> <pre><code> k8s.io/api v0.0.0-20190409021203-6e4e0e4f393b k8s.io/apimachinery v0.0.0-20190404173353-6a84e37a896d k8s.io/cli-runtime v0.0.0-20190409023024-d644b00f3b79 k8s.io/client-go v11.0.0+incompatible </code></pre> <p>The error I get is </p> <blockquote> <p>error upgrading connection: </p> </blockquote> <p>but there is nothing after the <code>:</code>. Do you have any experience with this topic? Thanks</p>
GianArb
<p>The generic client returned by <code>clientset.RESTClient()</code> is not bound to an API group/version, so the URL it builds is missing the <code>/api/v1</code> prefix (using <code>clientset.CoreV1().RESTClient()</code> instead would include it). The <code>*rest.Request</code> has a <code>Prefix(string)</code> method you can use to insert the missing subpath:</p> <pre class="lang-golang prettyprint-override"><code>reqURL := clientset.RESTClient().Post().
	Prefix(&quot;api/v1&quot;).
	Resource(&quot;pods&quot;).
	Namespace(&quot;default&quot;).
	Name(&quot;test&quot;).
	SubResource(&quot;portforward&quot;).URL()
</code></pre>
tomaspinho
<p>I am trying to deploy a mariadb deployment , I have the root password from GCP Secret Manager and stored in a volume mount. I need a way to give the env var the value from that file , please check line 38 .</p> <pre><code> 1 apiVersion: apps/v1 2 kind: Deployment 3 metadata: 4 name: mariadb-deployment 5 namespace: readonly-ns 6 spec: 7 replicas: 8 selector: 9 matchLabels: 10 app: mariadb 11 template: 12 metadata: 13 labels: 14 app: mariadb 15 spec: 16 volumes: 17 - name: cert-volume 18 emptyDir: {} 19 serviceAccountName: readonly-sa 20 initContainers: 21 - name: init 22 image: google/cloud-sdk:slim 23 command: [&quot;/bin/sh&quot;] 24 args: 25 - -c 26 - &gt;- 27 gcloud secrets versions access &quot;latest&quot; --secret=bq-readonly-key &gt; /etc/gsm/key.pem 28 volumeMounts: 29 - name: cert-volume 30 mountPath: /etc/gsm/ 31 containers: 32 - name: mariadb 33 image: mariadb 34 ports: 35 - containerPort: 3306 36 env: 37 - name: MARIADB_ROOT_PASSWORD 38 value: &quot;/etc/gsm/key.pem&quot; # I need a way to give this env var a value from that file path 39 volumeMounts: 40 - name: cert-volume 41 mountPath: /etc/gsm/ </code></pre> <p>I could not find it online, there is Secret and configMap , but those are not an option for me .</p>
Krikor Garabed Kafalian
<p>There is a way to create a secret or configmap using a Job with access to create, and update resources on Kubernetes. I think you can adapt it to a init container for example.</p> <p>ServiceAccount, Role and RoleBiding:</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: secret-creator --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: secret-creator rules: - apiGroups: [&quot;&quot;] resources: [&quot;secrets&quot;] verbs: [&quot;create&quot;, &quot;update&quot;, &quot;get&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: secret-creator subjects: - kind: User name: system:serviceaccount:default:secret-creator apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: secret-creator apiGroup: rbac.authorization.k8s.io </code></pre> <p>Job:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: secret-creator spec: template: metadata: spec: volumes: - name: cert-volume persistentVolumeClaim: claimName: my-existent-pvc serviceAccountName: secret-creator serviceAccount: secret-creator containers: - image: bitnami/kubectl name: secret-creator command: - /bin/bash - -c args: - kubectl create secret generic app-x-secret --from-file=/etc/sec/key.pem resources: {} volumeMounts: - name: cert-volume mountPath: /etc/sec/key.pem subPath: key.pem restartPolicy: Never </code></pre> <p>Deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: app name: app spec: replicas: 1 selector: matchLabels: app: app template: metadata: labels: app: app spec: containers: - image: bitnami/bitnami-shell name: app command: - /bin/bash - -c args: - sleep 360 env: - name: APP_PASSWORD valueFrom: secretKeyRef: name: app-x-secret key: key.pem </code></pre> <p>Github: <a href="https://github.com/marcosrosse/k8s-secret-from-volume" rel="nofollow noreferrer">https://github.com/marcosrosse/k8s-secret-from-volume</a></p>
Marcos Rosse
<p>I'm trying to run a simple flexvolume plugin driver on a Windows node to enable connectivity with an external SMB share. I followed the steps listed here: <a href="https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows" rel="nofollow noreferrer">https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows</a></p> <p>I placed the driver plugin in the mentioned path, but the problem is that the plugin is not getting picked up by GKE. The error details are below.</p> <pre><code> Warning FailedMount 8s (x2 over 21s) kubelet, gke-windows-node-pool-e4e7a7bf-f2pc Unable to attach or mount volumes: unmounted volumes=[smb-volume], unattached volumes=[default-token-jf28b smb-volume]: failed to get Plugin from volumeSpec for volume "smb-volume" err=no volume plugin matched </code></pre> <p>Not sure what I'm missing here. Any help would be great. Thanks in advance.</p>
Init_Rebel
<p>I just faced a similar issue on an on-prem kubeadm configuration and used <a href="https://learn.microsoft.com/en-us/sysinternals/downloads/procmon" rel="nofollow noreferrer">Process Monitor</a> to find the location where the kubelet.exe process actually looks for volume plugins.</p> <p>As a result, this is my actual Windows node SMB preparation:</p> <pre><code>curl -L https://github.com/microsoft/K8s-Storage-Plugins/releases/download/V0.0.3/flexvolume-windows.zip -o flexvolume-windows.zip
Expand-Archive flexvolume-windows.zip C:\var\lib\kubelet\usr\libexec\kubernetes\kubelet-plugins\volume\exec\
</code></pre>
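<p>Once the plugin binaries sit in the directory kubelet really scans, a pod can consume the SMB share through a <code>flexVolume</code> entry roughly like the sketch below. This is only an illustration: the driver name, the option keys and the secret layout are recalled from the K8s-Storage-Plugins examples and may differ, so verify them against the repository's README before using it.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: smb-test
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    volumeMounts:
    - name: smb
      mountPath: C:\smb
  volumes:
  - name: smb
    flexVolume:
      driver: microsoft.com/smb.cmd    # assumed driver name, check the plugin README
      secretRef:
        name: smb-creds                # secret holding the share username/password
      options:
        source: \\smb-server\share     # UNC path of the SMB share
</code></pre>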
sz.krisz
<p>My eks.tf file</p> <pre><code>data &quot;aws_availability_zones&quot; &quot;azs&quot; {} module &quot;myapp-vpc&quot; { source = &quot;terraform-aws-modules/vpc/aws&quot; version = &quot;3.7.0&quot; name = &quot;myapp-vpc&quot; cidr = var.vpc_cidr_block private_subnets = var.private_subnets_cidr_blocks public_subnets = var.public_subnets_cidr_blocks azs = data.aws_availability_zones.azs.names enable_nat_gateway = true single_nat_gateway = true enable_dns_hostnames = true tags = { &quot;kubernetes.io/cluster/myapp-cluster&quot; = &quot;shared&quot; } private_subnet_tags = { &quot;kubernetes.io/cluster/myapp-cluster&quot; = &quot;shared&quot; &quot;kubernetes.io/role/internal-elb&quot; = 1 } public_subnet_tags = { &quot;kubernetes.io/cluster/myapp-cluster&quot; = &quot;shared&quot; &quot;kubernetes.io/role/elb&quot; = 1 } </code></pre> <p>I got this error</p> <pre><code>│ Error: error creating EKS Cluster (myapp-cluster): InvalidParameterException: unsupported Kubernetes version │ { │ RespMetadata: { │ StatusCode: 400, │ RequestID: &quot;073bff37-1d18-4d11-82c9-226b92791a70&quot; │ }, │ ClusterName: &quot;myapp-cluster&quot;, │ Message_: &quot;unsupported Kubernetes version&quot; │ } │ │ with module.eks.aws_eks_cluster.this[0], │ on .terraform/modules/eks/main.tf line 11, in resource &quot;aws_eks_cluster&quot; &quot;this&quot;: │ 11: resource &quot;aws_eks_cluster&quot; &quot;this&quot; { </code></pre> <p>I went for terraform init and plan. What should I check in my terraform.tfstate file?</p>
Richard Rublev
<p>Have you tried changing the Kubernetes version of the cluster? The &quot;unsupported Kubernetes version&quot; error means that the <code>cluster_version</code> you requested is not supported by EKS (or by the module/provider version you are using). I had the same issue, and it worked after I changed the cluster version to a supported one.</p>
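<p>For reference, with the <code>terraform-aws-modules/eks</code> module the Kubernetes version is pinned via <code>cluster_version</code>. The snippet below is only a hedged sketch: the module version, the Kubernetes version and the argument names are examples (argument names changed between major releases of the module), so check the documentation of the release you actually use and the versions EKS currently supports:</p> <pre><code>module &quot;eks&quot; {
  source          = &quot;terraform-aws-modules/eks/aws&quot;
  version         = &quot;17.24.0&quot;            # example module version
  cluster_name    = &quot;myapp-cluster&quot;
  cluster_version = &quot;1.21&quot;               # must be a version EKS still supports
  vpc_id          = module.myapp-vpc.vpc_id
  subnets         = module.myapp-vpc.private_subnets
}
</code></pre>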
Matho Avito
<p>This question concerns Kubernetes v1.24 and up.</p> <p>I can create tokens for service accounts with</p> <pre><code>kubectl create token myserviceaccount
</code></pre> <p>The created token works and serves its purpose, but what I find confusing is that when I run <code>kubectl get sa</code>, the SECRETS field of myserviceaccount is still 0. The token doesn't appear in <code>kubectl get secrets</code> either.</p> <p>I've also seen that I can pass <code>--bound-object-kind</code> and <code>--bound-object-name</code> to <code>kubectl create token</code>, but this doesn't seem to do anything (visible) either...</p> <p>Is there a way to see the created token? And what is the purpose of the --bound... flags?</p>
golder3
<p>Thanks to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="nofollow noreferrer">docs link</a> I've stumbled upon today (I don't know how I missed it when asking the question, because I've spent quite some time browsing through the docs...) I found the information I was looking for. I feel like providing this answer because I find v1d3rm3's answer incomplete and not fully accurate.</p> <p>The kubernetes docs confirm v1d3rm3's claim (which is btw the key to answering my question):</p> <blockquote> <p>The created token is a signed JSON Web Token (JWT).</p> </blockquote> <p>Since the token is a JWT, the server can verify that it has signed it, hence there is no need to store it. The JWT's expiry time is set not because the token is not associated with an object (it actually is, as we'll see below) but because the server has no way of invalidating a token (it would actually need to keep track of invalidated tokens, because tokens aren't stored anywhere and any token with a good signature is valid). To reduce the damage if a token gets stolen, there is an expiry time.</p> <p>The signed JWT contains all the necessary information inside of it.</p> <p>The decoded token (created with <code>kubectl create token test-sa</code> where test-sa is the service account name) looks like this:</p> <pre><code>{
  &quot;aud&quot;: [
    &quot;https://kubernetes.default.svc.cluster.local&quot;
  ],
  &quot;exp&quot;: 1666712616,
  &quot;iat&quot;: 1666709016,
  &quot;iss&quot;: &quot;https://kubernetes.default.svc.cluster.local&quot;,
  &quot;kubernetes.io&quot;: {
    &quot;namespace&quot;: &quot;default&quot;,
    &quot;serviceaccount&quot;: {
      &quot;name&quot;: &quot;test-sa&quot;,
      &quot;uid&quot;: &quot;dccf5808-b29b-49da-84bd-9b57f4efdc0b&quot;
    }
  },
  &quot;nbf&quot;: 1666709016,
  &quot;sub&quot;: &quot;system:serviceaccount:default:test-sa&quot;
}
</code></pre> <p>Contrary to v1d3rm3's answer, <strong>this token IS associated with a service account automatically</strong>, as the kubernetes docs link confirms and as we can also see from the token content above.</p> <p>Suppose I have a secret I want to bind my token to (for example <code>kubectl create token test-sa --bound-object-kind Secret --bound-object-name my-secret</code>, where test-sa is the service account name and my-secret is the secret I'm binding the token to); the decoded token will then look like this:</p> <pre><code>{
  &quot;aud&quot;: [
    &quot;https://kubernetes.default.svc.cluster.local&quot;
  ],
  &quot;exp&quot;: 1666712848,
  &quot;iat&quot;: 1666709248,
  &quot;iss&quot;: &quot;https://kubernetes.default.svc.cluster.local&quot;,
  &quot;kubernetes.io&quot;: {
    &quot;namespace&quot;: &quot;default&quot;,
    &quot;secret&quot;: {
      &quot;name&quot;: &quot;my-secret&quot;,
      &quot;uid&quot;: &quot;2a44872f-1c1c-4f18-8214-884db5f351f2&quot;
    },
    &quot;serviceaccount&quot;: {
      &quot;name&quot;: &quot;test-sa&quot;,
      &quot;uid&quot;: &quot;dccf5808-b29b-49da-84bd-9b57f4efdc0b&quot;
    }
  },
  &quot;nbf&quot;: 1666709248,
  &quot;sub&quot;: &quot;system:serviceaccount:default:test-sa&quot;
}
</code></pre> <p>Notice that the binding happens inside the token, under the <strong>kubernetes.io</strong> key, and if you describe my-secret you will still not see the token. So the --bound-... flags weren't visibly (from the secret object) doing anything because the binding happens inside the token itself...</p> <p>Instead of decoding JWT tokens, we can also see the details in the TokenRequest object with</p> <pre><code>kubectl create token test-sa -o yaml
</code></pre>
golder3
<p>What is the difference between a Master Node and the Control Plane?</p> <p>Are they the same, or is there any difference?</p>
alex
<p>Have a look at <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#urgent-upgrade-notes" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#urgent-upgrade-notes</a> . This should answer your Question:</p> <ul> <li><strong>The label applied to control-plane nodes &quot;node-role.kubernetes.io/master&quot; is now deprecated and will be removed in a future release after a GA deprecation period.</strong></li> <li><strong>Introduce a new label &quot;node-role.kubernetes.io/control-plane&quot; that will be applied in parallel to &quot;node-role.kubernetes.io/master&quot; until the removal of the &quot;node-role.kubernetes.io/master&quot; label.</strong></li> </ul> <p>I think also important is this:</p> <ul> <li>Make &quot;kubeadm upgrade apply&quot; add the &quot;node-role.kubernetes.io/control-plane&quot; label on existing nodes that only have the &quot;node-role.kubernetes.io/master&quot; label during upgrade.</li> <li>Please adapt your tooling built on top of kubeadm to use the &quot;node-role.kubernetes.io/control-plane&quot; label.</li> <li>The taint applied to control-plane nodes &quot;node-role.kubernetes.io/master:NoSchedule&quot; is now deprecated and will be removed in a future release after a GA deprecation period.</li> <li>Apply toleration for a new, future taint &quot;node-role.kubernetes.io/control-plane:NoSchedule&quot; to the kubeadm CoreDNS / kube-dns managed manifests. Note that this taint is not yet applied to kubeadm control-plane nodes.</li> <li>Please adapt your workloads to tolerate the same future taint preemptively.</li> </ul>
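<p>If you want to check which of the two labels (and the corresponding taint) your control-plane nodes currently carry, you can inspect them with:</p> <pre class="lang-bash prettyprint-override"><code>kubectl get nodes --show-labels | grep node-role.kubernetes.io
kubectl describe nodes | grep Taints
</code></pre>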
weranders
<p>I saw in an article that I can access pods through kube-proxy, so what is the role of a Kubernetes Service here? And what is the difference between kube-proxy and a Service? Finally, is kube-proxy part of a Service?</p>
abdogh
<p>As far as I understand:</p> <p>A Service is a Kubernetes object that has a stable name and a stable IP and sits in front of a set of pods. All requests meant for the pods should go to the Service.</p> <p>Kube-proxy is a networking component running on every cluster node (it basically runs as a DaemonSet). It implements the low-level rules that allow communication to pods from inside as well as outside the Kubernetes cluster. We can say that kube-proxy is part of the Service implementation.</p> <p>So when a user tries to reach an application deployed on Kubernetes, the request first reaches the Service, which then forwards it to one of the underlying pods. This forwarding is done by the rules that kube-proxy created.</p> <p>For more understanding refer to this video: <a href="https://www.youtube.com/watch?v=kV4biO6it3o" rel="nofollow noreferrer">Kube proxy</a> and this blog: <a href="https://betterprogramming.pub/k8s-a-closer-look-at-kube-proxy-372c4e8b090" rel="nofollow noreferrer">Closer look at Kube proxy</a></p>
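<p>To make this concrete, here is a minimal Service manifest as a sketch; the name, the selector and the ports are made up for illustration. Once it is applied, kube-proxy programs the node-level rules (iptables/IPVS) that forward traffic hitting the Service's ClusterIP to one of the pods matching the selector:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-app            # stable name, resolvable as my-app.default.svc.cluster.local
spec:
  selector:
    app: my-app           # traffic is load-balanced across pods carrying this label
  ports:
  - port: 80              # stable Service port
    targetPort: 8080      # container port on the pods
</code></pre>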
sidharth vijayakumar
<p>I have these volume mounts right now defined in my deployment,</p> <pre><code>volumeMounts: - name: website-storage mountPath: /app/upload readOnly: false subPath: foo/upload - name: website-storage mountPath: /app/html readOnly: true subPath: foo/html/html </code></pre> <p>Now I want to mount another path from my PVC into <code>/app/html/website-content</code> and this is what I attempted with,</p> <pre><code>volumeMounts: - name: website-storage mountPath: /app/upload readOnly: false subPath: foo/upload - name: website-storage mountPath: /app/html readOnly: true subPath: foo/html/html - name: website-storage mountPath: /app/html/website-content readOnly: true subPath: foo/website-content </code></pre> <p>This does not work and gives an error during mounting. Is it possible to do this? Do I have to explicitly create the <code>website-content</code> folder prior to mounting it? Thanx in advance!</p>
Gayan Jayasingha
<p>The cause of the issue is that during pod initialization there is an attempt to create directory <code>website-content</code> in the <code>/app/html</code> location which will cause error as the <code>/app/html</code> is mounted read only.</p> <p>You cannot create a folder in the read only system that means you can't mount a volume as the folder doesn't exist, but if there was already the folder created, you can mount the volume.</p> <p>So all you need is just to create a directory <code>website-content</code> in the <code>foo/html/html</code> location on the volume before you attach it into container. Then, as it will be mounted to the <code>/app/html</code> location, there will be directory <code>/app/html/website-content</code>.</p> <p>For example, you can use a <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> for that. Add this code to your deployment file:</p> <pre><code>initContainers: - name: init-container image: busybox volumeMounts: - name: website-storage mountPath: /my-storage readOnly: false command: ['sh', '-c', 'mkdir -p /my-storage/foo/html/html/website-content'] </code></pre> <p>When the pod is running, you check mount points on the pod using <code>kubectl describe pod {pod-name}</code>:</p> <blockquote> <pre><code>Mounts: /app/html from website-storage (ro,path=&quot;foo/html/html&quot;) /app/html/website-content from website-storage (ro,path=&quot;foo/website-content&quot;) /app/upload from website-storage (rw,path=&quot;foo/upload&quot;) </code></pre> </blockquote>
Mikolaj S.
<p>We have a bare-metal K8s cluster with an NGINX Ingress Controller.</p> <p>Is there a way to tell how much traffic is transmitted/received by each Ingress?</p> <p>Thanks!</p>
Quang Linh Le
<p>Ingress Controllers are implemented as standard Kubernetes applications. Any monitoring method adopted by organizations can be applied to Ingress controllers to track the health and lifetime of k8s workloads. To track network traffic statistics, controller-specific mechanisms should be used.</p> <p>To <strong>observe Kubernetes Ingress traffic</strong> you can send your statistics to <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> and view them in <a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a> (widely adopted open source software for data visualization).</p> <p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/monitoring.md" rel="nofollow noreferrer">Here</a> is a monitoring guide from the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a> project, where you can read how to do it step by step. Start with installing those tools.</p> <p>To deploy Prometheus in Kubernetes, run the command below:</p> <pre><code>kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
</code></pre> <p>To install Grafana, run this one:</p> <pre><code>kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/
</code></pre> <p>Follow the next steps in the previously mentioned <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/monitoring.md" rel="nofollow noreferrer">monitoring guide</a>.</p> <p>See also <a href="https://traefik.io/blog/observing-kubernetes-ingress-traffic-using-metrics/" rel="nofollow noreferrer">this article</a> and <a href="https://stackoverflow.com/questions/57755180/standard-way-to-monitor-ingress-traffic-in-k8-or-eks">this similar question</a>.</p>
kkopczak
<p>I want to overwrite a file in the pod's container. Right now I have <code>elasticsearch.yml</code> at the location <code>/usr/share/elasticsearch/config</code>.</p> <p>I was trying to achieve that with an <code>initContainer</code> in the Kubernetes deployment file, so I added something like:</p> <pre class="lang-yaml prettyprint-override"><code>  - name: disabled-the-xpack-security
    image: busybox
    command:
    - /bin/sh
    - -c
    - |
      sleep 20
      rm /usr/share/elasticsearch/config/elasticsearch.yml
      cp /home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml /usr/share/elasticsearch/config/
    securityContext:
      privileged: true
</code></pre> <p>But this doesn't work; the error looks like:</p> <pre><code>rm: can't remove '/usr/share/elasticsearch/config/elasticsearch.yml': No such file or directory
cp: can't stat '/home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml': No such file or directory
</code></pre> <p>I was also trying something like <code>echo &quot;some yaml config&quot; &gt;&gt; elasticsearch.yml</code>, but this kind of workaround doesn't work because I wasn't able to keep proper YAML formatting.</p> <p>Do you have any suggestions on how I can do this?</p>
Ice
<p>Note that if you don't want to override everything in the mounted directory, you can mount only the single file into whatever directory you want by using &quot;subPath&quot;.</p> <p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath</a></p>
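<p>As a sketch of how that could look for this case (the ConfigMap name, its contents and the container name are placeholders; only the subPath mechanism itself is the point): put your customized <code>elasticsearch.yml</code> into a ConfigMap and mount just that one key over the existing file, leaving the rest of <code>/usr/share/elasticsearch/config</code> from the image untouched.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: es-config
data:
  elasticsearch.yml: |
    cluster.name: my-cluster
    xpack.security.enabled: false
---
# relevant part of the Elasticsearch pod template
spec:
  containers:
  - name: elasticsearch
    volumeMounts:
    - name: es-config
      mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
      subPath: elasticsearch.yml     # only this single file is replaced
  volumes:
  - name: es-config
    configMap:
      name: es-config
</code></pre>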
ThaSami