<p>I have a question about OpenShift 4.2 operator dependencies. I have 2 CRD yaml files - one for my own operator and another one for Elasticsearch. When I try to install my own operator and declare a dependency on the Elasticsearch operator (no CRDs have been created in the cluster for either of these earlier), can OpenShift automatically resolve the dependencies and install the dependent CRDs from the yaml files first, before installing the actual operator? In this scenario, if I declare a dependency on Elasticsearch, can OpenShift automatically install the Elasticsearch operator before installing my operator (assuming the Elasticsearch CRD resource didn't exist in the cluster)? Or must the dependent CRD already exist in the cluster for the dependency to be resolved? Can I install both CRDs together from scratch on a brand new cluster?</p>
user1722908
<p>In your case, you should declare a &quot;required&quot; item in the &quot;customresourcedefinitions&quot; section of your CSV (ClusterServiceVersion). OpenShift/OLM will then resolve the dependency and install the dependent CRDs and Operator before your own Operator is installed.</p>
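<p>For illustration, here is a rough sketch of what that CSV fragment might look like. The CRD <code>name</code>, <code>version</code> and <code>kind</code> entries under <code>required</code> are placeholders and must match what the Elasticsearch operator you depend on actually ships, so treat this as a template rather than a copy-paste answer:</p> <pre><code>apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v0.1.0
spec:
  # ... other CSV fields ...
  customresourcedefinitions:
    owned:
      - name: myapps.example.com      # CRD owned by your operator (placeholder)
        version: v1alpha1
        kind: MyApp
    required:
      - name: elasticsearches.logging.openshift.io   # verify against the CRD shipped by the Elasticsearch operator
        version: v1
        kind: Elasticsearch
        displayName: Elasticsearch
        description: Elasticsearch resource required by this operator
</code></pre> <p>With the dependency expressed this way, OLM resolves it from the catalog at install time rather than requiring the CRD to already exist in the cluster.</p>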
vincent pli
<p>Can anyone tell me how to list all namespaces in k8s using Go? I have been referencing this link but couldn't find anything that can list all namespaces.</p> <p>Link: <a href="https://pkg.go.dev/github.com/gruntwork-io/terratest/modules/k8s" rel="nofollow noreferrer">https://pkg.go.dev/github.com/gruntwork-io/terratest/modules/k8s</a></p> <p>I don't see any <code>ListNamespaces</code> functions for the k8s package in Go.</p>
TechGirl
<p>Try <a href="https://github.com/kubernetes/client-go/tree/master/examples" rel="noreferrer">kubernetes/client-go</a>; you can do something like <code>clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})</code> (note that <code>Namespaces()</code> takes no argument, since namespaces are cluster-scoped). Your <code>clientset</code> may be instantiated from inside the cluster (in-cluster config) or from outside it (kubeconfig).</p>
gohm'c
<p>I am trying to execute some scripts as part of statefulset deployment kind. This script I have added as configmap and I use this as volumeMount inside the pod definition. I use the lifecycle poststart exec command to execute this script. It fails with the permission issue.</p> <p>based on certain articles, I found that we should copy this file as part of InitContainer and then use that (I am not sure why should we do and what will make a difference) Still, I tried it and that also gives the same error.</p> <p>Here is my ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: postgres-configmap-initscripts data: poststart.sh: | #!/bin/bash echo &quot;It`s done&quot; </code></pre> <p>Here is my StatefulSet:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: postgres-statefulset spec: .... serviceName: postgres-service replicas: 1 template: ... spec: initContainers: - name: &quot;postgres-ghost&quot; image: alpine volumeMounts: - mountPath: /scripts name: postgres-scripts containers: - name: postgres image: postgres lifecycle: postStart: exec: command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;/scripts/poststart.sh&quot; ] ports: - containerPort: 5432 name: dbport .... volumeMounts: - mountPath: /scripts name: postgres-scripts volumes: - name: postgres-scripts configMap: name: postgres-configmap-initscripts items: - key: poststart.sh path: poststart.sh </code></pre> <p>The error I am getting:</p> <p><a href="https://i.stack.imgur.com/cuHDY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cuHDY.png" alt="enter image description here" /></a></p>
Brijesh Shah
<p>The <code>postStart</code> hook will be called at least once but may be called <strong>more than once</strong>, so it is not a good place to run a script.</p> <p>The <code>poststart.sh</code> file that is mounted from the ConfigMap will not have the execute bit set, hence the permission error.</p> <p>It is better to run the script in <code>initContainers</code>. Here's a quick example that does a simple <code>chmod</code>; in your case you can execute the script instead:</p> <pre><code>cat &lt;&lt; EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: busybox
data:
  test.sh: |
    #!/bin/bash
    echo &quot;It's done&quot;
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  volumes:
  - name: scripts
    configMap:
      name: busybox
      items:
      - key: test.sh
        path: test.sh
  - name: runnable
    emptyDir: {}
  initContainers:
  - name: prepare
    image: busybox
    imagePullPolicy: IfNotPresent
    command: [&quot;ash&quot;,&quot;-c&quot;]
    args: [&quot;cp /scripts/test.sh /runnable/test.sh &amp;&amp; chmod +x /runnable/test.sh&quot;]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
    - name: runnable
      mountPath: /runnable
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: [&quot;ash&quot;,&quot;-c&quot;]
    args: [&quot;while :; do . /runnable/test.sh; sleep 1; done&quot;]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
    - name: runnable
      mountPath: /runnable
EOF
</code></pre>
gohm'c
<p>I am having issues in my current Kubernetes minikube set up getting pods to connect to ClusterIP services. My current setup environment looks like this:</p> <pre><code>OS: Rocky Linux 8 Guest Hosted with VMware on a Windows 10 Machine VMware has 'Virtualize Intel VT-x/EPT or AMD-V/RVI' enabled Minikube (v1.24.0) is running with docker (Docker version 20.10.11, build dea9396) as its driver </code></pre> <p>To isolate the problem I have started using this simple <a href="https://github.com/rogertinsley/golang-k8s-helloworld" rel="nofollow noreferrer">golang hello world image</a>. Simply put, if you <code>wget url:8080</code> you will download an index.html.</p> <p>After building the image locally I create a pod with:</p> <p><code>kubectl run hello --image=hello --port=8080 --labels='app=hello'</code></p> <p>The pod spins up fine, and I can exec into it. Inside the pod, if I run:</p> <p><code>wget localhost:8080</code> or <code>wget 172.17.0.3:8080</code></p> <p>I get the expected output of:</p> <pre><code>converted 'http://172.17.0.3:8080' (ANSI_X3.4-1968) -&gt; 'http://172.17.0.3:8080' (UTF-8) --2022-01-09 20:15:44-- http://172.17.0.3:8080/ Connecting to 172.17.0.3:8080... connected. HTTP request sent, awaiting response... 200 OK Length: 13 [text/plain] Saving to: 'index.html' index.html 100%[==============================================================================================&gt;] 13 --.-KB/s in 0s 2022-01-09 20:15:44 (3.11 MB/s) - 'index.html' saved [13/13] </code></pre> <p>Now, if I expose the pod with: <code>kubectl expose pod hello --name=hello-service --port=8080 --target-port=8080</code> the service is started as <code>hello-service</code> and describing it outputs the following:</p> <pre><code>Name: hello-service Namespace: default Labels: app=hello Annotations: &lt;none&gt; Selector: app=hello Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.101.73.45 IPs: 10.101.73.45 Port: &lt;unset&gt; 8080/TCP TargetPort: 8080/TCP Endpoints: 172.17.0.3:8080 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>The ports are set and the Endpoint exists, so from everything I've read this should work. So I exec back into the pod and try to wget the service and I get:</p> <pre><code>root@hello:/go/src/app# wget hello-service:8080 converted 'http://hello-service:8080' (ANSI_X3.4-1968) -&gt; 'http://hello-service:8080' (UTF-8) --2022-01-09 20:36:06-- http://hello-service:8080/ Resolving hello-service (hello-service)... 10.101.73.45 Connecting to hello-service (hello-service)|10.101.73.45|:8080... failed: Connection timed out. </code></pre> <p>The same happens when I try <code>wget 10.101.73.45:8080</code>, which of course makes sense because hello-service resolved to the correct IP in the previous wget.</p> <p>Now, I'm no expert at Kubernetes, obviously, but this next part is weird to me. If I instead expose the pod with a nodePort, everything works as you would expect. Using the following definition file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: hello-service spec: selector: app: hello ports: - protocol: TCP port: 8080 targetPort: 8080 nodePort: 31111 type: NodePort </code></pre> <p>I can hit the pod from the nodePort. A simple <code>wget 192.168.49.2:31111</code> and I get the expected output:</p> <pre><code>--2022-01-09 15:00:48-- http://192.168.49.2:31111/ Connecting to 192.168.49.2:31111... connected. HTTP request sent, awaiting response... 
200 OK Length: 13 [text/plain] Saving to: ‘index.html’ index.html 100%[============================================================================================&gt;] 13 --.-KB/s in 0s 2022-01-09 15:00:48 (3.05 MB/s) - ‘index.html’ saved [13/13] </code></pre> <p>Anyway, I'm at my amateur wits' end here. It's been a few days of trying to find similar issues that were not just &quot;oh you did not label your container correctly&quot; or &quot;there is a typo in your port listings&quot;, with little luck. I think this situation is unique enough to warrant its own post.</p>
A. Diaz
<p>I've found minikube-based services do not allow a pod to get redirected back to itself, causing the timeout. By increasing your replicas to 2, the service-url curl no longer times out as the sibling pod can then be reached.</p>
Jim Lindeman
<p>I am a junior developer currently running a service in a Kubernetes environment.</p> <p>How can I check if a resource inside Kubernetes has been deleted for some reason?</p> <p>As a simple example, if a deployment is deleted, I want to know which user deleted it.</p> <p>Could you please tell me which log to look at.</p> <p>And I would like to know how to collect these logs.</p> <p>I don't have much experience yet, so I'm asking for help. Also, if you have a reference or link, please share it. It will be very helpful to me.</p> <p>Thank you:)</p>
j_shoon24
<p>Start with <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">enabling auditing</a>; there are lots of <a href="https://www.google.com/search?q=kubernetes+audit+log+example&amp;rlz=1C1GCEU_enMY949MY949&amp;oq=kubernetes+audit+log&amp;aqs=chrome.2.69i59j0i512l9.4319j0j4&amp;sourceid=chrome&amp;ie=UTF-8" rel="nofollow noreferrer">online resources</a> about doing this. Each audit event records which user (or service account) performed which verb (for example <code>delete</code>) on which resource, which is exactly what you need to answer &quot;who deleted my deployment&quot;.</p>
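<p>To make that concrete, below is a minimal audit policy sketch that logs delete operations on deployments and drops everything else. It assumes you can pass <code>--audit-policy-file</code> and <code>--audit-log-path</code> to the kube-apiserver, which is not always possible on managed clusters, where you would enable the provider's audit-logging feature instead:</p> <pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log who deleted deployments, including the request body
  - level: RequestResponse
    verbs: [&quot;delete&quot;, &quot;deletecollection&quot;]
    resources:
      - group: &quot;apps&quot;
        resources: [&quot;deployments&quot;]
  # Ignore everything else to keep the log small
  - level: None
</code></pre> <p>The resulting audit log entries contain a <code>user.username</code> field identifying who issued the delete request.</p>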
gohm'c
<p>I have a nodejs app running as a Service that requires access to a mysql database running as another service (same namespace).</p> <p>I also have a mysql file that I will be importing to the database.</p> <p>here is my workflow :</p> <ol> <li><p>Set up a Secret that contain the root password alongside new database credentials (db_name, db_user, db_password).</p> </li> <li><p>Set up a ConfigMap with SQL script to create the db structure.</p> </li> <li><p>Finally, deploy mysql with pv/pvc, here is the yaml file content :</p> </li> </ol> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mysql-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 1Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/mnt/data&quot; --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: v1 kind: Service metadata: name: mysql spec: ports: - port: 3306 selector: app: mysql --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: selector: matchLabels: app: mysql strategy: type: Recreate template: metadata: labels: app: mysql spec: containers: - image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-secrets key: DB_ROOT - name: MYSQL_DATABASE valueFrom: secretKeyRef: name: mysql-secrets key: DB_NAME - name: MYSQL_USER valueFrom: secretKeyRef: name: mysql-secrets key: DB_USER - name: MYSQL_PASSWORD valueFrom: secretKeyRef: name: mysql-secrets key: DB_PASS ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: mysql-initdb mountPath: /docker-entrypoint-initdb.d volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: mysql-initdb configMap: name: mysql-initdb-config </code></pre> <p>Because my nodejs app doesn't access the database, I wanted to verify if the new db has been created and the sql file imported, running <code>kubectl exec</code> followed by <code>mysql -u root -p</code> is giving me this error :</p> <pre><code>ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) </code></pre> <p>Within the pod, running <code>echo $MYSQL_ROOT_PASSWORD</code> or any other env variables returns the correct value.</p> <p>What am I doing wrong ?</p>
jaquemus
<p>Solved the issue by editing the line : <code>mountPath: /var/lib/mysql</code></p> <p>I changed it to <code>/mnt/data</code> and the deployment works like a charm.</p>
jaquemus
<p>I'm learning microservices with Docker and k8s. This project is a simple node.js Express app and I tried to add a reverse proxy to it. I already pushed the images (both the Express app and the reverse proxy) to my Docker Hub.</p> <p>Now I encounter a problem when I try to access an endpoint in my pods. My pod is running, but when I try the endpoint:</p> <p><code>curl http://my-app-2-svc:8080/api/health</code></p> <p>I get connection refused, and I'm wondering what's wrong with that.</p> <p>It seems that there is some problem with the liveness probe. When I describe the pod, it shows:</p> <p><strong>Liveness probe failed: Get &quot;http://192.168.55.232:8080/health&quot;: dial tcp 192.168.55.232:8080: connect: connection refused</strong></p> <p><a href="https://i.stack.imgur.com/3qsAU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3qsAU.jpg" alt="enter image description here" /></a></p> <p>Here is my GitHub link for this project: <a href="https://github.com/floatingstarlight/Microservice" rel="nofollow noreferrer">https://github.com/floatingstarlight/Microservice</a></p> <p>Can anyone help me with that? Thanks!</p>
mundanus1130
<pre><code>...
livenessProbe:
  httpGet:
    path: /health
    port: 8080
...
</code></pre> <p>If you ask for a liveness probe, you need to implement the endpoint it checks, or your pod will be seen as failing and restarted every now and then. You can update your <a href="https://github.com/floatingstarlight/Microservice/blob/main/server.js" rel="nofollow noreferrer">server.js</a> to the following minimum to get up and running:</p> <pre><code>const express = require('express')
const app = express()
const port = 8080

app.get('/health', (req, res) =&gt; {
  res.send('ok')
})

app.listen(port, () =&gt; {
  console.log(`Listening on port ${port}`)
})
</code></pre>
gohm'c
<p>We have a Kubernetes cluster where we have some financial software running - Strands. When we are trying to access one of the pages on our frontend, the request from frontend is being sent to Kubernetes pod which should process this request.</p> <p>The pod runs Tomcat and we see that the request is being rejected with the following message:</p> <pre><code>2022-10-31T10:30:46,133 INFO [UserSessionFilter.java:153] : URI :/bfm-web/config/get.action 2022-10-31T10:30:46,133 INFO [UserSessionFilter.java:154] : Remote Host :10.240.0.103 2022-10-31T10:30:46,133 INFO [UserSessionFilter.java:155] : Remote Port :41898 2022-10-31T10:30:54,295 INFO [UserSessionFilter.java:152] : Request not allowed. Just return a 403 status 2022-10-31T10:30:54,296 INFO [UserSessionFilter.java:153] : URI :/bfm-web/config/get.action 2022-10-31T10:30:54,296 INFO [UserSessionFilter.java:154] : Remote Host :10.240.0.229 2022-10-31T10:30:54,296 INFO [UserSessionFilter.java:155] : Remote Port :57206 </code></pre> <p>I am not familiar with Tomcat or Java and do not really know where to look for. I tried to check web.xml file for some filter but could find any clues. Can this be related to some Kubernetes authorization settings?</p> <p>Let me know what info I can share with you to help, here are some other logs and also backend uses PostgreSQL database for user data, however it seems to work well:</p> <pre><code>2022-10-27T11:16:20,751 INFO [{omitted_due_to_sec_reasons}HttpHeaderUserSessionFilter.java:71] : The header name [user.header.name] has been set to HTTP_STRANDS_USER </code></pre> <p>This one above seems interesting to me because it seems to be a custom filter which sets the filter to accept a specific header (I intentionally omitted some company info). Does anyone know where I can find these filters?</p> <pre><code>10.240.0.103 - - [28/Oct/2022:14:02:33 +0000] &quot;GET /bfm-web/config/get.action HTTP/1.1&quot; 403 707 10.240.0.103 - - [31/Oct/2022:10:12:01 +0000] &quot;GET /bfm-web/config/get.action HTTP/1.1&quot; 403 743 10.240.0.103 - - [31/Oct/2022:10:24:31 +0000] &quot;GET /bfm-web/config/get.action HTTP/1.1&quot; 403 743 10.240.0.103 - - [31/Oct/2022:10:24:36 +0000] &quot;GET /bfm-web/config/get.action HTTP/1.1&quot; 403 743 10.240.0.103 - - [31/Oct/2022:10:30:46 +0000] &quot;GET /bfm-web/config/get.action HTTP/1.1&quot; 403 743 10.240.0.229 - - [31/Oct/2022:10:30:54 +0000] &quot;GET /bfm-web/config/get.action HTTP/1.1&quot; 403 743 </code></pre> <p>And other:</p> <pre><code>31-Oct-2022 10:12:01.856 INFO [http-nio-8080-exec-10] org.apache.catalina.core.ApplicationContext.log Request not allowed. Just return a 403 status. URI :: /bfm-web/config/get.action 31-Oct-2022 10:24:31.966 INFO [http-nio-8080-exec-2] org.apache.catalina.core.ApplicationContext.log Request not allowed. Just return a 403 status. URI :: /bfm-web/config/get.action 31-Oct-2022 10:24:36.951 INFO [http-nio-8080-exec-3] org.apache.catalina.core.ApplicationContext.log Request not allowed. Just return a 403 status. URI :: /bfm-web/config/get.action 31-Oct-2022 10:30:46.133 INFO [http-nio-8080-exec-5] org.apache.catalina.core.ApplicationContext.log Request not allowed. Just return a 403 status. URI :: /bfm-web/config/get.action 31-Oct-2022 10:30:54.297 INFO [http-nio-8080-exec-6] org.apache.catalina.core.ApplicationContext.log Request not allowed. Just return a 403 status. URI :: /bfm-web/config/get.action </code></pre>
t3ranops
<p>The issue was related to missing &quot;enable-underscores-in-headers: 'true'&quot; on Ingress NGINX controller. Adding it and restarting the pods did the magic.</p>
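<p>For reference, a minimal sketch of where that setting usually lives: it is a key in the ingress-nginx controller's ConfigMap. The ConfigMap name and namespace below are the defaults of a typical ingress-nginx install and may differ in your deployment:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on how the controller was installed
  namespace: ingress-nginx
data:
  enable-underscores-in-headers: &quot;true&quot;
</code></pre> <p>This matters because NGINX by default drops request headers containing underscores (such as the <code>HTTP_STRANDS_USER</code> header mentioned in the question) unless this option is enabled.</p>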
t3ranops
<p>I have an EKS cluster and an RDS (mariadb). I am trying to make a backup of given databases though a script in a <code>CronJob</code>. The <code>CronJob</code> object looks like this:</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: mysqldump namespace: mysqldump spec: schedule: &quot;* * * * *&quot; concurrencyPolicy: Replace jobTemplate: spec: template: spec: containers: - name: mysql-backup image: viejo/debian-mysqldump:latest envFrom: - configMapRef: name: mysqldump-config args: - /bin/bash - -c - /root/mysqldump.sh &quot;(${MYSQLDUMP_DATABASES})&quot; &gt; /proc/1/fd/1 2&gt;/proc/1/fd/2 || echo KO &gt; /tmp/healthcheck resources: limits: cpu: &quot;0.5&quot; memory: &quot;0.5Gi&quot; restartPolicy: OnFailure </code></pre> <p>The script is called <code>mysqldump.sh</code>, which gets all necessary details from a <code>ConfigMap</code> object. It makes the dump of the databases in an environment variable <code>MYSQLDUMP_DATABASES</code>, and moves it to S3 bucket.</p> <p><em>Note: I am going to move some variables to a <code>Secret</code>, but before I need this to work.</em></p> <p>What happens is NOTHING. The script is never getting executed I tried putting a &quot;echo starting the backup&quot;, before the script, and &quot;echo backup ended&quot; after it, but I don't see none of them. If I'd access the container and execute the same exact command manually, it works:</p> <pre><code>root@mysqldump-27550908-sjwfm:/# /root/mysqldump.sh &quot;(${MYSQLDUMP_DATABASES})&quot; &gt; /proc/1/fd/1 2&gt;/proc/1/fd/2 || echo KO &gt; /tmp/healthcheck root@mysqldump-27550908-sjwfm:/# </code></pre> <p><a href="https://i.stack.imgur.com/hJetE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hJetE.png" alt="enter image description here" /></a></p> <p>Can anyone point out a possible issue?</p>
suren
<p>Try changing <code>args</code> to <code>command</code>. With only <code>args</code> set, the values are passed as arguments to the image's default entrypoint rather than being executed as the command itself:</p> <pre><code>...
  command:
  - /bin/bash
  - -c
  - /root/mysqldump.sh &quot;(${MYSQLDUMP_DATABASES})&quot; &gt; /proc/1/fd/1 2&gt;/proc/1/fd/2 || echo KO &gt; /tmp/healthcheck
...
</code></pre>
gohm'c
<p>I have implemented RBAC in my project. I have given permission to scale deployments, but there is no upper limit on scaling and it is getting misused. People are scaling deployments to 15, 20 pods. Is there any way to restrict them from scaling beyond a certain maximum limit?</p>
Yogesh Jilhawar
<p>It's not possible: RBAC grants permission at the level of API calls, not on the contents of a resource. You can solve your problem by setting a pod quota for the specific namespace: <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/</a></p>
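<p>Following that documentation, a minimal sketch of such a quota is shown below; the namespace name and the pod count of 10 are placeholders to adjust to your own limits:</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
  namespace: team-namespace   # the namespace whose pod count you want to cap
spec:
  hard:
    pods: &quot;10&quot;
</code></pre> <p>With this in place, scaling a deployment beyond the quota simply leaves the extra pods uncreated, regardless of the RBAC permissions of whoever ran the scale command.</p>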
Akshay Gopani
<pre><code> **Yaml for kubernetes that is first used to create raft backup and then upload into gas bucket** apiVersion: batch/v1beta1 kind: CronJob metadata: labels: app.kubernetes.io/component: raft-backup numenapp: raft-backup name: raft-backup namespace: raft-backup spec: concurrencyPolicy: Forbid failedJobsHistoryLimit: 3 jobTemplate: spec: template: metadata: annotations: vault.security.banzaicloud.io/vault-addr: https://vault.vault-internal.net:8200 labels: app.kubernetes.io/component: raft-backup spec: containers: - args: - | SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token); export VAULT_TOKEN=$(vault write -field=token auth/kubernetes/login jwt=$SA_TOKEN role=raft-backup); vault operator raft snapshot save /share/vault-raft.snap; echo &quot;snapshot is success&quot; command: [&quot;/bin/sh&quot;, &quot;-c&quot;] env: - name: VAULT_ADDR value: https://vault.vault-internl.net:8200 image: vault:1.10.9 imagePullPolicy: Always name: snapshot volumeMounts: - mountPath: /share name: share - args: - -ec - sleep 500 - &quot;until [ -f /share/vault-raft.snap ]; do sleep 5; done;\ngsutil cp /share/vault-raft.snap\ \ gs://raft-backup/vault_raft_$(date +\&quot;\ %Y%m%d_%H%M%S\&quot;).snap;\n&quot; command: - /bin/sh image: gcr.io/google.com/cloudsdktool/google-cloud-cli:latest imagePullPolicy: IfNotPresent name: upload securityContext: allowPrivilegeEscalation: false volumeMounts: - mountPath: /share name: share restartPolicy: OnFailure securityContext: fsGroup: 1000 runAsGroup: 1000 runAsUser: 1000 serviceAccountName: raft-backup volumes: - emptyDir: {} name: share schedule: '*/3 * * * *' startingDeadlineSeconds: 60 successfulJobsHistoryLimit: 3 suspend: false </code></pre> <p><strong>Error while running gsutil command inside the upload pod</strong></p> <p>$ gsutil Traceback (most recent call last): File &quot;/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/configurations/named_configs.py&quot;, line 172, in ActiveConfig return ActiveConfig(force_create=True) File &quot;/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/configurations/named_configs.py&quot;, line 492, in ActiveConfig config_name = _CreateDefaultConfig(force_create) File &quot;/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/configurations/named_configs.py&quot;, line 640, in _CreateDefaultConfig file_utils.MakeDir(paths.named_config_directory) File &quot;/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/util/files.py&quot;, line 125, in MakeDir os.makedirs(path, mode=mode) File &quot;/usr/bin/../lib/google-cloud-sdk/platform/bundledpythonunix/lib/python3.9/os.py&quot;, line 215, in makedirs makedirs(head, exist_ok=exist_ok) File &quot;/usr/bin/../lib/google-cloud-sdk/platform/bundledpythonunix/lib/python3.9/os.py&quot;, line 215, in makedirs makedirs(head, exist_ok=exist_ok) File &quot;/usr/bin/../lib/google-cloud-sdk/platform/bundledpythonunix/lib/python3.9/os.py&quot;, line 225, in makedirs mkdir(name, mode) OSError: [Errno 30] Read-only file system: '/home/cloudsdk/.config' $ command terminated with exit code 137</p> <pre><code></code></pre>
kapil dev
<p><code>OSError: [Errno 30] Read-only file system: '/home/cloudsdk/.config' $ command terminated with exit code 137</code></p> <p>It seems you don't give enough permissions in your CronJob.</p> <p>Try changing:</p> <pre><code>securityContext:
  fsGroup: 1000
  runAsGroup: 1000
  runAsUser: 1000
</code></pre> <p>to:</p> <pre><code>securityContext:
  privileged: true
</code></pre> <p>Tell me whether it works or not and we can discuss it.</p> <p>Edit for a complete response:</p> <p>Use <code>apiVersion: batch/v1</code> instead of <code>apiVersion: batch/v1beta1</code></p>
sbrienne
<p>I have installed docker desktop on my windows 10 and have enabled Kubernetes. When I run the <code>kubectl config current-context</code> command I am getting this response <code>gke_k8s-demo-263903_asia-south1-a_kubia</code>. How do I set up the context to point to <code>docker-desktop</code>? I remember that I had worked with GKE earlier but not sure how to reset the context.</p>
zilcuanu
<p>From your local machine run the following; you should see docker-desktop listed:</p> <blockquote> <p>kubectl config get-contexts</p> </blockquote> <p>Then run the below:</p> <blockquote> <p>kubectl config use-context docker-desktop</p> </blockquote> <p>If the cluster name you want to communicate with is not listed, it means your kubeconfig file doesn't have a context for that cluster.</p>
m8usz
<p>I have a .mqsc file with commands for creating queues (IBM MQ).<br /> How do I run the script via kubectl?<br /> <code>kubectl exec -n test -it mq-0 -- /bin/bash -f create_queues.mqsc</code> doesn't work.<br /> log:</p> <blockquote> <p>/bin/bash: create_queues.mqsc: No such file or directory command terminated with exit code 127</p> </blockquote>
AbrA
<p>Most probably your script is not under the &quot;/&quot; directory inside the container - the path is resolved inside the container, not on your local machine. You need to find the full path of the script inside the container, and then execute it using that path.</p>
Kağan Mersin
<p>My issue is exactly the same as <a href="https://stackoverflow.com/questions/66275458/could-not-access-kubernetes-ingress-in-browser-on-windows-home-with-minikube">this</a>. But replication the question again for your reference::</p> <p>I am facing the problem which is that I could not access the Minikube Ingress on the Browser using it's IP. I have installed Minikube on Windows 10 Home, and starting the minikube with docker driver(<code>minikube start --driver=docker</code>).</p> <blockquote> <p><strong>System info:</strong></p> <p>Windows 10 Home</p> <p>Minikube(v1.18.1)</p> <p>Docker(driver for minikube) - Docker engine version 20.10.5</p> </blockquote> <p>I am following this official document - <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/</a></p> <p>First I created the deployment by running this below command on Minikube.</p> <p><code>kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0</code></p> <p>The deployment get created which can be seen on the below image: enter image description here</p> <p><a href="https://i.stack.imgur.com/5byqt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5byqt.png" alt="enter image description here" /></a></p> <p>Next, I exposed the deployment that I created above. For this I ran the below command.</p> <p><code>kubectl expose deployment web --type=NodePort --port=8080</code></p> <p>This created a service which can be seen by running the below command:</p> <p><code>kubectl get service web</code> The screenshot of the service is shown below: <a href="https://i.stack.imgur.com/GkqAe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GkqAe.png" alt="enter image description here" /></a></p> <ol start="3"> <li>I can now able to visit the service on the browser by running the below command:</li> </ol> <p><code>minikube service web</code></p> <p>In the below screenshot you can see I am able to view it on the browser.</p> <p><a href="https://i.stack.imgur.com/7760P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7760P.png" alt="enter image description here" /></a></p> <ol start="4"> <li>Next, I created an Ingress by running the below command:</li> </ol> <p><code>kubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml</code></p> <p>The ingress gets created and I can verify it by running the below command:</p> <p><code>kubectl get ingress</code> The screenshot for this is given below: <a href="https://i.stack.imgur.com/80QBA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/80QBA.png" alt="enter image description here" /></a></p> <p>The ingress ip is listed as 192.168.49.2. So that means if I should open it in the browser then it should open, but unfortunately not. It is showing site can't be reached. See the below screenshot.</p> <p><a href="https://i.stack.imgur.com/LeIOM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LeIOM.png" alt="enter image description here" /></a></p> <p>What is the problem. Please provide me a solution for it?</p> <p>I also added the mappings on etc\hosts file.</p> <p><code>192.168.49.2 hello-world.info</code></p> <p>Then I also tried opening hello-world.info on the browser but no luck.</p> <p>In the below picture I have done ping to hello-world.info which is going to IP address 192.168.49.2. 
This shows etc\hosts mapping is correct:</p> <p><a href="https://i.stack.imgur.com/OKGam.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OKGam.png" alt="enter image description here" /></a></p> <p>I also did curl to minikube ip and to hello-world.info and both get timeout. See below image:</p> <p><a href="https://i.stack.imgur.com/yMML2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yMML2.png" alt="enter image description here" /></a></p> <p>The <code>kubectl describe services web</code> provides the following details:</p> <pre><code>Name: web Namespace: default Labels: app=web Annotations: &lt;none&gt; Selector: app=web Type: NodePort IP: 10.100.184.92 Port: &lt;unset&gt; 8080/TCP TargetPort: 8080/TCP NodePort: &lt;unset&gt; 31880/TCP Endpoints: 172.17.0.4:8080 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>The <code>kubectl describe ingress example-ingress</code> gives the following output:</p> <pre><code>Name: example-ingress Namespace: default Address: 192.168.49.2 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- hello-world.info / web:8080 172.17.0.4:8080) Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 Events: &lt;none&gt; </code></pre> <p><em><strong>The issue seems to have been resolved there</strong></em>, by following the below instructions(as posted in the comments):</p> <blockquote> <p>Once you setup the ingress with necessary change, i guess you are in the powershell of windows with minikube running right? Make sure you ‘enable addons ingress’ and have a separate console running ‘minikube tunnel’ as well. Also, add the hostname and ip address to windows’ host table. Then type ‘minikue ssh’ in powershell, it gives you command line. Then you can ‘curl myapp.com’ then you should get response as expected.</p> </blockquote> <p>Nevertheless, in my case, <em><strong>the minikube tunnel is not responding upon giving the <code>minikube tunnel</code> command.</strong></em> :</p> <p><a href="https://i.stack.imgur.com/dVd3Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dVd3Z.png" alt="enter image description here" /></a></p> <p><em><strong>I am not able to <code>curl hello-world.info</code> even through <code>minikube ssh</code>. Kindly help!</strong></em></p>
vagdevi k
<p><strong>On Windows</strong></p> <p>After some decent amount of time, I came to the conclusion that ingress has some conflicts to work with Docker on Windows10-Home. Things are working fine if we want to expose a service of NodePort type but Ingress is troublesome.</p> <p>Further, I tried to set up WSL 2 with Ubuntu in Windows 10 but no luck.</p> <p>Finally, <strong>the following worked for Ingress of Minikube on Windows10 Home:</strong></p> <ul> <li>Install <a href="https://www.virtualbox.org/wiki/Downloads" rel="noreferrer">VirtualBox</a></li> <li>Uncheck the <strong>Virtual Machine Platform</strong> and <strong>Windows Hypervisor Platform</strong> options from <code>Control Panel -&gt; Programs -&gt; Turn Windows Features on and off (under Programs and Features)</code> and then click ok. Restart your computer if prompted to.</li> <li>Now, execute the following commands in a new cmd</li> </ul> <pre><code>minikube delete minikube start --driver=virtualbox </code></pre> <p>if <code>minikube start --driver=virtualbox</code> doesn't work, then use <code>minikube start --driver=virtualbox --no-vtx-check</code>.</p> <p>This process solved my problem and ingress is working fine on my Windows 10 Home Minikube.</p> <p><strong>On Ubuntu</strong></p> <p>Finally, Docker on Ubuntu is inherently supporting Minikube Ingress seamlessly without any glitch.</p>
vagdevi k
<p>I'm trying to run Stateful set with my own scripts, and I'm able to run first script that will spin up <code>mongodb</code> and setup some users etc, but the second script in <code>postStart</code> block, named <code>configure.sh</code> is never executed for some reason.</p> <p>Here's the <code>StatefulSet</code> manifest <code>yaml</code>:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: mongo labels: component: mongo spec: selector: matchLabels: component: mongo serviceName: mongo replicas: 1 template: metadata: labels: component: mongo spec: containers: - name: mongo image: mongo:latest command: [ &quot;/bin/bash&quot;, &quot;-c&quot; , &quot;+m&quot;] workingDir: /mongo/scripts args: - /mongo/scripts/mongo-start.sh livenessProbe: exec: command: - &quot;bin/bash&quot; - &quot;-c&quot; - mongo -u $MONGO_USER -p $MONGO_PASSWORD --eval db.adminCommand\(\&quot;ping\&quot;\) failureThreshold: 3 successThreshold: 1 periodSeconds: 10 timeoutSeconds: 5 ports: - containerPort: 27017 imagePullPolicy: Always lifecycle: postStart: exec: command: [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;/mongodb/configure.sh&quot;] volumeMounts: - name: mongo-persistent-storage mountPath: /data/db - name: mongo-scripts mountPath: /mongo/scripts - name: mongo-config mountPath: /mongodb/configure.sh subPath: configure.sh env: - name: MONGO_USER_APP_NAME valueFrom: configMapKeyRef: key: MONGO_USER_APP_NAME name: mongo-auth-env - name: MONGO_USER_APP_PASSWORD valueFrom: configMapKeyRef: key: MONGO_USER_APP_PASSWORD name: mongo-auth-env - name: MONGO_USER valueFrom: configMapKeyRef: key: MONGO_USER name: mongo-auth-env - name: MONGO_PASSWORD valueFrom: configMapKeyRef: key: MONGO_PASSWORD name: mongo-auth-env - name: MONGO_BIND_IP valueFrom: configMapKeyRef: key: MONGO_BIND_IP name: mongo-config-env restartPolicy: Always volumes: - name: mongo-scripts configMap: name: mongo-scripts defaultMode: 0777 - name: mongo-config configMap: name: mongo-config defaultMode: 0777 - name: mongo-config-env configMap: name: mongo-config-env defaultMode: 0777 - name: mongo-auth-env configMap: name: mongo-auth-env defaultMode: 0777 volumeClaimTemplates: - metadata: name: mongo-persistent-storage spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi </code></pre> <p><code>mongo-start.sh</code> which is in <code>/scripts</code> folder with another scripts, is being executed, but after <code>Pod</code> is up and running, <code>configure.sh</code> is never executed, logs are not helpful, <code>kubectl describe pod</code> returns it as recognized, but it never runs. <code>ConfigMaps</code> are all deployed and their content and paths are also ok. Is there any other way to run the script after another, or I'm doing something wrong, been searching on <code>SO</code> and official docs, that's the only examples I found. 
Tnx</p> <p><strong>EDIT</strong> it started somehow, but with:</p> <pre><code>Exec lifecycle hook ([/bin/bash -c /mongodb/mongodb-config.sh]) for Container &quot;mongo&quot; in Pod &quot;mongo-0_test(e9db216d-c1c2-4f19-b85e-19b210a22bbb)&quot; failed - error: command '/bin/bash -c /mongodb/mongodb-config.sh' exited with 1: , message: &quot;MongoDB shell version v4.2.12\nconnecting to: mongodb://mongo:27017/?authSource=admin&amp;compressors=disabled&amp;gssapiServiceName=mongodb\n2021-11-24T22:16:50.520+0000 E QUERY [js] Error: couldn't connect to server mongo:27017, connection attempt failed: SocketException: Error connecting to mongo:27017 (172.20.3.3:27017) :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:353:17\n@(connect):2:6\n2021-11-24T22:16:50.522+0000 </code></pre> <p>Content of the configure.sh:</p> <pre><code>#!/bin/bash mongo --username $MONGO_USER_ROOT_NAME --password &quot;$MONGO_USER_ROOT_PASSWORD&quot; --authenticationDatabase &quot;$MONGO_AUTH_SOURCE&quot; --host mongo --port &quot;$MONGO_PORT&quot; &lt; create.js </code></pre> <p>If I remove postStart part, and init into container, I can successfully run the script..</p>
dejanmarich
<p>There is no guarantee that the <code>postStart</code> hook will be called after the container entrypoint. Also, the <code>postStart</code> hook can be called more than once. The error occurred because, by the time <code>configure.sh</code> was executed, mongodb was not up and running yet. If your <code>configure.sh</code> script is idempotent, you can wait before proceeding to the next step:</p> <pre><code>#!/bin/bash

until mongo --nodb --disableImplicitSessions --host mongo --username $MONGO_USER_ROOT_NAME --password $MONGO_USER_ROOT_PASSWORD --eval 'db.adminCommand(&quot;ping&quot;)'; do sleep 1; done

mongo --username $MONGO_USER_ROOT_NAME --password &quot;$MONGO_USER_ROOT_PASSWORD&quot; --authenticationDatabase &quot;$MONGO_AUTH_SOURCE&quot; --host mongo --port &quot;$MONGO_PORT&quot; &lt; create.js
</code></pre>
gohm'c
<p>I deployed a helm chart and used it for more than ten days, but when I used it today, I found that the <code>pod was missing</code>, and the statefulset instance became <code>0/0</code>. I checked the history through <code>kubectl rollout history</code>, and the result was 1, which has not been modified. , what is the reason for this problem?</p> <p>PS: Some records are as follows</p> <pre><code>kubectl get sts -n proxy NAME READY AGE sts-test 0/0 16d </code></pre> <pre><code>$ kubectl rollout history sts/sts-test statefulset.apps/sts-test REVISION 1 </code></pre> <pre><code>kubectl get sts sts-test -o yaml apiVersion: apps/v1 kind: StatefulSet metadata: creationTimestamp: &quot;2022-06-09T03:22:23Z&quot; generation: 2 labels: app.kubernetes.io/instance: sts-test-integration app.kubernetes.io/managed-by: Tiller app.kubernetes.io/name: sts-test app.kubernetes.io/version: &quot;1.0&quot; helm.sh/chart: sts-test-0.1.0 managedFields: - apiVersion: apps/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: .: {} f:app.kubernetes.io/instance: {} f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/version: {} f:helm.sh/chart: {} f:spec: f:podManagementPolicy: {} f:revisionHistoryLimit: {} f:selector: f:matchLabels: .: {} f:app.kubernetes.io/instance: {} f:app.kubernetes.io/name: {} f:serviceName: {} f:template: f:metadata: f:labels: .: {} f:app.kubernetes.io/instance: {} f:app.kubernetes.io/name: {} f:spec: f:containers: k:{&quot;name&quot;:&quot;sts-test&quot;}: .: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:ports: .: {} k:{&quot;containerPort&quot;:3306,&quot;protocol&quot;:&quot;TCP&quot;}: .: {} f:containerPort: {} f:name: {} f:protocol: {} f:readinessProbe: .: {} f:failureThreshold: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:tcpSocket: .: {} f:port: {} f:timeoutSeconds: {} f:resources: {} f:securityContext: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{&quot;mountPath&quot;:&quot;/data/mysql&quot;}: .: {} f:mountPath: {} f:name: {} f:subPath: {} k:{&quot;mountPath&quot;:&quot;/var/log/mysql&quot;}: .: {} f:mountPath: {} f:name: {} f:subPath: {} f:dnsPolicy: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:updateStrategy: f:type: {} f:volumeClaimTemplates: {} manager: Go-http-client operation: Update time: &quot;2022-06-09T03:22:23Z&quot; - apiVersion: apps/v1 fieldsType: FieldsV1 fieldsV1: f:spec: f:replicas: {} manager: kubectl operation: Update time: &quot;2022-06-24T11:37:22Z&quot; - apiVersion: apps/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:collisionCount: {} f:currentRevision: {} f:observedGeneration: {} f:replicas: {} f:updateRevision: {} manager: kube-controller-manager operation: Update time: &quot;2022-06-24T11:37:24Z&quot; name: sts-test namespace: proxy resourceVersion: &quot;5821333&quot; selfLink: /apis/apps/v1/namespaces/proxy/statefulsets/sts-test uid: 8bb73b11-8ee9-44e1-8ead-f5b7c07c5f2e spec: podManagementPolicy: Parallel replicas: 0 revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/instance: sts-test-integration app.kubernetes.io/name: sts-test serviceName: sts-test-svc template: metadata: creationTimestamp: null labels: app.kubernetes.io/instance: sts-test-integration app.kubernetes.io/name: sts-test spec: containers: - image: sts-test-integration:v0.1.0 imagePullPolicy: IfNotPresent name: sts-test ports: - containerPort: 3306 name: mysql 
protocol: TCP resources: {} securityContext: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /data/mysql name: sts-test-data subPath: mysqldata-pvc - mountPath: /var/log/mysql name: sts-test-data subPath: mysqllog-pvc dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: sts-test serviceAccountName: sts-test terminationGracePeriodSeconds: 30 updateStrategy: type: RollingUpdate volumeClaimTemplates: - apiVersion: v1 kind: PersistentVolumeClaim metadata: creationTimestamp: null name: sts-test-data spec: accessModes: - ReadWriteOnce resources: requests: storage: 50G storageClassName: local-path volumeMode: Filesystem status: phase: Pending status: collisionCount: 0 currentRevision: sts-test-6f95c95b57 observedGeneration: 2 replicas: 0 updateRevision: sts-test-6f95c95b57 </code></pre>
moluzhui
<pre><code>... - apiVersion: apps/v1 fieldsType: FieldsV1 fieldsV1: f:spec: f:replicas: {} # &lt;-- replicas count has changed manager: kubectl operation: Update time: &quot;2022-06-24T11:37:22Z&quot; # &lt;-- at this time ... </code></pre> <p><code>kubectl scale statefulset sts-test --replicas=0</code> will not create a new history record by default.</p>
gohm'c
<p>I am working on a Multi-Container Flask App, which involves a Web container(Flask app), Postgres container(for DB services), and a Redis container(for Caching services).</p> <p>Web app has <code>web_deployment.yaml</code> and <code>web_service.yaml</code> files. Postgres app has <code>postgres_deployment.yaml</code> and <code>postgres_service.yaml</code> files. Redis app has <code>redis_deployment.yaml</code> and <code>redis_service.yaml</code> files.</p> <p>My <code>web_deployment.yaml</code> file looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: web-deployment spec: replicas: 3 template: metadata: labels: component: my-web-app spec: containers: - name: my-web-app-container image: web_app_image:latest ports: - containerPort: 80 env: - name: REDIS_HOST value: redis-service - name: REDIS_PORT value: '6379' - name: POSTGRES_USER value: username - name: POSTGRES_HOST value: postgres-service - name: POSTGRES_PORT value: '5432' - name: POSTGRES_DB value: postgres_db - name: PGPASSWORD valueFrom: secretKeyRef: name: pgpassword key: PGPASSWORD selector: matchLabels: component: my-web-app </code></pre> <p>The <code>postgres_deployment.yaml</code> file looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgres-deployment spec: replicas: 1 selector: matchLabels: component: postgres template: metadata: labels: component: postgres spec: volumes: - name: postgres-storage persistentVolumeClaim: claimName: database-persistent-volume-claim containers: - name: postgres image: postgres:12-alpine ports: - containerPort: 5432 volumeMounts: - name: postgres-storage mountPath: /var/lib/postgresql/data subPath: postgres env: - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: pgpassword key: PGPASSWORD </code></pre> <p>While trying to establish connection for web container with the postgres container, I got the following issue:</p> <pre><code> File &quot;/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py&quot;, line 211, in raise_ raise exception File &quot;/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py&quot;, line 599, in __connect connection = pool._invoke_creator(self) File &quot;/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/create.py&quot;, line 578, in connect return dialect.connect(*cargs, **cparams) File &quot;/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py&quot;, line 584, in connect return self.dbapi.connect(*cargs, **cparams) File &quot;/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py&quot;, line 127, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: password authentication failed for user &quot;username&quot; </code></pre>
vagdevi k
<p>I successfully fixed it!</p> <p>The mistake was that, I just mentioned the password in the <code>posgres_deployment.yaml</code> file, but I should also mention the database name and the username, using which the <code>web_deployment.yaml</code> is trying to access this db service.</p> <p>Now the new <code>postgres_deployment.yaml</code> file, after the correction, looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgres-deployment spec: replicas: 1 selector: matchLabels: component: postgres template: metadata: labels: component: postgres spec: volumes: - name: postgres-storage persistentVolumeClaim: claimName: database-persistent-volume-claim containers: - name: postgres image: postgres:12-alpine ports: - containerPort: 5432 volumeMounts: - name: postgres-storage mountPath: /var/lib/postgresql/data subPath: postgres env: - name: POSTGRES_USER value: username - name: POSTGRES_DB value: postgres_db - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: pgpassword key: PGPASSWORD </code></pre>
vagdevi k
<p>I have a network issue on my cluster and at first I thought it was a routing issue but discovered that maybe the outgoing packet from the cluster isn't getting wrapped with the node ip when leaving the node.</p> <p>Background is that I have two clusters. I set up the first one (months ago) manually using <a href="https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html" rel="nofollow noreferrer">this guide</a> and it worked great. Then the second one I built multiple times as I created/debugged anisble scripts to automate how I created the first cluster.</p> <p>On cluster2 I have the network issue... I can get to pods on other nodes but can't get to anything on my regular network. I have tcpdump'd the physical interface on node0 in cluster2 when pinging from a busybox pod and the 172.16.0.x internal pod ip is visible at that interface as the source ip - and my network outside the node has no idea what to do with it. But on cluster1 this same test shows the node ip in place of the pod ip - which is how I assume it should work.</p> <p>My question is how can I troubleshoot this? Any ideas would be great as I have been at this for several days now. Even if it seems obvious as I can no longer see the forest through the trees... ie. both clusters look the same everywhere I know how to check :)</p> <p>caveat to "my clusters are the same": Cluster1 is running kubectl 1.16 cluster2 is running 1.18</p> <p>----edit after @Matt dropped some kube-proxy knowledge on me----</p> <p>Did not know that kube-proxy rules could just be read by iptables command! Awesome!</p> <p>I think my problem is those 10.net addresses in the broke cluster. I don't even know where those came from as they are not in any of my ansible config scripts or kube init files... I use all 172's in my configs.</p> <p>I do pull some configs direct from source (flannel and CSI/CPI stuff) I'll pull those down and inspect them to see if the 10's are in there... Hopefully it's in the flannel defaults or something and I can just change that yml file!</p> <p>cluster1 working:</p> <pre><code>[root@k8s-master ~]# iptables -t nat -vnL| grep POSTROUTING -A5 Chain POSTROUTING (policy ACCEPT 22 packets, 1346 bytes) pkts bytes target prot opt in out source destination 6743K 550M KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */ 0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 3383K 212M RETURN all -- * * 172.16.0.0/16 172.16.0.0/16 117K 9002K MASQUERADE all -- * * 172.16.0.0/16 !224.0.0.0/4 0 0 RETURN all -- * * !172.16.0.0/16 172.16.0.0/24 0 0 MASQUERADE all -- * * !172.16.0.0/16 172.16.0.0/16 </code></pre> <p>cluster2 - not working:</p> <pre><code>[root@testvm-master ~]# iptables -t nat -vnL | grep POSTROUTING -A5 Chain POSTROUTING (policy ACCEPT 1152 packets, 58573 bytes) pkts bytes target prot opt in out source destination 719K 37M KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */ 0 0 RETURN all -- * * 10.244.0.0/16 10.244.0.0/16 0 0 MASQUERADE all -- * * 10.244.0.0/16 !224.0.0.0/4 131K 7849K RETURN all -- * * !10.244.0.0/16 172.16.0.0/24 0 0 MASQUERADE all -- * * !10.244.0.0/16 10.244.0.0/16 </code></pre>
Levi Silvertab
<p>Boom! @Matt's advice for the win.</p> <p>Using iptables to verify the NAT rules that flannel was applying did the trick. I was able to find the 10.244 subnet in the flannel config that was referenced in the guide I was using.</p> <p>I had two options: 1. download and alter the flannel yaml before deploying the CNI, or 2. make my kubeadm init subnet declaration match what flannel uses.</p> <p>I went with option 2 because I don't want to alter the flannel config every time... I just want to pull down their latest file and run with it. This worked quite nicely to resolve my issue.</p>
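<p>As a rough sketch of option 2 (not the exact configuration used above): the pod subnet can be passed as <code>--pod-network-cidr=10.244.0.0/16</code> to <code>kubeadm init</code>, or declared in a kubeadm configuration file so that it matches flannel's default of 10.244.0.0/16. The apiVersion below corresponds to kubeadm of that era and may need adjusting for your version:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  # must match the Network value in flannel's kube-flannel ConfigMap
  podSubnet: &quot;10.244.0.0/16&quot;
</code></pre>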
Levi Silvertab
<p>I am following <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html</a> to link EFS to EKS. I have two namespace under my k8s clusters: <code>dev</code> and <code>stage</code>, and from I understand, I'd need to have two PersistentVolumeClaims that map to the shared PersistentVolume and StorageClass. So after I ran below 3 commands:</p> <pre><code>kubectl apply -f specs/pv.yaml kubectl apply -f specs/claim.yaml kubectl apply -f specs/storageclass.yaml </code></pre> <p>and from <code>kubectl get sc,pv,pvc1 -n dev</code>, I am able to see all 3 items just fine. However, as I tried to add to <code>stage</code> namespace - <code>kubectl apply -f specs/claim.yaml --namespace=stage</code>, I got below errors as <code>efs-claim</code> becomes stuck in a forever PENDING status:</p> <pre><code>Name: efs-claim Namespace: stage StorageClass: efs-sc Status: Pending Volume: Labels: &lt;none&gt; Annotations: volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Used By: foo-api-stage-chart-12345-abcde foo-api-stage-chart-12345-abcde Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Provisioning 107s efs.csi.aws.com_ip-xxx External provisioner is provisioning volume for claim &quot;stage/efs-claim&quot; Warning ProvisioningFailed 107s efs.csi.aws.com_ip-xxx failed to provision volume with StorageClass &quot;efs-sc&quot;: rpc error: code = InvalidArgument desc = Missing provisioningMode parameter Normal ExternalProvisioning 4s (x8 over 107s) persistentvolume-controller waiting for a volume to be created, either by external provisioner &quot;efs.csi.aws.com&quot; or manually created by system administrator </code></pre> <p>What is causing</p> <blockquote> <p>efs.csi.aws.com_ip-xxx failed to provision volume with StorageClass &quot;efs-sc&quot;: rpc error: code = InvalidArgument desc = Missing provisioningMode parameter</p> </blockquote> <p>I did not have to provide such parameter on <code>dev</code> namespace so why it is required for a different namespace like <code>stage</code>?</p>
夢のの夢
<p>To resolve the error that you are seeing, re-apply your StorageClass with:</p> <pre><code>...
parameters:
  provisioningMode: efs-ap
  fileSystemId: &lt;ID of the file system created on EFS&gt;
...
</code></pre> <p>If multiple pods are going to read/write to the file system, re-apply the PersistentVolume with:</p> <pre><code>...
spec:
  ...
  accessModes:
    - ReadWriteMany
...
</code></pre> <p>Should the problem persist, do post your StorageClass, PersistentVolumeClaim and PersistentVolume specs in your question.</p>
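<p>Putting the pieces together, a minimal StorageClass for dynamic EFS provisioning might look like the sketch below, following the AWS EFS CSI driver documentation; the file system ID and directory permissions are placeholders:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap             # the parameter reported missing in the error
  fileSystemId: fs-0123456789abcdef0   # placeholder, use your own EFS file system ID
  directoryPerms: &quot;700&quot;
</code></pre> <p>The error in the question (&quot;Missing provisioningMode parameter&quot;) is raised by the provisioner precisely when this parameter is absent from the StorageClass.</p>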
gohm'c
<p>I read this in the K8s documentation: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery</a></p> <p>But I can't find, in the documentation or online, when K8s rotates the key - each day/week/month/some other interval? And how do I configure it?</p> <p>Any idea?</p>
tamirz12345
<p>The official documentation assumes you have solid OIDC know-how. Here's a good start, with an example to follow: <a href="https://banzaicloud.com/blog/kubernetes-oidc/" rel="nofollow noreferrer">https://banzaicloud.com/blog/kubernetes-oidc/</a></p>
gohm'c
<p>My application consumes a lot of resources during its start-up, for example while preparing data structures and files. However, once that's done, it consumes minimal resources for the rest of its run.</p> <p>Is there any strategy for configuring the pod's resource limits in a Kubernetes cluster to deal with this kind of situation?</p> <p>When I give it too little, it fails to start; when I give it too much, it's such a waste.</p>
bunky
<p>Ok, this is embarrassing: I had set the readinessProbe to just 15 seconds, which is fine with higher resource limits but not enough with lower ones.</p> <p>Lengthening it solved the problem.</p>
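<p>For anyone hitting the same thing, here is a minimal sketch of a more forgiving probe configuration. The container name, image, endpoint and timing values are placeholders; the point is simply to give a slow-starting app more time before it is marked unready or killed:</p> <pre><code>containers:
  - name: app             # placeholder
    image: my-app:latest  # placeholder
    readinessProbe:
      httpGet:
        path: /healthz    # placeholder endpoint
        port: 8080
      initialDelaySeconds: 60   # allow for the slow start-up
      periodSeconds: 10
      failureThreshold: 6
</code></pre> <p>On newer clusters a <code>startupProbe</code> can serve the same purpose: it holds off the liveness/readiness checks until the application has finished its expensive start-up.</p>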
bunky
<p>I am trying to set access control for using the &quot;shell&quot; UI button on the deploy dashboard, and only need this for one single pod. Using the k8s RBAC auth model, I need something like this, bound to a role:</p> <pre><code>- apiGroups: [&quot;&quot;]
  resources: [&quot;pods/exec&quot;]
  resourceNames: [&quot;api-server-f5b95446b-58wz4&quot;]
  verbs: [&quot;create&quot;]
</code></pre> <p>However, the suffix &quot;-f5b95446b-58wz4&quot; is randomly generated at deploy time and will change constantly, so this solution won't work.</p> <p>If resourceNames supported wildcard strings that would resolve my issue, but it looks like this is a known gap and not supported at the moment (<a href="https://github.com/kubernetes/kubernetes/issues/56582" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/56582</a>).</p> <p>Does anyone have an idea of a better way I can achieve this? Thanks!</p>
DubsWin
<p>You can use a policy engine like Kyverno to control what can and cannot be done. For example, you can prevent exec into a specific pod, filtered by name, like <a href="https://kyverno.io/policies/other/block-pod-exec-by-pod-and-container/" rel="nofollow noreferrer">this</a>.</p>
gohm'c
<p>I'm currently migrating a DAG from airflow version 1.10.10 to 2.0.0.</p> <p>This DAG uses a custom python operator where, depending on the complexity of the task, it assigns resources dynamically. The problem is that the import used in v1.10.10 (<strong>airflow.contrib.kubernetes.pod import Resources</strong>) no longer works. I read that for v2.0.0 I should use <strong>kubernetes.client.models.V1ResourceRequirements</strong>, but I need to build this resource object dynamically. This might sound dumb, but I haven't been able to find the correct way to build this object.</p> <p>For example, I've tried with</p> <pre><code> self.resources = k8s.V1ResourceRequirements( request_memory=get_k8s_resources_mapping(resource_request)['memory'], limit_memory=get_k8s_resources_mapping(resource_request)['memory_l'], request_cpu=get_k8s_resources_mapping(resource_request)['cpu'], limit_cpu=get_k8s_resources_mapping(resource_request)['cpu_l'] ) </code></pre> <p>or</p> <pre><code> self.resources = k8s.V1ResourceRequirements( requests={'cpu': get_k8s_resources_mapping(resource_request)['cpu'], 'memory': get_k8s_resources_mapping(resource_request)['memory']}, limits={'cpu': get_k8s_resources_mapping(resource_request)['cpu_l'], 'memory': get_k8s_resources_mapping(resource_request)['memory_l']} ) </code></pre> <p>(get_k8s_resources_mapping(resource_request)['xxxx'] just returns a value depending on the resource_request, like '2Gi' for memory or '2' for cpu)</p> <p>But they don't seem to work. The task fails.</p> <p>So, my question is, how would you go about correctly building a V1ResourceRequirements in Python? And, how should it look in the executor_config attribute of the task instance? Something like this, maybe?</p> <pre><code>'resources': {'limits': {'cpu': '1', 'memory': '512Mi'}, 'requests': {'cpu': '1', 'memory': '512Mi'}} </code></pre>
Alain
<p>The proper syntax is:</p> <p>For <strong>apache-airflow-providers-cncf-kubernetes&gt;=5.3.0</strong>:</p> <pre><code>from kubernetes import client from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator KubernetesPodOperator( ..., container_resources = client.V1ResourceRequirements( requests={&quot;cpu&quot;: &quot;1000m&quot;, &quot;memory&quot;: &quot;8G&quot;}, limits={&quot;cpu&quot;: &quot;16000m&quot;, &quot;memory&quot;: &quot;128G&quot;} ) ) </code></pre> <p>If you would like to generate it dynamically, simply replace the values in requests/limits with a function that returns the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1ResourceRequirements.md" rel="nofollow noreferrer">expected string</a>.</p> <hr /> <p>Below are the changes needed for the code to work on earlier versions.</p> <p>For <strong>apache-airflow-providers-cncf-kubernetes&lt;5.3.0 and &gt;=4.2.0</strong>:</p> <p>change the import path to:</p> <pre><code>from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator </code></pre> <p>For <strong>apache-airflow-providers-cncf-kubernetes&lt;4.2.0</strong>:</p> <p>Change <code>container_resources</code> to <code>resources</code></p>
Elad Kalif
<p>I am trying to deploy an image in k3s but I am getting an error like this. I have made sure that there is no syntax error. I have also added match labels in my spec, but don't know what causing the issue</p> <pre><code>spec.selector: Required value spec.template.metadata.labels: Invalid value: map[string]string{...} selector does not match template labels </code></pre> <p>This is my yaml file</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: labels: app: darwin_tritron_server name: darwin_tritron_server spec: replicas: 3 selector: matchLabels: app: darwin_tritron_server template: metadata: labels: app: darwin_tritron_server spec: containers: - args: - &quot;cd /models /opt/tritonserver/bin/tritonserver --model-repository=/models --allow-gpu-metrics=false --strict-model-config=false&quot; command: - /bin/sh - &quot;-c&quot; image: &quot;nvcr.io/nvidia/tritonserver:20.12-py3&quot; name: flower ports: - containerPort: 8000 name: http-triton - containerPort: 8001 name: grpc-triton - containerPort: 8002 name: metrics-triton resources: limits: nvidia.com/mig-1g.5gb: 1 volumeMounts: - mountPath: /models name: models volumes: - name: models nfs: path: &lt;path/to/flowerdemo/model/files&gt; readOnly: false server: &quot;&lt;IP address of the server&gt;&quot; </code></pre> <p>Any help would be appreciated</p>
Siddharth
<p>Wrong yaml indent in your spec, try:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: labels: app: darwin_tritron_server name: darwin_tritron_server spec: replicas: 3 selector: matchLabels: app: darwin_tritron_server template: metadata: labels: app: darwin_tritron_server spec: containers: - args: - &quot;cd /models /opt/tritonserver/bin/tritonserver --model-repository=/models --allow-gpu-metrics=false --strict-model-config=false&quot; command: - /bin/sh - &quot;-c&quot; image: &quot;nvcr.io/nvidia/tritonserver:20.12-py3&quot; name: flower ports: - containerPort: 8000 name: http-triton - containerPort: 8001 name: grpc-triton - containerPort: 8002 name: metrics-triton resources: limits: nvidia.com/mig-1g.5gb: 1 volumeMounts: - mountPath: /models name: models volumes: - name: models nfs: path: &lt;path/to/flowerdemo/model/files&gt; readOnly: false server: &quot;&lt;IP address of the server&gt;&quot; </code></pre>
gohm'c
<p>I am trying to expose a simple grafana service exposed outside the cluster. Here is what i have done so far in terms of troubleshooting and researching before posting here and little bit of details of my setup.</p> <p>Grafana deployment YAML</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: grafana name: grafana spec: selector: matchLabels: app: grafana template: metadata: labels: app: grafana spec: replicas: 2 securityContext: fsGroup: 472 supplementalGroups: - 0 containers: - name: grafana image: 'grafana/grafana:8.0.4' imagePullPolicy: IfNotPresent ports: - containerPort: 3000 name: http-grafana protocol: TCP env: - name: GF_DATABASE_CA_CERT_PATH value: /etc/grafana/BaltimoreCyberTrustRoot.crt.pem readinessProbe: failureThreshold: 3 httpGet: path: /robots.txt port: 3000 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 2 livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 3000 timeoutSeconds: 1 resources: requests: cpu: 250m memory: 750Mi volumeMounts: - name: grafana-configmap-pv mountPath: /etc/grafana/grafana.ini subPath: grafana.ini - name: grafana-pv mountPath: /var/lib/grafana - name: cacert mountPath: /etc/grafana/BaltimoreCyberTrustRoot.crt.pem subPath: BaltimoreCyberTrustRoot.crt.pem volumes: - name: grafana-pv persistentVolumeClaim: claimName: grafana-pvc - name: grafana-configmap-pv configMap: name: grafana-config - name: cacert configMap: name: mysql-cacert </code></pre> <p>Grafana Service yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: grafana spec: type: ClusterIP ports: - port: 3000 protocol: TCP targetPort: http-grafana clusterIP: 10.7.2.57 selector: app: grafana sessionAffinity: None </code></pre> <p>I have nginx installed as Ingress controller. Here is the YAML for nginx controller service</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ingress-nginx-controller namespace: ingress-nginx labels: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: ingress-nginx app.kubernetes.io/version: 1.0.0 helm.sh/chart: ingress-nginx-4.0.1 spec: ports: - name: http protocol: TCP appProtocol: http port: 80 targetPort: http nodePort: 32665 - name: https protocol: TCP appProtocol: https port: 443 targetPort: https nodePort: 32057 selector: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/name: ingress-nginx clusterIP: 10.7.2.203 clusterIPs: - 10.7.2.203 type: NodePort sessionAffinity: None externalTrafficPolicy: Cluster status: loadBalancer: {} </code></pre> <p>Ingress resource yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: grafana-ingress spec: rules: - host: test.grafana.com http: paths: - path: / pathType: Prefix backend: service: name: grafana port: number: 3000 </code></pre> <p>The ingress ip 10.7.0.5 is not accessible at all. I have tried redploying the resources various times. The grafana POD ips are accessible with port 3000, i am able to login etc but just been unable to access grafana through the nginx load balancer. 
What am i missing?</p> <p>EDITED:</p> <p>Results of kubectl get services</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana ClusterIP 10.7.2.55 &lt;none&gt; 3000/TCP 2d14h hello-world ClusterIP 10.7.2.140 &lt;none&gt; 80/TCP 42m kubernetes ClusterIP 10.7.2.1 &lt;none&gt; 443/TCP 9d </code></pre> <p>Results of kubectl get ingress</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE grafana-ingress &lt;none&gt; * 10.7.0.5 80 2d2h </code></pre> <p>Results of kubectl get pods</p> <pre><code>default grafana-85cdb8c656-6zgkg 1/1 Running 0 2d21h default grafana-85cdb8c656-7n626 1/1 Running 0 2d21h default hello-world-78796d6bfd-fwb98 1/1 Running 0 2d12h ingress-nginx ingress-nginx-controller-57ffff5864-rw57w 1/1 Running 0 2d12h </code></pre>
rocky
<p>Your ingress controller’s Service is of type NodePort, which means it does not have a public IP address. The Service’s ClusterIP (10.7.2.203) is only useful in the cluster’s internal network.</p> <p>If your cluster’s nodes have public IP addresses, you can use those to connect to the ingress controller. Since its Service is of type NodePort, it listens on specific ports on all of your cluster’s nodes. Based on the Service spec you provided, these ports are 32665 for HTTP and 32057 for HTTPS.</p> <p>If you want your ingress controller to have a dedicated IP address, you can change its Service’s type to LoadBalancer. Your Kubernetes service provider will assign a public IP address to your Service. You can then use that IP address to connect to your ingress controller.</p> <p>This only works if you are using a managed Kubernetes service. If you are self-managing, then you need to set up a loadbalancer that listens on a public IP address and routes traffic to your cluster’s nodes.</p>
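<p>For example, with the NodePort values from the Service spec in your question, you can test reachability like this (the node IP is a placeholder):</p> <pre><code># reach the ingress controller on any node's IP via the HTTP NodePort (32665)
curl -H &quot;Host: test.grafana.com&quot; http://&lt;node-ip&gt;:32665/

# or give the controller a dedicated IP by switching its Service to type LoadBalancer
# (only works if your environment can actually provision one)
kubectl patch service ingress-nginx-controller -n ingress-nginx \
  -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}'
</code></pre> <p>The Host header matters because your Ingress rule only matches the host <code>test.grafana.com</code>.</p>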
Arthur Busser
<p>I want to get the last time a secret was modified via the kube api. I cannot seem to find a way to access this information. I had a look at events but I cannot find any for Secrets.</p> <p>An example would be I create a secret called my-secret, I then update this the next day but I want to know what time it was updated and not the creation time.</p> <p>Any help would be great thanks.</p>
Westy10101
<p>The following command will give you the secret chronological history:</p> <p><code>kubectl get secret &lt;name&gt; --namespace &lt;namespace&gt; --show-managed-fields -o jsonpath='{range .metadata.managedFields[*]}{.manager}{&quot; did &quot;}{.operation}{&quot; at &quot;}{.time}{&quot;\n&quot;}{end}'</code></p> <p>Example, create a secret: <code>kubectl create secret generic test --from-literal user=$(echo 'somebody' | base64)</code></p> <p>Run the above command:</p> <blockquote> <p>kubectl-create did Update at <strong>2021-12-06T01:12:17Z</strong></p> </blockquote> <p>Retrieve the created secret <code>kubectl get secret test -o yaml &gt; test.yaml</code>. Replace the value for &quot;user&quot; in the yaml with <code>echo 'nobody' | base64</code> output and re-apply <code>kubectl apply -f test.yaml</code>.</p> <p>Run the above command and it reports the last update action and timestamp:</p> <blockquote> <p>kubectl-create did Update at <strong>2021-12-06T01:12:17Z</strong></p> <p>kubectl-client-side-apply did Update at <strong>2021-12-06T01:13:33Z</strong></p> </blockquote> <p>Now do a replace <code>kubectl patch secret test --type='json' -p='[{&quot;op&quot; : &quot;replace&quot; ,&quot;path&quot; : &quot;/data/user&quot; ,&quot;value&quot; : &quot;aGVsbG93b3JsZAo=&quot;}]'</code></p> <p>Run the above command again:</p> <blockquote> <p>kubectl-create did Update at <strong>2021-12-06T01:12:17Z</strong></p> <p>kubectl-client-side-apply did Update at <strong>2021-12-06T01:13:33Z</strong></p> <p>kubectl-patch did Update at <strong>2021-12-06T01:21:57Z</strong></p> </blockquote> <p>The command correctly reports all the changes made to the secret.</p>
gohm'c
<p>I'm running Apache Airflow on a Kubernetes cluster and I'm facing an issue with retrieving logs from deleted pods. I can easily get logs from currently running or recently terminated pods using kubectl logs, but I'm unable to get logs from older, deleted pods.</p> <h3>My versions</h3> <pre><code>awswrangler==2.19.0 apache-airflow-providers-cncf-kubernetes==4.3.0 apache-airflow-providers-amazon==5.0.0 boto3==1.24.56 gnupg==2.3.1 PyYAML==6.0 </code></pre> <p>Here's what I've tried:</p> <pre><code># This works fine and gives me the actual log kubectl logs my-current-pod-239847283947 # This doesn't work for an old, deleted pod kubectl logs my-old-pod-928374928374 The second command returns the following error: </code></pre> <h3>The error</h3> <pre><code>Error from server (NotFound): pods &quot;my-old-pod&quot; not found </code></pre> <p>I understand that the pod is deleted, but is there a way to retrieve its logs or at least configure Airflow or Kubernetes to save these logs for future reference?</p> <p>NOTE: I'm using AWS for storage</p> <p>Any help would be greatly appreciated!</p>
The Dan
<p>You might be able to get the logs by following <a href="https://stackoverflow.com/questions/57007134/how-to-see-logs-of-terminated-pods">How to see logs of terminated pods</a>.</p> <p>However, it's much easier to handle this by simply keeping the pods you need.</p> <p>Airflow offers the option to delete the pod when the task is finished; you simply need to disable the deletion.</p> <p>For <code>apache-airflow-providers-cncf-kubernetes&gt;=7.2.0</code>:</p> <pre><code>KubernetesPodOperator(..., on_finish_action='keep_pod') </code></pre> <p>NOTE: If you don't care about the logs of successful tasks, you can also set <code>on_finish_action='delete_succeeded_pod'</code>, which deletes only successful pods, leaving the errored ones for further investigation. This offers much more flexibility than older versions of the provider (see this <a href="https://github.com/apache/airflow/pull/30718" rel="nofollow noreferrer">PR</a>).</p> <p>For <code>apache-airflow-providers-cncf-kubernetes&lt;7.2.0</code>:</p> <pre><code>KubernetesPodOperator(..., is_delete_operator_pod=False) </code></pre> <p>If you no longer need the pods you can delete them with kubectl. You can also set up a script (executed by Airflow) that cleans up older pods (for example, deleting all pods older than X days).</p>
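<p>For example, a possible cleanup command for pods of finished tasks that were kept by the settings above (the <code>airflow</code> namespace is an assumption — use whatever namespace your workers run in):</p> <pre><code># remove pods of tasks that completed successfully
kubectl delete pods --field-selector=status.phase==Succeeded -n airflow

# remove pods of tasks that failed, once you are done inspecting their logs
kubectl delete pods --field-selector=status.phase==Failed -n airflow
</code></pre>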
Elad Kalif
<p>I'm using fluentd in my kubernetes cluster to collect logs from the pods and send them to the elasticseach. Once a day or two the fluetnd gets the error:</p> <pre><code>[warn]: #0 emit transaction failed: error_class=Fluent::Plugin::Buffer::BufferOverflowError error=“buffer space has too many data” location=“/fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.7.4/lib/fluent/plugin/buffer.rb:265:in `write’” </code></pre> <p>And the fluentd stops sending logs, until I reset the fluentd pod.</p> <p>How can I avoid getting this error?</p> <p>Maybe I need to change something in my configuration?</p> <pre><code>&lt;match filter.Logs.**.System**&gt; @type elasticsearch host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}" port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}" scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME']}" user "#{ENV['FLUENT_ELASTICSEARCH_USER']}" password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}" logstash_format true logstash_prefix system type_name systemlog time_key_format %Y-%m-%dT%H:%M:%S.%NZ time_key time log_es_400_reason true &lt;buffer&gt; flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}" flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}" chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '8M'}" queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}" retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}" retry_forever true &lt;/buffer&gt; &lt;/match&gt; </code></pre>
Or Nahum
<p>The default buffer type is memory; see: <a href="https://github.com/uken/fluent-plugin-elasticsearch/blob/master/lib/fluent/plugin/out_elasticsearch.rb#L63" rel="noreferrer">https://github.com/uken/fluent-plugin-elasticsearch/blob/master/lib/fluent/plugin/out_elasticsearch.rb#L63</a></p> <p>There are two disadvantages to this type of buffer:</p> <ul> <li>if the pod or container is restarted, the logs still in the buffer are lost;</li> <li>if all the RAM allocated to fluentd is consumed, logs will no longer be sent.</li> </ul> <p>Try using a file-based buffer with the configuration below:</p> <pre><code>&lt;buffer&gt; @type file path /fluentd/log/elastic-buffer flush_thread_count 8 flush_interval 1s chunk_limit_size 32M queue_limit_length 4 flush_mode interval retry_max_interval 30 retry_forever true &lt;/buffer&gt; </code></pre>
Al-waleed Shihadeh
<p>The app consists of 3 components: frontend, backend and Redis. The frontend communicates only with the backend, the Redis service communicates only with the backend service. All pods run correctly, at least logs don't show anything disturbing.</p> <p>The backend is built with NodeJS/ExpressJS, the frontend: with React.</p> <p>Services are configured in the following way:</p> <p><code>frontend:</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: plg-frontend labels: name: plg-frontend-service app: playground-app spec: type: NodePort ports: - port: 80 targetPort: 80 nodePort: 30010 protocol: TCP selector: name: plg-frontend-pod app: playground-app </code></pre> <p><code>backend:</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: plg-backend labels: name: plg-backend-service app: playground-app spec: ports: - port: 4000 targetPort: 5011 selector: name: plg-backend-pod app: playground-app </code></pre> <p>All services run correctly:</p> <pre><code>kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE plg-backend ClusterIP 10.233.23.12 &lt;none&gt; 4000/TCP 2d3h plg-frontend NodePort 10.233.63.20 &lt;none&gt; 80:30010/TCP 2d3h redis ClusterIP 10.233.59.37 &lt;none&gt; 6379/TCP 2d3h </code></pre> <p>At the moment, the frontend running on host's IP, let's say <a href="http://10.11.12.13:30010" rel="nofollow noreferrer">http://10.11.12.13:30010</a>, tries to call the backend's internal endpoint, so 10.233.23.12:5011 (the backend's <code>targetPort</code>). Connection times out. Should I expose the backend service to become <code>NodePort</code> to be accessible?</p>
AbreQueVoy
<p>There is no need to expose the backend using NodePort unless you intend to make it accessible on the host network. Your frontend should be calling 10.233.23.12:<strong>4000</strong> (the Service port), not 5011, which is the port the backing pods listen on.</p>
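<p>For example, from inside the cluster (e.g. from within a frontend pod — this assumes the call is made server-side, not from the user's browser) the backend is reachable like this:</p> <pre><code># via the ClusterIP and the Service port
curl http://10.233.23.12:4000/

# or, better, via the Service's DNS name so the ClusterIP can change freely
curl http://plg-backend:4000/
</code></pre>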
gohm'c
<p>I am a student and have to make a Bachelor thesis for my school. Is it possible to make a hybrid Kubernetes cluster, and how is this possible?</p> <p>Is there a good application I can run in this cluster to show that it works?</p> <p>I have made an AKS cluster and a on-prem cluster. Then I made a nginx loadbalancer and loadbalanced the 2, but the application isn't synced (which is logical). I tried using rancher but somehow I always got errors while trying to make a local cluster. Is it possible to have the storages synced somehow and be able to control the 2 clusters from one place or just make them one cluster? I have found you can use Azure Arc with azure AKS, is this a viable solution? Should I use a VPN instead?</p>
Florian
<p>If by hybrid k8s cluster you mean a cluster that has nodes over different cloud providers, then yes, that is entirely possible.</p> <p>You can create a simple example cluster of this by using <a href="https://docs.k3s.io/" rel="nofollow noreferrer">k3s</a> (lightweight Kubernetes) and then using the <a href="https://docs.k3s.io/installation/network-options#distributed-hybrid-or-multicloud-cluster" rel="nofollow noreferrer">--node-external-ip</a> flag. This tells your nodes to talk to each other via their public IPs.</p> <p>This sort of setup is described in <a href="https://kubernetes.io/docs/setup/best-practices/multiple-zones/" rel="nofollow noreferrer">Running in Multiple Zones</a> in the Kubernetes documentation. You will have to configure each location where you place nodes as a different zone.<br /> You can fix storage on a cluster like this by using CSI drivers for the different environments you use, like AWS, GCP, AKS, etc. When you then deploy a PVC and it creates a PV at AWS, for example, any pod that mounts this volume will always be scheduled in the zone the PV resides in; otherwise scheduling would be impossible.</p> <p>I personally am not running this setup in production, but I am using a technique that also suits this multiple-zones idea with regards to networking. To save money on my personal cluster, I am telling my Nginx ingress controller to not make a LoadBalancer resource and to run the controllers as a DaemonSet. The Nginx controller pods have a HostPort open on the node they run on (since it's a DaemonSet there won't be more than one of those pods per node), and this HostPort opens ports 80 and 443 on the host. When you then add more nodes, every one of the nodes with an ingress controller pod on it will become an ingress entrypoint. Just set up your DNS records to include all of those nodes and you'll have them load balanced.</p>
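<p>As a rough sketch of the k3s approach mentioned above (the IPs and the token are placeholders; check the k3s docs for the exact flags and install options):</p> <pre><code># on the first (server) node, advertise its public IP
curl -sfL https://get.k3s.io | sh -s - server --node-external-ip=&lt;server-public-ip&gt;

# on a node hosted at another provider, join over the public internet
curl -sfL https://get.k3s.io | K3S_URL=https://&lt;server-public-ip&gt;:6443 K3S_TOKEN=&lt;token&gt; \
  sh -s - agent --node-external-ip=&lt;agent-public-ip&gt;
</code></pre>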
DutchEllie
<p>I'm trying to accomplish the following: Create a new service to access the web application using the service-definition-1.yaml file</p> <pre class="lang-yaml prettyprint-override"><code> Name: webapp-service Type: NodePort targetPort: 8080 port: 8080 nodePort: 30080 selector: simple-webapp </code></pre> <p>I ran the command</p> <pre class="lang-sh prettyprint-override"><code>kubectl create service nodeport webapp-service --tcp=8080:8080 --node-port=30080 </code></pre> <p>and got everything I wanted. However, I have to manually create &amp; edit the yaml file to add the selector: <code>simple-webapp</code>.</p> <p>I was curious if I could specify the selectors for a service through the command line?</p>
Parker Shamblin
<p>Try:</p> <pre><code>kubectl create service nodeport webapp-service --tcp 8080:8080 --node-port 30080 \ --dry-run=client -o yaml | kubectl set selector --local -f - app=simple-webapp -o yaml </code></pre>
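<p>If you want the service to actually be created in one shot, rather than just printing the manifest with the selector set, you could pipe the result into <code>kubectl apply</code>, for example:</p> <pre><code>kubectl create service nodeport webapp-service --tcp 8080:8080 --node-port 30080 \
  --dry-run=client -o yaml | kubectl set selector --local -f - app=simple-webapp -o yaml \
  | kubectl apply -f -
</code></pre>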
gohm'c
<p>I want to configure native Kubernetes cluster using Terraform script. I tried this Terraform script:</p> <pre><code>terraform { required_providers { kubernetes = { source = &quot;hashicorp/kubernetes&quot; version = &quot;2.13.1&quot; } kubectl = { source = &quot;gavinbunney/kubectl&quot; version = &quot;1.14.0&quot; } helm = { source = &quot;hashicorp/helm&quot; version = &quot;2.6.0&quot; } } } provider &quot;kubectl&quot; { # run kubectl cluster-info to get expoint and port host = &quot;https://192.168.1.139:6443/&quot; token = &quot;eyJhbGciOiJSUzI1NiIsImt.....&quot; insecure = &quot;true&quot; } provider &quot;kubernetes&quot; { # run kubectl cluster-info to get expoint and port host = &quot;https://192.168.1.139:6443/&quot; token = &quot;eyJhbGciOiJSUzI1NiIsImt.....&quot; insecure = &quot;true&quot; } resource &quot;kubernetes_namespace&quot; &quot;example&quot; { metadata { annotations = { name = &quot;example-annotation&quot; } labels = { mylabel = &quot;label-value&quot; } name = &quot;terraform-example-namespace&quot; } } </code></pre> <p>ref: <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs</a> <a href="https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs" rel="nofollow noreferrer">https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs</a></p> <p>I tried to create a user from this tutorial: <a href="https://killercoda.com/kimwuestkamp/scenario/k8s1.24-serviceaccount-secret-changes" rel="nofollow noreferrer">https://killercoda.com/kimwuestkamp/scenario/k8s1.24-serviceaccount-secret-changes</a></p> <pre><code>kubectl create sa cicd kubectl get sa,secret cat &lt;&lt;EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: cicd spec: serviceAccount: cicd containers: - image: nginx name: cicd EOF kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo kubectl create token cicd kubectl create token cicd --duration=999999h cat &lt;&lt;EOF | kubectl apply -f - apiVersion: v1 kind: Secret type: kubernetes.io/service-account-token metadata: name: cicd annotations: kubernetes.io/service-account.name: &quot;cicd&quot; EOF kubectl get sa,secret kubectl describe secret cicd kubectl describe sa cicd kubectl get sa cicd -oyaml kubectl get sa,secret </code></pre> <p>When I run the Terraform script I get error:</p> <pre><code>kubernetes_namespace.example: Creating... ╷ │ Error: namespaces is forbidden: User &quot;system:serviceaccount:default:cicd&quot; cannot create resource &quot;namespaces&quot; in API group &quot;&quot; at the cluster scope │ │ with kubernetes_namespace.example, │ on main.tf line 36, in resource &quot;kubernetes_namespace&quot; &quot;example&quot;: │ 36: resource &quot;kubernetes_namespace&quot; &quot;example&quot; { </code></pre> <p>Can you advise what user configuration I'm missing?</p> <p>Can you advise what is the proper way to implement this script and provision HELM chart into native Kubernetes.</p>
Peter Penzov
<blockquote> <p>Error: namespaces is forbidden: User &quot;system:serviceaccount:default:cicd&quot; cannot create resource &quot;namespaces&quot; in API group &quot;&quot; at the cluster scope</p> </blockquote> <p>The service account <code>cicd</code> in namespace <code>default</code> lacks the required permissions. You can first assign <code>cluster-admin</code> permissions to ensure your pipeline is functioning, then trim the permissions gradually according to your use case. Apply the following spec before your pipeline starts:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: &lt;of your own&gt; roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: cicd namespace: default </code></pre>
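<p>Alternatively, the same binding can be created with a one-liner (the binding name is arbitrary):</p> <pre><code>kubectl create clusterrolebinding cicd-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:cicd
</code></pre>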
gohm'c
<p>I am trying to refresh my K8s knowledge and am following <a href="https://github.com/knrt10/kubernetes-basicLearning" rel="nofollow noreferrer">this tutorial</a>, but am running in some problems. My current cluster (<code>minikube</code>) contains one pod called <code>kubia</code>. This pod is alive and well and contains a simple Webserver.</p> <p>I want to expose that server via a <code>kubectl expose pod kubia --type=LoadBalancer --name kubia-http</code>.</p> <p><strong>Problem:</strong> According to my K8s dashboard, <code>kubia-http</code> gets stuck on startup.</p> <p><strong>Debugging:</strong></p> <p><code>kubectl describe endpoints kubia-http</code> gives me</p> <pre><code>Name: kubia-http Namespace: default Labels: run=kubia Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2020-11-20T15:41:29Z Subsets: Addresses: 172.17.0.5 NotReadyAddresses: &lt;none&gt; Ports: Name Port Protocol ---- ---- -------- &lt;unset&gt; 8080 TCP Events: &lt;none&gt; </code></pre> <p>When debugging I tried to answer the following questions:</p> <p>1.) Is my service missing an endpoint?</p> <p><code>kubectl get pods --selector=run=kubia</code> gives me one <code>kubia</code> pod. So, I am not missing an endpoint.</p> <p>2.) Does my service try to access the wrong port when communicating with the pod?</p> <p>From my pod yaml:</p> <pre><code> containers: - name: kubia ports: - containerPort: 8080 protocol: TCP </code></pre> <p>From my service yaml:</p> <pre><code> ports: - protocol: TCP port: 8080 targetPort: 8080 nodePort: 32689 </code></pre> <p>The service tries to access the correct port.</p> <p><strong>What is a good approach to debug this problem?</strong></p>
User12547645
<p>What does the output of the commands below look like?</p> <ol> <li><code>kubectl get services kubia-http</code></li> <li><code>kubectl describe services kubia-http</code></li> </ol> <p>Does everything look normal there?</p> <p>I think you are facing an issue similar to the one mentioned in this <a href="https://stackoverflow.com/questions/44110876/kubernetes-service-external-ip-pending">question</a>. So if <code>kubectl get services kubia-http</code> looks good, apart from the known and expected behavior of the <code>external ip</code> staying pending on minikube, you should be able to access the service using the NodePort or ClusterIP.</p>
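<p>On minikube, two quick ways to reach the service while the external IP stays pending (this assumes the service is named <code>kubia-http</code> and exposes port 8080, as in your spec):</p> <pre><code># let minikube print a reachable URL for the NodePort
minikube service kubia-http --url

# or port-forward straight to the service and test locally
kubectl port-forward service/kubia-http 8080:8080
curl http://localhost:8080
</code></pre>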
Syam Sankar
<p>I am working on a requirement wherein we want to update a specific kernel parameter &quot;net.ipv4.tcp_retries2&quot; to &quot;5&quot; in the Kubernetes POD.</p> <p><strong>We are using AKS cluster v1.21.7</strong></p> <p>I tried using securityContext to set the above sysctl parameters but it failed</p> <pre><code> template: metadata: labels: app.kubernetes.io/name: weather-forecast-api app.kubernetes.io/instance: RELEASE-NAME spec: serviceAccountName: RELEASE-NAME-weather-forecast-api securityContext: sysctls: - name: net.ipv4.tcp_retries2 value: &quot;5&quot; </code></pre> <p>When I applied the above changes in the AKS, the pod failed to run and gave the error</p> <blockquote> <p>forbidden sysctl: &quot;net.ipv4.tcp_retries2&quot; not whitelisted</p> </blockquote> <p>I know we can modify kernel-level settings at the Kubelet level on a bare-bone Kubernetes cluster but in my case, it is a managed cluster from Azure.</p>
Pradeep
<p>Use an init container to set it (the value is 5 here, matching your requirement):</p> <pre><code>... template: metadata: labels: app.kubernetes.io/name: weather-forecast-api app.kubernetes.io/instance: RELEASE-NAME spec: serviceAccountName: RELEASE-NAME-weather-forecast-api initContainers: - name: sysctl image: busybox securityContext: privileged: true command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sysctl -w net.ipv4.tcp_retries2=5&quot;] ... </code></pre>
gohm'c
<p>I run the ingress controller and put one annotation for round_robin algorithm to run. but seems there is no event run there. is it ok if no event in my ingress description? what event indicates there?</p> <pre><code># kubectl describe ingress -n ingress Name: nginx-ingress Namespace: ingress Address: 192.168.10.10,192.168.10.45 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- website.com / website1:80 (10.42.0.139:80,10.42.1.223:80,10.42.2.98:80 + 1 more...) /website2 website2:80 (10.42.0.140:80,10.42.1.232:80,10.42.2.74:80 + 1 more...) /website3 website3:80 (10.42.0.141:80,10.42.1.215:80,10.42.2.58:80 + 1 more...) Annotations: nginx.ingress.kubernetes.io/load-balance: round_robin Events: &lt;none&gt; </code></pre> <p>I deploy my ingress with this bro, and I have my ingress class. update 1.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress namespace: ingress annotations: nginx.ingress.kubernetes.io/load-balance: ewma spec: ingressClassName: nginx rules: - host: service.com http: paths: - path: / pathType: Prefix backend: service: name: service1 port: number: 80 - path: /service2 pathType: Prefix backend: service: name: service2 port: number: 80 - path: /service3 pathType: Prefix backend: service: name: service3 port: number: 80 </code></pre> <p>with another annotation i get the event in my ingress</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 25m (x4 over 113m) nginx-ingress-controller Scheduled for sync Normal Sync 22m (x39 over 7d2h) nginx-ingress-controller Scheduled for sync Normal Sync 22m (x41 over 7d2h) nginx-ingress-controller Scheduled for sync Normal Sync 22m (x22 over 5d10h) nginx-ingress-controller Scheduled for sync Normal Sync 8m42s (x2 over 9m21s) nginx-ingress-controller Scheduled for sync </code></pre> <pre><code># kubectl describe ingress -n ingress Name: nginx-ingress Namespace: ingress Address: 192.168.10.10,192.168.10.45 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- website1.com / nginx-deployment:80 (10.42.0.222:80,10.42.1.32:80,10.42.2.155:80 + 1 more...) /website2 nginx-video:80 (10.42.0.220:80,10.42.1.30:80,10.42.2.153:80 + 1 more...) /website3 nginx-document:80 (10.42.0.221:80,10.42.1.31:80,10.42.2.154:80 + 1 more...) Annotations: nginx.ingress.kubernetes.io/affinity: cookie nginx.ingress.kubernetes.io/affinity-mode: persistent nginx.ingress.kubernetes.io/session-cookie-expires: 172800 nginx.ingress.kubernetes.io/session-cookie-max-age: 172800 nginx.ingress.kubernetes.io/session-cookie-name: route nginx.ingress.kubernetes.io/upstream-hash-by: ewma Events: &lt;none&gt; </code></pre>
newcomers
<p>This means your cluster has no ingress controller installed. When there's an ingress controller (such as <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">ingress-nginx</a>) installed in your cluster, a series of events will be triggered to process your ingress request. These events will show in your describe command.</p> <p>If you do have an ingress controller but it is not registered as the default ingress class for your cluster, you can add the annotation <code>kubernetes.io/ingress.class: &lt;name of your IngressClass, example &quot;nginx&quot;&gt;</code> to your ingress spec or:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress ... spec: ingressClassName: &lt;name of your IngressClass, example &quot;nginx&quot;&gt; ... </code></pre>
gohm'c
<p>Hi, I have a shell script named test.sh. It takes two arguments, --a and --b. I want to execute it using kubectl exec.</p> <p>I am currently following kubectl exec pod-name -- path-to-shell-file [&quot;Arguments&quot;]</p> <p>If I try it without the arguments, the script works fine.</p> <p>But with the arguments it is not working.</p> <p>Can you help me out?</p>
Sabarinathan
<p>I think <code>kubectl exec pod-name -- shell-bin shell-script-path --a --b</code> should work. For example <code>kubectl exec pod-name -- /bin/bash /path_to/test.sh --a --b</code>. What is the error you are getting?</p> <p><strong>EDIT</strong></p> <p>Adding a working example</p> <pre><code> ~ $ kubectl exec nginx-deployment-7c8f47687f-b5qmw -- cat /test.sh echo The arguments for test.sh are $@ ~ $ kubectl exec nginx-deployment-7c8f47687f-b5qmw -- /bin/bash /test.sh --a --b The arguments for test.sh are --a --b </code></pre>
Syam Sankar
<p>I am trying to create alerts in Prometheus on Kubernetes and sending them to a Slack channel. For this i am using the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus" rel="nofollow noreferrer">prometheus-community</a> helm-charts (which already includes the alertmanager). As i want to use my own alerts I have also created an <em>values.yml</em> (shown below) strongly inspired from <a href="https://gist.github.com/l13t/d432b63641b6972b1f58d7c037eec88f" rel="nofollow noreferrer">here</a>. If I port forward Prometheus I can see my Alert there going from inactive, to pending to firing, but no message is sent to slack. I am quite confident that my alertmanager configuration is fine (as I have tested it with some prebuild alerts of another chart and they were sent to slack). So my best guess is that I add the alert in the wrong way (in the serverFiles part), but I can not figure out how to do it correctly. Also, the alertmanager logs look pretty normal to me. Does anyone have an idea where my problem comes from?</p> <pre><code>--- serverFiles: alerting_rules.yml: groups: - name: example rules: - alert: HighRequestLatency expr: sum(rate(container_network_receive_bytes_total{namespace=&quot;kube-logging&quot;}[5m]))&gt;20000 for: 1m labels: severity: page annotations: summary: High request latency alertmanager: persistentVolume: storageClass: default-hdd-retain ## Deploy alertmanager ## enabled: true ## Service account for Alertmanager to use. ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ## serviceAccount: create: true name: &quot;&quot; ## Configure pod disruption budgets for Alertmanager ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget ## This configuration is immutable once created and will require the PDB to be deleted to be changed ## https://github.com/kubernetes/kubernetes/issues/45398 ## podDisruptionBudget: enabled: false minAvailable: 1 maxUnavailable: &quot;&quot; ## Alertmanager configuration directives ## ref: https://prometheus.io/docs/alerting/configuration/#configuration-file ## https://prometheus.io/webtools/alerting/routing-tree-editor/ ## config: global: resolve_timeout: 5m slack_api_url: &quot;I changed this url for the stack overflow question&quot; route: group_by: ['job'] group_wait: 30s group_interval: 5m repeat_interval: 12h #receiver: 'slack' routes: - match: alertname: DeadMansSwitch receiver: 'null' - match: receiver: 'slack' continue: true receivers: - name: 'null' - name: 'slack' slack_configs: - channel: 'alerts' send_resolved: false title: '[{{ .Status | toUpper }}{{ if eq .Status &quot;firing&quot; }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification' text: &gt;- {{ range .Alerts }} *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}` *Description:* {{ .Annotations.description }} *Graph:* &lt;{{ .GeneratorURL }}|:chart_with_upwards_trend:&gt; *Runbook:* &lt;{{ .Annotations.runbook }}|:spiral_note_pad:&gt; *Details:* {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}` {{ end }} {{ end }} </code></pre>
Manuel
<p>I have finally solved the problem. Apparently the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> and the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus" rel="nofollow noreferrer">prometheus</a> Helm charts are structured a bit differently: instead of alertmanager.config, I had to insert the configuration (everything starting from global) under alertmanagerFiles.alertmanager.yml.</p>
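<p>For reference, a minimal sketch of how that nesting looks in the values file of the prometheus chart (the receiver details are trimmed to the essentials — adapt them to the full config from the question):</p> <pre><code>alertmanagerFiles:
  alertmanager.yml:
    global:
      resolve_timeout: 5m
      slack_api_url: &quot;&lt;your-slack-webhook-url&gt;&quot;
    route:
      group_by: ['job']
      receiver: 'slack'
    receivers:
      - name: 'slack'
        slack_configs:
          - channel: 'alerts'
            send_resolved: false
</code></pre>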
Manuel
<p>**Unable to connect to the server: getting credentials: decoding stdout: no kind &quot;ExecCredential&quot; is registered for version &quot;client.authentication.k8s.io/v1alpha1&quot; in scheme &quot;pkg/client/auth/exec/exec.go:62&quot; **</p> <pre><code>2022-09-16 16:35:00 [ℹ] eksctl version 0.111.0 2022-09-16 16:35:00 [ℹ] using region ap-south-1 2022-09-16 16:35:00 [ℹ] skipping ap-south-1c from selection because it doesn't support the following instance type(s): t2.micro 2022-09-16 16:35:00 [ℹ] setting availability zones to [ap-south-1a ap-south-1b] 2022-09-16 16:35:00 [ℹ] subnets for ap-south-1a - public:192.168.0.0/19 private:192.168.64.0/19 2022-09-16 16:35:00 [ℹ] subnets for ap-south-1b - public:192.168.32.0/19 private:192.168.96.0/19 2022-09-16 16:35:00 [ℹ] nodegroup &quot;ng-1&quot; will use &quot;&quot; [AmazonLinux2/1.23] 2022-09-16 16:35:00 [ℹ] using Kubernetes version 1.23 2022-09-16 16:35:00 [ℹ] creating EKS cluster &quot;basic-cluster&quot; in &quot;ap-south-1&quot; region with managed nodes 2022-09-16 16:35:00 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup 2022-09-16 16:35:00 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-south-1 --cluster=basic-cluster' 2022-09-16 16:35:00 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster &quot;basic-cluster&quot; in &quot;ap-south-1&quot; 2022-09-16 16:35:00 [ℹ] CloudWatch logging will not be enabled for cluster &quot;basic-cluster&quot; in &quot;ap-south-1&quot; 2022-09-16 16:35:00 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-south-1 --cluster=basic-cluster' 2022-09-16 16:35:00 [ℹ] 2 sequential tasks: { create cluster control plane &quot;basic-cluster&quot;, 2 sequential sub-tasks: { wait for control plane to become ready, create managed nodegroup &quot;ng-1&quot;, } } 2022-09-16 16:35:00 [ℹ] building cluster stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:35:00 [ℹ] deploying stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:35:30 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:36:01 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:37:01 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:38:01 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:39:01 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:40:01 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:41:02 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:42:02 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:43:02 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:44:02 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:45:02 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:46:03 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-cluster&quot; 2022-09-16 16:48:05 [ℹ] building managed nodegroup stack &quot;eksctl-basic-cluster-nodegroup-ng-1&quot; 2022-09-16 16:48:05 [ℹ] deploying stack 
&quot;eksctl-basic-cluster-nodegroup-ng-1&quot; 2022-09-16 16:48:05 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-nodegroup-ng-1&quot; 2022-09-16 16:48:36 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-nodegroup-ng-1&quot; 2022-09-16 16:49:22 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-nodegroup-ng-1&quot; 2022-09-16 16:49:53 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-nodegroup-ng-1&quot; 2022-09-16 16:51:15 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-nodegroup-ng-1&quot; 2022-09-16 16:52:09 [ℹ] waiting for CloudFormation stack &quot;eksctl-basic-cluster-nodegroup-ng-1&quot; 2022-09-16 16:52:09 [ℹ] waiting for the control plane availability... 2022-09-16 16:52:09 [✔] saved kubeconfig as &quot;/home/santhosh_puvaneswaran/.kube/config&quot; 2022-09-16 16:52:09 [ℹ] no tasks 2022-09-16 16:52:09 [✔] all EKS cluster resources for &quot;basic-cluster&quot; have been created 2022-09-16 16:52:09 [ℹ] nodegroup &quot;ng-1&quot; has 3 node(s) 2022-09-16 16:52:09 [ℹ] node &quot;ip-192-168-15-31.ap-south-1.compute.internal&quot; is ready 2022-09-16 16:52:09 [ℹ] node &quot;ip-192-168-35-216.ap-south-1.compute.internal&quot; is ready 2022-09-16 16:52:09 [ℹ] node &quot;ip-192-168-36-191.ap-south-1.compute.internal&quot; is ready 2022-09-16 16:52:09 [ℹ] waiting for at least 3 node(s) to become ready in &quot;ng-1&quot; 2022-09-16 16:52:09 [ℹ] nodegroup &quot;ng-1&quot; has 3 node(s) 2022-09-16 16:52:09 [ℹ] node &quot;ip-192-168-15-31.ap-south-1.compute.internal&quot; is ready 2022-09-16 16:52:09 [ℹ] node &quot;ip-192-168-35-216.ap-south-1.compute.internal&quot; is ready 2022-09-16 16:52:09 [ℹ] node &quot;ip-192-168-36-191.ap-south-1.compute.internal&quot; is ready *2022-09-16 16:52:10 [✖] unable to use kubectl with the EKS cluster (check 'kubectl version'): WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version. Unable to connect to the server: getting credentials: decoding stdout: no kind &quot;ExecCredential&quot; is registered for version &quot;client.authentication.k8s.io/v1alpha1&quot; in scheme &quot;pkg/client/auth/exec/exec.go:62&quot;* 2022-09-16 16:52:10 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries 2022-09-16 16:52:10 [✔] EKS cluster &quot;basic-cluster&quot; in &quot;ap-south-1&quot; region is ready santhosh_puvaneswaran@it002072: </code></pre> <p>I don't why I am having this error again and again, <a href="https://i.stack.imgur.com/2ttCF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2ttCF.png" alt="enter image description here" /></a></p> <p>I can create a clusters and delete, But can't able to work on it..!</p>
Santhosh Puvaneswaran
<p>You need to update your AWS CLI to &gt;2.7.25 or the latest (recommended), ensure your CLI is pointing to the right region, then try <code>eksctl utils write-kubeconfig --cluster=&lt;name&gt;</code>. Open the kubeconfig file and check <code>client.authentication.k8s.io/v1alpha1</code> has changed to <code>client.authentication.k8s.io/v1beta1</code>.</p>
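<p>A quick way to verify, using the cluster name and region from the question:</p> <pre><code>aws --version                                      # should report 2.7.25 or newer
eksctl utils write-kubeconfig --cluster=basic-cluster --region=ap-south-1
grep client.authentication.k8s.io ~/.kube/config   # should now show v1beta1
kubectl get nodes
</code></pre>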
gohm'c
<p>I need help understanding my Google Filestore issue.</p> <p>I created a Filestore instance with BASIC_HDD and 1TB storage. In GKE, I provisioned a PersistentVolume as following:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: my-fileshare spec: capacity: storage: 1T accessModes: - ReadWriteMany nfs: path: /fileshare server: xxx.xxx.xxx.xxx </code></pre> <p>My question is, can I create multiple PersistentVolumeClaims from this 1TB? Like multiple PVCs using i.e. 100GB each?</p>
torblerone
<p>I've tested it myself now.</p> <p>PVs and PVCs have a <strong>1:1 relationship</strong>. Every PersistentVolume can carry <em>exactly</em> one PersistentVolumeClaim.</p> <p><strong>But</strong> you can create multiple PersistentVolumes! For example, if you want to split your <code>1TB</code> (the minimum for a HDD instance) into 5 equal parts, you can just create 5 PersistentVolumes with different names that use the same fileshare. Just take the manifest from my question, change the name and scale the size down to <code>200G</code>. Tada, you have split up your Filestore instance into 5 PersistentVolumes and can now use it for different purposes.</p>
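<p>For example, one of those smaller PersistentVolumes could look like this (the name and size are adapted from the manifest in the question; all five point at the same Filestore share):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-fileshare-part-1
spec:
  capacity:
    storage: 200G
  accessModes:
    - ReadWriteMany
  nfs:
    path: /fileshare
    server: xxx.xxx.xxx.xxx
</code></pre>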
torblerone
<p>I updated the Kubernetes version of the control plane of my EKS cluster to 1.20, but want to update to 1.21. However, I get</p> <pre><code>Kubelet version of Fargate pods must be updated to match cluster version 1.20 before updating cluster version; Please recycle all offending pod replicas </code></pre> <p>Of course I get a similar errro with <code>eksctl</code></p> <pre><code>$ eksctl upgrade cluster --name sandbox --approve 2022-08-30 14:46:34 [ℹ] will upgrade cluster &quot;sandbox&quot; control plane from current version &quot;1.20&quot; to &quot;1.21&quot; Error: operation error EKS: UpdateClusterVersion, https response error StatusCode: 400, RequestID: 7845750b-e906-4814-a38c-466e6dab8864, InvalidRequestException: Kubelet version of Fargate pods must be updated to match cluster version 1.20 before updating cluster version; Please recycle all offending pod replicas </code></pre> <p>How do I update the kubelet version on Fargate pods?</p>
Chris F
<blockquote> <p>Please recycle all offending pod replicas</p> </blockquote> <p>Do a <code>kubectl rollout restart</code> on the workload controllers (e.g. Deployment, StatefulSet) that manage the pods currently running on Fargate. This will replace your current Fargate nodes with ones whose kubelet version matches the control plane. For standalone pods, just delete and re-apply them.</p>
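<p>For example (the namespace is a placeholder; on a Fargate-only cluster coredns is the usual workload to recycle):</p> <pre><code># restart a specific workload so its pods are recreated on up-to-date Fargate nodes
kubectl rollout restart deployment coredns -n kube-system

# or restart every deployment in a given namespace
kubectl rollout restart deployment -n &lt;namespace&gt;
</code></pre>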
gohm'c
<p>I want to create something similar with this:</p> <p><a href="https://i.stack.imgur.com/Gs92A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gs92A.png" alt="enter image description here" /></a></p> <p>Basically I have a microservice and I want to scale the instances of it, but to use the same MongoDB container. These containers should be connected to a load balancer. How can I achieve this using Kubernetes?</p>
George Valentin
<p>You can use the k8s <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">HorizontalPodAutoscaler</a> to scale out your microservice as demand goes up. When all of your nodes are filled up with microservice replicas, you use the <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">cluster autoscaler</a> to scale out your nodes so more replicas can come online.</p>
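<p>A minimal sketch of the pod-autoscaling part (the deployment name and thresholds are assumptions, and metrics-server must be available for CPU-based scaling):</p> <pre><code># scale the microservice between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment my-microservice --cpu-percent=70 --min=2 --max=10
</code></pre>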
gohm'c
<p>I have an EKS cluster v1.23 with Fargate nodes. Cluster and Nodes are in v1.23.x</p> <pre><code>$ kubectl version --short Server Version: v1.23.14-eks-ffeb93d </code></pre> <p>Fargate nodes are also in v1.23.14</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION fargate-ip-x-x-x-x.region.compute.internal Ready &lt;none&gt; 7m30s v1.23.14-eks-a1bebd3 fargate-ip-x-x-x-xx.region.compute.internal Ready &lt;none&gt; 7m11s v1.23.14-eks-a1bebd3 </code></pre> <p>When I tried to upgrade cluster to 1.24 from AWS console, it gives this error.</p> <pre><code>Kubelet version of Fargate pods must be updated to match cluster version 1.23 before updating cluster version; Please recycle all offending pod replicas </code></pre> <p>What are the other things I have to check?</p>
Ruwan Vimukthi Mettananda
<blockquote> <p>Fargate nodes are also in v1.23.14</p> </blockquote> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION fargate-ip-x-x-x-x.region.compute.internal Ready &lt;none&gt; 7m30s v1.23.14-eks-a1bebd3 fargate-ip-x-x-x-xx.region.compute.internal Ready &lt;none&gt; 7m11s v1.23.14-eks-a1bebd3 </code></pre> <p>From your question you only have 2 nodes, likely you are running only the coredns. Try <code>kubectl scale deployment coredns --namespace kube-system --replicas 0</code> then upgrade. You can scale it back to 2 when the control plane upgrade is completed. Nevertheless, ensure you have selected the correct cluster on the console.</p>
gohm'c
<p>I created a kubernetes cluster on my debian 9 machine using <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">kind</a>.<br> Which apparently works because I can run <code>kubectl cluster-info</code> with valid output.</p> <p>Now I wanted to fool around with the tutorial on <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/#kubernetes-basics-modules" rel="nofollow noreferrer">Learn Kubernetes Basics</a> site.</p> <p>I have already deployed the app </p> <pre><code>kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 </code></pre> <p>and started the <code>kubectl proxy</code>. </p> <p>Output of <code>kubectl get deployments</code></p> <pre><code>NAME READY UP-TO-DATE AVAILABLE AGE kubernetes-bootcamp 1/1 1 1 17m </code></pre> <p>My problem now is: when I try to see the output of the application using <code>curl</code> I get</p> <blockquote> <p>Error trying to reach service: 'dial tcp 10.244.0.5:80: connect: connection refused'</p> </blockquote> <p>My commands</p> <pre><code>export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/ </code></pre> <p>For the sake of completeness I can run <code>curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/</code> and I get valid output.</p>
Norbert Bartko
<p>Try adding :8080 after the <strong>$POD_NAME</strong>:</p> <pre><code> curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/ </code></pre>
Qin Chenfeng
<p>Remove time and tag from fluentd output plugin stdout with json</p> <p>Fluentd's output plugin produces output like:</p> <p>2017-11-28 11:43:13.814351757 +0900 tag: {"field1":"value1","field2":"value2"}</p> <p>So timestamp and tag are before the json. How can I remove these fields - I only like to have the json output</p> <pre><code>&lt;match pattern&gt; @type stdout &lt;/match&gt; </code></pre> <p>expected output: {"field1":"value1","field2":"value2"}</p>
tommes
<p>Set the json format type, which by default doesn't include time and tag in the output:</p> <pre><code>&lt;match pattern&gt; @type stdout &lt;format&gt; @type json &lt;/format&gt; &lt;/match&gt; </code></pre>
Sergio Perez Rodriguez
<p>I am trying to set up persistent storage with the new <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus" rel="nofollow noreferrer">prometheus-community</a> helm chart. I have modified the helm <em>values</em> files as seen below. Currently when the chart is reinstalled (I use <a href="https://docs.tilt.dev/api.html" rel="nofollow noreferrer">Tiltfiles</a> for this) the PVC is deleted and therefore the data is not persisted. I assume that the problem could have something to do with the fact that there is no statefulset running for the server, but I am not sure how to fix it.</p> <p>(The solutions from <a href="https://stackoverflow.com/questions/56391693/how-to-enable-persistence-in-helm-prometheus-operator">here</a> does not solve my problem, as it is for the old chart.)</p> <pre><code>server: persistentVolume: enabled: true storageClass: default accessModes: - ReadWriteOnce size: 8Gi </code></pre>
Manuel
<p>I enabled the statefulset on the prometheus server and now it seems to work.</p> <pre><code>server: persistentVolume: enabled: true storageClass: default-hdd-retain accessModes: - ReadWriteOnce size: 40Gi statefulSet: enabled: true </code></pre>
Manuel
<p>I developed the YAML files for Kubernetes and Skaffold and the Dockerfile. My deployment with Skaffold works well on my local machine.</p> <p>Now I need to implement the same deployment in the k8s cluster in my Google Cloud project, triggered by new tags in a GitHub repository. I found that I have to use Google Cloud Build, but I don't know how to execute Skaffold from the cloudbuild.yaml file.</p>
august0490
<p>There is a skaffold image in <a href="https://github.com/GoogleCloudPlatform/cloud-builders-community" rel="noreferrer">https://github.com/GoogleCloudPlatform/cloud-builders-community</a></p> <p>To use it, follow the following steps:</p> <ul> <li>Clone the repository</li> </ul> <pre><code>git clone https://github.com/GoogleCloudPlatform/cloud-builders-community </code></pre> <ul> <li>Go to the skaffold directory</li> </ul> <pre><code>cd cloud-builders-community/skaffold </code></pre> <ul> <li>Build the image:</li> </ul> <pre><code>gcloud builds submit --config cloudbuild.yaml . </code></pre> <p>Then, in your <code>cloudbuild.yaml</code>, you can add a step based on this one:</p> <pre class="lang-yaml prettyprint-override"><code>- id: 'Skaffold run' name: 'gcr.io/$PROJECT_ID/skaffold:alpha' # https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/skaffold env: - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a' - 'CLOUDSDK_CONTAINER_CLUSTER=[YOUR_CLUSTER_NAME]' entrypoint: 'bash' args: - '-c' - | gcloud container clusters get-credentials [YOUR_CLUSTER_NAME] --region us-central1-a --project [YAOUR_PROJECT_NAME] if [ &quot;$BRANCH_NAME&quot; == &quot;master&quot; ] then skaffold run fi </code></pre>
g_lasso
<p>I am trying to use this startup probe</p> <pre><code> startupProbe: exec: command: - &gt;- test $(java -cp /the/path/to/the/app.jar app.Controller port=4990 cmd=&quot;v&quot; | grep -E 'r5|v5' | wc -l) -eq 2 &amp;&amp; echo $? || echo $? initialDelaySeconds: 5 #How many seconds to wait after the container has started timeoutSeconds: 1 #Timeout of the probe successThreshold: 1 #Threshold needed to mark the container healthy periodSeconds : 10 #Wait time between probe executions failureThreshold: 110 #Threshold needed to mark the container unhealthy. </code></pre> <p>It is a java app that can take a long time (~15 mins) to load some components. It is a legacy app, not much I can do about it.</p> <p>I am greping the response, trying to get two lines as a result, and then returning 0 or 1 as exit codes for the probe to decide.</p> <p>I get his error though</p> <pre><code>Startup probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec &quot;8ad860ffc7a87e95fb65e37dc14945c04fa205f9a524c7f7f08f9d6ef7d75&quot;: OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: &quot;test $(java -cp /the/path/to/the/app.jar app.Controller port=4990 cmd=\&quot;v\&quot; | grep -E 'r5|v5' | wc -l) -eq 2 &amp;&amp; echo $? || echo $?&quot;: stat test $(java -cp /the/path/to/the/app.jar app.Controller port=4990 cmd=&quot;v&quot; | grep -E 'r5|v5' | wc -l) -eq 2 &amp;&amp; echo $? || echo $?: no such file or directory: unknown </code></pre> <p>If I dont use the <code>&amp;&amp; echo $? || echo $?</code> part, the probe throws error that the exit code is <code>2</code> and not <code>1</code>.</p> <p>What is going on ??</p>
Kostas Demiris
<p>You need to start a shell to run your script:</p> <pre><code>startupProbe: exec: command: - sh - -c - &gt;- ... </code></pre>
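<p>Applied to the probe from the question, a sketch could look like this. Note that the exit status of <code>test</code> itself is what the probe evaluates, so the trailing <code>&amp;&amp; echo $? || echo $?</code> is not needed — in fact it would make the command always exit 0, so the probe could never fail:</p> <pre><code>startupProbe:
  exec:
    command:
      - sh
      - -c
      - &gt;-
        test $(java -cp /the/path/to/the/app.jar app.Controller port=4990 cmd=&quot;v&quot; | grep -E 'r5|v5' | wc -l) -eq 2
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 110
</code></pre>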
gohm'c
<p>i have tried the following commands to check the zookeeper health and its corresponding error i am getting</p> <ol> <li>sh -c zookeeper-ready 2181 (error: zookeeper-ready command not found)</li> <li>i have tried all echo commands (error: it is not a file)</li> <li>/apache-zookeeper-3.5.5-bin/bin/zkServer.sh start (error: cannot be able to start)</li> <li>/apache-zookeeper-3.5.5-bin/bin/zkServer.sh stop (error: zookeeper stopping ...... there is no zookeeper to stop)</li> <li>/apache-zookeeper-3.5.5-bin/bin/zkServer.sh status (error: when i am stopping the zookeeper the probe needs to fail for this command but it is not happening. it needs to be done)</li> </ol> <p>and i have used these commands in go file as</p> <pre><code> LivenessProbe: &amp;corev1.Probe{ Handler: corev1.Handler{ Exec: &amp;corev1.ExecAction{ Command: []string{&quot;sh&quot;, &quot;/apache-zookeeper-3.5.5-bin/bin/zkServer.sh&quot; , &quot;status&quot;, }, }, }, InitialDelaySeconds: 30, TimeoutSeconds: 5, }, ReadinessProbe: &amp;corev1.Probe{ Handler: corev1.Handler{ Exec: &amp;corev1.ExecAction{ Command: []string{ &quot;sh&quot;, &quot;/apache-zookeeper-3.5.5-bin/bin/zkServer.sh&quot; , &quot;status&quot;, }, }, }, InitialDelaySeconds: 30, TimeoutSeconds: 5, }, </code></pre>
mokshagna sai teja
<p>To check liveness and readiness for ZooKeeper you can use the following command:</p> <p><code>echo &quot;ruok&quot; | timeout 2 nc -w 2 localhost 2181 | grep imok</code></p> <p>But make sure to set the environment variable <code>ZOO_4LW_COMMANDS_WHITELIST=ruok</code>, otherwise the check will fail.</p>
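<p>As a sketch, keeping the same struct layout as your snippet, the probes could be switched to this four-letter-word check (this assumes <code>sh</code> and <code>nc</code> exist in the image, and that the container's Env includes <code>corev1.EnvVar{Name: &quot;ZOO_4LW_COMMANDS_WHITELIST&quot;, Value: &quot;ruok&quot;}</code>):</p> <pre><code>LivenessProbe: &amp;corev1.Probe{
    Handler: corev1.Handler{
        Exec: &amp;corev1.ExecAction{
            Command: []string{
                &quot;sh&quot;, &quot;-c&quot;,
                &quot;echo ruok | timeout 2 nc -w 2 localhost 2181 | grep imok&quot;,
            },
        },
    },
    InitialDelaySeconds: 30,
    TimeoutSeconds:      5,
},
ReadinessProbe: &amp;corev1.Probe{
    Handler: corev1.Handler{
        Exec: &amp;corev1.ExecAction{
            Command: []string{
                &quot;sh&quot;, &quot;-c&quot;,
                &quot;echo ruok | timeout 2 nc -w 2 localhost 2181 | grep imok&quot;,
            },
        },
    },
    InitialDelaySeconds: 30,
    TimeoutSeconds:      5,
},
</code></pre>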
amrit sandhu
<p>I want to create a Kubernetes cluster in AWS using the command:</p> <pre><code>eksctl create cluster \ --name claireudacitycapstoneproject \ --version 1.17 \ --region us-east-1 \ --nodegroup-name standard-workers \ --node-type t2.micro \ --nodes 2 \ --nodes-min 1 \ --nodes-max 3 \ --managed </code></pre> <p>This ends with errors that infroms that:</p> <pre><code>2021-10-22 21:25:46 [ℹ] eksctl version 0.70.0 2021-10-22 21:25:46 [ℹ] using region us-east-1 2021-10-22 21:25:48 [ℹ] setting availability zones to [us-east-1a us-east-1b] 2021-10-22 21:25:48 [ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19 2021-10-22 21:25:48 [ℹ] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19 2021-10-22 21:25:48 [ℹ] nodegroup &quot;standard-workers&quot; will use &quot;&quot; [AmazonLinux2/1.17] 2021-10-22 21:25:48 [ℹ] using Kubernetes version 1.17 2021-10-22 21:25:48 [ℹ] creating EKS cluster &quot;claireudacitycapstoneproject&quot; in &quot;us-east-1&quot; region with managed nodes 2021-10-22 21:25:48 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup 2021-10-22 21:25:48 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=claireudacitycapstoneproject' 2021-10-22 21:25:48 [ℹ] CloudWatch logging will not be enabled for cluster &quot;claireudacitycapstoneproject&quot; in &quot;us-east-1&quot; 2021-10-22 21:25:48 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=claireudacitycapstoneproject' 2021-10-22 21:25:48 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster &quot;claireudacitycapstoneproject&quot; in &quot;us-east-1&quot; 2021-10-22 21:25:48 [ℹ] 2 sequential tasks: { create cluster control plane &quot;claireudacitycapstoneproject&quot;, 2 sequential sub-tasks: { wait for control plane to become ready, create managed nodegroup &quot;standard-workers&quot;, } } 2021-10-22 21:25:48 [ℹ] building cluster stack &quot;eksctl-claireudacitycapstoneproject-cluster&quot; 2021-10-22 21:25:51 [ℹ] deploying stack &quot;eksctl-claireudacitycapstoneproject-cluster&quot; 2021-10-22 21:26:21 [ℹ] waiting for CloudFormation stack &quot;eksctl-claireudacitycapstoneproject-cluster&quot; 2021-10-22 21:26:52 [ℹ] waiting for CloudFormation stack &quot;eksctl-claireudacitycapstoneproject-cluster&quot; 2021-10-22 21:26:54 [✖] unexpected status &quot;ROLLBACK_IN_PROGRESS&quot; while waiting for CloudFormation stack &quot;eksctl-claireudacitycapstoneproject-cluster&quot; 2021-10-22 21:26:54 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure 2021-10-22 21:26:54 [!] AWS::EC2::EIP/NATIP: DELETE_IN_PROGRESS 2021-10-22 21:26:54 [!] AWS::EC2::VPC/VPC: DELETE_IN_PROGRESS 2021-10-22 21:26:54 [!] 
AWS::EC2::InternetGateway/InternetGateway: DELETE_IN_PROGRESS 2021-10-22 21:26:54 [✖] AWS::EC2::VPC/VPC: CREATE_FAILED – &quot;Resource creation cancelled&quot; 2021-10-22 21:26:54 [✖] AWS::EC2::InternetGateway/InternetGateway: CREATE_FAILED – &quot;Resource creation cancelled&quot; 2021-10-22 21:26:54 [✖] AWS::EC2::EIP/NATIP: CREATE_FAILED – &quot;Resource creation cancelled&quot; 2021-10-22 21:26:54 [✖] AWS::IAM::Role/ServiceRole: CREATE_FAILED – &quot;API: iam:CreateRole User: arn:aws:iam::602502938985:user/CLI is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::602502938985:role/eksctl-claireudacitycapstoneproject-cl-ServiceRole-4CR9Z6NRNU49 with an explicit deny&quot; 2021-10-22 21:26:54 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console 2021-10-22 21:26:54 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-east-1 --name=claireudacitycapstoneproject' 2021-10-22 21:26:54 [✖] ResourceNotReady: failed waiting for successful resource state Error: failed to create cluster &quot;claireudacitycapstoneproject&quot; </code></pre> <p>Previously, I run the same command and receive the following errors:</p> <pre><code>Error: checking AWS STS access – cannot get role ARN for current session: RequestError: send request failed </code></pre> <p>What permission do I need to provide to the AWS user to execute it?</p>
Arefe
<p><code>What permission do I need to provide to the AWS user to execute it?</code></p> <p>You can check the minimum IAM requirements to run eksctl <a href="https://eksctl.io/usage/minimum-iam-policies/" rel="nofollow noreferrer">here</a>.</p> <p>Note that the failing event in your output is <code>iam:CreateRole ... with an explicit deny</code>: eksctl needs to create IAM roles for the cluster and node group, so besides attaching the documented policies, make sure no attached policy or service control policy explicitly denies <code>iam:CreateRole</code> for your <code>CLI</code> user.</p>
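<p>A quick way to confirm which principal eksctl is actually running as (and therefore which user or role the policies must be attached to) is:</p> <pre><code>aws sts get-caller-identity
</code></pre> <p>If the returned ARN is the <code>user/CLI</code> identity from the error message, that is the identity that needs the IAM permissions listed in the eksctl documentation.</p>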
gohm'c
<p>when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this? The example I am using is dagster. When installing with:</p> <p><code>helm install dagster dagster/dagster \ --namespace dagster \ --create-namespace</code></p> <p>everything starts up fine and secrets are created. When updating the image and tag and performing an upgrade with:</p> <p><code>helm upgrade -f charts/dagster-user-deployments/values.yaml dagster ./charts/dagster-user-deployments -n dagster</code></p> <p>the image is upgraded, but all secrets are deleted. Why would/ could this happen?</p> <p>After running the upgrade command, I expect secrets to still be in place, and the new image to be pulled and run.</p>
pomply
<p><code>when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this?</code></p> <p>This is currently how Helm works; here's the <a href="https://github.com/helm/helm-www/issues/1259" rel="nofollow noreferrer">issue opened</a> for discussion, and several workarounds are provided there as well.</p> <p>Also note that your <code>helm upgrade</code> points at a different chart (<code>./charts/dagster-user-deployments</code>) than the one you installed (<code>dagster/dagster</code>). During an upgrade, Helm removes resources that are no longer rendered by the chart, which would also explain the disappearing secrets.</p>
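<p>One of the workarounds discussed there is to mark the resources you want to survive upgrades (and even uninstalls) with Helm's resource policy annotation. A minimal sketch, assuming the secret is templated in your own chart (the name here is hypothetical):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-secret            # hypothetical name
  annotations:
    &quot;helm.sh/resource-policy&quot;: keep
</code></pre> <p>With this annotation, Helm leaves the secret in place instead of deleting it when it is no longer part of the rendered manifests.</p>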
gohm'c
<p>I have created a pod using below yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: pod2 labels: app: dc1 spec: containers: - name: cont1 image: nginx </code></pre> <p>Now, I am creating a <code>deployment controller</code> with the selector value as <code>app=dc1</code> using the below command: <br> <code>kubectl create deploy dc1 --image=nginx</code></p> <p>Note: When we create a deployment with the name &quot;dc1&quot;, it automatically creates selector <code>app=dc1</code> for the deployment controller.</p> <p><br><br></p> <p>I notice that the deployment controller <code>creates a new pod</code> instead of selecting the already existing pod.</p> <pre><code>NAME READY STATUS RESTARTS AGE LABELS dc1-969ff47-ljbxk 1/1 Running 0 32m app=dc1,pod-template-hash=969ff47 pod1 1/1 Running 0 33m app=dc1 </code></pre> <br> <p><strong>Question:</strong> <br> Why <code>dc1</code> is not selecting the existing <code>pod1</code> which has the same label <code>app=dc1?</code></p> <p><br><br></p>
meallhour
<p><code>Why dc1 is not selecting the existing pod1 which has the same label app=dc1?</code></p> <p>Because the Deployment does not manage pods directly: it spawns a ReplicaSet, and that ReplicaSet's selector and pod template automatically get an extra <code>pod-template-hash</code> label added to them. Your standalone pod only carries <code>app=dc1</code> and lacks the <code>pod-template-hash</code> label, so the ReplicaSet's selector does not match it; instead of adopting your pod, it creates its own.</p>
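<p>You can verify this by looking at the selector of the ReplicaSet the Deployment created:</p> <pre><code>kubectl describe rs -l app=dc1 | grep -i selector
</code></pre> <p>You should see both <code>app=dc1</code> and a <code>pod-template-hash</code> entry (the <code>969ff47</code> value from your pod listing); the hash changes with every new deployment revision.</p>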
gohm'c
<p>As I found, the best way to have zero downtime even when one datacenter is down is to run Kubernetes across at least two servers in two datacenters.</p> <p>I want to use servers in Iran, where I've heard the infrastructure performance is low.</p> <p>The question is: if I want master-master replication for MySQL, how can I re-sync a repaired server into the Kubernetes cluster after one server fails?</p>
Hamid Shariati
<p>K8s is the platform; it doesn't change how MySQL HA works. For example, if you have dedicated servers for MySQL, those servers become &quot;pods&quot; in K8s. What you need to do <strong>at the MySQL level</strong> when any server is gone, for whatever reason, is the same as what you need to do when it runs as a pod. In fact, K8s helps you by automatically starting a new pod, whereas in the former case you would need to provision a new physical server - the time difference is obvious. You would normally run a script to re-establish HA; the same applies to K8s, where you can run that recovery script as an init container before the actual MySQL server container is started.</p>
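<p>A minimal sketch of that init-container idea, assuming a StatefulSet and a hypothetical <code>resync.sh</code> script shipped in a ConfigMap (the script's contents depend entirely on your replication setup):</p> <pre><code>spec:
  initContainers:
  - name: resync
    image: mysql:8.0
    command: [&quot;bash&quot;, &quot;/scripts/resync.sh&quot;]   # hypothetical recovery/re-join script
    volumeMounts:
    - name: scripts
      mountPath: /scripts
    - name: data
      mountPath: /var/lib/mysql
  containers:
  - name: mysql
    image: mysql:8.0
    # ... the normal server container follows
</code></pre> <p>The init container runs to completion before the MySQL server container starts, which is where the re-join/catch-up logic belongs.</p>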
gohm'c
<p>When I do a <code>kubectl get pod pod_name -o yaml</code> I have a field like this.</p> <pre><code> - name: MY_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName </code></pre> <p>I wanted to know the value of this <code>MY_NODE_NAME</code>.</p> <p>Unfortunately, I cannot do a <code>env</code> or <code>exec</code> to my pod. Is there any way to get the value of this reference?</p>
Senthil Kumaran
<blockquote> <p>I cannot do a env or exec to my pod. Is there any way to get the value of this reference?</p> </blockquote> <p>Try: <code>kubectl get pod &lt;name&gt; --namespace &lt;ns&gt; -o jsonpath='{.spec.nodeName}'</code></p>
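<p>As a side note, if you only need to see which node the pod was scheduled to, the wide output also prints a NODE column:</p> <pre><code>kubectl get pod &lt;name&gt; --namespace &lt;ns&gt; -o wide
</code></pre>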
gohm'c
<p>I have created a Kubernetes cluster in the <a href="https://upcloud.com/community/tutorials/deploy-kubernetes-using-kubespray/" rel="nofollow noreferrer">cloud- using this tutorial</a> and deployed [to the cluster] a backend application called <code>chatapp</code> from the Docker private registry. Since there is no option to include service type as <code>LoadBalancer</code>, I had to restore to <code>NodePort</code> type. Here is the <code>chatapp-deployment.yml</code> file for reference:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: chatapp spec: selector: app: chatapp ports: - protocol: &quot;TCP&quot; port: 6443 targetPort: 3000 type: NodePort externalIPs: - A.B.C.D --- apiVersion: apps/v1 kind: Deployment metadata: name: chatapp labels: app: chatapp spec: replicas: 2 selector: matchLabels: app: chatapp template: metadata: labels: app: chatapp spec: imagePullSecrets: - name: regsecret containers: - name: chatapp image: sebastian/chatapp imagePullPolicy: Always command: [&quot;/bin/sh&quot;] args: [&quot;-c&quot;, &quot;while true; do echo hello; sleep 10;done&quot;] ports: - containerPort: 3000 </code></pre> <p><strong>Note:</strong> I removed the external IP for security reasons.</p> <p>I had to assign external IP manually since I couldn't set-up <code>LoadBalancer</code> as service type. Whenever I try accessing <code>http://A.B.C.D:6443</code>, I get the following:</p> <pre><code>Client sent an HTTP request to an HTTPS server. </code></pre> <p>I went through this <a href="https://stackoverflow.com/questions/61017106/client-sent-an-http-request-to-an-https-server">link</a> but couldn't fix my issue with it. The external IP I have used is from the <code>master-o</code>.</p> <p>While trying to access it with <a href="https://A.B.C.D:6443" rel="nofollow noreferrer">https://A.B.C.D:6443</a>, I get the following <code>403</code> message:</p> <pre><code>{ &quot;kind&quot;: &quot;Status&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { }, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;forbidden: User \&quot;system:anonymous\&quot; cannot get path \&quot;/\&quot;&quot;, &quot;reason&quot;: &quot;Forbidden&quot;, &quot;details&quot;: { }, &quot;code&quot;: 403 </code></pre> <p>How can I authorize access to my cluster? Any feedbacks and suggestions would be appreciated.</p>
Sebastian
<p>Your request has reached the k8s api-server, which listens on port 6443 on the master, instead of your chatapp; that is why you get the <code>403 system:anonymous</code> response. To access your chatapp, first retrieve the nodePort number: <code>kubectl describe service chatapp | grep -i nodeport</code>, then use that number to access your app at <code>http://a.b.c.d:&lt;nodePort&gt;</code>.</p>
gohm'c
<p>I am trying to use wildcard on <code>kubectl cp</code> command however its failling to recognise wildcard. </p> <pre><code>$ kubectl cp mypod:/tmp/exampleFiles.* /tmp tar: /tmp/exampleFiles.*: Cannot stat: No such file or directory tar: Exiting with failure status due to previous errors </code></pre> <p>Although my Kubernetes is up to date (v1.15.2) and according to <a href="https://github.com/kubernetes/kubernetes/issues/78854" rel="nofollow noreferrer">this git issue</a> wildcard problem is fixed, but I am confused that why its not working for me. Is my syntax wrong? what do you think the problem is? please help!</p>
Saber
<p>For everyone's benefit, I am pasting working code as suggested by <strong>heyjared</strong> user on <a href="https://github.com/kubernetes/kubernetes/issues/78854" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/78854</a></p> <pre><code>Use find with xargs as a workaround: find . | xargs -i{} kubectl cp {} namespace/pod:path </code></pre> <p>It worked for me. I replaced &quot;.&quot; with my needed source path.</p> <p><strong>Example:</strong></p> <pre><code>find &lt;source_path&gt; | xargs -i{} kubectl cp {} namespace/pod:&lt;path_inside_pod&gt; </code></pre> <p>Like:</p> <pre><code>find /var/tmp/data/ | xargs -i{} kubectl cp {} namespace/pod:/tmp/ </code></pre> <p>This will copy all contents inside local /var/tmp/data/ directory and store it under /tmp/ inside pod.</p>
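<p>Since the question copies <em>from</em> the pod <em>to</em> the local machine, the same idea in that direction can be done with <code>kubectl exec</code> and <code>tar</code> (which is what <code>kubectl cp</code> uses under the hood); this assumes a shell and <code>tar</code> are present in the container:</p> <pre><code>kubectl exec mypod -- sh -c 'tar cf - /tmp/exampleFiles.*' | tar xf - -C /tmp
</code></pre> <p>Here the wildcard is expanded by the shell inside the container, which is what a plain <code>kubectl cp</code> cannot do.</p>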
Pankaj Yadav
<p>In my case, I have to deploy a deployment first and then patch a preStop hook to the deployment in jenkins.</p> <p>I try to use</p> <pre><code>kubectl -n mobile patch deployment hero-orders-app --type &quot;json&quot; -p '[ {&quot;op&quot;:&quot;add&quot;,&quot;path&quot;:&quot;/spec/template/spec/containers/0/lifecycle/preStop/exec/command&quot;,&quot;value&quot;:[]}, {&quot;op&quot;:&quot;add&quot;,&quot;path&quot;:&quot;/spec/template/spec/containers/0/lifecycle/preStop/exec/command/-&quot;,&quot;value&quot;:&quot;/bin/sleep&quot;}, {&quot;op&quot;:&quot;add&quot;,&quot;path&quot;:&quot;/spec/template/spec/containers/0/lifecycle/preStop/exec/command/-&quot;,&quot;value&quot;:&quot;10&quot;}]' </code></pre> <p>but it returns</p> <pre><code>the request is invaild </code></pre> <p>If patch command can add non-existence path? or I need to change another solution?</p> <p>And here is hero-orders-app deployment file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hero-orders-app$SUFFIX namespace: $K8S_NAMESPACE labels: branch: $LABEL run: hero-orders-app$SUFFIX spec: selector: matchLabels: run: hero-orders-app$SUFFIX revisionHistoryLimit: 5 minReadySeconds: 10 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 25% maxSurge: 25% template: metadata: labels: run: hero-orders-app$SUFFIX namespace: $K8S_NAMESPACE branch: $LABEL role: hero-orders-app spec: dnsConfig: options: - name: ndots value: &quot;1&quot; topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: run: hero-orders-app$SUFFIX affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: run operator: In values: - hero-orders-app$SUFFIX topologyKey: kubernetes.io/hostname imagePullSecrets: - name: $K8S_IMAGE_SECRETS containers: - name: hero-orders-$CLUSTER_NAME image: $K8S_IMAGE imagePullPolicy: IfNotPresent securityContext: runAsUser: 1000 allowPrivilegeEscalation: false capabilities: drop: - CHOWN - NET_RAW - SETPCAP ports: - containerPort: 3000 protocol: TCP resources: limits: cpu: $K8S_CPU_LIMITS memory: $K8S_RAM_LIMITS requests: cpu: $K8S_CPU_REQUESTS memory: $K8S_RAM_REQUESTS readinessProbe: httpGet: path: /gw-api/v2/_manage/health port: 3000 initialDelaySeconds: 15 timeoutSeconds: 10 livenessProbe: httpGet: path: /gw-api/v2/_manage/health port: 3000 initialDelaySeconds: 20 timeoutSeconds: 10 periodSeconds: 45 </code></pre> <p>And it running on AWS with service, pdb and hpa.</p> <p>here is my kubectl version</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.2&quot;, GitCommit:&quot;092fbfbf53427de67cac1e9fa54aaa09a28371d7&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-06-16T12:59:11Z&quot;, GoVersion:&quot;go1.16.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.9&quot;, GitCommit:&quot;7a576bc3935a6b555e33346fd73ad77c925e9e4a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-07-15T20:56:38Z&quot;, GoVersion:&quot;go1.15.14&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre>
tingyu gu
<p>I will use a simpler deployment to demonstrate patching a lifecycle hook; you can use the same technique for your own deployment.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: containers: - name: busybox image: busybox command: [&quot;ash&quot;,&quot;-c&quot;,&quot;while :; do echo $(date); sleep 1; done&quot;] </code></pre> <p>The patch path must stop at <code>/spec/template/spec/containers/0/lifecycle</code> and supply the rest (<code>preStop/exec/command</code>) as the value. A JSON patch <code>add</code> cannot create the missing intermediate objects one level at a time, which is why your patch gets the response <strong>&quot;The request is invalid.&quot;</strong></p> <pre><code>kubectl patch deployment busybox --type json -p '[{&quot;op&quot;:&quot;add&quot;,&quot;path&quot;:&quot;/spec/template/spec/containers/0/lifecycle&quot;,&quot;value&quot;:{&quot;preStop&quot;: {&quot;exec&quot;: {&quot;command&quot;: [&quot;/bin/sleep&quot;,&quot;10&quot;]}}}}]' deployment.apps/busybox patched </code></pre> <p>Once patched, the deployment rolls out restarted pods. You can run <code>kubectl get deployment busybox -o yaml</code> to examine the patched object. If you patch again with the same value there will be no change.</p>
gohm'c
<p>I'm trying to deploy a docker container to my Kubernetes cluster, but I'm running into an issue with passing the required command-line arguments to the container. I need to pass two arguments called <code>--provider local</code> and <code>--basedir /tmp</code>. Here is what the docker run command looks like (I can run this without any issues on my docker host):</p> <pre><code>docker run -d -p 8080:8080 --name transfer-sh -v /tmp:/tmp dutchcoders/transfer.sh:latest --provider local --basedir /tmp </code></pre> <p>However, when I apply the deployment YAML to my cluster the container fails with this error (I'm running <code>kubectl apply -f deploy.yaml</code> to apply my changes to the cluster):</p> <blockquote> <p>Incorrect Usage. flag provided but not defined: -provider local</p> </blockquote> <p>So my YAML specifies that the flag should be <code>--provider</code>, but for some reason I haven't been able to find yet the container only sees <code>-provider</code> which is indeed not a valid option. This is the full help message:</p> <pre><code>NAME: transfer.sh - transfer.sh DESCRIPTION: Easy file sharing from the command line USAGE: transfer.sh [flags] command [arguments...] COMMANDS: version help, h Shows a list of commands or help for one command FLAGS: --listener value 127.0.0.1:8080 (default: &quot;127.0.0.1:8080&quot;) [$LISTENER] --profile-listener value 127.0.0.1:6060 [$PROFILE_LISTENER] --force-https [$FORCE_HTTPS] --tls-listener value 127.0.0.1:8443 [$TLS_LISTENER] --tls-listener-only [$TLS_LISTENER_ONLY] --tls-cert-file value [$TLS_CERT_FILE] --tls-private-key value [$TLS_PRIVATE_KEY] --temp-path value path to temp files (default: &quot;/tmp&quot;) [$TEMP_PATH] --web-path value path to static web files [$WEB_PATH] --proxy-path value path prefix when service is run behind a proxy [$PROXY_PATH] --proxy-port value port of the proxy when the service is run behind a proxy [$PROXY_PORT] --email-contact value email address to link in Contact Us (front end) [$EMAIL_CONTACT] --ga-key value key for google analytics (front end) [$GA_KEY] --uservoice-key value key for user voice (front end) [$USERVOICE_KEY] --provider value s3|gdrive|local [$PROVIDER] --s3-endpoint value [$S3_ENDPOINT] --s3-region value (default: &quot;eu-west-1&quot;) [$S3_REGION] --aws-access-key value [$AWS_ACCESS_KEY] --aws-secret-key value [$AWS_SECRET_KEY] --bucket value [$BUCKET] --s3-no-multipart Disables S3 Multipart Puts [$S3_NO_MULTIPART] --s3-path-style Forces path style URLs, required for Minio. 
[$S3_PATH_STYLE] --gdrive-client-json-filepath value [$GDRIVE_CLIENT_JSON_FILEPATH] --gdrive-local-config-path value [$GDRIVE_LOCAL_CONFIG_PATH] --gdrive-chunk-size value (default: 16) [$GDRIVE_CHUNK_SIZE] --storj-access value Access for the project [$STORJ_ACCESS] --storj-bucket value Bucket to use within the project [$STORJ_BUCKET] --rate-limit value requests per minute (default: 0) [$RATE_LIMIT] --purge-days value number of days after uploads are purged automatically (default: 0) [$PURGE_DAYS] --purge-interval value interval in hours to run the automatic purge for (default: 0) [$PURGE_INTERVAL] --max-upload-size value max limit for upload, in kilobytes (default: 0) [$MAX_UPLOAD_SIZE] --lets-encrypt-hosts value host1, host2 [$HOSTS] --log value /var/log/transfersh.log [$LOG] --basedir value path to storage [$BASEDIR] --clamav-host value clamav-host [$CLAMAV_HOST] --perform-clamav-prescan perform-clamav-prescan [$PERFORM_CLAMAV_PRESCAN] --virustotal-key value virustotal-key [$VIRUSTOTAL_KEY] --profiler enable profiling [$PROFILER] --http-auth-user value user for http basic auth [$HTTP_AUTH_USER] --http-auth-pass value pass for http basic auth [$HTTP_AUTH_PASS] --ip-whitelist value comma separated list of ips allowed to connect to the service [$IP_WHITELIST] --ip-blacklist value comma separated list of ips not allowed to connect to the service [$IP_BLACKLIST] --cors-domains value comma separated list of domains allowed for CORS requests [$CORS_DOMAINS] --random-token-length value (default: 6) [$RANDOM_TOKEN_LENGTH] --help, -h show help </code></pre> <p>Here is the Docker Hub for the image I'm trying to deploy: <a href="https://hub.docker.com/r/dutchcoders/transfer.sh" rel="nofollow noreferrer">dutchcoders/transfer.sh</a></p> <p>Here is the GitHub: <a href="https://github.com/dutchcoders/transfer.sh" rel="nofollow noreferrer">https://github.com/dutchcoders/transfer.sh</a></p> <p>My cluster's version is <code>1.23.4</code> and the full deployment YAML is here:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: transfer-sh namespace: transfer-sh labels: app: &quot;transfer-sh&quot; spec: replicas: 1 selector: matchLabels: app: transfer-sh template: metadata: labels: app: transfer-sh spec: containers: - name: transfer-sh image: dutchcoders/transfer.sh:latest args: - &quot;--provider local&quot; - &quot;--basedir /tmp&quot; ports: - containerPort: 8080 </code></pre> <p>I intentionally have not included any persistent volume claims yet. At this point I'm just testing to make sure the container will run.</p> <p>Initially, I though maybe it was some sort of escape sequence issue. After trying all manner of ways to potentially escape the two dashes nothing really changed. I also tried setting environment variables that contained those arguments, but that just resulted in the same behavior where <code>--profile</code> turned into <code>-profile</code>.</p> <p>If anyone has any thoughts I could use the help. I'm a bit stuck at the moment (although still troubleshooting). I am curious if maybe there is a different way to pass in command flags as opposed to arguments (or maybe there isn't any difference as far as k8s is concerned).</p>
WhirlingDerBing
<p>Try:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: transfer-sh namespace: transfer-sh labels: app: transfer-sh spec: replicas: 1 selector: matchLabels: app: transfer-sh template: metadata: labels: app: transfer-sh spec: containers: - name: transfer-sh image: dutchcoders/transfer.sh:latest args: # &lt;-- in this case each arg is individual - --provider - local - --basedir - /tmp ports: - containerPort: 8080 NAME READY UP-TO-DATE AVAILABLE AGE transfer-sh 1/1 1 1 91s </code></pre>
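<p>The key point is that each flag and each value must be its own list element; a single string like <code>&quot;--provider local&quot;</code> is handed to the binary as one argument, which the flag parser then rejects. The one-token-per-flag form should also work:</p> <pre><code>args:
  - --provider=local
  - --basedir=/tmp
</code></pre>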
gohm'c
<p>I have a kubernetes cluster for which a workload with Request : 1000m CPU and Limit 1200m CPU is deployed. Node template is 4-8096 and we never hit the RAM as workloads are more compute intensive.</p> <p>My issue is as shown in the picture when the Auto-Scaler scaled the workload to 2 PODs, Kubernetes didn't scheduled the additional pod in the same node even though there is plenty of resources available. (2/3.92). Instead it had to schedule it on a new Node. This is a lot of waste and cost insensitive when we scale further up.</p> <p>Is this normal behavior or what best practices can you recommend to achieve a better resource utilization?</p> <p>Thanks. Nowa.</p> <p><a href="https://i.stack.imgur.com/CcNXx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CcNXx.png" alt="Resource distribiution"></a></p> <p><strong>UPDATE:</strong></p> <p><strong>After applying the autoscaling-profile to optimize-utilization as suggested by the Erhard Czving's answer, the additional pod was scheduled onto the same node. So now as per total requests that's 3/3.92 ~ 76%.</strong> </p> <p><a href="https://i.stack.imgur.com/8FMTm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8FMTm.png" alt="enter image description here"></a></p>
Nowa Concordia
<p>Try the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#autoscaling_profiles" rel="nofollow noreferrer">optimize-utilization</a> autoscaling profile.</p> <p>It should keep the utilization much higher than the default profile, depending on the Kubernetes version used. Around 80% utilization is a good estimate.</p> <p>Apply to a cluster with gcloud commands:</p> <pre><code>gcloud beta container clusters update example-cluster --autoscaling-profile optimize-utilization </code></pre>
phteven
<p>Does anyone know if there is a specific way to configure Kubernetes to allow resolution of SRV type URLs (to internet services) from within pods?</p> <p>Recently had a <a href="https://stackoverflow.com/questions/61820151/minikube-kubernets-pod-cant-connect-to-mongodb-atlas/61853372#61853372">problem</a> preventing me from connecting to MongoDB Atlas. Found the cause to be the use of SRV type URI for connection, was fixed after using legacy URI for connection <a href="https://www.mongodb.com/blog/post/mongodb-3-6-here-to-SRV-you-with-easier-replica-set-connections?jmp=fcb&amp;utm_source=4244&amp;utm_medium=FBPAGE&amp;utm_term=4&amp;linkId=50841309" rel="nofollow noreferrer">(link to description)</a>.</p> <p>Is there some configuration required?</p>
Alexander Lock-Achilleos
<p>Some images, such as Alpine-based ones, do not resolve SRV connection strings properly because their DNS resolver libraries lack the needed support. To make it work, either use the standard (non-SRV) connection string, or build your container from a base image whose resolver libraries do support SRV lookups.</p>
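<p>For MongoDB Atlas, the difference between the two is the URI scheme; the hostnames below are placeholders for whatever your cluster actually uses:</p> <pre><code># SRV form (needs SRV/TXT DNS lookups to work inside the container)
mongodb+srv://user:pass@cluster0.example.mongodb.net/mydb

# standard (legacy) form: lists the replica set hosts explicitly
mongodb://user:pass@host0.example.mongodb.net:27017,host1.example.mongodb.net:27017,host2.example.mongodb.net:27017/mydb?ssl=true&amp;replicaSet=myReplSet&amp;authSource=admin
</code></pre> <p>The standard form is the one the linked MongoDB post describes as the legacy connection string.</p>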
Techgeek
<p>We use the method in the first code block in java, but I don't see a corresponding method in the <a href="https://www.rubydoc.info/gems/google-cloud-storage/0.24.0/Google%2FCloud%2FStorage.new" rel="nofollow noreferrer">rails documentation</a>, Only the second code block:</p> <p><code>Storage storage = StorageOptions.getDefaultInstance().getService();</code></p> <pre><code>storage = Google::Cloud::Storage.new( project: "my-todo-project", keyfile: "/path/to/keyfile.json" ) </code></pre> <p>If we use an application specific service account in the kubernetes cluster. How do we configure the Rails application to work in the local developer environment and also run with a k8s cluster?</p> <p>Also, I would prefer not to use a project_id and a keyfile to initialize, since I will have to manage multiple such JSON files during the initialization process in dev, qa, staging, production environments.</p>
Rpj
<p>Before moving your app to multiple environments, you should set up a deployment pipeline that handles how your app is configured for each environment, including the configuration of service accounts.</p> <p>Below you can find two official Google Cloud documents on how to do it, plus one example on GitLab, so you can follow whichever suits you better.</p> <p><a href="https://cloud.google.com/solutions/continuous-delivery-jenkins-kubernetes-engine" rel="nofollow noreferrer">Continuous deployment to Google Kubernetes Engine using Jenkins</a></p> <p><a href="https://cloud.google.com/solutions/continuous-delivery-spinnaker-kubernetes-engine" rel="nofollow noreferrer">Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine</a></p> <p><a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes" rel="nofollow noreferrer">Git Lab - continuous-deployment-on-kubernetes</a></p> <p>Also, regarding the parameters used to instantiate the Cloud Storage object: as you can see in the documentation you linked in your question, the <code>project</code> parameter identifies the project that owns your storage, so if you do not set it your app will not be able to find the bucket. The keyfile is what allows your service account to authenticate, so you can't make it work without some form of credentials.</p> <p>I hope this information helps you.</p>
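<p>To avoid hard-coding <code>project</code> and <code>keyfile</code> per environment, one common pattern (a sketch, not the only option) is to let the google-cloud gems pick them up from the environment; they read <code>GOOGLE_CLOUD_PROJECT</code> and <code>GOOGLE_APPLICATION_CREDENTIALS</code> when the parameters are omitted. In the cluster you can mount the service account key from a Secret and point the variable at it:</p> <pre><code>containers:
- name: rails-app                 # hypothetical container name
  env:
  - name: GOOGLE_CLOUD_PROJECT
    value: my-todo-project
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
  volumeMounts:
  - name: gcp-key
    mountPath: /var/secrets/google
    readOnly: true
volumes:
- name: gcp-key
  secret:
    secretName: gcp-sa-key        # hypothetical secret holding key.json
</code></pre> <p>Locally you would export the same two variables pointing at your developer keyfile, so the Rails code can stay identical in both environments.</p>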
Ralemos
<p>In the documentation <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="noreferrer">here</a> it is stated that deleting a pod is a voluntary disruption that <code>PodDisruptionBudget</code> should protect against.</p> <p>I have created a simple test:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: test spec: minAvailable: 1 selector: matchLabels: app: test --- apiVersion: v1 kind: Pod metadata: name: test labels: app: test spec: containers: - name: test image: myimage </code></pre> <p>Now if I run <code>apply</code> and then <code>delete pod test</code>, there is no trouble deleting this pod.</p> <p>If I now run <code>cordon node</code>, then it is stuck as it cannot evict the last pod (which is correct). But the same behavior seems to not be true for deleting the pod.</p> <p>The same goes if I create a deployment with minimum 2 replicas and just delete both at the same time - they are deleted as well (not one by one).</p> <p>Do I misunderstand something here?</p>
Ilya Chernomordik
<p>The link in your question refers to static pods managed by the kubelet; I guess you want this <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">link</a> instead.</p> <p><code>...if I run apply and then delete pod test, there is no trouble deleting this pod</code></p> <p>A PDB protects pods managed by one of the controllers: Deployment, ReplicationController, ReplicaSet or StatefulSet, and it is enforced through the eviction API (used by <code>kubectl drain</code> and node maintenance), not on direct <code>kubectl delete pod</code> calls.</p> <p><code>...if I create a deployment with minimum 2 replicas and just delete both at the same time - they are deleted as well (not one by one)</code></p> <p>PDB does not consider explicitly deleting a deployment or its pods a voluntary disruption it has to guard. From the K8s documentation:</p> <blockquote> <p>Caution: Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, deleting deployments or pods bypasses Pod Disruption Budgets.</p> </blockquote> <p>Hope this helps to clear the mist.</p>
gohm'c
<p>I'm creating multiple pods at the same time in Openshift, and I also want to check the containers inside the pods are working correctly. Some of these containers can take a while to start-up, and I don't want to wait for one pod to be fully running before starting up the other one.</p> <p>Are there any Openshift / Kubernetes checks I can do to ensure a container has booted up, while also going ahead with other deployments?</p>
solarflare
<p><code>...Some of these containers can take a while to start-up</code></p> <p>A liveness probe is not a good option for containers that require an extended startup time, mainly because you would have to set a long initial delay or failure threshold to cover startup, and that slack then delays failure detection for the rest of the container's life. Instead, use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes" rel="nofollow noreferrer">startup probe</a> to detect problems during startup and hand over to the liveness probe once it succeeds; if the startup probe fails, the container is restarted according to its restartPolicy.</p>
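<p>A minimal sketch of what that could look like (the endpoint, port and timings are placeholders you would adjust for your application):</p> <pre><code>startupProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 8080
  periodSeconds: 10
  failureThreshold: 60    # allows up to ~10 minutes of startup
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
</code></pre> <p>Other deployments are not blocked while this pod starts; a readiness probe with similar settings simply keeps the pod out of service endpoints until it is actually ready.</p>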
gohm'c
<p>My DigitalOcean kubernetes cluster is unable to pull images from the DigitalOcean registry. I get the following error message:</p> <pre><code>Failed to pull image &quot;registry.digitalocean.com/XXXX/php:1.1.39&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;registry.digitalocean.com/XXXXXXX/php:1.1.39&quot;: failed to resolve reference &quot;registry.digitalocean.com/XXXXXXX/php:1.1.39&quot;: failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized </code></pre> <p>I have added the kubernetes cluster using DigitalOcean Container Registry Integration, which shows there successfully both on the registry and the settings for the kubernetes cluster.</p> <p><a href="https://i.stack.imgur.com/hOkVJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hOkVJ.png" alt="enter image description here" /></a></p> <p>I can confirm the above address `registry.digitalocean.com/XXXX/php:1.1.39 matches the one in the registry. I wonder if I’m misunderstanding how the token / login integration works with the registry, but I’m under the impression that this was a “one click” thing and that the cluster would automatically get the connection to the registry after that.</p> <p>I have tried by logging helm into a registry before pushing, but this did not work (and I wouldn't really expect it to, the cluster should be pulling the image).</p> <p>It's not completely clear to me how the image pull secrets are supposed to be used.</p> <p>My helm deployment chart is basically the default for API Platform:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ include &quot;api-platform.fullname&quot; . }} labels: {{- include &quot;api-platform.labels&quot; . | nindent 4 }} spec: {{- if not .Values.autoscaling.enabled }} replicas: {{ .Values.replicaCount }} {{- end }} selector: matchLabels: {{- include &quot;api-platform.selectorLabels&quot; . | nindent 6 }} template: metadata: {{- with .Values.podAnnotations }} annotations: {{- toYaml . | nindent 8 }} {{- end }} labels: {{- include &quot;api-platform.selectorLabels&quot; . | nindent 8 }} spec: {{- with .Values.imagePullSecrets }} imagePullSecrets: {{- toYaml . | nindent 8 }} {{- end }} serviceAccountName: {{ include &quot;api-platform.serviceAccountName&quot; . }} securityContext: {{- toYaml .Values.podSecurityContext | nindent 8 }} containers: - name: {{ .Chart.Name }}-caddy securityContext: {{- toYaml .Values.securityContext | nindent 12 }} image: &quot;{{ .Values.caddy.image.repository }}:{{ .Values.caddy.image.tag | default .Chart.AppVersion }}&quot; imagePullPolicy: {{ .Values.caddy.image.pullPolicy }} env: - name: SERVER_NAME value: :80 - name: PWA_UPSTREAM value: {{ include &quot;api-platform.fullname&quot; . }}-pwa:3000 - name: MERCURE_PUBLISHER_JWT_KEY valueFrom: secretKeyRef: name: {{ include &quot;api-platform.fullname&quot; . }} key: mercure-publisher-jwt-key - name: MERCURE_SUBSCRIBER_JWT_KEY valueFrom: secretKeyRef: name: {{ include &quot;api-platform.fullname&quot; . 
}} key: mercure-subscriber-jwt-key ports: - name: http containerPort: 80 protocol: TCP - name: admin containerPort: 2019 protocol: TCP volumeMounts: - mountPath: /var/run/php name: php-socket #livenessProbe: # httpGet: # path: / # port: admin #readinessProbe: # httpGet: # path: / # port: admin resources: {{- toYaml .Values.resources | nindent 12 }} - name: {{ .Chart.Name }}-php securityContext: {{- toYaml .Values.securityContext | nindent 12 }} image: &quot;{{ .Values.php.image.repository }}:{{ .Values.php.image.tag | default .Chart.AppVersion }}&quot; imagePullPolicy: {{ .Values.php.image.pullPolicy }} env: {{ include &quot;api-platform.env&quot; . | nindent 12 }} volumeMounts: - mountPath: /var/run/php name: php-socket readinessProbe: exec: command: - docker-healthcheck initialDelaySeconds: 120 periodSeconds: 3 livenessProbe: exec: command: - docker-healthcheck initialDelaySeconds: 120 periodSeconds: 3 resources: {{- toYaml .Values.resources | nindent 12 }} volumes: - name: php-socket emptyDir: {} {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{- toYaml . | nindent 8 }} {{- end }} </code></pre> <p>How do I authorize the kubernetes cluster to pull from the registry? Is this a helm thing or a kubernetes only thing?</p> <p>Thanks!</p>
Brettins
<p>The problem that you have is that you do not have an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">image pull secret</a> for your cluster to use to pull from the registry.</p> <p>You will need to add this to give your cluster a way to authorize its requests to the registry.</p> <h2>Using the DigitalOcean kubernetes Integration for Container Registry</h2> <p>DigitalOcean provides a way to add image pull secrets to a Kubernetes cluster in your account. You can link the registry to the cluster in the settings of the registry. Under &quot;DigitalOcean Kubernetes Integration&quot; select edit, then select the cluster you want to link the registry to.</p> <p><img src="https://i.stack.imgur.com/Gq6sG.png" alt="DigitalOceanKubernetesIntegration" /></p> <p>This action adds an image pull secret to all namespaces within the cluster and will be used by default (unless you specify otherwise).</p>
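<p>You can verify that the integration actually created the pull secret in the namespace you deploy to, and wire it into the chart's <code>imagePullSecrets</code> value that your deployment template already renders. The secret name below is only an example; use whatever name the integration created:</p> <pre><code># list docker-registry type secrets in the target namespace
kubectl get secrets -n &lt;namespace&gt; --field-selector type=kubernetes.io/dockerconfigjson
</code></pre> <pre><code># values.yaml for the chart
imagePullSecrets:
  - name: registry-XXXXXXX   # hypothetical: the secret created by the DO integration
</code></pre>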
John Cunniff
<p>My airflow service runs as a kubernetes deployment, and has two containers, one for the <code>webserver</code> and one for the <code>scheduler</code>. I'm running a task using a KubernetesPodOperator, with <code>in_cluster=True</code> parameters, and it runs well, I can even <code>kubectl logs pod-name</code> and all the logs show up. </p> <p>However, the <code>airflow-webserver</code> is unable to fetch the logs:</p> <pre><code>*** Log file does not exist: /tmp/logs/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log *** Fetching from: http://pod-name-7dffbdf877-6mhrn:8793/log/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log *** Failed to fetch log file from worker. HTTPConnectionPool(host='pod-name-7dffbdf877-6mhrn', port=8793): Max retries exceeded with url: /log/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log (Caused by NewConnectionError('&lt;urllib3.connection.HTTPConnection object at 0x7fef6e00df10&gt;: Failed to establish a new connection: [Errno 111] Connection refused')) </code></pre> <p>It seems as the pod is unable to connect to the airflow logging service, on port 8793. If I <code>kubectl exec bash</code> into the container, I can curl localhost on port 8080, but not on 80 and 8793.</p> <p>Kubernetes deployment:</p> <pre><code># Deployment apiVersion: apps/v1 kind: Deployment metadata: name: pod-name namespace: airflow spec: replicas: 1 selector: matchLabels: app: pod-name template: metadata: labels: app: pod-name spec: restartPolicy: Always volumes: - name: airflow-cfg configMap: name: airflow.cfg - name: dags emptyDir: {} containers: - name: airflow-scheduler args: - airflow - scheduler image: registry.personal.io:5000/image/path imagePullPolicy: Always volumeMounts: - name: dags mountPath: /airflow_dags - name: airflow-cfg mountPath: /home/airflow/airflow.cfg subPath: airflow.cfg env: - name: EXECUTOR value: Local - name: LOAD_EX value: "n" - name: FORWARDED_ALLOW_IPS value: "*" ports: - containerPort: 8793 - containerPort: 8080 - name: airflow-webserver args: - airflow - webserver - --pid - /tmp/airflow-webserver.pid image: registry.personal.io:5000/image/path imagePullPolicy: Always volumeMounts: - name: dags mountPath: /airflow_dags - name: airflow-cfg mountPath: /home/airflow/airflow.cfg subPath: airflow.cfg ports: - containerPort: 8793 - containerPort: 8080 env: - name: EXECUTOR value: Local - name: LOAD_EX value: "n" - name: FORWARDED_ALLOW_IPS value: "*" </code></pre> <p>note: If airflow is run in dev environment (locally instead of kubernetes) it all works perfectly.</p>
Yuzobra
<p>Creating a PersistentVolume and storing the logs on it might help.</p> <pre><code>--- kind: PersistentVolume apiVersion: v1 metadata: name: testlog-volume spec: accessModes: - ReadWriteMany capacity: storage: 2Gi hostPath: path: /opt/airflow/logs/ storageClassName: standard --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: testlog-volume spec: accessModes: - ReadWriteMany resources: requests: storage: 2Gi storageClassName: standard </code></pre> <p>If you are using the Helm chart to deploy Airflow, you can use:</p> <pre><code> --set executor=KubernetesExecutor --set logs.persistence.enabled=true --set logs.persistence.existingClaim=testlog-volume </code></pre>
Programmer007
<p>I'm trying to make sense of Kubernetes so I can start using it for local development, but I'm stumbling over some basics here...</p> <p>I'm running on Mac OS X and I've installed Kubernetes using Homebrew via <code>brew install kubernetes</code>. After this, I started up a simple app (deployment? service? -- I'm not clear on the terminology) like so:</p> <pre><code>$&gt; minikube start $&gt; kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10 $&gt; kubectl expose deployment hello-minikube --type=NodePort --port=8080 </code></pre> <p>I got these commands directly from <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="nofollow noreferrer">the Kubernetes documentation</a></p> <p>Now each step of the way I've been trying to read through the Kubernetes docs and experiment a bit with the various command line tools so that I'm not just blindly following, but actually understanding what's going on. And this is where I ran into my first huge hurdle: there are <strong><em>too many IP addresses!</em></strong></p> <pre><code>$&gt; ifconfig ... inet 192.168.2.126 netmask 0xffffff00 broadcast 192.168.2.255 ... $&gt; minikube ip 192.168.64.3 $&gt; kubectl describe pods ... IP: 172.17.0.2 ... $&gt; kubectl describe services ... IP: 10.96.8.59 ... </code></pre> <p>I understand a couple of these:</p> <ul> <li>192.168.2.126 is my local machine's IP address on the network, and it's how another computer on the network would access my computer (if, for instance, I were running nginx or something)</li> <li>192.168.64.3 is the IP address assigned to the virtual machine that Minikube is running. I know that my computer should be able to access this IP directly, but I don't know how to expose this IP address to other computers on the network</li> </ul> <p>But I have <strong><em>no idea</em></strong> what's up with 172.17.0.2 and 10.96.8.59 -- what are these IP addresses, when would I need to use them, and how do I access them?</p> <h2>One <em>more</em> thing</h2> <p>When I try accessing <code>xxx.xxx.xxx.xxx:8080</code> for <strong><em>any</em></strong> of the four IP addresses in my browser, the echoserver application that I'm running does not come up. The host simply times out. However using <code>minikube service hello-minikube --url</code> (per <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="nofollow noreferrer">the docs</a>) I get: <a href="http://192.168.64.3:30866" rel="nofollow noreferrer">http://192.168.64.3:30866</a></p> <p>I can understand why I'm getting Minikube's IP address (how <strong>else</strong> would my computer access a container running inside the VM, except to connect to the VM directly and have the VM port forward?) -- but I don't understand:</p> <ul> <li>Where port 30866 came from</li> <li>How Minikube is aware of the hello-minikube service, since it was created using kubectl and not minikube</li> </ul> <p>Also slightly related: I don't know how kubectl knew to connect to the Minikube VM and send commands to the Kubernetes API server on <strong><em>that</em></strong> cluster. I never told kubectl that I had a VM running, nor did I tell it how to connect to that VM. 
But somehow it <em>just knew</em>.</p> <p>I know this is like three different questions (what each of the IP addresses mean, how Minikube is port forwarding, and how kubectl is communicating with Minikube) - but these are all questions that arose from following the first step of Minikube's tutorial, and I can't seem to find answers on my own.</p>
stevendesu
<p>ifconfig is really helpful in understanding the origin of each IP address we are looking at. Each of the IPs comes from a different subnet.</p> <p><code>192.168.64.3</code> is the IP of the minikube virtual machine itself, assigned on the host-only network interface that minikube creates; it is the node IP of your single-node cluster.</p> <p><code>172.17.0.2</code> is a pod IP, allocated from Docker's bridge network inside the VM. It isolates the container network from the host, and addresses in this range are used for pod-to-pod communication inside the cluster.</p> <p><code>10.96.8.59</code> is a ClusterIP, a virtual IP allocated from the Kubernetes service CIDR. It is not bound to any network interface; kube-proxy inside the VM routes traffic sent to it on to the backing pods. The allocation range can be controlled by the option <code>--service-cluster-ip-range</code> when we run minikube start.</p> <blockquote> <p>what's up with 172.17.0.2 and 10.96.8.59 -- what are these IP addresses, when would I need to use them, and how do I access them?</p> </blockquote> <p>Both are only reachable from inside the cluster. The ClusterIP <code>10.96.8.59</code> is how other pods (or anything running inside the VM, e.g. after <code>minikube ssh</code>) reach the service, and the pod IP <code>172.17.0.2</code> is how pods reach each other directly. From your Mac, or from other machines on your LAN, you would not use either of them; you go through the node IP plus a NodePort (or a tunnel/ingress) instead.</p> <blockquote> <p>Where port 30866 came from?</p> </blockquote> <p>That's a random choice from the NodePort range that Kubernetes uses to expose your service on the node. By default that range is <code>30000-32767</code>, but if you want to change it, please refer to this <a href="https://github.com/kubernetes/minikube/blob/master/site/content/en/docs/Tasks/nodeport.md#increasing-the-nodeport-range" rel="noreferrer">doc</a>.</p> <blockquote> <p>how kubectl knew to connect to the Minikube VM</p> </blockquote> <p>When you run minikube start, minikube sets up the VM and then configures kubectl to talk to it: it writes the API server address and credentials into the kubeconfig file under <code>~/.kube/</code>, and that is where kubectl finds the server IP:port combination.</p>
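<p>Two quick commands that make the last two points concrete:</p> <pre><code># where kubectl is pointed (the server IP:port written by minikube)
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# the NodePort that was picked for the service (the 30866 in your case)
kubectl get service hello-minikube -o jsonpath='{.spec.ports[0].nodePort}'
</code></pre>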
vinay_kumar
<p>Using KubeEdge and I'm attempting to prevent my edgenode from getting kube-proxy deployed.</p> <p>When I attempt to add fields to daemonsets.apps with the following command:</p> <pre><code> sudo kubectl edit daemonsets.apps -n kube-system kube-proxy </code></pre> <p>With the following values</p> <pre><code>affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/edge operator: DoesNotExist </code></pre> <p>Returns the following error:</p> <pre><code># daemonsets.apps &quot;kube-proxy&quot; was not valid: # * &lt;nil&gt;: Invalid value: &quot;The edited file failed validation&quot;: ValidationError(DaemonSet): unknown field &quot;nodeAffinity&quot; in io.k8s.api.apps.v1.DaemonSet # </code></pre> <p>The full YAML for reference:</p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: annotations: deprecated.daemonset.template.generation: &quot;1&quot; creationTimestamp: &quot;2022-03-10T21:02:16Z&quot; generation: 1 labels: k8s-app: kube-proxy name: kube-proxy namespace: kube-system resourceVersion: &quot;458&quot; uid: 098d94f4-e892-43ef-80ac-6329617b670c spec: revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kube-proxy template: metadata: creationTimestamp: null labels: k8s-app: kube-proxy spec: containers: - command: - /usr/local/bin/kube-proxy - --config=/var/lib/kube-proxy/config.conf - --hostname-override=$(NODE_NAME) env: - name: NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName image: k8s.gcr.io/kube-proxy:v1.23.4 imagePullPolicy: IfNotPresent name: kube-proxy resources: {} securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /run/xtables.lock name: xtables-lock - mountPath: /lib/modules name: lib-modules readOnly: true dnsPolicy: ClusterFirst hostNetwork: true nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: kube-proxy serviceAccountName: kube-proxy terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - configMap: defaultMode: 420 name: kube-proxy name: kube-proxy - hostPath: path: /run/xtables.lock type: FileOrCreate name: xtables-lock - hostPath: path: /lib/modules type: &quot;&quot; name: lib-modules updateStrategy: rollingUpdate: maxSurge: 0 maxUnavailable: 1 type: RollingUpdate status: currentNumberScheduled: 1 desiredNumberScheduled: 1 numberAvailable: 1 numberMisscheduled: 0 numberReady: 1 observedGeneration: 1 updatedNumberScheduled: 1 affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/edge operator: DoesNotExist </code></pre> <p>Other answers suggested it was a formatting issue, but I've ran it through a YAML validator and it said it was valid.</p>
syntaxerror
<p>In the YAML you pasted, the <code>affinity</code> block ended up at the top level of the DaemonSet (after <code>status</code>), which is why validation complains about an unknown field <code>nodeAffinity</code>. <code>affinity</code> should be placed under the pod template, i.e. <code>spec.template.spec.affinity</code>. Try:</p> <pre><code>spec: ... template: ... spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/edge operator: DoesNotExist containers: - command: - /usr/local/bin/kube-proxy ... </code></pre>
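<p>If you would rather patch than edit interactively, adding the whole block in one <code>add</code> operation (so the intermediate objects are created together) might look like this:</p> <pre><code>kubectl -n kube-system patch daemonset kube-proxy --type json -p '[
  {&quot;op&quot;:&quot;add&quot;,&quot;path&quot;:&quot;/spec/template/spec/affinity&quot;,&quot;value&quot;:{
    &quot;nodeAffinity&quot;:{&quot;requiredDuringSchedulingIgnoredDuringExecution&quot;:{
      &quot;nodeSelectorTerms&quot;:[{&quot;matchExpressions&quot;:[
        {&quot;key&quot;:&quot;node-role.kubernetes.io/edge&quot;,&quot;operator&quot;:&quot;DoesNotExist&quot;}
      ]}]}}}}
]'
</code></pre>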
gohm'c
<p>Im using the helm <a href="https://pkg.go.dev/helm.sh/helm/v3" rel="nofollow noreferrer">SDK</a> and it works great, for my testing using the fake option which works(for kubeconfig),</p> <p>Now when I update the <code>kubeconfig</code> to my cluster I notice that during installation the chart is <strong>stuck</strong> on <strong>status pending</strong>, and it <strong>stuck forever</strong> in this state until I'm deleting &amp; installing it again,(manually) My question is how to solve this issue with the <strong>Helm SDK</strong> (via code only) <a href="https://pkg.go.dev/helm.sh/helm/v3" rel="nofollow noreferrer">https://pkg.go.dev/helm.sh/helm/v3</a>,</p> <p>I mean wait for a while and if the status is pending after 3 min delete and reinstall it again... or try upgrade before</p> <p>This is the code</p> <pre><code> kubeConfigPath, err := findKubeConfig() if err != nil { fmt.Println() } actionConfig := &amp;action.Configuration{ } cfg := cli.New() clientGetter := genericclioptions.NewConfigFlags(false) clientGetter.KubeConfig = &amp;kubeConfigPath actionConfig.Init(clientGetter, &quot;def&quot;, &quot;memory&quot;, log.Printf) if err != nil { fmt.Println(err) } chart, err := installation.InstallChart(cfg, &quot;test&quot;, &quot;chart1&quot;, &quot;./charts/dns&quot;, nil, actionConfig) if err != nil { fmt.Println(err) } fmt.Println(chart) } func findKubeConfig() (string, error) { env := os.Getenv(&quot;KUBECONFIG&quot;) if env != &quot;&quot; { return env, nil } path, err := homedir.Expand(&quot;~/.kube/config&quot;) if err != nil { return &quot;&quot;, err } return path, nil } </code></pre>
Jenney
<p>Looking at the example, I don't know what the <code>installation</code> package is, but I believe you need to use a <code>Loader</code> (maybe you are already using one in that package).</p> <p>From a quick search over the <a href="https://github.com/helm/helm/issues/6910#issuecomment-557106092" rel="nofollow noreferrer">GitHub issues</a> I found someone with a similar problem, and they got the same suggestion. Here is a derived example:</p> <pre class="lang-golang prettyprint-override"><code>package main

import (
    &quot;fmt&quot;
    &quot;os&quot;
    &quot;time&quot;

    &quot;helm.sh/helm/v3/pkg/action&quot;
    &quot;helm.sh/helm/v3/pkg/chart&quot;
    &quot;helm.sh/helm/v3/pkg/chart/loader&quot;
    &quot;helm.sh/helm/v3/pkg/kube&quot;
    &quot;helm.sh/helm/v3/pkg/release&quot;

    _ &quot;k8s.io/client-go/plugin/pkg/client/auth&quot;
)

func main() {
    chartPath := &quot;./charts/dns&quot;
    chrt, err := loader.Load(chartPath)
    if err != nil {
        panic(err)
    }

    // findKubeConfig is your helper from the question (it returns (string, error))
    kubeconfigPath, err := findKubeConfig()
    if err != nil {
        panic(err)
    }

    releaseName := &quot;test&quot;
    releaseNamespace := &quot;default&quot;

    actionConfig := new(action.Configuration)
    if err := actionConfig.Init(kube.GetConfig(kubeconfigPath, &quot;&quot;, releaseNamespace),
        releaseNamespace, os.Getenv(&quot;HELM_DRIVER&quot;),
        func(format string, v ...interface{}) { fmt.Printf(format+&quot;\n&quot;, v...) }); err != nil {
        panic(err)
    }

    iCli := action.NewInstall(actionConfig)
    iCli.Namespace = releaseNamespace
    iCli.ReleaseName = releaseName
    rel, err := iCli.Run(chrt, nil)
    if err != nil {
        panic(err)
    }
    fmt.Println(&quot;Successfully installed release:&quot;, rel.Name)

    // poll the release status for a while and retry an upgrade if it stays pending
    upCli := action.NewUpgrade(actionConfig)
    upCli.Namespace = releaseNamespace
    upgradedRel, err := pollAndUpdate(rel, chrt, upCli)
    if err != nil {
        fmt.Println(&quot;upgrade attempt failed:&quot;, err)
        upgradedRel = rel // fall back to the original release for the pending check
    }

    // still pending after the timeout: delete it, then you can install it again
    if upgradedRel.Info.Status.IsPending() {
        unCli := action.NewUninstall(actionConfig)
        if _, err := unCli.Run(upgradedRel.Name); err != nil {
            panic(err)
        }
    }
}

// pollAndUpdate retries an upgrade every 10s for up to 3 minutes while the release is pending.
// https://pkg.go.dev/helm.sh/helm/v3/pkg/release#Status.IsPending
func pollAndUpdate(rel *release.Release, chrt *chart.Chart, upgradeCli *action.Upgrade) (*release.Release, error) {
    if !rel.Info.Status.IsPending() {
        return rel, nil
    }

    ticker := time.NewTicker(10 * time.Second)
    defer ticker.Stop()
    deadline := time.After(3 * time.Minute)

    for {
        select {
        case &lt;-deadline:
            return rel, nil // still pending; let the caller decide what to do
        case &lt;-ticker.C:
            newRel, err := upgradeCli.Run(rel.Name, chrt, nil) // same signature as the CLI: name, chart, values
            if err != nil {
                return rel, err
            }
            rel = newRel
            if !rel.Info.Status.IsPending() {
                return rel, nil
            }
        }
    }
}
</code></pre>
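<p>A side note, not from the linked issue: the &quot;stuck in pending&quot; symptom is often just the install returning before the workloads become ready. <code>action.Install</code> has <code>Wait</code>, <code>Timeout</code> and <code>Atomic</code> fields (the SDK equivalents of <code>helm install --wait --timeout --atomic</code>), so you can let Helm itself block and clean up after a deadline instead of polling by hand. Reusing the variables from the example above:</p> <pre><code>iCli := action.NewInstall(actionConfig)
iCli.Namespace = releaseNamespace
iCli.ReleaseName = releaseName
iCli.Wait = true                  // wait for resources to become ready
iCli.Timeout = 3 * time.Minute    // give up after 3 minutes
iCli.Atomic = true                // uninstall automatically if the wait fails
rel, err := iCli.Run(chrt, nil)
</code></pre>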
bjornaer
<p>How do I get top three most CPU utilized pod in a Kubernetes cluster?</p> <pre><code>kubectl top po --all-namespaces </code></pre> <p>Above command gives me CPU and memory utilization for all the pods across all namespaces. How to restrict it to only top three most CPU utilized pods?</p> <p>Also, I've tried to sort by CPU, but seems like sorting is not working.</p> <pre><code>kubectl top po --all-namespaces --sort-by="cpu" </code></pre> <p>Output:</p> <pre><code>NAMESPACE NAME CPU(cores) MEMORY(bytes) kube-system weave-net-ksfp4 1m 51Mi kube-system kube-controller-manager-master 10m 50Mi kube-system coredns-5644d7b6d9-rzd64 2m 6Mi kube-system weave-net-h4xlg 1m 77Mi kube-system kube-proxy-lk9xv 1m 19Mi kube-system coredns-5644d7b6d9-v2v4m 3m 6Mi kube-system kube-scheduler-master 2m 21Mi kube-system kube-apiserver-master 26m 228Mi kube-system kube-proxy-lrzjh 1m 9Mi kube-system etcd-master 12m 38Mi kube-system metrics-server-d58c94f4d-gkmql 1m 14Mi </code></pre>
Naseem Khan
<p>You can use this command:</p> <pre><code>kubectl top pods --all-namespaces --sort-by=cpu | sort --reverse --key 3 --numeric | head -n 3 </code></pre> <p>The head command will provide you with the top 3 pods.</p>
vinay_kumar
<p>I am setting up a new project that will include a number of micro services (Spring Boot apps). I plan to use Kubernetes/Docker and deployment to AWS.</p> <p>there will be 5 separate applications (microservices) tat will talk to each other over a REST api.</p> <p><strong>Question</strong></p> <p>How many Clusters must I create?</p> <p>Can I create just one Cluster with Nodes with multiples Pods (i.e. 5 Pods each Node)? AWS will then auto scale the nodes in the cluster.</p> <p>e.g.</p> <pre><code>+-------------+ +-------------+ | Node 1 | | Node 2 | +-------------+ +-------------+ | Pod/App1 | | Pod/App1 | | Pod/App2 | | Pod/App2 | | Pod/App3 | | Pod/App3 | | Pod/App4 | | Pod/App4 | | Pod/App5 | | Pod/App5 | +-------------+ +-------------+ </code></pre>
Richard
<p>Yes, you can use one AWS EKS cluster with one managed node group of two nodes to run your microservices. Creating one is as simple as running the command <a href="https://eksctl.io/usage/creating-and-managing-clusters/" rel="nofollow noreferrer">eksctl create cluster</a>. One thing to note is that currently AWS EKS does not automatically scale nodes; you need to install the <a href="https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html" rel="nofollow noreferrer">cluster autoscaler</a>, which will scale the number of nodes up/down for you.</p>
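<p>For illustration, a minimal <code>eksctl</code> config along those lines could look like this (cluster name, region, instance type and sizes are placeholders to adjust):</p> <pre><code># cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
managedNodeGroups:
  - name: microservices
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 2
    maxSize: 4
</code></pre> <p>Then create it with <code>eksctl create cluster -f cluster.yaml</code>.</p>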
gohm'c
<p>I deployed an <a href="https://www.elastic.co/elasticsearch" rel="noreferrer">elasticsearch</a> cluster on K8S using this command: <code>helm install elasticsearch elastic/elasticsearch</code>.</p> <p>I can see the pod is running:</p> <pre><code>$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
elasticsearch-master-0           0/1     Running   0          4m30s
kibana-kibana-5697fc485b-qtzzl   0/1     Running   0          130m
</code></pre> <p>The service looks good as well:</p> <pre><code>$ kubectl get services
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
elasticsearch-master            ClusterIP   10.105.59.248   &lt;none&gt;        9200/TCP,9300/TCP   4m50s
elasticsearch-master-headless   ClusterIP   None            &lt;none&gt;        9200/TCP,9300/TCP   4m50s
kibana-kibana                   ClusterIP   10.104.31.124   &lt;none&gt;        5601/TCP            6d7h
kubernetes                      ClusterIP   10.96.0.1       &lt;none&gt;        443/TCP             10d
</code></pre> <p>But there is no <code>deployment</code> for the <a href="https://www.elastic.co/elasticsearch" rel="noreferrer">elasticsearch</a>:</p> <pre><code>$ kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
kibana-kibana   0/1     1            0           6d7h
</code></pre> <p>I'd like to restart the <a href="https://www.elastic.co/elasticsearch" rel="noreferrer">elasticsearch</a> pod, and from what I've found, people suggest using <code>kubectl scale deployment --replicas=0</code> to terminate the pod. But there is no deployment for the <code>elasticsearch</code> cluster. In this case, how can I restart the <a href="https://www.elastic.co/elasticsearch" rel="noreferrer">elasticsearch</a> pod?</p>
Joey Yi Zhao
<p>The <code>elasticsearch-master-0</code> pod is created by a <code>statefulsets.apps</code> resource in k8s.</p> <p>A <code>StatefulSet</code> is like a Deployment object, but it gives its pods stable, ordinal names. Delete the pod and the StatefulSet will recreate it:</p> <pre><code>kubectl delete pods elasticsearch-master-0
</code></pre>
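<p>If your kubectl is recent enough, you can also restart the whole StatefulSet instead of deleting the pod by hand (here <code>elasticsearch-master</code>, matching the pod name prefix):</p> <pre><code>kubectl rollout restart statefulset elasticsearch-master
</code></pre>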
SAEED mohassel
<div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>version</th> </tr> </thead> <tbody> <tr> <td>Java</td> <td>1.8.0_242-b08</td> </tr> <tr> <td>Spark</td> <td>2.4.5</td> </tr> <tr> <td>Zeppelin</td> <td>0.10.0</td> </tr> </tbody> </table> </div> <p>Inside a k8s Pod, with above set, web ui running, I opened Zeppelin's spark interpreter and ran <code>sc</code>. Following Error prints out:</p> <pre><code>io.fabric8.kubernetes.client.KubernetesClientException: Operation: [create] for kind: [Pod] with name: [null] in namespace: [default] failed. </code></pre> <p>Besides that it's not working, what I don't understand is whether Zeppelin use k8s and how.</p> <p>Did some searching. <code>fabric8</code> seems to be related to k8s and I suppose Zeppelin tries to run its interpreters in k8s cluster. Yet, k8s is not installed (?... like <code>kubectl</code>?) in the machine (where Zeppelin is running). So there are two explanation I can think of:</p> <ol> <li><code>fabric8</code> include k8s so Zeppelin creates local k8s cluster to run its interpreters.</li> <li>Zeppelin somehow detected it's been run in k8s Pod and tries to use that Cluster to create Pod, run interpreters in it.</li> </ol> <p>Think the first explanation is the correct one. But running local k8s cluster under the hood feels... how? Error itself was solved by using different versions: Spark 2.4.0, Zeppelin 0.8.2, yet would be nice if I could use up-to-date Spark and Zeppelin.</p> <p>And in addition, the error itself seems to be related with <a href="https://github.com/fabric8io/kubernetes-client/issues/2145" rel="nofollow noreferrer">specific Java version</a>.</p>
김기영
<p>Found out Zeppelin has a parameter ZEPPELIN_RUN_MODE. By default it is set to &quot;auto&quot;, which makes Zeppelin detect that it is running inside a Kubernetes Pod and try to launch its interpreters as new Pods in that cluster. If you are running Zeppelin in a Pod but don't want it to use Kubernetes for the interpreters, you should set it to &quot;local&quot;.</p>
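<p>For example, one way to set it is as an environment variable in the Zeppelin container spec (a sketch):</p> <pre><code>env:
  - name: ZEPPELIN_RUN_MODE
    value: &quot;local&quot;
</code></pre>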
김기영
<p>I am running the below command on the OpenShift platform, which produces some output:</p> <pre><code>$ oc get pods
name   status
job1   Running
job2   Completed
</code></pre> <p>How can I extract only the status from the above result and store it in a variable using a shell script?</p> <p>ex : status=completed</p>
Prabhakar
<blockquote> <p>How can I extract only the status from the above result and store it in a variable using a shell script?</p> <p>ex : status=completed</p> </blockquote> <p>Try <code>status=$(oc get pods -o=custom-columns=STATUS:.status.phase --no-headers)</code>. You can <code>echo $status</code> to see the list of statuses saved in the variable.</p>
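<p>If you only need the status of a single pod (say <code>job2</code>), jsonpath works as well:</p> <pre><code>status=$(oc get pod job2 -o jsonpath='{.status.phase}')
echo $status   # prints e.g. Running or Succeeded
</code></pre>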
gohm'c
<p>One of the pods in my local cluster can't be started because I get <code>Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-data-volume nats-initdb-volume kube-api-access-5b5cz]: timed out waiting for the condition</code> error.</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE deployment-nats-db-5f5f9fd6d5-wrcpk 0/1 ContainerCreating 0 19m deployment-nats-server-57bbc76d44-tz5zj 1/1 Running 0 19m $ kubectl describe pods deployment-nats-db-5f5f9fd6d5-wrcpk Name: deployment-nats-db-5f5f9fd6d5-wrcpk Namespace: default Priority: 0 Node: docker-desktop/192.168.65.4 Start Time: Tue, 12 Oct 2021 21:42:23 +0600 Labels: app=nats-db pod-template-hash=5f5f9fd6d5 skaffold.dev/run-id=1f5421ae-6e0a-44d6-aa09-706a1d1aa011 Annotations: &lt;none&gt; Status: Pending IP: IPs: &lt;none&gt; Controlled By: ReplicaSet/deployment-nats-db-5f5f9fd6d5 Containers: nats-db: Container ID: Image: postgres:latest Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Limits: cpu: 1 memory: 256Mi Requests: cpu: 250m memory: 128Mi Environment Variables from: nats-db-secrets Secret Optional: false Environment: &lt;none&gt; Mounts: /docker-entrypoint-initdb.d from nats-initdb-volume (rw) /var/lib/postgresql/data from nats-data-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5b5cz (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: nats-data-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: nats-pvc ReadOnly: false nats-initdb-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: nats-pvc ReadOnly: false kube-api-access-5b5cz: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 19m default-scheduler Successfully assigned default/deployment-nats-db-5f5f9fd6d5-wrcpk to docker-desktop Warning FailedMount 4m9s (x2 over 17m) kubelet Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-initdb-volume kube-api-access-5b5cz nats-data-volume]: timed out waiting for the condition Warning FailedMount 112s (x6 over 15m) kubelet Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-data-volume nats-initdb-volume kube-api-access-5b5cz]: timed out waiting for the condition </code></pre> <p>I don't know where the issue is. 
The PVs and PVCs are all seemed to be successfully applied.</p> <pre><code>$ kubectl get pv,pvc NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/nats-pv 50Mi RWO Retain Bound default/nats-pvc local-hostpath-storage 21m NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/nats-pvc Bound nats-pv 50Mi RWO local-hostpath-storage 21m </code></pre> <p>Following are the configs for SC, PV and PVC:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-hostpath-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer --- apiVersion: v1 kind: PersistentVolume metadata: name: nats-pv spec: capacity: storage: 50Mi volumeMode: Filesystem accessModes: - ReadWriteOnce storageClassName: local-hostpath-storage hostPath: path: /mnt/wsl/nats-pv type: DirectoryOrCreate --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nats-pvc spec: volumeName: nats-pv resources: requests: storage: 50Mi volumeMode: Filesystem accessModes: - ReadWriteOnce storageClassName: local-hostpath-storage --- apiVersion: apps/v1 kind: Deployment metadata: name: deployment-nats-db spec: selector: matchLabels: app: nats-db template: metadata: labels: app: nats-db spec: containers: - name: nats-db image: postgres:latest envFrom: - secretRef: name: nats-db-secrets volumeMounts: - name: nats-data-volume mountPath: /var/lib/postgresql/data - name: nats-initdb-volume mountPath: /docker-entrypoint-initdb.d resources: requests: cpu: 250m memory: 128Mi limits: cpu: 1000m memory: 256Mi volumes: - name: nats-data-volume persistentVolumeClaim: claimName: nats-pvc - name: nats-initdb-volume persistentVolumeClaim: claimName: nats-pvc </code></pre> <p>This pod will be started successfully if I comment out <code>volumeMounts</code> and <code>volumes</code> keys. And it's specifically with this <code>/var/lib/postgresql/data</code> path. Like if I remove <code>nats-data-volume</code> and keep <code>nats-initdb-volume</code>, it's started successfully.</p> <p>Can anyone help me where I'm wrong exactly? Thanks in advance and best regards.</p>
msrumon
<p><code>...if I remove nats-data-volume and keep nats-initdb-volume, it's started successfully.</code></p> <p>This PVC cannot be mounted twice, that's where the condition cannot be met.</p> <p>Looking at your spec, it seems you don't mind which worker node will run your postgress pod. In this case you don't need PV/PVC, you can mount hostPath directly like:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: deployment-nats-db spec: selector: matchLabels: app: nats-db template: metadata: labels: app: nats-db spec: containers: - name: nats-db image: postgres:latest envFrom: - secretRef: name: nats-db-secrets volumeMounts: - name: nats-data-volume mountPath: /var/lib/postgresql/data - name: nats-data-volume mountPath: /docker-entrypoint-initdb.d resources: requests: cpu: 250m memory: 128Mi limits: cpu: 1000m memory: 256Mi volumes: - name: nats-data-volume hostPath: path: /mnt/wsl/nats-pv type: DirectoryOrCreate </code></pre>
gohm'c
<p>My query is pretty much what the title says, I have a local file say <code>file.txt</code> and I want to copy it into <code>pod1</code>'s container <code>container1</code>.</p> <p>If I was to do it using kubectl, the appropriate command would be :</p> <p><code>kubectl cp file.txt pod1:file.txt -c container1</code></p> <p>However, how do I do it using the Go client of kubectl?</p> <p>I tried 2 ways but none of them worked :</p> <pre><code>import ( &quot;fmt&quot; &quot;context&quot; &quot;log&quot; &quot;os&quot; &quot;path/filepath&quot; g &quot;github.com/sdslabs/katana/configs&quot; v1 &quot;k8s.io/api/core/v1&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/apimachinery/pkg/labels&quot; &quot;k8s.io/client-go/kubernetes&quot; &quot;k8s.io/client-go/rest&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; //&quot;k8s.io/kubectl/pkg/cmd/exec&quot; ) func CopyIntoPod(namespace string, podName string, containerName string, srcPath string, dstPath string) { // Create a Kubernetes client config, err := GetKubeConfig() if err != nil { log.Fatal(err) } client, err := kubernetes.NewForConfig(config) if err != nil { log.Fatal(err) } // Build the command to execute cmd := []string{&quot;cp&quot;, srcPath, dstPath} // Use the PodExecOptions struct to specify the options for the exec request options := v1.PodExecOptions{ Container: containerName, Command: cmd, Stdin: false, Stdout: true, Stderr: true, TTY: false, } log.Println(&quot;Options set!&quot;) // Use the CoreV1Api.Exec method to execute the command inside the container req := client.CoreV1().RESTClient().Post(). Namespace(namespace). Name(podName). Resource(&quot;pods&quot;). SubResource(&quot;exec&quot;). VersionedParams(&amp;options, metav1.ParameterCodec) log.Println(&quot;Request generated&quot;) exec, err := req.Stream(context.TODO()) if err != nil { log.Fatal(err) } defer exec.Close() // Read the response from the exec command var result []byte if _, err := exec.Read(result); err != nil { log.Fatal(err) } fmt.Println(&quot;File copied successfully!&quot;) } </code></pre> <p>This gave me the error message :</p> <p><code>no kind is registered for the type v1.PodExecOptions in scheme &quot;pkg/runtime/scheme.go:100&quot;</code></p> <p>I couldn't figure it out, so I tried another way :</p> <pre><code>type PodExec struct { RestConfig *rest.Config *kubernetes.Clientset } func NewPodExec(config *rest.Config, clientset *kubernetes.Clientset) *PodExec { config.APIPath = &quot;/api&quot; // Make sure we target /api and not just / config.GroupVersion = &amp;schema.GroupVersion{Version: &quot;v1&quot;} // this targets the core api groups so the url path will be /api/v1 config.NegotiatedSerializer = serializer.WithoutConversionCodecFactory{CodecFactory: scheme.Codecs} return &amp;PodExec{ RestConfig: config, Clientset: clientset, } } func (p *PodExec) PodCopyFile(src string, dst string, containername string, podNamespace string) (*bytes.Buffer, *bytes.Buffer, *bytes.Buffer, error) { ioStreams, in, out, errOut := genericclioptions.NewTestIOStreams() copyOptions := cp.NewCopyOptions(ioStreams) copyOptions.Clientset = p.Clientset copyOptions.ClientConfig = p.RestConfig copyOptions.Container = containername copyOptions.Namespace = podNamespace err := copyOptions.Run() if err != nil { return nil, nil, nil, fmt.Errorf(&quot;could not run copy operation: %v&quot;, err) } return in, out, errOut, nil } </code></pre> <p>However, there were some issues with the <code>copyOptions.Run()</code> command, it tried to look for 
<code>o.args[0] and o.args[0]</code> inside copyOptions but <code>o</code> is not imported so it couldn't be modified.</p> <p>Context : <a href="https://pkg.go.dev/k8s.io/kubectl/pkg/cmd/cp#CopyOptions.Run" rel="nofollow noreferrer">https://pkg.go.dev/k8s.io/kubectl/pkg/cmd/cp#CopyOptions.Run</a></p> <p>So, now I'm really lost and confused. Any help would be appreciated. Thanks.</p> <p>Edit : I did think of a viable method where we can just call <code>cmd.exec()</code> and run the <code>kubectl cp</code> command directly but it seems kinda hacky and I'm not sure whether it would work, any thoughts?</p>
infinite-blank-
<p>Here's how I finally managed to do it :</p> <pre><code>package main import ( &quot;context&quot; &quot;fmt&quot; &quot;os&quot; &quot;path/filepath&quot; &quot;k8s.io/client-go/kubernetes&quot; &quot;k8s.io/client-go/kubernetes/scheme&quot; corev1 &quot;k8s.io/api/core/v1&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; &quot;k8s.io/client-go/util/homedir&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/client-go/tools/remotecommand&quot; ) func CopyIntoPod(podName string, namespace string, containerName string, srcPath string, dstPath string) { // Get the default kubeconfig file kubeConfig := filepath.Join(homedir.HomeDir(), &quot;.kube&quot;, &quot;config&quot;) // Create a config object using the kubeconfig file config, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, kubeConfig) if err != nil { fmt.Printf(&quot;Error creating config: %s\n&quot;, err) return } // Create a Kubernetes client client, err := kubernetes.NewForConfig(config) if err != nil { fmt.Printf(&quot;Error creating client: %s\n&quot;, err) return } // Open the file to copy localFile, err := os.Open(srcPath) if err != nil { fmt.Printf(&quot;Error opening local file: %s\n&quot;, err) return } defer localFile.Close() pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{}) if err != nil { fmt.Printf(&quot;Error getting pod: %s\n&quot;, err) return } // Find the container in the pod var container *corev1.Container for _, c := range pod.Spec.Containers { if c.Name == containerName { container = &amp;c break } } if container == nil { fmt.Printf(&quot;Container not found in pod\n&quot;) return } // Create a stream to the container req := client.CoreV1().RESTClient().Post(). Resource(&quot;pods&quot;). Name(podName). Namespace(namespace). SubResource(&quot;exec&quot;). Param(&quot;container&quot;, containerName) req.VersionedParams(&amp;corev1.PodExecOptions{ Container: containerName, Command: []string{&quot;bash&quot;, &quot;-c&quot;, &quot;cat &gt; &quot; + dstPath}, Stdin: true, Stdout: true, Stderr: true, }, scheme.ParameterCodec) exec, err := remotecommand.NewSPDYExecutor(config, &quot;POST&quot;, req.URL()) if err != nil { fmt.Printf(&quot;Error creating executor: %s\n&quot;, err) return } // Create a stream to the container err = exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{ Stdin: localFile, Stdout: os.Stdout, Stderr: os.Stderr, Tty: false, }) if err != nil { fmt.Printf(&quot;Error streaming: %s\n&quot;, err) return } fmt.Println(&quot;File copied successfully&quot;) } </code></pre>
infinite-blank-
<p>I have an init container and am running a command in it which takes a ton of parameters, so I have something like:</p> <pre><code>command: ['sh', '-c', 'tool input1 input2 output -action param1 -action param2 -action param3 -action param4 -action param5 -action param6 -action param7 -action param7 -action param8 -action param9 -action param10 ']
</code></pre> <p>This greatly decreases the readability of the command. Is there some way I can improve this, like passing the parameters as a separate array?</p>
The_Lost_Avatar
<p>You can drop the <code>sh -c</code> wrapper and pass each of the command's arguments as its own YAML list item, which keeps long commands readable:</p> <pre><code>  initContainers:
  - name: init-container
    image: busybox:1.28
    command: ["tool"]
    args:
    - input1
    - input2
    - output
    - -action
    - param1
    - -action
    - param2
      .
      .
      .
</code></pre>
wpnpeiris
<p>I have a Kubernetes cluster and some applications running on it. All of the Pods have their own resource limit/request settings.</p> <p>One of my applications uses tmpfs (/dev/shm) for its data processing, so I have described its volume setting like below to expand available tmpfs size:</p> <pre class="lang-yaml prettyprint-override"><code>volumes: - name: tmpfs emptyDir: medium: Memory sizeLimit: 1G </code></pre> <p>However, as Kubernetes doesn't take tmpfs usage into account when it schedules new Pods, some Pods may not be able to use their allocated memory despite being successfully scheduled based on its memory request.</p> <p>Is it possible to let Kubernetes consider the use of tmpfs? I want to set the memory request to the sum of its memory limit and tmpfs usage but it's not possible as the request must be less than or equal to the limit. I wonder there may be a proper way to do this.</p>
Daigo
<p><code>tmpfs</code> volume is treated as container memory usage, you should add the 1Gi to your <code>resources.limits.memory</code> for the scheduler to take this into consideration. It is fine to have the <code>resources.requests.memory</code> lower than your limits.</p>
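<p>As an illustrative sketch (the numbers are made up): if the application itself needs up to 1Gi and the tmpfs volume may hold another 1G, size the limit to cover both:</p> <pre><code># container level
resources:
  requests:
    memory: 1Gi
  limits:
    memory: 2Gi   # application memory + the 1G tmpfs sizeLimit
# pod level
volumes:
  - name: tmpfs
    emptyDir:
      medium: Memory
      sizeLimit: 1G
</code></pre>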
gohm'c
<p>I am launching a K8S cluster on AWS EKS with a nodegroup. I spin up a few instances running as k8s nodes, each with 4GiB of memory and 2 vCPUs.</p> <p>How can I launch a pod which requests 32GB memory and 8 vCPUs? I am getting memory/cpu errors.</p> <p>Is there a way for one pod to run on top of a few nodes?</p>
Joey Yi Zhao
<p><code>How can I launch a pod which requests 32GB memory and 8 vCPUs? I am getting memory/cpu errors.</code></p> <p>To get the actual allocatable capacity of a worker node, you can do <code>kubectl get node &lt;node name&gt; -o json | jq .status.allocatable</code>. You will find that a worker node's allocatable capacity is somewhat less than the instance type's nominal resources.</p> <p>You can then create a new node group with an instance type like m5.4xlarge for workloads that require 8 vCPUs and 32GB or more. Or you can modify the existing node group's <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/change-launch-config.html" rel="nofollow noreferrer">autoscaling group</a> to use a larger instance type. Finally, in your deployment spec, use <code>nodeSelector</code> to direct the scheduler where to run your pod:</p> <pre><code>nodeSelector:
  node.kubernetes.io/instance-type: m5.4xlarge
</code></pre> <p><code>Is there a way for one pod to run on top of a few nodes?</code></p> <p>A pod can only run on a single worker node.</p>
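<p>As a sketch, creating such a node group with eksctl could look like this (cluster and group names are placeholders):</p> <pre><code>eksctl create nodegroup --cluster my-cluster --name large-pool \
  --node-type m5.4xlarge --nodes 1 --nodes-min 1 --nodes-max 3
</code></pre>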
gohm'c
<p>What is the best practice to set environment variables for a Java app's deployment in a Helm chart so that I can use the same chart for dev and prod environments? I have separate Kubernetes deployments for both environments.</p> <pre><code>spec:
  containers:
    env:
    - name: SYSTEM_OPTS
    - value: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc ..."
</code></pre> <p>Similarly, my prod variables would be something like</p> <pre><code>"-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc ..."
</code></pre> <p>Now, how can I leverage Helm to write a single chart but create separate sets of pods with different properties according to the environment, as in</p> <pre><code>helm install my-app --set env=prod ./test-chart
</code></pre> <p>or</p> <pre><code>helm install my-app --set env=dev ./test-chart
</code></pre>
gsdk
<p>The best way is to use a single deployment template with a separate values file for each environment. This is not limited to environment variables; the same approach can be applied to any environment-specific configuration.</p> <p>Example:</p> <p>deployment.yaml</p> <pre><code>spec:
  containers:
    env:
    - name: SYSTEM_OPTS
      value: "{{ .Values.opts }}"
</code></pre> <p>values-dev.yaml</p> <pre><code># system opts
opts: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc "
</code></pre> <p>values-prod.yaml</p> <pre><code># system opts
opts: "-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc "
</code></pre> <p>Then specify the relevant values file in the helm command.</p> <p>For example, deploying to the dev environment:</p> <pre><code>helm install -f values-dev.yaml my-app ./test-chart
</code></pre>
wpnpeiris
<p>I have an HTTP2 service. It's deployed on EKS (AWS Kubernetes). And I am trying to expose it to the internet.</p> <p>If I am exposing it without TLS (with the code below) everything works fine. I can access it.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: demoapp spec: type: LoadBalancer ports: - name: http port: 80 targetPort: 5000 selector: name: demoapp </code></pre> <p>If I am adding TLS, I am getting HTTP 502 (Bad Gateway).</p> <pre><code>apiVersion: v1 kind: Service metadata: name: demoapp annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http service.beta.kubernetes.io/aws-load-balancer-ssl-cert: somearn service.beta.kubernetes.io/aws-load-balancer-ssl-ports: &quot;https&quot; spec: type: LoadBalancer ports: - name: https port: 443 targetPort: 5000 selector: name: demoapp </code></pre> <p>I have a guess (which could be wrong) that <code>service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http</code> for reason assumes that it's HTTP 1.1 (vs HTTP 2.0) and bark when one of the sides start sending binary (vs textual data).</p> <p>Additional note: I am not using any Ingress controller.</p> <p>And a thought. Potentially, I can bring TLS termination within the app (vs doing it on the load balancer) and switch as an example to NLB. However, brings a lot of hair in the solution and I would rather use load balancer for it.</p>
Victor Ronin
<p>Based on the annotations in your question, TLS should terminate at the CLB, and you should remove <code>service.beta.kubernetes.io/aws-load-balancer-backend-protocol</code> (it defaults to <code>tcp</code>).</p>
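<p>As a sketch, the Service from your question would then look something like this (the certificate ARN stays a placeholder):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: demoapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: somearn
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: &quot;https&quot;
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 5000
  selector:
    name: demoapp
</code></pre>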
gohm'c
<p>Do we have some API in Kubernetes to know if we have enough resources to run a deployment of pods before sending the request, in an on-prem environment?</p> <p>The reason I need it is that I want to run a pod from another pod, and I want to make sure I have enough resources.</p>
Ohad
<p>To get information about CPU(cores), CPU%, memory, and memory% usage of your nodes, you can run the <code>kubectl top nodes</code> command.</p> <p>For example:</p> <pre><code>kubectl top node --use-protocol-buffers
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
example-node   500m         50%    573Mi           20%
</code></pre> <ul> <li>CPU(cores) - 500m means 500 millicpu. 1000m is equal to 1 CPU, hence 500m means 50% of 1 CPU.</li> <li>CPU% - Total CPU usage percentage of that node.</li> <li>Memory - Memory being used by that node.</li> <li>Memory% - Total memory usage percentage of that node.</li> </ul>
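<p>If you need the same information programmatically (for example from the pod that will launch the new one), two useful starting points, assuming metrics-server is installed:</p> <pre><code># raw resource usage from the metrics API
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# allocatable capacity and the requests already scheduled on a node
kubectl describe node &lt;node-name&gt; | grep -A 8 'Allocated resources'
</code></pre>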
Mikolaj
<p>I am trying from a Java program to start a Kubernetes job and wait for its completion. However, there seems to have no equivalent of kubectl wait in the official Java API. My program is already using io.kubernetes.client and I don't want to rewrite everything on top of another API, or use two different Kubernetes API. I also want to avoid having to once again figure out how to call processes from Java to invoke kubectl, and moreover kubectl is not installed in the Docker image that will execute my Java program. If there is a method to wait for a pod/job completion in the current Java Kubernetes API, why is it so hard to find and Google always directs me to kubectl-based posts?</p> <p>I tried checking the io.kubernetes.client API. Only available &quot;wait&quot; method is the usual Object.wait which of course doesn't do what I need. Many searches on Google give results about kubectl or another Kubernetes Java API, something like fabline or labnine which I can hardly remember and make sense of.</p>
Eric Buist
<p>You can watch an API resource, and when the appropriate state change happens, run your further statements.</p> <p>For example, like here: <a href="https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/WatchExample.java#L48" rel="nofollow noreferrer">https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/WatchExample.java#L48</a></p> <p>When some change happens on the Pod resource, you can catch the event and get the current state of the Pod. You can check whether the current state is the state you want and, when it is, run your logic.</p>
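<p>If setting up a watch feels heavyweight, a simple polling loop against the Job status with the same official client also works; a rough sketch (job name and namespace are placeholders, and the generated method signatures vary slightly between client versions):</p> <pre><code>import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.BatchV1Api;
import io.kubernetes.client.openapi.models.V1Job;
import io.kubernetes.client.openapi.models.V1JobStatus;
import io.kubernetes.client.util.Config;

public class WaitForJob {
    public static void waitForCompletion(String namespace, String jobName) throws Exception {
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);
        BatchV1Api batchApi = new BatchV1Api();

        while (true) {
            // read only the status subresource of the Job
            V1Job job = batchApi.readNamespacedJobStatus(jobName, namespace, null);
            V1JobStatus status = job.getStatus();
            if (status != null &amp;&amp; status.getSucceeded() != null &amp;&amp; status.getSucceeded() &gt; 0) {
                return; // completed successfully
            }
            if (status != null &amp;&amp; status.getFailed() != null &amp;&amp; status.getFailed() &gt; 0) {
                throw new IllegalStateException(&quot;job &quot; + jobName + &quot; failed&quot;);
            }
            Thread.sleep(5000); // poll every 5 seconds
        }
    }
}
</code></pre>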
kubenetic
<p>I'd like to expand my PVC storage, but it does not expand and gives me this error:</p> <pre><code>kubectl patch pvc/&quot;data-mongodb-0&quot; -p='{&quot;spec&quot;: {&quot;resources&quot;: {&quot;requests&quot;: {&quot;storage&quot;: &quot;110Gi&quot;}}}}' </code></pre> <p>Describe of PVC:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ExternalExpanding 26s volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC. </code></pre> <p>I have my storage class <code>allowVolumeExpansion</code> true. This is basically an AWS EFS storage. I tested it and there is no issues expanding EBS volumes but cannot do it with the EFS ones. Any workarounds to make this happen?</p> <p>PVC spec:</p> <pre><code>spec: accessModes: - ReadWriteOnce resources: requests: storage: 100G storageClassName: aws-efs volumeMode: Filesystem volumeName: pvc-cea0000c-0000-4520-bac0-000000000 status: accessModes: - ReadWriteOnce capacity: storage: 100G phase: Bound </code></pre> <p>StorageClass:</p> <pre><code>allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: aws-efs provisioner: mongodb/aws-efs reclaimPolicy: Retain volumeBindingMode: Immediate </code></pre>
titanic
<p>AWS EFS does not support <code>allowVolumeExpansion</code> mainly because the mounted volume has a logical size of 8 exabytes which means unlimited. There's no need to expand the volume.</p>
gohm'c
<p>So I am deploying an EMR application using EKS, following the EMR-EKS workshop by AWS, declaring the Fargate profile.</p> <p>I tried everything to be serverless, so even my EKS Cluster runs on Fargate (kube-sytem, default, etc).</p> <p>I created a custom namespace with a Fargate profile, and submit a Spark job. The job stayed at Pending status.</p> <p>When I added some managed nodegroups, the job was submitted successfully.</p> <p>I tried submitting very light jobs and by the time I removed the managed node groups, spark jobs stay at pending status.</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 33s (x7 over 5m43s) default-scheduler 0/2 nodes are available: 2 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate. </code></pre> <p><strong>eksworkshop-eksctl.yaml</strong></p> <pre><code>apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: name: eksworkshop-eksctl region: ${AWS_REGION} version: &quot;1.21&quot; availabilityZones: [&quot;${AZS[0]}&quot;, &quot;${AZS[1]}&quot;, &quot;${AZS[2]}&quot;, &quot;${AZS[3]}&quot;] fargateProfiles: - name: default selectors: # All workloads in the &quot;default&quot; Kubernetes namespace will be # scheduled onto Fargate: - namespace: default # All workloads in the &quot;kube-system&quot; Kubernetes namespace will be # scheduled onto Fargate: - namespace: kube-system </code></pre> <p>Create the EKS cluster:</p> <pre><code>eksctl create cluster -f eksworkshop-eksctl.yaml </code></pre> <p>Create namespace and fargate profile:</p> <pre><code>kubectl create namespace spark eksctl create fargateprofile --cluster eksworkshop-eksctl --name emr \ --namespace spark --labels type=etl </code></pre> <p>Create virtual cluster:</p> <p>I have already added the labels under sparkSubmitParameters, but it is still stuck in Pending state. 
:( Is there additional configuration I need to add when creating virtual cluster:</p> <pre><code>aws emr-containers create-virtual-cluster \ --name eksworkshop-eksctl \ --container-provider '{ &quot;id&quot;: &quot;eksworkshop-eksctl&quot;, &quot;type&quot;: &quot;EKS&quot;, &quot;info&quot;: { &quot;eksInfo&quot;: { &quot;namespace&quot;: &quot;spark&quot; } } }' </code></pre> <p>Submit the spark job:</p> <pre><code>aws emr-containers start-job-run --virtual-cluster-id $VIRTUAL_CLUSTER_ID --name spark-pi-logging --execution-role-arn $EMR_ROLE_ARN --release-label emr-6.2.0-latest --job-driver '{ &quot;sparkSubmitJobDriver&quot;: { &quot;entryPoint&quot;: &quot;s3://aws-data-analytics-workshops/emr-eks-workshop/scripts/pi.py&quot;, &quot;sparkSubmitParameters&quot;: &quot;--conf spark.kubernetes.driver.label.type=etl --conf spark.kubernetes.executor.label.type=etl --conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1&quot; } }' --configuration-overrides '{ &quot;applicationConfiguration&quot;: [ { &quot;classification&quot;: &quot;spark-defaults&quot;, &quot;properties&quot;: { &quot;spark.driver.memory&quot;:&quot;2G&quot; } } ], &quot;monitoringConfiguration&quot;: { &quot;cloudWatchMonitoringConfiguration&quot;: { &quot;logGroupName&quot;: &quot;/emr-containers/jobs&quot;, &quot;logStreamNamePrefix&quot;: &quot;emr-eks-workshop&quot; }, &quot;s3MonitoringConfiguration&quot;: { &quot;logUri&quot;: &quot;'&quot;$s3DemoBucket&quot;'/logs/&quot; } } }' </code></pre> <p>For EKS cluster, is NodeGroup must be declared and is mandatory, and we can't run Spark Job just be using Fargate only?</p>
Charmee Lee
<p>Your situation:</p> <p><code>...created fargate-profile with the same namespace: Job completed</code></p> <p><code>... created fargate-profile with the same namespace but with labels...Job stucked in Pending </code></p> <p>If a namespace is associated with more than one profile, EKS will randomly choose a profile. When it picks a profile that requires a label and your Spark job doesn't have it, the job goes into the Pending state, since there is no other node group in your cluster.</p> <p><code>Is it required to have 2 profiles, one with label, and one with no label</code></p> <p>No, in fact you should not do this. To run a job on a profile with labels, your job's <code>sparkSubmitParameters</code> must specify the label:</p> <p><code>&quot;sparkSubmitParameters&quot;: &quot;...--conf spark.kubernetes.driver.label.&lt;label key&gt;=&lt;value&gt; --conf spark.kubernetes.executor.label.&lt;label key&gt;=&lt;value&gt;...&quot;</code></p> <p>You don't need more than one Fargate profile unless you need to distinguish Spark jobs by Fargate profile selector.</p>
gohm'c
<p>This is the YAML file that I am trying to use for CronJob creation. I am getting an error like: unknown field &quot;container&quot; in io.k8s.api.core.v1.PodSpec</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: abc-service-cron-job
spec:
  schedule: &quot;* * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          container:
          - name: abc-service-cron-job
            image: docker.repo1.xyz.com/hui-services/abc-application/REPLACE_ME
            imagePullPolicy: Always
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
</code></pre>
lucky
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  ...
spec:
  ...
  jobTemplate:
    spec:
      template:
        spec:
          containers:  # &lt;-- you have a spelling error here; it should be &quot;containers&quot;
            ...
</code></pre>
gohm'c
<p>I am new with Kubernetes. I have created the control node and wanted to add a service user to login in dashboard.</p> <pre><code>root@bm-mbi-01:~# cat admin-user.yaml apiVersion: v1 kind: ServiceAccount metadata: name: admin-user namespace: kube-system root@bm-mbi-01:~# cat admin-user-clusterrolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: admin-user roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: admin-user namespace: kube-system root@bm-mbi-01:~# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') Name: admin-user-token-kd8c8 Namespace: kube-system Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name: admin-user kubernetes.io/service-account.uid: 226e0ea4-9d2e-480e-8b1d-709b9860e561 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes namespace: 11 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjVZOS02T3M2T3AwNUZhQXA3NDdJZENXZlpIU2F6UUtNdEdJNmd3MFg0WEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWtkOGM4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMjZlMGVhNC05ZDJlLTQ4MGUtOGIxZC03MDliOTg2MGU1NjEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.OfRZlszXRt5AKxCumqSicPOkIK6g-fqPzitH_DjqskFxz6SzwYoDeFIPqyQ8O_6SFFgU6b-lgwiRmZtoj3dTKxr04PDl_t37KD7QTmBtX33vrW_sgq2EFbRkaiRxyTvFPjQDmo04iiyOQmlfzj67MIbgYYmem3NaTqgqx-j-SEi-CKTwVM4JyGa3GrTN7xeRfsFNSq1YOV6Yx1keyiD-gVEZiDxkBCJcdCJOM6p6q1s3cXgH1KWIDYkGXIHFX1f0tvu4xlr_-jgpSVehaAU98WN9DtgXL16ny1ckgKL1mPpBezrjVrf4k1lOSsXHWuE1cnlG9SnUIhbZ9k11HQJNtw root@bm-mbi-01:~# </code></pre> <p>Used this token to login in dashboard. But after clicking in login, no response.</p>
SamUnix
<ul> <li>With the IP, the URL is browsable, but clicking the login button does not work.</li> <li>Finally solved it by doing SSH local port forwarding.</li> <li>The Kubernetes proxy was started by:</li> </ul> <pre><code>root@bm-mbi-01:~# kubectl proxy --address=10.20.200.75 --accept-hosts=.* &amp;
</code></pre> <ul> <li>SSH tunnel from my local PC to the bm-mbi-01 server:</li> </ul> <pre><code> ✘ s.c@MB-SC  ~  ssh -L 8001:localhost:8001 bmadmin@bm-mbi-01
</code></pre> <p><a href="https://i.stack.imgur.com/4qXml.png" rel="nofollow noreferrer">screenshot</a></p>
SamUnix
<p>I wouldd to create a persistent volume on my kubernetes (gcp) cluster and use it in my django app as , for example, media folder. On my kubernetes side i do:</p> <p>First create a volumes claim:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-zeus namespace: ctest spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> <p>, then in my deployments.yaml i create a volume and associate to the pod:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: django namespace: ctest labels: app: django spec: replicas: 3 selector: matchLabels: app: django template: metadata: labels: app: django spec: volumes: - name: cc-volume persistentVolumeClaim: claimName: pvc-zeus containers: - name: django image: gcr.io/direct-variety-3066123/cc-mirror volumeMounts: - mountPath: &quot;/app/test-files&quot; name: cc-volume ... </code></pre> <p>then in my django settings:</p> <pre><code>MEDIA_URL = '/test-files/' </code></pre> <p>Here my Dockerfile:</p> <pre><code>FROM python:3.8-slim ENV PROJECT_ROOT /app WORKDIR $PROJECT_ROOT COPY requirements.txt requirements.txt RUN pip install -r requirements.txt COPY . . RUN chmod +x run.sh CMD python manage.py runserver --settings=settings.kube 0.0.0.0:8000 </code></pre> <p>when i apply volume claim on my cluster all was done (volume claim was created) but whe apply deployment.yaml no volume was created for the pods (also if i connect in bash to my pods, no folder test-files exist).</p> <p>How can i create a volume on my deployments pods and use it in my django app?</p> <p>So many thanks in advance</p>
Manuel Santi
<p>You need to have one of two Kubernetes objects in place in order to make a PVC: a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolume</a>(PV) or a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a>(SC).</p> <p>As you showed, your PVC <strong>does not indicate a PV or a SC</strong> from which to create a volume.</p> <p>Usually, when you don't specify a PV or an SC in a PVC, the cluster's default SC (if one is configured) will be used to provision the volume.</p> <p>If you just want to work with the default SC, check whether your specific cluster has one active or whether you need to create/activate it.</p>
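<p>A quick way to check whether your cluster already has a default StorageClass:</p> <pre><code>kubectl get storageclass
# the default one, if any, is marked with (default) next to its name
</code></pre>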
RicHincapie
<p>I am trying to create a CI build in Github Actions for Kubernetes deployment with Terraform on Minikube. The Terraform apply fails on deploying provider with following message:</p> <pre><code>Invalid attribute in provider configuration with provider[&quot;registry.terraform.io/hashicorp/kubernetes&quot;], on providers.tf line 18, in provider &quot;kubernetes&quot;: 18: provider &quot;kubernetes&quot; { 'config_path' refers to an invalid path: &quot;/github/home/.kube/config&quot;: stat /github/home/.kube/config: no such file or directory </code></pre> <p>How can I resolve it? I have tried various approaches but so far nothing works. Everything works fine when I deploy it locally with Minikube.</p> <p>Relevant code snippets from Terraform:</p> <p>variables.tf:</p> <pre><code>variable &quot;kube_config&quot; { type = string default = &quot;~/.kube/config&quot; } </code></pre> <p>providers.tf:</p> <pre><code>provider &quot;kubernetes&quot; { config_path = pathexpand(var.kube_config) config_context = &quot;minikube&quot; } </code></pre> <p>Github Actions job:</p> <pre><code>jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: setup minikube uses: manusa/[email protected] with: minikube version: 'v1.28.0' kubernetes version: 'v1.25.4' github token: ${{ secrets.GITHUB_TOKEN }} driver: docker container runtime: docker - name: terraform-apply uses: dflook/[email protected] with: path: terraform-k8s auto_approve: true </code></pre> <p>I have also tried running it with official setup-minikube action, but doesn't work as well.</p>
RSK RSK
<p>Seems like I have managed to make it work by using the official HashiCorp action instead of the original one. Gonna check if it deploys everything in the end :)</p> <pre><code>      - uses: hashicorp/setup-terraform@v2
      - name: terraform-init
        run: terraform -chdir=terraform-k8s init
      - name: terraform-apply
        run: terraform -chdir=terraform-k8s apply -auto-approve
</code></pre>
RSK RSK
<p>In my project I need to use UID of the kube-system namespace, but could not find a way to get it. Is it possible to retrieve this?</p>
abdul wajid
<p><code>...UID of the kube-system namespace...</code></p> <pre><code>kubectl get namespace kube-system --output jsonpath='{.metadata.uid}'
</code></pre>
gohm'c
<p>I find some usecases of k8s in production which work with the Public Cloud will put a LoadBalancer type of Service in front of the Nginx Ingress. (You can find an example from the below yaml.)</p> <p>As I known, ingress can be used to expose the internal servcie to the public, so what's the point to put a loadbalancer in front of the ingress? Can I delete that service?</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: annotations: labels: helm.sh/chart: ingress-nginx-3.27.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.45.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller kubernetes.io/elb.class: union name: ingress-nginx-controller namespace: ingress-nginx spec: type: LoadBalancer loadBalancerIP: xxx.xxx.xxx.xxx externalTrafficPolicy: Cluster ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller </code></pre>
Charlie
<pre><code>...so what's the point to put a loadbalancer in front of the ingress?
</code></pre> <p>This way you can take advantage of the cloud provider's LB facilities (e.g. multi-AZ, etc.); then with Ingress you can further control routing using path- or name-based virtual hosts for services in the cluster.</p> <pre><code>Can I delete that service?
</code></pre> <p>Ingress doesn't do port mapping or pod selection, and you can't resolve an Ingress name with DNS.</p>
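<p>For example, behind that LoadBalancer Service the controller can then route by host and path with an ordinary Ingress resource (hostname and service name below are placeholders):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
</code></pre>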
gohm'c
<p>I want to assign an <code>RBAC</code> rule to a user using Terraform, providing access to all the resources except the <code>'create'</code> and <code>'delete'</code> verbs on the <code>'namespace'</code> resource.</p> <p>Currently we have the rule as stated below:</p> <pre><code>rule {
  api_groups = [&quot;*&quot;]
  resources  = [&quot;*&quot;]
  verbs      = [&quot;*&quot;]
}
</code></pre>
Praj
<p>As we can find in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">Role and ClusterRole documentation</a>, permissions (rules) are purely additive - there are no &quot;deny&quot; rules:</p> <blockquote> <p>Role and ClusterRole An RBAC Role or ClusterRole contains rules that represent a set of permissions. Permissions are purely additive (there are no &quot;deny&quot; rules).</p> </blockquote> <p>The list of possible verbs can be found <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb" rel="nofollow noreferrer">here</a>: <a href="https://i.stack.imgur.com/1KImX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KImX.png" alt="enter image description here" /></a></p> <p><br>You need to provide all verbs that should be applied to the resources contained in the rule.<br /> Instead of:</p> <pre><code>verbs = [&quot;*&quot;] </code></pre> <p>Provide required verbs e.g.:</p> <pre><code>verbs = [&quot;get&quot;, &quot;list&quot;, &quot;patch&quot;, &quot;update&quot;, &quot;watch&quot;] </code></pre> <p><br>As an example, I've created an <code>example-role</code> <code>Role</code> and an <code>example_role_binding</code> <code>RoleBinding</code>.<br /> The <code>example_role_binding</code> <code>RoleBinding</code> grants the permissions defined in the <code>example-role</code> <code>Role</code> to user <code>john</code>.<br /> <strong>NOTE:</strong> For details on using the following resources, see the <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/role" rel="nofollow noreferrer">kubernetes_role</a> and <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/role_binding" rel="nofollow noreferrer">kubernetes_role_binding</a> resource documentation.</p> <pre><code>resource &quot;kubernetes_role&quot; &quot;example_role&quot; { metadata { name = &quot;example-role&quot; namespace = &quot;default&quot; } rule { api_groups = [&quot;*&quot;] resources = [&quot;*&quot;] verbs = [&quot;get&quot;, &quot;list&quot;, &quot;patch&quot;, &quot;update&quot;, &quot;watch&quot;] } } resource &quot;kubernetes_role_binding&quot; &quot;example_role_binding&quot; { metadata { name = &quot;example_role_binding&quot; namespace = &quot;default&quot; } role_ref { api_group = &quot;rbac.authorization.k8s.io&quot; kind = &quot;Role&quot; name = &quot;example-role&quot; } subject { kind = &quot;User&quot; name = &quot;john&quot; api_group = &quot;rbac.authorization.k8s.io&quot; } } </code></pre> <p>Additionally, I've created the <code>test_user.sh</code> Bash script to quickly check if it works as expected:<br /> <strong>NOTE:</strong> You may need to modify the variables <code>namespace</code>, <code>resources</code>, and <code>user</code> to fit your needs.</p> <pre><code>$ cat test_user.sh #!/bin/bash namespace=default resources=&quot;pods deployments&quot; user=john echo &quot;=== NAMESPACE: ${namespace} ===&quot; for verb in create delete get list patch update watch; do echo &quot;-- ${verb} --&quot; for resource in ${resources}; do echo -n &quot;${resource}: &quot; kubectl auth can-i ${verb} ${resource} -n ${namespace} --as=${user} done done $ ./test_user.sh === NAMESPACE: default === -- create -- pods: no deployments: no -- delete -- pods: no deployments: no -- get -- pods: yes deployments: yes -- list -- pods: yes deployments: yes ... </code></pre>
matt_j
<p>This is my pod:</p> <pre class="lang-yaml prettyprint-override"><code>spec: containers: volumeMounts: - mountPath: /etc/configs/config.tmpl name: config-main readOnly: true subPath: config.tmpl volumes: - configMap: defaultMode: 420 name: config-main name: config-main </code></pre> <p>This is my config map:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: config-main data: config.tmpl: | # here goes my config content </code></pre> <p>This produces the following error:</p> <blockquote> <p>Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting &quot;/var/lib/kubelet/pods/dc66ebd1-90ef-4c25-bb69-4b3329f61a5a/volume-subpaths/config-main/podname/1&quot; to rootfs at &quot;/etc/configs/config.tmpl&quot; caused: mount through procfd: not a directory: unknown</p> </blockquote> <p>The container has pre-existing <code>/etc/configs/config.tmpl</code> file that I want to override</p>
chingis
<p>To mount it as a file instead of a directory, try:</p> <pre><code>spec:
  containers:
    volumeMounts:
    - mountPath: /etc/configs/config.tmpl
      name: config-main
      readOnly: true
      subPath: config.tmpl
  volumes:
  - configMap:
      defaultMode: 420
      items:
      - key: config.tmpl
        path: config.tmpl
      name: config-main
    name: config-main
</code></pre>
gohm'c
<p>Checking the latest image used in the metrics-server <a href="https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.5.0" rel="nofollow noreferrer">Github repo</a>, the tag used is <strong>v0.5.0</strong>; for arm64 I would usually add <em>arm64</em> to the image name and pull it.</p> <p>But that image doesn't exist, and doing an inspect on the base image shows that its arch is <em>amd64</em>.</p> <p>In <a href="https://console.cloud.google.com/gcr/images/google-containers/global/metrics-server" rel="nofollow noreferrer">google's registry</a> the latest image is <code>v0.3.6</code>, so I'm not sure whether support for arm64 has continued or stalled.</p>
amnestor
<p>There's no need to append <em>arm64</em> starting with v0.3.7; the image supports multiple architectures. See the official FAQ <a href="https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-to-run-metric-server-on-different-architecture" rel="nofollow noreferrer">here</a> with the complete image URL.</p>
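<p>You can confirm the supported architectures yourself, for example (assuming the image path given in the FAQ):</p> <pre><code>docker manifest inspect k8s.gcr.io/metrics-server/metrics-server:v0.5.0 | grep architecture
</code></pre>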
gohm'c
<p>I'm working with a Pod (Shiny Proxy) that talks to Kubernetes API to start other pods. I'm wanting to make this generic, and so don't want to hardcode the namespace (because I intend to have multiple of these, deployed probably as an OpenShift Template or similar).</p> <p>I am using Kustomize to set the namespace on all objects. Here's what my kustomization.yaml looks like for my overlay:</p> <pre class="lang-yaml prettyprint-override"><code>bases: - ../../base namespace: shiny commonAnnotations: technical_contact: A Local Developer &lt;[email protected]&gt; </code></pre> <p>Running Shiny Proxy and having it start the pods I need it to (I have service accounts and RBAC already sorted) works, so long as as in the configuration for Shiny Proxy I specify (hard-code) the namespace that the new pods should be generated in. The default namespace that Shiny Proxy will use is (unfortunately) 'default', which is inappropriate for my needs.</p> <p>Currently for the configuration I'm using a ConfigMap (perhaps I should move to a Kustomize ConfigMapGenerator)</p> <p>The ConfigMap in question is currently like the following:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: shiny-proxy data: application_yml: | ... container-backend: kubernetes kubernetes: url: https://kubernetes.default.svc.cluster.local:443 namespace: shiny ... </code></pre> <p>The above works, but 'shiny' is hardcoded; I would like to be able to do something like the following:</p> <pre><code> namespace: { .metadata.namespace } </code></pre> <p>But this doesn't appear to work in a ConfigMap, and I don't see anything in the documentation that would lead to believe that it would, or that a similar thing appears possible within the ConfigMap machinery.</p> <p>Looking over the Kustomize documentation doesn't fill me with clarity either, particularly as the configuration file is essentially plain-text (and not a YAML document as far as the ConfigMap is concerned). I've seen some use of Vars, but <a href="https://github.com/kubernetes-sigs/kustomize/issues/741" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/issues/741</a> leads to believe that's a non-starter.</p> <p>Is there a nice declarative way of handling this? Or should I be looking to have the templating smarts happen within the container, which seems kinda wrong to me, but I am still new to Kubernetes (and OpenShift)</p> <p>I'm using CodeReady Containers 1.24 (OpenShift 4.7.2) which is essentially Kubernetes 1.20 (IIRC). I'm preferring to keep this fairly well aligned with Kubernetes without getting too OpenShift specific, but this is still early days.</p> <p>Thanks, Cameron</p>
Cameron Kerr
<p>If you don't want to hard-code a specific data in your manifest file, you can consider using <a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/" rel="nofollow noreferrer">Kustomize plugins</a>. In this case, the <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer" rel="nofollow noreferrer">sedtransformer plugin</a> may be useful. This is an example plugin, maintained and tested by the kustomize maintainers, but not built-in to kustomize.<br /> As you can see in the <a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/" rel="nofollow noreferrer">Kustomize plugins guide</a>:</p> <blockquote> <p>Kustomize offers a plugin framework allowing people to write their own resource generators and transformers.</p> </blockquote> <p>For more information on creating and using Kustomize plugins, see <a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/" rel="nofollow noreferrer">Extending Kustomize</a>.</p> <hr /> <p>I will create an example to illustrate how you can use the <code>sedtransformer</code> plugin in your case.</p> <p>Suppose I have a <code>shiny-proxy</code> <code>ConfigMap</code>:<br /> <strong>NOTE:</strong> I don't specify a namespace, I use <code>namespace: NAMESPACE</code> instead.</p> <pre><code>$ cat cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: shiny-proxy data: application_yml: | container-backend: kubernetes kubernetes: url: https://kubernetes.default.svc.cluster.local:443 namespace: NAMESPACE something_else: something: something </code></pre> <p>To use the <code>sedtransformer</code> plugin, we first need to create the plugin’s configuration file which contains a YAML configuration object:<br /> <strong>NOTE:</strong> In <code>argsOneLiner:</code> I specify that <code>NAMESPACE</code> should be replaced with <code>shiny</code>.</p> <pre><code>$ cat sedTransformer.yaml apiVersion: someteam.example.com/v1 kind: SedTransformer metadata: name: sedtransformer argsOneLiner: s/NAMESPACE/shiny/g </code></pre> <p>Next, we need to put the <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer" rel="nofollow noreferrer">SedTransformer</a> Bash script in the right place.</p> <blockquote> <p>When loading, kustomize will first look for an executable file called <code>$XDG_CONFIG_HOME/kustomize/plugin/${apiVersion}/LOWERCASE(${kind})/${kind}</code></p> </blockquote> <p>I create the necessary directories and download the <code>SedTransformer</code> script from the Github:<br /> <strong>NOTE:</strong> The downloaded script need to be executable.</p> <pre><code>$ mkdir -p $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer $ cd $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer $ wget https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer $ chmod a+x SedTransformer </code></pre> <p>Finally, we can check if it works as expected:<br /> <strong>NOTE:</strong> To use this plugin, you need to provide the <code>--enable-alpha-plugins</code> flag.</p> <pre><code>$ tree . ├── cm.yaml ├── kustomization.yaml └── sedTransformer.yaml 0 directories, 3 files $ cat kustomization.yaml resources: - cm.yaml transformers: - sedTransformer.yaml $ kustomize build --enable-alpha-plugins . 
apiVersion: v1 data: application_yml: | container-backend: kubernetes kubernetes: url: https://kubernetes.default.svc.cluster.local:443 namespace: shiny something_else: something: something kind: ConfigMap metadata: name: shiny-proxy </code></pre> <p>Using the <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer" rel="nofollow noreferrer">sedtransformer plugin</a> can be especially useful if you want to replace <code>NAMESPACE</code> in a number of places.</p>
matt_j
<p>I have total <strong>3</strong> node pools are as follow:</p> <ol> <li><strong>database pool</strong> - regular node pool</li> <li><strong>content pool</strong> - regular node pool</li> <li><strong>content spot pool</strong> - spot node pool</li> </ol> <p>Initially, <strong>content pool</strong> have <strong>0 node</strong> count with enabled autoscaler. I have deployed one nginx pod deployment on the <strong>content spot pool</strong>. which has minimum node count 1 and maximum node count 3. The deployment file for nginx are as follow:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 2 preference: matchExpressions: - key: agentpool operator: In values: - contentspotpool - weight: 1 preference: matchExpressions: - key: agentpool operator: In values: - contentpool </code></pre> <p>When the content spot pool is evicted I want that the pod on the <strong>content spot pool</strong> are to be shifted on <strong>content pool</strong>. But, the pod are scheduled on the <strong>database pool</strong>..!</p> <p>Can anyone tell me how I can achieve this?..</p> <p>Also How can I setup a <strong>database pool</strong> in such way that it refuses all the new pods?</p> <p>AKS version used - 1.18.14</p>
Kaivalya Dambalkar
<p>I decided to provide a Community Wiki answer as there was a similar answer, but it was deleted by its author.</p> <p>In this case, you can use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and Tolerations</a>:</p> <blockquote> <p>Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.</p> </blockquote> <p>You may add a <code>taint</code> to nodes from the <strong>database pool</strong> and specify a <code>toleration</code> that &quot;matches&quot; this <code>taint</code> only for Pods that can be scheduled on the <strong>database pool</strong>.</p> <hr /> <p>I've created a simple example to illustrate how it may work.</p> <p>I have only one worker node and I added a specific <code>taint</code> to this node:</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE database-pool Ready &lt;none&gt; 6m9s $ kubectl taint nodes database-pool type=database:NoSchedule node/database-pool tainted $ kubectl describe node database-pool | grep -i taint Taints: type=database:NoSchedule </code></pre> <p>Only <code>Pods</code> with the following <code>toleration</code> will be allowed to be scheduled onto the <code>database-pool</code> node:</p> <pre><code>tolerations: - key: &quot;type&quot; operator: &quot;Equal&quot; value: &quot;database&quot; effect: &quot;NoSchedule&quot; </code></pre> <p>I created two <code>Pods</code>: <code>web</code> (does not tolerate the <code>taint</code>) and <code>web-with-toleration</code> (tolerates the <code>taint</code>):</p> <pre><code>$ cat web.yml apiVersion: v1 kind: Pod metadata: labels: run: web name: web spec: containers: - image: nginx name: web $ cat web-with-toleration.yml apiVersion: v1 kind: Pod metadata: labels: run: web name: web-with-toleration spec: containers: - image: nginx name: web tolerations: - key: &quot;type&quot; operator: &quot;Equal&quot; value: &quot;database&quot; effect: &quot;NoSchedule&quot; $ kubectl apply -f web.yml pod/web created $ kubectl apply -f web-with-toleration.yml pod/web-with-toleration created </code></pre> <p>Finally, we can check which <code>Pod</code> has been correctly scheduled:</p> <pre><code>$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE web 0/1 Pending 0 6m13s &lt;none&gt; &lt;none&gt; web-with-toleration 1/1 Running 0 6m8s 10.76.0.14 database-pool </code></pre> <p><br>It is possible to use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer"><em>Node affinity</em></a> and <em>Taints</em> at the same time to have fine-grained control over the placement of <code>Pods</code> on specific nodes.</p> <blockquote> <p>Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a node to repel a set of pods.</p> </blockquote>
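<p>Applied to the original question (a sketch; the taint key/value <code>type=database</code> is just an example, and I assume the database pool's <code>agentpool</code> label value is <code>databasepool</code>), you would taint only the <strong>database pool</strong> nodes and keep the nginx Deployment without a matching toleration, so those nodes repel it while the two content pools that its node affinity already prefers can accept it:</p> <pre><code># Taint all nodes of the database pool
$ kubectl taint nodes -l agentpool=databasepool type=database:NoSchedule

# AKS can also apply the taint at the node pool level when the pool is created
# (cluster and resource group names below are placeholders):
$ az aks nodepool add --cluster-name myCluster --resource-group myRG \
    --name databasepool --node-taints type=database:NoSchedule
</code></pre>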
matt_j
<p>I have an Angular application which is deployed on an Apache container running on Kubernetes. I want to set up liveness and readiness probes for the pods but I am out of ideas. Any help would be appreciated.</p>
Ladu anand
<p>Based on the link you provided, you can use the following as a start:</p> <pre><code>apiVersion: v1 kind: Pod ... spec: ... containers: - name: ... ... livenessProbe: tcpSocket: port: 80 # &lt;-- live when your web server is running initialDelaySeconds: 5 # &lt;-- wait 5s before starting to probe periodSeconds: 20 # &lt;-- do this probe every 20 seconds readinessProbe: httpGet: path: / # &lt;-- ready to accept connections when your home page is serving port: 80 initialDelaySeconds: 15 periodSeconds: 10 failureThreshold: 3 # &lt;-- must not fail &gt; 3 probes </code></pre>
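<p>A quick way to confirm the probes behave as expected (a sketch; <code>angular-app</code> is a placeholder pod name, and port 80 assumes the default Apache port) is to look at the pod's events, where failed probes show up as <code>Unhealthy</code> warnings:</p> <pre><code># Probe results are reported as events on the pod
$ kubectl describe pod angular-app

# Or filter the events for that pod directly
$ kubectl get events --field-selector involvedObject.name=angular-app
</code></pre>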
gohm'c
<p>I have several pods where the rollout (initial and later updates) needs to be performed one by one. (Actually only the first needs to be ready before the remaining ones can start or be upgraded.) I used a stateful set for that, as it makes sure only one at a time is updated or created, but we are using telepresence for development and it doesn’t support replacing stateful sets. So I thought I could use a deployment instead of a stateful set with a rolling update strategy and limit maxUnavailable or maxSurge to “throttle” the deployment. But for the initial deployment that doesn’t work, as K8s creates the desired 2 at once instead of one by one. Is there a way to achieve that with a deployment or do I need to use a stateful set? (Alternatively: is there a trick to use telepresence with stateful sets?)</p> <p>Clarification based on questions in comments:</p> <ul> <li>The problematic software here is Flyway in combination with MariaDB in cluster mode. In that setup the table locking doesn't work, and simultaneously starting pods can try to perform schema and data updates at the same time</li> <li>init containers don't help, as they start at the same time for multiple instances of the pod and just make sure that the main container of each instance is started after the init container</li> <li>The problem is only on first initialization, because afterwards I can configure the rolling update strategy to only update one container at a time. In case of scaling out I'd have to do it in increments of 1, but that would be a manual process anyway.</li> <li>I could make sure that the deployment descriptor for new deployments uses scale 1 and updates to scale 2 afterwards, but that leads to a very complicated automatic deployment process with variable scales depending on the state, and the build chain would need to check if a deployment is present to decide if it's an update or a first deployment. That would be error-prone and overly complex.</li> </ul>
Patrick Cornelissen
<p>In my opinion, there are two possible solutions, but both require extra effort.<br /> I will describe both solutions and you can choose which one suits you best.</p> <hr /> <h3>Solution one: Deploying application with a script</h3> <p>You can deploy your application with the script below:</p> <pre><code>$ cat deploy.sh #!/bin/bash # Usage: deploy.sh DEPLOYMENT_FILENAME NAMESPACE NUMBER_OF_REPLICAS deploymentFileName=$1 # Deployment manifest file name namespace=$2 # Namespace where app should be deployed replicas=$3 # Number of replicas that should be deployed # First deploy ONLY one replica - sed command changes actual number of replicas to 1 but the original manifest file remains intact cat ${deploymentFileName} | sed -E 's/replicas: [0-9]+$/replicas: 1/' | kubectl apply -n $namespace -f - # The &quot;until&quot; loop waits until the first replica is ready # Check deployment rollout status every 10 seconds (max 10 minutes) until complete. attempts=0 rollout_cmd=&quot;kubectl rollout status -f ${deploymentFileName} -n $namespace&quot; until $rollout_cmd || [ $attempts -eq 60 ]; do $rollout_cmd attempts=$((attempts + 1)) sleep 10 done if [ $attempts -eq 60 ]; then echo &quot;ERROR: Timeout&quot; exit 1 fi # With the first replica running, deploy the rest unless we want to deploy only one replica if [ $replicas -ne 1 ]; then kubectl scale -f ${deploymentFileName} -n $namespace --replicas=${replicas} fi </code></pre> <p>I created a simple example to illustrate how it works.</p> <p>First, I created the <code>web-app.yml</code> deployment manifest file:</p> <pre><code>$ kubectl create deployment web-app --image=nginx --replicas=3 --dry-run=client -oyaml &gt; web-app.yml </code></pre> <p>Then I deployed the <code>web-app</code> Deployment using the <code>deploy.sh</code> script:</p> <pre><code>$ ./deploy.sh web-app.yml default 3 deployment.apps/web-app created Waiting for deployment &quot;web-app&quot; rollout to finish: 0 of 1 updated replicas are available... 
deployment &quot;web-app&quot; successfully rolled out deployment.apps/web-app scaled </code></pre> <p>From another terminal window, we can see that only when the first replica (<code>web-app-5cd54cb75-krgtc</code>) was in the &quot;Running&quot; state, the rest started to start:</p> <pre><code>$ kubectl get pod -w NAME READY STATUS RESTARTS AGE web-app-5cd54cb75-krgtc 0/1 Pending 0 0s web-app-5cd54cb75-krgtc 0/1 Pending 0 0s web-app-5cd54cb75-krgtc 0/1 ContainerCreating 0 0s web-app-5cd54cb75-krgtc 1/1 Running 0 4s # First replica in the &quot;Running state&quot; web-app-5cd54cb75-tmpcn 0/1 Pending 0 0s web-app-5cd54cb75-tmpcn 0/1 Pending 0 0s web-app-5cd54cb75-sstg6 0/1 Pending 0 0s web-app-5cd54cb75-tmpcn 0/1 ContainerCreating 0 0s web-app-5cd54cb75-sstg6 0/1 Pending 0 0s web-app-5cd54cb75-sstg6 0/1 ContainerCreating 0 0s web-app-5cd54cb75-tmpcn 1/1 Running 0 5s web-app-5cd54cb75-sstg6 1/1 Running 0 7s </code></pre> <h3>Solution two: Using initContainers</h3> <p>You can use the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init Container</a> that will run a script that determines which Pod should be run first:</p> <pre><code>$ cat checker.sh #!/bin/bash labelName=&quot;app&quot; # label key of you rapplication labelValue=&quot;web&quot; # label value of your application hostname=$(hostname) apt update &amp;&amp; apt install -y jq 1&gt;/dev/null 2&gt;&amp;1 # install the jq program - command-line JSON processor startFirst=$(curl -sSk -H &quot;Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&quot; &quot;https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/?labelSelector=${labelName}%3D${labelValue}&amp;limit=500&quot; | jq '.items[].metadata.name' | sort | head -n1 | tr -d '&quot;&quot;') # determine which Pod should start first -&gt; kubectl get pods -l app=web -o=name | sort | head -n1 firstPodStatusChecker=$(curl -sSk -H &quot;Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&quot; &quot;https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/${startFirst}&quot;| jq '.status.phase' | tr -d '&quot;&quot;') # check status of the Pod that should start first attempts=0 if [ ${hostname} != ${startFirst} ] then while [ ${firstPodStatusChecker} != &quot;Running&quot; ] &amp;&amp; [ $attempts -lt 60 ]; do attempts=$((attempts + 1)) sleep 5 firstPodStatusChecker=$(curl -sSk -H &quot;Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&quot; &quot;https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/${startFirst}&quot;| jq '.status.phase' | tr -d '&quot;&quot;') # check status of the Pod that should start first done fi if [ $attempts -eq 60 ]; then echo &quot;ERROR: Timeout&quot; exit 1 fi </code></pre> <p>The most important line in this script is:</p> <pre><code>startFirst=$(curl -sSk -H &quot;Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&quot; &quot;https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/?labelSelector=${labelName}%3D${labelValue}&amp;limit=500&quot; | jq '.items[].metadata.name' | sort | head -n1 | tr -d '&quot;&quot;') </code></pre> <p>This line determines which Pod should start first and the rest of replicas will wait until the first Pod is started. I'm using <code>curl</code> command to access the API from the Pod. 
We don't need to create complex <code>curl</code> commands manually but we can easily convert <code>kubectl</code> command to <code>curl</code> command - if you run <code>kubectl</code> command with <code>-v=10</code> option you can see <code>curl</code> requests.</p> <p><strong>NOTE:</strong> In this approach, you need to add appropriate permissions for the <code>ServiceAccount</code> to communicate with the API. For example you may add a <code>view</code> role to your <code>ServiceAccount</code> like this:</p> <pre><code>$ kubectl create clusterrolebinding --serviceaccount=default:default --clusterrole=view default-sa-view-access clusterrolebinding.rbac.authorization.k8s.io/default-sa-view-access created </code></pre> <p>You can mount this <code>checker.sh</code> script e.g. as a <code>ConfigMap</code>:</p> <pre><code>$ cat check-script-configmap.yml apiVersion: v1 kind: ConfigMap metadata: name: check-script data: checkScript.sh: | #!/bin/bash labelName=&quot;app&quot; # label key of you rapplication labelValue=&quot;web&quot; # label value of your application hostname=$(hostname) apt update &amp;&amp; apt install -y jq 1&gt;/dev/null 2&gt;&amp;1 # install the jq program - command-line JSON processor startFirst=$(curl -sSk -H &quot;Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&quot; &quot;https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/?labelSelector=${labelName}%3D${labelValue}&amp;limit=500&quot; | jq '.items[].metadata.name' | sort | head -n1 | tr -d '&quot;&quot;') # determine which Pod should start first -&gt; kubectl get pods -l app=web -o=name | sort | head -n1 firstPodStatusChecker=$(curl -sSk -H &quot;Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&quot; &quot;https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/${startFirst}&quot;| jq '.status.phase' | tr -d '&quot;&quot;') # check status of the Pod that should start first attempts=0 if [ ${hostname} != ${startFirst} ] then while [ ${firstPodStatusChecker} != &quot;Running&quot; ] &amp;&amp; [ $attempts -lt 60 ]; do attempts=$((attempts + 1)) sleep 5 firstPodStatusChecker=$(curl -sSk -H &quot;Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&quot; &quot;https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/${startFirst}&quot;| jq '.status.phase' | tr -d '&quot;&quot;') # check status of the Pod that should start first done fi if [ $attempts -eq 60 ]; then echo &quot;ERROR: Timeout&quot; exit 1 fi </code></pre> <p>I also created simple example to illustrate you how it works.</p> <p>First, I created the above <code>check-script</code> ConfigMap:</p> <pre><code>$ kubectl apply -f check-script-configmap.yml configmap/check-script created </code></pre> <p>Then I mounted this <code>ConfigMap</code> to the <code>initContainer</code> and deployed this Deployment:</p> <pre><code>$ cat web.yml apiVersion: apps/v1 kind: Deployment metadata: labels: app: web name: web spec: replicas: 3 selector: matchLabels: app: web template: metadata: labels: app: web spec: volumes: - name: check-script configMap: name: check-script initContainers: - image: nginx name: init command: [&quot;bash&quot;, &quot;/mnt/checkScript.sh&quot;] volumeMounts: - name: check-script mountPath: /mnt/ containers: - image: nginx name: nginx $ kubectl apply -f web.yml deployment.apps/web created </code></pre> <p>From another terminal window, we can see that only when the first replica 
(<code>web-98c4d45dd-6zcsd</code>) was in the &quot;Running&quot; state did the rest start:</p> <pre><code>$ kubectl get pod -w NAME READY STATUS RESTARTS AGE web-98c4d45dd-ztjlf 0/1 Pending 0 0s web-98c4d45dd-ztjlf 0/1 Pending 0 0s web-98c4d45dd-6zcsd 0/1 Pending 0 0s web-98c4d45dd-mc2ww 0/1 Pending 0 0s web-98c4d45dd-6zcsd 0/1 Pending 0 0s web-98c4d45dd-mc2ww 0/1 Pending 0 0s web-98c4d45dd-ztjlf 0/1 Init:0/1 0 0s web-98c4d45dd-6zcsd 0/1 Init:0/1 0 0s web-98c4d45dd-mc2ww 0/1 Init:0/1 0 1s web-98c4d45dd-6zcsd 0/1 Init:0/1 0 3s web-98c4d45dd-ztjlf 0/1 Init:0/1 0 3s web-98c4d45dd-mc2ww 0/1 Init:0/1 0 4s web-98c4d45dd-6zcsd 0/1 PodInitializing 0 10s web-98c4d45dd-6zcsd 1/1 Running 0 12s web-98c4d45dd-mc2ww 0/1 PodInitializing 0 23s web-98c4d45dd-ztjlf 0/1 PodInitializing 0 23s web-98c4d45dd-mc2ww 1/1 Running 0 25s web-98c4d45dd-ztjlf 1/1 Running 0 26s </code></pre>
matt_j
<p>I am a newbie to k8s and I am trying to deploy a private docker registry in Kubernetes.</p> <p>The problem is that whenever I have to upload a heavy image (1GB size) via <code>docker push</code>, the command eventually returns EOF.</p> <p>Apparently, I believe the issue has to do with kubernetes ingress nginx controller.</p> <p>I will provide you with some useful information, in case you need more, do not hesitate to ask:</p> <p>Docker push (to internal k8s docker registry) fail:</p> <pre><code>[root@bastion ~]# docker push docker-registry.apps.kube.lab/example:stable The push refers to a repository [docker-registry.apps.kube.lab/example] c0acde035881: Pushed f6d2683cee8b: Pushed 00b1a6ab6acd: Retrying in 1 second 28c41b4dd660: Pushed 36957997ca7a: Pushed 5c4d527d6b3a: Pushed a933681cf349: Pushing [==================================================&gt;] 520.4 MB f49d20b92dc8: Retrying in 20 seconds fe342cfe5c83: Retrying in 15 seconds 630e4f1da707: Retrying in 13 seconds 9780f6d83e45: Waiting EOF </code></pre> <p>Ingress definition:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: docker-registry namespace: docker-registry annotations: nginx.ingress.kubernetes.io/proxy-body-size: &quot;0&quot; nginx.ingress.kubernetes.io/proxy-connect-timeout: &quot;86400&quot; nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;86400&quot; nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;86400&quot; spec: rules: - host: docker-registry.apps.kube.lab http: paths: - backend: serviceName: docker-registry servicePort: 5000 path: / </code></pre> <p>Docker registry configuration (/etc/docker/registry/config.yml):</p> <pre><code>version: 0.1 log: level: info formatter: json fields: service: registry storage: redirect: disable: true cache: blobdescriptor: inmemory filesystem: rootdirectory: /var/lib/registry http: addr: :5000 host: docker-registry.apps.kube.lab headers: X-Content-Type-Options: [nosniff] health: storagedriver: enabled: true interval: 10s threshold: 3 </code></pre> <p>Docker registry logs:</p> <pre><code>{&quot;go.version&quot;:&quot;go1.11.2&quot;,&quot;http.request.host&quot;:&quot;docker-registry.apps.kube.lab&quot;,&quot;http.request.id&quot;:&quot;c079b639-0e8a-4a27-96fa-44c4c0182ff7&quot;,&quot;http.request.method&quot;:&quot;HEAD&quot;,&quot;http.request.remoteaddr&quot;:&quot;10.233.70.0&quot;,&quot;http.request.uri&quot;:&quot;/v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;http.request.useragent&quot;:&quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))&quot;,&quot;level&quot;:&quot;debug&quot;,&quot;msg&quot;:&quot;authorizing request&quot;,&quot;time&quot;:&quot;2020-11-07T14:43:22.893626513Z&quot;,&quot;vars.digest&quot;:&quot;sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;vars.name&quot;:&quot;example&quot;} {&quot;go.version&quot;:&quot;go1.11.2&quot;,&quot;http.request.host&quot;:&quot;docker-registry.apps.kube.lab&quot;,&quot;http.request.id&quot;:&quot;c079b639-0e8a-4a27-96fa-44c4c0182ff7&quot;,&quot;http.request.method&quot;:&quot;HEAD&quot;,&quot;http.request.remoteaddr&quot;:&quot;10.233.70.0&quot;,&quot;http.request.uri&quot;:&quot;/v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;http.request.useragent&quot;:&quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 
UpstreamClient(Docker-Client/1.13.1 \\(linux\\))&quot;,&quot;level&quot;:&quot;debug&quot;,&quot;msg&quot;:&quot;GetBlob&quot;,&quot;time&quot;:&quot;2020-11-07T14:43:22.893751065Z&quot;,&quot;vars.digest&quot;:&quot;sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;vars.name&quot;:&quot;example&quot;} {&quot;go.version&quot;:&quot;go1.11.2&quot;,&quot;http.request.host&quot;:&quot;docker-registry.apps.kube.lab&quot;,&quot;http.request.id&quot;:&quot;c079b639-0e8a-4a27-96fa-44c4c0182ff7&quot;,&quot;http.request.method&quot;:&quot;HEAD&quot;,&quot;http.request.remoteaddr&quot;:&quot;10.233.70.0&quot;,&quot;http.request.uri&quot;:&quot;/v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;http.request.useragent&quot;:&quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))&quot;,&quot;level&quot;:&quot;debug&quot;,&quot;msg&quot;:&quot;filesystem.GetContent(\&quot;/docker/registry/v2/repositories/example/_layers/sha256/751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee/link\&quot;)&quot;,&quot;time&quot;:&quot;2020-11-07T14:43:22.893942372Z&quot;,&quot;trace.duration&quot;:74122,&quot;trace.file&quot;:&quot;/go/src/github.com/docker/distribution/registry/storage/driver/base/base.go&quot;,&quot;trace.func&quot;:&quot;github.com/docker/distribution/registry/storage/driver/base.(*Base).GetContent&quot;,&quot;trace.id&quot;:&quot;11e24830-7d16-404a-90bc-8a738cab84ea&quot;,&quot;trace.line&quot;:95,&quot;vars.digest&quot;:&quot;sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;vars.name&quot;:&quot;example&quot;} {&quot;err.code&quot;:&quot;blob unknown&quot;,&quot;err.detail&quot;:&quot;sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;err.message&quot;:&quot;blob unknown to registry&quot;,&quot;go.version&quot;:&quot;go1.11.2&quot;,&quot;http.request.host&quot;:&quot;docker-registry.apps.kube.lab&quot;,&quot;http.request.id&quot;:&quot;c079b639-0e8a-4a27-96fa-44c4c0182ff7&quot;,&quot;http.request.method&quot;:&quot;HEAD&quot;,&quot;http.request.remoteaddr&quot;:&quot;10.233.70.0&quot;,&quot;http.request.uri&quot;:&quot;/v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;http.request.useragent&quot;:&quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))&quot;,&quot;http.response.contenttype&quot;:&quot;application/json; charset=utf-8&quot;,&quot;http.response.duration&quot;:&quot;1.88607ms&quot;,&quot;http.response.status&quot;:404,&quot;http.response.written&quot;:157,&quot;level&quot;:&quot;error&quot;,&quot;msg&quot;:&quot;response completed with error&quot;,&quot;time&quot;:&quot;2020-11-07T14:43:22.894147954Z&quot;,&quot;vars.digest&quot;:&quot;sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee&quot;,&quot;vars.name&quot;:&quot;example&quot;} 10.233.105.66 - - [07/Nov/2020:14:43:22 +0000] &quot;HEAD /v2/example/blobs/sha256:751620502a7a2905067c2f32d4982fb9b310b9808670ce82c0e2b40f5307a3ee HTTP/1.1&quot; 404 157 &quot;&quot; &quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \\(linux\\))&quot; </code></pre> <p>I believe the issue has to do with ingress controller because when EOF error shows up, there is something weird in <strong>ingress-controller 
logs</strong>:</p> <pre><code>10.233.70.0 - - [07/Nov/2020:14:43:41 +0000] &quot;PUT /v2/example/blobs/uploads/dab984a8-7e71-4481-91fb-af53c7790a20?_state=usMX2WH24Veunay0ozOF-RMZIUMNTFSC8MSPbMcxz-B7Ik5hbWUiOiJleGFtcGxlIiwiVVVJRCI6ImRhYjk4NGE4LTdlNzEtNDQ4MS05MWZiLWFmNTNjNzc5MGEyMCIsIk9mZnNldCI6NzgxMTczNywiU3RhcnRlZEF0IjoiMjAyMC0xMS0wN1QxNDo0MzoyOFoifQ%3D%3D&amp;digest=sha256%3A101c41d0463bc77661fb3343235b16d536a92d2efb687046164d413e51bd4fc4 HTTP/1.1&quot; 201 0 &quot;-&quot; &quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))&quot; 606 0.026 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 0.026 201 06304ff584d252812dff016374be73ae 172.16.1.123 - - [07/Nov/2020:14:43:42 +0000] &quot;HEAD /v2/example/blobs/sha256:101c41d0463bc77661fb3343235b16d536a92d2efb687046164d413e51bd4fc4 HTTP/1.1&quot; 200 0 &quot;-&quot; &quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))&quot; 299 0.006 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 0.006 200 a5a93c7b7f4644139fcb0697d3e5e43f I1107 14:44:05.285478 6 main.go:184] &quot;Received SIGTERM, shutting down&quot; I1107 14:44:05.285517 6 nginx.go:365] &quot;Shutting down controller queues&quot; I1107 14:44:06.294533 6 status.go:132] &quot;removing value from ingress status&quot; address=[172.16.1.123] I1107 14:44:06.306793 6 status.go:277] &quot;updating Ingress status&quot; namespace=&quot;kube-system&quot; ingress=&quot;example-ingress&quot; currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[] I1107 14:44:06.307650 6 status.go:277] &quot;updating Ingress status&quot; namespace=&quot;kubernetes-dashboard&quot; ingress=&quot;dashboard&quot; currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[] I1107 14:44:06.880987 6 status.go:277] &quot;updating Ingress status&quot; namespace=&quot;test-nfs&quot; ingress=&quot;example-nginx&quot; currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[] I1107 14:44:07.872659 6 status.go:277] &quot;updating Ingress status&quot; namespace=&quot;test-ingress&quot; ingress=&quot;example-ingress&quot; currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[] I1107 14:44:08.505295 6 queue.go:78] &quot;queue has been shutdown, failed to enqueue&quot; key=&quot;&amp;ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:&lt;nil&gt;,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}&quot; I1107 14:44:08.713579 6 status.go:277] &quot;updating Ingress status&quot; namespace=&quot;docker-registry&quot; ingress=&quot;docker-registry&quot; currentValue=[{IP:172.16.1.123 Hostname:}] newValue=[] I1107 14:44:09.772593 6 nginx.go:373] &quot;Stopping admission controller&quot; I1107 14:44:09.772697 6 nginx.go:381] &quot;Stopping NGINX process&quot; E1107 14:44:09.773208 6 nginx.go:314] &quot;Error listening for TLS connections&quot; err=&quot;http: Server closed&quot; 2020/11/07 14:44:09 [notice] 114#114: signal process started 10.233.70.0 - - [07/Nov/2020:14:44:16 +0000] &quot;PATCH 
/v2/example/blobs/uploads/adbe3173-9928-4eb5-97bb-7893970f032a?_state=nEr2ip9eoLNCTe8KQ6Ck7k3C8oS9IY7AnBOi1_f5mSl7Ik5hbWUiOiJleGFtcGxlIiwiVVVJRCI6ImFkYmUzMTczLTk5MjgtNGViNS05N2JiLTc4OTM5NzBmMDMyYSIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAyMC0xMS0wN1QxNDo0MzoyOC45ODY3MTQwNTlaIn0%3D HTTP/1.1&quot; 202 0 &quot;-&quot; &quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))&quot; 50408825 46.568 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 14.339 202 55d9cab4f915f54e5c130321db4dc8fc 10.233.70.0 - - [07/Nov/2020:14:44:19 +0000] &quot;PATCH /v2/example/blobs/uploads/63d4a54a-cdfd-434b-ae63-dc434dcb15f9?_state=9UK7MRYJYST--u7BAUFTonCdPzt_EO2KyfJblVroBxd7Ik5hbWUiOiJleGFtcGxlIiwiVVVJRCI6IjYzZDRhNTRhLWNkZmQtNDM0Yi1hZTYzLWRjNDM0ZGNiMTVmOSIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAyMC0xMS0wN1QxNDo0MzoyMy40MjIwMDI4NThaIn0%3D HTTP/1.1&quot; 202 0 &quot;-&quot; &quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))&quot; 51842691 55.400 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 18.504 202 1f1de1ae89caa8540b6fd13ea5b165ab 10.233.70.0 - - [07/Nov/2020:14:44:50 +0000] &quot;PATCH /v2/example/blobs/uploads/0c97923d-ed9f-4599-8a50-f2c21cfe85fe?_state=WmIRW_3owlin1zo4Ms98UwaMGf1D975vUuzbk1JWRuN7Ik5hbWUiOiJleGFtcGxlIiwiVVVJRCI6IjBjOTc5MjNkLWVkOWYtNDU5OS04YTUwLWYyYzIxY2ZlODVmZSIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAyMC0xMS0wN1QxNDo0MzoyMC41ODA5MjUyNDlaIn0%3D HTTP/1.1&quot; 202 0 &quot;-&quot; &quot;docker/1.13.1 go/go1.10.3 kernel/3.10.0-1127.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.13.1 \x5C(linux\x5C))&quot; 192310965 89.937 [docker-registry-docker-registry-5000] [] 10.233.70.84:5000 0 22.847 202 d8971d2f543e936c2f805d5b257f1130 I1107 14:44:50.832669 6 nginx.go:394] &quot;NGINX process has stopped&quot; I1107 14:44:50.832703 6 main.go:192] &quot;Handled quit, awaiting Pod deletion&quot; I1107 14:45:00.832892 6 main.go:195] &quot;Exiting&quot; code=0 [root@bastion registry]# </code></pre> <p>After that happens, ingres-controller pod is not ready, and after some seconds it is again ready.</p> <p>Is it to do with config reload of kubernetes nginx ingress controller? In such case, do I have to add any special variable to nginx.conf?</p> <p>Any help is welcome! Kind regards!</p> <p><strong>EDIT</strong></p> <p>The moment EOF appears, ingress-nginx crashes, and pods become not ready.</p> <pre><code>[root@bastion ~]# kubectl get po NAME READY STATUS RESTARTS AGE ingress-nginx-admission-create-lbmd6 0/1 Completed 0 5d4h ingress-nginx-admission-patch-btv27 0/1 Completed 0 5d4h ingress-nginx-controller-7dcc8d6478-n8dkx 0/1 Running 3 15m Warning Unhealthy 29s (x8 over 2m39s) kubelet Liveness probe failed: Get http://10.233.70.100:10254/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>As a consequence, any of my applications are not reachable:</p> <pre><code>[root@bastion ~]# curl http://hello-worrld.apps.kube.lab Hello, world! Version: 1.0.0 Hostname: web-6785d44d5-4r5q5 [root@bastion ~]# date sáb nov 7 18:58:16 -01 2020 [root@bastion ~]# curl http://hello-worrld.apps.kube.lab curl: (52) Empty reply from server [root@bastion ~]# date sáb nov 7 18:58:53 -01 2020 </code></pre> <p>Is the issue to do with performance of nginx? If so, what options would you recommend me to tweak ingress-nginx?</p>
José Ángel Morena Simón
<p>You should try another Docker registry to confirm that the problem is actually caused by the ingress. It does not make sense for the ingress to fail simply because of image size.</p> <p>You can try JFrog JCR, which is free; deploy JCR into your Kubernetes cluster and expose it via a LoadBalancer (external IP) or ingress.</p> <p>This way you can verify that it is really an ingress issue: if pushing a Docker image via the LoadBalancer (external IP) works but pushing through the ingress fails, you know the problem is specifically caused by your ingress.</p> <p>JFrog JCR is available at chartcenter <a href="https://chartcenter.io/jfrog/artifactory-jcr" rel="nofollow noreferrer">here</a>.</p>
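<p>Another way to rule the ingress in or out, without switching registries first, is to put a temporary <code>LoadBalancer</code> Service in front of your existing registry and push to its external IP directly. This is only a sketch: the selector below assumes your registry pods carry the label <code>app: docker-registry</code>, so adjust it to whatever your Deployment actually uses, and because the Service has no TLS you may need to add the address to Docker's <code>insecure-registries</code> list.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: docker-registry-lb
  namespace: docker-registry
spec:
  type: LoadBalancer
  selector:
    app: docker-registry   # assumption - must match your registry pod labels
  ports:
  - port: 5000
    targetPort: 5000
</code></pre> <p>If a push to <code>&lt;EXTERNAL-IP&gt;:5000</code> succeeds while the same push through the ingress hostname still ends in EOF, the problem is isolated to the ingress layer rather than the registry itself.</p>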
John Peterson
<p>I am deploying Elasticsearch cluster to K8S on EKS with nodegroup. I claimed a EBS for the cluster's storage. When I launch the cluster, only one pod is running successfully but I got this error for other pods:</p> <pre><code> Warning FailedAttachVolume 3m33s attachdetach-controller Multi-Attach error for volume &quot;pvc-4870bd46-2f1e-402a-acf7-005de83e4588&quot; Volume is already used by pod(s) es-0 Warning FailedMount 90s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[es-config persistent-storage default-token-pqzkp]: timed out waiting for the condition </code></pre> <p>It means the storage is already in use. I understand that this volume is used by the first pod so other pods can't use it. But I don't know how to use different mount path for different pod when they are using the same EBS volume.</p> <p>Below is the full spec for the cluster.</p> <pre><code> apiVersion: v1 kind: ConfigMap metadata: name: es-config data: elasticsearch.yml: | cluster.name: elk-cluster network.host: &quot;0.0.0.0&quot; bootstrap.memory_lock: false # discovery.zen.minimum_master_nodes: 2 node.max_local_storage_nodes: 9 discovery.seed_hosts: - es-0.es-entrypoint.default.svc.cluster.local - es-1.es-entrypoint.default.svc.cluster.local - es-2.es-entrypoint.default.svc.cluster.local ES_JAVA_OPTS: -Xms4g -Xmx8g --- apiVersion: apps/v1 kind: StatefulSet metadata: name: es namespace: default spec: serviceName: es-entrypoint replicas: 3 selector: matchLabels: name: es template: metadata: labels: name: es spec: volumes: - name: es-config configMap: name: es-config items: - key: elasticsearch.yml path: elasticsearch.yml - name: persistent-storage persistentVolumeClaim: claimName: ebs-claim initContainers: - name: permissions-fix image: busybox volumeMounts: - name: persistent-storage mountPath: /usr/share/elasticsearch/data command: [ 'chown' ] args: [ '1000:1000', '/usr/share/elasticsearch/data' ] containers: - name: es image: elasticsearch:7.10.1 resources: requests: cpu: 2 memory: 8Gi ports: - name: http containerPort: 9200 - containerPort: 9300 name: inter-node volumeMounts: - name: es-config mountPath: /usr/share/elasticsearch/config/elasticsearch.yml subPath: elasticsearch.yml - name: persistent-storage mountPath: /usr/share/elasticsearch/data --- apiVersion: v1 kind: Service metadata: name: es-entrypoint spec: selector: name: es ports: - port: 9200 targetPort: 9200 protocol: TCP clusterIP: None </code></pre>
Joey Yi Zhao
<p>You should be using <code>volumeClaimTemplates</code> with a StatefulSet so that each pod gets its own volume. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="nofollow noreferrer">Details</a>:</p> <pre><code>volumeClaimTemplates: - metadata: name: es spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi # storageClassName: &lt;omit to use default StorageClass or specify&gt; </code></pre>
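<p>Applied to your manifest, a sketch of the change would be to drop the shared <code>ebs-claim</code> entry from <code>volumes:</code> and give the claim template the same name that the <code>volumeMounts</code> already use, so each replica gets its own dynamically provisioned EBS volume (the 50Gi size and the <code>gp2</code> StorageClass are assumptions; pick what fits your cluster):</p> <pre><code>spec:
  template:
    spec:
      volumes:
      - name: es-config              # keep only the ConfigMap volume here
        configMap:
          name: es-config
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
      containers:
      - name: es
        volumeMounts:
        - name: persistent-storage   # now refers to the claim template below
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: persistent-storage       # must match the volumeMounts name
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi              # assumption - size it for your data
      storageClassName: gp2          # assumption - an EBS-backed StorageClass
</code></pre> <p>The StatefulSet controller then creates one PVC per replica (<code>persistent-storage-es-0</code>, <code>persistent-storage-es-1</code>, ...), which avoids the EBS Multi-Attach error.</p>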
gohm'c
<p>I have an Ubuntu image that I use for debugging. I do <code>kubectl attach my-app -c my-app -it</code> and then run many <code>apt-get install</code> commands and other configuration I need.</p> <p>The problem is that when I exit with <code>ctrl+c</code>, the pod seems to get restarted and I lose everything I did. <br>Is there a way, like <code>--restart=never</code>, to prevent the container from being recreated?</p>
Carlos Garcia
<p>Run another shell session with <code>kubectl exec my-app -c my-app -it -- bash</code> to prepare your container. Alternatively, if your pod spec has the following set to true:</p> <pre><code>stdin: true tty: true </code></pre> <p>you can use the escape sequence Ctrl+P followed by Ctrl+Q to detach from the container after <code>kubectl attach -it</code> to the container.</p>
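<p>For completeness, a minimal sketch of a debug pod that keeps <code>stdin</code>/<code>tty</code> open and is never recreated when you stop it (the name and image are just examples):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  restartPolicy: Never      # the container is not restarted when its process exits
  containers:
  - name: my-app
    image: ubuntu
    command:                # keep the container alive for interactive debugging
    - sleep
    - infinity
    stdin: true
    tty: true
</code></pre> <p>With this in place you can <code>kubectl exec</code> into the pod as often as you like, and an attached session can be left with Ctrl+P followed by Ctrl+Q without killing the main process.</p>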
gohm'c
<p>I have an application that has React in the front-end and a Node service in the back-end. The app is deployed in a GKE cluster. Both apps are exposed as NodePort Services, and the fan-out ingress path is done as follows:</p> <pre><code>- host: example.com http: paths: - backend: serviceName: frontend-service servicePort: 3000 path: /* - backend: serviceName: backend-service servicePort: 5000 path: /api/* </code></pre> <p>I have enabled authentication using IAP for both services. When enabling IAP for both Kubernetes services, a new Client ID and Client Secret is created for each one. But I need to authenticate against the back-end API from the front-end, and since they have 2 different OAuth clients this is not possible, i.e. when I call the back-end API service from the front-end the authentication fails because the cookies provided by the FE do not match in the back-end service.</p> <p>What is the best way to handle this scenario? Is there a way to use the same client credentials for both these services, and if so, is that the right way to do it, or is there a way to authenticate the REST API using IAP directly?</p>
Ameena
<p>If IAP is set up using BackendConfig, then you can have two separate BackendConfig objects for the frontend and backend applications, but both of them use the same secret (secretName) for oauthclientCredentials.</p> <p><em><strong>For frontend app</strong></em></p> <pre><code>apiVersion: cloud.google.com/v1beta1 kind: BackendConfig metadata: name: frontend-iap-config namespace: namespace-1 spec: iap: enabled: true oauthclientCredentials: secretName: common-iap-oauth-credentials </code></pre> <p><em><strong>For backend app</strong></em></p> <pre><code>apiVersion: cloud.google.com/v1beta1 kind: BackendConfig metadata: name: backend-iap-config namespace: namespace-1 spec: iap: enabled: true oauthclientCredentials: secretName: common-iap-oauth-credentials </code></pre> <p>Then reference these BackendConfigs from the respective Kubernetes Service objects.</p>
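<p>The reference from a Service looks roughly like this (a sketch; the service name and port are taken from the question's ingress, and the <code>selector</code> is an assumption that must match your backend pods):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: namespace-1
  annotations:
    cloud.google.com/backend-config: '{&quot;default&quot;: &quot;backend-iap-config&quot;}'
spec:
  type: NodePort
  selector:
    app: backend             # assumption - must match your backend pod labels
  ports:
  - port: 5000
    targetPort: 5000
</code></pre> <p>Give <code>frontend-service</code> the same annotation pointing at <code>frontend-iap-config</code>. Since both BackendConfigs share one <code>secretName</code>, both services sit behind the same OAuth client, so the IAP session established by the front-end is also valid for calls to the back-end API.</p>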
Ravindu Rathugama