<p>I am in the process of deploying ActiveMQ 5.15 in HA on Kubernetes. Previously I was using a Deployment and a <code>clusterIP</code> Service, and it was working fine: the master boots up and the slave waits for the lock to be acquired. If I delete the pod which is the master, the slave picks up and becomes the master.</p> <p>Now I want to try a <code>statefulset</code>, basing myself on <a href="https://stackoverflow.com/questions/57289381/activemq-on-kuberenetes-with-shared-storage">this thread</a>. Deployment went through successfully and two pods were created with <code>id0</code> and <code>id1</code>. But I noticed that both pods were master; they were both started. I also noticed that two PVCs were created, <code>id0</code> and <code>id1</code>, for the <code>Statefulset</code>, compared to the <code>deployment</code> which had only 1 PVC. Could that be the issue, since the storage is no longer shared? Can we still achieve a master/slave setup with a <code>Statefulset</code>?</p>
ashley
<blockquote> <p>I noticed also that two PVC were created id0 and id1 in the case of statefulset compared to deployment which had only 1 PVC. Could that be the issue since it is no more a shared storage?</p> </blockquote> <p>You are right. When using k8s StatefulSets each Pod gets its own persistent storage (dedicated PVC and PV), and this persistent storage is not shared.</p> <p>When a Pod gets terminated and is rescheduled on a different Node, the Kubernetes controller will ensure that the Pod is associated with the same PVC which will guarantee that the state is intact.</p> <p>In your case, to achieve a master/slave setup, consider using a <strong>shared network location / filesystem for persistent storage</strong> like:</p> <ul> <li><a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner" rel="nofollow noreferrer">NFS storage</a> for on-premise k8s cluster.</li> <li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/" rel="nofollow noreferrer">AWS EFS</a> for EKS.</li> <li>or <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">Azure Files</a> for AKS.</li> </ul> <p>Check the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">complete list of PersistentVolume types currently supported by Kubernetes</a> (implemented as plugins).</p>
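<p>As a rough illustration, a shared volume for the broker lock could look like the sketch below. This is a minimal, untested example that assumes an NFS-backed StorageClass named <code>nfs-client</code> (the actual name depends on the provisioner you install); the key points are the <code>ReadWriteMany</code> access mode and a single PVC mounted by every broker replica instead of per-pod <code>volumeClaimTemplates</code>.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: activemq-shared-data
spec:
  # ReadWriteMany lets both brokers mount the same volume, so the slave
  # can wait on the KahaDB file lock held by the master.
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # assumed NFS provisioner StorageClass
  resources:
    requests:
      storage: 10Gi
</code></pre>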
mozello
<p>I am generating Kubernetes CRD (Custom Resource Definition) using kubebuilder. Along with CRD, I also need to document the REST endpoints by creating OpenAPI v3 Spec (OAS) file. Is there a way to get this done using kubebuilder? Also does kubebuilder allow us to add sample Request/Response payload details in the .go file so that the generated OAS file is more user friendly?</p> <p>One option I found is to use curl :/openapi/v3, but maintaining the spec file manually is not sustainable. I want to use kubebuilder generated .go spec file as one source of truth. Any suggestion?</p>
Santi
<p>It seems there is no intended way to generate OpenAPI files. As per this <a href="https://github.com/kubernetes-sigs/kubebuilder/issues/1129#issuecomment-546508941" rel="nofollow noreferrer">GitHub comment</a> by DirectXMan12:</p> <blockquote> <p>KubeBuilder is not intended to generate go openapi files, since they're not useful for CRDs (they're basically only useful for aggregated API servers, and even then you can just bundle in the JSON/YAML form and parse on init).</p> </blockquote> <p>You can also refer to this <a href="https://github.com/kubernetes-sigs/kubebuilder/issues/1231#issuecomment-571374228" rel="nofollow noreferrer">GitHub comment</a> and this <a href="https://book-v1.book.kubebuilder.io/beyond_basics/generating_documentation.html" rel="nofollow noreferrer">doc</a> for further information.</p> <p>Edit: Kubebuilder generates only the CRD; controller-gen can generate an OpenAPI schema for it, but kubebuilder itself is still not intended to generate OpenAPI files. You can use other third-party tools like the ones mentioned in this <a href="https://nordicapis.com/7-open-source-openapi-documentation-generators/" rel="nofollow noreferrer">document</a> by Vyom Srivastava.</p>
Hemanth Kumar
<p>By default, creating a managed certificate object on GKE creates a managed certificate of type &quot;Load Balancer Authorization&quot;. How can I create one with DNS authorization through GKE?</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs</a></p> <pre><code>apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert spec: domains: - DOMAIN_NAME1 - DOMAIN_NAME2 </code></pre> <p>I want to add wildcard domains and this is only possible with DNS authorization.</p> <p><a href="https://stackoverflow.com/questions/73734679/how-to-generate-google-managed-certificates-for-wildcard-hostnames-in-gcp">How to generate Google-managed certificates for wildcard hostnames in GCP?</a></p>
s_curry_s
<p>To create a Google-managed certificate with DNS authorization, follow this <a href="https://cloud.google.com/certificate-manager/docs/deploy-google-managed-dns-auth#create_a_google-managed_certificate_referencing_the_dns_authorization" rel="nofollow noreferrer">Google official doc</a> and <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/certificate_manager_dns_authorization" rel="nofollow noreferrer">Terraform doc</a>.</p> <blockquote> <p>Each DNS authorization stores information about the DNS record that you need to set up and covers a single domain plus its wildcard—for example, example.com and *.example.com.</p> </blockquote> <ul> <li>You need to add both the domain name and the wildcard name for the same domain while creating the certificate.</li> <li>Using a certificate map and certificate map entries, you need to map both the domain and the wildcard domain.</li> <li>Create two certificate map entries, one for the domain and the other for the wildcard domain. This will help the certificate become active. You can also refer to this <a href="https://github.com/hashicorp/terraform-provider-google/issues/11037#issuecomment-1362628852" rel="nofollow noreferrer">GitHub comment</a> by fbozic for relevant info. A sketch of the corresponding gcloud commands is shown below.</li> </ul> <p>A <a href="https://issuetracker.google.com/issues/123290919" rel="nofollow noreferrer">feature request</a> has already been raised for broader wildcard support, and the Google product team is working on it.</p>
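<p>For reference, the flow from the linked doc maps roughly to the gcloud commands below. This is only a sketch: <code>my-dns-auth</code>, <code>my-cert</code>, <code>my-cert-map</code> and <code>example.com</code> are placeholders you would replace with your own names and domain.</p> <pre class="lang-sh prettyprint-override"><code># Create a DNS authorization (covers example.com and *.example.com)
gcloud certificate-manager dns-authorizations create my-dns-auth \
    --domain=&quot;example.com&quot;

# Show the CNAME record that must be added to the DNS zone
gcloud certificate-manager dns-authorizations describe my-dns-auth

# Create the certificate for the domain plus its wildcard
gcloud certificate-manager certificates create my-cert \
    --domains=&quot;example.com,*.example.com&quot; \
    --dns-authorizations=my-dns-auth

# Map both hostnames to the certificate (two map entries)
gcloud certificate-manager maps create my-cert-map
gcloud certificate-manager maps entries create my-cert-map-entry \
    --map=my-cert-map --certificates=my-cert --hostname=&quot;example.com&quot;
gcloud certificate-manager maps entries create my-cert-map-wildcard \
    --map=my-cert-map --certificates=my-cert --hostname=&quot;*.example.com&quot;
</code></pre>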
Hemanth Kumar
<p>I'm trying to expose a port 8080 on a pod, so I can wget directly from server. With port-forward everything works fine (<code>kubectl --namespace jenkins port-forward pods/jenkins-6f8b486759-6vwkj 9000:8080</code>) , I'm able to connect to 127.0.0.1:9000</p> <p>But when I try to avoid port-forward and open ports permanently (<code>kubectl expose deployment jenkins --type=LoadBalancer -njenkins</code>): I see it in svc (<code>kubectl describe svc jenkins -njenkins</code>):</p> <pre><code>Name: jenkins Namespace: jenkins Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=jenkins Type: LoadBalancer IP Families: &lt;none&gt; IP: 10.111.244.192 IPs: 10.111.244.192 Port: port-1 8080/TCP TargetPort: 8080/TCP NodePort: port-1 31461/TCP Endpoints: 172.17.0.2:8080 Port: port-2 50000/TCP TargetPort: 50000/TCP NodePort: port-2 30578/TCP Endpoints: 172.17.0.2:50000 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>but port is still not up, <code>netstat</code> does not show anything. How it should be done correctly?</p> <p>Using minikube version: v1.20.0 , pod yaml just in case:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: jenkins spec: replicas: 1 selector: matchLabels: app: jenkins template: metadata: labels: app: jenkins spec: securityContext: containers: - name: jenkins image: jenkins/jenkins:lts ports: - name: http-port containerPort: 8080 hostPort: 8080 - name: jnlp-port containerPort: 50000 volumeMounts: - name: task-pv-storage mountPath: /var/jenkins_home volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim </code></pre>
Lev Bystritskiy
<p>I see that you are running your k8s cluster locally. In this case, the LoadBalancer Service type is not recommended, as this type uses a cloud provider's load balancer to expose services externally. You might use a self-hosted or hardware load balancer, but I suppose that's a bit overkill for a minikube cluster.</p> <p>In your minikube deployment, I'd suggest using the NodePort Service type, as it uses your node's IP address to expose the service. Example YAML:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: jenkins-service spec: type: NodePort selector: app: jenkins ports: - port: 8080 targetPort: 8080 # nodePort field is optional, Kubernetes will allocate port from a range 30000-32767, but you can choose nodePort: 30007 - port: 50000 targetPort: 50000 nodePort: 30008 </code></pre> <p>Then, you can access your app on <code>&lt;NodeIP&gt;:&lt;nodePort&gt;</code>. If you want to read more about k8s Services, go <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">here</a>.</p>
mdobrucki
<p>Env:</p> <pre class="lang-sh prettyprint-override"><code>❯ sw_vers ProductName: macOS ProductVersion: 11.6.1 BuildVersion: 20G224 ❯ minikube version minikube version: v1.24.0 commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b </code></pre> <p>I made a self-signed certificate example on NGINX pod. Omitting to create certificates and keys, since they are working on my local mac, the files are following:</p> <pre class="lang-sh prettyprint-override"><code>❯ ll rootCA.* -rw-r--r--@ 1 hansuk staff 1383 1 17 12:37 rootCA.crt -rw------- 1 hansuk staff 1874 1 17 12:02 rootCA.key ❯ ll localhost.* -rw------- 1 hansuk staff 1704 1 17 12:09 localhost.key -rw-r--r-- 1 hansuk staff 1383 1 17 12:37 localhost.pem </code></pre> <p>Start up the following kubernetes definitions on minikube(<code>kubectl apply -f nginx.yml -n cert</code>):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: nginx-cert labels: app: nginx-cert spec: type: NodePort ports: - port: 80 protocol: TCP name: http nodePort: 30080 - port: 443 protocol: TCP name: https nodePort: 30443 selector: app: nginx-cert --- apiVersion: apps/v1 kind: Deployment metadata: labels: run: nginx-cert name: nginx-cert spec: replicas: 1 selector: matchLabels: app: nginx-cert template: metadata: labels: app: nginx-cert spec: volumes: - name: secret-volume secret: secretName: nginxsecret - name: configmap-volume configMap: name: nginxconfmap containers: - image: nginx name: nginx ports: - containerPort: 80 - containerPort: 443 volumeMounts: - mountPath: /etc/nginx/ssl name: secret-volume - mountPath: /etc/nginx/conf.d name: configmap-volume </code></pre> <p>Create the configmap and secret for nginx config and TLS path respectively:</p> <pre class="lang-sh prettyprint-override"><code>❯ cat default.conf server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; listen 443 ssl; root /usr/share/nginx/html; index index.html; server_name locahost; ssl_certificate /etc/nginx/ssl/tls.crt; ssl_certificate_key /etc/nginx/ssl/tls.key; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; location / { try_files / =404; } } ❯ kubectl create configmap nginxconfmap --from-file=default.conf -n cert ❯ kubectl create secret tls nginxsecret --key localhost.key --cert localhost.pem -n cert </code></pre> <p>All status of deployments and services, and event logs are OK. 
No failures:</p> <pre class="lang-sh prettyprint-override"><code>❯ kubectl get all -n cert NAME READY STATUS RESTARTS AGE pod/nginx-cert-76f7f8748f-q2nvl 1/1 Running 0 21m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/nginx-cert NodePort 10.110.115.36 &lt;none&gt; 80:30080/TCP,443:30443/TCP 21m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-cert 1/1 1 1 21m NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-cert-76f7f8748f 1 1 1 21m ❯ kubectl get events -n cert 22m Normal Scheduled pod/nginx-cert-76f7f8748f-q2nvl Successfully assigned cert/nginx-cert-76f7f8748f-q2nvl to minikube 22m Normal Pulling pod/nginx-cert-76f7f8748f-q2nvl Pulling image &quot;nginx&quot; 22m Normal Pulled pod/nginx-cert-76f7f8748f-q2nvl Successfully pulled image &quot;nginx&quot; in 4.345505365s 22m Normal Created pod/nginx-cert-76f7f8748f-q2nvl Created container nginx 22m Normal Started pod/nginx-cert-76f7f8748f-q2nvl Started container nginx 22m Normal SuccessfulCreate replicaset/nginx-cert-76f7f8748f Created pod: nginx-cert-76f7f8748f-q2nvl 22m Normal ScalingReplicaSet deployment/nginx-cert Scaled up replica set nginx-cert-76f7f8748f to </code></pre> <p>And then, SSL handshaking is working with minukube service IP:</p> <pre class="lang-sh prettyprint-override"><code>❯ minikube service --url nginx-cert --namespace cert http://192.168.64.2:30080 http://192.168.64.2:30443 ❯ openssl s_client -CAfile rootCA.crt -connect 192.168.64.2:30443 -showcerts 2&gt;/dev/null &lt; /dev/null CONNECTED(00000003) --- Certificate chain 0 s:C = KR, ST = Seoul, L = Seocho-gu, O = Localhost, CN = localhost i:C = KR, ST = RootState, L = RootCity, O = Root Inc., OU = Root CA, CN = Self-signed Root CA a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256 v:NotBefore: Jan 17 03:37:15 2022 GMT; NotAfter: Jan 17 03:37:15 2023 GMT -----BEGIN CERTIFICATE----- MIIDzzCCAregAwIBAgIUYMe6nRgsZwq9UPMKFgj9dt9z9FIwDQYJKoZIhvcNAQEL BQAweDELMAkGA1UEBhMCS1IxEjAQBgNVBAgMCVJvb3RTdGF0ZTERMA8GA1UEBwwI Um9vdENpdHkxEjAQBgNVBAoMCVJvb3QgSW5jLjEQMA4GA1UECwwHUm9vdCBDQTEc MBoGA1UEAwwTU2VsZi1zaWduZWQgUm9vdCBDQTAeFw0yMjAxMTcwMzM3MTVaFw0y MzAxMTcwMzM3MTVaMFkxCzAJBgNVBAYTAktSMQ4wDAYDVQQIDAVTZW91bDESMBAG A1UEBwwJU2VvY2hvLWd1MRIwEAYDVQQKDAlMb2NhbGhvc3QxEjAQBgNVBAMMCWxv Y2FsaG9zdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALc9retjBorw RKbuyC1SNx1U9L5LJPPbBkBh4kg98saQxtRX0Wqs5mgswWMZYL3E6yRl0gfwBkdq t8GVQ49dgg0QO5MbG9ylfCLS9xR3WWjAgxaDJ0W96PyvTzmg295aqqHFKPSaG/nM JyZgFJDuGoRRgwoWNqZ1pRCDLMIENDx4qgjOnQch529pM9ZRwFQSswKpn4BVkY00 /u8jIvax67kFOg70QGY16paGEg7YfSNle7BFZY0VJ8rIiBoqwRmPH6hbF/djxe5b yzkI9eqts9bqw8eDLC28S36x62FxkdqkK8pI/rzWAKSV43TWML1zq4vM2bI+vp0k a06GhSsS1bUCAwEAAaNwMG4wHwYDVR0jBBgwFoAUURHNpOE9zTgXgVYAGvLt94Ym P+8wCQYDVR0TBAIwADALBgNVHQ8EBAMCBPAwFAYDVR0RBA0wC4IJbG9jYWxob3N0 MB0GA1UdDgQWBBSS1ZHHT6OHTomYIRsmhz6hMJLGnDANBgkqhkiG9w0BAQsFAAOC AQEAWA23pCdAXtAbdSRy/p8XURCjUDdhkp3MYA+1gIDeGAQBKNipU/KEo5wO+aVk AG6FryPZLOiwiP8nYAebUxOAqKG3fNbgT9t95BEGCip7Cxjp96KNYt73Kl/OTPjJ KZUkHQ7MXN4vc5gmca8q+OqwCCQ/daMkzLabPQWNk3R/Hzo/mT42v8ht9/nVh1Ml u3Dow5QPp8LESrJABLIRyRs0+Tfp+WodgekgDX5hnkkSk77+oXB49r2tZUeG/CVv Fg8PuUNi+DWpdxX8fE/gIbSzSsamOf29+0sCIoJEPvk7lEVLt9ca0SoJ7rKn/ai4 HxwTiYo9pNcoLwhH3xdXjvbuGA== -----END CERTIFICATE----- --- Server certificate subject=C = KR, ST = Seoul, L = Seocho-gu, O = Localhost, CN = localhost issuer=C = KR, ST = RootState, L = RootCity, O = Root Inc., OU = Root CA, CN = Self-signed Root CA --- No client certificate CA names sent Peer signing digest: SHA256 Peer signature type: RSA-PSS Server Temp Key: X25519, 253 bits --- SSL handshake 
has read 1620 bytes and written 390 bytes Verification: OK --- New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: EED06A09B8971ADD25F352BF55298096581A490020C88BB457AB9864B9844778 Session-ID-ctx: Master-Key: 71C686180017B4DB5D681CCFC2C8741A7A70F7364572811AE548556A1DCAC078ABAF34B9F53885C6177C7024991B98FF PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 300 (seconds) TLS session ticket: 0000 - 8b 7f 76 5a c3 4a 1f 40-43 8e 00 e7 ad 35 ae 24 [email protected].$ 0010 - 5c 63 0b 0c 91 86 d0 74-ef 39 94 8a 07 fa 96 51 \c.....t.9.....Q 0020 - 58 cd 61 99 7d ae 47 87-7b 36 c1 22 89 fa 8e ca X.a.}.G.{6.&quot;.... 0030 - 52 c2 04 6e 7a 9f 2d 3e-42 25 fc 1f 87 11 5f 02 R..nz.-&gt;B%...._. 0040 - 37 b3 26 d4 1f 10 97 a3-29 e8 d1 37 cd 9a a3 8e 7.&amp;.....)..7.... 0050 - 61 52 15 63 89 99 8e a8-95 58 a8 e0 12 03 c4 15 aR.c.....X...... 0060 - 95 bf 1e b7 48 dc 4e fb-c4 8c 1a 17 eb 19 88 ca ....H.N......... 0070 - eb 16 b0 17 83 97 04 0d-79 ca d9 7d 80 5b 96 8d ........y..}.[.. 0080 - d3 bf 6f 4f 55 6d 2f ce-0b b9 24 a9 a2 d0 5b 28 ..oOUm/...$...[( 0090 - 06 10 1d 72 52 a3 ef f1-5c e3 2a 35 83 93 a1 91 ...rR...\.*5.... 00a0 - cb 94 6c 4f 3e f7 2e 8d-87 76 a5 46 29 6f 0e 5f ..lO&gt;....v.F)o._ Start Time: 1643011123 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: yes --- </code></pre> <p>But it fail to connect on Chrome browser or on curl, redirecting to its listening port each(30080 -&gt; 80, 30443 -&gt; 443):</p> <pre class="lang-sh prettyprint-override"><code># for convenience ignore root CA now, the problem is not in there. ❯ curl -k https://192.168.64.2:30443 &lt;html&gt; &lt;head&gt;&lt;title&gt;301 Moved Permanently&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;301 Moved Permanently&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.21.5&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; ❯ curl -kL https://192.168.64.2:30443 curl: (7) Failed to connect to 192.168.64.2 port 443: Connection refused ❯ curl http://192.168.64.2:30080 &lt;html&gt; &lt;head&gt;&lt;title&gt;301 Moved Permanently&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;301 Moved Permanently&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.21.5&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; ❯ curl -L http://192.168.64.2:30080 curl: (7) Failed to connect to 192.168.64.2 port 80: Connection refused ❯ kubectl logs nginx-cert-76f7f8748f-q2nvl -n cert /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh 10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?) 
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh /docker-entrypoint.sh: Configuration complete; ready for start up 2022/01/24 07:33:25 [notice] 1#1: using the &quot;epoll&quot; event method 2022/01/24 07:33:25 [notice] 1#1: nginx/1.21.5 2022/01/24 07:33:25 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 2022/01/24 07:33:25 [notice] 1#1: OS: Linux 4.19.202 2022/01/24 07:33:25 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576 2022/01/24 07:33:25 [notice] 1#1: start worker processes 2022/01/24 07:33:25 [notice] 1#1: start worker process 24 2022/01/24 07:33:25 [notice] 1#1: start worker process 25 172.17.0.1 - - [24/Jan/2022:07:44:36 +0000] &quot;\x16\x03\x01\x01$\x01\x00\x01 \x03\x03rM&amp;\xF2\xDD\xA3\x04(\xB0\xB2\xBF\x1CTS`\xDC\x90\x86\xF1\xEC\xBD9\x9Cz1c4\x0B\x8F\x13\xC2&quot; 400 157 &quot;-&quot; &quot;-&quot; 172.17.0.1 - - [24/Jan/2022:07:44:48 +0000] &quot;\x16\x03\x01\x01$\x01\x00\x01 \x03\x03'Y\xECP\x15\xD1\xE6\x1C\xC4\xB1v\xC1\x97\xEE\x04\xEBu\xDE\xF9\x04\x95\xC2V\x14\xB5\x7F\x91\x86V\x8F\x05\x83 \xBFtL\xDB\xF6\xC2\xD8\xD4\x1E]\xAE4\xCA\x03xw\x92D&amp;\x1E\x8D\x97c\xB3,\xFD\xCD\xF47\xC4:\xF8\x00&gt;\x13\x02\x13\x03\x13\x01\xC0,\xC00\x00\x9F\xCC\xA9\xCC\xA8\xCC\xAA\xC0+\xC0/\x00\x9E\xC0$\xC0(\x00k\xC0#\xC0'\x00g\xC0&quot; 400 157 &quot;-&quot; &quot;-&quot; 172.17.0.1 - - [24/Jan/2022:07:45:05 +0000] &quot;\x16\x03\x01\x01$\x01\x00\x01 \x03\x03;J\xA7\xD0\xC2\xC3\x1A\xF9LK\xC7\xA8l\xBD&gt;*\x80A$\xA4\xFCw\x19\xE7(\xFAGc\xF6]\xF3I \xFF\x83\x84I\xC2\x8D\xD5}\xEA\x95\x8F\xDB\x8Cfq\xC6\xBA\xCF\xDDyn\xC6v\xBA\xCC\xDC\xCC\xCC/\xAF\xBC\xB2\x00&gt;\x13\x02\x13\x03\x13\x01\xC0,\xC00\x00\x9F\xCC\xA9\xCC\xA8\xCC\xAA\xC0+\xC0/\x00\x9E\xC0$\xC0(\x00k\xC0#\xC0'\x00g\xC0&quot; 400 157 &quot;-&quot; &quot;-&quot; 172.17.0.1 - - [24/Jan/2022:07:49:08 +0000] &quot;GET / HTTP/1.1&quot; 301 169 &quot;-&quot; &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36&quot; 172.17.0.1 - - [24/Jan/2022:07:49:08 +0000] &quot;GET / HTTP/1.1&quot; 301 169 &quot;-&quot; &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36&quot; 172.17.0.1 - - [24/Jan/2022:08:00:24 +0000] &quot;GET / HTTP/1.1&quot; 400 255 &quot;-&quot; &quot;curl/7.64.1&quot; 172.17.0.1 - - [24/Jan/2022:08:01:46 +0000] &quot;GET / HTTP/1.1&quot; 301 169 &quot;-&quot; &quot;curl/7.64.1&quot; 172.17.0.1 - - [24/Jan/2022:08:01:50 +0000] &quot;GET / HTTP/1.1&quot; 301 169 &quot;-&quot; &quot;curl/7.64.1&quot; 172.17.0.1 - - [24/Jan/2022:08:03:04 +0000] &quot;GET / HTTP/1.1&quot; 301 169 &quot;-&quot; &quot;curl/7.64.1&quot; 172.17.0.1 - - [24/Jan/2022:08:03:07 +0000] &quot;GET / HTTP/1.1&quot; 301 169 &quot;-&quot; &quot;curl/7.64.1&quot; </code></pre> <p>Actually, <strong>at first, the pod respond with the requested ports, 30080 and 30443</strong>, but it respond with 80 and 443 now. I have no ideas when it changed and i did change.</p> <p>I have changed <code>server_name</code> on nginx config from <code>localhost</code> to <code>192.168.64.2</code> but it doesn't matter.</p>
홍한석
<p>I completely recreated your configuration for minikube on Linux. Your Kubernetes configuration is fine. And I got the same response - <code>301 Moved Permanently</code>.</p> <p>After that, I changed these lines in the <code>default.conf</code> file:</p> <pre class="lang-yaml prettyprint-override"><code>location / { try_files $uri $uri/ =404; } </code></pre> <p>And everything is working for me now (nginx web page from the pod is reachable using curl and browser).</p>
mozello
<p>I installed Airflow using the Bitnami repo. To install extra Python packages I mounted an extra volume<a href="https://i.stack.imgur.com/G1Btn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G1Btn.png" alt="enter image description here" /></a></p> <p>I prepared my requirements.txt file and then created a ConfigMap using <em>kubectl create -n airflow configmap requirements --from-file=requirements.txt</em>. After this I upgraded Airflow using helm upgrade....</p> <p>But in my DAGs file, I'm still getting the error &quot;ModuleNotFoundError: No module named 'yfinance'&quot;</p>
Elpis
<p>Posting this as Community wiki for better visibility. Feel free to expand it.</p> <hr /> <p>As @Elpis wrote in the comments section, he followed <a href="https://towardsdatascience.com/setting-up-data-pipelines-using-apache-airflow-on-kubernetes-4506baea3ce0" rel="nofollow noreferrer">this guide</a> to install Apache Airflow on Kubernetes.</p> <p>And he solved the problem by adding <code>extraVolumeMounts</code> and <code>extraVolumes</code> to the <strong>worker pod</strong>, and also to the <strong>web pod</strong> and to the <strong>scheduler pod</strong>.</p> <pre><code>extraVolumeMounts: - name: requirements mountPath: /bitnami/python/ ## Add extra volumes extraVolumes: - name: requirements configMap: # Provide the name of the ConfigMap containing the files you want # to add to the container name: requirements </code></pre> <p>After that, all extra Python packages were installed.</p>
mozello
<p>kubernetes: 1.25</p> <p>traefik: 2.8.7</p> <p>domain: gitlab.mydomain-prod.dk (I have already certificates for this domain)</p> <p>kubectl get svc gitlab-ce -n gitlab -o yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: gitlab-ce name: gitlab-ce namespace: gitlab spec: clusterIP: 10.98.93.9 clusterIPs: - 10.98.93.9 internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: port-1 port: 80 protocol: TCP targetPort: 80 - name: port-2 port: 443 protocol: TCP targetPort: 443 selector: app: gitlab-ce sessionAffinity: None type: ClusterIP </code></pre> <p>so, I have a gitlab pod configured with both http:</p> <p>gitlab.rb I have <strong>external_url 'http://gitlab.mydomain-prod.dk'</strong></p> <p>here is the ingressroute</p> <p>kubectl get ingressroute -n gitlab -o yaml</p> <pre><code>apiVersion: v1 items: - apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: gitlab-ingress namespace: gitlab spec: entryPoints: - websecure routes: - kind: Rule match: Host(`gitlab.mydomain-prod.dk`) &amp;&amp; PathPrefix(`/`) priority: 1 services: - name: gitlab-ce port: 80 tls: secretName: gitlab-test-cert </code></pre> <p>With actual config, I can access the <a href="https://gitlab.mydomain-prod.dk" rel="nofollow noreferrer">https://gitlab.mydomain-prod.dk</a></p> <p>But, if I enable also https inside gitlab pod (gitlab will listen both 80 and 443)</p> <p><strong>external_url 'https://gitlab.mydomain-prod.dk'</strong></p> <p>With the same ingressroute I get bad gateway...</p> <p>So, my question is, once I have configured gitlab for both http/https how do I define traefik for ssl passthrough ?</p> <pre><code>Something similar to openshift: oc create route passthrough route-passthrough-secured --service=frontend --port=80 </code></pre>
johnny bravo
<p>To configure Traefik for SSL passthrough, GitLab should listen on both the HTTP and HTTPS ports. As per the question, you get a bad gateway when you run the same IngressRoute against HTTPS. Refer to <a href="https://traefik.io/blog/https-on-kubernetes-using-traefik-proxy/" rel="nofollow noreferrer">HTTPS on Kubernetes Using Traefik Proxy</a> by Rahul Sharma and <a href="https://traefik.io/blog/traefik-2-tls-101-23b4fbee81f1/" rel="nofollow noreferrer">Traefik Proxy 2.x and TLS 101</a> by Gerald Croes.</p> <p>To configure SSL passthrough, you need to configure a TCP router by following this <a href="https://oracle.github.io/fmw-kubernetes/20.3.3/soa-domains/adminguide/configure-load-balancer/traefik/#create-ingressroutetcp" rel="nofollow noreferrer">traefik SSL termination doc</a> from Oracle Fusion Middleware and modify your IngressRoute configuration so that Traefik passes the SSL traffic through to the backend GitLab service. Make sure <code>tls.passthrough</code> is <code>true</code>; this delegates TLS termination to the backend. Then verify whether you are able to access the application.</p>
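<p>For illustration, a passthrough TCP router for the host in the question could look like the sketch below (assuming the <code>websecure</code> entry point and the <code>gitlab-ce</code> Service from your manifests; verify the field names against your Traefik CRD version):</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: gitlab-ingress-tcp
  namespace: gitlab
spec:
  entryPoints:
    - websecure
  routes:
    # TCP routers match on the TLS SNI of the incoming connection
    - match: HostSNI(`gitlab.mydomain-prod.dk`)
      services:
        - name: gitlab-ce
          port: 443        # forward the still-encrypted traffic to GitLab's HTTPS port
  tls:
    # passthrough leaves TLS termination to the GitLab pod itself
    passthrough: true
</code></pre>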
Hemanth Kumar
<p>About two year ago i installed a k8s cluster and added Istio. Currently i can’t remember how i installed it (operator or using <code>istioctl</code>). At this moment when i ask for the version i get:</p> <pre class="lang-bash prettyprint-override"><code>./bin/istioctl version client version: 1.11.3 control plane version: 1.11.3 data plane version: 1.11.3 (352 proxies) </code></pre> <p>I have the following namespaces related to istio:</p> <pre class="lang-bash prettyprint-override"><code>kubectl get ns | grep istio istio-operator Active 726d istio-system Active 726d </code></pre> <p>iside our gitops i have an <code>IstioOperator</code> yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: name: istiocontrolplane namespace: istio-system spec: profile: default meshConfig: accessLogFile: /dev/stdout extensionProviders: # https://istio.io/v1.9/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ExtensionProvider-EnvoyExternalAuthorizationHttpProvider - name: xxxx envoyExtAuthzHttp: service: oauth2-proxy-xxxx.keycloak.svc.cluster.local port: 4180 includeHeadersInCheck: - authorization - cookie headersToUpstreamOnAllow: - authorization - path - cookie - x-auth-request-access-token - x-auth-request-user - x-auth-request-email headersToDownstreamOnDeny: - content-type - set-cookie components: ingressGateways: - name: istio-ingressgateway k8s: hpaSpec: minReplicas: 2 service: type: NodePort ports: - name: http2 nodePort: 32080 port: 80 protocol: TCP targetPort: 8080 - name: https nodePort: 32443 port: 443 protocol: TCP targetPort: 8443 pilot: k8s: hpaSpec: minReplicas: 2 </code></pre> <p>Inside the <code>istio-operator</code> i have the following items (mited the replicaset and services)</p> <pre class="lang-bash prettyprint-override"><code>k -n istio-operator get all NAME READY STATUS RESTARTS AGE pod/istio-operator-1-12-5-65c9f7bf96-qcdsc 1/1 Running 0 15m pod/istio-operator-1-14-1-9874cfdcb-bwtwg 1/1 Running 3 (51d ago) 83d pod/istio-operator-58dc7d74f5-pbkcs 1/1 Running 48 (48d ago) 83d NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/istio-operator 1/1 1 1 726d deployment.apps/istio-operator-1-12-5 1/1 1 1 15m deployment.apps/istio-operator-1-14-1 1/1 1 1 146d </code></pre> <p>Inside <code>istio-system</code> i have the following deployments:</p> <pre class="lang-bash prettyprint-override"><code>NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR istio-ingressgateway 2/2 2 2 726d istio-proxy docker.io/istio/proxyv2:1.11.3 app=istio-ingressgateway,istio=ingressgateway istiod 2/2 2 2 726d discovery </code></pre> <p>I want to clean this mess up and move to version 14.1 (our k8s version is <code>v1.22.13</code>)</p> <p>So my questions;</p> <ul> <li>what method of installation did i follow (or did i mix them through the last years)</li> <li>how to clean and remove the older versions.</li> <li>actually any tips that can help me &quot;clean&quot; this mess i created</li> </ul> <p>p.s. I am using custom <code>EnvoyFilter</code> for oauth2 and for redirecting (using lua)</p>
jen
<p>You can refer to this <a href="https://stackoverflow.com/questions/54261468/u">SO post</a>. If you uninstall and reinstall Istio, any resources that were created when Istio was initially installed will be deleted and will not be recreated. This includes Kubernetes objects such as Services, Deployments, ConfigMaps, and Secrets. In addition, any custom configurations or settings that were created for Istio will be lost and will need to be re-created when Istio is reinstalled. For this reason, it is important to back up any configurations or settings before uninstalling Istio in a production environment.</p> <p>The recommended way to <a href="https://istio.io/latest/docs/setup/additional-setup/customize-installation/" rel="nofollow noreferrer">install Istio</a> is to use istioctl with a custom IstioOperator. This allows you to configure the control plane as well as manage the Istio installation in a declarative way.</p>
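<p>To make that concrete, the istioctl workflow could look roughly like the sketch below. Treat it as an outline only, not an exact upgrade plan for your cluster; back up your IstioOperator and EnvoyFilter resources first, and double-check the revision names against your own deployments.</p> <pre class="lang-bash prettyprint-override"><code># Apply the control plane declaratively from your IstioOperator file
istioctl install -f istiocontrolplane.yaml

# Check which control planes and proxies are still around
istioctl version
istioctl proxy-status

# Remove a control plane belonging to an old revision (revision name assumed)
istioctl uninstall --revision 1-12-5

# If you stop using the operator-based install, the operator can be removed too
istioctl operator remove
</code></pre>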
Hemanth Kumar
<p>I want to replace a node group entirely with fresh nodes in an EKS node group.</p> <p>What is the difference between</p> <pre><code>eksctl scale nodegroup --cluster=$CLUSTER_NAME --nodes=0 </code></pre> <p>and draining/deleting the nodegroup, then provisioning via Terraform?</p>
deagleshot
<p>To replace an entire node group in an EKS cluster, you can either use the eksctl command or manually drain and delete the node group and provision a new one with Terraform.</p> <pre><code>eksctl scale nodegroup --cluster=my-cluster --nodes=0 </code></pre> <p>This command will drain and delete all nodes in the node group, leaving the node group empty. You can then use the <code>eksctl create nodegroup</code> command to provision a new node group with the desired configuration.</p> <blockquote> <p>The difference between draining and deleting a node group is that:</p> </blockquote> <ul> <li><p>Draining a node group will gracefully terminate the nodes in the group, while deleting a node group will immediately terminate the nodes.</p> </li> <li><p>Draining a node group is typically used when you want to replace the node group with a new one, while deleting a node group is usually used when you want to completely remove the node group from the EKS cluster.</p> </li> </ul> <p>For more information, refer to this <a href="https://eksctl.io/usage/managing-nodegroups/" rel="nofollow noreferrer">eksctl doc</a>.</p>
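<p>For reference, a rough command sequence could look like the sketch below (cluster, node group, and node names are placeholders; adjust them to your setup):</p> <pre><code># Scale the existing node group down to zero nodes
eksctl scale nodegroup --cluster=my-cluster --name=my-nodegroup --nodes=0

# Or drain the nodes first so pods are evicted gracefully, then delete the group
kubectl drain &lt;node-name&gt; --ignore-daemonsets --delete-emptydir-data
eksctl delete nodegroup --cluster=my-cluster --name=my-nodegroup

# Provision the replacement node group (eksctl shown here; Terraform works as well)
eksctl create nodegroup --cluster=my-cluster --name=my-new-nodegroup
</code></pre>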
Hemanth Kumar
<p>There is a known limitation in this feature that when pods scale down their topology spread may not be even. How does one overcome this limitation to keep their pods spread across multiple availability zones? This can be a huge issue for apps that need to be highly available.</p> <p>As per documentation:</p> <p>“There’s no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution. You can use Descheduler to rebalance the Pods distribution.”</p>
user10869670
<p>With topology spread constraints, scaling down a Deployment may result in an imbalanced Pods distribution. To maintain a balanced distribution, you need to use a tool such as the <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">Descheduler</a> to rebalance the Pods. The Descheduler allows you to kill off certain workloads based on user requirements and let the default kube-scheduler reschedule the killed pods. It can be installed easily with manifests or Helm and run on a schedule, and it can even be triggered manually when the topology changes, which can be set up to suit your needs. Refer to the <a href="https://github.com/kubernetes-sigs/descheduler#compatibility-matrix" rel="nofollow noreferrer">Compatibility Matrix</a> to install a Descheduler version compatible with your Kubernetes version.</p> <p>As per the question tags you are using <a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html" rel="nofollow noreferrer">AWS EKS</a> with multiple zones, so you can also use <a href="https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/" rel="nofollow noreferrer">Topology Aware Hints</a> to indicate your preference for keeping traffic within availability zones when cluster worker nodes are deployed across multiple availability zones. This will allow your pods to be spread across multiple availability zones, even when scaling down.</p> <p>You can also use <a href="https://kubernetes.io/docs/setup/best-practices/multiple-zones/#node-behavior" rel="nofollow noreferrer">node labels in conjunction</a> with Pod topology spread constraints to control how Pods are distributed across zones. This will help ensure that your pods are spread across multiple availability zones when scaling down, and will help keep your application highly available.</p>
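<p>For context, a spread constraint that the Descheduler can re-enforce after a scale-down (e.g. via its <code>RemovePodsViolatingTopologySpreadConstraint</code> strategy) typically looks like the sketch below; the deployment name, labels, and image are placeholders:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        # Keep the pod count per zone within maxSkew of each other
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: nginx   # placeholder image
</code></pre>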
Hemanth Kumar
<p>I am testing out GKE with attaching existing disk using this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd" rel="nofollow noreferrer">GCP tutorial</a>. I've created a Storage Class for my storage with the reclaimPolicy set to <strong>Delete</strong></p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: pd-ssd provisioner: pd.csi.storage.gke.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: type: pd-ssd </code></pre> <p>However after attaching my GCP persistent disk using PV below, I found that my reclaimPolicy is set to <strong>retain</strong> again. Even after patching the PV back to <strong>Delete</strong>. The disk still fails to auto delete from my GCP Persistent Disk page <strong>(After using the disk with PVC and Pod and deleted)</strong>. Am I doing anything wrong? or is this the mechanism for GCP that persistent disk can't be auto deleted when using PV static provisioning. Thank you very much for your response in advance.</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: test-pv spec: storageClassName: &quot;pd-ssd&quot; capacity: storage: 50Gi accessModes: - ReadWriteOnce claimRef: namespace: default name: test-pvc csi: driver: pd.csi.storage.gke.io volumeHandle: projects/{project}/zones/{zone-of-my-disk}/disks/{my-disk-name} fsType: ext4 </code></pre>
jasonmacintosh
<p>As mentioned in the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deleting-a-cluster#overview" rel="nofollow noreferrer">document</a>:</p> <blockquote> <p>GKE might not delete the following resources:</p> <ol> <li>External load balancers created by the cluster.</li> <li>Internal load balancers created by the cluster.</li> <li><a href="https://cloud.google.com/compute/docs/disks#pdspecs" rel="nofollow noreferrer">Persistent disk</a> volumes</li> </ol> </blockquote> <p>Persistent disks are located independently from your virtual machine (VM) instances, so you can detach or move persistent disks to keep your data even after you delete your instances.</p> <p>If you also want the disk to be deleted permanently, this can be fixed by first deleting the namespaces that use it. When you delete a claim, the corresponding PersistentVolume object and the provisioned Compute Engine persistent disk are also deleted.</p>
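<p>If a statically provisioned disk is left behind after the claim is gone, a manual cleanup could look like the sketch below (the PVC name comes from your manifest; the disk name and zone are the values you used in <code>volumeHandle</code>):</p> <pre><code># Deleting the claim releases or deletes the PV according to its reclaim policy
kubectl delete pvc test-pvc -n default

# A disk that was created outside of Kubernetes can be removed manually once detached
gcloud compute disks delete my-disk-name --zone=my-zone
</code></pre>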
Fariya Rahmat
<p>I want to edit my <code>nginx.conf</code> file present inside the Nginx controller pod in AKS, but editing via the exec command is not working. Is there any other way I could edit my <code>nginx.conf</code>?</p> <p>The command which I tried:</p> <pre><code>kubectl exec -it nginx-nginx-ingress-controller -n nginx -- cat /etc/nginx/nginx.conf </code></pre>
Rughma Sussan Renji
<p>Yes, this also seems to work. I tried an alternative way:</p> <p><code>Edit/add</code> the properties to change in <code>ingress.yaml</code> and redeploy it. The changes will then be reflected in <code>nginx.conf</code>.</p>
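<p>As a sketch of that approach, per-Ingress properties are usually set as annotations and end up in the generated <code>nginx.conf</code> when the object is redeployed. The example below assumes the community <code>kubernetes/ingress-nginx</code> controller (annotation prefixes differ for other nginx-based controllers), and the host, service, and size value are placeholders:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: nginx
  annotations:
    # Example property that is rendered into nginx.conf for this ingress
    nginx.ingress.kubernetes.io/proxy-body-size: &quot;16m&quot;
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
</code></pre> <p>Cluster-wide settings can usually be changed through the controller's ConfigMap instead of editing <code>nginx.conf</code> inside the pod.</p>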
Rughma Sussan Renji
<p>I have some HPAs defined within a Kubernetes cluster and the scaling functionality works as expected. However, I've observed that the choice of specific pods that are chosen to be scaled down seems pretty arbitrary.</p> <p>So the question is. Can I somewhere define criteria to choose which pods are preferred to be terminated when a scale-down event happens, but without explicitly defining that the pods are actively scaled on that criteria?</p> <p>For example, I mainly care about CPU and scale such that the CPU percentage is maintained at 50% or less, but when scaling down would prefer that older pods are preferred to be terminated rather than newer ones, or pods consuming the most memory be terminated in preference to those consuming less memory.</p> <p>I'm aware that I can explicitly scale on multiple criteria like CPU and memory, but this can be problematic and prevent downward scaling unnecessarily for example when memory is allocated to a cache but CPU usage has decreased.</p>
Colin
<p>As per this <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost" rel="nofollow noreferrer">official doc</a>, you can add the annotation <code>controller.kubernetes.io/pod-deletion-cost</code> with a value in the range [-2147483647, 2147483647], and pods with a lower value will be killed first. The default is 0, so anything negative on one pod will cause that pod to get killed first during downscaling.</p> <p>See this <a href="https://github.com/kubernetes/enhancements/issues/2255" rel="nofollow noreferrer">GitHub issue</a> about the implementation of this feature: Scale down a deployment by removing specific pods (PodDeletionCost) #2255.</p> <p>You can also use Pod Priority and Preemption. Refer to this <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="nofollow noreferrer">official doc</a> for more information.</p>
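<p>As a quick sketch, the annotation can be applied to a running pod like this (the pod name is a placeholder). Note that it has to be set per pod, so in practice a small controller, CronJob, or script would rank the pods by your criteria (age, memory usage, etc.) and annotate the ones that should go first:</p> <pre><code># Pods with a lower pod-deletion-cost are removed first on scale-down
kubectl annotate pod my-old-pod controller.kubernetes.io/pod-deletion-cost=-100 --overwrite
</code></pre>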
Hemanth Kumar
<p>I am trying to follow this tutorial to set up Minikube on my MacBook.</p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">Minikube with ingress</a></p> <p>I have also referred to this <a href="https://stackoverflow.com/questions/58561682/minikube-with-ingress-example-not-working">Stack Overflow question</a> and <a href="https://stackoverflow.com/questions/70961901/ingress-with-minikube-working-differently-on-mac-vs-ubuntu-when-to-set-etc-host">Stack Overflow question 2</a>, but neither of these is working for me.</p> <p>When I run <code>minikube tunnel</code> it asks for the password and then gets stuck after I enter it.</p> <pre><code>sidharth@Sidharths-MacBook-Air helm % minikube tunnel ✅ Tunnel successfully started 📌 NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ... ❗ The service/ingress example-ingress requires privileged ports to be exposed: [80 443] 🔑 sudo permission will be asked for it. ❗ The service/ingress minimal-ingress requires privileged ports to be exposed [80 443] 🏃 Starting tunnel for service example-ingress. 🔑 sudo permission will be asked for it. 🏃 Starting tunnel for service minimal-ingress. Password: </code></pre> <p>I am getting the below response when I run <code>kubectl get ingress</code></p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress nginx hello-world.info localhost 80 34m </code></pre>
sidharth vijayakumar
<p>This is an issue specifically with the docker driver, and it's only an output issue. If you use a VM driver (like hyperkit for macOS), you'll get the output expected in the documentation.</p> <p>This stems from the fact that we need to do two discrete things to tunnel for a container driver (since it needs to route to 127.0.0.1) and for a VM driver.</p> <p>We can potentially look into fixing this so that the output for both versions is similar, but the tunnel itself is working fine.</p> <p>Refer to this <a href="https://github.com/kubernetes/minikube/issues/12899" rel="nofollow noreferrer">GitHub issue</a> for more information.</p>
Fariya Rahmat
<p>I created a ConfigMap with Pulumi:</p> <pre><code> // Create setup script const argoCDInitialSetupScript = new k8s.core.v1.ConfigMap('argo-cd-setup', { metadata: { name: 'argo-cd-setup', namespace: 'argo', }, data: { 'init-argo.sh': fs.readFileSync(&quot;src/assets/yaml/argo-cd/argo-cd-setup.sh&quot;).toString() }, }); </code></pre> <p>Next, I run a Job that will use this ConfigMap and when the Job completes, I want to delete this ConfigMap but I cannot find how to do so in Pulumi!</p> <p>I would have expected something like this:</p> <pre><code> // Delete Config Map argoCDInitialSetupScript.delete(); </code></pre>
LucasBrazi06
<p>With Pulumi, once the job is done, you can simply run the destroy command.</p> <pre><code>$ pulumi destroy </code></pre> <p>Refer to this <a href="https://www.pulumi.com/registry/packages/kubernetes/how-to-guides/configmap-rollout/" rel="nofollow noreferrer">link</a> for more information.</p>
Fariya Rahmat
<p>We want to get all the pods in the cluster, so we are using something like the following:</p> <pre><code>pods, err := client.CoreV1().Pods(&quot;&quot;).List(context.Background(), metav1.ListOptions{}) </code></pre> <p>This code will receive all the pods in the cluster.</p> <p>My question is: is there code or a lib which will bring back all the pods with their <code>owner reference</code>, i.e. if a pod is owned by a <code>deployment</code> or <code>statefulset</code> etc. you get the whole hierarchy? The <code>trick</code> here is that I may need to go additional levels up, like some recursion, for example a <code>statefulset</code> which is owned by a controller which has a custom kind.</p>
Jenney
<p>As @CoolNetworking suggested, there is no single lib or code that will get you all the pods with their owner references, but you can use the k8s API to retrieve the owner references for each pod. You can then use the k8s API to retrieve the owner object for each owner reference. This will allow you to create a hierarchical structure of all the pods in the cluster.</p> <p>The Kubernetes API is a resource-based (RESTful) programmatic interface provided via HTTP. It supports retrieving, creating, updating, and deleting primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE, GET).</p> <p>Most Kubernetes API resource types are objects: they represent a concrete instance of a concept on the cluster, like a pod or namespace.</p> <p>Refer to the document on the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks" rel="nofollow noreferrer">Kubernetes API</a> for more information.</p>
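<p>To see the data you would be walking, the <code>ownerReferences</code> of every pod can be dumped with kubectl, for example (just an illustration of the field; in Go you would read <code>pod.OwnerReferences</code> from the list you already have):</p> <pre class="lang-sh prettyprint-override"><code># namespace/pod &lt;tab&gt; ownerKind/ownerName for every pod in the cluster
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{&quot;/&quot;}{.metadata.name}{&quot;\t&quot;}{.metadata.ownerReferences[*].kind}{&quot;/&quot;}{.metadata.ownerReferences[*].name}{&quot;\n&quot;}{end}'

# One level up: a ReplicaSet's own ownerReferences point at its Deployment (name is a placeholder)
kubectl get replicaset my-replicaset -o jsonpath='{.metadata.ownerReferences[*].kind}{&quot;/&quot;}{.metadata.ownerReferences[*].name}'
</code></pre>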
Fariya Rahmat
<p>I was reading about Kubernetes events and it seems the only way to process events is to create a Watch over http call, which in turn processes the response and becomes Iterator of events. But that is finite and you have to recreate the event watch constantly... Is there a way to simply tail events with some callback, in Java?</p>
Bober02
<p>As a native watching method for Kubernetes, you can watch events in real time with the <code>--watch</code> <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">flag</a> - it is part of the API server, allowing you to fetch the current state of a resource/object and then subscribe to subsequent changes:</p> <pre><code>kubectl get events --watch </code></pre> <p>There is no built-in solution in Kubernetes for storing/forwarding event objects long term, as by default events have a 1-hour life span. You would need third-party tools to stream the events continuously, e.g. <a href="https://github.com/caicloud/event_exporter?ref=thechiefio" rel="nofollow noreferrer">Kubernetes Event Exporter</a>, and to collect them and export them to external systems for alerting/further processing, e.g. the <a href="https://github.com/bitnami-labs/kubewatch?ref=thechiefio" rel="nofollow noreferrer">kubewatch</a> and <a href="https://github.com/opsgenie/kubernetes-event-exporter" rel="nofollow noreferrer">kubernetes-event-exporter</a> tools.</p>
anarxz
<p>I am looking for a (more or less scientific) document/presentation that explains why the developers of the &quot;Kubernetes language&quot; made the choice to fragment an application definition (for instance) into multiple yaml files instead of writing a single yaml file with all the details of the application deployment (all deployments, volumes, ...)?</p> <p>I imagine that this has something to do with reusability, maintainability and readability, but it would be nice to have a more structured argumentation (I think of a conference paper or presentation at a conference such as KubeCon or DockerCon)</p> <p>Thanks,</p> <p>Abdelghani</p>
Abdelghani
<blockquote> <p>why the developers of the &quot;Kubernetes language&quot; made the choice to fragment an application definition (for instance) in multiple yaml files instead of writing a single yaml file with all the details of the application deployment (all deployments, volumes, ...)?</p> </blockquote> <p>This allows us to modify the configuration of Kubernetes objects more easily. Thus, you do not have to go through the entire cluster configuration yaml file to make a small change in the service backend, for example. And yes, it is easier to develop and maintain a bunch of files, each containing some object or a group of related objects.</p> <p>Keep in mind that you can call the kubectl apply command on a directory of config files:</p> <pre><code>kubectl apply -f &lt;directory&gt; </code></pre> <p><strong>Group related k8s objects into a single file whenever it makes sense</strong>. That way, it is easier to manage.</p> <p>But you definitely can put the entire cluster configuration in one yaml file using this syntax (note ---).</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: # skip --- apiVersion: apps/v1 kind: Deployment metadata: # skip </code></pre> <p>Kubernetes developers suggest writing your configuration files using YAML rather than JSON. Though these formats can be used interchangeably in almost all scenarios, YAML tends to be more user-friendly.</p> <p>There is a good Kubernetes page on <a href="https://kubernetes.io/docs/concepts/configuration/overview/#general-configuration-tips" rel="nofollow noreferrer">Configuration Best Practices</a>.</p>
mozello
<p>I'm new to docker and k8s. Topic with ports is hard to understand for me... Basically, I know that we assign ports to containers for access to the container.</p> <p>What is the difference between publishing port: <code>8080:80</code> and <code>127.0.0.1:8080:80</code>?</p> <p>(<em>Because I'm new in docker and my question might be inexact I'll clarify</em> - I mean while using <code>Docker run</code> command I use <code>-p</code> option to set it).</p> <p>What does <code>8080</code> and <code>80</code> ports mean? - Can we namely define those ports differently?</p> <p>How publishing ports relates to k8s defining in pod manifest? Also if I'd like to assign ports exactly the same in pod manifest like in docker, so how to relate let's say <code>127.0.0.1:8080:80</code> to k8s pod? Are those <code>containerPort</code> and <code>hostPort</code> properties?</p>
Dawid Sieczka
<blockquote> <p>What is the difference between publishing port: 8080:80 and 127.0.0.1:8080:80?</p> </blockquote> <p>The difference is very well explained <a href="https://www.howtogeek.com/225487/what-is-the-difference-between-127.0.0.1-and-0.0.0.0/#:%7E:text=0.0?-,127.0" rel="nofollow noreferrer">here</a>:</p> <blockquote> <ul> <li>127.0.0.1 is the loopback address (also known as localhost).</li> <li>0.0.0.0 is a non-routable meta-address used to designate an invalid, unknown, or non-applicable target (a ‘no particular address’ place holder). In the context of a route entry, it usually means the default route. In the context of servers, 0.0.0.0 means <em>all IPv4 addresses on the local machine</em>. If a host has two IP addresses, 192.168.1.1 and 10.1.2.1, and a server running on the host listens on 0.0.0.0, it will be reachable at both of those IPs.</li> </ul> </blockquote> <p>If you run a Docker container using this command:</p> <pre class="lang-sh prettyprint-override"><code>$ docker run -p 8080:80 --name web nginx </code></pre> <p>this will <strong>map</strong> a running <em>container port</em> 80 to <em>host port</em> <code>0.0.0.0:8080</code>:</p> <pre class="lang-sh prettyprint-override"><code>$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 08a5deaeeae8 nginx &quot;/docker-entrypoint.…&quot; 26 seconds ago Up 26 seconds 0.0.0.0:8080-&gt;80/tcp, :::8080-&gt;80/tcp web </code></pre> <p>Then container port 80 will be reachable on the all host's IP addresses on port 8080.</p> <p>And if you want to map <code>127.0.0.1:8080</code> you should use:</p> <pre class="lang-sh prettyprint-override"><code>$ docker run -p 127.0.0.1:8080:80 --name web nginx </code></pre> <p>Then container port 80 will be reachable only on the host's loopback address.</p> <p>You can read more information about ports exposing on official Docker documentation page <a href="https://docs.docker.com/config/containers/container-networking/" rel="nofollow noreferrer">here</a>.</p> <blockquote> <p>What mean 8080 and 80 ports? - Can we namely define those ports differently?</p> </blockquote> <p>You can choose any available port on your host and container. But, please, keep in mind that some apps inside a container are configured to use certain ports.</p> <h2>k8s</h2> <p>By default, ports in pod in Kubernetes are not published on nodes and host's IP addresses (pods have their own IP addresses). 
It's something like using <em>docker run</em> without <em>-p</em> argument.</p> <p>And a pod definition doesn't have an option to publish ports on the host IP address, you need to use</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl port-forward pod/mypod 8080:80 </code></pre> <p>command to do it, which by default uses 127.0.0.1, but you can specify 0.0.0.0 using <code>--address</code> flag:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl port-forward --address 0.0.0.0 pod/mypod 8080:80 </code></pre> <p>You can find additional information about port forwarding on the k8s official page <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">here</a>.</p> <p>And there is a much better option to use in Kubernetes - <strong>Service</strong> - <strong><em>An abstract way to expose an application running on a set of Pods as a network service.</em></strong></p> <p>You can check the official Kubernetes documentation about service <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">here</a>.</p>
mozello
<p>I have to turn off my service in production and turn it on again after a small period (doing a DB migration).</p> <p>I know I can use <code>kubectl scale deployment mydeployment --replicas=0</code>. This service uses a HorizontalPodAutoscaler (HPA), so how would I go about resetting it to scale according to the HPA?</p> <p>Thanks in advance :)</p>
Tebogo
<p>As suggested by @Gari Singh, HPA will not scale from 0, so once you are ready to reactivate your deployment, just run <code>kubectl scale deployment mydeployment --replicas=1</code> and HPA will then take over again.</p> <p>In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.</p> <p>Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.</p> <p>If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.</p> <p>Refer to this link on <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a> for more detailed information.</p>
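<p>Putting that together, the sequence around the migration could look like this sketch (using the deployment name from the question):</p> <pre><code># Pause the service for the migration
kubectl scale deployment mydeployment --replicas=0

# ... run the DB migration ...

# Bring one replica back; the HPA takes over scaling from here
kubectl scale deployment mydeployment --replicas=1

# Confirm the HPA is acting on the deployment again
kubectl get hpa
</code></pre>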
Fariya Rahmat
<p>By default pods can communicate with each other in Kubernetes, which is unwanted should a pod be compromised. We want to use <strong>NetworkPolicies</strong> to control inbound (ingress) and outbound (egress) traffic to/from pods.</p> <p>Specifically pods should ONLY be able to:</p> <ul> <li>Egress: Call services on the internet</li> <li>Ingress: Receive requests from the <strong>Nginx-ingress controller</strong></li> <li>Ingress: Send logs via <strong>promtail</strong> to <strong>Loki</strong></li> </ul> <h2>What I have tried</h2> <h2>1. Denying all ingress and egress</h2> <p>This is the default policy that we want to gradually open up. It blocks all ingress and egress.</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: default-deny-all namespace: mynamespace spec: podSelector: {} policyTypes: - Ingress - Egress </code></pre> <h2>2. Opening egress to internet only</h2> <p>We allow egress only to IP-adresses that are not reserved for <strong>private networks</strong> according to <a href="https://en.wikipedia.org/wiki/Private_network#Private_IPv4_addresses" rel="nofollow noreferrer">wikipedia</a>.</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: egress-allow-internet-only namespace: mynamespace spec: podSelector: {} policyTypes: - Egress egress: - to: - ipBlock: cidr: 0.0.0.0/0 except: - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 </code></pre> <h2>3. Opening Ingress from ingress controller and loki</h2> <p>We have deployed the standard <strong>NginX Ingress Controller</strong> in namespace <strong>default</strong>, and it has the lable <strong>app.kubernetes.io/name=ingress-nginx</strong>. We have also deployed the standard <strong>loki-grafana stack</strong> to the <strong>default</strong> namespace, which uses <strong>promtail</strong> to transfer logs to <strong>Loki</strong>. Here I allow pods to recieve ingress from the <strong>promtail</strong> and <strong>ingress-nginx</strong> pods.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: ingress-allow-ingress-controller-and-promptail namespace: mynamespace spec: podSelector: {} ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name=default - podSelector: matchLabels: app.kubernetes.io/name=ingress-nginx - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name=default - podSelector: matchLabels: app.kubernetes.io/name=promtail </code></pre> <h2>So, does this configuration look right?</h2> <p>I am new to Kubernetes, so I hope you guys can help point me in the right direction. Does this configuration do what I intent it to do, or have I missed something? E.g. is it enough that I have just blocked <strong>egress</strong> within the <strong>private network</strong> to ensure that the pods are isolated from each other, or should I also make the ingress configuration as I have done here?</p>
Esben Eickhardt
<p>I have compared your Ingress with the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#allow-all-ingress-traffic" rel="nofollow noreferrer">K8s docs</a> and your Egress with this <a href="https://stackoverflow.com/a/57845666/19230181">SO answer</a>, and together with the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-and-all-egress-traffic" rel="nofollow noreferrer">deny-all ingress and egress</a> policy your setup looks correct. The only thing left to double-check is that all the namespace names and label selectors are given correctly (note that <code>matchLabels</code> entries must use <code>key: value</code> YAML syntax rather than <code>key=value</code>).</p> <p>However, Kubernetes pods rely on the DNS server running inside the cluster, and your egress policy blocks it, so you need a more specific exception to allow DNS lookups. Follow this <a href="https://stackoverflow.com/a/57204119/19230181">SO answer</a> to define a <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">DNS config</a> at the pod level, and to make curl calls with domain names work, allow egress to CoreDNS in kube-system (by adding a namespace selector for kube-system combined with a pod selector for the DNS pods).</p> <h3>How to identify dns pod</h3> <pre><code># Identifying DNS pod
kubectl get pods -A | grep dns

# Identifying DNS pod label
kubectl describe pods -n kube-system coredns-64cfd66f7-rzgwk
</code></pre> <h2>Adding DNS pod to NetworkPolicy</h2> <pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-allow-internet-only
  namespace: mynamespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
  - to:
    # namespaceSelector and podSelector in the same peer = kube-dns pods in kube-system
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: &quot;kube-system&quot;
      podSelector:
        matchLabels:
          k8s-app: &quot;kube-dns&quot;
</code></pre>
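<p>If you want to tighten that DNS exception further, you could also restrict it to port 53 (a sketch, assuming the standard <code>k8s-app: kube-dns</code> label that CoreDNS uses; verify the label with the <code>kubectl describe</code> command above):</p> <pre><code>  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: &quot;kube-system&quot;
      podSelector:
        matchLabels:
          k8s-app: &quot;kube-dns&quot;
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
</code></pre>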
Hemanth Kumar
<p>I need to read a Kubernetes key and value from NodeJs. But am getting an undefined error.</p> <p>Please find the below code.</p> <p><strong>deployment.yaml</strong></p> <pre><code>containers: - name: server env: -name: CLIENT_DEV valueFrom: secretKeyRef: name: dev1-creds-config key: clientId </code></pre> <p>The secretKeyRef value will be available in another yaml file. This will get read properly when the dev/local minikube build is running based on the region we are running.</p> <p><strong>secrets.enc.yaml</strong></p> <pre><code>apiVersion: v1 kind: Secret metadata: name: dev1-creds-config type: Opaque data: clientId: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx username: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx password: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx </code></pre> <p>The above one contains the encrypted values. This is created to ensure security.</p> <p>The <strong>index.js</strong> file of NodeJs to read the value</p> <pre><code>require(&quot;dotenv&quot;).config(); const express = require(&quot;express&quot;); const app = express(); console.log(&quot;value..&quot;+processs.env.CLIENT_DEV); const host = customHost || &quot;localhost&quot;; app.listen(port,host,err =&gt; { if(err){ return logger.error(err.message); } logger.appStarted(&quot;3000&quot;, host); }); </code></pre> <p><code>console.log(&quot;value..&quot;+processs.env.CLIENT_DEV);</code> this line is giving me &quot;undefined&quot;</p> <p>My query is,</p> <ol> <li>is it possible to the yaml encrypted value from deployment yaml using Node js</li> <li>is it possible to configure yaml key, value in .env file of Node Js</li> </ol> <p>Am not able to read this secret value from yaml file in my node js.</p> <p>Please help me to resolve this issue.</p> <p>Thanks in advance.</p>
Neela
<p>Check the indentation in your deployment.yaml, it should be like this:</p> <pre class="lang-yaml prettyprint-override"><code>containers:
  - name: server
    env:
      - name: CLIENT_DEV
        valueFrom:
          secretKeyRef:
            name: dev1-creds-config
            key: clientId
</code></pre> <p>In your question the indentation is incorrect, but as long as your NodeJS pods are running well, I assume you just pasted it inaccurately.</p> <p>Second, there is a typo <code>processs</code> in your JavaScript code. Correct the line to:</p> <pre><code>console.log(&quot;value..&quot; + process.env.CLIENT_DEV);
</code></pre> <p>After fixing both of these, your NodeJS application should be able to read the Kubernetes secret value.</p>
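<p>To double-check that the value actually reaches the container, you can print the environment variable inside the running pod (the pod name below is a placeholder):</p> <pre><code>kubectl exec -it &lt;server-pod-name&gt; -- printenv CLIENT_DEV
</code></pre> <p>The printed value is the decoded secret, since Kubernetes base64-decodes the <code>data</code> fields of a Secret before injecting them as environment variables.</p>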
mozello
<p>I need to create two instances using the same Ubuntu Image in Kubernetes. Each instance used two ports i.e. 8080 and 9090. How can I access these two ports externally? Can we use the IP address of the worker in this case?</p>
Himanshu1310
<p>If you want to access your Ubuntu instances from outside the k8s cluster you should place the pods behind a Service.</p> <p>You can access services through public IPs:</p> <ul> <li>create a <code>Service</code> of type <code>NodePort</code> - the <code>Service</code> will be available on <code>&lt;NodeIp&gt;:&lt;NodePort&gt;</code>, so yes, you can use the worker node's IP address here</li> <li>create a <code>Service</code> of type <code>LoadBalancer</code> - if you are running your workload in the cloud, creating a Service of type <code>LoadBalancer</code> will automatically deploy a load balancer for you.</li> </ul> <p>Alternatively you can deploy an <code>Ingress</code> to expose your <code>Service</code>. You would also need an Ingress Controller.</p> <p>Useful links:</p> <ul> <li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps" rel="nofollow noreferrer">GCP example</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Service</a></li> </ul>
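<p>As a minimal sketch of the <code>NodePort</code> option for your two ports (the label <code>app: ubuntu-app</code> and the nodePort numbers are placeholders - adjust them to your deployment):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ubuntu-app-nodeport
spec:
  type: NodePort
  selector:
    app: ubuntu-app
  ports:
    - name: port-8080
      port: 8080
      targetPort: 8080
      nodePort: 30080
    - name: port-9090
      port: 9090
      targetPort: 9090
      nodePort: 30090
</code></pre> <p>With this in place the instances are reachable on <code>&lt;worker-node-ip&gt;:30080</code> and <code>&lt;worker-node-ip&gt;:30090</code>.</p>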
mlewinska
<p>When deploying a set of SpringBoot microservice applications in a Kubernetes cluster, should I include any kind of service discovery client libraries in my SpringBoot application to leverage kubernetes-native-service-discovery? If not, how a caller service calls another microservice in the same cluster?</p> <p>Thanks in advance.</p>
Larsen
<p>No, you do not need a service discovery client library - Kubernetes provides DNS-based service discovery natively. A Service has to be created and associated with the pods of each microservice, and a caller can then invoke it as shown below (for example an HTTP service). Please refer to the Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> documentation for the various service types.</p> <pre><code>http://&lt;service-name&gt;:&lt;port&gt;
</code></pre> <p>No other changes are required on the application side. Please refer to the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes</a> official documentation for DNS resolution details.</p>
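<p>A small sketch (the service name, app label and ports are placeholders): a Spring Boot caller in the same namespace could simply call <code>http://user-service:80/...</code>, or <code>http://user-service.&lt;namespace&gt;.svc.cluster.local:80/...</code> from another namespace.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - port: 80
      targetPort: 8080
</code></pre>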
Nataraj Medayhal
<p>I just created a private Kubernetes cluster on Oracle Cloud. The normal way to connect to cluster API is via the Bastion service. I've followed the exact steps as mentioned in this article: <a href="https://www.ateam-oracle.com/using-oci-bastion-service-to-manage-private-oke-kubernetes-clusters" rel="nofollow noreferrer">https://www.ateam-oracle.com/using-oci-bastion-service-to-manage-private-oke-kubernetes-clusters</a></p> <p>After executing the ssh command port-forwarding (Step 4 in the article), the shell blocks as intended, but I don't get any sensible output from running kubectl:</p> <pre><code>$ kubectl cluster-info To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. Unable to connect to the server: net/http: TLS handshake timeout </code></pre> <p>Here's the output when passing <code>-v</code> to ssh:</p> <pre><code>OpenSSH_8.4p1, OpenSSL 1.1.1k 25 Mar 2021 debug1: Reading configuration data /home/praj/.ssh/config debug1: Reading configuration data /usr/etc/ssh/ssh_config debug1: /usr/etc/ssh/ssh_config line 24: include /etc/ssh/ssh_config.d/*.conf matched no files debug1: /usr/etc/ssh/ssh_config line 26: Applying options for * debug1: Connecting to host.bastion.ap-mumbai-1.oci.oraclecloud.com [192.29.162.226] port 22. debug1: Connection established. debug1: identity file /home/praj/.ssh/id_rsa type 0 debug1: identity file /home/praj/.ssh/id_rsa-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_8.4 debug1: Remote protocol version 2.0, remote software version Go debug1: no match: Go debug1: Authenticating to host.bastion.ap-mumbai-1.oci.oraclecloud.com:22 as 'ocid1.bastionsession.oc1.ap-mumbai-1.amaaaaaafvm2mgaa5inuqsfwe73eitjgead23h2avusdwryx5hlz6orz7jea' debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: algorithm: [email protected] debug1: kex: host key algorithm: ssh-rsa debug1: kex: server-&gt;client cipher: aes128-ctr MAC: [email protected] compression: none debug1: kex: client-&gt;server cipher: aes128-ctr MAC: [email protected] compression: none debug1: kex: [email protected] need=32 dh_need=32 debug1: kex: [email protected] need=32 dh_need=32 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ssh-rsa SHA256:JTeqM8qvS9EO9reRIF/qyllvs6px8Y69LEveK9NFzZc debug1: Host 'host.bastion.ap-mumbai-1.oci.oraclecloud.com' is known and matches the RSA host key. debug1: Found key in /home/praj/.ssh/known_hosts:13 debug1: rekey out after 4294967296 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: rekey in after 4294967296 blocks debug1: pubkey_prepare: ssh_get_authentication_socket: No such file or directory debug1: Will attempt key: /home/praj/.ssh/id_rsa RSA SHA256:380ueVYrrzxGrkPRep4huj+pHdElPoz8iCTSYvKD5Hg explicit debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering public key: /home/praj/.ssh/id_rsa RSA SHA256:380ueVYrrzxGrkPRep4huj+pHdElPoz8iCTSYvKD5Hg explicit debug1: Server accepts key: /home/praj/.ssh/id_rsa RSA SHA256:380ueVYrrzxGrkPRep4huj+pHdElPoz8iCTSYvKD5Hg explicit Enter passphrase for key '/home/praj/.ssh/id_rsa': debug1: Authentication succeeded (publickey). Authenticated to host.bastion.ap-mumbai-1.oci.oraclecloud.com ([192.29.162.226]:22). debug1: Local connections to LOCALHOST:6443 forwarded to remote address 10.0.0.14:6443 debug1: Local forwarding listening on 127.0.0.1 port 6443. 
debug1: channel 0: new [port listener] debug1: Entering interactive session. debug1: pledge: network debug1: Connection to port 6443 forwarding to 10.0.0.14 port 6443 requested. debug1: channel 1: new [direct-tcpip] debug1: Connection to port 6443 forwarding to 10.0.0.14 port 6443 requested. debug1: channel 2: new [direct-tcpip] debug1: Connection to port 6443 forwarding to 10.0.0.14 port 6443 requested. debug1: channel 3: new [direct-tcpip] debug1: Connection to port 6443 forwarding to 10.0.0.14 port 6443 requested. debug1: channel 4: new [direct-tcpip] debug1: channel 1: free: direct-tcpip: listening port 6443 for 10.0.0.14 port 6443, connect from 127.0.0.1 port 44054 to 127.0.0.1 port 6443, nchannels 5 debug1: Connection to port 6443 forwarding to 10.0.0.14 port 6443 requested. debug1: channel 1: new [direct-tcpip] debug1: channel 2: free: direct-tcpip: listening port 6443 for 10.0.0.14 port 6443, connect from 127.0.0.1 port 44056 to 127.0.0.1 port 6443, nchannels 5 debug1: channel 3: free: direct-tcpip: listening port 6443 for 10.0.0.14 port 6443, connect from 127.0.0.1 port 44058 to 127.0.0.1 port 6443, nchannels 4 debug1: channel 4: free: direct-tcpip: listening port 6443 for 10.0.0.14 port 6443, connect from 127.0.0.1 port 44060 to 127.0.0.1 port 6443, nchannels 3 debug1: channel 1: free: direct-tcpip: listening port 6443 for 10.0.0.14 port 6443, connect from 127.0.0.1 port 44062 to 127.0.0.1 port 6443, nchannels 2 ^Cdebug1: channel 0: free: port listener, nchannels 1 Killed by signal 2. </code></pre> <p>My cluster is running on two ARM-based nodes (A1 Flexible VM), with the default Oracle Linux 7.9 as OS, and Kubernetes version 1.20.8</p> <p>Can anyone tell me where's the issue? Does it need any additional configuration to connect to Kubernetes API?</p>
Raj
<p>I was able to connect to the private K8s cluster from my local machine via SSH with these steps:</p> <ol> <li>Create a separate <em>public</em> subnet inside the cluster's VCN to use as the bastion target subnet.</li> <li>Use the default public route table of the cluster network for the new subnet. This route table already has a route rule with an internet gateway target that will allow traffic from the local machine to the cluster API and vice versa.</li> <li>Add a security list to the new subnet with an egress rule whose destination CIDR is 0.0.0.0/0, allowing all ports.</li> <li>Make sure to add the new subnet's CIDR to the ingress rules of the security list on the subnet used by the cluster API endpoint (port 6443).</li> <li>Create the OCI bastion. Remember to use the new subnet as the target subnet.</li> </ol>
Alon B.
<p>Is it possible to create a Kubernetes cluster admin without the ability to modify/read certain namespace and its content?</p> <p>I am talking about subtracting certain permissions from existing role.</p> <p>thanks.</p>
tal
<p>To get the behavior you want you would need a <a href="https://en.wikipedia.org/wiki/Complement_(set_theory)" rel="nofollow noreferrer">set subtraction</a> of cluster-admin role minus the rules that you have defined. <a href="https://github.com/kubernetes/kubernetes/issues/70387" rel="nofollow noreferrer">It's not supported in K8s as of this writing.</a></p> <p>If you need a custom role which has less permissions than a predefined role, it would be more clear to list those permissions rather than to list the inverse of those permissions.</p>
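<p>Since RBAC rules are purely additive (there is no deny rule you could attach to an existing role), the practical workaround is to build a custom ClusterRole as an explicit allow-list. A rough sketch (the resources and verbs below are only an example, not a drop-in replacement for <code>cluster-admin</code>):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: limited-admin
rules:
  - apiGroups: [&quot;&quot;, &quot;apps&quot;, &quot;batch&quot;]
    resources: [&quot;pods&quot;, &quot;services&quot;, &quot;configmaps&quot;, &quot;deployments&quot;, &quot;statefulsets&quot;, &quot;jobs&quot;]
    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;delete&quot;]
</code></pre> <p>If you need to keep specific namespaces completely off-limits, bind the ClusterRole with per-namespace RoleBindings in the allowed namespaces instead of a single ClusterRoleBinding.</p>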
Ramesh kollisetty
<p>First time Kubernetes user here.</p> <p>I deployed a service using <code>kubectl -n my_namespace apply -f new_service.yaml</code></p> <p>It failed, with the pod showing <code>Warning -Back-off restarting failed container</code></p> <p>I am now trying to delete the failed objects and redeploy a fixed version. However, I have tried to delete the service, pod, deployment, and replicaset, but they all keep recreating themselves.</p> <p>I looked at <a href="https://stackoverflow.com/questions/63344878/kubernetes-deployments-replicasets-are-recreated-after-deletion">this thread</a> but don't believe it applies since my deployment list:</p> <pre><code>apiVersion: apps/v1 kind: Deployment </code></pre> <p>Any input appreciated!</p>
Xela
<p>Posting this as Community wiki for better visibility. Feel free to expand it.</p> <hr /> <p><strong>In a Kubernetes cluster</strong>:</p> <ul> <li>if you delete Pods, but they are recreated again<br /> <code>there is a Kubernetes Deployment / StatefulSet / DaemonSet / job that recreates them</code><br /> <strong>delete a Deployment / StatefulSet / DaemonSet to delete those pods, check k8s jobs</strong></li> <li>if you delete a ReplicaSet, but it is recreated again<br /> <code>there is a Kubernetes Deployment that recreates it</code><br /> <strong>delete a Deployment to delete this replicaset</strong></li> <li>if you delete a Deployments / Services, etc., but they are recreated again<br /> <code>there is a deployment tool like ArgoCD / FluxCD / other tool that recreates them</code><br /> <strong>configure ArgoCD / FluxCD / other deployment tool to delete them</strong></li> <li>also check if Helm is used, run <code>helm list --all-namespaces</code> to list installed releases.</li> </ul> <p>Thanks to @P....for comments.</p>
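<p>A quick way to find out which controller keeps recreating a pod is to look at its owner references and then list the usual suspects in the namespace (the names below are placeholders):</p> <pre><code># who owns the pod?
kubectl get pod &lt;pod-name&gt; -n my_namespace -o jsonpath='{.metadata.ownerReferences}'

# list the controllers that could be recreating it
kubectl get deployments,replicasets,statefulsets,daemonsets,jobs,cronjobs -n my_namespace

# deleting the top-level owner removes the pods it manages
kubectl delete deployment &lt;deployment-name&gt; -n my_namespace
</code></pre>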
mozello
<p>Lets say I have utility jar called <strong>validation.jar</strong> which can validate some string to be forwarded. </p> <p>How can I add this validation.jar along with camel-k integration in <strong>minkube</strong>.</p> <p>example :</p> <pre><code>import com.validation.Util; import org.apache.camel.builder.RouteBuilder; public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { //from("timer:tick").log(Util.validate("dummy message - new")); from("timer:tick").log("dummy message - new"); } } </code></pre> <p>to get <strong>com.validation.Util</strong> class we need validation.jar available with camel-k. How to provide that.</p>
S Kumar
<p>One way of achieving your goal: store your Java classes together with a <code>pom.xml</code> for building the jar in a GitHub repository. After that you can use Jitpack.io; JitPack will build the jar and serve it from its Maven registry. Finally, you can reference it as a dependency on the <code>kamel run</code> command.</p>
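<p>A sketch of what that could look like (the Maven coordinates are placeholders for whatever JitPack publishes for your repository, and depending on your Camel K version you may also need to register jitpack.io as a Maven repository on the IntegrationPlatform):</p> <pre><code>kamel run MyRoute.java -d mvn:com.github.youruser:validation:1.0.0
</code></pre>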
Raimondas M
<p>I have an AKS cluster with default settings. I'm trying to create a very simple Deployment/Service. The Service is type LoadBlanacer. I see the service is created, however I cannot curl the service public IP. I don't even get an error, curl just hangs.</p> <pre><code>$ kubectl get all --show-labels NAME READY STATUS RESTARTS AGE LABELS pod/myapp-79579b5b68-npb2g 1/1 Running 0 104m app=myapp,pod-template-hash=79579b5b68 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS service/kubernetes ClusterIP 10.0.0.1 &lt;none&gt; 443/TCP 26h component=apiserver,provider=kubernetes service/myapp-service LoadBalancer 10.0.223.167 $PUBLIC_IP 8080:31000/TCP 104m &lt;none&gt; NAME READY UP-TO-DATE AVAILABLE AGE LABELS deployment.apps/myapp 1/1 1 1 104m app=myapp NAME DESIRED CURRENT READY AGE LABELS replicaset.apps/myapp-79579b5b68 1 1 1 104m app=myapp,pod-template-hash=79579b5b68 </code></pre> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myapp labels: app: myapp spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: nginx:latest resources: limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: myapp-service labels: app: myapp spec: selector: app: myapp type: LoadBalancer ports: - port: 8080 targetPort: 8080 # container port of Deployment; kubectl describe pod &lt;podname&gt; | grep Port nodePort: 31000 # http://external-ip:nodePort </code></pre>
Timothy Pulliam
<p>Depending on your requirements, you can create an <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb" rel="nofollow noreferrer">internal</a> or <a href="https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard" rel="nofollow noreferrer">public</a> load balancer attached to the application Service. After that you can access the Service from outside the k8s cluster.</p>
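<p>For the internal variant, the first linked document essentially comes down to annotating the Service (a sketch based on your existing manifest, ports left as they are):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot;
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
</code></pre> <p>One more thing worth double-checking in your manifest: <code>nginx:latest</code> listens on port 80 by default, so a <code>targetPort</code> of 8080 will only work if you have reconfigured nginx accordingly.</p>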
Nataraj Medayhal
<p>I have a kubernetes cluster running many apps. 1 pod per 1 namespace is considered an app. only 1 pod (the app pod) runs per namespace. an example looks like this:</p> <pre><code>(Note that id&lt;x&gt; is a complete random string, so id1 it's not what an id looks like) namespace: app-id1, only-pod-running-in-this-namespace: app-id1 namespace: app-id2, only-pod-running-in-this-namespace: app-id2 namespace: app-id3, only-pod-running-in-this-namespace: app-id3 namespace: app-id4, only-pod-running-in-this-namespace: app-id4 </code></pre> <p>The list goes on and on. I am trying to get the uptime for each app, by checking the pod status. In prometheus, I am doing it like this:</p> <pre><code>kube_pod_status_ready{condition=&quot;true&quot;, namespace=~&quot;app-.*&quot;, pod=~&quot;app-.*&quot;} </code></pre> <p>this returns a table with all existing apps (each record is 1 app status), and their uptime value (1 if the pod is up, 0 is the pod is down).</p> <p>Now I want to create another metric, that returns the average for <em><strong>all</strong></em> apps combined. This is, I want only 1 record returned with the average for all apps. So, let's say I have 100 apps, then if 1 went down for 5 min, I want that 5 min window to show 99 (or 0.99 actually) as a result, instead of 99 apps showing 1, and 1 app showing 0. I hope that make sense. This is how I am trying, but it's not working, as it's returning a table with 1 record per app.</p> <pre><code>avg_over_time(kube_pod_status_ready{condition=&quot;true&quot;, namespace=~&quot;app-.*&quot;, pod=~&quot;app-.*&quot;}[5m]) </code></pre>
Simon Ernesto Cardenas Zarate
<p>My understanding is that you want the percentage of app instances that are up, and that <code>kube_pod_status_ready{condition=&quot;true&quot;, namespace=~&quot;app-.*&quot;, pod=~&quot;app-.*&quot;}</code> returns <code>1</code> when an app is up and <code>0</code> when it is down. If so, dividing the sum by the count gives you a single series with that ratio: <code>sum(kube_pod_status_ready{condition=&quot;true&quot;, namespace=~&quot;app-.*&quot;, pod=~&quot;app-.*&quot;}) / count(kube_pod_status_ready{condition=&quot;true&quot;, namespace=~&quot;app-.*&quot;, pod=~&quot;app-.*&quot;})</code></p>
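<p>Equivalently, since the metric is always 0 or 1, <code>avg()</code> gives the same single-series ratio directly, and you can smooth it over a time window with a subquery (same label matchers as in your query; subqueries need Prometheus 2.7+):</p> <pre><code>avg(kube_pod_status_ready{condition=&quot;true&quot;, namespace=~&quot;app-.*&quot;, pod=~&quot;app-.*&quot;})

# averaged over a 5-minute window via a subquery
avg_over_time(avg(kube_pod_status_ready{condition=&quot;true&quot;, namespace=~&quot;app-.*&quot;, pod=~&quot;app-.*&quot;})[5m:1m])
</code></pre>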
Xiang.Bao
<p>For one of the test AKS clusters I am trying to update, it gives the following error.</p> <p>Error: SkuNotAvailable. Message: The requested VM size for resource &quot;Following SKUs have failed for capacity restrictions: Standard_D4s_v4' is currently not available in location 'SouthAfricaNorth'. Please try another size or deploy to a different location or different size.</p> <p>I have checked and found that the quota is available in the subscription for this SKU and region selected. Now cluster and pools went in to failed status</p>
Mishap
<p>As far as I know, the &quot;SkuNotAvailable&quot; error means either a capacity issue in the region or that your subscription doesn't have access to that specific size.</p> <p>You can verify which one it is by running the Azure CLI command <code>az vm list-skus --location centralus --size Standard_D --all --output table</code> and checking the restrictions column.</p> <p>If a SKU isn't available for your subscription in a location or zone that meets your business needs, submit a <a href="https://learn.microsoft.com/en-us/troubleshoot/azure/general/region-access-request-process" rel="nofollow noreferrer">SKU request</a> to Azure Support.</p> <p>If the subscription doesn't have access, please reach out to the Azure subscription and quota management support team through a support case so they can check and, if possible, enable the size for your subscription; if they cannot enable it for any reason, they will give an appropriate explanation. At this point there is nothing that can be done on the AKS side.</p>
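<p>For the region in your error message, the check would look something like this (the <code>--size</code> filter is just an example):</p> <pre><code>az vm list-skus --location southafricanorth --size Standard_D --all --output table
</code></pre> <p>SKUs listed with a <code>NotAvailableForSubscription</code> restriction are the ones that need the support request.</p>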
Prrudram-MSFT
<p>Having a HPA configuration of <code>50%</code> average <code>CPU</code></p> <pre><code>kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 </code></pre> <p>I found the problem that I have only <code>one</code> pod receiving traffic so the <code>CPU</code> is higher than <code>50%</code> of request cpu.</p> <p>Then start auto scaling up new pods, but those sometimes are not receiving yet any traffic, so the cpu consumption is very low.</p> <p>My expectations was to see those pods that dont use any cpu to be scale down at some point(how much it should take?), but it's not happening, and I believe the reason is, that first condition of one pod cpu use, higher than 50% is forcing to keep those pods up.</p> <p>What I need is to scale up/down those pods, until they can start receiving traffic, which it depends on in which node they are deployed.</p> <p>Any suggestion of how to accomplish this issue?</p>
paul
<p><strong>HPA CPU Utilization:</strong></p> <p>The targetCPUUtilizationPercentage of 50 means that if the average CPU utilization across all Pods goes above 50%, HPA will scale the deployment up, and if it goes below 50%, HPA will scale the deployment down, as long as the number of replicas is above the configured minimum. This is how it works.</p> <p>I just checked the code and found that the targetUtilization percentage calculation uses the resource request. You can refer to the code below:</p> <pre><code>currentUtilization = int32((metricsTotal * 100) / requestsTotal)
</code></pre> <p>Here is the link: <a href="https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49</a></p> <p>There is an official walkthrough focusing on HPA and its scaling:</p> <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Run application: Horizontal pod autoscale: Walkthrough</a></p> <blockquote> <p><strong>Support for configurable scaling behavior</strong></p> <p>Kubernetes v1.23 [stable] (the autoscaling/v2beta2 API version previously provided this ability as a beta feature). If you use the v2 HorizontalPodAutoscaler API, you can use the behavior field (see the API reference) to configure separate scale-up and scale-down behaviors. You specify these behaviours by setting scaleUp and / or scaleDown under the behavior field. You can specify a stabilization window that prevents flapping the replica count for a scaling target. Scaling policies also let you control the rate of change of replicas while scaling.</p> <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Run application: Horizontal pod autoscale: Support for configurable scaling behavior</a></p> </blockquote> <p>You can apply the newly introduced fields such as <code>behavior</code> and its <code>stabilizationWindowSeconds</code> to tune scaling to your specific needs - for example, a longer scale-down stabilization window keeps freshly created pods around long enough to start receiving traffic before the HPA considers removing them.</p>
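<p>A sketch of an <code>autoscaling/v2</code> manifest for your deployment with a conservative scale-down (the behavior numbers are illustrative, not a recommendation):</p> <pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60
</code></pre> <p>With this, the HPA waits for 5 minutes of consistently lower desired replica counts before scaling down, and then removes at most one pod per minute.</p>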
Mayur Kamble
<p>How can I rewrite an URI in a VirtualService but just internally? In other words:</p> <ol> <li>A client requests api.example.com/users/get/87</li> <li>IngressGateway receives the request and translates it to api.example.com/get/87 to the &quot;users&quot; pod. The pod's web server doesn't &quot;know&quot; the URL &quot;api.example.com/users/get/87&quot; but &quot;knows&quot; &quot;api.example.com/get/87&quot;</li> <li>To the user's browser, it will still be appearing as &quot;api.example.com/users/get/87&quot; not rewrited.</li> </ol> <p>I need to do it that way because there are several pod/services to respond this same domain &quot;api.example.com&quot;</p> <p>If write it like this below, it doesn't work because it will cause conflict since I have many pod/services on the same domain:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: api namespace: default spec: hosts: - &quot;api.example.com&quot; gateways: - istio-system/default-gateway http: - match: - uri: prefix: /users rewrite: uri: &quot;/&quot; route: - destination: host: user port: number: 80 - match: - uri: prefix: /cars rewrite: uri: &quot;/&quot; route: - destination: host: user port: number: 80 </code></pre>
brgsousa
<p>A URL rewrite is applied internally only - the browser URL is not changed to the rewritten URL. For example, if &quot;http://xx.xx.xx.xx:8080/api/v1/products&quot; is requested in the browser and the VirtualService uses <code>rewrite: uri: /productpage</code>, the browser URL will still remain &quot;http://xx.xx.xx.xx:8080/api/v1/products&quot; instead of &quot;http://xx.xx.xx.xx:8080/productpage&quot;; only the request that is forwarded to the destination pod carries the rewritten path, which is exactly the behavior you are asking for.</p>
Nataraj Medayhal
<p>I try to deploy Matabase on my GKE cluster but I got Readiness probe failed.</p> <p>I build on my local and get localhost:3000/api/health i got status 200 but on k8s it's not works.</p> <p>Dockerfile. I create my own for push and build to my GitLab registry</p> <pre><code>FROM metabase/metabase:v0.41.6 EXPOSE 3000 CMD [&quot;/app/run_metabase.sh&quot; ] </code></pre> <p>my deployment.yaml</p> <pre><code># apiVersion: extensions/v1beta1 kind: Deployment metadata: name: metaba-dev spec: selector: matchLabels: app: metaba-dev replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 50% maxSurge: 100% template: metadata: labels: app: metaba-dev spec: restartPolicy: Always imagePullSecrets: - name: gitlab-login containers: - name: metaba-dev image: registry.gitlab.com/team/metabase:dev-{{BUILD_NUMBER}} command: [&quot;/app/run_metabase.sh&quot; ] livenessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 60 periodSeconds: 10 readinessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 60 periodSeconds: 10 imagePullPolicy: Always ports: - name: metaba-dev-port containerPort: 3000 terminationGracePeriodSeconds: 90 </code></pre> <p>I got this error from</p> <p>kubectl describe pod metaba-dev</p> <pre><code> Warning Unhealthy 61s (x3 over 81s) kubelet Readiness probe failed: Get &quot;http://10.207.128.197:3000/api/health&quot;: dial tcp 10.207.128.197:3000: connect: connection refused Warning Unhealthy 61s (x3 over 81s) kubelet Liveness probe failed: Get &quot;http://10.207.128.197:3000/api/health&quot;: dial tcp 10.207.128.197:3000: connect: connection refused </code></pre> <p>kubectl logs</p> <pre><code>Picked up JAVA_TOOL_OPTIONS: -Xmx1g -Xms1g -Xmx1g Warning: environ value jdk-11.0.13+8 for key :java-version has been overwritten with 11.0.13 WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. 2022-01-28 15:32:23,966 INFO metabase.util :: Maximum memory available to JVM: 989.9 MB 2022-01-28 15:33:09,703 INFO util.encryption :: Saved credentials encryption is ENABLED for this Metabase instance. 🔐 For more information, see https://metabase.com/docs/latest/operations-guide/encrypting-database-details-at-rest.html </code></pre> <hr /> <p>Here Solution</p> <p>I add initialDelaySeconds: to 1200 and check logging it's cause about network mypod cannot connect to database and when i checking log i did not see that cause i has been restart and it's was a new log</p>
Thotsaphon Sirikutta
<p>Try to change your <code>initialDelaySeconds</code> from 60 to a higher value such as <code>100</code>, so Metabase has enough time to start before the probes begin failing. You should also always set resource requests and limits on your container to avoid probe failures: once your app hits its CPU limit, Kubernetes throttles the container, which can slow startup enough to miss the probes.</p> <pre><code>containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: &quot;128Mi&quot;
        cpu: &quot;250m&quot;
      limits:
        memory: &quot;256Mi&quot;
        cpu: &quot;500m&quot;
</code></pre>
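<p>For reference, the adjusted probe sections could look like this (the delay is something you have to tune to your app's real startup time - as your edit shows, Metabase may need a much larger value):</p> <pre><code>livenessProbe:
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 100
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 100
  periodSeconds: 10
</code></pre>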
Leo
<p>At first i have delployed cloud sql with postgresql. and then i deployed gke with documents: <a href="https://cloud.google.com/support-hub#section-2" rel="nofollow noreferrer">https://cloud.google.com/support-hub#section-2</a> i used gcloud tool. and i made gke cluster using auto pilot mode. and made deployment with autoscaler. and i registered my docker image. and then i expose it with loadbalancer. Once i build my docker image. and then i excute it in local. it is running well. but it is not running well in gke server. and suddendly can not connect cloud sql. so i registered gke external ip in cloud sql connection ips. but it doesn't work... i want to connect cloud sql from google kubernetes engine. please help me...</p>
Hyeonkyu Han
<p>For accessing the Cloud SQL instance from an application running in Google Kubernetes Engine, you can use either the Cloud SQL Auth proxy (with public or private IP), or connect directly using a private IP address. The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP.</p> <p>Reference links:</p> <ul> <li><a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="noreferrer">https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine</a></li> <li><a href="https://medium.com/google-cloud/connecting-cloud-sql-kubernetes-sidecar-46e016e07bb4" rel="noreferrer">https://medium.com/google-cloud/connecting-cloud-sql-kubernetes-sidecar-46e016e07bb4</a></li> <li><a href="https://www.jhipster.tech/tips/018_tip_kubernetes_and_google_cloud_sql.html" rel="noreferrer">https://www.jhipster.tech/tips/018_tip_kubernetes_and_google_cloud_sql.html</a></li> </ul>
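<p>A rough sketch of the sidecar pattern described in the first link - the project, region, instance name, database port and proxy image tag below are all placeholders, so check the guide for the image that matches your setup. Your app then connects to the database on <code>127.0.0.1:5432</code>:</p> <pre><code>      containers:
        - name: myapp
          image: gcr.io/my-project/my-app:latest   # your application image
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
          command:
            - &quot;/cloud_sql_proxy&quot;
            - &quot;-instances=my-project:my-region:my-instance=tcp:5432&quot;
</code></pre> <p>The service account used by the proxy (via Workload Identity or a mounted key) also needs the Cloud SQL Client IAM role.</p>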
Shiva Matta
<p>The Minikube cluster on my Debian system removes all resources which have been up and running after I run <code>minikube stop</code> or shutting down the system.</p> <p>What could possibly be the reason? How can I persist it?</p> <p>Kubernetes &amp; Kubectl version 1.23.1.</p>
Faramarz Qoshchi
<p>Posting it as an answer: the issue with removing all deployed resources in Minikube cluster after <code>minikube stop</code> and system shutdown was raised <a href="https://github.com/kubernetes/minikube/issues/12655" rel="nofollow noreferrer">here</a>. It is currently fixed in the newest release, upgrading your Minikube installation to 1.25.2 version should solve it.</p>
anarxz
<p>I have a certificate</p> <pre><code>-----BEGIN CERTIFICATE----- MIIC3jCCAcagAwIBAgIRAMVxQYnfOmukOqdI7EkOujMwDQYJKoZIhvcNAQELBQAw ADAeFw0yMjA2MDcwOTQ3MDlaFw0yMjA5MDUwOTQ3MDlaMAAwggEiMA0GCSqGSIb3 DQEBAQUAA4IBDwAwggEKAoIBAQDRrmSirAmqSsM3WtJ0/2wEwMw5aMH0tagfDDEy Fofr64UkxCw/e6gZYhOTY5TPMyK9XZkSf81lsRdYyo/t5WtNhYZgHkAaNTK8WVeJ LCGP1VQSjwZq82+edRfiJ0xIXD1JWlARhh7uXToZxYUXQXhJYtjJg9qCtISOv3/C S6V+rMNaCq8yegLfb3RdXz5KAiHs/+xAAKlOmhn2Ab3XUVFCBPpVIWZpCrcnAag3 ev8dDm28g9oRjJzC0jeOrLz1gbUn6M/B8VsYLTGFSjiopPkYZsmcFY0DHe7FopWe hDQueVkmFtYdrRUZaT/r1R+65dCmS1YtQu83mhCDZQ7oNW+XAgMBAAGjUzBRMB0G A1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMCIGA1Ud EQEB/wQYMBaCFHdvLXNyZS48Y2x1c3RlciBVUkw+MA0GCSqGSIb3DQEBCwUAA4IB AQB86t7SguZySp7C0vjqqAECEHOS34xyhecOYbmxyu+aaQTu2Ryzxh9ymSUlI9oa qUqjMYXSeQY244bt2jgqh9yLWe7VtMu9IMX3DAXlV5Hogmt4BKNtJTRwB8hTBZHl 26e+UiHe72BW28xCL5zYNkLG4fE5r+pMWUrAQzIsVmkfiGSb+OZpwJ7EoOz5wnBm Q/85ehlufxYwpywnZZcM3FKcDwxiDm1VDo+jU70KsZ4f1zxWpXqnUEUBQ0Y8ca+7 oMneoZi4/VeBC82qDmTfvigi0NE+VTCglVeU2jgKFDodChaJbHXIpg8UKVpmvGsO CfUXffVNI/PErCgY3e4vH/65 -----END CERTIFICATE----- </code></pre> <p>It is stored in the variable after converting into the base64 encoding.</p> <p>export certificateData=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMzakNDQWNhZ0F3SUJBZ0lSQU1WeFFZbmZPbXVrT3FkSTdFa091ak13RFFZSktvWklodmNOQVFFTEJRQXcKQURBZUZ3MHlNakEyTURjd09UUTNNRGxhRncweU1qQTVNRFV3T1RRM01EbGFNQUF3Z2dFaU1BMEdDU3FHU0liMwpEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURScm1TaXJBbXFTc00zV3RKMC8yd0V3TXc1YU1IMHRhZ2ZEREV5CkZvZnI2NFVreEN3L2U2Z1pZaE9UWTVUUE15SzlYWmtTZjgxbHNSZFl5by90NVd0TmhZWmdIa0FhTlRLOFdWZUoKTENHUDFWUVNqd1pxODIrZWRSZmlKMHhJWEQxSldsQVJoaDd1WFRvWnhZVVhRWGhKWXRqSmc5cUN0SVNPdjMvQwpTNlYrck1OYUNxOHllZ0xmYjNSZFh6NUtBaUhzLyt4QUFLbE9taG4yQWIzWFVWRkNCUHBWSVdacENyY25BYWczCmV2OGREbTI4ZzlvUmpKekMwamVPckx6MWdiVW42TS9COFZzWUxUR0ZTamlvcFBrWVpzbWNGWTBESGU3Rm9wV2UKaERRdWVWa21GdFlkclJVWmFUL3IxUis2NWRDbVMxWXRRdTgzbWhDRFpRN29OVytYQWdNQkFBR2pVekJSTUIwRwpBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1DSUdBMVVkCkVRRUIvd1FZTUJhQ0ZIZHZMWE55WlM0OFkyeDFjM1JsY2lCVlVrdytNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUIKQVFCODZ0N1NndVp5U3A3QzB2anFxQUVDRUhPUzM0eHloZWNPWWJteHl1K2FhUVR1MlJ5enhoOXltU1VsSTlvYQpxVXFqTVlYU2VRWTI0NGJ0MmpncWg5eUxXZTdWdE11OUlNWDNEQVhsVjVIb2dtdDRCS050SlRSd0I4aFRCWkhsCjI2ZStVaUhlNzJCVzI4eENMNXpZTmtMRzRmRTVyK3BNV1VyQVF6SXNWbWtmaUdTYitPWnB3SjdFb096NXduQm0KUS84NWVobHVmeFl3cHl3blpaY00zRktjRHd4aURtMVZEbytqVTcwS3NaNGYxenhXcFhxblVFVUJRMFk4Y2ErNwpvTW5lb1ppNC9WZUJDODJxRG1UZnZpZ2kwTkUrVlRDZ2xWZVUyamdLRkRvZENoYUpiSFhJcGc4VUtWcG12R3NPCkNmVVhmZlZOSS9QRXJDZ1kzZTR2SC82NQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==</p> <p>Am trying to create a route with the below Yaml and failing to get the certificate back inside the yaml :</p> <pre><code>cat &lt;&lt;EOF | oc apply -f - apiVersion: route.openshift.io/v1 kind: Route metadata: name: test namespace: default annotations: openshift.io/host.generated: 'true' spec: path: /ts/test to: kind: Service name: test1 weight: 100 port: targetPort: https tls: termination: reencrypt destinationCACertificate: | ${certificateData} wildcardPolicy: None EOF </code></pre>
Manju
<p>You need to create a secret from the certificate file and load it in the deployment as below:</p> <pre><code>env:
  - name: testCA
    valueFrom:
      secretKeyRef:
        name: testSecretCert
        key: testCA.pem
</code></pre> <p>Inside the container it will then be available as an environment variable.</p>
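<p>The secret itself could be created from the PEM file like this (the secret name, key and file path match the placeholders above and are assumptions, not values from your setup):</p> <pre><code>oc create secret generic testSecretCert --from-file=testCA.pem=./ca.pem
</code></pre>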
Nataraj Medayhal
<p>I have a GKE app that uses kubernetes serviceaccounts linked to google service accounts for api authorizations in-app.</p> <p>Up until now, to test these locally, I had two versions of my images- one with and one without a test-keyfile.json copied into them for authorization. (The production images used the serviceaccount for authorization, the test environment would ignore the serviceaccounts and instead look for a keyfile which gets copied in during the image build.)</p> <p>I was wondering if there was a way to merge the images into one, and have both prod/test use the Kubernetes serviceaccount for authorization. On production, use GKE's workload identity, and in testing, use a keyfile(s) linked with or injected into a Kubernetes serviceaccount.</p> <p>Is such a thing possible? Is there a better method for emulating GKE workload identity on a local test environment?</p>
Ral
<p>I do not know a way of emulating workload identity on a non-Google Kubernetes cluster, but you could change your app to read the auth credentials from a volume/file or the metadata server, depending on an environment setting. See <a href="https://cloud.google.com/community/tutorials/gke-workload-id-clientserver" rel="nofollow noreferrer">this article</a> (and particularly the <a href="https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/gke-workload-id-clientserver/client/main.go#L31" rel="nofollow noreferrer">code linked there</a>) for an example of how to authenticate using local credentials or a Google SA depending on environment variables. The article also shows how to use Pod overlays to keep the prod vs dev changes separate from the bulk of the configuration.</p>
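<p>A common way to wire that up in the test cluster only is to mount a key file from a Secret and point the standard <code>GOOGLE_APPLICATION_CREDENTIALS</code> variable at it, so the same image works unchanged in prod with Workload Identity (the secret name, image tag and mount path here are placeholders):</p> <pre><code>    spec:
      containers:
        - name: myapp
          image: myapp:test
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          volumeMounts:
            - name: gcp-key
              mountPath: /var/secrets/google
              readOnly: true
      volumes:
        - name: gcp-key
          secret:
            secretName: test-gcp-key
</code></pre> <p>Google client libraries pick up <code>GOOGLE_APPLICATION_CREDENTIALS</code> automatically via Application Default Credentials, and fall back to the metadata server (and therefore Workload Identity) when the variable is not set.</p>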
Goli Nikitha
<p>I have an application running in EKS with istio service mesh, exposed using istio ingress and alb. Is there a tool to measure the amount of time (response time) taken by API request on each service? For instance, what time it reached ingress, then gateway and virtual service down to the pod?</p>
ILoveCode
<p>Istio access logs can be enabled to see these details; among other fields they include the total request duration (<code>%DURATION%</code>) and the time spent in the upstream service (<code>%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%</code>). Below is the <a href="https://istio.io/latest/docs/tasks/observability/logs/access-log/" rel="nofollow noreferrer">default logging</a> format.</p> <pre><code>[%START_TIME%] \&quot;%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\&quot; %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS% \&quot;%UPSTREAM_TRANSPORT_FAILURE_REASON%\&quot; %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \&quot;%REQ(X-FORWARDED-FOR)%\&quot; \&quot;%REQ(USER-AGENT)%\&quot; \&quot;%REQ(X-REQUEST-ID)%\&quot; \&quot;%REQ(:AUTHORITY)%\&quot; \&quot;%UPSTREAM_HOST%\&quot; %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n
</code></pre> <p>For Istio <a href="https://istio.io/latest/docs/concepts/observability/" rel="nofollow noreferrer">observability</a> and <a href="https://istio.io/latest/docs/tasks/observability/kiali/" rel="nofollow noreferrer">visualizing the service mesh</a>, please refer to the Istio documentation.</p>
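<p>Enabling the access logs is typically done through <code>meshConfig</code>, as in the linked task (a minimal sketch using the IstioOperator API):</p> <pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout
</code></pre> <p>After that, <code>kubectl logs</code> on the istio-proxy sidecars and on the ingress gateway pod shows one line per request, so you can follow the same request ID from the gateway to the pod and compare the durations at each hop.</p>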
Nataraj Medayhal
<p>I have to changed the <code>terminationGracePeriodSeconds</code> from 30 to 120 in Kubernetes deployment manifest file but when I am deploying using helm:</p> <pre><code>helm upgrade --install &lt;chartname&gt; --values &lt;valuesfilename&gt; </code></pre> <p>The old pods getting terminated immediately and new pods started running.</p> <p>But the expected behavior is to keep the old pods in terminating state and continue its current processes for 120 seconds as defined.</p> <p>What else could be missing here?</p> <p>Does this solve my issue here?</p> <pre><code>containers: - name: containername lifecycle: preStop: exec: command: [ &quot;/bin/sleep&quot;, &quot;20&quot; ] </code></pre> <p>One question I had is does adding sleep command stops execution of current processes of the pod and just sleeps while it is in terminating state?</p>
Bala krishna
<p>Basically, this is expected behavior for <code>terminationGracePeriodSeconds</code>, as it is the <em>maximum</em>, optional duration in seconds the pod is given to terminate gracefully - not a minimum.</p> <p>On pod termination - the lifecycle is described <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">here</a> - the pod enters the <code>Terminating</code> state, is removed from the endpoints list of all services and stops getting new traffic. The <code>SIGTERM</code> signal is then sent to the processes running in the pod, and Kubernetes waits for the pod to stop normally on its own during the grace period specified in the <code>terminationGracePeriodSeconds</code> option. Only if <code>terminationGracePeriodSeconds</code> expires first is the Pod forcefully killed with the <code>SIGKILL</code> signal. In your case, the processes in the pod simply shut down normally before the 120 seconds of grace period had passed.</p> <p>In its turn, the <code>preStop</code> hook is called immediately before a pod is terminated, which means the hook is executed before the kubelet sends <code>SIGTERM</code> to the container. As stated in the <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">documentation</a>, the hook <strong>must</strong> complete before the <code>TERM</code> signal to stop the container is sent. <code>terminationGracePeriodSeconds</code> runs in parallel with the preStop hook and the <code>SIGTERM</code> signal, and its countdown begins before the <code>preStop</code> hook is executed. The pod will eventually terminate within the pod's termination grace period, regardless of whether the hook has completed.</p>
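<p>Putting both together, the relevant part of the pod spec could look like this (the sleep length is just an example):</p> <pre><code>spec:
  terminationGracePeriodSeconds: 120
  containers:
    - name: containername
      lifecycle:
        preStop:
          exec:
            command: [&quot;/bin/sleep&quot;, &quot;20&quot;]
</code></pre> <p>And to answer your last question: the sleep in <code>preStop</code> does not stop the container's current processes - they keep running and can finish in-flight work during those 20 seconds - it only postpones the moment the <code>TERM</code> signal is delivered to them.</p>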
anarxz
<p>I have a baremetal Kubernetes cluster running on Nutanix. However during the process of setting up the ingress controller i have had issues with it picking up an external IP address. Im reasonably new to Kubernetes and trying to debug this but cant figure out whats wrong.</p> <p>Following <a href="https://platform9.com/learn/v1.0/tutorials/nginix-controller-via-yaml" rel="nofollow noreferrer">this</a> article i ran the following commands:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml </code></pre> <p>The above command to setup the ingress controller, rbac, service accounts etc</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml </code></pre> <p>And this command i believe should setup the service with the non internally networked IP, or the IP to which the Ingress rules would be applied against, and therefore my applications managed.</p> <p>However what ends up being created is as follows. <a href="https://i.stack.imgur.com/SsZcE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SsZcE.png" alt="no external ip" /></a></p> <p>I've also tried using port-forwards on the Kubernetes cluster and can use the associated containers, so i know they are definitely functioning. I just can't seem to get an External-IP associated with the ingress controller, and cant find any resources online saying there is a setting i may not have enabled. Could anyone please help?</p>
Bailey Dunn
<p>The EXTERNAL-IP only gets populated once something can actually route traffic from outside into the k8s network; on bare metal nothing does this out of the box, so the ingress controller's <code>LoadBalancer</code> Service stays without an address. This can be solved by attaching an externally reachable load balancer implementation to the ingress controller Service. A few options are <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>, <a href="https://purelb.gitlab.io/docs/" rel="nofollow noreferrer">PureLB</a> and <a href="https://github.com/openelb/openelb" rel="nofollow noreferrer">OpenELB</a>; based on your requirements you can choose one.</p>
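<p>For example, with MetalLB in L2 mode the configuration is just an address pool plus an advertisement (CRD-based configuration used by recent MetalLB versions; the address range below is a placeholder - use a free range that is routable in your Nutanix network):</p> <pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.240-192.168.10.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
</code></pre> <p>Once MetalLB assigns an address from the pool to the <code>ingress-nginx-controller</code> Service, the EXTERNAL-IP column fills in and your Ingress rules become reachable on it.</p>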
Nataraj Medayhal
<p>I deployed my angular application(Static webpage) on kubernetes and tried launching it from Google Chrome. I see app is loading, however there is nothing displayed on the browser. Upon checking on browser console , I could see this error</p> <p>&quot;Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of &quot;text/html&quot;. Strict MIME type checking is enforced for module scripts per HTML spec.&quot; for (main.js,poylfill.js,runtime.js) files . I research few forums and one possible rootcause could because of <code>type</code> attribute in <code>&lt;script&gt;</code> tag should be type=text/javascript instead of <code>type=module</code> in my index.html file that is produced under dist folder after executing ng build. I don't how to make that change as to these tags as generated during the build process, and my ng-build command is taken care by a docker command.</p> <p>URL i'm trying to access will be something like : &quot;http://xxxx:portnum/issuertcoetools</p> <p>note: The host <code>xxxx:portnum</code> will be used by many other apps as well.</p> <p>Are there any work-arounds or solutions to this issue?</p> <p>index.html - produced after running ng-build in local, (which is the same i see in kubernetes POD too)</p> <pre><code>&lt;!DOCTYPE html&gt;&lt;html lang=&quot;en&quot;&gt;&lt;head&gt; &lt;meta charset=&quot;utf-8&quot;&gt; &lt;title&gt;Data Generator&lt;/title&gt; &lt;base href=&quot;/&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot;&gt; &lt;link rel=&quot;icon&quot; type=&quot;image/x-icon&quot; href=&quot;favicon.ico&quot;&gt; &lt;link rel=&quot;preconnect&quot; href=&quot;https://fonts.gstatic.com&quot;&gt; &lt;style type=&quot;text/css&quot;&gt;@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCRc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fABc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCBc4AMP6lbBP.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fBxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fChc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fBBc4AMP6lQ.woff2) 
format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu72xKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu5mxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7mxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu4WxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7WxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7GxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu4mxKKTU1Kg.woff2) format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCRc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fABc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCBc4AMP6lbBP.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fBxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fChc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, 
U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fBBc4AMP6lQ.woff2) format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}&lt;/style&gt; &lt;style type=&quot;text/css&quot;&gt;@font-face{font-family:'Material Icons';font-style:normal;font-weight:400;src:url(https://fonts.gstatic.com/s/materialicons/v128/flUhRq6tzZclQEJ-Vdg-IuiaDsNcIhQ8tQ.woff2) format('woff2');}.material-icons{font-family:'Material Icons';font-weight:normal;font-style:normal;font-size:24px;line-height:1;letter-spacing:normal;text-transform:none;display:inline-block;white-space:nowrap;word-wrap:normal;direction:ltr;-webkit-font-feature-settings:'liga';-webkit-font-smoothing:antialiased;}&lt;/style&gt; &lt;style&gt;.mat-typography{font:400 14px/20px Roboto,Helvetica Neue,sans-serif;letter-spacing:normal}html,body{height:100%}body{margin:0;font-family:Roboto,Helvetica Neue,sans-serif}&lt;/style&gt;&lt;link rel=&quot;stylesheet&quot; href=&quot;styles.9c0548760c8b22be.css&quot; media=&quot;print&quot; onload=&quot;this.media='all'&quot;&gt;&lt;noscript&gt;&lt;link rel=&quot;stylesheet&quot; href=&quot;styles.9c0548760c8b22be.css&quot;&gt;&lt;/noscript&gt;&lt;/head&gt; &lt;body class=&quot;mat-typography&quot;&gt; &lt;app-root&gt;&lt;/app-root&gt; &lt;script src=&quot;runtime.397c3874548e84cd.js&quot; type=&quot;module&quot;&gt; &lt;/script&gt;&lt;script src=&quot;polyfills.2145acc81d0726ab.js&quot; type=&quot;module&quot;&gt; &lt;/script&gt;&lt;script src=&quot;main.a655dca28148b7e2.js&quot; type=&quot;module&quot;&gt;&lt;/script&gt; &lt;/body&gt;&lt;/html&gt; </code></pre> <p>nginx.conf file</p> <pre><code>worker_processes 4; events { worker_connections 1024; } http { server { listen 8080; include /etc/nginx/mime.types; location /issuertcoetools { root /usr/share/nginx/html; index index.html index.htm; try_files $uri $uri/ /index.html =404; } } } </code></pre>
suku
<p>This error usually happens because your deployment is served from a subfolder while Angular still requests its scripts relative to the base URL: your <code>index.html</code> is found at <code>domain.com/mysubfolder/index.html</code>, but Angular fetches <code>runtime.js</code>, <code>polyfills.js</code> and <code>main.js</code> from <code>domain.com/</code> instead of <code>domain.com/mysubfolder/</code>, and the server answers those requests with an HTML page (the <code>index.html</code> fallback or a 404 page) - hence the <code>text/html</code> MIME type error. I'm pretty sure this is the cause of your issue. You can resolve it by building your app with the base href pointing at the subfolder (in your case <code>/issuertcoetools/</code>):</p> <pre><code>ng build --prod --base-href /issuertcoetools/
</code></pre>
Leo
<p>I have a Kubernetes cluster in Digital Ocean, I want to pull the images from a private repository in GCP.</p> <p>I tried to create a secret that make me able to to pull the images following this article <a href="https://blog.container-solutions.com/using-google-container-registry-with-kubernetes" rel="nofollow noreferrer">https://blog.container-solutions.com/using-google-container-registry-with-kubernetes</a></p> <p>Basically, these are the steps</p> <ol> <li>In the GCP account, create a service account key, with a JSON credential</li> <li>Execute <pre><code>kubectl create secret docker-registry gcr-json-key \ --docker-server=gcr.io \ --docker-username=_json_key \ --docker-password=&quot;$(cat ~/json-key-file.json)&quot; \ [email protected] </code></pre> </li> <li>In the deployment yaml reference the secret <pre><code>imagePullSecrets: - name: gcr-json-key </code></pre> </li> </ol> <p>I don't understand why I am getting 403. If there are some restriccions to use the registry outside google cloud, or if I missed some configuration something.</p> <blockquote> <p>Failed to pull image &quot;gcr.io/myapp/backendnodeapi:latest&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;gcr.io/myapp/backendnodeapi:latest&quot;: failed to resolve reference &quot;gcr.io/myapp/backendnodeapi:latest&quot;: unexpected status code [manifests latest]: 403 Forbidden</p> </blockquote>
agusgambina
<p>Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has <a href="https://cloud.google.com/container-registry/docs/access-control" rel="nofollow noreferrer">permissions</a> to access Container Registry.</p> <p>Docker requires privileged access to interact with registries. On Linux or Windows, add the user that you use to run Docker commands to the Docker security group. This <a href="https://cloud.google.com/container-registry/docs/advanced-authentication#prereqs" rel="nofollow noreferrer">documentation</a> has details on the prerequisites for Container Registry.</p> <p><strong>Note</strong>: Ensure that the version of kubectl is the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-binary-with-curl-on-linux" rel="nofollow noreferrer">latest version</a>.</p> <p>I tried replicating this by following the document you provided and it worked on my end, so ensure that all the prerequisites are met.</p>
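<p>The 403 usually means the service account behind the JSON key lacks read access to the storage bucket that backs the registry. A sketch of granting it (the project ID and service account e-mail are placeholders):</p> <pre><code>gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:my-sa@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer
</code></pre>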
Goli Nikitha
<p>Now I found my kubernetes v1.21 cluster no response with any command, for example, when I scale the deployment pod number, nothing change with the deployment. also there is no error info output. is it possible to debug the command, how to found out where is the issue? I have checked the kubelet:</p> <pre><code>[root@k8smasterone ~]# systemctl status kubelet ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2023-08-04 16:31:37 CST; 1h 28min ago Docs: https://kubernetes.io/docs/ Main PID: 6067 (kubelet) Tasks: 15 Memory: 73.4M CGroup: /system.slice/kubelet.service └─6067 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/co... </code></pre> <p>the controller manager:</p> <pre><code>[root@k8smasterone ~]# ps aux|grep &quot;controller&quot; polkitd 3055 0.1 4.3 1022732 342976 ? Ssl 2022 601:27 /usr/bin/kube-controllers root 10433 0.0 0.5 819636 44476 ? Ssl 02:52 0:53 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.96.0.0/12 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true </code></pre> <p>the scheduler manager:</p> <pre><code>[root@k8smasterone ~]# ps aux|grep &quot;scheduler&quot; root 10667 0.2 0.3 752032 30780 ? Ssl 02:52 2:11 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true </code></pre> <p>all of them works fine. what should I do to fixed this issue? the recently change of my cluster is renewed the certificate.</p>
Dolphin
<p>As you mentioned, the k8s cluster certificates were renewed. Please ensure you copy the latest admin kubeconfig into your home directory by running the commands below as a normal user. More details on manual certificate renewal can be found in the following <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal" rel="nofollow noreferrer">k8s documentation</a>.</p> <pre><code>sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config </code></pre> <p>Also, tools like kubectl and kubeadm can be debugged by passing the &quot;--v=9&quot; flag. More details on kubectl verbosity are available in the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging" rel="nofollow noreferrer">k8s documentation</a>.</p>
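<p>For example (a sketch; the deployment name is a placeholder), you can check the renewed certificates and watch the full API traffic of a failing command:</p> <pre><code># check certificate expiry dates on the control-plane node
kubeadm certs check-expiration

# re-run the failing command with maximum verbosity to see each API request/response
kubectl scale deployment my-deployment --replicas=3 --v=9
</code></pre>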
Nataraj Medayhal
<p>I can't get my Ingress to use my TLS cert. I have created a self signed TLS cert using openssl for hostname myapp.com and added myapp.com to /etc/hosts.</p> <p><code>openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365</code></p> <p>I have verified the Ingress is using the TLS cert</p> <pre><code>$ kubectl describe ingress myapp-ingress Name: myapp-ingress Labels: app=myapp name=myapp-ingress Namespace: default Address: $PUBLIC_IP Ingress Class: nginx-ingress-class Default backend: &lt;default&gt; TLS: nginx-ingress-tls terminates myapp.com Rules: Host Path Backends ---- ---- -------- myapp.com / myapp-service:8080 (10.244.0.14:80) Annotations: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 19m (x11 over 21h) nginx-ingress-controller Scheduled for sync </code></pre> <p>however, when I curl myapp.com, I get an error message informing me no subject name matches target host 'myapp.com'.</p> <pre><code>$ curl -I https://myapp.com curl: (60) SSL: no alternative certificate subject name matches target host name 'myapp.com' More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. </code></pre> <p>I made sure to give openssl myapp.com as the FQDN. I'm not sure why it isn't working. Any help is appreciated.</p> <p>Edit:</p> <p>I'm looking at the logs of the ingress controller. I see the following error messages</p> <pre><code>$ kubectl logs -n nginx-ingress ingress-nginx-controller-7c45d9ff9f-2hcd7 | grep cert I0618 20:43:32.096653 7 main.go:104] &quot;SSL fake certificate created&quot; file=&quot;/etc/ingress-controller/ssl/default-fake-certificate.pem&quot; I0618 20:43:32.116162 7 ssl.go:531] &quot;loading tls certificate&quot; path=&quot;/usr/local/certificates/cert&quot; key=&quot;/usr/local/certificates/key&quot; W0618 20:43:33.246716 7 backend_ssl.go:45] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: failed to parse private key I0618 20:43:33.340807 7 nginx.go:319] &quot;Starting validation webhook&quot; address=&quot;:8443&quot; certPath=&quot;/usr/local/certificates/cert&quot; keyPath=&quot;/usr/local/certificates/key&quot; W0618 20:43:33.342061 7 controller.go:1334] Error getting SSL certificate &quot;default/nginx-ingress-tls&quot;: local SSL certificate default/nginx-ingress-tls was not found. Using default certificate W0618 20:43:37.149824 7 controller.go:1334] Error getting SSL certificate &quot;default/nginx-ingress-tls&quot;: local SSL certificate default/nginx-ingress-tls was not found. Using default certificate W0618 20:43:41.152972 7 controller.go:1334] Error getting SSL certificate &quot;default/nginx-ingress-tls&quot;: local SSL certificate default/nginx-ingress-tls was not found. Using default certificate </code></pre>
Timothy Pulliam
<p>When you use a cert that is not signed by a certificate trusted in the installed CA certificate store (or whose names do not match the requested host), you will get the error message:</p> <pre><code>failed to verify the legitimacy of the server and therefore could not establish a secure connection to it </code></pre> <p>As a workaround, you can disable strict certificate checking with the following command:</p> <pre><code>curl -k https://myapp.com </code></pre> <p>You can find more details about it in this <a href="https://curl.haxx.se/docs/sslcerts.html" rel="nofollow noreferrer">link</a>.</p>
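<p>If you want to see which names the served certificate actually contains (for example, to check whether the ingress is serving your cert or its default fake certificate), you can inspect it with openssl. A sketch, assuming OpenSSL 1.1.1+ for the <code>-ext</code> option:</p> <pre><code>openssl s_client -connect myapp.com:443 -servername myapp.com &lt;/dev/null 2&gt;/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
</code></pre>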
Leo
<p>I'm trying to set oidc credentials and got stuck, because the client-secret contains a comma:</p> <pre><code>kubectl config set-credentials user@cluster \ --auth-provider=oidc \ --auth-provider-arg='idp-issuer-url=https://host' \ --auth-provider-arg='client-id=xxx' \ --auth-provider-arg='client-secret=AAAA,BBBB' </code></pre> <p>This results in the following error:</p> <pre><code>error: Error: invalid auth-provider-arg format: BBBB </code></pre> <p>Is there a way to escape the char?</p>
user2534584
<p>Putting the special characters in single quotes is the usual workaround for <a href="https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pods-with-prod-test-credentials" rel="nofollow noreferrer">escaping special characters</a>, but in this case, because a comma is present, the value is treated as a comma-separated list, as with <a href="https://github.com/int128/kubelogin/blob/master/docs/standalone-mode.md#extra-scopes" rel="nofollow noreferrer">extra-scopes</a> (scopes to request from the provider, comma separated).</p> <p>Currently kubectl does not accept comma-separated values here, so you need to set a placeholder and then edit the kubeconfig, for example:</p> <pre><code>$ kubectl config set-credentials keycloak --auth-provider-arg extra-scopes=SCOPES
$ sed -i '' -e s/SCOPES/email,profile/ $KUBECONFIG
</code></pre>
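<p>Alternatively, since the value ends up in the kubeconfig anyway, you can bypass the CLI parsing and paste the secret into the user entry by hand. A sketch of what the edited entry could look like (this uses the legacy oidc auth-provider layout; names and values are placeholders):</p> <pre><code>users:
- name: user@cluster
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://host
        client-id: xxx
        client-secret: AAAA,BBBB
</code></pre>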
Goli Nikitha
<p><a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="nofollow noreferrer">Since Kubernetes 1.20</a>, Docker support is deprecated and will be totally removed from 1.24. We use GKE to manage Kubernetes so the upgrade will be done automatically.</p> <p>As far as I've read, developers should not have been impacted but we made tests in Kubernetes 1.23 to check that all is OK and it seems we have some issues with a microservice using Testcontainers :</p> <pre><code>09:59:44.578 [testcontainers-ryuk] WARN org.testcontainers.utility.ResourceReaper - Can not connect to Ryuk at localhost:49153 java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403) at java.base/java.net.Socket.connect(Socket.java:591) at org.testcontainers.utility.ResourceReaper.lambda$null$3(ResourceReaper.java:194) at org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27) at org.testcontainers.utility.ResourceReaper.lambda$start$4(ResourceReaper.java:190) at java.base/java.lang.Thread.run(Thread.java:835) </code></pre> <p>This is not reproductible on a Kubernetes 1.19 where Docker is not deprecated nor removed.</p> <p>We tried to disable Ryuk in <code>pom.xml</code> (as indicated for this error in a <a href="https://github.com/testcontainers/testcontainers-java/issues/3609" rel="nofollow noreferrer">Testcontainers issue</a>) but it has no effect :</p> <pre><code>&lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-failsafe-plugin&lt;/artifactId&gt; &lt;executions&gt; &lt;execution&gt; &lt;goals&gt; &lt;goal&gt;verify&lt;/goal&gt; &lt;goal&gt;integration-test&lt;/goal&gt; &lt;/goals&gt; &lt;configuration&gt; &lt;environmentVariables&gt; &lt;TESTCONTAINERS_RYUK_DISABLED&gt;true&lt;/TESTCONTAINERS_RYUK_DISABLED&gt; &lt;/environmentVariables&gt; &lt;/configuration&gt; &lt;/execution&gt; &lt;/executions&gt; &lt;/plugin&gt; </code></pre> <p>To reproduce locally, we tried to launch IT with testcontainers in a Minikube with Kubernetes 1.23 and Containerd as container runtime (no docker env):</p> <pre><code>minikube start --kubernetes-version v1.23.0 --network-plugin=cni --enable-default-cni --container-runtime=containerd --bootstrapper=kubeadm </code></pre> <p>But it leads to this error when launching <code>mvn -T 2 failsafe:integration-test failsafe:verify</code> :</p> <pre><code>[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.87 s &lt;&lt;&lt; FAILURE! - in com.ggl.merch.kafka.it.MerchandisingConsumerIT [ERROR] should_consume_merchandising_message_and_process_record Time elapsed: 0.012 s &lt;&lt;&lt; ERROR! java.lang.IllegalStateException: Could not find a valid Docker environment. Please see logs and check configuration at com.ggl.merch.kafka.it.MerchandisingConsumerIT.&lt;init&gt;(MerchandisingConsumerIT.java:91) </code></pre> <p>Anyone already had the same problem?</p> <p>Thank you by advance!</p>
François Delbrayelle
<p><strong>Solved</strong> by putting this in our configuration:</p> <pre><code>- name: &quot;TESTCONTAINERS_HOST_OVERRIDE&quot; valueFrom: fieldRef: fieldPath: status.hostIP </code></pre> <p>Following <a href="https://www.testcontainers.org/features/configuration/" rel="nofollow noreferrer">this doc</a>:</p> <blockquote> <p><strong><code>TESTCONTAINERS_HOST_OVERRIDE</code></strong><br />     Docker's host on which ports are exposed.<br />     Example: <code>docker.svc.local</code></p> </blockquote>
François Delbrayelle
<p>I have a google kubernetes cluster running and I am trying to manually scale some pods with the python-client kubernetes SDK. I use the following command on my terminal to get my google account credentials:</p> <pre><code>gcloud auth login </code></pre> <p>Next, I connect to my cluster using the default command to get my kube-config locally:</p> <pre><code>gcloud container clusters get-credentials ${clusterName} --zone ${zoneName}--${projectName} </code></pre> <p>Using the python SDK I load my configuration:</p> <pre><code>from kubernetes import client, config import kubernetes.client config.load_kube_config() v1 = client.CoreV1Api() api = client.CustomObjectsApi() k8s_apps_v1 = client.AppsV1Api() </code></pre> <p>With this code I have my cluster info and I can scale my pods as needed. This works for around 30-45 mins, and after that, when I try to make API requests to scale the pods in my cluster, I get a response with the following error:</p> <pre><code>kubernetes.client.exceptions.ApiException: (401) Reason: Unauthorized HTTP response headers: HTTPHeaderDict({'Audit-Id': '697f82b7-4db9-46c3-b873-cef49a45bb19', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Tue, 31 May 2022 01:20:53 GMT', 'Content-Length': '129'}) HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;Unauthorized&quot;,&quot;reason&quot;:&quot;Unauthorized&quot;,&quot;code&quot;:401} </code></pre> <p>Why do I get unauthorized and can't make API calls anymore, and how can I fix this?</p>
thepaulbot
<p>To resolve this, you should refresh the token before calling the API. This <a href="https://github.com/kubernetes-client/python-base/blob/474e9fb32293fa05098e920967bb0e0645182d5b/config/kube_config.py#L625" rel="nofollow noreferrer">doc</a> is useful for checking whether the token has expired; the function <code>load_gcp_token</code> refreshes the GCP token only when it has expired.</p>
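<p>A minimal client-side sketch, assuming you keep using <code>load_kube_config()</code>: re-load the kubeconfig before building the API client, so a long-running process picks up a refreshed token instead of reusing the expired one (the deployment name below is a placeholder):</p> <pre><code>from kubernetes import client, config

def get_apps_v1():
    # Re-loading the kubeconfig lets the GCP auth provider refresh the token if it expired.
    config.load_kube_config()
    return client.AppsV1Api()

# build the client right before each request instead of caching it for hours
apps_v1 = get_apps_v1()
apps_v1.patch_namespaced_deployment_scale(
    name=&quot;my-deployment&quot;,
    namespace=&quot;default&quot;,
    body={&quot;spec&quot;: {&quot;replicas&quot;: 3}},
)
</code></pre>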
Leo
<p>I need some help regarding this OOM status of pods 137. I am kinda stuck here for 3 days now. I built a docker image of a flask application. I run the docker image, it was running fine with a memory usage of 2.7 GB. I uploaded it to GKE with the following specification. <a href="https://i.stack.imgur.com/o5dGZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o5dGZ.png" alt="Cluster" /></a></p> <p>Workloads show &quot;Does not have minimum availability&quot;</p> <p><a href="https://i.stack.imgur.com/860ox.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/860ox.png" alt="Workload" /></a></p> <p>Checking that pod &quot;news-category-68797b888c-fmpnc&quot; shows CrashLoopBackError</p> <p>Error &quot;back-off 5m0s restarting failed container=news-category-pbz8s pod=news-category-68797b888c-fmpnc_default(d5b0fff8-3e35-4f14-8926-343e99195543): CrashLoopBackOff&quot;</p> <p>Checking the YAML file shows OOM killed 137.</p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: &quot;2022-02-17T16:07:36Z&quot; generateName: news-category-68797b888c- labels: app: news-category pod-template-hash: 68797b888c managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:app: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{&quot;uid&quot;:&quot;8d99448a-04f6-4651-a652-b1cc6d0ae4fc&quot;}: .: {} f:apiVersion: {} f:blockOwnerDeletion: {} f:controller: {} f:kind: {} f:name: {} f:uid: {} f:spec: f:containers: k:{&quot;name&quot;:&quot;news-category-pbz8s&quot;}: .: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:terminationGracePeriodSeconds: {} manager: kube-controller-manager operation: Update time: &quot;2022-02-17T16:07:36Z&quot; - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{&quot;type&quot;:&quot;ContainersReady&quot;}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{&quot;type&quot;:&quot;Initialized&quot;}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{&quot;type&quot;:&quot;Ready&quot;}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{&quot;ip&quot;:&quot;10.16.3.4&quot;}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update time: &quot;2022-02-17T16:55:18Z&quot; name: news-category-68797b888c-fmpnc namespace: default ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: news-category-68797b888c uid: 8d99448a-04f6-4651-a652-b1cc6d0ae4fc resourceVersion: &quot;25100&quot; uid: d5b0fff8-3e35-4f14-8926-343e99195543 spec: containers: - image: gcr.io/projectiris-327708/news_category:noConsoleDebug imagePullPolicy: IfNotPresent name: news-category-pbz8s resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-z2lbp readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: gke-news-category-cluste-default-pool-42e1e905-ftzb preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default 
serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: kube-api-access-z2lbp projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: &quot;2022-02-17T16:07:37Z&quot; status: &quot;True&quot; type: Initialized - lastProbeTime: null lastTransitionTime: &quot;2022-02-17T16:55:18Z&quot; message: 'containers with unready status: [news-category-pbz8s]' reason: ContainersNotReady status: &quot;False&quot; type: Ready - lastProbeTime: null lastTransitionTime: &quot;2022-02-17T16:55:18Z&quot; message: 'containers with unready status: [news-category-pbz8s]' reason: ContainersNotReady status: &quot;False&quot; type: ContainersReady - lastProbeTime: null lastTransitionTime: &quot;2022-02-17T16:07:36Z&quot; status: &quot;True&quot; type: PodScheduled containerStatuses: - containerID: containerd://a582af0248a330b7d4087916752bd941949387ed708f00b3aac6f91a6ef75e63 image: gcr.io/projectiris-327708/news_category:noConsoleDebug imageID: gcr.io/projectiris-327708/news_category@sha256:c4b3385bd80eff2a0c0ec0df18c6a28948881e2a90dd1c642ec6960b63dd017a lastState: terminated: containerID: containerd://a582af0248a330b7d4087916752bd941949387ed708f00b3aac6f91a6ef75e63 exitCode: 137 finishedAt: &quot;2022-02-17T16:55:17Z&quot; reason: OOMKilled startedAt: &quot;2022-02-17T16:54:48Z&quot; name: news-category-pbz8s ready: false restartCount: 13 started: false state: waiting: message: back-off 5m0s restarting failed container=news-category-pbz8s pod=news-category-68797b888c-fmpnc_default(d5b0fff8-3e35-4f14-8926-343e99195543) reason: CrashLoopBackOff hostIP: 10.160.0.42 phase: Running podIP: 10.16.3.4 podIPs: - ip: 10.16.3.4 qosClass: BestEffort startTime: &quot;2022-02-17T16:07:37Z&quot; </code></pre> <p>My question is what to do and how to solve this. I tried to add resources to the YAML file under spec:</p> <pre><code>resources: limits: memory: 32Gi requests: memory: 16Gi </code></pre> <p>It also shows errors. How do I increase the memory of the pods? Also, if I increase the memory it shows &quot;Pod Unscheduled&quot;.</p> <p>Could someone please give me an insight into clusters, nodes, and pods and how to solve this. Thank you.</p>
Pritam Sinha
<p>The message <code>back-off restarting failed container</code> appears when you are facing a temporary resource overload, as a result of an activity spike. And the <code>OOMKilled code 137</code> means that a container or pod was terminated because it used more memory than allowed. OOM stands for <code>“Out Of Memory”</code>.</p> <p>So based on this information and your GKE configuration (a total of 16Gi of memory), I suggest reviewing the memory limits configured in your GKE cluster; you can confirm this issue with the following command: <code>kubectl describe pod [name]</code></p> <p>You will need to determine why Kubernetes decided to terminate the pod with the OOMKilled error and adjust the memory requests and limit values. Here is an example of how to increase and set limits on the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">memory</a>, since that looks like the main problem:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: memory-demo namespace: mem-example spec: containers: - name: memory-demo-ctr image: polinux/stress resources: limits: memory: &quot;128Gi&quot; requests: memory: &quot;64Gi&quot; command: [&quot;stress&quot;] args: [&quot;--vm&quot;, &quot;1&quot;, &quot;--vm-bytes&quot;, &quot;64Gi&quot;, &quot;--vm-hang&quot;, &quot;1&quot;] </code></pre> <p>To review all the GKE metrics, go to the GCP console, then open the Monitoring dashboard and select GKE. In this dashboard you will find the statistics related to memory and CPU.</p> <p>Also, it is important to review whether the containers have enough resources to run your applications. You can follow this <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">link</a> to check the Kubernetes best practices.</p> <p>GKE also has an interesting feature, which is autoscaling. It is very useful because it automatically resizes your GKE cluster based on the demands of your workloads. Please follow this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">guide</a> to learn how to do it.</p>
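<p>Since your workload is a Deployment rather than a bare Pod, one quick way to apply requests and limits is <code>kubectl set resources</code>. A sketch with placeholder values — size them to what your app really needs and to what your nodes can actually schedule:</p> <pre><code>kubectl set resources deployment news-category \
  --requests=memory=3Gi --limits=memory=4Gi
</code></pre>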
Leo
<p>I have a private docker registry hosted on gitlab and I would like to use this repository to pull images for my local kubernetes cluster:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 68m </code></pre> <p>K8s is on <code>v1.22.5</code> and is a single-node cluster that comes 'out of the box' with Docker Desktop. I have already built and deployed an image to the gitlab container registry <code>registry.gitlab.com</code>. What I have done already:</p> <ol> <li>Executed the command <code>docker login -u &lt;username&gt; -p &lt;password&gt; registry.gitlab.com</code></li> <li>Modified the <code>~/.docker/config.json</code> file to the following: <pre><code>{ &quot;auths&quot;: { &quot;registry.gitlab.com&quot;: {} }, &quot;credsStore&quot;: &quot;osxkeychain&quot; } </code></pre> </li> <li>Created and deployed a secret to the cluster with the file: <pre><code>apiVersion: v1 kind: Secret metadata: name: registry-key data: .dockerconfigjson: &lt;base-64-encoded-.config.json-file&gt; type: kubernetes.io/dockerconfigjson </code></pre> </li> <li>Deployed an app with the following file: <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: test-deployment labels: app: test-app spec: replicas: 1 selector: matchLabels: app: test-app template: metadata: labels: app: test-app spec: imagePullSecrets: - name: registry-key containers: - name: test-app image: registry.gitlab.com/&lt;image-name&gt;:latest imagePullPolicy: Always ports: - containerPort: 80 </code></pre> </li> </ol> <hr /> <p>The deployment is created successfully but upon inspection of the pod (<code>kubectl describe pod</code>) I find the following events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21s default-scheduler Successfully assigned default/test-deployment-87b5747b5-xdsl9 to docker-desktop Normal BackOff 19s kubelet Back-off pulling image &quot;registry.gitlab.com/&lt;image-name&gt;:latest&quot; Warning Failed 19s kubelet Error: ImagePullBackOff Normal Pulling 7s (x2 over 20s) kubelet Pulling image &quot;registry.gitlab.com/&lt;image-name&gt;:latest&quot; Warning Failed 7s (x2 over 19s) kubelet Failed to pull image &quot;registry.gitlab.com/&lt;image-name&gt;:latest&quot;: rpc error: code = Unknown desc = Error response from daemon: Head &quot;https://registry.gitlab.com/v2/&lt;image-name&gt;/manifests/latest&quot;: denied: access forbidden Warning Failed 7s (x2 over 19s) kubelet Error: ErrImagePull </code></pre> <hr /> <p>Please provide any information that might be causing these errors.</p>
johngould9487
<p>I managed to solve the issue by editing the default <code>config.json</code> produced by <code>$ docker login</code>:</p> <pre><code>{ &quot;auths&quot;: { &quot;registry.gitlab.com&quot;: {} }, &quot;credsStore&quot;: &quot;osxkeychain&quot; } </code></pre> <p>becomes</p> <pre><code>{ &quot;auths&quot;: { &quot;registry.gitlab.com&quot;: { &quot;auth&quot;:&quot;&lt;access-token-in-plain-text&gt;&quot; } } } </code></pre> <p>Thanks Bala for suggesting this in the comments. I realise storing the access token in plain text in the file may not be secure but this can be changed to use a path if needed.</p> <p>I also created the secret as per OzzieFZI's suggestion:</p> <pre><code>$ kubectl create secret docker-registry registry-key \ --docker-server=registry.gitlab.com \ --docker-username=&lt;username&gt; \ --docker-password=&quot;$(cat /path/to/token.txt)&quot; </code></pre>
johngould9487
<p>The following questions are about an on-prem K3S setup.</p> <p>1] How does HTTP/S traffic reach an ingress controller in say K3S?</p> <p>When I hit any of my nodes on HTTPS port 443 I get the traefik ingress controller. This must be &quot;magic&quot; though because:</p> <ul> <li>There is no process on the host listening on 443 (according to lsof)</li> <li>The actual <code>nodePort</code> on the <code>traefik</code> service (of type LoadBalancer) is 30492</li> </ul> <p>2] Where is the traefik config located inside the ingress controller pod? When I shell into my traefik pods I cannot find the config anywhere - <code>/etc/traefik</code> does not even exist. Is everything done via API (from Ingress resource definitions) and not persisted?</p> <p>3] Is ingress possible without any service of type LoadBalancer? I.e. can I use a nodePort service instead by using an external load balancer (like F5) to balance traffic between nodes and these nodeports?</p> <p>4] Finally, how do the traefik controller pods &quot;know&quot; when a node is down and stop sending/balancing traffic to pods which no longer exist?</p>
Marc
<ol> <li>Port forwarding is what maps traffic hitting port 443 to the traefik ingress controller; NodePorts themselves are only allocated in the 30000-32767 range by default.</li> </ol> <p>Refer to this <a href="https://doc.traefik.io/traefik/user-guides/crd-acme/#port-forwarding" rel="nofollow noreferrer">documentation</a> for more information on port forwarding.</p> <ol start="3"> <li>Yes. An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">Service.Type=NodePort</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Service.Type=LoadBalancer</a> (see the NodePort sketch below).</li> </ol> <p>Refer to this <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">documentation</a> for more information on ingress.</p> <ol start="4"> <li>Kubernetes has a health check mechanism to remove unhealthy pods from Kubernetes services (cf. readiness probe). As unhealthy pods have no Kubernetes endpoints, Traefik will not forward traffic to them. Therefore, a separate Traefik health check is not available for the kubernetesCRD and kubernetesIngress providers.</li> </ol> <p>Refer to this <a href="https://doc.traefik.io/traefik/routing/services/#health-check" rel="nofollow noreferrer">documentation</a> for more information on health checks.</p>
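<p>A minimal sketch of such a NodePort Service for point 3 — the names, ports and nodePort value are placeholders to adapt to your app:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
</code></pre>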
Goli Nikitha
<p>I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering dept that I should Dockerize it as one app instead of a separate front and back end...</p> <p>I have the app functioning as I need without any issues; my current directory structure is:</p> <pre class="lang-bash prettyprint-override"><code>. ├── README.md ├── backend │ ├── Dockerfile │ ├── Pipfile │ ├── Pipfile.lock │ └── main.py └── frontend ├── Dockerfile ├── index.html ├── package-lock.json ├── package.json ├── postcss.config.js ├── src │ ├── App.jsx │ ├── favicon.svg │ ├── index.css │ ├── logo.svg │ └── main.jsx ├── tailwind.config.js └── vite.config.js </code></pre> <p>I am a bit of a Docker noob and have only ever built an image for projects that aren't split into a front and back end.</p> <p>I have a <code>.env</code> file in each, containing only simple things like URLs or hosts.</p> <p>I currently run the front end and back end separately, for example:</p> <pre class="lang-bash prettyprint-override"><code>&gt; ./frontend &gt; npm run dev </code></pre> <pre class="lang-bash prettyprint-override"><code>&gt; ./backend &gt; uvicorn .... </code></pre> <p>Can anyone give me tips/advice on how I can dockerize this as one?</p>
mrpbennett
<p>As a good practice, one docker image should contain one process. Therefore you should dockerize them separately (one <code>Dockerfile</code> per app).</p> <p>Then, you can add a <code>docker-compose.yml</code> file at the root of your project in order to link them together; it could look like this:</p> <pre class="lang-yaml prettyprint-override"><code>version: '3.3' services: app: build: context: ./frontend/ dockerfile: ./Dockerfile ports: - &quot;127.0.0.1:80:80&quot; backend: env_file: - backend/.env build: context: ./backend/ dockerfile: ./Dockerfile ports: - &quot;127.0.0.1:8000:80&quot; </code></pre> <p>The backend would be running on <code>http://localhost:8000</code> and the frontend on <code>http://localhost:80</code>.</p> <p>In order to start the docker-compose stack you can just type in your shell:</p> <pre class="lang-bash prettyprint-override"><code>$&gt; docker-compose up </code></pre> <p>This implies that you already have your Dockerfile for both apps (a minimal backend sketch is shown below). You can find many examples online of different Dockerfile implementations for the different technologies. For example:</p> <ul> <li>For ReactJS you can configure it like <a href="https://mherman.org/blog/dockerizing-a-react-app/" rel="nofollow noreferrer">this</a></li> <li>For FastAPI, like <a href="https://fastapi.tiangolo.com/deployment/docker/#dockerfile" rel="nofollow noreferrer">that</a></li> </ul>
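<p>As a rough sketch of what the backend <code>Dockerfile</code> could look like for your Pipfile-based layout (this assumes the FastAPI instance is named <code>app</code> inside <code>main.py</code> — adjust if yours differs):</p> <pre><code>FROM python:3.10-slim
WORKDIR /app
RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv install --system --deploy
COPY main.py .
EXPOSE 80
CMD [&quot;uvicorn&quot;, &quot;main:app&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;80&quot;]
</code></pre>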
vinalti
<p>I am deploying Kubernetes in the cloud and I'm trying to call another container inside the same pod through an API.</p> <p>I am using <code>localhost</code>, but I also tried <code>127.0.0.1</code> and the container's name.</p> <pre><code>2022/11/04 15:50:47 dial tcp [::1]:4245: connect: connection refused 2022/11/04 15:50:47 Successfully processed file.json file 2022/11/04 15:50:47 Get &quot;http://localhost:4245/api/admin/projects/default&quot;: dial tcp [::1]:4245: connect: connection refused panic: Get &quot;http://localhost:4245/api/admin/projects/default&quot;: dial tcp [::1]:4245: connect: connection refused goroutine 1 [running]: log.Panic({0xc000119dc8?, 0xc000166000?, 0x6aaaea?}) /opt/app-root/src/sdk/go1.19.2/src/log/log.go:388 +0x65 main.StatusServer({0xc000020570?, 0x30?}, {0x0, 0x0}) /build/script.go:197 +0x1ee main.ProcessData({0xc000020041, 0x15}, {0x0, 0x0}, {0xc00002000f?, 0x43ce05?}) /build/script.go:291 +0xa6 main.main() /build/script.go:443 +0xc5 </code></pre> <p>Any idea if I can call the container like that?</p>
X T
<p>Getting a connection refused means you reached localhost and it decided to refuse the connection.</p> <p>This is most likely because nothing is listening on the port.</p> <p>If it were a firewall issue the request would time out.</p> <p>You can check listening ports with a command like:</p> <pre><code>netstat -an </code></pre> <p>If it is not installed, you can try it from the worker node where the pod is running.</p> <p>Another method of testing is to use</p> <pre><code>curl http://127.0.0.1:4245 </code></pre> <p>This will probably result in the same connection refused.</p> <p>Are you really sure the container is running in the same pod?</p> <p>Please check your deployment and service.</p> <p>If you can't find the failure, please come back with more information so it can be analysed.</p>
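<p>A couple of hedged examples of how to verify this from outside the container (the pod and container names are placeholders, and <code>netstat</code> may not exist in a minimal image):</p> <pre><code># list the containers that actually belong to the pod
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'

# check listening sockets inside the suspected container, if the binary is available
kubectl exec my-pod -c my-container -- netstat -tln
</code></pre>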
enceladus2022
<p>I obtained an intermediate SSL certificate from SSL.com recently. I'm running some services in AKS (Azure Kubernetes Service). Earlier I was using Let's Encrypt as the CA with cert-manager, but I want to use SSL.com as the CA going forward. So basically, I obtained chained.crt and the private.key.</p> <p>The chained.crt consists of 4 certificates, like below.</p> <pre><code>-----BEGIN CERTIFICATE----- abc -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- def -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ghi -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- jkl -----END CERTIFICATE----- </code></pre> <p>The first step was that I created a secret as below. The content I added to tls.crt and tls.key was base64-encoded data.</p> <pre><code>cat chained.crt | base64 | tr -d '\n' cat private.key | base64 | tr -d '\n' apiVersion: v1 kind: Secret metadata: name: ca-key-pair namespace: cert data: tls.crt: &lt;crt&gt; tls.key: &lt;pvt&gt; </code></pre> <p>Then I created the Issuer, referring to the secret I created above.</p> <pre><code>apiVersion: cert-manager.io/v1beta1 kind: Issuer metadata: name: my-issuer namespace: cert spec: ca: secretName: ca-key-pair </code></pre> <p>The issue I'm having here is that when I create the Issuer, it gives an error like this:</p> <pre><code>Status: Conditions: Last Transition Time: 2022-01-27T16:09:02Z Message: Error getting keypair for CA issuer: certificate is not a CA Reason: ErrInvalidKeyPair Status: False Type: Ready Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ErrInvalidKeyPair 18s (x2 over 18s) cert-manager Error getting keypair for CA issuer: certificate is not a CA </code></pre> <p>I searched and found this too: <a href="https://stackoverflow.com/questions/45796058/how-do-i-add-an-intermediate-ssl-certificate-to-kubernetes-ingress-tls-configura">How do I add an intermediate SSL certificate to Kubernetes ingress TLS configuration?</a> and followed the things mentioned there, but I am still getting the same error.</p>
Container-Man
<p>Perfect! After spending more time on this, I was lucky enough to make this work. So in this case, you don't need to create an Issuer or ClusterIssuer.</p> <p>First, create a TLS secret by specifying your private.key and the certificate.crt.</p> <pre><code>kubectl create secret tls ssl-secret --key private.key --cert certificate.crt </code></pre> <p>After creating the secret, you can directly refer to that Secret in the Ingress.</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: frontend-app-ingress annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/proxy-body-size: 8m nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: tls: - hosts: - &lt;domain-name&gt; secretName: ssl-secret rules: - host: &lt;domain-name&gt; http: paths: - backend: serviceName: backend servicePort: 80 path: /(.*) </code></pre> <p>Then verify that everything's working. For me the above process worked.</p>
Container-Man
<p>I have an nginx-pod which redirects traffic into Kubernetes services and stores the related certificates inside its volume. I want to monitor these certificates - mainly their expiration.</p> <p>I found out that there is a TLS integration in Datadog (we use Datadog in our cluster): <a href="https://docs.datadoghq.com/integrations/tls/?tab=host" rel="nofollow noreferrer">https://docs.datadoghq.com/integrations/tls/?tab=host</a>.</p> <p>They provide a sample file, which can be found here: <a href="https://github.com/DataDog/integrations-core/blob/master/tls/datadog_checks/tls/data/conf.yaml.example" rel="nofollow noreferrer">https://github.com/DataDog/integrations-core/blob/master/tls/datadog_checks/tls/data/conf.yaml.example</a></p> <p>To be honest, I am completely lost and do not understand the comments in the sample file - such as:</p> <pre><code>## @param server - string - required ## The hostname or IP address with which to connect. </code></pre> <p>I want to monitor the certificates that are stored in the pod. Does that mean this value should be localhost, or do I need to somehow iterate over all the stored certificates using this value (such as the server_names in nginx.conf)?</p> <p>If anyone could help me with setting up a sample configuration, I would be really grateful - if there are any more details I should provide, that is not a problem at all.</p>
Ondřej
<h3>TLS Setup on Host</h3> <p>You can use a host-type instance to track all your certificate expiration dates.</p> <p>1- Install the TLS integration from the Datadog UI</p> <p>2- Create an instance and install the Datadog agent on it.</p> <p>3- Create /etc/datadog-agent/conf.d/tls.d/conf.yaml</p> <p>4- Edit the following template for your needs</p> <pre><code>init_config: instances: ## @param server - string - required ## The hostname or IP address with which to connect. # - server: https://yourDNS1.com/ tags: - dns:yourDNS.com - server: https://yourDNS2.com/ tags: - dns:yourDNS2 - server: yourDNS3.com tags: - dns:yourDNS3 - server: https://yourDNS4.com/ tags: - dns:yourDNS4.com - server: https://yourDNS5.com/ tags: - dns:yourDNS5.com - server: https://yourDNS6.com/ tags: - dns:yourDNS6.com </code></pre> <p>5- Restart the datadog-agent</p> <pre><code>systemctl restart datadog-agent </code></pre> <p>6- Check the status to confirm that the tls check is running successfully</p> <pre><code>watch systemctl status datadog-agent </code></pre> <p>7- Create a TLS Overview Dashboard</p> <p>8- Create a Monitor to get alerts on expiration dates</p> <hr /> <h3>TLS Setup on Kubernetes</h3> <p>1- Create a ConfigMap and attach it as a Volume (a sketch follows below) <a href="https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=configmap" rel="nofollow noreferrer">https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=configmap</a></p>
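<p>A rough sketch of that ConfigMap approach, assuming the standard containerized Agent layout where check configs live under <code>/etc/datadog-agent/conf.d/&lt;check&gt;.d/</code> (if you deploy the Agent with the official Helm chart, the <code>datadog.confd</code> value can achieve the same thing). Hostnames are placeholders:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: datadog-tls-check
data:
  conf.yaml: |
    init_config:
    instances:
      - server: https://yourDNS1.com/
        tags:
          - dns:yourDNS1.com

# then mount it into the Agent DaemonSet container, roughly:
#   volumeMounts:
#     - name: tls-check
#       mountPath: /etc/datadog-agent/conf.d/tls.d
#   volumes:
#     - name: tls-check
#       configMap:
#         name: datadog-tls-check
</code></pre>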
spetek
<p>I am incorporating HPA creation based on certain toggle-able environment variables passed to our Jenkins bash deploy script.</p> <p>There are 3 conditions:</p> <ol> <li>Correct HPA variables are found <ul> <li>run 'get hpa' in the project and delete it, before re-adding in case max/min pods or cpu thresholds have changed</li> </ul> </li> <li>HPA is set to 'toggle off' <ul> <li>run 'get hpa' in the project, and if found, delete it</li> </ul> </li> <li>HPA env variables are not present <ul> <li>run 'get hpa' in the project and if found, delete it</li> </ul> </li> </ol> <p>This is the actual line of bash I'm running:</p> <p><code>hpaExists=`kubectl get hpa/${APP_NAME} --no-headers | awk '{print $1}'`</code></p> <p>The code is running fine, but if there is no hpa found, it shows this error in the console:</p> <p><code>Error from server (NotFound): horizontalpodautoscalers.autoscaling &quot;${APP_NAME}&quot; not found</code></p> <p>There is <em>technically</em> no issue with the error happening, but I know it will confuse developers and QA when running the Jenkins deploy jobs to see an error message, and I would like to suppress it.</p> <p>I have tried: <code>kubectl get hpa/${APP_NAME} --no-headers | awk '{print $1}' &gt; /dev/null</code></p> <p><code>kubectl get hpa/${APP_NAME} --no-headers | awk '{print $1}' &gt; /dev/null 2&gt;&amp;1</code></p> <p><code>kubectl get hpa/${APP_NAME} --no-headers | awk '{print $1}' 2&gt; /dev/null</code></p> <p><code>kubectl get hpa/${APP_NAME} --no-headers | awk '{print $1}' || true</code></p> <p><code>kubectl get hpa/${APP_NAME} --no-headers | awk '{print $1}' &amp;&gt; /dev/null</code></p> <p>All still print an error message.</p> <p>So I have 2 questions:</p> <ul> <li>Is there a way to suppress this error message?</li> <li>If I do find a way to suppress the message, will the variable I am setting the result of this command to still be populated in either case?</li> </ul> <p>Thanks</p>
stillmaticone17
<p>Assuming this error message comes from <code>kubectl</code>, you need to redirect stderr on its side of the pipe.</p> <p><code>kubectl get hpa/${APP_NAME} --no-headers 2&gt;/dev/null | awk '{print $1}'</code></p> <p>Each process in the pipeline has its own stdin, stdout, and stderr. The stdout of the first process is connected to the stdin of the second process, but stderr goes to the same place as it normally does unless you tell it to go somewhere else.</p>
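<p>Putting it together with your variable assignment — the variable is still populated from stdout either way; it is simply empty when the HPA does not exist:</p> <pre><code>hpaExists=$(kubectl get hpa/${APP_NAME} --no-headers 2&gt;/dev/null | awk '{print $1}')

if [ -n &quot;${hpaExists}&quot; ]; then
  kubectl delete hpa &quot;${APP_NAME}&quot;
fi
</code></pre>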
tjm3772
<p>I've got a k8s cluster where the datadog's helm is installed.</p> <p>I want to send custom metrics to the datadog's agent, but I don't know what value to set for the dogstatsd client host.</p> <p>These agents are actually a daemonset so I assumed localhost would work, but it doesn't.</p> <p>Also, given the fact that it's UDP, it's very difficult to know if requests are failing to be delivered or not, so it's very difficult to trial and error it.</p> <p>Any idea?</p>
caeus
<p>Generally any metric you send using DogStatsD or through a custom Agent Check is a custom metric.</p> <p>There are multiple <a href="https://docs.datadoghq.com/metrics/custom_metrics/#submitting-custom-metrics" rel="nofollow noreferrer">ways</a> to send metrics to Datadog:</p> <p>The easiest way to get your custom application metrics into Datadog is to send them to <a href="https://docs.datadoghq.com/developers/dogstatsd/?tab=hostagent&amp;code-lang=python" rel="nofollow noreferrer">DogStatsD</a>, a metrics aggregation service bundled with the Datadog Agent.</p> <p>Set the host and port to the hostname/IP and port of the agent (if different from the default 127.0.0.1:8125):</p> <pre><code>statsd_host: 127.0.0.1 statsd_port: 8125 </code></pre> <p>Refer to this <a href="https://docs.datadoghq.com/metrics/custom_metrics/dogstatsd_metrics_submission/" rel="nofollow noreferrer">doc</a> for more information about custom metric submission through DogStatsD.</p> <p>You can also use one of the <a href="https://docs.datadoghq.com/developers/community/libraries/" rel="nofollow noreferrer">Datadog official and community contributed API and DogStatsD client libraries</a> to submit your custom metrics.</p> <p><strong>Note</strong>: There are no enforced fixed rate limits on custom metric submission. If your default allotment is exceeded, you are billed according to Datadog’s <a href="https://docs.datadoghq.com/account_management/billing/custom_metrics/#counting-custom-metrics" rel="nofollow noreferrer">billing policy</a> for custom metrics.</p>
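<p>Since the Agents run as a DaemonSet, <code>localhost</code> inside an application pod does not reach them. A common pattern is to expose the node IP to the pod via the downward API (an env var populated from <code>status.hostIP</code>, often named <code>DD_AGENT_HOST</code>) and to make sure the Agent's DogStatsD port accepts non-local traffic (host port / <code>DD_DOGSTATSD_NON_LOCAL_TRAFFIC</code>). A hedged Python sketch assuming that env var is set; the metric name and tags are placeholders:</p> <pre><code>import os
from datadog import initialize, statsd

# DD_AGENT_HOST is assumed to be injected via the downward API (status.hostIP)
initialize(
    statsd_host=os.environ.get(&quot;DD_AGENT_HOST&quot;, &quot;localhost&quot;),
    statsd_port=8125,
)

statsd.increment(&quot;myapp.requests.count&quot;, tags=[&quot;env:dev&quot;])
</code></pre>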
Sai Chandini Routhu
<p>I have a bunch of services deployed to a Kubernetes cluster (various pods and services all load-balanced behind a gateway). I am making a REST call to one of them and getting unexpected errors, but the problem is I'm not sure <em>which</em> pod/service is actually throwing the error. I would like to check all the logs of every pod/service.</p> <p>When I run <code>kubectl get namespace</code> I get:</p> <pre><code>NAME STATUS AGE another-app Active 22d myapp Active 22d myapp-database Active 22d default Active 22d kube-public Active 22d kube-system Active 22d zanzabar Active 22d </code></pre> <p>Is there a <code>kubectl log</code> command that can scan the entire cluster for logs and search them for a specific error message? For instance, say the error message I'm getting back from the REST (<code>curl</code>) is &quot;Sorry mario your princess is in another castle&quot;. Is there any way I can use <code>kubectl log</code> to scan all pod/service logs for that phrase and display the results back to me? If not, then what's the best/easiest way <strong>using <code>kubectl</code></strong> to find the pod/service with the error message (and hopefully, more details behind my error)?</p>
hotmeatballsoup
<p>You can fetch the logs of a particular pod or container (use the -c flag for a container) and grep for the error by piping the log command into grep.</p> <p>For example, if I want to get the logs of a pod named my-pod and grep for the phrase &quot;error exists&quot;, the command goes like:</p> <pre><code>kubectl logs my-pod | grep &quot;error exists&quot; </code></pre> <p>Refer to this <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods" rel="nofollow noreferrer">document</a> for multiple ways of fetching logs and this <a href="https://stackoverflow.com/questions/53348331/extract-lines-from-kubernetes-log">similar question</a> for more information on grep usage.</p>
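<p>To search across many pods at once rather than one at a time, you can select by label and include all containers — a sketch assuming your pods carry an <code>app</code> label (adjust the namespace, label and phrase):</p> <pre><code>kubectl logs -n myapp -l app=myapp --all-containers=true --prefix=true \
  | grep &quot;princess is in another castle&quot;
</code></pre>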
Goli Nikitha
<p>Is there any way to set a label on the secret created by a ServiceAccount? For now it is the only secret I'm not able to configure with a label.</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: forum-sa </code></pre> <p><a href="https://i.stack.imgur.com/qZY9D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qZY9D.png" alt="enter image description here" /></a></p>
dnf
<p>Please use the configuration below with the service account and kubectl, following the <a href="https://www.ibm.com/docs/en/cloud-paks/cp-management/2.0.0?topic=kubectl-using-service-account-tokens-connect-api-server#kube" rel="nofollow noreferrer">documentation</a><a href="https://www.ibm.com/docs/en/cloud-paks/cp-management/2.0.0?topic=kubectl-using-service-account-tokens-connect-api-server#kube" rel="nofollow noreferrer">1</a>.</p> <p>kubectl get secret --namespace={namespace}</p> <p>The kubectl command above lists the secrets in the namespace (including the token secret created for the service account), so that you can then label the right one.</p>
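<p>Once you know the name of the secret that holds the ServiceAccount token, a hedged example of actually attaching a label to it (the secret name, namespace and label are placeholders):</p> <pre><code>kubectl label secret forum-sa-token-abcde -n default team=forum
</code></pre>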
Venkata Satya Karthik Varun Ku
<p>I'm trying to setup a basic NATS service on my kubernetes cluster, according to their documentation, <a href="https://docs.nats.io/running-a-nats-service/nats-kubernetes" rel="nofollow noreferrer">here</a>. I executed the following code:</p> <pre><code>$ helm install test-nats nats/nats NAME: test-nats LAST DEPLOYED: Thu Jul 14 13:18:09 2022 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: You can find more information about running NATS on Kubernetes in the NATS documentation website: https://docs.nats.io/nats-on-kubernetes/nats-kubernetes NATS Box has been deployed into your cluster, you can now use the NATS tools within the container as follows: kubectl exec -n default -it deployment/test-nats-box -- /bin/sh -l nats-box:~# nats-sub test &amp; nats-box:~# nats-pub test hi nats-box:~# nc test-nats 4222 Thanks for using NATS! $ kubectl exec -n default -it deployment/test-nats-box -- /bin/sh -l _ _ _ __ __ _| |_ ___ | |__ _____ __ | '_ \ / _` | __/ __|_____| '_ \ / _ \ \/ / | | | | (_| | |_\__ \_____| |_) | (_) &gt; &lt; |_| |_|\__,_|\__|___/ |_.__/ \___/_/\_\ nats-box v0.11.0 test-nats-box-84c48d46f-j7jvt:~# </code></pre> <p>Now, so far, everything has conformed to their start guide. However, when I try to test the connection, I run into trouble:</p> <pre><code>test-nats-box-84c48d46f-j7jvt:~# nats-sub test &amp; test-nats-box-84c48d46f-j7jvt:~# /bin/sh: nats-sub: not found test-nats-box-84c48d46f-j7jvt:~# nats-pub test hi /bin/sh: nats-pub: not found </code></pre> <p>It looks like the commands weren't found but they should have been installed when I did the <code>helm install</code>. What's going on here?</p>
Woody1193
<p>I have reproduced the setup on my kubernetes cluster, successfully deployed the nats box and started a <a href="https://docs.nats.io/nats-concepts/core-nats/pubsub/pubsub_walkthrough#1.-create-subscriber-1" rel="nofollow noreferrer">client subscriber</a> program; subscribers listen on subjects, and publishers send messages on specific subjects.</p> <p><strong>1. Create Subscriber</strong></p> <p>In a shell or command prompt session, start a client subscriber program.</p> <pre><code>nats sub &lt;subject&gt; </code></pre> <p>Here, &lt;subject&gt; is a subject to listen on. It helps to use unique and well thought-through subject strings because you need to ensure that messages reach the correct subscribers even when wildcards are used.</p> <p>For example:</p> <pre><code>nats sub msg.test </code></pre> <p>You should see the message: Listening on [msg.test].</p> <p><strong>2. Create a Publisher and publish a message</strong></p> <p>Create a NATS publisher and send a message.</p> <pre><code>nats pub &lt;subject&gt; &lt;message&gt; </code></pre> <p>Where &lt;subject&gt; is the subject name and &lt;message&gt; is the text to publish.</p> <p>For example:</p> <pre><code>nats pub msg.test nats-message-1 </code></pre> <p>You'll notice that the publisher sends the message and prints: Published [msg.test] : 'nats-message-1'.</p> <p>The subscriber receives the message and prints: [#1] Received on [msg.test]: 'nats-message-1'.</p> <p>In your case, you have used nats-sub and nats-pub, which are the deprecated command names and are not present in the current nats-box image; use the nats sub and nats pub commands shown above instead.</p>
Srividya
<p>Is there a way to query the cpu request and limit with kubectl for each container in a kubernetes context / namespace, just as I can query cpu usage with kubectl top pods?</p>
morfys
<p><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">Requests and limits</a> are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.Limit can never be lower than the request.</p> <p>As said by @chris, try the following commands for cpu requests and limits for kubernetes namespaces</p> <p>You can get the pods and their CPU requests with the following command.</p> <pre><code>kubectl get pods --all-namespaces -o=jsonpath=&quot;{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}&quot; </code></pre> <p>You can get the pods and their CPU Limits with the following command.</p> <pre><code>kubectl get pods --all-namespaces -o=jsonpath=&quot;{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.limits.cpu}{'\n'}{end}{'\n'}{end}&quot; </code></pre>
Srividya
<p><em>Note: I do not believe the issue I am experiencing is specific to Azure AKS or Application Gateway (AGIC), but it is the environment we are currently working in on the off chance that it impacts responses.</em></p> <p>We have an AKS cluster leveraging a namespace-by-app model and managed identity. All apps except for one have static hosts (i.e., <code>api.mydomain.com</code>) and one app, which is tenantized, utilizes a per-customer subdomain host model (i.e., <code>customer1.mypayquicker.com</code>).</p> <p>When initially configuring the cluster, which implements E2E SSL, the health probes were configured with both a host and host header value in the health probes. For most apps the value was simply their publicly accessible address and for the tenantized app, a single subdomain was selected (<em>k8probes</em>). An example of what one of the probes looked like is provided below. This configuration resulted in the expected listeners, including <code>*.mydomain.com</code> for the tenantized app.</p> <p><strong>Probe Config</strong></p> <pre><code>livenessProbe: failureThreshold: 3 httpGet: host: app1.mydomain.com httpHeaders: - name: Host value: app1.mydomain.com path: /healthz port: 443 scheme: HTTPS periodSeconds: 30 successThreshold: 1 timeoutSeconds: 5 </code></pre> <p>The <code>host</code> value being populated was an artifact of working through the E2E SSL configuration. An unintended side effect of the <code>host</code> value being populated was it was discovered that the probe traffic was being routed out of the cluster (dns resolution of the <code>host</code> address) and back in, which then was defeating the purpose of the probe and the pod answering the probe was not necessarily the one being tested.</p> <p>We then removed the <code>host</code> value from the probes, only providing the path, port and scheme and host header to satisfy the wildcard SSL certificate and all probes were seemingly working as expected. When inspecting the Health Probes in the Application Gateway, all apps but the wildcard app, had a host listed that was equal to the value in the host ingress definition (below) and the wildcard app was listing <code>localhost</code>.</p> <p><strong>Ingress Snippet</strong></p> <pre><code>spec: tls: - hosts: - {{ .Values.application.ingressEndpoint | quote }} </code></pre> <p>The host value being reported for the probes aside, the probes were all succeeding, apps were online, none of the traffic was being routed out of the cluster and back in (traffic staying in the cluster as expected). However, even though the apps were online and probes succeeding, no traffic is routing to the wildcard app.</p> <p>In this scenario, the ingress defintion has not changed, the static host apps all list a single host (app1.mydomain.com ...) and the wildcard app has its wildcard host (*.mydomain.com). 
The only difference was the removal of a static host in the probe defintions for all apps, which in the case of the wildcard app was <code>k8probes.mydomain.com</code>.</p> <p>Reinstating the <code>host</code> value in the probe for the wildcard app allows traffic to once again flow to the app, which means it is also routing traffic out of the cluster once again to resolve the probe which is not workable.</p> <p>I found this article which has a section for wildcard/multiple host names in listeners: <a href="https://learn.microsoft.com/en-us/azure/application-gateway/multiple-site-overview" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/application-gateway/multiple-site-overview</a> which states:</p> <p><strong>Conditions for using wildcard characters and multiple host names in a listener</strong></p> <pre><code>- You can only mention up to 5 host names in a single listener - Asterisk * can be mentioned only once in a component of a domain style name or host name. For example, component1*.component2*.component3. (*.contoso-*.com) is valid. - There can only be up to two asterisks * in a host name. For example, *.contoso.* is valid and *.contoso.*.*.com is invalid. - There can only be a maximum of 4 wildcard characters in a host name. For example, ????.contoso.com, w??.contoso*.edu.* are valid, but ????.contoso.* is invalid. - Using asterisk * and question mark ? together in a component of a host name (*? or ?* or **) is invalid. For example, *?.contoso.com and **.contoso.com are invalid. </code></pre> <p>Does anyone have any insight on how to correctly configure an E2E SSL host, with a wildcard host listener and with health probes which do not require traffic to be routed out of the cluster and back in again?</p>
James Legan
<p>The health probe behavior looks expected, due to a limitation of the listener.</p> <p>For the backend health check, you can't associate multiple custom probes per HTTP setting. Instead, you can probe one of the websites at the backend or use &quot;127.0.0.1&quot; to probe the localhost of the backend server. However, when you're using wildcard or multiple host names in a listener, the requests for all the specified domain patterns will be routed to the backend pool depending on the rule type (basic or path-based).</p> <p>If you don't want the name resolution traffic sent out of the cluster, a workaround would be to add a <a href="https://learn.microsoft.com/en-us/azure/aks/coredns-custom#hosts-plugin" rel="nofollow noreferrer">coredns custom entry</a> so that the name is resolved locally within the cluster (a sketch follows below).</p> <p>Additional reference: <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-tls?tabs=azure-cli" rel="nofollow noreferrer">Use TLS with an ingress controller on Azure Kubernetes Service</a></p>
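<p>A rough sketch of that CoreDNS workaround on AKS, using the <code>coredns-custom</code> ConfigMap's hosts plugin — the IP and probe hostname are placeholders; point the name at an in-cluster address such as the ingress service's cluster IP:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  probes.override: |
    hosts {
      10.0.0.10 k8probes.mydomain.com
      fallthrough
    }
</code></pre>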
srbhatta-MSFT
<p>As far as I know, the way to create a configMap in Kubernetes from a file is to use:<br /> the <code>--from-file</code> option of <code>kubectl</code></p> <p>What I am looking for is a way to only load part of the yaml file into the configMap.<br /> Example: Let's say I have this yml file:</p> <pre><code>family: Boys: - name: Joe - name: Bob - name: dan Girls: - name: Alice - name: Jane </code></pre> <p>Now I want to create a configMap called 'boys' which will include only the 'Boys' section.<br /> Is that possible?</p> <p>Another thing that could help, if the above is not possible: when I export the configMap as environment variables to a pod (using <code>envFrom</code>), is there a way to only export part of the configMap?</p> <p>Either option will work for me.</p> <p>Any idea?</p>
nsteiner
<p>A ConfigMap stores its configuration as key/value pairs. In your example you have multiple arrays of data with repeated keys, but you can work around this by creating multiple ConfigMaps from different files.</p> <ol> <li>First you need to create the files for the ConfigMaps, guided by the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">documentation</a> (note these are env-format files despite the .yaml extension). Call the first file <code>Boys.yaml</code></li> </ol> <pre><code># Env-files contain a list of environment variables. # These syntax rules apply: # Each line in an env file has to be in VAR=VAL format. # Lines beginning with # (i.e. comments) are ignored. # Blank lines are ignored. # There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value)). name=Joe name=Bob name=Dan </code></pre> <p>Call the second file <code>Girls.yaml</code></p> <pre><code>name=Alice name=Jane </code></pre> <ol start="2"> <li>Create your ConfigMap</li> </ol> <pre><code>kubectl create configmap NameOfYourConfigmap --from-env-file=PathToYourFile/Boys.yaml </code></pre> <ol start="3"> <li>where the output is similar to this:</li> </ol> <pre><code>apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: name: NameOfYourConfigmap namespace: default resourceVersion: uid: data: name: Joe name: Bob name: Dan </code></pre> <ol start="4"> <li>Finally, you can pass these ConfigMaps to a pod or deployment using <code>configMapRef</code> entries:</li> </ol> <pre><code> envFrom: - configMapRef: name: NameOfYourConfigmap-Boys - configMapRef: name: NameOfYourConfigmap-Girls </code></pre>
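<p>If you would rather not maintain the split files by hand, you could generate the partial file from the original YAML and feed it to kubectl — a sketch assuming yq v4 is available and using the structure from the question (note this stores the extracted section as a single key rather than one key per name):</p> <pre><code>yq '.family.Boys' family.yaml &gt; boys.yaml
kubectl create configmap boys --from-file=boys.yaml
</code></pre>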
Mykola
<p>We are running our workloads on AKS. Basically we have two Node-Pools.</p> <p><strong>1. System-Node-Pool:</strong> Where all system pods are running</p> <p><strong>2. Apps-Node-Pool:</strong> Where our actual workloads/apps run.</p> <p>In fact, our Apps-Node-Pool is <strong>Tainted</strong> whereas System-Node-Pool <strong>isn't</strong>. So basically I deployed the Loki-Grafana stack for monitoring and log analysis. I'm using the below Helm command to install the Grafana-Loki stack.</p> <pre><code>helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=standard,loki.persistence.size=5Gi </code></pre> <p>Since the Toleration isn't added in the Helm command (or even in values.yaml), all the Grafana and Loki pods get deployed in the <strong>System-Node-Pool</strong>. But in my case, since the necessary agents (for example: Promtail pods) aren't deployed on the <strong>Apps-Node-Pool</strong>, I can't check the logs of my app pods.</p> <p>Since the <strong>Taint</strong> exists on the <strong>Apps-Node-Pool</strong>, if we add a Toleration along with the Helm command, then basically all the monitoring related pods can get deployed in the Apps-Node-Pool (still can't guarantee it, as they may get deployed in the System-Node-Pool since it doesn't have a Taint).</p> <p>So given my cluster, what can I do in order to make sure that the agent pods are also running on the tainted nodes?</p>
Container-Man
<p>So in my case the requirement was to run the &quot;Promtail&quot; pods in the Apps-Node-Pool. Initially I hadn't added a Toleration to the Promtail pods, so I had to add one, and with that the Promtail pods successfully got deployed in the Apps-Node-Pool.</p> <p>But adding a Toleration to the Promtail pods alone doesn't guarantee they get deployed in the Apps-Node-Pool, because in my case the System-Node-Pool didn't have any Taint.</p> <p>In this case you may leverage both NodeAffinity and Tolerations to exclusively deploy pods on a specific node pool, as in the sketch below.</p>
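<p>For illustration, a hedged sketch of what such a values override for the loki-stack chart could look like; the taint key, node pool name and label are placeholders, and the exact value keys may differ between chart versions, so check the promtail subchart's <code>values.yaml</code>:</p> <pre><code># values-override.yaml (pass with: helm upgrade --install loki grafana/loki-stack -f values-override.yaml ...)
promtail:
  tolerations:
    - key: apps-pool-taint        # placeholder: your Apps-Node-Pool taint key
      operator: Exists
      effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: agentpool          # AKS node pool label
                operator: In
                values:
                  - appsnodepool        # placeholder: your Apps-Node-Pool name
</code></pre>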
Container-Man
<p>I am running a kubernetes single node cluster and upgraded my gitlab 14.3.2 to 14.7.0 using helm chart. This is my <code>values.yaml</code> file, which worked before perfectly...</p> <pre><code>global: edition: ce hosts: domain: domain.com shell: port: 30022 rails: bootsnap: enabled: false ingress: class: nginx configureCertmanager: false annotations: cert-manager.io/cluster-issuer: letsencrypt-prod acme.cert-manager.io/http01-edit-in-place: &quot;true&quot; certmanager: install: false nginx-ingress: enabled: false prometheus: install: false redis: resources: requests: cpu: 10m memory: 64Mi minio: ingress: tls: secretName: gitlab-minio-tls resources: requests: memory: 64Mi cpu: 10m gitlab-runner: runners: privileged: false gitlab: gitaly: persistence: size: 2Gi gitlab-shell: minReplicas: 1 service: type: NodePort nodePort: 30022 webservice: minReplicas: 1 ingress: tls: secretName: gitlab-webservice-tls sidekiq: minReplicas: 1 toolbox: enabled: false registry: hpa: minReplicas: 1 ingress: tls: secretName: gitlab-registry-tls </code></pre> <p>...but the new gitlab runner pod is failing. In the logs I do get the error</p> <pre><code>gitlab-gitlab-runner Service LoadBalancer External Address not yet available </code></pre> <p>What am I missing in the <code>values.yaml</code> file? As I'm running a single node cluster, I do not have a load balancer. So I don't understand the error message.</p>
user3142695
<p>Got the same issue. I found out a service was misconfigured; looking into the chart, I found out it was about the session server.</p> <p>Then I simply disabled the sessionServer, which should be disabled by default (cf <a href="https://gitlab.com/gitlab-org/charts/gitlab-runner/-/commit/477f145bedcb68690d723826831d9c8c4f5b2d09" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/charts/gitlab-runner/-/commit/477f145bedcb68690d723826831d9c8c4f5b2d09</a>). I don't need it, and that did the trick.</p> <pre><code>gitlab-runner: sessionServer: enabled: false </code></pre> <p>If you need it, I think you have to set some configuration, such as publicIP. Here are the default values from <a href="https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/main/values.yaml" rel="nofollow noreferrer">here</a>:</p> <pre><code>## Specify whether the runner should start the session server. ## Defaults to false ## ref: ## ## When sessionServer is enabled, the user can either provide a public publicIP ## or either rely on the external IP auto discovery ## When a serviceAccountName is used with the automounting to the pod disable, ## we recommend the usage of the publicIP sessionServer: enabled: true # timeout: 1800 # internalPort: 8093 # externalPort: 9000 # publicIP: &quot;&quot; </code></pre>
Aadrii
<p>I need to install Grafana Loki with Prometheus in my Kubernetes cluster. So I followed the below to install them. It basically uses Helm to install it. Below is the command which I executed to install it.</p> <pre><code>helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=standard,loki.persistence.size=5Gi -n monitoring --create-namespace </code></pre> <p>I followed the <a href="https://grafana.com/docs/loki/latest/installation/helm/" rel="nofollow noreferrer">official Grafana website</a> in this case.</p> <p>But when I execute the above helm command, I get the below error. In fact, I'm new to Helm.</p> <pre><code>Release &quot;loki&quot; does not exist. Installing it now. W0307 16:54:55.764184 1474330 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy &quot;loki-grafana&quot; in namespace &quot;&quot; exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key &quot;meta.helm.sh/release-name&quot; must equal &quot;loki&quot;: current value is &quot;loki-grafana&quot; </code></pre> <p>I don't see any Grafana chart installed.</p> <pre><code>helm list -A NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION cert-manager cert-manager 1 2021-11-26 13:07:26.103036078 +0000 UTC deployed cert-manager-v0.16.1 v0.16.1 ingress-nginx ingress-basic 1 2021-11-18 12:23:28.476712359 +0000 UTC deployed ingress-nginx-4.0.8 1.0.5 </code></pre>
Container-Man
<p>Well, I was able to get past my issue. The problem was the leftover &quot;PodSecurityPolicy&quot;. I deleted the existing Grafana PodSecurityPolicy and the install worked.</p>
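<p>For anyone hitting the same error, the steps look roughly like this (assuming the cluster is still on a Kubernetes version that serves the <code>policy/v1beta1</code> PodSecurityPolicy API, i.e. pre-1.25; the policy name is the one from the error message):</p> <pre><code># Find the PodSecurityPolicy left behind by the previous release
kubectl get psp

# Delete the conflicting policy, then re-run the helm upgrade --install command
kubectl delete psp loki-grafana
</code></pre>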
Container-Man
<p>Given a container in an Azure container registry and a Kubernetes cluster set up via the portal, are there any visual tools that I can use so that I <strong>don't have to use command line commands</strong>, for things like adding/editing the yaml file and launching the cluster?</p> <p>For example, I found this tool <a href="https://k8syaml.com/" rel="nofollow noreferrer">https://k8syaml.com/</a>, but this is only one part of the process and it is also not aware of the existing infrastructure.</p> <p>What are the visual tools to manage Kubernetes end-to-end?</p>
realPro
<p>One tool I always work with when dealing with Kubernetes is <a href="https://k8slens.dev/desktop.html" rel="nofollow noreferrer">Lens</a>. <a href="https://www.youtube.com/watch?v=eeDwdVXattc" rel="nofollow noreferrer">Here</a> is a video showing you what it can do. Best of all, it just needs the kube config file and so it is agnostic to where the Kubernetes cluster is (On-Prem, GKE, AKS, EKS)</p>
Brad Rehman
<p>I'm deploying my application in the cloud, inside a cluster across 3 pods:</p> <ol> <li>one pod: backend - Quarkus</li> <li>one pod: frontend - Angular</li> <li>one pod: DB - Postgres</li> </ol> <p><strong>The backend has 3 endpoints:</strong></p> <ol> <li>One endpoint: GraphQL</li> <li>Two endpoints: Rest</li> </ol> <p><strong>The pods are exposed:</strong></p> <ol> <li>backend: ClusterIp</li> <li>DB: ClusterIp</li> <li>frontend: NodePort</li> </ol> <p><strong>I have an Nginx web server &amp; two ingress manifests; one for the backend and a second for the frontend:</strong></p> <p>1- backend-ingress:</p> <pre><code> apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: mcs-thirdparty-back-ingress namespace: namespace annotations: nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: ingressClassName: nginx-internal rules: - host: backend.exemple.com http: paths: - path: / backend: service: name: mcs-thirdparty-backend port: number: 8080 pathType: Prefix </code></pre> <p>2- frontend-ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: mcs-thirdparty-ingress namespace: namespace spec: ingressClassName: nginx-internal rules: - host: bilels.exemple.com http: paths: - path: / backend: service: name: mcs-thirdparty-frontend port: number: 80 pathType: Prefix </code></pre> <p>For the GraphQL endpoint/request, the frontend can correctly communicate with the backend and fetches the required data. When I run the POST request to fetch the accessToken from the server (Rest endpoint), I receive the 404 error code.</p> <p><a href="https://i.stack.imgur.com/rp4ey.jpg" rel="nofollow noreferrer">The error screenshot is here</a></p> <p>I tried several changes in the backend-ingress manifest, but I always get 404: <code>- path: /(.*)</code> <code>- path: /*</code> <code>- path: /response</code></p>
Bilel-NEJI
<p>I think I managed to find another diagnostic method with the help of Ryan Dawson. I port-forwarded the backend pod and made the request locally, then I found that there is a 500 error code --&gt; meaning that the request was not matching the API requirements: in the frontend I was sending the wrong content type.</p> <p>--&gt; so the ingress config is already in good shape.</p>
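<p>For anyone wanting to reproduce this kind of check, a rough sketch (service name, namespace and port taken from the question's manifests; the path and payload are placeholders for whatever REST call is failing):</p> <pre><code># Forward the backend service to a local port
kubectl -n namespace port-forward svc/mcs-thirdparty-backend 8080:8080

# In another terminal, replay the failing REST call directly against the backend
curl -v -X POST http://localhost:8080/some/rest/path \
     -H &quot;Content-Type: application/json&quot; \
     -d '{}'
</code></pre> <p>A 500 here, with the ingress out of the picture, points at the request itself rather than the routing.</p>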
Bilel-NEJI
<p>We have a kubernetes <code>Ingress</code> object defined. All the ingress rules are not available when defining object for the first time, hence we would like to append rules to it on the fly when configuring respective services to use with it.</p> <p>Example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: alb.ingress.kubernetes.io/certificate-arn: ${acm_arn} alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance kubernetes.io/ingress.class: alb name: ${ingress_name} namespace: ${namespace} spec: rules: - host: api.${region}.infra.${hosted_zone} http: paths: - backend: service: name: istio-ingressgateway port: number: 80 path: /* pathType: Prefix - http: paths: - backend: service: name: nginx-ingress-controller port: number: 80 path: /* pathType: Prefix </code></pre> <p>The above object is created in the beginning. Now, we would like to append a third rule to it at later point in time.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress name: ${ingress_name} namespace: ${namespace} spec: rules: - host: api.${region}.${some_dynamic_variable}.${hosted_zone} http: paths: - backend: service: name: istio-ingressgateway port: number: 80 path: /* pathType: Prefix </code></pre> <p>I could have used <code>kubectl patch</code> but because the merge strategy is not specified in the <a href="https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json" rel="nofollow noreferrer">api docs</a>, I understand it follows the merge strategy <code>replace</code> which is not what I intend to do.</p> <p>What is the best available option to get this issue resolved?</p>
Mukund Jalan
<p>There is a <code>kubectl</code> plugin called <a href="https://github.com/pragaonj/ingress-rule-updater" rel="nofollow noreferrer"><code>ingress-rule</code></a>. It allows you to configure an Ingress on the command line. The plugin is available via <a href="https://krew.sigs.k8s.io/" rel="nofollow noreferrer"><code>krew</code></a>. If you have <code>krew</code> installed simply install the plugin with:</p> <pre><code>kubectl krew install ingress-rule </code></pre> <p>Add a new rule to a existing Ingress:</p> <pre><code>kubectl ingress-rule -n $NAMESPACE set $INGRESSNAME --service $SERVICENAME --port 80 --host $HOST </code></pre> <p>Delete the previously added rule:</p> <pre><code>kubectl ingress-rule -n $NAMESPACE delete $INGRESSNAME --service $SERVICENAME </code></pre>
pjo
<p>I had successfully deployed MySQL router in Kubernetes as described in <a href="https://stackoverflow.com/questions/63149618/mysql-router-in-kubernetes-as-a-service">this</a> answer.</p> <p>But recently I noticed mysql router was having overflow issues after some time.</p> <pre><code>Starting mysql-router. 2022-03-08 10:33:33 http_server INFO [7f2ba406f880] listening on 0.0.0.0:8443 2022-03-08 10:33:33 io INFO [7f2ba406f880] starting 4 io-threads, using backend 'linux_epoll' 2022-03-08 10:33:33 metadata_cache INFO [7f2b63fff700] Starting Metadata Cache 2022-03-08 10:33:33 metadata_cache INFO [7f2b63fff700] Connections using ssl_mode 'PREFERRED' 2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] Starting metadata cache refresh thread 2022-03-08 10:33:33 routing INFO [7f2b617fa700] [routing:myCluster_ro] started: listening on 0.0.0.0:6447, routing strategy = round-robin-with-fallback 2022-03-08 10:33:33 routing INFO [7f2b60ff9700] [routing:myCluster_rw] started: listening on 0.0.0.0:6446, routing strategy = first-available 2022-03-08 10:33:33 routing INFO [7f2b3ffff700] [routing:myCluster_x_ro] started: listening on 0.0.0.0:64470, routing strategy = round-robin-with-fallback 2022-03-08 10:33:33 routing INFO [7f2b3f7fe700] [routing:myCluster_x_rw] started: listening on 0.0.0.0:64460, routing strategy = first-available 2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] Potential changes detected in cluster 'myCluster' after metadata refresh 2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] Metadata for cluster 'myCluster' has 1 replicasets: 2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] 'default' (3 members, single-primary) 2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] node3.me.com:3306 / 33060 - mode=RW 2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] node2.me.com:3306 / 33060 - mode=RO 2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] node1.me.com:3306 / 33060 - mode=RO 2022-03-08 10:33:33 routing INFO [7f2b9c2c8700] Routing routing:myCluster_x_rw listening on 64460 got request to disconnect invalid connections: metadata change 2022-03-08 10:33:33 routing INFO [7f2b9c2c8700] Routing routing:myCluster_x_ro listening on 64470 got request to disconnect invalid connections: metadata change 2022-03-08 10:33:33 routing INFO [7f2b9c2c8700] Routing routing:myCluster_rw listening on 6446 got request to disconnect invalid connections: metadata change 2022-03-08 10:33:33 routing INFO [7f2b9c2c8700] Routing routing:myCluster_ro listening on 6447 got request to disconnect invalid connections: metadata change 2022-03-08 14:59:30 routing WARNING [7f2b9d2ca700] [routing:myCluster_rw] reached max active connections (512 max=512) 2022-03-08 14:59:30 routing WARNING [7f2b9d2ca700] [routing:myCluster_rw] reached max active connections (512 max=512) </code></pre> <p>We have innodb cluster (MySQL 8) and router is connected to it.</p> <p>When I check <code>show processlist</code> in master node :</p> <pre><code>| 6176344 | routeruser | 192.168.10.6:61195 | my_db | Sleep | 23946 | | NULL | | 6176345 | routeruser | 192.168.10.6:62671 | my_db | Sleep | 23946 | | NULL | | 6176346 | routeruser | 192.168.10.6:65531 | my_db | Sleep | 23946 | | NULL | | 6176347 | routeruser | 192.168.10.6:39541 | my_db | Sleep | 23946 | | NULL | | 6176348 | routeruser | 192.168.10.6:34074 | my_db | Sleep | 23946 | | NULL </code></pre> <p>I had stopped all custom applications running in K8,but still I got this issue.</p> <p>In <code>/etc/mysql/mysql.conf.d/mysqld.cnf</code></p> 
<pre><code>[mysqld] pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock datadir = /var/lib/mysql log-error = /var/log/mysql/error.log log_timestamps='SYSTEM' max_connections = 1000 </code></pre> <p>What am I missing here? Why does the router service overflow after working for a few hours? Any help to solve/further debug this issue is highly appreciated.</p>
Sachith Muhandiram
<blockquote> <p>Why does the router service overflow after working for a few hours?</p> </blockquote> <p>From what I know, you would modify the <code>max_connection</code> setting in the <code>mysqld</code> section of your MySQL configuration file. You can determine the location of the configuration file by running:</p> <blockquote> <p><code>mysqld --help --verbose</code></p> </blockquote> <p>After modifying it, don't forget to restart your MySQL server.</p> <p>For the router itself, set <code>max_connections</code> to your desired value; as the <a href="https://dev.mysql.com/doc/mysql-router/8.0/en/mysql-router-conf-options.html#option_mysqlrouter_max_connections" rel="nofollow noreferrer">MySQL Router documentation</a> states:</p> <blockquote> <p>Default Value = 512, Minimum Value = 1, Maximum Value = 9223372036854775807</p> </blockquote> <p>Also look at the <code>max_total_connections</code> parameter in the same file.</p>
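<p>As a hedged illustration of where those settings live, a fragment of the router's <code>mysqlrouter.conf</code> could look like this (the routing section name is taken from the log in the question; the values are placeholders, and <code>max_total_connections</code> requires MySQL Router 8.0.28 or later):</p> <pre><code>[DEFAULT]
# Router-wide cap on client connections
max_total_connections = 1024

[routing:myCluster_rw]
# Per-route cap; the default of 512 is the limit being hit in the log
max_connections = 1024
</code></pre>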
Mykola
<p>Does anyone know what I am doing wrong with my Kubernetes secret yaml and why it's not able to successfully create one programmatically?</p> <p>I am trying to programmatically create a secret in a Kubernetes cluster with credentials to pull an image from a private registry, but it is failing with the following:</p> <pre><code>&quot;Secret &quot;secrettest&quot; is invalid: data[.dockerconfigjson]: Invalid value: &quot;&lt;secret contents redacted&gt;&quot;: invalid character 'e' looking for beginning of value&quot; </code></pre> <p>This is the yaml I tried to use to create the secret with. It is the yaml output from a secret previously created in my Kubernetes cluster using the command line, except without a few unnecessary properties. So I know this is valid yaml:</p> <pre><code>apiVersion: v1 data: .dockerconfigjson: eyJhdXRocyI6eyJoZWxsb3dvcmxkLmF6dXJlY3IuaW8iOnsidXNlcm5hbWUiOiJoZWxsbyIsInBhc3N3b3JkIjoid29ybGQiLCJhdXRoIjoiYUdWc2JHODZkMjl5YkdRPSJ9fX0= kind: Secret metadata: name: secrettest namespace: default type: kubernetes.io/dockerconfigjson </code></pre> <p>This is the decoded value of the &quot;.dockerconfigjson&quot; property, which seems to be throwing the error, though I'm not sure why if the value is supposed to be encoded per the documentation:</p> <pre><code>{&quot;auths&quot;:{&quot;helloworld.azurecr.io&quot;:{&quot;username&quot;:&quot;hello&quot;,&quot;password&quot;:&quot;world&quot;,&quot;auth&quot;:&quot;aGVsbG86d29ybGQ=&quot;}}} </code></pre> <p>According to the documentation, my yaml is valid, so I'm not sure what the issue is: <a href="https://i.stack.imgur.com/Cdl2H.png" rel="nofollow noreferrer">Customize secret yaml</a></p> <p><strong>Note: I tried creating the Secret using the Kubernetes client and &quot;PatchNamespacedSecretWithHttpMessagesAsync&quot; in C#</strong></p> <p>Referenced documentation: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
jorgeavelar98
<p>I found my issue. I was trying to create the Secret object using</p> <pre><code>Yaml.LoadAllFromString() </code></pre> <p>which was double encoding my <code>.dockerconfigjson</code> value. The weird part was that if the value wasn't encoded, it would fail. So I had to just manually create the Secret object instead of reading it from a yaml file.</p>
jorgeavelar98
<p>I am trying to prepare my application so that I can deploy it via kubernetes in the cloud. Therefore I installed minikube to get accustomed with how to set up an ingress. Therefore I followed the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">ingress documentation by kubernetes</a>. (NOTE: I did not expose my frontend service like they do in the beginning of the tutorial, as I understood it's not needed for an ingress).</p> <p>But after hours of desperate debugging and no useful help by ChatGPT I am still not able to resolve my bug. Whenever I try to access my application via my custom host (example.com), I get <code>InvalidHostHeader</code> as a response.</p> <p>For simplicity's sake right now my application simply has one deployment with one pod that runs a vuejs frontend. My <code>frontend-deployment.yaml</code> looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: frontend labels: app: frontend spec: replicas: 1 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: containers: - name: frontend image: XXXXXXXXXXX imagePullPolicy: Always ports: - name: http containerPort: 8080 </code></pre> <p>My <code>frontend-service.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: frontend spec: ports: - name: http port: 8080 targetPort: http selector: app: frontend type: ClusterIP </code></pre> <p>I use the default NGINX ingress controller of minikube. And my <code>ingress.yaml</code> looks like this:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: example.com http: paths: - path: / pathType: Prefix backend: service: name: frontend port: name: http </code></pre> <p>Obviously, I configure my <code>/etc/hosts</code> file to map my minikube ip address to <code>example.com</code></p> <p>Any help and also suggestions on best practices/improvements on the structure of the yaml files is welcome!</p>
raweber
<p>I fixed it. Interestingly, it was not about Kubernetes itself. The issue was the configuration of my ingress annotations (plus my vue.config file).</p> <p>I got the needed annotations from <a href="https://gist.github.com/suprith-s-reddy/8335104439cea9f88f9687ce8b60204c" rel="nofollow noreferrer">here</a>.</p> <pre><code>kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot; nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; </code></pre> <p>For now, I fixed the issue of the invalid host name on the Vue side by simply allowing all hosts like this in the vue.config:</p> <pre><code> devServer: { allowedHosts: [ 'all', // https://webpack.js.org/configuration/dev-server/#devserverallowedhosts ], </code></pre> <p>This is obviously only for local development!</p>
raweber
<p>I noticed some of my clusters were reporting a CPUThrottlingHigh alert for metrics-server-nanny container (image: gke.gcr.io/addon-resizer:1.8.11-gke.0) in GKE. I couldn't see a way to configure this container to give it more CPU because it's automatically deployed as part of the metrics-server pod, and Google automatically resets any changes to the deployment/pod resource settings.</p> <p>So out of curiosity, I created a small kubernetes cluster in GKE (3 standard nodes) with autoscaling turned on to scale up to 5 nodes. No apps or anything installed. Then I installed the kube-prometheus monitoring stack (<a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus</a>) which includes the CPUThrottlingHigh alert. Soon after installing the monitoring stack, this same alert popped up for this container. I don't see anything in the logs of this container or the related metrics-server-nanny container.</p> <p>Also, I don't notice this same issue on AWS or Azure because while they do have a similar metrics-server pod in the kube-system namespace, they do not contain the sidecar metrics-server-nanny container in the pod.</p> <p>Has anyone seen this or something similar? Is there a way to give this thing more resources without Google overwriting config changes?</p>
pgier
<p>This is a known issue in <a href="https://github.com/kubernetes/kubernetes/issues/67577" rel="nofollow noreferrer">Kubernetes</a>: CFS quotas lead to throttling of Pods that exhibit a spiky CPU usage pattern. Because Kubernetes / GKE uses CFS to implement CPU quotas, this causes pods to get throttled even when they really aren't busy.</p> <p>Kubernetes uses CFS quotas to enforce CPU limits for the pods running an application. The Completely Fair Scheduler (CFS) is a process scheduler that handles CPU resource allocation for executing processes, based on time periods and not on available CPU power.</p> <p>We have no direct control over CFS via Kubernetes, so the only solution is to disable it. This is done via node config.</p> <p>GKE allows users to tune the kubelet configs &quot;<strong>CPUManagerPolicy</strong>&quot; and &quot;<strong>CPUCFSQuota</strong>&quot;.</p> <p>The workaround is to temporarily disable Kubernetes CFS quotas entirely (kubelet's flag <strong>--cpu-cfs-quota=false</strong>).</p> <pre><code> $ cat node-config.yaml kubeletConfig: cpuCFSQuota: false cpuManagerPolicy: static $ gcloud container clusters create --node-config=node-config.yaml </code></pre> <p>gcloud will map the fields from the YAML node config file to the newly added GKE API fields.</p>
Srividya
<p>I am trying to add my context as an <code>environment variable</code> in <code>Azure Container App</code> like below but it throws an error.</p> <pre><code>az containerapp update -n MyContainerapp -g MyResourceGroup -v ConnectionStrings:MyContext=secretref:mycontextsecretkey </code></pre> <blockquote> <p>Invalid value: &quot;ConnectionStrings:MyContext&quot;: Invalid Environment Variable Name</p> </blockquote> <p>I tried with <code>ConnectionStrings__MyContext</code> but the Asp.Net Core app does not recognize it.</p> <p>How can I add this?</p>
Ramesh
<p>This error <strong>Invalid value: &quot;ConnectionStrings:MyContext&quot;</strong>: Invalid Environment Variable Name indicates that the environment variable name you are trying to define is not supported.</p> <p>Instead of using <strong>&quot;ConnectionStrings:MyContext&quot;, use MyConnectionStrings_MyContext</strong> as your environment variable.</p> <p>You can use the below command:</p> <pre><code>az containerapp update -n MyContainerapp -g MyResourceGroup -v MyConnectionStrings_MyContext=secretref:mycontextsecretkey </code></pre> <p>Reference : <a href="https://www.mihajakovac.com/set-environment-variables-to-azure-container-app/" rel="nofollow noreferrer">Set Environment variables to Azure Container App | Miha Jakovac</a></p>
Rukmini
<p>I'm trying to get all the external IPs that the pods in Kubernetes can use. Is it possible to consult this in the console?</p>
Angel Moreno
<p>Pods have no external IP, as the nodes are responsible for communication with the Internet. You can check this diagram for more details[1].</p> <p>It seems what you're referring to here is the internal IP address range that the pods can use.</p> <p>You can get this information by navigating to <code>☰</code> &gt; <code>Kubernetes Engine</code> &gt; <code>Clusters</code>.</p> <p>Click the name of your cluster, then scroll to &quot;Networking&quot;. It will show you the &quot;Cluster pod address range (default)&quot;. You can check this documentation[2] for more details.</p> <p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#pods" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#pods</a></p> <p>[2] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ip-allocation" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ip-allocation</a></p>
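<p>If you prefer the command line, the same value can be read with gcloud (cluster name and zone are placeholders):</p> <pre><code>gcloud container clusters describe CLUSTER_NAME \
    --zone ZONE \
    --format=&quot;value(clusterIpv4Cidr)&quot;
</code></pre>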
Marvin Lucero
<p>Considering that the kubelet failed to perform some action - for instance pulling an image - the pod will go into a back-off state, for instance <code>ImagePullBackOff</code>. How do I determine when it will be retried? I understand that the back-off uses increasing time intervals for retrying, and it may eventually give up. Is there a clear algorithm so I can figure out the time to the next attempt?</p> <p>Apart from curiosity and ops convenience, it would help assess the self-healing recovery time needed.</p>
Robert Cutajar
<p>At any point, the maximum delay is 300s; that's a compiled-in constant.</p> <p>See the <a href="https://kubernetes.io/docs/concepts/containers/images/#imagepullbackoff" rel="nofollow noreferrer">common info about ImagePullBackOff</a>:</p> <blockquote> <p>The BackOff part indicates that Kubernetes will keep trying to pull the image, with an increasing back-off delay.</p> <p>Kubernetes raises the delay between each attempt until it reaches a compiled-in limit, which is 300 seconds (5 minutes)</p> </blockquote> <p>and the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restart policy</a>:</p> <blockquote> <p>After containers in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s, 40s, …), that is capped at five minutes. Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container</p> </blockquote> <p>I can't tell you more than what's in the official documentation.</p>
Ivan Ponomarev
<p>Im trying to make an ingress for the minikube dashboard using the embedded dashboard internal service.</p> <p>I enabled both <code>ingress</code> and <code>dashboard</code> minikube addons.</p> <p>I also wrote this ingress YAML file :</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: dashboard-ingress namespace: kubernetes-dashboard spec: rules: - host: dashboard.com http: paths: - path: / pathType: Prefix backend: service: name: kubernetes-dashboard port: number: 80 </code></pre> <p>My Ingress is being created well as u can see :</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE dashboard-ingress nginx dashboard.com localhost 80 15s </code></pre> <p>I edited my <code>/etc/hosts</code> to add this line : <code>127.0.0.1 dashboard.com</code>.</p> <p>Now im trying to access the dashboard trough <code>dashboard.com</code>. But it's not working.</p> <p><code>kubectl describe ingress dashboard-ingress -n kubernetes-dashboard</code> gives me this :</p> <pre><code>Name: dashboard-ingress Namespace: kubernetes-dashboard Address: localhost Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- dashboard.com / kubernetes-dashboard:80 (172.17.0.4:9090) Annotations: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 14m (x2 over 14m) nginx-ingress-controller Scheduled for sync </code></pre> <p>I don't really understand what <code>&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;</code> means, but maybe my issue comes from this.</p> <p><code>kubectl get pods -n ingress-nginx</code> result :</p> <pre><code>NAME READY STATUS RESTARTS AGE ingress-nginx-admission-create--1-8krc7 0/1 Completed 0 100m ingress-nginx-admission-patch--1-qblch 0/1 Completed 1 100m ingress-nginx-controller-5f66978484-hvk9j 1/1 Running 0 100m </code></pre> <p>Logs for nginx-controller pod :</p> <pre><code>------------------------------------------------------------------------------- NGINX Ingress controller Release: v1.0.4 Build: 9b78b6c197b48116243922170875af4aa752ee59 Repository: https://github.com/kubernetes/ingress-nginx nginx version: nginx/1.19.9 ------------------------------------------------------------------------------- W1205 19:33:42.303136 7 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
I1205 19:33:42.303261 7 main.go:221] &quot;Creating API client&quot; host=&quot;https://10.96.0.1:443&quot; I1205 19:33:42.319750 7 main.go:265] &quot;Running in Kubernetes cluster&quot; major=&quot;1&quot; minor=&quot;22&quot; git=&quot;v1.22.3&quot; state=&quot;clean&quot; commit=&quot;c92036820499fedefec0f847e2054d824aea6cd1&quot; platform=&quot;linux/amd64&quot; I1205 19:33:42.402223 7 main.go:104] &quot;SSL fake certificate created&quot; file=&quot;/etc/ingress-controller/ssl/default-fake-certificate.pem&quot; I1205 19:33:42.413477 7 ssl.go:531] &quot;loading tls certificate&quot; path=&quot;/usr/local/certificates/cert&quot; key=&quot;/usr/local/certificates/key&quot; I1205 19:33:42.420838 7 nginx.go:253] &quot;Starting NGINX Ingress controller&quot; I1205 19:33:42.424731 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-controller&quot;, UID:&quot;f2d27cc7-b103-490f-807f-18ccaa614e6b&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;664&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller I1205 19:33:42.427171 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;tcp-services&quot;, UID:&quot;e174971d-df1c-4826-85d4-194598ab1912&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;665&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services I1205 19:33:42.427195 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;udp-services&quot;, UID:&quot;0ffc7ee9-2435-4005-983d-ed41aac1c9aa&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;666&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services I1205 19:33:43.622661 7 nginx.go:295] &quot;Starting NGINX process&quot; I1205 19:33:43.622746 7 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader... 
I1205 19:33:43.623402 7 nginx.go:315] &quot;Starting validation webhook&quot; address=&quot;:8443&quot; certPath=&quot;/usr/local/certificates/cert&quot; keyPath=&quot;/usr/local/certificates/key&quot; I1205 19:33:43.623683 7 controller.go:152] &quot;Configuration changes detected, backend reload required&quot; I1205 19:33:43.643547 7 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader I1205 19:33:43.643635 7 status.go:84] &quot;New leader elected&quot; identity=&quot;ingress-nginx-controller-5f66978484-hvk9j&quot; I1205 19:33:43.691342 7 controller.go:169] &quot;Backend successfully reloaded&quot; I1205 19:33:43.691395 7 controller.go:180] &quot;Initial sync, sleeping for 1 second&quot; I1205 19:33:43.691435 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Pod&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-controller-5f66978484-hvk9j&quot;, UID:&quot;55d45c26-eda7-4b37-9b04-5491cde39fd4&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;697&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration I1205 21:06:47.402756 7 main.go:101] &quot;successfully validated configuration, accepting&quot; ingress=&quot;dashboard-ingress/kubernetes-dashboard&quot; I1205 21:06:47.408929 7 store.go:371] &quot;Found valid IngressClass&quot; ingress=&quot;kubernetes-dashboard/dashboard-ingress&quot; ingressclass=&quot;nginx&quot; I1205 21:06:47.409343 7 controller.go:152] &quot;Configuration changes detected, backend reload required&quot; I1205 21:06:47.409352 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;kubernetes-dashboard&quot;, Name:&quot;dashboard-ingress&quot;, UID:&quot;be1ebfe9-fdb3-4d0c-925b-0c206cd0ece3&quot;, APIVersion:&quot;networking.k8s.io/v1&quot;, ResourceVersion:&quot;5529&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync I1205 21:06:47.458273 7 controller.go:169] &quot;Backend successfully reloaded&quot; I1205 21:06:47.458445 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Pod&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-controller-5f66978484-hvk9j&quot;, UID:&quot;55d45c26-eda7-4b37-9b04-5491cde39fd4&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;697&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration I1205 21:07:43.654037 7 status.go:300] &quot;updating Ingress status&quot; namespace=&quot;kubernetes-dashboard&quot; ingress=&quot;dashboard-ingress&quot; currentValue=[] newValue=[{IP: Hostname:localhost Ports:[]}] I1205 21:07:43.660598 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;kubernetes-dashboard&quot;, Name:&quot;dashboard-ingress&quot;, UID:&quot;be1ebfe9-fdb3-4d0c-925b-0c206cd0ece3&quot;, APIVersion:&quot;networking.k8s.io/v1&quot;, ResourceVersion:&quot;5576&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync </code></pre> <p>Anyone has a clue on how i can solve my problem ?</p> <p>(Im using minikube v1.24.0)</p> <p>Regards,</p>
Fragan
<p>I have also faced the same issue with minikube (v1.25.1) running locally.</p> <p><code>kubectl get ingress -n kubernetes-dashboard</code></p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE dashboard-ingress nginx dashboard.com localhost 80 34m </code></pre> <p>After debugging I found this: &quot;If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP&quot;.</p> <p>Run this command:</p> <pre><code>minikube ip XXX.XXX.64.2 </code></pre> <p>Add this IP to the hosts file; after that I am able to access dashboard.com.</p>
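<p>For completeness, the two steps on the command line (the printed IP below is just an example; use whatever <code>minikube ip</code> returns, and the hostname is the one from the ingress rule):</p> <pre><code># Get the IP the ingress is reachable on
minikube ip
# e.g. 192.168.64.2

# Map the custom host to it
echo &quot;$(minikube ip) dashboard.com&quot; | sudo tee -a /etc/hosts
</code></pre>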
Naveen Aetukuri
<p>I have installed autoscaler group on my cluster (running on aws). It works fine scaling up and down. However i configured the threshold (to scale down) be 0.4 (that means that each of the worker nodes cpu requirements less then this should be down). however what happens that the autoscaler just take the bigger value from cpu or memory.</p> <p>this is my configuration</p> <pre><code> - command: - ./cluster-autoscaler - --cloud-provider=aws - --namespace=kube-system - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/{my-cluster-name} - --logtostderr=true - --scale-down-utilization-threshold=0.4 - --skip-nodes-with-local-storage=false - --skip-nodes-with-system-pods=false - --stderrthreshold=info - --v=4 </code></pre> <p>this is the log of autoscaler</p> <pre><code>I0720 07:48:48.243694 1 scale_down.go:421] Node ip-10-0-146-54.ec2.internal - memory utilization 0.674829 I0720 07:48:48.243705 1 scale_down.go:424] Node ip-10-0-146-54.ec2.internal is not suitable for removal - memory utilization too big (0.674829) I0720 07:48:48.243719 1 scale_down.go:421] Node ip-10-0-144-198.ec2.internal - cpu utilization 0.873750 I0720 07:48:48.243741 1 scale_down.go:424] Node ip-10-0-144-198.ec2.internal is not suitable for removal - cpu utilization too big (0.873750) I0720 07:48:48.243753 1 scale_down.go:421] Node ip-10-0-132-191.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.243776 1 scale_down.go:424] Node ip-10-0-132-191.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.243796 1 scale_down.go:421] Node ip-10-0-158-56.ec2.internal - memory utilization 0.756398 I0720 07:48:48.243803 1 scale_down.go:424] Node ip-10-0-158-56.ec2.internal is not suitable for removal - memory utilization too big (0.756398) I0720 07:48:48.243814 1 scale_down.go:421] Node ip-10-0-146-236.ec2.internal - memory utilization 0.471180 I0720 07:48:48.243821 1 scale_down.go:424] Node ip-10-0-146-236.ec2.internal is not suitable for removal - memory utilization too big (0.471180) I0720 07:48:48.243831 1 scale_down.go:421] Node ip-10-0-141-80.ec2.internal - cpu utilization 0.911250 I0720 07:48:48.243837 1 scale_down.go:424] Node ip-10-0-141-80.ec2.internal is not suitable for removal - cpu utilization too big (0.911250) I0720 07:48:48.243846 1 scale_down.go:421] Node ip-10-0-131-74.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.243851 1 scale_down.go:424] Node ip-10-0-131-74.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.243860 1 scale_down.go:421] Node ip-10-0-135-213.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.243865 1 scale_down.go:424] Node ip-10-0-135-213.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.243874 1 scale_down.go:421] Node ip-10-0-145-101.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.243879 1 scale_down.go:424] Node ip-10-0-145-101.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.243891 1 scale_down.go:421] Node ip-10-0-149-91.ec2.internal - cpu utilization 0.886250 I0720 07:48:48.243897 1 scale_down.go:424] Node ip-10-0-149-91.ec2.internal is not suitable for removal - cpu utilization too big (0.886250) I0720 07:48:48.243905 1 scale_down.go:421] Node ip-10-0-130-30.ec2.internal - memory utilization 0.559890 I0720 07:48:48.243913 1 scale_down.go:424] Node ip-10-0-130-30.ec2.internal is not suitable for removal - memory utilization too big (0.559890) I0720 
07:48:48.243924 1 scale_down.go:421] Node ip-10-0-145-37.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.243933 1 scale_down.go:424] Node ip-10-0-145-37.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.243943 1 scale_down.go:421] Node ip-10-0-135-59.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.243949 1 scale_down.go:424] Node ip-10-0-135-59.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.243957 1 scale_down.go:421] Node ip-10-0-145-80.ec2.internal - cpu utilization 0.898750 I0720 07:48:48.243964 1 scale_down.go:424] Node ip-10-0-145-80.ec2.internal is not suitable for removal - cpu utilization too big (0.898750) I0720 07:48:48.243975 1 scale_down.go:421] Node ip-10-0-128-31.ec2.internal - cpu utilization 0.930000 I0720 07:48:48.243981 1 scale_down.go:424] Node ip-10-0-128-31.ec2.internal is not suitable for removal - cpu utilization too big (0.930000) I0720 07:48:48.243988 1 scale_down.go:421] Node ip-10-0-150-103.ec2.internal - memory utilization 0.559890 I0720 07:48:48.244009 1 scale_down.go:424] Node ip-10-0-150-103.ec2.internal is not suitable for removal - memory utilization too big (0.559890) I0720 07:48:48.244025 1 scale_down.go:421] Node ip-10-0-138-235.ec2.internal - cpu utilization 0.855000 I0720 07:48:48.244033 1 scale_down.go:424] Node ip-10-0-138-235.ec2.internal is not suitable for removal - cpu utilization too big (0.855000) I0720 07:48:48.244044 1 scale_down.go:421] Node ip-10-0-139-155.ec2.internal - memory utilization 0.675887 I0720 07:48:48.244049 1 scale_down.go:424] Node ip-10-0-139-155.ec2.internal is not suitable for removal - memory utilization too big (0.675887) I0720 07:48:48.244059 1 scale_down.go:421] Node ip-10-0-149-95.ec2.internal - memory utilization 0.512408 I0720 07:48:48.244065 1 scale_down.go:424] Node ip-10-0-149-95.ec2.internal is not suitable for removal - memory utilization too big (0.512408) I0720 07:48:48.244073 1 scale_down.go:421] Node ip-10-0-146-35.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.244079 1 scale_down.go:424] Node ip-10-0-146-35.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.244088 1 scale_down.go:421] Node ip-10-0-148-252.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.244124 1 scale_down.go:424] Node ip-10-0-148-252.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.244141 1 scale_down.go:421] Node ip-10-0-154-233.ec2.internal - cpu utilization 0.996250 I0720 07:48:48.244149 1 scale_down.go:424] Node ip-10-0-154-233.ec2.internal is not suitable for removal - cpu utilization too big (0.996250) I0720 07:48:48.244158 1 scale_down.go:421] Node ip-10-0-157-83.ec2.internal - cpu utilization 0.961250 I0720 07:48:48.244163 1 scale_down.go:424] Node ip-10-0-157-83.ec2.internal is not suitable for removal - cpu utilization too big (0.961250) I0720 07:48:48.244175 1 scale_down.go:421] Node ip-10-0-159-144.ec2.internal - memory utilization 0.583811 I0720 07:48:48.244191 1 scale_down.go:424] Node ip-10-0-159-144.ec2.internal is not suitable for removal - memory utilization too big (0.583811) I0720 07:48:48.244200 1 scale_down.go:421] Node ip-10-0-144-12.ec2.internal - cpu utilization 0.886250 I0720 07:48:48.244205 1 scale_down.go:424] Node ip-10-0-144-12.ec2.internal is not suitable for removal - cpu utilization too big (0.886250) I0720 07:48:48.244215 1 scale_down.go:421] Node ip-10-0-156-220.ec2.internal - cpu utilization 0.836250 
I0720 07:48:48.244222 1 scale_down.go:424] Node ip-10-0-156-220.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.244233 1 scale_down.go:421] Node ip-10-0-131-80.ec2.internal - cpu utilization 0.836250 I0720 07:48:48.244242 1 scale_down.go:424] Node ip-10-0-131-80.ec2.internal is not suitable for removal - cpu utilization too big (0.836250) I0720 07:48:48.244257 1 scale_down.go:421] Node ip-10-0-140-90.ec2.internal - cpu utilization 0.955000 I0720 07:48:48.244274 1 scale_down.go:424] Node ip-10-0-140-90.ec2.internal is not suitable for removal - cpu utilization too big (0.955000) </code></pre> <p>sometimes autoscaler deciding to choose cpu utilization and sometimes it's deciding to use memory utilization. i would like to use only cpu utilization for decreasing nodes</p>
yuofvi
<p>This may not completely answer your question, but after investigating cluster-autoscaler for our own use, the following was discovered:</p> <p><strong>Scale-up:</strong></p> <ul> <li>The cluster-autoscaler obtains cluster metrics every 10 seconds to determine if an up-scale or down-scale action is a potential option.</li> <li>When one of the scaling criteria is met it is detected and the relevant infrastructure is marked for a scaling event.</li> </ul> <p><strong>Scale-down:</strong></p> <ul> <li>When a node becomes under-utilised it becomes a potential candidate for removal.</li> <li>The evaluation time period is 10 minutes; after this point, if the highlighted nodes are still under-utilised they are removed. Any pods on the surplus node are moved onto the remaining node(s).</li> </ul> <p>Perhaps the decision to use either CPU or memory to scale down is based on which metric meets the threshold for the 10-minute time period first.</p> <p>This page may be useful but I couldn't see anything which answers your direct query: <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work</a></p> <p>Sorry, hope this is somewhat helpful :)</p>
dinho
<p>I deployed K8S cluster on AWS EKS (nodegroup) with 3 nodes. I'd like to see the pod CIDR for each node but this command returns empty: <code>$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'</code>. Why doesn't it have CIDR in the configuration?</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION ip-10-0-1-193.ap-southeast-2.compute.internal Ready &lt;none&gt; 94d v1.21.5-eks-bc4871b ip-10-0-2-66.ap-southeast-2.compute.internal Ready &lt;none&gt; 22m v1.21.5-eks-bc4871b ip-10-0-2-96.ap-southeast-2.compute.internal Ready &lt;none&gt; 24m v1.21.5-eks-bc4871b </code></pre> <p>Below is one of the node info.</p> <pre><code>$ kubectl describe node ip-10-0-1-193.ap-southeast-2.compute.internal Name: ip-10-0-1-193.ap-southeast-2.compute.internal Roles: &lt;none&gt; Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=t3.large beta.kubernetes.io/os=linux eks.amazonaws.com/capacityType=ON_DEMAND eks.amazonaws.com/nodegroup=elk eks.amazonaws.com/nodegroup-image=ami-00c56588b2d911d26 failure-domain.beta.kubernetes.io/region=ap-southeast-2 failure-domain.beta.kubernetes.io/zone=ap-southeast-2a kubernetes.io/arch=amd64 kubernetes.io/hostname=ip-10-0-1-193.ap-southeast-2.compute.internal kubernetes.io/os=linux node.kubernetes.io/instance-type=t3.large topology.ebs.csi.aws.com/zone=ap-southeast-2a topology.kubernetes.io/region=ap-southeast-2 topology.kubernetes.io/zone=ap-southeast-2a Annotations: csi.volume.kubernetes.io/nodeid: {&quot;ebs.csi.aws.com&quot;:&quot;i-0da5d02f6c203fe6b&quot;} node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 19 Nov 2021 16:04:37 +1100 Taints: &lt;none&gt; Unschedulable: false Lease: HolderIdentity: ip-10-0-1-193.ap-southeast-2.compute.internal AcquireTime: &lt;unset&gt; RenewTime: Mon, 21 Feb 2022 20:39:23 +1100 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletReady kubelet is posting ready status Addresses: InternalIP: 10.0.1.193 Hostname: ip-10-0-1-193.ap-southeast-2.compute.internal InternalDNS: ip-10-0-1-193.ap-southeast-2.compute.internal Capacity: attachable-volumes-aws-ebs: 25 cpu: 2 ephemeral-storage: 20959212Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8047100Ki pods: 35 Allocatable: attachable-volumes-aws-ebs: 25 cpu: 1930m ephemeral-storage: 18242267924 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7289340Ki pods: 35 System Info: Machine ID: ec29e60ae2a5ed86515b0b6e7fe39341 System UUID: ec29e60a-e2a5-ed86-515b-0b6e7fe39341 Boot ID: f75bc84f-fbd5-4414-87c8-669a8b4e3c62 Kernel Version: 5.4.149-73.259.amzn2.x86_64 OS Image: Amazon Linux 2 Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.7 Kubelet Version: v1.21.5-eks-bc4871b Kube-Proxy Version: v1.21.5-eks-bc4871b ProviderID: aws:///ap-southeast-2a/i-0da5d02f6c203fe6b Non-terminated Pods: (15 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age 
--------- ---- ------------ ---------- --------------- ------------- --- cert-manager cert-manager-68ff46b886-ndnm8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d cert-manager cert-manager-cainjector-7cdbb9c945-bzfx2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d cert-manager cert-manager-webhook-58d45d56b8-2mr76 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d default elk-es-node-1 1 (51%) 100m (5%) 4Gi (57%) 50Mi (0%) 32m default my-nginx-5b56ccd65f-sndqv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18m elastic-system elastic-operator-0 100m (5%) 1 (51%) 150Mi (2%) 512Mi (7%) 89d kube-system aws-load-balancer-controller-9c59c86d8-86ld2 100m (5%) 200m (10%) 200Mi (2%) 500Mi (7%) 89d kube-system aws-node-mhqp6 10m (0%) 0 (0%) 0 (0%) 0 (0%) 94d kube-system cluster-autoscaler-76fd4db4c-j59vm 100m (5%) 100m (5%) 600Mi (8%) 600Mi (8%) 89d kube-system coredns-68f7974869-2x4qc 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 89d kube-system coredns-68f7974869-wfhzq 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 89d kube-system ebs-csi-controller-7584b68c57-ksvkc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d kube-system ebs-csi-controller-7584b68c57-rkbq4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d kube-system ebs-csi-node-nxfkz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 94d kube-system kube-proxy-zcqg4 100m (5%) 0 (0%) 0 (0%) 0 (0%) 94d Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1610m (83%) 1400m (72%) memory 5186Mi (72%) 2002Mi (28%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) attachable-volumes-aws-ebs 0 0 Events: &lt;none&gt; </code></pre>
Joey Yi Zhao
<p>From the kubelet documentation I can see that it is only being used for standalone configuration: <a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/</a></p> <p><a href="https://i.stack.imgur.com/VegKe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VegKe.png" alt="enter image description here" /></a></p>
Yuvraj Shekhawat
<p>I'm trying to migrate my Nextcloud instance to a Kubernetes cluster. I've successfully deployed a Nextcloud instance using <strong>openEBS-cStor</strong> storage. Before I can &quot;kubectl cp&quot; my old files to the cluster, I need to put Nextcloud in maintenance mode.</p> <p>This is what I've tried so far:</p> <ul> <li>Shell access to the pod</li> <li>Navigate to the folder</li> <li>Run the OCC command to put Nextcloud in maintenance mode</li> </ul> <p>These are the commands I used for the OCC way:</p> <pre class="lang-sh prettyprint-override"><code>kubectl exec --stdin --tty -n nextcloud nextcloud-7ff9cf449d-rtlxh -- /bin/bash su -c 'php occ maintenance:mode --on' www-data # This account is currently not available. </code></pre> <p>Any tips on how to put Nextcloud in maintenance mode would be appreciated!</p>
Freek
<p>The <code>su</code> command fails because there is no shell associated with the <code>www-data</code> user.</p> <p>What worked for me is explicitly specifying the shell in the <code>su</code> command:</p> <pre class="lang-sh prettyprint-override"><code>su -s /bin/bash www-data -c &quot;php occ maintenance:mode --on&quot; </code></pre>
Michael
<p>I set up Docker for Desktop (Windows) and enabled Kubernetes in the gui. I am behind a Proxy and added .internal to the no_proxy environmental variable. <br> kubectl config get-contexts shows that I am in the docker-desktop context. <br> kubectl config view shows the following <strong>config</strong>:</p> <pre><code>clusters: - cluster: certificate-authority-data: DATA+OMITTED server: https://kubernetes.docker.internal:6443 name: docker-desktop contexts: - context: cluster: docker-desktop user: docker-desktop name: docker-desktop current-context: docker-desktop kind: Config preferences: {} users: - name: docker-desktop user: client-certificate-data: REDACTED client-key-data: REDACTED </code></pre> <p>Now whenever I try to run a command like kubectl cluster-info or kubectl get pod, the following <strong>Error-Message</strong> is shown: <br><br> <strong>Unable to connect to the server: dial tcp: lookup kubernetes.docker.internal on 160.50.250.20:53: dial udp 160.50.250.20:53: connect: network is unreachable</strong></p>
Daniel_Hafe
<p>As a solution I edited the config file, which is at</p> <pre><code>~/.kube/config </code></pre> <p>and replaced &quot;kubernetes.docker.internal&quot; with &quot;localhost&quot;.</p>
Daniel_Hafe
<p>I have made an nginx deployment which will be tagged by a ClusterIP service via a selector. Then I entered a new pod that is not related to that deployment nor service. And from within that pod I try to ping the IP of the ClusterIP service, hoping it would reach the nginx deployment, but it's not receiving the ping response.</p> <p>The nginx deployment I made was with this manifest.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-nginx spec: selector: matchLabels: run: my-nginx replicas: 1 template: metadata: labels: run: my-nginx spec: containers: - name: my-nginx image: nginx ports: - containerPort: 80 </code></pre> <p>Then, the service I created was with this manifest:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: basicping-service labels: run: my-nginx spec: ports: - port: 80 protocol: TCP selector: run: my-nginx </code></pre> <p>I made sure the service got created by running <code>kubectl get svc</code> and it did; the IP is <code>10.98.91.185</code></p> <p>And then I created a new pod completely unrelated to this deployment &amp; service.</p> <p><code>kubectl run -it --rm --image=ubuntu bash</code></p> <p>From within it, I pinged a sandbox server called <code>pingtest.net</code> just to see if it was able to send requests and receive responses, and it did.</p> <p>So finally, I tried pinging the <code>basicping-service</code> created previously by pinging the IP of the service, which I did by running <code>ping 10.98.91.185</code></p> <p>And here is the problem. It does send pings but doesn't receive the responses back, even after several minutes.</p> <p>It was my understanding that the ping should have received a response. But is my understanding of services incorrect? Or should it have worked but there is an error?</p> <p>Just for more documentation, the my-nginx deployment is running, and the pod as well. And there seems to be nothing wrong with the nginx running in it. I checked this by running <code>kubectl describe</code> on the deploy &amp; pod, and also by checking the pod's logs; it's running nginx correctly, apparently. Also, after running <code>kubectl describe svc basicping-service</code> it does show the nginx pod's IP address with port 80 as the endpoint.</p>
Eugenio.Gastelum96
<p>ping doesn't work with a service's cluster IP, as it is a virtual IP. You should be able to ping a specific pod, but not a service.</p> <p>ping sends packets using the very-low-level ICMP protocol, but Nginx serves HTTP which uses the TCP protocol instead.</p>
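<p>To verify connectivity, test with an HTTP request instead of ICMP. A quick sketch (assuming the service lives in the <code>default</code> namespace; the ClusterIP is the one from the question):</p> <pre><code>kubectl run -it --rm --image=busybox test-client -- sh
# from inside the pod:
wget -qO- http://basicping-service.default.svc.cluster.local
# or, using the ClusterIP directly:
wget -qO- http://10.98.91.185
</code></pre> <p>If these return the nginx welcome page, the service and its endpoint are working as intended.</p>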
Siegfred V.
<p>I'm learning the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/" rel="nofollow noreferrer">k8s Scheduling Framework</a>, scheduling cycle phase. I can't catch an actual difference between filters &quot;PreFilter&quot;, &quot;Filter&quot; and &quot;PostFilter&quot;. What is the real difference between them, what is the real work on a simple example?</p> <ol> <li>PreFilter -These plugins are used to pre-process info about the Pod... ex: If the POD has too big memory request (more than the max of the nodes in our cluster) then this plugin returns an error and, the scheduling cycle is aborted. Is it right?</li> <li>Filter - the main work of the scheduler. These plugins are used to filter out nodes that don't satisfy the Pod's scheduling constraints. So, this plugin about to work with node filter, is it right? ex: If the POD has some nodeSelector, then this plugin will filter out all nodes that don't satisfy the nodeSelector. Is it right?</li> <li>PostFilter - These plugins are called after filter phase, but only when no feasible nodes were found for the pod. ex: If we have some POD with some nodeSelector, but there is not enough some resources (ex. memory) on that nodes, then .... we run next plugins on this scheduling cycle (score...). I'm little confused, what is the real job of this plugin?</li> </ol> <p><a href="https://i.stack.imgur.com/QbfZE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QbfZE.jpg" alt="enter image description here" /></a></p>
Kirill K
<p>kube-scheduler selects a node for the pod in a 2-step operation:</p> <ul> <li><p>Filtering</p> </li> <li><p>Scoring</p> </li> </ul> <p>As described in the doc <a href="https://www.kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/" rel="nofollow noreferrer">Kubernetes scheduler</a> written by Yunkun, the scheduler inserts the pods to be scheduled into a queue through an informer. These pods are cyclically popped out from the queue and pushed into a schedule pipeline.</p> <p>The schedule pipeline is divided into the schedule thread, wait thread, and bind thread.</p> <ul> <li>The schedule thread is divided into the pre-filter, filter, post-filter, score, and reserve phases.</li> </ul> <p>The filter phase selects the nodes consistent with the pod spec. The score phase scores and sorts the selected nodes. The reserve phase puts the pod into the node cache of the best-ranked node, indicating that the pod is assigned to this node, so that the next pod waiting to be scheduled already sees the previously assigned pod when that node is filtered and scored.</p> <p>Pods are scheduled one by one in the schedule thread only, and are processed asynchronously and in parallel in the wait thread and bind thread.</p> <ul> <li><p><strong>PreFilter</strong> preprocesses pod-related requests, such as pod cache requests.</p> </li> <li><p><strong>Filter</strong> allows you to add custom filters. For example, GPU share is implemented by a custom filter.</p> </li> <li><p><strong>PostFilter</strong> is used to process logs and metrics or preprocess data before the score phase. You can configure cache plugins through PostFilter.</p> </li> </ul> <p>The components that implement the extension points of the Kubernetes scheduler are called <strong>Plugins</strong>.</p> <p>You can refer to the doc <a href="https://www.alibabacloud.com/blog/getting-started-with-kubernetes-%7C-scheduling-process-and-scheduler-algorithms_596299" rel="nofollow noreferrer">filters</a> authored by Wang Menghai for more information.</p>
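<p>For a concrete picture of where these extension points sit, here is a minimal sketch of a <code>KubeSchedulerConfiguration</code> that enables plugins per phase (the <code>apiVersion</code> and the set of available plugin names depend on your Kubernetes version):</p> <pre><code>apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      preFilter:
        enabled:
          - name: NodeResourcesFit      # pre-computes the pod's resource requests
      filter:
        enabled:
          - name: NodeAffinity          # drops nodes that don't satisfy the pod's selector/affinity
      postFilter:
        enabled:
          - name: DefaultPreemption     # runs only when no node survived the Filter phase
</code></pre>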
Srividya
<p>So I have a deployment job that runs some kubectl commands. A configuration file is modified to use the latest docker image SHA, and I run these commands to deploy it out:</p> <pre><code>kubectl apply -f myconfig-file.yaml #we have to scale up temporarily due to reasons beyond the purview of this question kubectl scale -f myconfig-file.yaml --replicas=4 #* wait a minute or so * sleep 60 kubectl scale -f myconfig-file.yaml --replicas=2 </code></pre> <p>Apply correctly updates the replicationcontroller definition on Google Cloud to be pointed at the latest image, but the original containers still remain. Scaling up DOES create containers with the correct image, but once I scale down, it removes the newest containers, leaving behind the old containers with the old image.</p> <p>I have confirmed that:</p> <ol> <li>The new containers with their new image are working as expected.</li> <li>I ended up doing the deployment manually and manually removed the old containers, (and k8s correctly created two new containers with the latest image) and when I scaled down, the new containers with the new images stuck around. My application worked as expected.</li> </ol> <p>My yaml file in question:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: my-app-cluster spec: replicas: 2 template: metadata: labels: run: my-app-cluster spec: containers: - image: mySuperCoolImage@sha256:TheLatestShaHere imagePullPolicy: Always name: my-app-cluster ports: - containerPort: 8008 protocol: TCP terminationGracePeriodSeconds: 30 </code></pre> <p>I'm using the Google Cloud K8s FWIW. Do I need to do something in the YAML file to instruct k8s to destroy the old instances?</p>
Another Stackoverflow User
<p>So it looks like the majority of my problem stems from the fact that I'm using a <code>ReplicationController</code> instead of a more supported <code>Deployment</code> or a <code>ReplicaSet</code>. Unfortunately, at this time, I'll need to consult with my team on the best way to migrate to that format since there are some considerations we have.</p> <p>In the meantime, I fixed the issue with this little hack:</p> <pre><code>oldPods=($(kubectl get pods | grep myPodType | perl -wnE'say /.*-myPodType-\w*/g')) #do the apply and scale thing like listed above #then delete for item in ${oldPods[@]}; do kubectl delete pod $item; done </code></pre>
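<p>For reference, a minimal sketch of how the same workload could look as a <code>Deployment</code> (same labels and image as the ReplicationController above; adjust as needed). With a Deployment, <code>kubectl apply</code> triggers a rolling update, so the old pods are replaced automatically instead of lingering:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-app-cluster
  template:
    metadata:
      labels:
        run: my-app-cluster
    spec:
      containers:
      - image: mySuperCoolImage@sha256:TheLatestShaHere
        imagePullPolicy: Always
        name: my-app-cluster
        ports:
        - containerPort: 8008
          protocol: TCP
      terminationGracePeriodSeconds: 30
</code></pre>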
Another Stackoverflow User
<p>Consider a pod, one of whose containers mounts a PVC that points to a persistent volume (using an NFS storage class). All of it is running on Google Kubernetes Engine. Is there a way to inspect the content of the persistent volume via the Google Cloud Console?</p>
brandizzi
<p>You can check what's inside the Persistent Disk by SSHing onto the node it is attached to, since a Persistent Disk must be attached to an instance before its contents can be accessed.</p> <p>Or you can check this <a href="https://cloud.google.com/filestore/docs/csi-driver" rel="nofollow noreferrer">documentation</a> if you are using Filestore instances as Persistent Volumes.</p>
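<p>If SSH access is inconvenient, a quick alternative is to list the volume's contents from inside the pod that mounts the claim (the pod name, namespace and mount path below are placeholders):</p> <pre><code>kubectl exec -it my-pod -n my-namespace -- ls -la /path/where/the/volume/is/mounted
</code></pre>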
Siegfred V.
<p>I am currently working with Apache Pulsar, installed from a helm chart on a local Minikube cluster. The install goes just fine and Apache Pulsar runs well. However, whenever I shutdown/restart my laptop, I can never get the pods all running again. I always get the <code>CrashLoopBackOff</code> status. I try and restart the Pulsar cluster using the following command upon restarting my machine (<code>minikube start</code>):</p> <pre><code>xyz-MBP:~ xyz$ minikube start 😄 minikube v1.23.2 on Darwin 11.4 🆕 Kubernetes 1.22.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.2 ✨ Using the docker driver based on existing profile 👍 Starting control plane node minikube in cluster minikube 🚜 Pulling base image ... 🔄 Restarting existing docker container for &quot;minikube&quot; ... 🐳 Preparing Kubernetes v1.19.0 on Docker 20.10.8 ... 🔎 Verifying Kubernetes components... ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 ▪ Using image kubernetesui/dashboard:v2.3.1 ▪ Using image kubernetesui/metrics-scraper:v1.0.7 🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard ❗ /usr/local/bin/kubectl is version 1.22.0, which may have incompatibilites with Kubernetes 1.19.0. ▪ Want kubectl v1.19.0? Try 'minikube kubectl -- get pods -A' 🏄 Done! kubectl is now configured to use &quot;minikube&quot; cluster and &quot;default&quot; namespace by default </code></pre> <p>Now, it looks like it started okay but then when I go to query the status of the pods sometime later, I get the following:</p> <pre><code>xyz-MBP:pulsar xyz$ kubectl get pods -n pulsar NAME READY STATUS RESTARTS AGE pulsar-mini-bookie-0 0/1 CrashLoopBackOff 8 25h pulsar-mini-bookie-init-kqx6j 0/1 Completed 0 25h pulsar-mini-broker-0 0/1 CrashLoopBackOff 8 25h pulsar-mini-grafana-555cf54cf-jl5xp 1/1 Running 1 25h pulsar-mini-prometheus-5556dbb8b8-k5v2v 1/1 Running 1 25h pulsar-mini-proxy-0 0/1 Init:1/2 1 25h pulsar-mini-pulsar-init-h78xk 0/1 Completed 0 25h pulsar-mini-pulsar-manager-6c6889dff-r6tmk 1/1 Running 1 25h pulsar-mini-toolset-0 1/1 Running 1 25h pulsar-mini-zookeeper-0 1/1 Running 1 25h </code></pre> <p>The mini-proxy never gets out of the init stage, and the bookie and broker keep retrying and instantly going into <code>CrashLoopBackOff</code>. Then, when digging into the logs for the Bookie pod I see the following unfamiliar exception:</p> <pre><code>01:15:10.164 [main] ERROR org.apache.bookkeeper.bookie.Bookie - Cookie for this bookie is not stored in metadata store. Bookie failing to come up 01:15:10.170 [main] ERROR org.apache.bookkeeper.server.Main - Failed to build bookie server </code></pre> <p>Additionally, I get an exception from the broker pod:</p> <pre><code>01:21:44.733 [main-EventThread] ERROR org.apache.bookkeeper.proto.PerChannelBookieClient - Cannot connect to pulsar-mini-bookie-0.pulsar-mini-bookie.pulsar.svc.cluster.local:3181 as endpopint resolution failed </code></pre> <p>There is more to the above error but didn't want to dump the entire log here. The above error is the first one that shows up, I believe anything that follows is just fallout from the above... let me know if I'm mistaken about that!</p>
Snoop
<p><strong>Solution:</strong></p> <ol> <li>You can check whether your application is exiting due to the application crashing, which would explain the issue. Running the following command should give you a sense of whether that is happening from the container logs:</li> </ol> <pre><code>kubectl logs pod-name --all-containers=true</code></pre> <p>If you have Stackdriver logging enabled, the following filters can be used to obtain the container logs:</p> <p><strong>Stackdriver V1:</strong></p> <pre><code>resource.type=&quot;container&quot;
resource.labels.pod_id=&quot;$POD_NAME&quot;</code></pre> <p><strong>Stackdriver V2:</strong></p> <pre><code>resource.type=&quot;k8s_container&quot;
resource.labels.pod_name=&quot;$POD_NAME&quot;</code></pre> <ol start="2"> <li>You can check whether your liveness probes are failing and causing the restarts. These show up in the pod events. Running the following command should give you a sense of whether that is happening:</li> </ol> <pre><code>kubectl describe pod &quot;$POD_NAME&quot;</code></pre> <p>If you have Stackdriver logging, the following filters can be used to get the pod event logs:</p> <p><strong>Stackdriver V1:</strong></p> <pre><code>resource.type=&quot;gke_cluster&quot;
logName=&quot;projects/$PROJECT_ID/logs/events&quot;
jsonPayload.reason=&quot;Unhealthy&quot;
jsonPayload.involvedObject.name=&quot;$POD_NAME&quot;</code></pre> <p><strong>Stackdriver V2:</strong></p> <pre><code>resource.type=&quot;k8s_pod&quot;
logName=&quot;projects/$PROJECT_ID/logs/events&quot;
jsonPayload.reason=&quot;Unhealthy&quot;
resource.labels.pod_name=&quot;$POD_NAME&quot;</code></pre> <p><strong>Root cause of this issue:</strong> the pod is simply stuck in a loop of starting and crashing.</p>
Tatikonda vamsikrishna
<p>I wonder if someone can help me.</p> <p>Kubernetes (K8s 1.21 platform eks.4) is Terminating running pods without error or reason. The only thing I can see in the events is:</p> <pre><code>7m47s Normal Killing pod/test-job-6c9fn-qbzkb Stopping container test-job </code></pre> <p>Because I've set up an anti-affinity rule, only one pod can run in one node. So every time a pod gets killed, autoscaler brings up another node.</p> <p>These are the cluster-autoscaler logs</p> <pre><code>I0208 19:10:42.336476 1 cluster.go:148] Fast evaluation: ip-10-4-127-38.us-west-2.compute.internal for removal I0208 19:10:42.336484 1 cluster.go:169] Fast evaluation: node ip-10-4-127-38.us-west-2.compute.internal cannot be removed: pod annotated as not safe to evict present: test-job-6c9fn-qbzkb I0208 19:10:42.336493 1 scale_down.go:612] 1 nodes found to be unremovable in simulation, will re-check them at 2022-02-08 19:15:42.335305238 +0000 UTC m=+20363.008486077 I0208 19:15:04.360683 1 klogx.go:86] Pod default/test-job-6c9fn-8wx2q is unschedulable I0208 19:15:04.360719 1 scale_up.go:376] Upcoming 0 nodes I0208 19:15:04.360861 1 scale_up.go:300] Pod test-job-6c9fn-8wx2q can't be scheduled on eks-ec2-8xlarge-84bf6ad9-ca4a-4293-a3e8-95bef28db16d, predicate checking error: node(s) didn't match Pod's node affinity/selector; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity/selector; debugInfo= I0208 19:15:04.360901 1 scale_up.go:449] No pod can fit to eks-ec2-8xlarge-84bf6ad9-ca4a-4293-a3e8-95bef28db16d I0208 19:15:04.361035 1 scale_up.go:300] Pod test-job-6c9fn-8wx2q can't be scheduled on eks-ec2-inf1-90bf6ad9-caf7-74e8-c930-b80f785bc743, predicate checking error: node(s) didn't match Pod's node affinity/selector; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity/selector; debugInfo= I0208 19:15:04.361062 1 scale_up.go:449] No pod can fit to eks-ec2-inf1-90bf6ad9-caf7-74e8-c930-b80f785bc743 I0208 19:15:04.361162 1 scale_up.go:300] Pod test-job-6c9fn-8wx2q can't be scheduled on eks-ec2-large-62bf6ad9-ccd4-6e03-5c78-c3366d387d50, predicate checking error: node(s) didn't match Pod's node affinity/selector; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity/selector; debugInfo= I0208 19:15:04.361194 1 scale_up.go:449] No pod can fit to eks-ec2-large-62bf6ad9-ccd4-6e03-5c78-c3366d387d50 I0208 19:15:04.361512 1 scale_up.go:412] Skipping node group eks-eks-on-demand-10bf6ad9-c978-9b35-c7fc-cdb9977b27cb - max size reached I0208 19:15:04.361675 1 scale_up.go:300] Pod test-job-6c9fn-8wx2q can't be scheduled on eks-ec2-test-58bf6d43-13e8-9acc-5173-b8c5054a56da, predicate checking error: node(s) didn't match Pod's node affinity/selector; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity/selector; debugInfo= I0208 19:15:04.361711 1 scale_up.go:449] No pod can fit to eks-ec2-test-58bf6d43-13e8-9acc-5173-b8c5054a56da I0208 19:15:04.361723 1 waste.go:57] Expanding Node Group eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f would waste 75.00% CPU, 86.92% Memory, 80.96% Blended I0208 19:15:04.361747 1 scale_up.go:468] Best option to resize: eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f I0208 19:15:04.361762 1 scale_up.go:472] Estimated 1 nodes needed in eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f I0208 19:15:04.361780 1 scale_up.go:586] Final scale-up plan: [{eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f 0-&gt;1 (max: 2)}] I0208 19:15:04.361801 1 scale_up.go:675] Scale-up: setting group 
eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f size to 1 I0208 19:15:04.361826 1 auto_scaling_groups.go:219] Setting asg eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f size to 1 I0208 19:15:04.362154 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;kube-system&quot;, Name:&quot;cluster-autoscaler-status&quot;, UID:&quot;81b80048-920c-4bf1-b2c0-ad5d067d74f4&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;359476&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'ScaledUpGroup' Scale-up: setting group eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f size to 1 I0208 19:15:04.374021 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;kube-system&quot;, Name:&quot;cluster-autoscaler-status&quot;, UID:&quot;81b80048-920c-4bf1-b2c0-ad5d067d74f4&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;359476&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'ScaledUpGroup' Scale-up: setting group eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f size to 1 I0208 19:15:04.541658 1 eventing_scale_up_processor.go:47] Skipping event processing for unschedulable pods since there is a ScaleUp attempt this loop I0208 19:15:04.541859 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:&quot;Pod&quot;, Namespace:&quot;default&quot;, Name:&quot;test-job-6c9fn-8wx2q&quot;, UID:&quot;67beba1d-4f52-4860-91af-89e5852e4cad&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;359507&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{eks-ec2-xlarge-84bf6ad9-cb6d-7e24-7eb5-a00c369fd82f 0-&gt;1 (max: 2)}] </code></pre> <p>I'm running an EKS cluster with cluster-autoscaler and keda's aws-sqs trigger. I've set up an autoscaling node group with SPOT instances.</p> <p>For testing purposes I've defined an ScaledJob consisting on a container with a simple python script, looping through time.sleep. The pod should run for 30 mins. But it never gets so far. 
In general it ends after 15 mins.</p> <pre><code>{ &quot;apiVersion&quot;: &quot;keda.sh/v1alpha1&quot;, &quot;kind&quot;: &quot;ScaledJob&quot;, &quot;metadata&quot;: { &quot;name&quot;: id, &quot;labels&quot;: {&quot;myjobidentifier&quot;: id}, &quot;annotations&quot;: { &quot;cluster-autoscaler.kubernetes.io/safe-to-evict&quot;: &quot;false&quot; }, }, &quot;spec&quot;: { &quot;jobTargetRef&quot;: { &quot;parallelism&quot;: 1, &quot;completions&quot;: 1, &quot;backoffLimit&quot;: 0, &quot;template&quot;: { &quot;metadata&quot;: { &quot;labels&quot;: {&quot;job-type&quot;: id}, &quot;annotations&quot;: { &quot;cluster-autoscaler.kubernetes.io/safe-to-evict&quot;: &quot;false&quot; }, }, &quot;spec&quot;: { &quot;affinity&quot;: { &quot;nodeAffinity&quot;: { &quot;requiredDuringSchedulingIgnoredDuringExecution&quot;: { &quot;nodeSelectorTerms&quot;: [ { &quot;matchExpressions&quot;: [ { &quot;key&quot;: &quot;eks.amazonaws.com/nodegroup&quot;, &quot;operator&quot;: &quot;In&quot;, &quot;values&quot;: group_size, } ] } ] } }, &quot;podAntiAffinity&quot;: { &quot;requiredDuringSchedulingIgnoredDuringExecution&quot;: [ { &quot;labelSelector&quot;: { &quot;matchExpressions&quot;: [ { &quot;key&quot;: &quot;job-type&quot;, &quot;operator&quot;: &quot;In&quot;, &quot;values&quot;: [id], } ] }, &quot;topologyKey&quot;: &quot;kubernetes.io/hostname&quot;, } ] }, }, &quot;serviceAccountName&quot;: service_account.service_account_name, &quot;containers&quot;: [ { &quot;name&quot;: id, &quot;image&quot;: image.image_uri, &quot;imagePullPolicy&quot;: &quot;IfNotPresent&quot;, &quot;env&quot;: envs, &quot;resources&quot;: { &quot;requests&quot;: requests, }, &quot;volumeMounts&quot;: [ { &quot;mountPath&quot;: &quot;/tmp&quot;, &quot;name&quot;: &quot;tmp-volume&quot;, } ], } ], &quot;volumes&quot;: [ {&quot;name&quot;: &quot;tmp-volume&quot;, &quot;emptyDir&quot;: {}} ], &quot;restartPolicy&quot;: &quot;Never&quot;, }, }, }, &quot;pollingInterval&quot;: 30, &quot;successfulJobsHistoryLimit&quot;: 10, &quot;failedJobsHistoryLimit&quot;: 100, &quot;maxReplicaCount&quot;: 30, &quot;rolloutStrategy&quot;: &quot;default&quot;, &quot;scalingStrategy&quot;: {&quot;strategy&quot;: &quot;default&quot;}, &quot;triggers&quot;: [ { &quot;type&quot;: &quot;aws-sqs-queue&quot;, &quot;metadata&quot;: { &quot;queueURL&quot;: queue.queue_url, &quot;queueLength&quot;: &quot;1&quot;, &quot;awsRegion&quot;: region, &quot;identityOwner&quot;: &quot;operator&quot;, }, } ], }, } </code></pre> <p>I know this is not a problem of resources (dummy code and large instances), nor a problem of eviction (it's clear from the logs that the pod is safe from eviction), but I really don't know how to troubleshoot this anymore.</p> <p>thanks a lot!!</p> <p>EDIT:</p> <p>Same behavior with On-Demand and SPOT instances.</p> <p>EDIT 2:</p> <p>I added the aws node termination handler, it seems that now I'm seeing other events:</p> <pre><code>ip-10-4-126-234.us-west-2.compute.internal.16d223107de38c5f NodeNotSchedulable Node ip-10-4-126-234.us-west-2.compute.internal status is now: NodeNotSchedulable test-job-p85f2-txflr.16d2230ea91217a9 FailedScheduling 0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) were unschedulable. </code></pre> <p>If I check the scaling group activity:</p> <pre><code>Instance i-03d27a1cf341405e1 was taken out of service in response to a user request, shrinking the capacity from 1 to 0. </code></pre>
Raúl
<p>I also had a similar problem, caused by the HPA's scale-in.</p> <p>When you don't set the <strong>minReplicaCount</strong> value, it defaults to 0. The pod is then terminated when the HPA scales in.</p> <p>I recommend setting <strong>minReplicaCount</strong> to the value you want (e.g. 1).</p>
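<p>For illustration, a minimal sketch of where the field sits on a KEDA <code>ScaledObject</code> (which is backed by an HPA); all names, the queue URL and the region below are placeholders. If you use a <code>ScaledJob</code> as in the question, check your KEDA version's docs for the equivalent setting:</p> <pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject            # placeholder
spec:
  scaleTargetRef:
    name: my-deployment            # placeholder
  minReplicaCount: 1               # keep at least one replica instead of scaling to 0
  maxReplicaCount: 30
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-west-2.amazonaws.com/123456789012/my-queue   # placeholder
        queueLength: &quot;1&quot;
        awsRegion: us-west-2
</code></pre>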
Vujadeyoon
<p>I have tried to deploy Kafka in k8s, so I need to persist its volume with hostpath, but when the volume configuration adds to the deployment file, this error shows in Kafka pod, and the pod state becomes Crashloopbackoff:</p> <pre><code>mkdir: cannot create directory ‘/bitnami/config’: Permission denied </code></pre> <p>I think I have to change permission so the pod can create this file.</p> <p>Deployment.yml:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: containers: - env: - name: ALLOW_PLAINTEXT_LISTENER value: "yes" - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper-service:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://:9092 image: bitnami/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 volumeMounts: - name: kafka-data readOnly: false mountPath: "/bitnami/kafka" volumes: - name: kafka-data hostPath: path: /data/kafka-data</code></pre> </div> </div> </p>
mona moghadampanah
<p>I solved the problem by changing the ownership of the path (where the pod data is mounted) on the worker servers with this command:</p> <pre><code>sudo chown -R 1001:1001 /data/kafka-data </code></pre> <p>But I don't think this solution is best practice.</p>
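<p>A less manual alternative (a sketch, assuming the Bitnami image runs as UID 1001, as the chown above suggests) is to let an init container fix the ownership of the hostPath before Kafka starts, added under <code>spec.template.spec</code> of the Deployment:</p> <pre><code>initContainers:
  - name: fix-permissions
    image: busybox
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;chown -R 1001:1001 /bitnami/kafka&quot;]
    securityContext:
      runAsUser: 0                 # run the chown as root
    volumeMounts:
      - name: kafka-data
        mountPath: /bitnami/kafka
</code></pre>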
mona moghadampanah
<p>I have 2 pools in my GKE cluster: default-pool (2 nodes) and temp-pool (3 nodes). I am running 4 pod replicas, and my requirement is that <strong>at least one pod must be scheduled on default-pool (this is mandatory)</strong>, but not all pods should be scheduled on default-pool; the other pods can be scheduled on temp-pool.</p> <p>I tried using the nodeAffinity and topologySpreadConstraints specifications, but sometimes no pod gets scheduled on default-pool.</p> <p>Is there a way to achieve this in Kubernetes?</p>
iamarunk
<p>Node affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.</p> <p>Since you need the pods not to all end up on the same node, you can use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">AntiAffinity</a> along with weighted node pool preferences using <code>preferredDuringSchedulingIgnoredDuringExecution</code> (see the sketch after the example below).</p> <p>Or try this spec with <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">topologySpreadConstraints</a>:</p> <pre><code>spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        node: temp-pool
</code></pre>
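<p>A sketch of that weighted preference, assuming the GKE-provided node label <code>cloud.google.com/gke-nodepool</code>:</p> <pre><code>affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: cloud.google.com/gke-nodepool
              operator: In
              values:
                - default-pool
</code></pre>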
Siegfred V.
<p>The metrics-server logs show &quot;Failed to scrape node&quot;:</p> <pre><code>err=&quot;GET \&quot;https://10.128.0.17:10250/stats/summary? only_cpu_and_memory=true\&quot;: bad status code \&quot;403 Forbidden\&quot;&quot; node=&quot;gke-zipydev-cluster-zipy-pool-b4bfa53a-t575&quot; I1215 10:33:03.405180 1 server.go:188] &quot;Failed probe&quot; probe=&quot;metric-storage-ready&quot; err=&quot;not metrics to serve&quot; E1215 10:33:10.513042 1 scraper.go:139] &quot;Failed to scrape node&quot; err=&quot;GET \&quot;https://10.128.0.16:10250/stats/summary? only_cpu_and_memory=true\&quot;: bad status code \&quot;403 Forbidden\&quot;&quot; node=&quot;gke-zipydev-cluster-zipy-pool-b4bfa53a-sg4t&quot; </code></pre> <p>Please help if anyone has faced the same issue.</p>
Nikhil Verma
<p>The privileges for the metrics server have not been set up correctly: the “403” error means access to the requested resource is forbidden.</p> <p>The Metrics Server requires the <strong>“CAP_NET_BIND_SERVICE”</strong> capability in order to bind to a privileged port as non-root; this applies even if you use the <strong>--secure-port</strong> flag to make Metrics Server bind to a non-privileged port. Refer to <a href="https://github.com/kubernetes-sigs/metrics-server#security-context" rel="nofollow noreferrer">Security context</a> for information.</p> <p>As described in the <a href="https://github.com/kubernetes/kubernetes/pull/105938/files" rel="nofollow noreferrer">Github link</a>, granting metrics-server the necessary permissions to access (query/read) the <em><strong>nodes/stats</strong></em> API resource is the workaround for this issue. You can grant metrics-server the necessary permissions by using the configuration file below.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - &quot;&quot;
  resources:
  - nodes/stats
  - nodes
  verbs:
  - get
  - list
</code></pre> <p><strong>NOTE:</strong> Check that your metrics-server is a recent version if you installed it manually. In order to update your metrics-server deployment, you can refer to the <a href="https://github.com/kubernetes-sigs/metrics-server/releases?page=1" rel="nofollow noreferrer">Github link</a> and select the version which suits you.</p> <p>Refer to this <a href="https://stackoverflow.com/questions/61125754/">stackpost</a> for more information about 403 forbidden errors.</p>
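<p>If the role is not already bound to the metrics-server service account, a minimal sketch of the binding (assuming the default install, where the service account is <code>metrics-server</code> in <code>kube-system</code>):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
</code></pre>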
Jyothi Kiranmayi
<p>I've deployed an Azure Kubernetes Service with the <strong>Azure AD authentication with Azure RBAC</strong> Authentication mode configured.</p> <p>I have given myself the</p> <ul> <li><code>Azure Kubernetes Service Cluster Admin Role</code></li> <li><code>Azure Kubernetes Service RBAC Admin</code></li> </ul> <p>- roles.</p> <p>And with this I can:</p> <ul> <li>List deployments</li> <li>Create deployments</li> <li>Tear down deployments</li> </ul> <p>Across all namespaces.</p> <p>However neither of these allow me to create a <code>namespace</code>, and from what I can tell no obvious other roles touch on this permission.</p> <p>For example <code>kubectl create namespace test-namespace</code> Raises:</p> <pre class="lang-bash prettyprint-override"><code>Error from server (Forbidden): namespaces is forbidden: User &quot;USER AZURE AD&quot; cannot create resource &quot;namespaces&quot; in API group &quot;&quot; at the cluster scope: User does not have access to the resource in Azure. Update role assignment to allow access. </code></pre> <p>I am aware that pulling credentials with <code>az aks get-credentials -g {Resource Group} --name {CLUSTER NAME} --admin</code> is a workaround, but this particular cluster cannot have <code>Kubertnetes local accounts</code> enabled so this is not an option.</p> <p>What can I do?</p>
Zander Fick
<p>To resolve this issue, create a <strong>custom RBAC role</strong> with the <strong><code>Microsoft.ContainerService/managedClusters/namespaces/write</code></strong> permission under your subscription:</p> <p><img src="https://i.imgur.com/fOK3d1t.png" alt="enter image description here" /></p> <p>Clone the <code>Azure Kubernetes Service RBAC Admin</code> role:</p> <p><img src="https://i.imgur.com/eUpj6Eb.png" alt="enter image description here" /></p> <p>In the permissions tab you can find <strong>NotDataActions</strong>; remove <code>Microsoft.ContainerService/managedClusters/namespaces/write</code> from <code>NotDataActions</code> and add the same permission:</p> <p><img src="https://i.imgur.com/4ilYhpq.png" alt="enter image description here" /></p> <p><img src="https://i.imgur.com/nBkUUSU.png" alt="enter image description here" /></p> <p>Once the custom role is created, assign it with a role assignment:</p> <p><img src="https://i.imgur.com/L6Avz10.png" alt="enter image description here" /></p> <p>When I run the namespace command, I get the desired output:</p> <p><code>kubectl create namespace test-namespace</code></p> <p><a href="https://i.imgur.com/px7whaJ.png" rel="nofollow noreferrer"><strong>Output</strong></a></p> <p><em><strong>Reference</strong></em>: <a href="https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#azure-kubernetes-service-rbac-admin" rel="nofollow noreferrer">Azure built-in roles - Azure RBAC | Microsoft Learn</a></p>
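<p>If you prefer the CLI to the portal, a sketch of an equivalent custom role definition (the role name and scope are placeholders):</p> <pre><code>{
  &quot;Name&quot;: &quot;AKS Namespace Writer&quot;,
  &quot;IsCustom&quot;: true,
  &quot;Description&quot;: &quot;Allows creating and updating namespaces on AKS clusters.&quot;,
  &quot;Actions&quot;: [],
  &quot;DataActions&quot;: [
    &quot;Microsoft.ContainerService/managedClusters/namespaces/write&quot;
  ],
  &quot;NotDataActions&quot;: [],
  &quot;AssignableScopes&quot;: [
    &quot;/subscriptions/&lt;subscription-id&gt;&quot;
  ]
}
</code></pre> <p>created with <code>az role definition create --role-definition ./aks-namespace-writer.json</code> and then assigned to your user at the cluster scope.</p>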
Imran
<p>I have a pod running on a node with an attached PV. My requirement is that if the pod goes down and gets rescheduled onto another node, the newly created pod on that node should use the existing PV. So what can be done?</p>
infantus
<p>If you are using a persistent volume through a claim, the PV stays bound to the claim for the claim's whole life, which means it persists across the pod's restarts and deletions. It is only removed when you delete its deployment config (the claim).</p> <p>If your case is to back up the data on the PV, then that's a different matter.</p>
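<p>A minimal sketch of what this looks like in the pod template (claim name, image and mount path are placeholders): as long as the workload keeps referencing the same claim, a pod rescheduled onto another node re-attaches the same PV, provided the volume type can be attached from that node.</p> <pre><code>containers:
  - name: app
    image: my-app:latest            # placeholder image
    volumeMounts:
      - name: data
        mountPath: /var/lib/app     # placeholder mount path
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-existing-pvc    # the claim bound to the existing PV
</code></pre>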
Manmohan Mittal
<p>Whenever you start a Kubernetes cluster at one of the big clouds (EKS at AWS, GKE at GCP, AKS at Azure, or Kubernetes at Digitalocean), you can generate a kubeconfig file from them, which grants you full access.</p> <p>It is now very nice to work with them, but I am always concerned about what I can do if someone manages to steal it. What can I do then?</p> <p>I never found a button at one of the big clouds to revoke access of the stolen kubeconfig and to regenerate a new one. Is there anything with which I can make that aspect more secure - if you have a documentation at hand, that would be appreciated.</p>
tobias
<p>In GKE on GCP, the kubeconfig file generated during cluster creation is located at <strong>$HOME/.kube/config</strong>. The kubeconfig path defaults to $HOME/.kube/config, where <strong>$HOME</strong> refers to your home directory under /home/.</p> <p><strong>1.</strong> If you want to remove a user from the kubeconfig file, use the following command:</p> <p><code>$ kubectl --kubeconfig=&lt;kubeconfig-name&gt; config unset users.&lt;name&gt;</code></p> <p><strong>2.</strong> If you want to regenerate the kubeconfig file with the previous contents, try authorizing the cluster using the command:</p> <p><code>$ gcloud container clusters get-credentials &lt;cluster-name&gt; --zone &lt;zone&gt; --project &lt;project-id&gt;</code></p> <p><strong>3.</strong> If you want to restrict access to the kubeconfig file, tighten its file permissions using the following commands:</p> <p><code>$ chmod 644 &lt;kubeconfig-file&gt;</code> - the owner can read and write the file, and all others on the system can only read it.</p> <p><code>$ chmod 640 &lt;kubeconfig-file&gt;</code> - the owner has read and write permissions, the group has read permissions, and all other users have no rights to the file.</p> <p><code>$ chmod 600 &lt;kubeconfig-file&gt;</code> - only the owner of the file has full read and write access to it. Once a file permission is set to <strong>600</strong>, no one else can access the file.</p> <p><strong>NOTE:</strong> Revoking the contents of the kubeconfig file after it has been deleted is not possible; you can regenerate the contents of the kubeconfig file only by authorizing against the cluster.</p> <p>Refer to the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="nofollow noreferrer">documentation</a> for more information.</p>
Jyothi Kiranmayi
<p>I want to isolate a pod from its deployment, so I execute the command “kubectl label pod app= --overwrite”, just like this:</p> <pre><code> ✘ yanchampion@yanchampiondeMacBook-Pro  ~  kubectl -n test1 label pod myapp-deploy-848987f4fb-wc5v2 app=newname-debug --overwrite Error from server (InternalError): Internal error occurred: replace operation does not apply: doc is missing key: /spec/containers/0/env/ : missing value </code></pre> <p>Here is the deployment pod info:</p> <pre><code> yanchampion@yanchampiondeMacBook-Pro  ~  kubectl -n test1 get pod |grep myapp-deploy myapp-deploy-848987f4fb-wc5v2 1/1 Running 0 5d8h </code></pre> <p>So what's the problem?</p> <p>I just want to isolate the pod from its deployment! Can someone tell me how to fix it?</p>
champion yan
<p>You could also try to edit or remove the label from its pod spec:</p> <p><code>kubectl edit pod myapp-deploy-848987f4fb-wc5v2</code></p>
Siegfred V.
<p>I know we can edit the PVC and change it to RWX, but there is a catch: I'm trying to do this in GKE, so for my PVC with RWO the storage class is standard, but if I edit it to RWX I also need to change the storage class to NFS.</p> <p>Is it possible to achieve this without losing the data inside the PVC?</p>
Nikhil Lingam
<p>Your existing PVC is using the standard storage class, which doesn’t allow RWX, so it’s not possible. Even if you change it in the PVC config, it’s not going to work.</p> <p>The workaround is to take a backup of the existing PV data, create a new PVC in RWX mode backed by an NFS PV, mount that to the application, and copy the backup data to the mounted volume.</p>
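<p>For illustration, a minimal sketch of the new claim, assuming the GKE Filestore CSI driver is enabled (the name, class name and size are placeholders and may differ in your cluster):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-rwx                 # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx    # Filestore-backed class; verify the name in your cluster
  resources:
    requests:
      storage: 1Ti                  # Filestore instances have a large minimum size
</code></pre>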
Manmohan Mittal
<p>I tried to configure envoy in my kubernetes cluster by following this example: <a href="https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/configuration-dynamic-filesystem" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/configuration-dynamic-filesystem</a></p> <p>My static envoy config:</p> <pre><code> node: cluster: test-cluster id: test-id dynamic_resources: cds_config: path: /var/lib/envoy/cds.yaml lds_config: path: /var/lib/envoy/lds.yaml admin: access_log_path: &quot;/dev/null&quot; address: socket_address: address: 0.0.0.0 port_value: 19000 </code></pre> <p>The dynamic config from configmap is mounted to and contains the files .</p> <p>I used a configmap to mount the config files (<code>cds.yaml</code> and <code>lds.yaml</code>) into to envoy pod (to <code>/var/lib/envoy/</code>) but unfortunately the envoy configuration doesn't change when I change the config in the configmap. The mounted config files are updated as expected.</p> <p>I can see from the logs, that envoy watches the config files:</p> <pre><code>[2021-03-01 17:50:21.063][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:47] added watch for directory: '/var/lib/envoy' file: 'cds.yaml' fd: 1 [2021-03-01 17:50:21.063][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:140] maybe finish initialize state: 1 [2021-03-01 17:50:21.063][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:149] maybe finish initialize primary init clusters empty: true [2021-03-01 17:50:21.063][1][info][config] [source/server/configuration_impl.cc:95] loading 0 listener(s) [2021-03-01 17:50:21.063][1][info][config] [source/server/configuration_impl.cc:107] loading stats configuration [2021-03-01 17:50:21.063][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:47] added watch for directory: '/var/lib/envoy' file: 'lds.yaml' fd: 1 </code></pre> <p>and once I update the configmap I also get the logs that something changed:</p> <pre><code>[2021-03-01 17:51:50.881][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:72] notification: fd: 1 mask: 80 file: ..data [2021-03-01 17:51:50.881][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:72] notification: fd: 1 mask: 80 file: ..data </code></pre> <p>but envoy doesn't reload the config.</p> <p>It seems that kubernetes updates the config files by changing a directory and envoy doesn't recognise that the config files are changed.</p> <p>Is there an easy way to fix that? I don't want to run and xDS server for my tests but hot config reload would be great for my testing 😇</p> <p>Thanks!</p>
sschoebinger
<p>It can now be solved by using <code>watched_directory</code> in <code>path_config_source</code>. <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/config_source.proto#envoy-v3-api-msg-config-core-v3-pathconfigsource" rel="nofollow noreferrer">See the documentation of watched_directory</a> for using xDS with a ConfigMap.</p>
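<p>A sketch of how the <code>dynamic_resources</code> section of the question's config could look with this (based on the linked docs; field names may vary slightly between Envoy versions). Watching the directory rather than the file lets Envoy pick up Kubernetes' symlink swap of <code>..data</code> when the ConfigMap changes:</p> <pre><code>dynamic_resources:
  cds_config:
    path_config_source:
      path: /var/lib/envoy/cds.yaml
      watched_directory:
        path: /var/lib/envoy
  lds_config:
    path_config_source:
      path: /var/lib/envoy/lds.yaml
      watched_directory:
        path: /var/lib/envoy
</code></pre>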
Alex Mercer
<p>That is my HPA. I want to start the deployment with default replicas=3</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: backend-hpa spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: backend minReplicas: 3 maxReplicas: 20 metrics: - type: Resource resource: name: memory target: type: AverageValue averageValue: 500Mi - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 70 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 1 periodSeconds: 100 scaleUp: stabilizationWindowSeconds: 60 policies: - type: Pods value: 1 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max </code></pre> <p>But it always says: <strong>ScalingLimited True TooFewReplicas the desired replica count is less than the minimum replica count</strong> and I can't understand why.</p> <pre class="lang-bash prettyprint-override"><code>➜ kg hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE prod-backend-hpa-v1 Deployment/prod-backend-v1 145881770666m/500Mi, 2%/70% 3 20 3 7m16s ➜ kd hpa Name: prod-backend-hpa-v1 Namespace: prod Labels: argocd.argoproj.io/instance=backend-prod Annotations: &lt;none&gt; CreationTimestamp: Thu, 02 Jun 2022 19:34:30 -0500 Reference: Deployment/prod-backend-v1 Metrics: ( current / target ) resource memory on pods: 145596416 / 500Mi resource cpu on pods (as a percentage of request): 1% (3m) / 70% Min replicas: 3 Max replicas: 20 Behavior: Scale Up: Stabilization Window: 60 seconds Select Policy: Max Policies: - Type: Pods Value: 1 Period: 30 seconds - Type: Percent Value: 10 Period: 60 seconds Scale Down: Stabilization Window: 300 seconds Select Policy: Max Policies: - Type: Pods Value: 1 Period: 100 seconds Deployment pods: 3 current / 3 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited True TooFewReplicas the desired replica count is less than the minimum replica count Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 9m22s horizontal-pod-autoscaler New size: 3; reason: Current number of replicas below Spec.MinReplicas ➜ kgpo NAME READY STATUS RESTARTS AGE prod-backend-v1-8dd687999-54mzp 1/1 Running 0 58m prod-backend-v1-8dd687999-nn7c2 1/1 Running 0 2d17h prod-backend-v1-8dd687999-rcxsw 1/1 Running 0 2d17h ➜ kg rs NAME DESIRED CURRENT READY AGE prod-backend-v1-566b9c8856 0 0 0 2d19h prod-backend-v1-578d699c45 0 0 0 2d19h prod-backend-v1-64859b74c9 0 0 0 2d18h prod-backend-v1-6498b4b45c 0 0 0 2d19h prod-backend-v1-656cccdc4b 0 0 0 2d19h prod-backend-v1-66cc5cf44 0 0 0 2d19h prod-backend-v1-698c7ddc7d 0 0 0 2d19h prod-backend-v1-6bdbc77f5d 0 0 0 2d19h prod-backend-v1-7486c95664 0 0 0 2d19h prod-backend-v1-774cdbdcdc 0 0 0 2d19h prod-backend-v1-8dd687999 3 3 3 2d17 </code></pre>
DmitrySemenov
<p>A horizontal pod autoscaler, defined by a <strong>HorizontalPodAutoscaler</strong> object, specifies how the system should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration.</p> <p><strong>ScalingLimited</strong> indicates that autoscaling is not allowed because a maximum or minimum replica count was reached.</p> <ol> <li>A <strong>True</strong> condition indicates that you need to raise or lower the minimum or maximum replica count in order to scale.</li> <li>A <strong>False</strong> condition indicates that the requested scaling is allowed.</li> </ol> <p>As per your use case the ScalingLimited shows status as <strong>“True”</strong> with message “desired replica count is less than the minimum replica count”. So, as a workaround you can increase minReplicas or set scaleDown policy with large periodSeconds to increase stabilization periods.</p> <p>Refer <a href="https://docs.okd.io/3.7/dev_guide/pod_autoscaling.html#creating-a-hpa" rel="nofollow noreferrer">Pod Autoscaling</a> for more information.</p>
Jyothi Kiranmayi
<p>I am simply trying to deploy this Grafana app as-is, no changes to the YAML have been made: <a href="https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/" rel="nofollow noreferrer">https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/</a></p> <p>VMs are <strong>Ubuntu 20.04 LTS</strong>. The Kubernetes cluster is made up of the Control-Plane/Mstr &amp; 3x Worker nodes:</p> <pre><code>root@k8s-master:~# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master Ready control-plane 35d v1.24.2 k8s-worker1 Ready worker 4h24m v1.24.2 k8s-worker2 Ready worker 4h24m v1.24.2 k8s-worker3 Ready worker 4h24m v1.24.2v </code></pre> <p>Other K8s Pods such as NGINX run without issue.</p> <p>However, the Grafana pod cannot start and is stuck in a Pending state:</p> <pre><code>root@k8s-master:~# kubectl create -f grafana.yaml persistentvolumeclaim/grafana-pvc created deployment.apps/grafana created service/grafana created # time passed here... root@k8s-master:~# kubectl get pods NAME READY STATUS RESTARTS AGE grafana-9bd5bbd6b-k7ljz 0/1 Pending 0 3h39m </code></pre> <p>Troubleshooting this, I found there is an issue with the storage PersistentVolumeClaim (the <code>pvc</code>):</p> <pre><code>root@k8s-master:~# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE grafana-pvc Pending 2m22s root@k8s-master:~# root@k8s-master:~# kubectl describe pvc grafana-pvc Name: grafana-pvc Namespace: default StorageClass: Status: Pending Volume: Labels: &lt;none&gt; Annotations: &lt;none&gt; Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Used By: grafana-9bd5bbd6b-k7ljz Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal FailedBinding 6s (x11 over 2m30s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set </code></pre> <p><strong>UPDATE:</strong> I created a StorageClass and set it as default:</p> <pre><code>root@k8s-master:~# kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE generic (default) no-provisioner Delete Immediate false 19m </code></pre> <p>I also created a PersistentVolume:</p> <pre><code>root@k8s-master:~# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE task-pv-volume 10Gi RWO Retain Released default/task-pv-claim manual 12m </code></pre> <p>However, now when I try to deploy the Grafana PVC it is still stuck - why?</p> <pre><code>root@k8s-master:~# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE grafana-pvc Pending generic 4m16s root@k8s-master:~# kubectl describe pvc grafana-pvc Name: grafana-pvc Namespace: default StorageClass: generic Status: Pending Volume: Labels: &lt;none&gt; Annotations: volume.beta.kubernetes.io/storage-provisioner: no-provisioner volume.kubernetes.io/storage-provisioner: no-provisioner Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Used By: grafana-9bd5bbd6b-mmqs6 grafana-9bd5bbd6b-pvhtm grafana-9bd5bbd6b-rtwgj Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ExternalProvisioning 12s (x19 over 4m27s) persistentvolume-controller waiting for a volume to be created, either by external provisioner &quot;no-provisioner&quot; or manually created by system administrator </code></pre>
SamAndrew81
<p>I tried creating the Grafana configuration file from the <a href="https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/#create-grafana-kubernetes-manifest" rel="nofollow noreferrer">documentation</a>, and it was created successfully. The pod has a Running state, and the PVC (PersistentVolumeClaim) shows the StorageClass as <strong>standard</strong>.</p> <p>Below is the output of the PVC:</p> <pre><code>$ kubectl describe pvc grafana-pvc Name: grafana-pvc Namespace: default StorageClass: standard Status: Bound Volume: pvc-ee20cc5d-6ca5-4075-b5f3-d1a6323a5241 Labels: &lt;none&gt; Annotations: pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io Finalizers: [kubernetes.io/pvc-protection] Capacity: 1Gi Access Modes: RWO VolumeMode: Filesystem Used By: grafana-75789d79d4-wbgtv Events: &lt;none&gt; </code></pre> <p>But in your case the StorageClass field is showing as empty. So, try deleting the existing resources and recreating them from the Grafana configuration file. If you are still facing the same error message, “no persistent volumes available for this claim and no storage class is set”, then you will have to create a PV (<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolume</a>).</p> <p>Your error effectively says: &quot;<strong>Your PVC hasn't found a matching PV and you also haven't mentioned any storageClass name</strong>&quot;. After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.</p> <p>In order to resolve your issue, you will need to create a StorageClass with no-provisioner and then create a PV (PersistentVolume) that defines this storageClassName. Then you have to create the PVC and the Pod/Deployment.</p> <p>Refer to <a href="https://stackoverflow.com/a/55849106/15745153" rel="nofollow noreferrer">stackpost1</a> and <a href="https://stackoverflow.com/a/63557664/15745153" rel="nofollow noreferrer">stackpost2</a> for more information.</p>
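<p>For illustration, a minimal sketch of a manually provisioned volume whose <code>storageClassName</code> matches the class the claim resolves to (the name, size and host path are placeholders). The claim should move from Pending to Bound once a PV with a matching class, access mode and sufficient capacity exists:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv                # placeholder name
spec:
  capacity:
    storage: 1Gi                  # must be at least what the PVC requests
  accessModes:
    - ReadWriteOnce
  storageClassName: generic       # must match the class requested/resolved by the PVC
  hostPath:
    path: /mnt/data/grafana       # placeholder path on the node
</code></pre>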
Jyothi Kiranmayi
<p>I'm trying to install Hue with Helm on local Kubernetes, using the minikube context. The installation goes well, and at the end I should find Hue running on http://minikube:32284, but in the browser I get &quot;Impossible to find IP Address of the minikube server&quot;. What is the problem?</p>
Giuliano Stirparo
<p>All I had to do was run: <strong>minikube service hue</strong></p> <p>This is because my k8s cluster doesn't run directly on the local host, but in a virtual environment created by Docker, so the minikube hostname can't be resolved locally. The <code>minikube service</code> command creates a tunnel that exposes the service to the host environment.</p>
Giuliano Stirparo
<p>On an EKS cluster, I want to transition from the existing NodeGroup (say NG1) to a fresh NG2 (with spot instances). NG1 will remain as fall-back.</p> <p>Do I really need to play with Node Affinity in my deployments and make them &quot;prefer&quot; NG2, and then rollout-restart?</p> <p>Or is it enough to set the desired size for NG1 to a very low value, say just one node per AZ, thereby &quot;nudging&quot; the workload to migrate to NG2?</p>
GI D
<p>Node affinity sounds like the better choice to me.</p>
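<p>For illustration, a minimal sketch of how that preference could look in the pod template, assuming managed node groups and the <code>eks.amazonaws.com/nodegroup</code> label (NG2 is a placeholder for the new group's name):</p> <pre><code>affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: eks.amazonaws.com/nodegroup
              operator: In
              values:
                - NG2
</code></pre> <p>With a preference (rather than a hard requirement), pods fall back to NG1 automatically when NG2 has no capacity; a rollout restart is still needed to move pods that are already running.</p>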
Manmohan Mittal