<p>I have a Kube manifest that need be applied to a couple of kubernetes clusters with different resource settings. For that I need to change resource section of this file on the fly. Here's its contents:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: abc-api labels: app: abc-api spec: ports: - name: http port: 80 targetPort: 3000 - name: https port: 3000 targetPort: 3000 selector: app: abc-api tier: frontend --- apiVersion: apps/v1 kind: Deployment metadata: name: abc-api labels: app: abc-api spec: selector: matchLabels: app: abc-api tier: frontend strategy: type: Recreate template: metadata: labels: app: abc-api tier: frontend spec: containers: - image: ABC_IMAGE resources: requests: memory: "128Mi" cpu: .30 limits: memory: "512Mi" cpu: .99 </code></pre> <p>I searched and found that yq is a better tool for this. However when I read values from this file, it only shows it till the line with <strong>'3 dashes'</strong>: no values past that.</p> <pre><code># yq r worker/deployment.yaml apiVersion: v1 kind: Service metadata: name: hometales-api labels: app: hometales-api spec: ports: - name: http port: 80 targetPort: 3000 - name: https port: 3000 targetPort: 3000 selector: app: hometales-api tier: frontend </code></pre> <p>I want to read the <strong>Deployment section</strong>, as well as edit the resource values.</p> <p>Section to read:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: .... </code></pre> <p>Section to edit:</p> <pre><code>resources: requests: memory: "128Mi" cpu: .20 limits: memory: "512Mi" cpu: .99 </code></pre> <p>So 1st part of Q: how to read after 2nd instance of 3-dashes? 2nd part of Q: how to edit resource values on the fly?</p> <p>I'm able to run this command and read this section, but can't read memory or cpu value further:</p> <pre><code># yq r -d1 deployment.yaml "spec.template.spec.containers[0].resources.requests" memory: "128Mi" cpu: .20 </code></pre>
<p>Use the <code>-d</code> CLI option to select which document in the file to operate on. See <a href="https://mikefarah.gitbook.io/yq/commands/write-update#multiple-documents" rel="nofollow noreferrer">https://mikefarah.gitbook.io/yq/commands/write-update#multiple-documents</a> for more details.</p> <p>Also Kubernetes has its own mechanism for this in <code>kubectl patch</code>.</p>
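<p>For illustration, a minimal sketch assuming yq v3 (the same syntax used in the question, where <code>-d1</code> selects the second document, i.e. the Deployment); the values are illustrative:</p> <pre><code># read from the second document
yq r -d1 deployment.yaml 'spec.template.spec.containers[0].resources.requests.memory'

# update it in place
yq w -i -d1 deployment.yaml 'spec.template.spec.containers[0].resources.requests.memory' 256Mi
yq w -i -d1 deployment.yaml 'spec.template.spec.containers[0].resources.requests.cpu' 0.5
</code></pre>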
<p>I have been trying to install Python in minikube of the below version:</p> <pre><code>Linux minikube 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux </code></pre> <p>However, I haven't been able to find an installation package that is available in this OS in minikube.</p> <p>My objective is to install Python on minikube so that I can use Ansible from my local machine to deploy things into minikube. Please guide.</p>
<p>Minikube is a dedicated application appliance. It is only for running Kubernetes. You would not install things on it via Ansible. You can use Ansible to automate Kubernetes, but you don't change anything on the host itself, that's just talking to the API.</p>
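<p>To show the Ansible side of that, a rough sketch assuming the <code>kubernetes.core</code> collection (and the Python <code>kubernetes</code> client) is installed on your local machine, not on minikube; the playbook talks to the cluster API via your kubeconfig:</p> <pre><code>- hosts: localhost
  connection: local
  tasks:
    - name: Apply a manifest to the cluster selected by the current kubeconfig context
      kubernetes.core.k8s:
        state: present
        src: deployment.yaml   # hypothetical local manifest file
</code></pre>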
<p>I have a simple bash script to execute at postStart, but i get an error which is not informative at all:</p> <p><code>Exec lifecycle hook ([/bin/bash -c sleep 30;/xcom/scripts/sidecar_postStart.sh]) for Container "perfcibuddy" in Pod "xcomapp-597fb859c5-6r4g2_ns(412852d1-5eea-11ea-b641-0a31ddb9a71e)" failed - error: command '/bin/bash -c sleep 120;/xcom/scripts/sidecar_postStart.sh' exited with 7: , message: ""</code></p> <p>The sleep is there because I got a tip that there might be a race condition, that the script is not in place at the time Kubernetes calls it.</p> <p>And if I log into the container I can execute the script from the shell without any problem.</p> <p>The script is just doing a simple curl call (IP obviously sanitized):</p> <pre><code># ---------------------------------------------------------------------------- # Script to perform postStart lifecycle hook triggered actions in container # ---------------------------------------------------------------------------- # -------------------------------------------[ get token from Kiam server ]--- role_name=$( curl -s http://1.1.1.1/latest/meta-data/iam/security-credentials/ ) curl -s http://1.1.1.1/latest/meta-data/iam/security-credentials/${role_name} </code></pre> <p>I tried numerous form to set the command in the template (everything in quotes, with &amp;&amp; instead of ;), this is the current one:</p> <pre><code> exec: command: [/bin/bash, -c, "sleep 120;/xcom/scripts/sidecar_postStart.sh"] </code></pre> <p>What could be the problem here?</p>
<p>Curl exit code 7 is generally “unable to connect” so your IP is probably wrong or the kiam agent is not set up correctly.</p>
<p>After making 2 replicas of PostgreSQL StatefulSet pods in k8s, are they the same database? If they are, why is it that I created a DB and user in one pod and cannot find them in the other? If they are not, is there no point in creating replicas?</p>
<p>There isn't one simple answer here, it depends on how you configured things. Postgres doesn't support multiple instances sharing the same underlying volume without massive corruption so if you did set things up that way, it's definitely a mistake. More common would be to use the volumeClaimTemplate system so each pod gets its own distinct storage. Then you set up Postgres streaming replication yourself.</p> <p>Or look at using an operator which handles that setup (and probably more) for you.</p>
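<p>A minimal sketch of the volumeClaimTemplates approach (names and sizes are illustrative); note this only gives each replica its own empty data directory, and streaming replication between them still has to be configured separately or handled by an operator:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
</code></pre>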
<p>I am trying to create a k8s cluster. Is it necessary to establish an SSH connection between hosts?</p> <p>If so, should we enable passwordless SSH between them?</p>
<p>Kubernetes does not use SSH that I know of. It's possible your deployer tool could require it, but I don't know of any that works that way. It's generally recommended you have some process for logging in to the underlying machines in case you need to debug very low-level failures, but this is usually very rare. For my team, we need to log in to a node about once every month or two.</p>
<p>Currently I am trying to fetch already-rotated logs within the node using the --since-time parameter. Can anybody suggest a command/mechanism to fetch already-rotated logs within the Kubernetes architecture?</p>
<p>You can't. Kubernetes does not store logs for you, it's just providing an API to access what's on disk. For long term storage look at things like Loki, ElasticSearch, Splunk, SumoLogic, etc etc.</p>
<p>So I am looking to set up cert-manager on GKE using google clouddns. It seems like a lot of the older questions on SO that have been asked are using http01 instead of dns01. I want to make sure everything is correct so I don't get rate limited.</p> <p>here is my <code>issuer.yaml</code></p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: Issuer metadata: name: letsencrypt-staging spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: letsencrypt-staging solvers: - dns01: clouddns: project: MY-GCP_PROJECT # This is the secret used to access the service account serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct key: key.json </code></pre> <p>here is my <code>certificate.yaml</code></p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: my-website namespace: default spec: secretName: my-website-tls issuerRef: # The issuer created previously name: letsencrypt-staging dnsNames: - my.website.com </code></pre> <p>I ran these commands to get everything configured:</p> <pre><code>kubectx my-cluster kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml kubectl get pods --namespace cert-manager gcloud iam service-accounts create dns01-solver --display-name &quot;dns01-solver&quot; gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:dns01-solver@$PROJECT_ID.iam.gserviceaccount.com --role roles/dns.admin gcloud iam service-accounts keys create key.json --iam-account dns01-solver@$PROJECT_ID.iam.gserviceaccount.com kubectl create secret generic clouddns-dns01-solver-svc-acct --from-file=key.json kubectl apply -f issuer.yaml kubectl apply -f certificate.yaml </code></pre> <p>here is the output from <code>kubectl describe certificaterequests</code></p> <pre><code>Name: my-certificaterequests Namespace: default Labels: &lt;none&gt; Annotations: cert-manager.io/certificate-name: my-website cert-manager.io/private-key-secret-name: my-website-tls kubectl.kubernetes.io/last-applied-configuration: {&quot;apiVersion&quot;:&quot;cert-manager.io/v1alpha2&quot;,&quot;kind&quot;:&quot;Certificate&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;my-cluster&quot;,&quot;namespace&quot;:&quot;default... API Version: cert-manager.io/v1alpha3 Kind: CertificateRequest Metadata: Creation Timestamp: 2020-06-28T00:05:55Z Generation: 1 Owner References: API Version: cert-manager.io/v1alpha2 Block Owner Deletion: true Controller: true Kind: Certificate Name: my-cluster UID: 81efe2fd-5f58-4c84-ba25-dd9bc63b032a Resource Version: 192470614 Self Link: /apis/cert-manager.io/v1alpha3/namespaces/default/certificaterequests/my-certificaterequests UID: 8a0c3e2d-c48e-4cda-9c70-b8dcfe94f14c Spec: Csr: ... Issuer Ref: Name: letsencrypt-staging Status: Certificate: ... Conditions: Last Transition Time: 2020-06-28T00:07:51Z Message: Certificate fetched from issuer successfully Reason: Issued Status: True Type: Ready Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal OrderCreated 16m cert-manager Created Order resource default/my-certificaterequests-484284207 Normal CertificateIssued 14m cert-manager Certificate fetched from issuer successfully </code></pre> <p>I see the secret <code>kubectl get secret my-website-tls</code></p> <pre><code>NAME TYPE DATA AGE my-website-tls kubernetes.io/tls 3 18m </code></pre> <p>Does that means everything worked and I should try it in prod? 
What worries me is that I didn't see any DNS records change in my cloud console.</p> <p>In addition I wanted to confirm:</p> <ul> <li>How would I change the certificate to be for a wildcard <code>*.company.com</code>?</li> <li>If in fact I am ready for prod and will get the cert, I just need to updated the secret name in my ingress deployment to redeploy?</li> </ul> <p>Any insight would be greatly appreciated. Thanks</p>
<p>I answered you on Slack already. You would change the name by changing the value in the <code>dnsNames</code> section of the Certificate (or <code>spec.tls.*.hosts</code> if using ingress-shim); you just include the wildcard name exactly as you showed it.</p>
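<p>A sketch of what that change to your existing <code>certificate.yaml</code> would look like (wildcards require a dns01 solver, which you already have):</p> <pre><code>spec:
  secretName: my-website-tls
  issuerRef:
    name: letsencrypt-staging
  dnsNames:
    - "*.company.com"
</code></pre>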
<p>I have a Grafana dashboard running in a Kubernetes cluster which is configured via a ConfigMap (grafana.ini YAML) to use Azure AD to restrict access.</p> <p>I would now like to parameterize the grafana.ini in that ConfigMap so I can use different subdomains in my release pipeline like this:</p> <pre><code>kind: ConfigMap data: grafana.ini: | [server] root_url = https://{Subdomain}.domain/ [...] </code></pre> <p>{Subdomain} should be replaced in the pipeline via arguments. In a "normal" Kubernetes .yaml file I can just do something like</p> <pre><code>[...] host: {{ .Values.Subdomain }}.{{ .Values.Domain }} [...] </code></pre> <p>to pass in arguments. This does not seem to work in the grafana.ini data section.</p> <p>What is the correct syntax to pass an argument into the Grafana configuration here?</p>
<p>No, there is no string templating in YAML itself. The examples you are looking at are using Helm to process the YAML. You can do that, but then you need to actually use Helm to render and deploy the manifests.</p>
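<p>A minimal sketch of how the ConfigMap would look as a Helm template (value names are illustrative):</p> <pre><code># templates/grafana-config.yaml in a hypothetical chart
kind: ConfigMap
apiVersion: v1
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    root_url = https://{{ .Values.subdomain }}.{{ .Values.domain }}/
</code></pre> <p>The pipeline would then pass the values at deploy time, e.g. <code>helm upgrade --install grafana ./chart --set subdomain=staging --set domain=example.com</code>.</p>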
<p>Running k8s 1.6 and in api-server, below is configured:</p> <p><code>--enable-admission-plugins SecurityContextDeny</code></p> <p>Is it possible to disable it for one pod or is there an exclusion list or override for a deployment.</p> <p>I need to run a pod with:</p> <pre><code> securityContext: runAsUser: 0 </code></pre> <p>Not able to figure it out, any pointers?</p>
<p>No, this was a very limited system which is why PodSecurityPolicies were added in 1.8 to be a far more flexible version of the same idea.</p>
<p>Let's say I have such architecture based on Java Spring Boot + Kubernetes:</p> <ul> <li>N pods with similar purpose (lets say: order-executors) - GROUP of pods</li> <li>other pods with other business implementation</li> </ul> <p>I want to create solution where:</p> <ol> <li>One (central) pod can communicate with all/specific GROUP of pods to get some information about state of this pods (REST or any other way)</li> <li>Some pods can of course be replicated i.e. x5 and central pod should communicate with all replicas</li> </ol> <p>Is it possible with any technology? If every order-executor pod has k8s service - there is a way to communicate with all replicas of this pod to get some info about all of them? Central pod has only service url so it doesn't know which replica pod is on the other side.</p> <p>Is there any solution to autodiscovery every pod on cluster and communicate with them without changing any configuration? Spring Cloud? Eureka? Consul?</p> <p>P.S.</p> <ul> <li>in my architecture there is also deployed etcd and rabbitmq. Maybe it can be used as part of solution?</li> </ul>
<p>You can use a &quot;headless Service&quot;, one with <code>clusterIP: None</code>. The result of that is when you do a DNS lookup for the normal service DNS name, instead of a single A result with the magic proxy mesh IP, you'll get separate A responses for every pod that matches the selector and is ready (i.e. the IPs that would normally be the backend). You can also fetch these from the Kubernetes API via the Endpoints type (or EndpointSlices if you somehow need to support groups with thousands, but for just 5 it would be Endpoints) using the Kubernetes Java client library. In either case, then you have a list of IPs and the rest is up to you to figure out :)</p>
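<p>A minimal sketch of such a headless Service for the executor group (names and ports are illustrative):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: order-executors
spec:
  clusterIP: None
  selector:
    app: order-executor
  ports:
    - port: 8080
</code></pre> <p>A DNS lookup for <code>order-executors.&lt;namespace&gt;.svc.cluster.local</code> then returns one A record per ready pod.</p>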
<p>In Docker Compose, when I mount an empty host volume to a location that already has data in the container, than this data is copied to the empty host volume on the first run.</p> <p>E.g. if I use the nginx image and mount my empty host volume <code>nginx-config</code> to <code>/etc/nginx</code> in the nginx container then on the first start of the container everything from <code>/etc/nginx</code> is copied to my host volume <code>nginx-config</code>.</p> <p>Meanwhile I am using Kubernetes and wondering how that's done in kubernetes? When I mount a empty PersistentVolume to an container at <code>/etc/nginx</code>, nothing is automatically copied to it ):</p>
<p>You need to use an initContainer, mount the volume on a different path and do the copy explicitly.</p>
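<p>A rough sketch of that pattern for the nginx example (volume and claim names are illustrative); <code>cp -rn</code> skips files that already exist, so restarts won't overwrite your edits:</p> <pre><code>spec:
  initContainers:
    - name: seed-config
      image: nginx
      command: ["sh", "-c", "cp -rn /etc/nginx/. /seed/"]
      volumeMounts:
        - name: nginx-config
          mountPath: /seed
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx
  volumes:
    - name: nginx-config
      persistentVolumeClaim:
        claimName: nginx-config
</code></pre>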
<p>I am trying to get pvc and it's volume and in which pod it is mounted and the node it is hosted.</p> <p>There are separate commands are there like below</p> <p>To get PVC and it's volume</p> <pre><code>kubectl get pvc &lt;pvcname&gt; </code></pre> <p>then from PVC i am getting where it is mounted </p> <pre><code>kubectl describe pvc &lt;pvcname&gt; | grep Mounted </code></pre> <p>Then getting the pod , i am find in which node the pod is hosted</p> <pre><code> kubectl get pod &lt;pod name&gt; -o wide </code></pre> <p>As often i need to check this and having lot of PVC created by PVC config ,running one by one is complex task. May be a script can be written. Is there any other way using kubectl filter i can get these in single command?</p> <p>Currently i am doing like this and finding node names where the pvc is mounted.</p> <pre><code>pvc_list=$(kubectl get pvc | awk '{print $1}') pod_list=$(kubectl describe pvc $pvc_list | grep Mounted | awk '{print $NF}') kubectl get pod $pod_list -o wide </code></pre> <p>But I need to get like this </p> <pre><code>PVC_name volume Pod_Name Node_Name PvcTest voltest pod1 node1 </code></pre>
<p>A PVC can be used on any number of pods so this model does not match how Kubernetes works. You could of course write your own script that ignores that possibility.</p>
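<p>If you accept that assumption (each PVC mounted by at most one pod), a rough sketch using <code>jq</code> that prints claim name, pod, and node in one pass:</p> <pre><code>kubectl get pods --all-namespaces -o json | jq -r '
  .items[] as $pod
  | $pod.spec.volumes[]?
  | select(.persistentVolumeClaim != null)
  | [.persistentVolumeClaim.claimName, $pod.metadata.name, $pod.spec.nodeName]
  | @tsv'
</code></pre> <p>Adding the bound volume name would need a second lookup against <code>kubectl get pvc -o json</code> (the <code>spec.volumeName</code> field).</p>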
<p>I have Kubernetes app with two namespaces: project-production and project-development. It contains of React frontend, Express backend and two databases. This is one of my ingress files. The second one is almost the same.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress namespace: project-development labels: name: ingress annotations: kubernetes.io.ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: rules: - http: paths: - path: &quot;/development/api/?(.*)&quot; pathType: Prefix backend: service: name: express-clusterip port: name: express-port - path: &quot;/development/?(.*)&quot; pathType: Prefix backend: service: name: react-clusterip port: name: react-port </code></pre> <p>I need my frontend to be visible on paths:</p> <ul> <li>production -&gt; localhost/production(/)</li> <li>development -&gt; localhost/development(/)</li> </ul> <p>One of the problems (not primary one) is that paths without / don't work. The second one is that my frontend on both paths above is visible, but my axios requests send from them have the same path: http://localhost/api/ I want to rewrite requests coming from react to express throught nginx:</p> <ul> <li>[namespace project-development] http://localhost/api/ -&gt; http://localhost/development/api</li> <li>[namespace project-production] http://localhost/api/ -&gt; http://localhost/production/api</li> </ul> <p>Is there any way to do this ?</p>
<p>Your regex is wrong, you want <code>path: &quot;/development/(api/.*)&quot;</code> for the first one.</p>
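<p>A sketch of the two rules with that fix, keeping your existing <code>rewrite-target: /$1</code> and <code>use-regex</code> annotations (so the Express service receives <code>/api/...</code> and React receives the rest):</p> <pre><code>- path: "/development/(api/.*)"
  pathType: Prefix
  backend:
    service:
      name: express-clusterip
      port:
        name: express-port
- path: "/development/?(.*)"
  pathType: Prefix
  backend:
    service:
      name: react-clusterip
      port:
        name: react-port
</code></pre>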
<p>I know k8s does not support swap by default.</p> <p>But since I have a rather static deployment of pods in k8s (3 nodes, each with a solr + zookeeper pod), so I decided to set up 2G swap on all my worker nodes anyway and allow the use of swap by starting <strong>kubelet</strong> with --<code>fail-swap-on=false</code></p> <p>Now I have got the cluster up &amp; running, and it seemed to be running okay.</p> <p>However, I found that my java processes are using a lot of swap, I am worried that this might affect the performance.</p> <ul> <li>RSS: 317160 Kb</li> <li>SWAP: 273232 Kb</li> </ul> <p>My question is that is there a way to limit the use of swap memory usage by containers? </p> <p>I was thinking about setting <code>--memory-swap</code> <a href="https://docs.docker.com/config/containers/resource_constraints/#--memory-swap-details" rel="nofollow noreferrer">params of Docker</a></p> <p>Currently, based on Docker inspect my container has no limit on swap usage ( <code>"MemorySwap": -1</code> )</p> <pre><code>sudo docker inspect 482d70f73c7c | grep Memory "Memory": 671088640, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": -1, "MemorySwappiness": null, </code></pre> <p>But I just couldn't find this param exposed in k8s.</p> <p><strong>My vm-related settings</strong></p> <pre><code>vm.overcommit_kbytes = 0 vm.overcommit_memory = 1 vm.overcommit_ratio = 50 vm.swappiness = 20 vm.vfs_cache_pressure = 1000 </code></pre> <p>p.s. Will the <strong>limit</strong> on pod memory also limit the swap usage?</p> <p>Thank you all for reading the post!</p> <h2>Notes</h2> <p>This is how I got the RSS &amp; SWAP of my Java process</p> <p><strong>Run</strong> <code>top</code></p> <p><code>27137 8983 20 0 3624720 317160 0 S 3.0 32.0 4:06.74 java</code></p> <p><strong>Run</strong> </p> <pre><code>find /proc -maxdepth 2 -path "/proc/[0-9]*/status" -readable -exec awk -v FS=":" '{process[$1]=$2;sub(/^[ \t]+/,"",process[$1]);} END {if(process["VmSwap"] &amp;&amp; process["VmSwap"] != "0 kB") printf "%10s %-30s %20s\n",process["Pid"],process["Name"],process["VmSwap"]}' '{}' \; | awk '{print $(NF-1),$0}' | sort -h | cut -d " " -f2- </code></pre> <p><code>27137 java 273232 kB</code></p>
<p>And now you know why the kubelet literally refuses to start by default if there is swap active. This is not exposed anywhere in CRI and won’t be.</p>
<p>while evaluating the network security using nmap on Kubernetes server, we noticed a warning as below</p> <p>~]# nmap xxx.xx.xx.xx -p 6443 -sVC --script=ssl*</p> <pre><code>. . . ssl-enum-ciphers: | TLSv1.2: | ciphers: | TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A | TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A | TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A | TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A | TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C | compressors: | NULL | cipher preference: server | warnings: | 64-bit block cipher 3DES vulnerable to SWEET32 attack </code></pre> <p>With bit of research got to know that <strong>TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C</strong> cipher suite is to support 64bit block SSL/TLS Handshake and the suggested solution is to disable the cipher option in Kubernetes etcd. please help me how to do it.</p> <p>other views on this much appreciated, please let me know what is the better way to secure the environment.</p>
<p>You can use the <code>--cipher-suites</code> CLI option to etcd. See <a href="https://etcd.io/docs/v3.4/op-guide/security/" rel="nofollow noreferrer">https://etcd.io/docs/v3.4/op-guide/security/</a> for a summary of all their TLS config options. The default cipher list is based on the version of Go used to compile it.</p>
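<p>A hedged sketch of what that could look like for a kubeadm-managed etcd static pod; the cipher list itself is only an example, pick whatever your compliance policy requires, using Go's cipher suite names:</p> <pre><code># /etc/kubernetes/manifests/etcd.yaml (kubeadm static pod): add the flag to the etcd command
spec:
  containers:
    - command:
        - etcd
        - --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
</code></pre>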
<p>I want to create hundreds of Jobs in kubernetes by its api. Is there any way to do this? I have to create them one by one now. Thanks.</p>
<p>I mean you have to make 1 API call per object you want to create, but you can certainly write a script for that. Kubernetes does not offer a "bulk create" API endpoint if that's what you are asking, or really much of anything for bulk operations. It's a boring old REST API :)</p>
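<p>A trivial sketch of such a script using <code>kubectl</code> (one API call per Job; names and image are illustrative):</p> <pre><code>for i in $(seq 1 200); do
  kubectl create job "batch-job-$i" --image=busybox -- echo "processing item $i"
done
</code></pre>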
<p>Is there a way to fire alerts when someone runs commands against pods, like delete, exec, or cp? The alert message should include which namespace, which pod, which command, and who ran the command. Thanks.</p>
<p>This isn’t what Prometheus does, it’s about metrics. For most api operations, you can use the audit log to check if they happen and why, but exec requests are complicated since they are opaque to the apiserver. The only tool I know of which can decode and log exec requests is Sysdig Trace and it isn’t supported on all platforms since it needs direct access to the control plane to do it.</p>
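<p>For the non-exec operations, a minimal audit policy sketch that records who did what to which pod (metadata only; it does not capture what was typed inside an exec session):</p> <pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    verbs: ["create"]
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach"]
  - level: Metadata
    verbs: ["delete"]
    resources:
      - group: ""
        resources: ["pods"]
</code></pre>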
<p>We have a scenario in our deployment process where an activity needs to be performed only once before the actual application containers are live and ready. This activity can not be placed as an init container because init container will be executed with every replica of application container but in this case, this activity needs to be done only once. </p> <p>To achieve this, I have created a kubernetes job which executes that activity and completes. </p> <ol> <li><p>Is there a way to check in my application container deployment definition that this particular Job has been completed? Are there any pre-defined keys in kubernetes which stores this metadata information and can be used to identify the job status?</p></li> <li><p>This Job is using a configMap and the container used in this Job loads the configuration files (provided by configMap) in Directory Server. Is there a way to trigger the job automatically if configMap changes? I can delete the job and recreate using kubectl but I am looking for an auto trigger. Is there any possible way available in OpenShift or HELM to do this if not in Kubernetes?</p></li> </ol>
<p>Helm has post-deploy hooks for this kind of thing, though they can be a little rough to work with. We use a custom operator for this so we can have an explicit state machine on our deployments (init -> migrate -> deploy -> test -> ready). But that's a lot of work to write.</p>
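<p>If you go the Helm route, the hook is just annotations on the Job (a sketch; the image name is hypothetical). Helm also has pre-install/pre-upgrade variants if the Job must finish before the app rolls out:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-setup
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup
          image: my-setup-image
</code></pre>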
<p>I am fairly new to kubernetes. Wanted to know if a program running inside a pod can access the namespace in which the pod is running.</p> <p>Let me explain my usecase. There are two pods in my application's namespace. One pod has to be statefulset and must have atleast 3 replicas. Other pod (say POD-A) can be just a normal deployment. Now POD-A needs to talk to a particular instance of the statefulset. I read in an article that it can be done using this address format - <code>&lt;StatefulSet&gt;-&lt;Ordinal&gt;.&lt;Service&gt;.&lt;Namespace&gt;.svc.cluster.local</code>. In my application, the namespace part changes dynamically with each deployment. So can this value be read dynamically from a program running inside a pod?</p> <p>Please help me if I have misunderstood something here. Any alternate/simpler solutions are also welcome. Thanks in advance!</p>
<p><a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api</a> has an example of this and more.</p> <pre><code> env: - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace </code></pre>
<blockquote> <p>How to use <code>kubectl</code> with system:anonymous account without using impersonation with <code>--as=system:anonymous</code>?</p> </blockquote> <p>How can I send requests with <code>kubectl</code> using the <code>system:anonymous</code> account?</p> <p>I've tried using the <code>--as=</code> option, but this requires that the <code>default</code> service account has impersonation privileges, which it doesn't by default.</p> <p>The only way I currently can send anonymous requests is by using <code>curl</code>.</p>
<p>Set up a new configuration context that doesn't specify any authentication information and then use <code>--context whatever</code>. Or just use curl, that's honestly fine too since I'm really hoping this is just to confirm some security settings or similar. If you run kubectl with <code>-v 10000000</code> (or some other huge number) it will actually show you the equivalent curl command to the request it is making.</p>
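<p>A sketch of such a context (the server URL is a placeholder; this relies on the apiserver having anonymous auth enabled, which is the default):</p> <pre><code>kubectl config set-cluster anon --server=https://API_SERVER:6443 --insecure-skip-tls-verify=true
kubectl config set-context anon --cluster=anon
kubectl --context=anon get pods
</code></pre>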
<p>I am new to Kubernetes and have an application deployed via GKE on mydomain.com and now want to add another service which should be available on api.mydomain.com without adding a new expensive load balancer. What should the new ingress file for api.mydomain look like? I read the documentation, but cannot figure out how to do this.</p> <p>This is my first service running on mydomain.com:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: app-service spec: selector: app: app ports: - protocol: TCP port: 80 targetPort: 80 type: NodePort --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: app-ingress annotations: kubernetes.io/ingress.global-static-ip-name: "ip" cert-manager.io/cluster-issuer: "letsencrypt-prod" acme.cert-manager.io/http01-edit-in-place: "true" kubernetes.io/tls-acme: "true" spec: rules: - host: mydomain.com http: paths: - backend: serviceName: app-service servicePort: 80 tls: - hosts: - mydomain.com secretName: my-certs </code></pre> <p>I tried to use the same configuration for the subdomain api.mydomain.com, but this does not work.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: api-service spec: selector: app: api ports: - protocol: TCP port: 80 targetPort: 80 type: NodePort --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: api-ingress annotations: kubernetes.io/ingress.global-static-ip-name: "ip" cert-manager.io/cluster-issuer: "letsencrypt-prod" acme.cert-manager.io/http01-edit-in-place: "true" kubernetes.io/tls-acme: "true" spec: rules: - host: api.mydomain.com http: paths: - backend: serviceName: api-service servicePort: 80 tls: - hosts: - api.mydomain.com secretName: my-certs-api </code></pre> <p>Maybe I'm approaching the problem in the wrong way, I'm new in GKE, any suggestions?</p>
<p>You would generally use a different Ingress Controller than the default ingress-gce. ingress-nginx is very common and easy to get started with, but there are many options so I recommend you research them and pick which one matches your use case best.</p>
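<p>Once a second controller (e.g. ingress-nginx) is installed, you tell each Ingress which controller should handle it. On the API version in your manifests that is the class annotation (newer clusters use <code>spec.ingressClassName</code>); a sketch:</p> <pre><code>metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # handled by the shared nginx controller, not a new GCE LB
</code></pre>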
<p>Is there a way to limit the number of replicas per node in Kubernetes? I found some info about Spreading Constraints but didn't understand if this is possible.</p> <p>Example: I want to have only 1 replica per node on the cluster.</p> <p>I know that K8s automatically spreads the replicas across the nodes, but I want to enforce a specific limit on each node.</p> <p>Is this possible?</p>
<p>The scheduler has knobs for just about everything, but in the particular case of 1 replica per node you can use a <code>required</code> mode pod anti-affinity.</p>
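<p>A sketch of that on the workload's pod template (the <code>app: my-app</code> label is illustrative and must match the pods' own labels):</p> <pre><code>spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-app
          topologyKey: kubernetes.io/hostname
</code></pre>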
<p>My approach is:</p> <pre class="lang-golang prettyprint-override"><code>func restartPod(meta metav1.ObjectMeta, kubeClient kubernetes.Interface) error { err := kubeClient.CoreV1().Pods(meta.Namespace).Delete(meta.Name, deleteInForeground()) if err != nil { return err } //time.Sleep(2 * time.Second) return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (done bool, err error) { pod, err := kubeClient.CoreV1().Pods(meta.Namespace).Get(meta.Name, metav1.GetOptions{}) if err != nil { return false, nil } return pod.Status.Phase == v1.PodRunning &amp;&amp; pod.Status.ContainerStatuses[0].Ready, nil }) } </code></pre> <p>It doesn't work because the deletion of pod is non-blocking, means it doesn't wait for the pod to be deleted. So the <code>Get</code> pod method returns the pod with running state. If I use <code>sleep</code> for some seconds after pod deletion then it works fine. Is there any better way to do this without using <code>sleep</code>?</p>
<p>In the metadata of every object there is a UUID in a field called <code>uid</code>. You can compare and wait until the pod is Ready <em>and</em> has a different UUID. See <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids</a> for more details (though really that's about all there is to say).</p>
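<p>You can see the field with, for example, <code>kubectl get pod my-pod -o jsonpath='{.metadata.uid}'</code> (the pod name is a placeholder); in your Go code it is <code>pod.ObjectMeta.UID</code>, which changes every time the pod is recreated.</p>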
<p>How could I clear existing log of a specific pod?</p> <p>So that I can get all logs since that time with <code>kubectl logs</code> next time.</p> <p>Thanks!</p>
<p>You can't, the log rotation is generally implemented in Docker (or sometimes via logrotate on the node host). However you can use <code>kubectl logs --since-time</code> and fill in the time of your last get. If you're trying to build something to iteratively process logs automatically, probably use Fluentd to load them into some kind of database (Kafka is common for this).</p>
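<p>For example (pod name and timestamp are placeholders):</p> <pre><code>kubectl logs my-pod --since-time=2020-06-28T00:00:00Z
</code></pre>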
<p>I am writing python script for kubernetes using python client.</p> <p>I want to count or calculate the POD <code>Ready</code> state time.</p> <p>In 30 Second pod Ready or similar.</p> <p>Right now i am planning to use</p> <pre><code>'status': { 'conditions': [ { 'last_probe_time': None, 'last_transition_time': datetime.datetime(2019, 12, 17, 13, 14, 58, tzinfo=tzutc()), 'message': None, 'reason': None, 'status': 'True', 'type': 'Initialized' }, { 'last_probe_time': None, 'last_transition_time': datetime.datetime(2019, 12, 17, 13, 16, 2, tzinfo=tzutc()), 'message': None, 'reason': None, 'status': 'True', 'type': 'Ready' }, { 'last_probe_time': None, 'last_transition_time': datetime.datetime(2019, 12, 17, 13, 16, 2, tzinfo=tzutc()), 'message': None, 'reason': None, 'status': 'True', 'type': 'ContainersReady' }, { 'last_probe_time': None, 'last_transition_time': datetime.datetime(2019, 12, 17, 13, 14, 58, tzinfo=tzutc()), 'message': None, 'reason': None, 'status': 'True', 'type': 'PodScheduled' } ] </code></pre> <p>But looking forward to directly can get total Ready state time from Pod initialised time. </p> <p>can anybody suggest better way.</p> <p>Thanks a lot.</p>
<p>There isn't something specific, you already have the data you need. Just go through the list and find the two last_transition_times for Ready and Initialized and then subtract.</p>
<p>Im getting this error</p> <pre><code>Error creating: pods "node-exporter" is forbidden: unable to validate against any pod security policy: [spec.secur ityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.contain ers[0].hostPort: Invalid value: 9100: Host port 9100 is not allowed to be used. Allowed ports: [0-8000]] </code></pre> <p>But i checked in another cluster in GCP, its not giving me any issue. Does anyone knows why i'm getting this issue</p>
<p>node-exporter needs direct access to the node-level network namespace to be able to gather statistics on this. You have a default security policy that blocks this access. You'll need to make a new policy which allows it, and assign that policy to node-exporter.</p>
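<p>A sketch of a PodSecurityPolicy that would allow what node-exporter needs, plus a ClusterRole granting <code>use</code> of it; you would still bind that role to node-exporter's ServiceAccount with a (Cluster)RoleBinding:</p> <pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: node-exporter
spec:
  privileged: false
  hostNetwork: true
  hostPID: true
  hostPorts:
    - min: 9100
      max: 9100
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-node-exporter
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["node-exporter"]
    verbs: ["use"]
</code></pre>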
<p>Is it possible for a kubernetes operator to watch files in a Persistent Volume Claim(PVC)? I am creating a k8s Golang operator to deploy and manage my application. The application pods will have a mounted volume. I need to be able to stop and start a pod if configuration files are changed on the PVC. Is this possible? I can see in the documentation that I can add a watcher for PVC but not sure if this also watches files updates or changes.</p>
<p>As mentioned in the comments, you would need a ReadWriteMany-capable volume provider but then sure. This isn't how most operators work so you'll have to manage the file watch yourself but there's some good low-level inotify bindings available and I think Viper can reload on the fly itself. Combine that with a channel watch in controller-runtime and a background goroutine interfacing with the file watch that injects reconcile events, and you should be all set.</p> <p>That said, RWX volumes are to be avoided unless <em>absolutely necessary</em> as all existing providers (NFS, CephFS, etc) each come with notable downsides and caveats to be aware of. Also this is not the general model of how operators should work, they should be API-driven. A possibly better approach is instead of a shared RWX volume containing the config, have a controller-like sidecar in each pod that's watching the API and regenerating the config in a pod-shared emptyDir volume. That's basically how most Ingress Controllers work, so you could use those as an example.</p>
<p>when I run <code>kubectl get pods</code> it shows pod existing and ready, but when run <code>kubectl port-forward</code> I get <code>pod not foud</code> error. what's going on here?</p> <pre><code>(base):~ zwang$ k get pods -n delivery NAME READY STATUS RESTARTS AGE screenshot-history-7f76489574-wntkf 1/1 Running 86 7h18m (base):~ zwang$ k port-forward screenshot-history-7f76489574-wntkf 8000:8000 Error from server (NotFound): pods "screenshot-history-7f76489574-wntkf" not found </code></pre>
<p>You need to specify the namespace on the <code>port-forward</code> command too. <code>kubectl port-forward -n delivery screenshot-history-7f76489574-wntkf 8000:8000</code></p>
<p>I was handed a kubernetes cluster to manage. But in the same node, I can see running docker containers (via docker ps) that I could not able to find/relate in the pods/deployments (via kubectl get pods/deployments).</p> <p>I have tried kubectl describe and docker inspect but could not pick out any differentiating parameters.</p> <p>How to differentiate which is which?</p>
<p>There will be many. At a minimum you'll see all the pod sandbox pause containers which are normally not visible. Plus possibly anything you run directly such as the control plane if not using static pods.</p>
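<p>One way to tell them apart, assuming the Docker runtime (dockershim): containers created by the kubelet carry <code>io.kubernetes.*</code> labels, so anything without them was started outside Kubernetes. A sketch:</p> <pre><code># containers that belong to pods
docker ps --filter "label=io.kubernetes.pod.name" \
  --format '{{.ID}} {{.Label "io.kubernetes.pod.namespace"}}/{{.Label "io.kubernetes.pod.name"}}'

# just the pause/sandbox containers
docker ps --filter "label=io.kubernetes.docker.type=podsandbox"
</code></pre>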
<p>I have scenario; where I want to redirect to different services at back end based on a query parameter value. I have gone through documents but couldn't find any help there.</p> <p>For example:</p> <pre><code>if Path=/example.com/?color=red ---&gt; forward request to service--&gt; RED-COLOR-SERVICE:8080 if Path=/example.com/?color=blue ---&gt; forward request to service--&gt; BLUE-COLOR-SERVICE:8080 if Path=/example.com/?color=green ---&gt; forward request to service--&gt; GREEN-COLOR-SERVICE:8080 </code></pre> <p>Thanks</p>
<p>In general no, the Ingress spec only offers routing on hostname and path. Check the annotation features offered by your specific controller, but I don’t know of any for this off the top of my head.</p>
<p>I have configured NGINX as a reverse proxy with web sockets enabled for a backend web application with multiple replicas. The request from NGINX does a <code>proxy_pass</code> to a Kubernetes service which in turn load balances the request to the endpoints mapped to the service. I need to ensure that the request from a particular client is proxied to the same Kubernetes back end pod for the life cycle of that access, basically maintaining session persistence.</p> <p>Tried setting the <code>sessionAffinity: ClientIP</code> in the Kubernetes service, however this does the routing based on the client IP which is of the NGINX proxy. Is there a way to make the Kubernetes service do the affinity based on the actual client IP from where the request originated and not the NGINX internal pod IP ?</p>
<p>This is not an option with Nginx. Or rather, it's not an option with anything running in userspace like this without a lot of very fancy network manipulation. You'll need to find another option, usually app-specific proxy rules (e.g. cookie-based stickiness) in the outermost HTTP proxy layer.</p>
<p>Prometheus deployed on kubernetes using prometheus operator is eating too much memory and it is at present at ~12G. I see <code>/prometheus/wal</code> directory is at ~12G. I have removed all <code>*.tmp</code> files but that couldn't help. Unable to figure out the solution for this problem. Any suggestions ??</p>
<p>Reduce your retention time or reduce your number of time series.</p>
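<p>With the prometheus-operator, retention is a field on the Prometheus custom resource (name and namespace depend on your install); a sketch:</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  retention: 7d
</code></pre> <p>Note the WAL only holds roughly the last couple of hours regardless, so dropping high-cardinality series usually helps memory more than shortening retention does.</p>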
<p>I built a Prometheus for my Kubernetes cluster, and it works well now. It can get node and container/pod CPU and memory data, but I don't know how to get a deployment's CPU usage in Prometheus, because in my application, if a pod restarts, the deployment will not have the data from before.</p>
<p>A deployment is only an abstraction within the Kubernetes control plane, the things actually using CPU will all be pods. So you can use something like this <code>container_cpu_usage_seconds_total{namespace="mynamespace", pod_name=~"mydeployment-.*"}</code>.</p>
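<p>To turn that raw counter into something directly usable (total cores consumed by the deployment's pods), you would typically wrap it like this, assuming the same label names as above:</p> <pre><code>sum(rate(container_cpu_usage_seconds_total{namespace="mynamespace", pod_name=~"mydeployment-.*"}[5m]))
</code></pre>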
<p>So I was following along this tutorial (<a href="https://www.youtube.com/watch?v=KBTXBUVNF2I" rel="nofollow noreferrer">https://www.youtube.com/watch?v=KBTXBUVNF2I</a>) and after setting up the reconciler, when I execute "make run", I am getting the following error:</p> <pre><code>/Users/sourav/go/bin/controller-gen object:headerFile=./hack/boilerplate.go.txt paths="./..." go fmt ./... go vet ./... /Users/sourav/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases go run ./main.go 2020-03-15T22:13:29.111+0530 INFO controller-runtime.metrics metrics server is starting to listen {"addr": ":8080"} 2020-03-15T22:13:29.112+0530 INFO setup starting manager 2020-03-15T22:13:29.113+0530 INFO controller-runtime.manager starting metrics server {"path": "/metrics"} 2020-03-15T22:13:29.213+0530 INFO controller-runtime.controller Starting EventSource {"controller": "guestbook", "source": "kind source: /, Kind="} 2020-03-15T22:13:29.213+0530 INFO controller-runtime.controller Starting EventSource {"controller": "redis", "source": "kind source: /, Kind="} 2020-03-15T22:13:29.213+0530 INFO controller-runtime.controller Starting EventSource {"controller": "guestbook", "source": "kind source: /, Kind="} 2020-03-15T22:13:29.315+0530 INFO controller-runtime.controller Starting EventSource {"controller": "redis", "source": "kind source: /, Kind="} 2020-03-15T22:13:29.315+0530 INFO controller-runtime.controller Starting EventSource {"controller": "guestbook", "source": "kind source: /, Kind="} 2020-03-15T22:13:29.418+0530 INFO controller-runtime.controller Starting EventSource {"controller": "guestbook", "source": "kind source: /, Kind="} 2020-03-15T22:13:29.418+0530 INFO controller-runtime.controller Starting EventSource {"controller": "redis", "source": "kind source: /, Kind="} 2020-03-15T22:13:29.418+0530 INFO controller-runtime.controller Starting Controller {"controller": "guestbook"} 2020-03-15T22:13:29.418+0530 INFO controller-runtime.controller Starting Controller {"controller": "redis"} 2020-03-15T22:13:29.519+0530 INFO controller-runtime.controller Starting workers {"controller": "redis", "worker count": 1} 2020-03-15T22:13:29.519+0530 INFO controllers.Redis reconciling redis {"redis": "default/redis-sample"} 2020-03-15T22:13:29.523+0530 INFO controller-runtime.controller Starting workers {"controller": "guestbook", "worker count": 1} 2020-03-15T22:13:29.527+0530 ERROR controller-runtime.controller Reconciler error {"controller": "redis", "request": "default/redis-sample", "error": "415: Unsupported Media Type"} github.com/go-logr/zapr.(*zapLogger).Error /Users/sourav/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler /Users/sourav/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem /Users/sourav/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker /Users/sourav/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1 /Users/sourav/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 k8s.io/apimachinery/pkg/util/wait.JitterUntil /Users/sourav/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 
k8s.io/apimachinery/pkg/util/wait.Until /Users/sourav/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 2020-03-15T22:13:30.528+0530 INFO controllers.Redis reconciling redis {"redis": "default/redis-sample"} 2020-03-15T22:13:30.548+0530 ERROR controller-runtime.controller Reconciler error {"controller": "redis", "request": "default/redis-sample", "error": "415: Unsupported Media Type"} </code></pre> <p>The error seems to originate from this line:</p> <p><a href="https://github.com/DirectXMan12/kubebuilder-workshops/blob/605890232fb368a8ff00ac5e9879c8dfd90f904c/controllers/redis_controller.go#L73" rel="nofollow noreferrer">https://github.com/DirectXMan12/kubebuilder-workshops/blob/605890232fb368a8ff00ac5e9879c8dfd90f904c/controllers/redis_controller.go#L73</a></p> <p>Any Idea what might be causing this error and how to solve it?</p>
<p>That is using a relatively new feature; make sure your Kubernetes is very up to date and has server-side apply enabled.</p>
<p>In <strong>Minikube</strong>, created many <em>Persistent Volumes</em> and its <em>claims</em> as a practice? Do they reserve disk space on local machine? </p> <p>Checked disk usage </p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv spec: capacity: storage: 100Gi accessModes: - ReadWriteMany storageClassName: shared hostPath: path: /data/config --- $ kubectl create -f 7e1_pv.yaml $ kubectl get pv </code></pre> <p>Now create YAML for Persistent Volume Claim</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc spec: storageClassName: shared accessModes: - ReadWriteMany resources: requests: storage:90Gi </code></pre> <pre><code>$ kubectl create -f 7e2_pvc.yaml </code></pre>
<p>No, it's just a local folder. The size value is ignored.</p>
<p>I've faced a strange behaviour with K8s pods running in AWS EKS cluster (version 1.14). The services are deployed via Helm 3 charts. The case is that pod receives more environment variables than expected.</p> <p>The pod specification says that variables should be populated from a config map.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: apigw-api-gateway-59cf5bfdc9-s6hrh namespace: development spec: containers: - env: - name: JAVA_OPTS value: -server -XX:MaxRAMPercentage=75.0 -XX:+UseContainerSupport -XX:+HeapDumpOnOutOfMemoryError - name: GATEWAY__REDIS__HOST value: apigw-redis-master.development.svc.cluster.local envFrom: - configMapRef: name: apigw-api-gateway-env # &lt;-- this is the map # the rest of spec is hidden </code></pre> <p>The config map <code>apigw-api-gateway-env</code> has this specification:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 data: GATEWAY__APP__ADMIN_LOPUSH: "" GATEWAY__APP__CUSTOMER_LOPUSH: "" GATEWAY__APP__DISABLE_RATE_LIMITS: "true" # here are other 'GATEWAY__' envs JMX_AUTH: "false" JMX_ENABLED: "true" # here are other 'JMX_' envs kind: ConfigMap metadata: name: apigw-api-gateway-env namespace: development </code></pre> <p>If I request a list of environment variables, I can find values from a different service. These values are not specified in the config map of the 'apigw' application; they are stored in a map for a 'lopush' application. Here is a sample.</p> <pre><code>/ # env | grep -i lopush | sort | head -n 4 GATEWAY__APP__ADMIN_LOPUSH=&lt;hidden&gt; GATEWAY__APP__CUSTOMER_LOPUSH=&lt;hidden&gt; LOPUSH_GAME_ADMIN_MOBILE_PORT=tcp://172.20.248.152:5050 LOPUSH_GAME_ADMIN_MOBILE_PORT_5050_TCP=tcp://172.20.248.152:5050 </code></pre> <p>I've also noticed that this behaviour is somehow relative to the order in which the services were launched. That could be just because some config maps didn't exist at that moment. It seems for now like the pod receives variables from all config maps in the current namespace.</p> <p>Did any one faced this issue before? Is it possible, that there are other criteria which force K8s to populate environment from other maps?</p>
<p>If you mean the <code>_PORT</code> stuff, that's for compatibility with the old Docker Container Links system. All services in the namespace get automatically set up that way to make it easier to move things from older Docker-based systems.</p>
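<p>If you would rather not have those injected at all, the pod spec has a switch for it (assuming your cluster version supports it); a sketch:</p> <pre><code>spec:
  enableServiceLinks: false
</code></pre>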
<p>Currently in my <code>kubernetes-nodes</code> job in Prometheus, The endpoint <code>/api/v1/nodes/gk3-&lt;cluster name&gt;-default-pool-&lt;something arbitrary&gt;/proxy/metrics</code> is being scraped</p> <p>But the thing is I'm getting a 403 error which says <code>GKEAutopilot authz: cluster scoped resource &quot;nodes/proxy&quot; is managed and access is denied</code> when I try it manually on postman</p> <p>How do I get around this on GKE Autopilot?</p>
<p>While the Autopilot docs don't mention the node proxy API specifically, this is in the limitations section:</p> <blockquote> <p>Most external monitoring tools require access that is restricted. Solutions from several Google Cloud partners are available for use on Autopilot, however not all are supported, and custom monitoring tools cannot be installed on Autopilot clusters.</p> </blockquote> <p>Given that port-forward and all other node-level access is restricted it seems likely this is not available. It's not clear that Autopilot even uses Kubelet at all and they probably aren't going to tell you.</p> <p><em>End of year update:</em></p> <p>This mostly works now. Autopilot has added support for things like cluster-scope objects and webhooks. You do need to reconfigure any install manifests to not touch the <code>kube-system</code> namespace as that is still locked down but you can most of this working if you hammer on it a bunch.</p>
<p>Was unable to find any K8s API which can query all volumeIDs related to specific AWS account region wide. My primary intention is to clean all stale volumes. For doing this, I'm collecting volume information from AWS which are in Available state using below shell script. </p> <pre><code> for regions in $(aws ec2 describe-regions --output text|awk {'print $4'}) do for volumes in $(aws ec2 describe-volumes --region $regions --output text| grep available | awk '{print $9}' | grep vol| tr '\n' ' ') do echo "$regions" "$volumes" done done </code></pre> <p>But that alone isn't sufficient as sometimes I keep some of my environments down and that time pods are not running which in turn marks the volumes as <em>Available</em> but they will be <em>in use</em>/attached to pods once the environment comes up. Hence, I need to get both the lists (from AWS and K8s) and diff them. Finally, I get the volumes which are actually not associated to any of my environments. Any help is greatly appreciated. </p> <p>N.B: It is known the below k8s API can fetch volumes taking namespaces as input which I'm not looking for. </p> <pre><code>GET /api/v1/namespaces/{namespace}/persistentvolumeclaims </code></pre>
<p>Two answers: first you can <code>kubectl get pvc --all-namespaces</code>. Second, the PVs themselves are not in namespaces, so <code>kubectl get pv</code>.</p>
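<p>To diff against your AWS list you really want the EBS volume IDs behind the PVs; a sketch (assuming in-tree EBS volumes; CSI-provisioned PVs keep the ID in <code>spec.csi.volumeHandle</code> instead):</p> <pre><code>kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.awsElasticBlockStore.volumeID}{"\n"}{end}'
</code></pre>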
<p>I am unable to configure my stateful application to be resilient to kubernetes worker failure (the one where my application pod exists)</p> <pre><code>$ kk get pod -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example-openebs-97767f45f-xbwp6 1/1 Running 0 6m21s 192.168.207.233 new-kube-worker1 &lt;none&gt; &lt;none&gt; </code></pre> <p>Once I take the worker down, kubernetes notices that the pod is not responding and schedules it to a different worker.</p> <pre><code>marek649@new-kube-master:~$ kk get pod -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example-openebs-97767f45f-gct5b 0/1 ContainerCreating 0 22s &lt;none&gt; new-kube-worker2 &lt;none&gt; &lt;none&gt; example-openebs-97767f45f-xbwp6 1/1 Terminating 0 13m 192.168.207.233 new-kube-worker1 &lt;none&gt; &lt;none&gt; </code></pre> <p>This is great, but the new container is not able to start since it is trying to attach the same pvc that the old container was using and kubernetes does not release the binding to the old (not responding) node.</p> <pre><code>$ kk describe pod example-openebs-97767f45f-gct5b Annotations: &lt;none&gt; Status: Pending IP: IPs: &lt;none&gt; Controlled By: ReplicaSet/example-openebs-97767f45f Containers: example-openebs: Container ID: Image: nginx Image ID: Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /usr/share/nginx/html from demo-claim (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-4xmvf (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: demo-claim: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: example-pvc ReadOnly: false default-token-4xmvf: Type: Secret (a volume populated by a Secret) SecretName: default-token-4xmvf Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m9s default-scheduler Successfully assigned default/example-openebs-97767f45f-gct5b to new-kube-worker2 Warning FailedAttachVolume 2m9s attachdetach-controller Multi-Attach error for volume "pvc-911f94a9-b43a-4cac-be94-838b0e7376e8" Volume is already used by pod(s) example-openebs-97767f45f-xbw p6 Warning FailedMount 6s kubelet, new-kube-worker2 Unable to attach or mount volumes: unmounted volumes=[demo-claim], unattached volumes=[demo-claim default-token-4xmvf]: timed out waiti ng for the condition </code></pre> <p>I am able to resolve this situation by manually force deleting the containers, unbounding the PV and recreating containers but this is far from high availability that I am expecting. </p> <p>I am using openEBS jiva volumes and after manual intervention I am able to restore the container with correct data on the PV which means that data gets replicated to other nodes correctly.</p> <p>Can someone please explain what am I doing wrong and how to achieve a fault tolerance for k8s applications with volumes attached?</p> <p>I found this related but I don;t see any suggestions how to overcome this issue <a href="https://github.com/openebs/openebs/issues/2536" rel="nofollow noreferrer">https://github.com/openebs/openebs/issues/2536</a></p>
<p>It will eventually release the volume, usually limiting factor is the network storage system being slow to detect the volume is unmounted. But you are correct that it's a limitation. The usual fix would be to use a multi-mount capable volume type instead, such as NFS or CephFS.</p>
<p>I have a case where we use custom Authorization in Kubernetes via a webhook. Once authorized is there any way the user id could propagated on to the metadata or labels or env of a resource in Kubernetes.</p> <p>Eg - When a user creates a pod, the userid should be available on the request object.</p> <p>The only place where the user data is available is in the events that is available via audit logs.</p>
<p>You could use a mutating webhook to inject it. The webhook admission request struct has the user identity data and you can patch the incoming object in the admission response. There is nothing off the shelf for that though, you would have to build it yourself.</p>
<p>I have 4 services running as 4 different deployments/pods in my kubernetes cluster, name them, <strong>A</strong>, <strong>B</strong>, <strong>C</strong> and <strong>D</strong>. <strong>D</strong> is providing a common service which <strong>A</strong> and <strong>B</strong> are using via rest API calls, but it behaves differently between the requests come from <strong>A</strong> and those from <strong>B</strong>. And for requests from <strong>C</strong>, <strong>D</strong> should just deny the request (not authorized). Is there a k8s built-in way of supporting identifying/authorizing pods within the cluster?</p> <p>My current approach is to use service account, where I created two service accounts, SA1 and SA2, which are used by <strong>A</strong> and <strong>B</strong> respectively, and my service <strong>D</strong> is registered as token reviewer. Both <strong>A</strong> and <strong>B</strong> need to read the service account token and submit it with the request to <strong>D</strong>. Through reviewing the token, <strong>D</strong> can tell if the requests are from <strong>A</strong> or <strong>B</strong>. This works, but I'm not sure if this is the best way of achieving this. Since each deployment can only use one service account, this may increase the complexity of service account management and token reviewing complexity if we get more services.</p> <p>I went through kubernetes documents, e.g. RBAC, ABAC, node authorization and etc... but seems these are all designed for authorization of cluster API access, rather than the authorisation of services running in the cluster.</p> <p>azure seems have a solution <a href="https://github.com/Azure/aad-pod-identity" rel="nofollow noreferrer">https://github.com/Azure/aad-pod-identity</a> which fulfills my requirement, but it also has dependencies to other deployments (Active directory), which I don't think we will have in our cluster.</p>
<p>What you have is correct: service account tokens are the way to do this, and other systems generally build on top of them. Most service mesh tools do offer simpler systems for identity, though for something this small they would likely be overkill.</p>
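<p>For reference, the review D performs is just a TokenReview against the API; a sketch of the object it submits (the returned <code>status.user.username</code> will look like <code>system:serviceaccount:&lt;namespace&gt;:&lt;serviceaccount&gt;</code>, which is what lets D tell A and B apart):</p> <pre><code>apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "&lt;token presented by the calling pod&gt;"
</code></pre>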
<p>I'm currently following a course on udemy called <code>Microservices with Node JS and React</code> from Stephen Grider, and I've come to a part where I need to run a command:</p> <pre><code>kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system </code></pre> <p>And this command is producing this error:</p> <pre><code>Error from server (NotFound): deployments.apps &quot;ingress-nginx-controller&quot; not found </code></pre> <p>when I run the command <code>kubectl get deployments</code> I do not see an ingress-nginx-controller deployment so I tried <code>kubectl get namespace</code> and I saw then entry <code>ingress-nginx</code> from that so I then tried <code>kubectl get deployments -n ingress-nginx</code> and then I finally see <code>ingress-nginx-controller</code> from output of that command. So I now know where the ingress-nginx-controller is but I am still pretty clueless as to how i get the initial command of <code>kubectl explose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system</code> to work i've been stuck on this for a long time now any help is appreciated, thanks.</p> <p><strong>Edit 1</strong>: this is probably not relevant but I also tried putting <code>ingress-nginx</code> after the -n instead of kube-system and it did not work</p> <p>Also I am using <code>minikube</code> on ubuntu</p> <p><strong>Edit 2</strong>: this is a <a href="https://i.stack.imgur.com/zpif3.png" rel="nofollow noreferrer">screenshot</a> of what the course wants me to do because I'm running minikube</p>
<p>The first time you ran it (with the correct namespace) it worked and you probably didn't notice. Your tutorial seems to be fairly out of date, you might want to find a newer one. If you want to remove the previously created service and do it again, <code>kubectl delete service -n ingress-nginx ingress-nginx-controller</code>.</p>
<p>The Openshift documentation is absolutely abysmal. I can't find direct documentation for any of the objects that are available.</p> <p>I did find a section in the Kubernetes docs that seems to describe the ability to do something like this... </p> <p><a href="https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/</a></p> <p>But it wasn't super clear how this translates to OoenShift, or how to actually implement this IPVS mode for a service. </p>
<p>Answered on Slack, but the short version is that it is not an option for this user given their situation.</p> <p>For others: IPVS does support this, but it is enabled and configured at a global level (it is a kube-proxy mode, not a per-Service setting). A better option is usually a userspace proxy, often via the Ingress system.</p>
<p>We have a docker image that is processing some files on a samba share.</p> <p>For this we created a cifs share which is mounted to /mnt/dfs and files can be accessed in the container with:</p> <pre><code>docker run -v /mnt/dfs/project1:/workspace image </code></pre> <p>Now what I was aked to do is get the container into k8s and to acces a cifs share from a pod a cifs Volume driver usiong FlexVolume can be used. That's where some questions pop up.</p> <p>I installed this repo as a daemonset</p> <p><a href="https://k8scifsvol.juliohm.com.br/" rel="nofollow noreferrer">https://k8scifsvol.juliohm.com.br/</a></p> <p>and it's up and running.</p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: name: cifs-volumedriver-installer spec: selector: matchLabels: app: cifs-volumedriver-installer template: metadata: name: cifs-volumedriver-installer labels: app: cifs-volumedriver-installer spec: containers: - image: juliohm/kubernetes-cifs-volumedriver-installer:2.4 name: flex-deploy imagePullPolicy: Always securityContext: privileged: true volumeMounts: - mountPath: /flexmnt name: flexvolume-mount volumes: - name: flexvolume-mount hostPath: path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ </code></pre> <p>Next thing to do is add a PeristentVolume, but that needs a capacity, 1Gi in the example. Does this mean that we lose all data on the smb server? Why should there be a capacity for an already existing server?</p> <p>Also, how can we access a subdirectory of the mount /mnt/dfs from within the pod? So how to access data from /mnt/dfs/project1 in the pod?</p> <p>Do we even need a PV? Could the pod just read from the host's mounted share?</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mycifspv spec: capacity: storage: 1Gi flexVolume: driver: juliohm/cifs options: opts: sec=ntlm,uid=1000 server: my-cifs-host share: /MySharedDirectory secretRef: name: my-secret accessModes: - ReadWriteMany </code></pre>
<p>No, that field has no effect on the FlexVol plugin you linked. It doesn't even bother parsing out the size you pass in :)</p>
<p>Following <a href="https://stackoverflow.com/questions/31693529/how-to-share-storage-between-kubernetes-pods">this post</a> and <a href="https://stackoverflow.com/questions/58683486/share-storage-between-worker-nodes-in-kubernetes">this</a>, here's my situation:<br /> Users upload images to my backend, setup like so: LB -&gt; Nginx Ingress Controller -&gt; Django (Uwsgi). The image eventually will be uploaded to Object Storage. Therefore, Django will temporarily write the image to the disk, then delegate the upload task to a async service (DjangoQ), since the upload to Object Storage can be time consuming. Here's the catch: since my Django replicas and DjangoQ replicas are all separate pods, the file is not available in the DjangoQ pod. Like usual, the task queue is managed by a redis broker and any random DjangoQ pod may consume that task.<br /> <strong>I need a way to share the disk file created by Django with DjangoQ.</strong></p> <p>The above mentioned posts basically mention two solutions:<br /> -solution 1: NFS to mount the disk on all pods. It kind of seems like an overkill since the shared volume only stores the file for a few seconds until upload to Object Storage is completed.<br /> -solution 2: the Django service should make the file available via an API, which DjangoQ would use to access the file from another pod. This seems nice but I have no idea how to proceed... should I create a second Django/uwsgi app as a side container which would listen to another port and send an HTTPResponse with the file? Can the file be streamed?</p>
<p>Third option: don't move the file data through your app at all. Have the user upload it directly to object storage. This usually means making an API which returns a pre-signed upload URL that's valid for a few minutes; the user uploads the file, then makes another call to let you know the upload is finished. Then your async task can download it and do whatever.</p> <p>Otherwise you have the two options right. For option 2, an internal Minio server is pretty common, since Django is very slow at serving large file blobs.</p>
<p>I am seeing a continuous 8 to 15% CPU usage on Rancher related processes while there is not a single cluster being managed by it. Nor is any user interacting with. What explains this high CPU usage when idle? Also, there are several "rancher-agent" containers perpetually running and restarting. Which does not look right. There is no Kubernetes cluster running on this machine. This machine (unless Rancher is creating its own single node cluster for whatever reason).</p> <p>I am using Rancher 2.3</p> <p>docker stats: <a href="https://i.stack.imgur.com/O05ee.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O05ee.png" alt="docker stats output"></a></p> <p>docker ps: <a href="https://i.stack.imgur.com/QgWfv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QgWfv.png" alt="enter image description here"></a></p> <p>htop: <a href="https://i.stack.imgur.com/x67jH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x67jH.png" alt="htop output"></a></p> <p><a href="https://i.stack.imgur.com/kxoXq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kxoXq.png" alt="Rancher version info"></a></p>
<p>I'm not sure I would call 15% "high", but Kubernetes has a lot of ongoing stuff even if it looks like the cluster is entirely quiet. Stuff like processing node heartbeats, etcd election traffic, controllers with time-based conditions which have to be processed. K3s probably streamlines that a bit, but 0% CPU usage is not a design goal even in the fork.</p>
<p>I'm using docker for windows on my local laptop, and I'm trying to mimic a dev installation of kubernetes by using the &quot;run kubernetes' setting on the same laptop. One thing that's awkward is the docker registry. I have a docker registry container running in-cluster that I can push to no problem from the laptop, but when the docker-for-windows kubernetes controller needs to 'pull' an image, I'm not sure how to reference the registry: I've tried referencing the registry using the laptops netbios name, with various DNS suffixes, but it doesn't seem to work.</p> <p>Is there a way I can accomplish this?</p>
<p>You would use the internal cluster DNS, as managed by the Service object you probably created for the registry. All Services are available inside the cluster via <code>$name.$namespace.svc.cluster.local</code> (technically <code>cluster.local</code> is the cluster domain however this is the default and most common value by far).</p>
<p>I am trying to install <a href="https://www.weave.works/blog/install-fluxctl-and-manage-your-deployments-easily" rel="nofollow noreferrer">fluxctl</a> on my WSL (Ubuntu 18.04). I saw the official recommendation to install on Linux is through <a href="https://snapcraft.io/fluxctl" rel="nofollow noreferrer">snapcraft</a> but WSL flavors in general does not support snap yet. </p> <p>I know the other option is to compile from source or download binary. Is there another way to install fluxctl on WSL through a package/application manager? </p>
<p>You could check if someone had made a PPA but it seems unlikely. Also FWIW they publish Windows binaries too, right next to the Linux ones.</p>
<p>I've got a Kubernetes cluster with nginx ingress setup for public endpoints. That works great, but I have one service that I don't want to expose to the public, but I do want to expose to people who have vpc access via vpn. The people who will need to access this route will not have kubectl setup, so they can't use <code>port-forward</code> to send it to localhost.</p> <p>What's the best way to setup ingress for a service that will be restricted to only people on the VPN?</p> <p>Edit: thanks for the responses. As a few people guessed I'm running an EKS cluster in AWS.</p>
<p>It depends a lot on your Ingress Controller and cloud host, but roughly speaking you would probably set up a second copy of your controller using an internal load balancer service rather than a public LB, and then restrict that service and/or ingress to only allow connections from the VPN's IP range.</p>
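<p>As a rough sketch for EKS (the annotation and CIDR here are assumptions, and the exact annotation depends on whether you're on the legacy in-tree AWS provider or the AWS Load Balancer Controller, which uses <code>service.beta.kubernetes.io/aws-load-balancer-scheme: internal</code> instead):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-internal
  namespace: ingress-nginx
  annotations:
    # legacy in-tree provider annotation; see the lead-in for the newer controller
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 10.0.0.0/16            # placeholder: your VPN/VPC CIDR
  selector:
    app.kubernetes.io/name: ingress-nginx-internal   # must match your second controller's pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>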
<p>My Docker container running in a <code>minikube</code> pod has configured a directory mounted from the host's <strong>non-empty</strong> <code>/home/my_username/useful/dir</code>. <code>kubectl</code> shows what I expect:</p> <pre><code>$ kubectl get --namespace=my_namespace pods/my-pod -o json | jq '.spec.volumes[3]' { &quot;hostPath&quot;: { &quot;path&quot;: &quot;/hosthome/my_username/useful/dir&quot;, &quot;type&quot;: &quot;Directory&quot; }, &quot;name&quot;: &quot;useful_dir&quot; } $ kubectl get --namespace=my_namespace pods/my-pod -o json | jq '.spec.containers[].volumeMounts[3]' { &quot;mountPath&quot;: &quot;/dir/in/container&quot;, &quot;name&quot;: &quot;useful_dir&quot;, &quot;readOnly&quot;: true } </code></pre> <p>But in the pod, the mountpoint is empty:</p> <pre><code>$ kubectl exec --stdin --tty --namespace my_namespace my-pod -- ls /dir/in/container total 0 </code></pre> <p>I looked at the pod's mountpoint with <code>kubectl exec --stdin --tty --namespace my_namespace my-pod -- findmnt /dir/in/container</code>, and see <code>overlay[/hosthome/my_username/useful/dir]</code>. From this, I conclude that Docker has mounted the directory from the host as expected.</p> <p>I check the mountpoint directly from a pod's container (as root to make sure there is no permission restriction in the way):</p> <pre><code>$ docker exec -it -u root minikube /bin/bash root@minikube:/# docker exec -it -u root &lt;container_id&gt; ls /dir/in/container root@minikube:/# </code></pre> <p>It does not have any content which is present in the host.</p> <p>What should I look for to investigate?</p>
<p>Issue solved in the comments: the driver was running dockerd inside a container itself, so it didn't have a global filesystem view. Solved via <code>minikube mount</code>.</p>
<p>There is a Kubernetes cluster that I am not really familiar with. I need to set up backups with Velero. It is possible that velero has been installed on the cluster by someone else. How do I make sure it has or has not been previously installed before I install it?</p>
<pre><code>kubectl get pods --all-namespaces | grep velero </code></pre> <p>That’s an easy place to start at least.</p>
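<p>A couple of other quick checks, since Velero also installs cluster-scoped pieces even if someone used a non-default namespace (the grep patterns just assume the default naming):</p>
<pre><code>kubectl get namespaces | grep velero
kubectl get crds | grep velero.io
kubectl get deployments --all-namespaces | grep velero
</code></pre>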
<p>I applied my PVC yaml file to my GKE cluster and checked it's state. It says the follwing for the yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"teamcity","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"3Gi"}}}} volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd creationTimestamp: "2019-11-05T09:45:20Z" finalizers: - kubernetes.io/pvc-protection name: teamcity namespace: default resourceVersion: "1358093" selfLink: /api/v1/namespaces/default/persistentvolumeclaims/teamcity uid: fb51d295-ffb0-11e9-af7d-42010a8400aa spec: accessModes: - ReadWriteMany dataSource: null resources: requests: storage: 3Gi storageClassName: standard volumeMode: Filesystem status: phase: Pending </code></pre> <p>I did not created anything like a storage or whatever needs to be done for that? Because I read it as this is provided automatically by the GKE. Any idea what I am missing?</p>
<p>GKE includes default support for GCP disk PV provisioning, however those implement ReadWriteOnce and ReadOnlyMany modes. I do not think GKE includes a provisioner for ReadWriteMany by default.</p> <p>EDIT: While it's not set up by default (because it requires further configuration) <a href="https://stackoverflow.com/questions/54796639/how-do-i-create-a-persistent-volume-claim-with-readwritemany-in-gke">How do I create a persistent volume claim with ReadWriteMany in GKE?</a> shows how to use Cloud Filestore to launch a hosted NFS-compatible server and then aim a provisioner at it.</p>
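<p>If you go the Cloud Filestore route, the rough shape is an NFS-backed PersistentVolume that your claim binds to. A minimal sketch, assuming a Filestore instance already exists (IP, share path and size are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: teamcity-nfs
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.2          # Filestore instance IP (placeholder)
    path: /teamcity_share     # Filestore share name (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: teamcity
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""        # bind to the pre-created PV, not the default provisioner
  volumeName: teamcity-nfs
  resources:
    requests:
      storage: 3Gi
</code></pre>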
<p>I have a <code>nginx conf</code> like below</p> <pre><code>map $http_upgrade $connection_upgrade { default upgrade; '' close; } server { listen 80 default_server; access_log off; return 200 'Hello, World! - nginx\n'; } server { listen 80; server_name ~^(dev-)?(?&lt;app&gt;[^.]+)\.mysite\.com$; access_log off; location / { resolver 127.0.0.11; proxy_set_header Host $host; proxy_pass http://${app}-web; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } } </code></pre> <p>I expected that redirecting</p> <p><code>dev-blog.mysite.com</code> into service <code>blog-web</code></p> <p><code>dev-market.mysite.com</code> into service <code>market-web</code></p> <p>and so on</p> <p>Is there any way to implement this in k8s ingress-nginx?</p>
<p>No, you would make a separate Ingress object for each (or one huge one, but that's less common). Usually this is semi-automated through either Helm charts or custom controllers.</p>
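<p>For example, the per-app Ingress for the blog service might look roughly like this, assuming the <code>networking.k8s.io/v1</code> API and that <code>blog-web</code> exposes port 80 (adjust to whatever your Service actually uses):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-web
spec:
  ingressClassName: nginx
  rules:
  - host: dev-blog.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog-web
            port:
              number: 80
</code></pre>
<p>A second, nearly identical object then points <code>dev-market.mysite.com</code> at <code>market-web</code>; a Helm template or a small controller usually stamps these out per app.</p>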
<p>I've setup a small Kubernetes cluster at home using 4 RPIs. Now I'm at the step where I want to try and deploy some stuff to it but it's not working.</p> <p>I've created a small Flask application for testing purposes:</p> <pre><code>from flask import Flask app = Flask(__name__) @app.route("/") def test(): return {"Hello": "World"} if __name__ == "__main__": app.run(debug=True, host="0.0.0.0") </code></pre> <p>The Dockerfile for it:</p> <pre><code>FROM python:3.8-alpine COPY . /app WORKDIR /app RUN pip install -r requirements.txt CMD ["python", "app.py"] </code></pre> <p>I'm building it and pushing it to a registry in DockerHub.</p> <p>Then I'm setting up a deployment.yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: test-api spec: replicas: 3 selector: matchLabels: app: test-api template: metadata: labels: app: test-api spec: containers: - name: test-api image: gurkmeja101/pi:latest resources: limits: memory: "128Mi" cpu: "500m" ports: - containerPort: 5000 --- apiVersion: v1 kind: Service metadata: name: test-api spec: selector: app: test-api ports: - port: 5000 targetPort: 5000 type: NodePort </code></pre> <p>Running <code>kubectl apply -f deployment.yaml</code> I get the following outputs:</p> <pre><code>deployment.apps/test-api created service/test-api created PS C:\projects\Python\test&gt; kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 40h test-api NodePort 10.105.100.68 &lt;none&gt; 5000:31409/TCP 45s PS C:\projects\Python\test&gt; kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE test-api 0/3 3 0 50s PS C:\projects\Python\test&gt; kubectl get pods NAME READY STATUS RESTARTS AGE test-api-649b447666-97x2r 0/1 CrashLoopBackOff 3 88s test-api-649b447666-bmmld 0/1 CrashLoopBackOff 3 88s test-api-649b447666-scnzz 0/1 CrashLoopBackOff 3 88s </code></pre> <p>Describing a failed pod gives me:</p> <pre><code>Name: test-api-649b447666-97x2r Namespace: default Priority: 0 Node: k8s-worker-02/192.168.1.102 Start Time: Wed, 18 Mar 2020 09:05:34 +0100 Labels: app=test-api pod-template-hash=649b447666 Annotations: &lt;none&gt; Status: Running IP: 10.244.2.18 IPs: IP: 10.244.2.18 Controlled By: ReplicaSet/test-api-649b447666 Containers: test-api: Container ID: docker://1418404c27fc5a1a8ef7b557c495a7fbf8f8907ef1dd4d09b4ad5dae02d98b33 Image: gurkmeja101/pi:latest Image ID: docker-pullable://gurkmeja101/pi@sha256:c2bca364aab8f583c3ed0e64514112475d3e8c77f5dfab979929c5e4b8adb43b Port: 5000/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 18 Mar 2020 09:07:10 +0100 Finished: Wed, 18 Mar 2020 09:07:10 +0100 Ready: False Restart Count: 4 Limits: cpu: 500m memory: 128Mi Requests: cpu: 500m memory: 128Mi Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-7rmwv (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-7rmwv: Type: Secret (a volume populated by a Secret) SecretName: default-token-7rmwv Optional: false QoS Class: Guaranteed Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 119s default-scheduler Successfully assigned default/test-api-649b447666-97x2r to k8s-worker-02 Normal Pulled 24s (x5 over 116s) kubelet, k8s-worker-02 Container 
image "gurkmeja101/pi:latest" already present on machine Normal Created 24s (x5 over 116s) kubelet, k8s-worker-02 Created container test-api Normal Started 23s (x5 over 115s) kubelet, k8s-worker-02 Started container test-api Warning BackOff 21s (x10 over 112s) kubelet, k8s-worker-02 Back-off restarting failed container </code></pre> <p>I can run the container without any issues using <code>docker run -d -p 5000:5000 pi:latest</code>.</p> <p>Any and all help regarding this is greatly appreciated!</p> <p><strong>logs</strong> running <code>kubectl logs test-api-649b447666-97x2r</code>results in: <code>standard_init_linux.go:211: exec user process caused "exec format error"</code></p>
<p>RaspberryPis use ARM CPUs. Your container images are built for x86_64. These are not compatible. You'll need to specifically build your images for ARM. There are many ways to do that, Docker's official tool is <code>buildx</code> I think. Check out <a href="https://www.docker.com/blog/multi-arch-images/" rel="nofollow noreferrer">https://www.docker.com/blog/multi-arch-images/</a> for their guide or just search around if you want to use a different toolchain.</p>
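<p>As a sketch of the buildx route (assuming a recent Docker with buildx available; use only <code>linux/arm64</code> if your Pi OS is 64-bit, or <code>linux/arm/v7</code> for 32-bit):</p>
<pre><code>docker buildx create --use          # one-time: create a builder that can cross-build
docker buildx build --platform linux/arm/v7,linux/arm64 \
  -t gurkmeja101/pi:latest --push .
</code></pre>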
<p>Are there queries which will allow us to track kube events and get notified if there are any issues with the pods being scheduled or killed?</p>
<p>YAML is not a scripting language, it is a data markup language like JSON or XML. So no, but perhaps you meant to ask something else?</p>
<p>I have multiple nodes, which are lying mostly idle, but getting an error while scheduling pods/services saying &quot;Insufficient CPU&quot;</p> <p>Node usage output :-</p> <pre><code>top - 17:59:45 up 17 days, 2:52, 1 user, load average: 5.61, 7.85, 8.58 Tasks: 2030 total, 3 running, 1771 sleeping, 0 stopped, 0 zombie %Cpu(s): 6.5 us, 2.3 sy, 0.4 ni, 90.4 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st KiB Mem : 39616812+total, 29776403+free, 30507960 used, 67896128 buff/cache KiB Swap: 0 total, 0 free, 0 used. 35842112+avail Mem </code></pre> <p>As it can be seen, whole bunch of memory/cpu is lying idle (~ 80 to 90 % is free)</p> <p>Same can be confirmed by the fact :-</p> <pre><code>$ kubectl top nodes W0615 14:03:16.457271 108 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% node1 4680m 29% 14943Mi 47% node10 9524m 19% 44735Mi 11% node11 273m 0% 1614Mi 0% node12 289m 0% 1617Mi 0% node2 1736m 10% 11683Mi 37% node3 3223m 20% 17837Mi 56% node4 1680m 10% 15075Mi 47% node5 7386m 15% 39163Mi 10% node6 5392m 22% 26448Mi 20% node7 2457m 5% 28002Mi 7% node8 4853m 10% 51863Mi 13% node9 3620m 7% 18299Mi 4% </code></pre> <p>But when scheduling pods, getting an error (kubectl describe pod POD_NAME) :-</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 62m default-scheduler 0/12 nodes are available: 5 node(s) had taints that the pod didn't tolerate, 7 Insufficient cpu. </code></pre> <p>The reason I understand why this is happening is (kubectl descibe node node10) :-</p> <pre><code>Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 47887m (99%) 92270m (192%) memory 59753371Ki (15%) 87218649344 (21%) ephemeral-storage 2Gi (0%) 2Gi (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) </code></pre> <p>The pods have allocated all of the cpu resources across all the nodes</p> <p>Even though the actual usage is very low, k8s thinks the nodes are fully occupied. What I am trying to achieve is how to overcommit the resources ? I tried editing &quot;Allocatable&quot; cpu 2x times the &quot;Capacity&quot;, but changes don't persist. Any suggestion how can I overcommit ?</p>
<p>You cannot overcommit on requests, as those form the minimum required resources for a given pod to run. You can overcommit on limits, as you can see by your <code>192%</code> there.</p>
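<p>In practice that means lowering the <code>requests</code> (what the scheduler reserves against the node's allocatable CPU) while keeping <code>limits</code> higher. The numbers here are purely illustrative:</p>
<pre><code>resources:
  requests:
    cpu: 100m        # counted by the scheduler; keep this close to typical real usage
    memory: 256Mi
  limits:
    cpu: "2"         # burst ceiling; the sum of limits may exceed the node's capacity
    memory: 1Gi
</code></pre>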
<p>I am trying to get the CPU usage from the <code>/apis/events.k8s.io/v1beta1</code> endpoint in the kubernetes internal api. </p> <p>I run the following command</p> <p><code>kubectl proxy --port=8080</code></p> <p>Then load the url <a href="http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods" rel="nofollow noreferrer">http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods</a> and get a response similar to this one</p> <pre><code>{ "kind": "PodMetricsList", "apiVersion": "metrics.k8s.io/v1beta1", "metadata": { "selfLink": "/apis/metrics.k8s.io/v1beta1/pods" }, "items": [ { "metadata": { "name": "name-of-the-container-667656d796-p586s", "namespace": "namespace-name", "selfLink": "/apis/metrics.k8s.io/v1beta1/pods/name-of-the-container-667656d796-p586s", "creationTimestamp": "2019-11-20T21:34:02Z" }, "timestamp": "2019-11-20T21:33:02Z", "window": "30s", "containers": [ { "name": "name-of-the-container", "usage": { "cpu": "350748682n", "memory": "238860Ki" } } ] } ] } </code></pre> <p>The cpu value is <code>350748682n</code>. From <a href="https://discuss.kubernetes.io/t/metric-server-cpu-and-memory-units/7497" rel="nofollow noreferrer">this discussion</a> <code>n</code> is "1/1000000000 (1 billionth) of a cpu"</p> <p>I am also seeing values like <code>14513u</code></p> <p>I have reviewed the <a href="https://kubernetes.io/docs/reference/glossary/?all=true#term-quantity" rel="nofollow noreferrer">quantity</a> definition but do not see anything referencing <code>u</code></p> <p>What are all the possible units used to report this metric? </p>
<p><code>u</code> is a simplification of the lowercase Greek mu (μ) which means 10^-6, aka "micro-cpus". The unit is always the same, it's in terms of CPU cores. Metrics-server tries to report in nano-cpus for maximum accuracy, but if the number won't fit in an int64, it will change the scaling factor until it fits.</p>
<p>I want to create a <code>serviceaccount</code> in Kubernetes with no permissions.</p> <p>However, creating a new <code>serviceaccount</code> as follows results in a privileged <code>serviceaccount</code>, <code>sa</code>, that is able to e.g. retrieve pod information:</p> <pre><code>kubectl create serviceaccount sa -n devns nlykkei:~/projects/k8s-examples$ kubectl get pods --as=system:serviceaccount:devns:sa -v6 I0318 16:12:34.161300 3466 loader.go:359] Config loaded from file: /Users/nlykkei/.kube/config I0318 16:12:34.179023 3466 round_trippers.go:438] GET https://kubernetes.docker.internal:6443/api/v1/namespaces/default/pods?limit=500 200 OK in 11 milliseconds I0318 16:12:34.179299 3466 get.go:564] no kind "Table" is registered for version "meta.k8s.io/v1beta1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30" No resources found. nlykkei:~/projects/k8s-examples$ kubectl auth can-i --list --as=system:serviceaccount:devns:sa Resources Non-Resource URLs Resource Names Verbs *.* [] [] [*] [*] [] [*] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.k8s.io [] [] [create] [/api/*] [] [get] [/api] [] [get] [/apis/*] [] [get] [/apis] [] [get] [/healthz] [] [get] [/healthz] [] [get] [/openapi/*] [] [get] [/openapi] [] [get] [/version/] [] [get] [/version/] [] [get] [/version] [] [get] [/version] [] [get] </code></pre> <p>How can I create a <code>serviceaccount</code> with no permissions initially?</p>
<p>Docker Desktop (Docker for Mac) Kubernetes injects a ClusterRoleBinding called docker-for-desktop-binding which does indeed give all perms to everything.</p>
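<p>If you want to look at it (or remove it) yourself: note that deleting it strips the blanket permissions from every service account in the cluster, which is probably what you want here but may surprise other workloads.</p>
<pre><code>kubectl get clusterrolebinding docker-for-desktop-binding -o yaml
# kubectl delete clusterrolebinding docker-for-desktop-binding   # removes the catch-all grant
</code></pre>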
<p>I have a dockerfile where I am trying to copy everything in Github to dockerfile and build it as an image. I have a file called config.json which contains sensitive user data such as username and password. This will also be copied. The issue here is, I want this data to be encrypted and passed onto the dockerfile. While the image is being deployed onto kubernetes, I want this data to be decrypted back again. Can anyone please suggest an ideal method of doing this.</p>
<p>You shouldn't put this in the container image at all. Use a tool like Sealed Secrets, Lockbox, or sops-operator to encrypt the values separately, and then those get decrypted into a Secret object in Kubernetes which you can mount into your container as a volume so the software sees the same <code>config.json</code> file but it's stored externally.</p>
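<p>Once the encrypted value has been turned back into an ordinary Secret in the cluster (whichever of those tools you pick), mounting it so the app still sees <code>config.json</code> looks roughly like this (all names and paths are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:latest   # placeholder image
        volumeMounts:
        - name: app-config
          mountPath: /etc/myapp          # config.json appears at /etc/myapp/config.json
          readOnly: true
      volumes:
      - name: app-config
        secret:
          secretName: myapp-config       # Secret holding a "config.json" key
</code></pre>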
<p>Today I have started to learn about Kubernetes because I have to use it in a project. When I came to the Service object, I started to learn what is the difference between all the different types of ports that can be specified. I think now I undertand it. </p> <p>Specifically, the <strong>port</strong> (spec.ports.port) is the port from which the service can be reached inside the cluster, and <strong>targetPort</strong> (spec.ports.targetPort) is the port that an application in a container is listening to. </p> <p>So, if the service will always redirect the traffic to the targetPort, why is it allowed to specify them separately? In which situations would it be necessary? </p>
<p>The biggest use is with LoadBalancer services where you want to expose something on (usually) 80 or 443, but don't want the process to run as root so it's listening on 8080 or something internally. This lets you map things smoothly.</p>
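<p>A typical example of that mapping, assuming the app listens on 8080 inside the container:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: http
    port: 80          # port clients hit on the Service / load balancer
    targetPort: 8080  # port the non-root process actually listens on in the pod
</code></pre>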
<p>We have created common helm charts. Using the common charts, we have derived HelloWorld helm chart</p> <pre><code>Charts Common templates &gt; _deployment.yaml &gt; _configmap.yaml &gt; _service.yaml Chart.yaml HelloWorld templates &gt; deployment.yaml &gt; configmap.yaml &gt; service.yaml Chart.yaml values.yaml values-dev.yaml </code></pre> <p>We wanted to override values specified values.yaml (subchart) using values-dev.yaml , We understand we can override the values from the subchart. The values can be overrided.</p> <p>However, we wanted to override the values for chart level instead of app level. Below is the structure.</p> <pre><code>Charts Common templates &gt; _deployment.yaml &gt; _configmap.yaml &gt; _service.yaml Chart.yaml HelloWorld1 templates &gt; deployment.yaml &gt; configmap.yaml &gt; service.yaml Chart.yaml values-HelloWorld1.yaml values-dev.yaml HelloWorld2 templates &gt; deployment.yaml &gt; configmap.yaml &gt; service.yaml Chart.yaml values-HelloWorld2.yaml values-qa.yaml values.yaml </code></pre> <p>Is it possible to override the values from values.yaml?</p>
<p>I'm not 100% sure what you're asking, but in general you can override subchart values at any point by putting them under a key matching the charts name. So something like:</p> <pre><code>Common: foo: bar </code></pre>
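<p>So in your layout, a <code>values-dev.yaml</code> passed with <code>-f</code> could override values consumed by the <code>Common</code> subchart like this (the keys are made up for illustration and have to match whatever your common templates actually read):</p>
<pre><code># values-dev.yaml (hypothetical keys)
Common:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
replicaCount: 2        # top-level keys still apply to the parent chart itself
</code></pre>
<p>Then something like <code>helm upgrade --install helloworld ./HelloWorld -f HelloWorld/values.yaml -f HelloWorld/values-dev.yaml</code>, where later <code>-f</code> files win on conflicting keys.</p>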
<p>What is the purpose of args if one could specify all arguments using command in kubernetes manifest file? for example i can use below syntax which totally negates the usage of the args.</p> <pre><code>command: [ &quot;bin/bash&quot;, &quot;-c&quot;, &quot;mycommand&quot; ] </code></pre> <p>or also</p> <pre><code>command: - &quot;bin/bash&quot; - &quot;-c&quot; - &quot;mycommand&quot; </code></pre>
<p>The main reason to use <code>args:</code> instead of <code>command:</code> is if the container has a specific entrypoint directive that you don't want to change. For example if the Dockerfile has <code>ENTRYPOINT [&quot;/myapp&quot;]</code> you might put <code>args: [--log-level=debug]</code> to add that one argument without changing the path to the binary. In many cases it isn't relevant though and you just use <code>command:</code> to do it all at once.</p>
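<p>A small sketch of the difference, assuming an image whose Dockerfile already has <code>ENTRYPOINT ["/myapp"]</code>:</p>
<pre><code># keeps the image's ENTRYPOINT (/myapp) and only appends arguments
containers:
- name: myapp
  image: registry.example.com/myapp:latest   # placeholder
  args: ["--log-level=debug"]

# versus replacing the entrypoint entirely:
# containers:
# - name: myapp
#   image: registry.example.com/myapp:latest
#   command: ["/bin/bash", "-c", "/myapp --log-level=debug"]
</code></pre>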
<p>In Kubernetes field selectors are limited to certain fields for each resource Kind. But almost every resource has field selector for name and namespace on metadata If so why there's a need to have a separate label selector.</p> <pre><code>labels: { app: foo } </code></pre> <p>Instead of querying <code>kubectl get pods -l app=foo</code>, why couldn't it be part of generic field selector like:<br/> <code>kubectl get pods --field-selector metadata.labels.app=foo</code> </p>
<p>Short answer: because etcd is not optimized for general purpose querying and so Kubernetes has to pick and choose what to index and what not to. This is why both labels and annotations exist despite seeming very similar, labels are indexed for searching on and annotations are not.</p>
<p>I am implementing a Kubernetes based solution where I am autoscaling a deployment based on a dynamic metric. I am running this deployment with autoscaling capabilities against a workload for 15 minutes. During this time, pods of this deployment are created and deleted dynamically as a result of the deployment autoscaling decisions.</p> <p>I am interested in saving (for later inspection) of the logs of each of the dynamically created (and potentially deleted) pods occuring in the course of the autoscaling experiment.</p> <p>If the deployment has a label like <strong>app=myapp</strong> , can I run the below command to store all the logs of my deployment?</p> <pre><code>kubectl logs -l app=myapp &gt; myfile.txt </code></pre> <p>Any other more reliable suggestion (without the overhead of manual central logging solution) ? I am runnig on goole kubernetes engine GKE, Does the GKE keep the logs of deleted pods?</p> <p>Thank you.</p>
<p>Yes, by default GKE sends logs for <em>all</em> pods to Stackdriver and you can view/query them there.</p>
<p>I am getting below error while running kubeadm init :</p> <pre><code>[init] Using Kubernetes version: v1.16.2 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR Swap]: running with swap on is not supported. Please disable swap [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher </code></pre> <p><code>sudo swapoff -a swapoff: /swapfile: swapoff failed: Cannot allocate memory</code></p> <p>I am using Ubuntu VM in parallel desktop</p> <p><code>free -m command output below: $ free -m total used free shared buff/cache available Mem: 979 455 87 1 436 377 Swap: 2047 695 1352</code></p>
<p>You do not have enough RAM. Your machine is surviving because you use a swap file (i.e. using your hard drive for extra pseudo-RAM) but that is not supported by Kubernetes so it tried to turn that off which failed because you don't have enough RAM.</p>
<p>I have a kubernetes cluster that is currently running in the region europe-north1 and zone europe-north1-a. I want to move this cluster to the new region europe-west3 with the zone europe-west3-b to get access to nvidia-tesla-t4 accelerators.</p> <pre><code>gcloud compute accelerator-types list NAME ZONE DESCRIPTION nvidia-tesla-t4 europe-west3-b NVIDIA Tesla T4 </code></pre> <p>I tried to update the cluster via the gcloud CLI but the standard update command seems not to support this kind of operation.</p> <p><strong>Error: "Specified location "europe-west3-b" is not a valid zone in the cluster\'s region "europe-north1"."</strong></p> <pre><code>gcloud container clusters update cluster-1 \ --region europe-north1 \ --node-locations europe-west3-b Updating cluster-1... 30 .........................done. 31 ERROR: (gcloud.container.clusters.update) Operation [&lt;Operation 32 clusterConditions: [&lt;StatusCondition 33 message: u'Specified location "europe-west3-b" is not a valid zone in the cluster\'s region "europe-north1".'&gt;] </code></pre> <p>Is there any efficient way to move cluster between regions?</p>
<p>No, you can’t move things between regions at all, least of all an entire running cluster. You’ll need to back up your data and restore it onto a new cluster in the new region.</p>
<p>I'm running into an issue with an nginx ingress controller (ingress-nginx v0.44.0) on EKS where the X-Forwarded-* headers are set to the kubernetes worker node the controller pod is running on as opposed to the details of the request of the actual user hitting the controller itself. As we're terminating our SSL on the ingress controller this means the 'X-Forwarded-Proto' is set to 'http' instead of 'https' which causes issues on the application pods.</p> <p>I deployed a <a href="https://github.com/brndnmtthws/nginx-echo-headers" rel="nofollow noreferrer">test pod</a> which returns the headers it received to confirm the issue and I can see these headers having been received:</p> <pre><code>X-Forwarded-For: &lt;ip of the eks worker node&gt; X-Forwarded-Host: foo.bar.net X-Forwarded-Port: 8000 X-Forwarded-Proto: http </code></pre> <p>I was expecting these though:</p> <pre><code>X-Forwarded-For: &lt;ip of the origin of the original request&gt; X-Forwarded-Host: foo.bar.net X-Forwarded-Port: 443 X-Forwarded-Proto: https </code></pre> <p>Now, we do have an old legacy cluster running an older nginx ingress controller (nginx-ingress v0.34.1) which does actually behave as I expected, but I'm struggling to find how this has been configured to make it do this correctly. I did notice that the nginx.conf of this controller contains the 'full_x_forwarded_proto' variable identically as <a href="https://stackoverflow.com/questions/21230918/nginx-scheme-variable-behind-load-balancer/21911864#21911864">described here</a> but I can't find any place where this is configured as in a configmap or similar.</p> <pre><code>map $http_x_forwarded_proto $full_x_forwarded_proto { default $http_x_forwarded_proto; &quot;&quot; $scheme; } </code></pre> <p>Does anybody have any suggestions how I can configure nginx to send the correct 'X-Forwarded-*' headers?</p>
<p>It depends a lot on the exact networking setup in front of Nginx. By default, Kubernetes routes all external connections through the kube-proxy mesh which hides the true client IP. You also might have an AWS ELB of some kind in front of that which also can hide the client IP depending on settings.</p> <p>For the first part, see <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer</a> (tl;dr set <code>externalTrafficPolicy: Local</code>) but for the second you'll have to look at your specific load balancer setup.</p>
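<p>The first part is a one-field change on the Service that fronts the ingress controller (names assume a standard ingress-nginx install; adjust to yours):</p>
<pre><code>kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>
<p>With that set, traffic is only delivered to nodes that actually run a controller pod, but the client IP is preserved instead of being rewritten to a node address.</p>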
<p>I have application run inside the kuberentes pod that update the user configuration file and on every deployment it flush the data, as the file reside in a folder which cann't be mounted so I created the empty configmap to mount that file as configmap with subpath mounting and also set the defaultmode of file 777 but still my application is unable to update the content of the file.</p> <p>Is there way I can mount a file with read/write permission enable for all user so my application can update the file at runtime.</p>
<p>No, a configmap mount is read-only since you need to go through the API to update things. If you just want scratch storage that is temporary you can use an emptyDir volume but it sounds like you want this to stick around so check out the docs on persistent volumes (<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a>). There's a lot of options and complexity, you'll need to work out what is the best match for your use case.</p>
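<p>As a starting point, a claim plus the mount might look like this (storage size, class and paths are placeholders; use whatever your cluster actually provides):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: user-config
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# fragment for the pod/deployment spec:
#   volumeMounts:
#   - name: user-config
#     mountPath: /app/config          # the file your app updates lives here
#   volumes:
#   - name: user-config
#     persistentVolumeClaim:
#       claimName: user-config
</code></pre>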
<p>I have built a small, single user, internal service that stores data in a single JSON blob on disk (it uses tinydb) -- thus the service is not designed to be run on multiple nodes to ensure data consistency. Unfortunately, when I send API requests I get back inconsistent results -- it appears the API is writing to different on-disk files and thus returning inconsistent results (if I call the API twice for a list of objects, it will return one of two different versions).</p> <p>I deployed the service to Google Cloud (put it into a container, pushed to gcr.io). I created a cluster with a single node and deployed the docker image to the cluster. I then created a service to expose port 80. (Followed the tutorial here: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a>) </p> <p>I confirmed that only a single node and single pod was running:</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE XXXXX-2-69db8f8765-8cdkd 1/1 Running 0 28m </code></pre> <pre><code>kubectl get nodes NAME STATUS ROLES AGE VERSION gke-cluster-1-default-pool-4f369c90-XXXX Ready &lt;none&gt; 28m v1.14.10-gke.24 </code></pre> <p>I also tried to check if multiple containers might be running in the pod, but only one container of my app seems to be running (my app is the first one, with the XXXX):</p> <pre><code>kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default XXXXX-69db8f8765-8cdkd 1/1 Running 0 31m kube-system event-exporter-v0.2.5-7df89f4b8f-x6v9p 2/2 Running 0 31m kube-system fluentd-gcp-scaler-54ccb89d5-p9qgl 1/1 Running 0 31m kube-system fluentd-gcp-v3.1.1-bmxnh 2/2 Running 0 31m kube-system heapster-gke-6f86bf7b75-pvf45 3/3 Running 0 29m kube-system kube-dns-5877696fb4-sqnw6 4/4 Running 0 31m kube-system kube-dns-autoscaler-8687c64fc-nm4mz 1/1 Running 0 31m kube-system kube-proxy-gke-cluster-1-default-pool-4f369c90-7g2h 1/1 Running 0 31m kube-system l7-default-backend-8f479dd9-9jsqr 1/1 Running 0 31m kube-system metrics-server-v0.3.1-5c6fbf777-vqw5b 2/2 Running 0 31m kube-system prometheus-to-sd-6rgsm 2/2 Running 0 31m kube-system stackdriver-metadata-agent-cluster-level-7bd5779685-nbj5n 2/2 Running 0 30m </code></pre> <p>Any thoughts on how to fix this? I know "use a real database" is a simple answer, but the app is pretty lightweight and does not need that complexity. Our company uses GCloud + Kubernetes so I want to stick with this infrastructure.</p>
<p>Files written inside the container (i.e. not to a persistent volume of some kind) will disappear when the container is restarted for any reason. In fact you should really have the file permissions set up to prevent writing to files in the image except maybe /tmp or similar. You should use a GCE disk persistent volume and it will probably work better :)</p>
<p>I have network policy created and implemented as per <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-network-policy-recipes</a>, and its working fidn , however I would like to understand how exactly this gets implemeneted in the back end , how does network policy allow or deny traffic , by modifying the iptables ? which kubernetes componenets are involved in implementing this ?</p>
<p>"It depends". It's up to whatever controller actually does the setup, which is usually (but not always) part of your CNI plugin.</p> <p>The most common implementation is Calico's <a href="https://github.com/projectcalico/felix" rel="nofollow noreferrer">Felix daemon</a>, which supports several backends, but iptables is a common one. Other plugins use eBPF network programs or other firewall subsystems to similar effect.</p>
<p>I've got some doubts about deploying Spring Cloud Gateway (old Zuul) with Kubernetes and getting zero-downtime. I'm completely new to Kubernetes and I'm a bit lost with quite a lot of concepts.</p> <p>We would like to use the Spring Cloud Gateway verify the JWT. I've also read that when I've got a call, it should go first have gateway, afterwards the ribbon discovery and finally the REST services. </p> <p>The application has very strict zero-downtime requirements. My question is, what happens when I need to redeploy for some reason the Gateway? Is it possible to achieve the zero-downtime if it is my first component and I will have constantly traffic and request in my system</p> <p>Is there any other component I should set-up in order to archive this? The users that are having having access to my REST services shouldn't be disconnected abruptly.</p>
<p>Kubernetes Deployments use a rolling update model to achieve zero downtime deploys. New pods are brought up and allowed to become ready, then added to the rotation, then old ones are shut down, repeat as needed.</p>
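<p>You can make that behaviour explicit on the gateway's Deployment, e.g.:</p>
<pre><code>spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count during a rollout
      maxSurge: 1
</code></pre>
<p>Combine that with a readinessProbe (so new pods only receive traffic once they can actually serve it) and a graceful shutdown or preStop delay so in-flight requests finish before old pods are removed from the endpoints.</p>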
<p>I need to connect a service running in a local container inside Docker on my machine to a database that's running on a Kubernetes cluster.</p> <p>Everything I found on port forwarding allowed me to connect my machine to the cluster, but not the local container to the cluster (unless I install kubectl on my container, which I cannot do).</p> <p>Is there a way to do this?</p>
<p><a href="https://www.telepresence.io/" rel="nofollow noreferrer">https://www.telepresence.io/</a> is what you're looking for. It will hook into the cluster network like a VPN and patch the services so traffic will get routed through the tunnel.</p>
<p>OpenShift:</p> <p>I have the below MySQL Deployment</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mysql-master spec: selector: matchLabels: app: mysql-master strategy: type: Recreate template: metadata: labels: app: mysql-master spec: volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: ro-mstr-nfs-datadir-claim containers: - image: mysql:5.7 name: mysql-master env: - name: MYSQL_SERVER_CONTAINER value: mysql - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-secret key: MYSQL_ROOT_PASSWORD - name: MYSQL_DATABASE valueFrom: secretKeyRef: name: mysql-secret key: MYSQL_DATABASE - name: MYSQL_USER valueFrom: secretKeyRef: name: mysql-secret key: MYSQL_USER - name: MYSQL_PASSWORD valueFrom: secretKeyRef: name: mysql-secret key: MYSQL_PASSWORD ports: - containerPort: 3306 name: mysql-master volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql </code></pre> <p>I created a deployment using this yml file which created a deployment and pod which is successfully running.</p> <p>And I have a configmap</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: ro-mstr-mysqlinitcnfgmap data: initdb.sql: |- CREATE TABLE aadhaar ( name varchar(255) NOT NULL, sex char NOT NULL, birth DATE NOT NULL, death DATE NULL, id int(255) NOT NULL AUTO_INCREMENT, PRIMARY KEY (id) ); CREATE USER 'usera'@'%' IDENTIFIED BY 'usera'; GRANT REPLICATION SLAVE ON *.* TO 'usera' IDENTIFIED BY 'usera'; FLUSH PRIVILEGES; </code></pre> <p>Now I need to patch the above deployment using this configmap. I am using the below command</p> <pre><code>oc patch deployment mysql-master -p '{ "spec": { "template": { "spec": { "volumes": [ { "name": "ro-mysqlinitconf-vol", "configMap": { "name": "ro-mstr-mysqlinitcnfgmap" } } ], "containers": [ { "image": "mysql:5.7", "name": "mysql-master", "volumeMounts": [ { "name": "ro-mysqlinitconf-vol", "mountPath": "/docker-entrypoint-initdb.d" } ] } ] } } } }' </code></pre> <p>So the above command is successful, I validated the Deployment description and inside the container it placed the initdb.sql file successfully, and recreated the pod. But the issue is it has not created the aadhaar table. I think it has not executed the initdb.sql file from <code>docker-entrypoint-initdb.d</code>.</p>
<p>If you dive into the entrypoint script in your image (<a href="https://github.com/docker-library/mysql/blob/75f81c8e20e5085422155c48a50d99321212bf6f/5.7/docker-entrypoint.sh#L341-L350" rel="nofollow noreferrer">https://github.com/docker-library/mysql/blob/75f81c8e20e5085422155c48a50d99321212bf6f/5.7/docker-entrypoint.sh#L341-L350</a>) you can see it only runs the initdb.d files if it is also creating the database the first time. I think maybe you assumed it always ran them on startup?</p>
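<p>If you don't want to wipe the data directory (which is what would make the init scripts run again, and would also destroy the existing data), one workaround is to run the SQL once by hand against the already-initialised server, roughly like this (the deployment name matches yours, the local file path is a placeholder; on older kubectl use the pod name instead of <code>deploy/...</code>):</p>
<pre><code>kubectl exec -i deploy/mysql-master -- \
  sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' &lt; initdb.sql
</code></pre>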
<p>I have a Kubernetes cluster version 1.15.5 in Azure where I have installed <a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">cert-manager</a> version v0.14.0</p> <p>It works fine with automatically issuing lets encrypt certificates against a valid DNS name: <strong>MY_DOMAIN</strong> pointing to the external IP address of the ingress controller.</p> <p>I would also like to be able to do this same thing using e.g. <a href="https://certbot.eff.org" rel="nofollow noreferrer">certbot</a>. I have tried to run certbot on my cluster with:</p> <pre><code>kubectl run --generator=run-pod/v1 certbot-shell --rm -i --tty --image certbot/certbot:amd64-latest -- -d MY_DOMAIN --manual --preferred-challenges http certonly </code></pre> <p>But it fails with:</p> <pre><code>Create a file containing just this data: QAPu****-klNq1RBgY And make it available on your web server at this URL: http://MY_DOMAIN/.well-known/acme-challenge/QAPu****-klNq1RBgY - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Press Enter to Continue Waiting for verification... Challenge failed for domain MY_DOMAIN http-01 challenge for MY_DOMAIN Cleaning up challenges Some challenges have failed. IMPORTANT NOTES: - The following errors were reported by the server: Domain: MY_DOMAIN Type: unauthorized Detail: Invalid response from http://MY_DOMAIN/.well-known/acme-challenge/QAPuDTHa****1qlLLOg [13.x.x.x]: 404 To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain </code></pre> <p>So somehow <strong>cert-manager</strong> automatically takes care of creating the file during the challenge and making it available at:</p> <pre><code>http://MY_DOMAIN/.well-known/acme-challenge/QAPu****-klNq1RBgY </code></pre> <p>But I am not sure how I do that when using <strong>certbot</strong> or if there some other way to do this??</p> <p>Based on below suggestions I have tried to install Kube lego (0.1.2) instead (for legacy 1.8 cluster) but seems to fail with:</p> <pre><code>level=error msg="Error while processing certificate requests: 403 urn:acme:error:unauthorized: Account creation on ACMEv1 is disabled. Please upgrade your ACME client to a version that supports ACMEv2 / RFC 8555. See https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430 for details." context=kubelego </code></pre> <p>So I have tried to upgrade to:</p> <p><a href="https://acme-staging-v02.api.letsencrypt.org/directory" rel="nofollow noreferrer">https://acme-staging-v02.api.letsencrypt.org/directory</a></p> <p>but then I get:</p> <pre><code>level=error msg="Error while processing certificate requests: Head : unsupported protocol scheme \"\"" context=kubelego </code></pre> <p>I found:</p> <p><a href="https://github.com/jetstack/kube-lego/issues/301" rel="nofollow noreferrer">https://github.com/jetstack/kube-lego/issues/301</a></p> <p>So looks like kube-lego cannot be used with ACME version 2 :-(</p>
<p>Short version of the comments: certbot in DNS mode will probably work, HTTP01 will not since you would need to dynamically adjust Ingress settings, which is exactly what cert-manager does. Overall this is a great example of why running a version of Kube from 2.5 years ago is not good.</p>
<p>I am testing connecting an application running in an external docker container, to a database running in a separate kubernetes cluster. What is the best way to make this connection with security practices in mind.</p> <p>I am planning on creating an ingress service for the database in the kubernetes cluster. Then, when making the connection from the application, I should only need to add the ingress/service connection to be able to use this db, right?</p>
<p>Just like anything else, use TLS, make sure all hops are encrypted and verified. Unless your database of choice uses an HTTP-based protocol, Ingress won't help you. So usually this means setting up TLS at the DB level and exposing it with a LoadBalancer service.</p>
<p>I am beginner with Kubernetes. I tried to deploy Prometheus from helm and now I need to setup Ingress in internal network.</p> <p>I have problem with resolving Prometheus by hostname. If I use IP address I get it work but when I use syntax &quot;host&quot; it is 404 error. I don't know why is not resolved by hostname. I used kubespray for deploy Kubernetes.</p> <p>Could you help me, please?</p> <p>Ingress</p> <pre><code>--- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: prometheus-ingress namespace: monitoring annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: ingressClassName: nginx tls: - hosts: - prom.tipsport.it secretName: foo-tls rules: - host: prom.cluster.local - http: paths: - path: / pathType: Prefix backend: service: name: prometheus-kube-prometheus-prometheus port: number: 9090 Describe pod Name: prometheus-ingress Namespace: monitoring Address: 10.10.10.3,10.10.10.4,10.10.10.5 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) TLS: foo-tls terminates prom.tipsport.it Rules: Host Path Backends ---- ---- -------- * / prometheus-kube-prometheus-prometheus:9090 (10.233.66.116:9090) Annotations: nginx.ingress.kubernetes.io/enable-cors: true nginx.ingress.kubernetes.io/rewrite-target: / Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 7m36s (x8 over 61m) nginx-ingress-controller Scheduled for sync Normal Sync 7m35s (x8 over 61m) nginx-ingress-controller Scheduled for sync Normal Sync 7m34s (x8 over 61m) nginx-ingress-controller Scheduled for sync </code></pre>
<p>What you want is this:</p> <pre><code> rules: - host: prom.tipsport.it http: paths: - path: / pathType: Prefix backend: service: name: prometheus-kube-prometheus-prometheus port: number: 9090 </code></pre> <p>The <code>host:</code> field tells it which Host header to route where so it should be the public hostname. Also it should be in the same section as the <code>http:</code> field which gives further routing instructions. also you don't need the rewrite target annotation since no rewriting is needed.</p>
<p>How to run a helm hook based on a condition.</p> <p>What I want to solve: <br> I have created a postupgrade hook which will load some data to service A, which is created from a zipped subchart.<br> Right now it runs every time when an upgrade happened. I want it only run when the service A or the job itself has been upgraded.</p> <p>Is it possible on helm or k8s level?</p>
<p>Not really. Helm doesn't have enough information to know when that is the case.</p>
<p>I am not sure if I'm interpreting the output from my container correctly, but I am seeing the following output from sequelize in the logs:</p> <pre><code>Nates-MacBook-Pro:k8s natereed$ docker logs 1a3e6141d050 ... (node:36) UnhandledPromiseRejectionWarning: SequelizeConnectionError: password authentication failed for user "postgres " </code></pre> <p>It <em>appears</em> there is an extra newline character in the username, which should be "postgres". The database is configured with the environment variable $POSTGRESS_USERNAME (yes, I know it is mispelled, it is from another author).</p> <pre><code>src/config/config.ts: "username": process.env.POSTGRESS_USERNAME </code></pre> <p>I shelled into the running container and checked that the environment variables are correctly set:</p> <pre><code>root@backend-feed-75c4f97d6-9tp2f:/usr/src/app# echo $POSTGRESS_USERNAME postgres root@backend-feed-75c4f97d6-9tp2f:/usr/src/app# echo $POSTGRESS_PASSWORD ... root@backend-feed-75c4f97d6-9tp2f:/usr/src/app# echo $POSTGRESS_DB mydb ... </code></pre> <p>To create the secret and then apply, I ran:</p> <pre><code>echo "postgres" | openssl base64 (edit env-secret.yaml) kubectl apply -f env-secret.yaml  </code></pre> <p>The contents of the secret:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: env-secret type: Opaque data: POSTGRESS_USERNAME: cG9zdGdyZXMK POSTGRESS_PASSWORD: ... </code></pre> <p>Is this not the correct way to create the k8s secret?</p>
<p>The simple option:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: env-secret type: Opaque stringData: POSTGRESS_USERNAME: myapp POSTGRESS_PASSWORD: supersecret </code></pre> <p><code>stringData</code> takes plain strings instead of base-64 encoded <code>[]byte</code>s.</p>
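<p>For completeness, the stray newline comes from <code>echo</code> appending one before the base64 step; if you do want to keep using the <code>data:</code> field, encode the value like this instead:</p>
<pre><code>echo -n "postgres" | base64
# cG9zdGdyZXM=   (note: no trailing newline baked into the value)
</code></pre>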
<p>In Kubernetes deployments, you can specify volume mounts as readonly. Is there a performance advantage to it, or logical only?</p> <p>Is it dependant on the volume type?</p> <p>To make my intentions clear, I'm using a pv in a scenario where I have one writer and many readers, and noticed any fs operation on the mounted volume is much slower than on the volatile disk.</p>
<p>It entirely depends on the volume type. Some might implement performance optimizations when they know the volume is read only.</p>
<p>I'm reading through:</p> <p><a href="https://github.com/kubernetes/ingress-nginx/issues/8" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/8</a> <a href="https://github.com/kubernetes/kubernetes/issues/41881" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/41881</a></p> <p>But I can't seem to determine a conclusive answer. Does Kubernetes support wildcard domains in it's ingress or not? If not, what are the possible workaround approaches?</p> <p>At least for V1.18 it seems to be officially suported - though still dependent on the ingress controllers also supporting it. (<a href="https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/</a>) - though I still want to know about recent previous versions as well.</p>
<p>As you said, it’s up to each individual controller, but all of the ones I know of support it.</p>
<p>I am maintaining a K8s cluster and recently kubelet evicted pods many times on different nodes because of disk pressure. After investigation, I found out that the problem is the container log files at <code>/var/lib/docker/containers/.../*-json.log</code> and these files can grow to hundreds of Gi and consume all of the disk.</p> <p>I even face this when I was using a central logging stack consists of Kibana, Elasticsearch and Fluentbit. The fluentbit logs were around 500 Gi and after removing the central logging stack the disk pressure almost solved. But now I see it for some of my other components and its logs are consuming around 170 Gi.</p> <p>What is some of the best practices and tools for managing log files in k8s?</p>
<p>Every Kubernetes installer should include Logrotate to handle this. <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/</a> has some basic info but it depends on your exact configuration.</p> <p>EDIT: As I have now informed myself, Docker itself can also do log rotation directly so that's an option too.</p>
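<p>If your nodes run the Docker runtime with the default json-file driver, the cap can also be set in Docker itself. A typical <code>/etc/docker/daemon.json</code> (sizes are just examples, and the Docker daemon needs a restart to pick it up):</p>
<pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
</code></pre>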
<p>Kubernetes documentation describes pod as a <code>wrapper</code> around one or more containers. containers running <code>inside</code> of a pod share a set of namespaces (e.g. network) which makes me think <code>namespaces</code> are nested (I kind doubt that). What is the <code>wrapper</code> here from container runtime's perspective?</p> <p>Since containers are just processes constrained by <code>namespaces, Cgroups</code> e.g. Perhaps, pod is just the first <code>container</code> launched by Kubelet and the rest of containers are started and grouped by namespaces.</p>
<p>The main difference is networking, the network namespace is shared by all containers in the same Pod. Optionally, the process (pid) namespace can also be shared. That means containers in the same Pod all see the same <code>localhost</code> network (which is otherwise hidden from everything else, like normal for localhost) and optionally can send signals to processes in other containers.</p> <p>The idea is the Pods are groups of related containers, not really a wrapper per se but a set of containers that should always deploy together for whatever reason. Usually that's a primary container and then some sidecars providing support services (mesh routing, log collection, etc).</p>
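<p>The optional pid sharing is a single field on the pod spec; a minimal sketch with placeholder names and images:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true   # all containers in the pod see each other's processes
  containers:
    - name: app
      image: nginx
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
</code></pre>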
<p>I have multiple Kubernetes clusters that use different CNI plugins. While writing a CMDB agent, I want to use the Kubernetes API server to determine which CNI plugin a cluster uses, and then write that into my CMDB database.</p> <p>I'm using the Go language.</p>
<p>This isn't really a thing. You would have to write a separate detection mode for each CNI plugin. Additionally it's probably possible (if inadvisable) to have multiple plugins active on the same node as long as only one tries to configure each pod.</p>
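<p>Since there is no API that reports the CNI plugin, anything you build is a heuristic. One hedged sketch with client-go is to look for the well-known DaemonSets most plugins ship in <code>kube-system</code>; the name list and kubeconfig path below are assumptions for illustration only:</p> <pre><code>package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig path; in-cluster config would work the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// DaemonSet name fragments commonly used by popular CNI plugins (incomplete list).
	known := []string{"calico-node", "kube-flannel", "cilium", "weave-net", "aws-node"}

	ds, err := client.AppsV1().DaemonSets("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range ds.Items {
		for _, k := range known {
			if strings.Contains(d.Name, k) {
				fmt.Printf("cluster appears to run CNI component %q (DaemonSet %s)\n", k, d.Name)
			}
		}
	}
}
</code></pre>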
<p>I'm trying to use different secrets on a StatefulSet, based on the index of the pods. Here is what I tried:</p> <pre><code>env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: SECRET_KEY
    valueFrom:
      secretKeyRef:
        key: key
        name: mysecret-$(POD_NAME)
  - name: SECRET_HOST
    value: myhost-$(POD_NAME)
</code></pre> <p>However, <code>mysecret-$(POD_NAME)</code> is not correctly substituted as a parameter, while <code>myhost-$(POD_NAME)</code> works correctly.</p> <p>How can I solve this problem? The goal is to set different variables from secrets/configmaps on different replicas of the StatefulSet.</p>
<p>AFAIK this is not supported. The only volumes that can differ per pod are the PVs. Instead you would use a single Secret with keys based on the pod index, and write your software to read from the correct key.</p>
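<p>A hedged sketch of that pattern, with all names as placeholders: a single Secret keyed by pod name, mounted as a volume, and an entrypoint that picks the key matching the pod's own hostname (which for a StatefulSet is the pod name):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  myapp-0: secret-for-ordinal-0   # one key per StatefulSet ordinal
  myapp-1: secret-for-ordinal-1
</code></pre> <p>Then, in the StatefulSet's pod template, mount <code>mysecret</code> at e.g. <code>/etc/per-pod</code> and start the container with something like <code>sh -c 'export SECRET_KEY=$(cat /etc/per-pod/$HOSTNAME); exec myapp'</code>, so each replica reads only its own key.</p>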
<p>When I was labeling my worker nodes I accidentally labeled my master node as a worker.</p> <p><strong>kubectl label node node01 node-role.kubernetes.io/worker=worker</strong></p> <p>now my cluster looks like this:</p> <pre><code>NAME     STATUS   ROLES           AGE   VERSION
node01   Ready    master,worker   54m   v1.18.0
node02   Ready    worker          52m   v1.18.0
node03   Ready    worker          51m   v1.18.0
node04   Ready    worker          51m   v1.18.0
</code></pre> <p>How do I remove worker from my Master node?</p>
<p><code>kubectl label node node01 node-role.kubernetes.io/worker-</code>. The <code>-</code> tells it to remove the label.</p>
<p>Here is what I am working with. I have 3 nodepools on GKE </p> <ol> <li>n1s1 (3.75GB)</li> <li>n1s2 (7.5GB) </li> <li>n1s4 (15GB)</li> </ol> <p>I have pods that will require any of the following memory requests. Assume limits are very close to requests. </p> <pre><code>1GB, 2GB, 4GB, 6GB, 8GB, 10GB, 12GB, 14GB </code></pre> <p>How best can I associate a pod to a nodepool for max efficiency?</p> <p>So far I have 3 strategies.</p> <p>For each pod config, determine the <strong><em>“rightful nodepool”</em></strong>. This is the smallest nodepool that can accommodate the pod config in an ideal world. So for 2GB pod it's n1s1 but for 4GB pod it'd be n1s2.</p> <ol> <li>Schedule a pod only on its <em>rightful nodepool.</em> </li> <li>Schedule a pod only on its <em>rightful nodepool</em> or one nodepool higher than that.</li> <li>Schedule a pod only on any nodepool where it can currently go. </li> </ol> <p>Which of these or any other strategies will minimize wasting resources?</p> <p>=======</p>
<p>Why would you have 3 pools like that in the first place? You generally want to use the largest instance type you can that gets you under 110 pods per node (which is the default hard cap). The job of the scheduler is to optimize the packing for you, and it's pretty good at that with the default settings.</p>
<p>I am new to Kubernetes world,I am trying to deploy &quot;Filebeat&quot; demonset on Azure Kubernetes services(AKS) but facing the below error, please help me out:</p> <p><strong>Error:</strong> <a href="https://i.stack.imgur.com/7W1qJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7W1qJ.png" alt="enter image description here" /></a></p> <p>My code was grabbed from <a href="https://github.com/elastic/beats/tree/master/deploy/kubernetes/filebeat" rel="nofollow noreferrer">https://github.com/elastic/beats/tree/master/deploy/kubernetes/filebeat</a></p> <p>Below is the code which am trying to execute.</p> <p><strong>filebeat-configmap.yaml</strong></p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: filebeat-config namespace: kube-system labels: k8s-app: filebeat data: filebeat.yml: |- filebeat.inputs: - type: container paths: - /var/log/containers/*.log processors: - add_kubernetes_metadata: host: ${NODE_NAME} matchers: - logs_path: logs_path: &quot;/var/log/containers/&quot; # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this: #filebeat.autodiscover: # providers: # - type: kubernetes # node: ${NODE_NAME} # hints.enabled: true # hints.default_config: # type: container # paths: # - /var/log/containers/*${data.kubernetes.container.id}.log processors: - add_cloud_metadata: - add_host_metadata: #cloud.id: ${ELASTIC_CLOUD_ID} #cloud.auth: ${ELASTIC_CLOUD_AUTH} output.elasticsearch: hosts: ['${ELASTICSEARCH_HOST:10.x.x.x}:${ELASTICSEARCH_PORT:9200}'] #username: ${ELASTICSEARCH_USERNAME} #password: ${ELASTICSEARCH_PASSWORD} </code></pre> <p><strong>filebeat-daemonset.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat spec: selector: matchLabels: k8s-app: filebeat template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat terminationGracePeriodSeconds: 30 hostNetwork: true dnsPolicy: ClusterFirstWithHostNet containers: - name: filebeat image: docker.elastic.co/beats/filebeat:%VERSION% args: [ &quot;-c&quot;, &quot;/etc/filebeat.yml&quot;, &quot;-e&quot;, ] env: - name: ELASTICSEARCH_HOST value: &quot;10.x.x.x&quot; - name: ELASTICSEARCH_PORT value: &quot;9200&quot; #- name: ELASTICSEARCH_USERNAME # value: elastic #- name: ELASTICSEARCH_PASSWORD # value: changeme #- name: ELASTIC_CLOUD_ID # value: #- name: ELASTIC_CLOUD_AUTH # value: #- name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true resources: limits: memory: 200Mi requests: cpu: 100m memory: 100Mi volumeMounts: - name: config mountPath: /etc/filebeat.yml readOnly: true subPath: filebeat.yml - name: data mountPath: /usr/share/filebeat/data - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true - name: varlog mountPath: /var/log readOnly: true volumes: - name: config configMap: defaultMode: 0640 name: filebeat-config - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers - name: varlog hostPath: path: /var/log # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart - name: data hostPath: # When filebeat runs as non-root user, this directory needs to be writable by group (g+w). 
path: /var/lib/filebeat-data type: DirectoryOrCreate </code></pre> <p><strong>filebeat-role.yaml</strong></p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: filebeat labels: k8s-app: filebeat rules: - apiGroups: [&quot;&quot;] # &quot;&quot; indicates the core API group resources: - namespaces - pods - nodes verbs: - get - watch - list - apiGroups: [&quot;apps&quot;] resources: - replicasets verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] </code></pre> <p><strong>filebeat-role-binding.yaml</strong></p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: filebeat subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: ClusterRole name: filebeat apiGroup: rbac.authorization.k8s.io </code></pre> <p><strong>filebeat-service-account.yaml</strong></p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat </code></pre>
<pre><code>        #- name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
</code></pre> <p>You commented out the <code>name:</code> line when you didn't mean to.</p>
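<p>Restoring the <code>name:</code> line makes the env entry well-formed again:</p> <pre><code>        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
</code></pre>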
<p>I'm using the k8s Java client and need a way to get an OAuth access token for some clusters. Right now I can do that only with this bash script:</p> <pre><code>export KUBECONFIG=~/.kube/&lt;config-file&gt;

APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
SECRET_NAME=$(kubectl get secrets | grep ^default | cut -f1 -d ' ')
TOKEN=$(kubectl describe secret $SECRET_NAME | grep -E '^token' | cut -f2 -d':' | tr -d " ")

echo "TOKEN: ${TOKEN}"
</code></pre> <p>Is there a way to do that with Java code? I'm not asking for the whole solution, just some direction to look in.</p>
<p>Kubernetes is not involved in the OAuth side of things at all. That’s up to your IdP. More normally you would use a ServiceAccount token for automation though.</p>
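<p>If the Java code runs inside a pod, the ServiceAccount token is already mounted as a file, so plain Java is enough to pick it up and send it as a Bearer token; the official Java client can also build a client from the same credentials. A minimal sketch, assuming Java 11+ and the default mount path:</p> <pre><code>import java.nio.file.Files;
import java.nio.file.Path;

public class TokenReader {
    // Default path where Kubernetes mounts the pod's ServiceAccount credentials.
    private static final String TOKEN_PATH =
            "/var/run/secrets/kubernetes.io/serviceaccount/token";

    public static void main(String[] args) throws Exception {
        String token = Files.readString(Path.of(TOKEN_PATH)).trim();
        // Send this as "Authorization: Bearer &lt;token&gt;" to the API server.
        System.out.println("TOKEN: " + token);
    }
}
</code></pre>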
<p>everyone.</p> <p>I'm wondering whether there is an option to authorize users to access Kubernetes objects via local groups.</p> <p>Currently I'm doing this:</p> <pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-rolebinding
  namespace: mynamespace
subjects:
  - kind: User
    name: example-user1   # member of local unix group "authorized"
    apiGroup: rbac.authorization.k8s.io
  - kind: User
    name: example-user2   # member of local unix group "authorized"
    apiGroup: rbac.authorization.k8s.io
  - kind: User
    name: example-user3   # member of local unix group "authorized"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>And I am trying to do it this way:</p> <pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-rolebinding
  namespace: mynamespace
subjects:
  - kind: Group
    name: authorized
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>Is there any option? Or do I want too much from K8s and need to develop something on my own?</p> <p>Thanks</p>
<p>Groups are kind of an opaque construct between the authentication layer and the rbac system. Whichever authn plugin you are using can tag the request with a username and any number of groups, and then rbac will use them. But k8s itself doesn’t know what a user or group really is. So tldr it’s up to your authn configuration and plugin.</p>
<p>I'm moving a PHP application into Kubernetes and running into a "Bad Gateway" after some actions are performed.</p> <p>I'm thinking the error is from one of two things:</p> <ul> <li>The PHP program is sending a <code>GET</code> to <code>http://192.168.39.129/admin/a_merge_xls_db.php?tmp_tbl=_imp_companies_20200329_160813</code></li> <li>Or from the upload itself</li> </ul> <p>I doubt it is the last one because this Excel file is only 14kb. </p> <p>What is going on is the user has an Excel template to import new accounts. They go to the admin portal <code>/admin</code>, select to import, it parses the Excel file, and imports into Postgres. </p> <p>Despite this error, the data is making its way into the database. After it successfully imports, a <code>GET</code> is sent to <code>a_merge_xls_db.php?tmp_tbl=_imp_companies_20200329_160813</code>. That is when the "Bad Gateway" comes up after like 7 seconds.</p> <p>Seems like it might be an issue in my <code>ingress.yaml</code> for <code>ingress-nginx</code> abd handling the <code>?=_</code> or something. I've tried a few things, but still am not able to resolve the issue.</p> <p>Here is what I have in the <code>ingress.yaml</code>:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/add-base-url: "true" nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/proxy-body-size: 500mb name: ingress-service namespace: default spec: rules: - http: paths: - path: /?(.*) backend: serviceName: client-cluster-ip-service-dev servicePort: 3000 - path: /admin/?(.*) backend: serviceName: admin-cluster-ip-service-dev servicePort: 4000 - path: /api/?(.*) backend: serviceName: api-cluster-ip-service-dev servicePort: 5000 </code></pre> <p>Any suggestions?</p> <p><strong>ADDITIONAL INFO 3/30/20</strong></p> <p>I'm including the <code>Dockerfile.dev</code> and <code>admin.yaml</code> as I'm still having issues:</p> <pre><code># Dockerfile.dev FROM php:7.3-fpm EXPOSE 4000 RUN apt-get update \ &amp;&amp; apt-get install -y libpq-dev zlib1g-dev libzip-dev \ &amp;&amp; docker-php-ext-install pgsql zip COPY . 
/app WORKDIR /app/src CMD ["php", "-S", "0.0.0.0:4000"] </code></pre> <pre><code># admin.yaml apiVersion: apps/v1 kind: Deployment metadata: name: admin-deployment-dev spec: replicas: 1 selector: matchLabels: component: admin template: metadata: labels: component: admin spec: containers: - name: admin image: testappacr.azurecr.io/test-app-admin ports: - containerPort: 4000 env: - name: PGUSER valueFrom: secretKeyRef: name: test-app-dev-secrets key: PGUSER - name: PGHOST value: postgres-cluster-ip-service-dev - name: PGPORT value: "1423" - name: PGDATABASE valueFrom: secretKeyRef: name: test-app-dev-secrets key: PGDATABASE - name: PGPASSWORD valueFrom: secretKeyRef: name: test-app-dev-secrets key: PGPASSWORD - name: SECRET_KEY valueFrom: secretKeyRef: name: test-app-dev-secrets key: SECRET_KEY - name: SENDGRID_API_KEY valueFrom: secretKeyRef: name: test-app-dev-secrets key: SENDGRID_API_KEY - name: DOMAIN valueFrom: secretKeyRef: name: test-app-dev-secrets key: DOMAIN - name: DEBUG valueFrom: secretKeyRef: name: test-app-dev-secrets key: DEBUG volumeMounts: - mountPath: "/docs/" name: file-storage volumes: - name: file-storage persistentVolumeClaim: claimName: file-storage --- apiVersion: v1 kind: Service metadata: name: admin-cluster-ip-service-dev spec: type: ClusterIP selector: component: admin ports: - port: 4000 targetPort: 4000 </code></pre> <p>If I just have the <code>- path: /admin</code>, I have an issue where the assets are not being served. The <code>.css</code> and <code>.js</code> come back as with <code>200</code>, but nothing is applied to the application. If I navigate to the URL of the asset, for example, <code>http://192.168.39.129/admin/css/portal.css</code> it just shows the page you get when you go to <code>/admin</code>.</p> <p>This issue is resolved by changing back to <code>- path: /admin/?(.*)</code>, but then I run into the issue I initially posted about.</p> <p>I've been playing with various regex most of the morning, but still get the same results when it comes to the "Bad Domain".</p> <p>I'm likely overlooking something given I'm learning Kubernetes. It really seems like it should be working given coderanger's suggestion and also this issue which echos the same thing:</p> <p><a href="https://github.com/kubernetes/ingress-nginx/issues/3380" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/3380</a></p>
<p>You need to specify the <code>nginx.ingress.kubernetes.io/use-regex: "true"</code> annotation. This is called out because by default Kubernetes Ingress objects expect plain prefix matching, not regexes. Or in your case, just use an actual prefix: <code>/</code>, <code>/admin</code>, and <code>/api</code>.</p>
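<p>A hedged sketch of the metadata block with that one annotation added (the rest of the Ingress stays as in the question):</p> <pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"      # needed for regex paths like /admin/?(.*)
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: 500m  # "m" is the usual nginx size suffix
  name: ingress-service
  namespace: default
</code></pre>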
<p>I was running the older 2.16.0 version of the ChartMuseum Helm chart. I am trying to update it to use the newer 3.1.0. When I try to upgrade using helm upgrade -n , the upgrade fails with the following error:</p> <pre><code>Error: UPGRADE FAILED: cannot patch &quot;...&quot; with kind Deployment: Deployment.apps &quot;...&quot; is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{&quot;app.kubernetes.io/instance&quot;:&quot;chart-rep&quot;, &quot;app.kubernetes.io/name&quot;:&quot;chartmuseum&quot;}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
</code></pre> <p>I am not sure, but I believe this is because of Helm v3? I was going through [this][3] page where I found Helm v3 is a prerequisite. The change from 2.16.0 to 3.1.0 requires Helm v3.</p> <p>I also have a PV bound to the older version, and ideally I want it to bind to the newer one. I am also using the <code>rollingupdate</code> strategy. What steps do I need to take so that the upgrade works?</p>
<p>That's not from Helm, that's a Kubernetes error. This chart does not support clean upgrades or your values are not matching what you had before. If you can take the downtime, delete the offending deployment and let Helm recreate it. Otherwise you have to look up the right dance of orphan deletes and whatnot.</p>
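<p>A hedged sketch of the "take the downtime" route; the release, namespace, chart reference and deployment name below are placeholders, not taken from the question:</p> <pre><code># Delete the Deployment whose selector can no longer be patched (downtime starts here).
kubectl delete deployment my-chartmuseum -n my-namespace

# Re-run the upgrade; Helm recreates the Deployment with the new selector.
helm upgrade my-chartmuseum chartmuseum/chartmuseum --version 3.1.0 -n my-namespace -f values.yaml
</code></pre>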
<p>I am developing a <a href="https://medium.com/ovni/writing-a-very-basic-kubernetes-mutating-admission-webhook-398dbbcb63ec" rel="noreferrer">mutating webhook</a> with <a href="https://kind.sigs.k8s.io/docs/user/quick-start" rel="noreferrer">kind</a> and as I understand, the API end-point should be <code>https</code>. The certificate and key of the API server should be signed with the CA of the cluster itself so as to get around issue of self-signed certificates. And, for that, the following are the recommended steps:</p> <ol> <li>Create key - <code>openssl genrsa -out app.key 2048</code></li> <li>Create CSR - <code>openssl req -new -key app.key -subj "/CN=${CSR_NAME}" -out app.csr -config csr.conf</code></li> <li>Create CSR object in kubernetes - <code>kubectl create -f csr.yaml</code></li> <li>Approve CSR - <code>kubectl certificate approve csr_name</code></li> <li>Extract PEM - <code>kubectl get csr app.csr -o jsonpath='{.status.certificate}' | openssl base64 -d -A -out app.pem</code></li> </ol> <p><strong>Notes</strong><br> 1. The <code>csr.conf</code> has details to set-up the CSR successfully.<br> 2. The <code>csr.yaml</code> is written for the kuberenetes kind <code>CertificateSigningRequest</code>.<br> 3. The <code>csr_name</code> is defined in <code>CertificateSigningRequest</code>.<br> 4. The <code>spec.request</code> in <code>csr.yaml</code> is set to <code>cat app.csr | base64 | tr -d '\n'</code>. 5. The <code>app.pem</code> and <code>app.key</code> are used to set-up the <code>https</code> end-point.</p> <p>The end-point is definitely reachable but errors out as:</p> <pre class="lang-sh prettyprint-override"><code>Internal error occurred: failed calling webhook "com.me.webhooks.demo": Post https://webhook.sidecars.svc:443/mutate?timeout=10s: x509: certificate signed by unknown authority </code></pre> <p>How do I get around the <code>certificate signed by unknown authority</code> issue?</p> <p>References:<br> 1. <a href="https://medium.com/ovni/writing-a-very-basic-kubernetes-mutating-admission-webhook-398dbbcb63ec" rel="noreferrer">Writing a very basic kubernetes mutating admission webhook</a><br> 2. <a href="https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74" rel="noreferrer">Diving into Kubernetes MutatingAdmissionWebhook</a></p>
<p>It doesn't need to be signed with the cluster's CA root. It just needs to match the CA bundle in the webhook configuration.</p>
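<p>For reference, that bundle lives on the webhook configuration itself; a trimmed sketch reusing the service name from the question, with the rules and the base64 value as placeholders:</p> <pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: com.me.webhooks.demo
webhooks:
  - name: com.me.webhooks.demo
    clientConfig:
      service:
        name: webhook
        namespace: sidecars
        path: /mutate
      caBundle: BASE64_ENCODED_CA_CERT   # must be the CA that signed app.pem
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
</code></pre>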
<p>So I have two hypervisors running the following Kubernetes VMs:</p> <p>A) 1x K8s master, 1x k8s node</p> <p>B) 1x K8s node</p> <p>If hypervisor B goes offline, all pods still work, as designed. What happens to the cluster and the nodes when hypervisor A goes offline? Will all running pods on the hypervisor B K8s node still work, assuming I have node anti-affinity configured so that on every node at least one pod already runs?</p> <p>Thanks!</p>
<p>Pods will keep running and will restart if they crash but the API will not be available so it will not be possible to run anything new or change them.</p>
<p>I'm migrating our swarm cluster to a k8s one, and that means I need to rewrite all the composes files to k8s files. Everything was going smothy, till I reach the redis compose...</p> <p>The compose file from redis: Yes, Its simple because is just to test during development for cache stuff...</p> <pre><code>version: &quot;3&quot; services: db: image: redis:alpine ports: - &quot;6380:6379&quot; deploy: labels: - traefik.frontend.rule=Host:our-redis-url.com placement: constraints: - node.labels.so==linux networks: - traefik networks: traefik: external: true </code></pre> <p>So, we have 4 nodes in that swarm... my DNS (our-redis-url.com) is pointing to one of them, and it works like a charm. I simple connect to redis using that url + the port 6380.</p> <p>Now.... I have created the same thing, but for k8s, as follow:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: redis-ms namespace: prod spec: replicas: 1 selector: matchLabels: app: redis-ms template: metadata: labels: app: redis-ms spec: containers: - name: redis-ms image: redis:alpine ports: - containerPort: 6379 resources: requests: cpu: 250m memory: 256Mi limits: cpu: 500m memory: 512Mi --- apiVersion: v1 kind: Service metadata: name: redis-ms namespace: prod spec: selector: app: redis-ms ports: - protocol: TCP port: 6380 targetPort: 6379 type: ClusterIP --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: redis-ms namespace: prod annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: our-redis-url.com http: paths: - backend: service: name: redis-ms port: number: 6380 path: / pathType: Prefix </code></pre> <p>And that didn't work. The pod run, and by the logs I can see it's waiting for connections, BUT I don't know how to do the trick like in docker-compose (traefik.frontend.rule=Host:redis-ms.mstech.com.br to bind the url and the port part).</p> <p>I have tried to use the tool kompose to convert this compose file... It didn't work to lol</p> <p>If anyone could bring me some advice, or help me fix the problem I'll thankfull.</p> <p>I'm using k8s with traefik as ingress controler.</p>
<p>As mentioned in comments, the Ingress system is only for HTTP traffic. Traefik does also support TCP and UDP traffic but that's separate from Ingress stuff and had to be configured through Traefik's more-specific tools (either their custom resources or a config file). More commonly you would use a LoadBalancer-type Service which creates a TCP LB in your cloud provider.</p>
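<p>A hedged sketch of the LoadBalancer route (Traefik's own <code>IngressRouteTCP</code> custom resource is the other option), reusing the names from the question's manifests:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis-ms-lb
  namespace: prod
spec:
  type: LoadBalancer        # the cloud provider allocates a TCP load balancer
  selector:
    app: redis-ms
  ports:
    - protocol: TCP
      port: 6380            # external port, matching the old swarm setup
      targetPort: 6379      # redis port inside the pod
</code></pre>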
<p>I ran <code>kubectl delete</code> with the <code>--all</code> flag. This command deleted all namespaces on my cluster (I couldn't see any namespace on the K8s Dashboard). So how can I recover all these deleted namespaces?</p> <p>And is it possible to restore the data in the namespaces?</p> <pre><code>➜ kubectl delete ns --all
warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "1xx" deleted
namespace "2xx" deleted
namespace "3xx" deleted
namespace "4xx" deleted
namespace "5xx" deleted
namespace "6xx" deleted
namespace "7xx" deleted
</code></pre>
<p>No. Your cluster is probably no longer viable and will need to be restored from backup or rebuilt.</p>
<p>I have a simple OpenShift setup with a Service configured with 2 backend PODs. The PODs have their READINESS probe configured. The Service is exposed via NodePort. All these configurations are fine and it is working as expected. Once the readiness probe fails, the Service marks the pod as unreachable and any NEW requests don't get routed to the POD.</p> <p>Scenario 1: I execute a CURL command to access the service. While the curl command is executing I introduce a readiness failure of Pod-1. I see that no new requests are sent to Pod-1. This is FINE.</p> <p>Scenario 2: I have a Java client and use the Apache Commons HttpClient library to initiate a connection to the Kubernetes Service. The connection gets established and it is working fine. The problem comes when I introduce a readiness failure of Pod-1. I still see the client sending requests to Pod-1 only, even though the Service has only the endpoint of Pod-2.</p> <p>My hunch: as the HttpClient uses persistent connections and the Service is exposed via NodePort, the destination address for the HTTP connection is Pod-1 itself. So even if the readiness probe fails it still sends requests to Pod-1.</p> <p>Can someone explain why this works the way described above?</p>
<p>kube-proxy (or rather the iptables rules it generates) intentionally does not shut down existing TCP connections when changing the endpoint mapping (which is what a failed readiness probe will trigger). This has been discussed a lot on many tickets over the years with generally little consensus on if the behavior should be changed. For now your best bet is to instead use an Ingress Controller for HTTP traffic, since those all update live and bypass kube-proxy. You could also send back a <code>Keep-Alive</code> header in your responses and terminate persistent connections after N seconds or requests, though that only shrinks the window for badness.</p>
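<p>On the client side, a hedged sketch of the same shrink-the-window idea with Apache HttpClient 4.x: cap how long a pooled connection may be reused, so connections to a pod that failed its readiness probe get dropped fairly quickly. The 30-second value is an example, not a recommendation:</p> <pre><code>import java.util.concurrent.TimeUnit;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class ShortLivedConnections {
    public static CloseableHttpClient build() {
        return HttpClients.custom()
                // Persistent connections are discarded after 30s, forcing new TCP
                // connections that the Service/iptables rules can route to healthy pods.
                .setConnectionTimeToLive(30, TimeUnit.SECONDS)
                .build();
    }
}
</code></pre>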
<p>I have metric data from multiple Kubernetes clusters being scraped into Prometheus. When I query the metrics, how can I differentiate the metrics from different clusters? I am not seeing any label that contains data about a specific cluster, so that I can filter out the data of a particular cluster like below:</p> <pre><code>container_cpu_usage_seconds_total{cluster-name="abcde"}
</code></pre> <p>Is there any way I can add a label "cluster-name" in my kubernetes_sd_configs? I have seen that labels can be added in static_configs, but I can't find anything related to kubernetes_sd_configs.</p> <p>I tried using relabel_config like below:</p> <pre><code>- source_labels: [__meta_kubernetes_namespace]
  action: replace
  target_label: cluster-name
  replacement: my-cluster
</code></pre> <p>This did not get reflected in the metrics. When I do it with an already existing label, like</p> <pre><code>- source_labels: [__meta_kubernetes_namespace]
  action: replace
  target_label: domainname
  replacement: my-cluster
</code></pre> <p>then the domain name value is getting changed. Am I missing any configuration here?</p>
<p>You have to add that label yourself in your relabel_configs. Generally you do this on the intake cluster (i.e. the normal "cluster-level" ones) via a global relabel to add it, but it's also possible to add during the remote write if you're doing federation that way or via the jobs on the central cluster if you're doing a pull-based.</p>
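<p>A hedged sketch of adding a static label in a scrape job's <code>relabel_configs</code> (the job itself is illustrative). Note also that <code>cluster-name</code> is not a valid Prometheus label name, since hyphens are not allowed, which would explain why the attempt in the question silently didn't show up; something like <code>cluster</code> works:</p> <pre><code>scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # No source_labels needed: with the default "replace" action and an empty
      # source, the replacement string is written to the target label unconditionally.
      - target_label: cluster
        replacement: my-cluster
</code></pre>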
<p>everyone.</p> <p>Please teach me why the <code>kubectl get nodes</code> command does not return master node information in a fully managed Kubernetes cluster.</p> <p>I have a Kubernetes cluster in GKE. When I type the <code>kubectl get nodes</code> command, I get the information below.</p> <pre><code>$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE     VERSION
gke-istio-test-01-pool-01-030fc539-c6xd   Ready    &lt;none&gt;   3m13s   v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-d74k   Ready    &lt;none&gt;   3m18s   v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-j685   Ready    &lt;none&gt;   3m18s   v1.13.11-gke.14
$
</code></pre> <p>Of course, I can get worker node information. This information is the same as in the GKE web console. By the way, I have another Kubernetes cluster which is built with three Raspberry Pis and kubeadm. When I type the <code>kubectl get nodes</code> command against this cluster, I get the result below.</p> <pre><code>$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   262d   v1.14.1
node01   Ready    &lt;none&gt;   140d   v1.14.1
node02   Ready    &lt;none&gt;   140d   v1.14.1
$
</code></pre> <p>This result includes master node information.</p> <p>I'm curious why I cannot get the master node information in a fully managed Kubernetes cluster. I understand that the advantage of a fully managed service is that we don't have to manage the management layer. I want to know how to create a Kubernetes cluster in which the master node information is not displayed. I tried to create a cluster with "the hard way", but couldn't find any information that could be a hint.</p> <p>At the least, I'm just learning English now. Please correct me if I'm wrong.</p>
<p>Because there are no nodes with that role. The control plane for GKE is hosted within their own magic system, not on your own nodes.</p>
<p>I have a tiny Kubernetes cluster consisting of just two nodes running on <code>t3a.micro</code> AWS EC2 instances (to save money).</p> <p>I have a small web app that I am trying to run in this cluster. I have a single <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>Deployment</code></a> for this app. This deployment has <code>spec.replicas</code> set to 4.</p> <p>When I run this <code>Deployment</code>, I noticed that Kubernetes scheduled 3 of its pods on one node and 1 pod on the other node.</p> <p>Is it possible to force Kubernetes to schedule at most 2 pods of this <code>Deployment</code> per node? Having 3 instances on the same node puts me dangerously close to running out of memory on these tiny EC2 instances.</p> <p>Thanks!</p>
<p>The correct solution for this would be to set memory requests and limits correctly matching your steady state and burst RAM consumption levels on every pod, then the scheduler will do all this math for you.</p> <p>But for the future and for others, there is a new feature which kind of allows this <a href="https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/" rel="noreferrer">https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/</a>. It's not an exact match, you can't put a global cap, rather you can require pods be evenly spaced over the cluster subject to maximum skew caps.</p>
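<p>A hedged sketch of what that looks like on the Deployment's pod template (topology spread constraints are available as beta from 1.18; the app label is a placeholder):</p> <pre><code>spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                        # per-node pod counts may differ by at most 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule  # or ScheduleAnyway for a soft preference
          labelSelector:
            matchLabels:
              app: my-web-app
</code></pre>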
<p>I've built 3 nodes on linux academy. On the control plane I can see 3 nodes running. On either of the two worker nodes I try to run <code>kubectl get nodes</code>. Initially I was prompted that <code>KUBERNETES_MASTER</code> was not set. </p> <p>Inside the worker nodes, I've tried setting this to the server value found in <code>/kube/config</code> in master. So in worker node: <code>export KUBERNETES_MASTER=https://1.2.3.4:6443</code>. When I try this and then try again <code>kubectl get nodes</code> I get <code>Unable to connect to the server: x509: certificate signed by unknown authority</code></p> <p>I've also tried setting to <code>export KUBERNETES_MASTER=https://kubernetes.default.svc</code> in the worker nodes. When I try this and then try <code>kubectl get nodes</code> I get <code>Unable to connect to the server: dial tcp: lookup kubernetes.default.svc on 127.0.0.53:53: no such host</code></p> <p>Any idea what I'm doing wrong?</p>
<p>You can only use cluster DNS names from inside pods, not from the nodes directly. As for the cert issue, your kube config file will generally include the CA used for API TLS.</p>
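<p>The usual fix is to not set <code>KUBERNETES_MASTER</code> at all and instead give the worker (or your workstation) a kubeconfig, which carries both the API server address and the CA. A hedged sketch for a kubeadm-built lab; the paths and host are placeholders, and you may need root to read the file on the control plane node:</p> <pre><code># On the control plane node, kubeadm writes an admin kubeconfig to /etc/kubernetes/admin.conf.

# Copy it to the machine you want to run kubectl from:
mkdir -p ~/.kube
scp user@1.2.3.4:/etc/kubernetes/admin.conf ~/.kube/config

# kubectl now finds the server URL and the CA certificate in ~/.kube/config:
kubectl get nodes
</code></pre>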