<p>I'm trying to install Redis in a Kubernetes environment with the <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis" rel="nofollow noreferrer">Bitnami Redis HELM Chart</a>. I want to use a defined password rather than a randomly generated one, but I'm getting the error below when I try to connect to the Redis master or replicas with redis-cli.</p> <pre><code>I have no name!@redis-client:/$ redis-cli -h redis-master -a $REDIS_PASSWORD Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe. Warning: AUTH failed </code></pre> <p>I created a Kubernetes secret like this.</p> <pre><code>--- apiVersion: v1 kind: Secret metadata: name: redis-secret namespace: redis type: Opaque data: redis-password: YWRtaW4xMjM0Cg== </code></pre> <p>And in the values.yaml file I updated the auth spec as below.</p> <pre><code>auth: enabled: true sentinel: false existingSecret: &quot;redis-secret&quot; existingSecretPasswordKey: &quot;redis-password&quot; usePasswordFiles: false </code></pre> <p>If I don't define the <code>existingSecret</code> field and use the randomly generated password, I can connect without an issue. I also tried <code>AUTH admin1234</code> after the <code>Warning: AUTH failed</code> error, but it didn't work either.</p>
Çağatay Çiftçi
<p>You can achieve it in much simpler way i.e. by running:</p> <pre><code>$ helm install my-release \ --set auth.password=&quot;admin1234&quot; \ bitnami/redis </code></pre> <p>This will update your <code>&quot;my-release-redis&quot;</code> secret, so when you run:</p> <pre><code>$ kubectl get secrets my-release-redis -o yaml </code></pre> <p>you'll see it contains your password, already <code>base64</code>-encoded:</p> <pre><code>apiVersion: v1 data: redis-password: YWRtaW4xMjM0Cg== kind: Secret ... </code></pre> <p>In order to get your password, you need to run:</p> <pre><code>export REDIS_PASSWORD=$(kubectl get secret --namespace default my-release-redis -o jsonpath=&quot;{.data.redis-password}&quot; | base64 --decode) </code></pre> <p>This will set and export <code>REDIS_PASSWORD</code> environment variable containing your redis password.</p> <p>And then you may run your <code>redis-client</code> pod:</p> <pre><code>kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:6.2.4-debian-10-r13 --command -- sleep infinity </code></pre> <p>which will set <code>REDIS_PASSWORD</code> environment variable within your <code>redis-client</code> pod by assigning to it the value of <code>REDIS_PASSWORD</code> set locally in the previous step.</p>
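<p>As a side note relating back to the secret shown in the question (not to the chart itself): the value <code>YWRtaW4xMjM0Cg==</code> decodes to <code>admin1234</code> plus a trailing newline, which can easily cause exactly this kind of AUTH mismatch. If you prefer to keep the <code>existingSecret</code> approach, encoding the value without the newline avoids it, for example:</p> <pre><code>$ echo -n 'admin1234' | base64
YWRtaW4xMjM0
</code></pre>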
mario
<p>I am trying to deploy multiple pods in k8s like say MySQL, Mango, Redis etc</p> <p>Can i create a single deployment resource for this and have multiple containers defined in template section? Is this allowed? If so, how will replication behave in this case?</p> <p>Thanks Pavan</p>
pa1
<blockquote> <p>I am trying to deploy multiple pods in k8s like say MySQL, Mango, Redis etc</p> </blockquote> <p>From a <strong>microservices architecture</strong> perspective it is actually quite a bad idea to place all those containers in a single <strong>Pod</strong>. Keep in mind that a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer"><strong>Pod</strong></a> is the smallest deployable unit that can be created and managed by <strong>Kubernetes</strong>. There are quite a few good reasons you don't want to have all of the above-mentioned services in a single <strong>Pod</strong>. Difficulty in scaling such a solution is just one of them.</p> <blockquote> <p>Can i create a single deployment resource for this and have multiple containers defined in template section? Is this allowed? If so, how will replication behave in this case?</p> </blockquote> <p>Technically you can list multiple containers in the template section, but it won't behave the way you want. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><strong>Deployments</strong></a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer"><strong>StatefulSets</strong></a> (which you need for stateful applications such as databases) both manage Pods that are based on an identical container spec, so every replica would contain all of those containers and they could only be scaled together. It is not possible to have a <strong>Deployment</strong> or <strong>StatefulSet</strong> consisting of different types of Pods, based on different specs.</p> <p>To sum up: <em>Many Deployment and StatefulSet objects, each serving a different purpose, are the right solution, as sketched below.</em></p>
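<p>As a rough sketch of that approach (images and names below are just placeholders), each backing service gets its own object and can be scaled independently, shown here for Redis; MySQL, MongoDB and the rest would get their own, analogous StatefulSets or Deployments:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6
        ports:
        - containerPort: 6379
---
# a separate StatefulSet would be defined in the same way for MySQL, MongoDB, etc.
</code></pre>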
mario
<p>From the <code>cert-manager</code> doc: adding the annotation <code>cert-manager.io/cluster-issuer: acme-issuer</code> to an <code>Ingress</code> object should trigger the shim, request a certificate to this issuer, and store the certificate (without any namespace ?) (with which name?).</p> <p>I tried this and it does nothing. Adding a <code>tls:</code> section to the yaml definition of the <code>Ingress</code> does trigger the shim, request a certificate and store it in the same namespace as the <code>Ingress</code>.</p> <p>This means the doc is incorrect, or should it really work without a <code>tls:</code> section ?</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: acme-issuer spec: acme: email: [email protected] server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef: name: example-issuer-account-key solvers: - http01: ingress: class: nginx </code></pre> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: cert-manager.io/cluster-issuer: acme-issuer kubernetes.io/ingress.class: nginx name: my-ingress-name namespace: mynamespace spec: rules: - host: some.domain.eu http: paths: - backend: serviceName: my-service-name servicePort: 5000 path: / tls: - hosts: - some.domain.eu secretName: secret-storage-key-for-tls-cert </code></pre>
Softlion
<p>If you created the issuer correctly, then you need to create a Certificate, so the issuer can issue the certificate using the information you have in the Certificate resource, and populate the secret:</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: certname spec: secretName: secretName issuerRef: name: letsencrypt-prod commonName: &lt;the CN&gt; dnsNames: - &lt;name&gt; </code></pre> <p>Once you have this resource, it should create a secret containing the TLS certificates, and store it in <code>secretName</code>.</p>
Burak Serdar
<p>As mentioned <a href="https://docs.harness.io/article/wnr5n847b1-kubernetes-overview#what_does_harness_deploy" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Harness takes the artifacts and <strong>Kubernetes manifests</strong> you provide and deploys them to the target Kubernetes cluster. You can simply deploy Kubernetes objects via manifests and you can provide manifests using remote sources and Helm charts.</p> </blockquote> <p><a href="https://i.stack.imgur.com/8tEHl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8tEHl.png" alt="enter image description here" /></a></p> <hr /> <p>Is harness tool equipped with <code>kubectl</code> client tool to perform <code>kubectl apply</code> on kubernetes manifests?</p>
overexchange
<p>If you're curious about implementation details of a specific tool that are not explained in its official documentation, you should study its source code directly to find the answer.</p> <p>But answering your specific question:</p> <blockquote> <p>Is harness tool equipped with kubectl client tool to perform kubectl apply on kubernetes manifest?</p> </blockquote> <p>Well, it doesn't have to be. Writing a tool which shells out to the console <code>kubectl</code> client isn't very optimal and doesn't make much sense. For performing exactly the same actions that <code>kubectl</code> does, such tools use <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">Client Libraries</a>. As you can see in the official docs, there is a large variety of them: some are officially supported, others are community-maintained, but altogether they cover various programming languages.</p> <p>Of course, you can also write an external tool which doesn't use client libraries but implements the API calls and request/response types on its own.</p>
mario
<p>I'm trying to set up a docker registry via Traefik, authenticated by a Service account bearer token. The problem is that the name of the default service account token secret ends with some random characters, which cannot be passed to the Ingress config, or can it?</p> <p>Anyway, I want to somehow force Kubernetes to name the token in a predictable way.</p> <p>The current solution is to create an API token manually.</p> <pre><code>kind: Secret metadata: name: account-token annotations: kubernetes.io/service-account.name: account type: kubernetes.io/service-account-token </code></pre> <p>Unfortunately, the original randomly named token is still in the system and cannot be removed.</p> <p>If the secret is created before the Service account it gets dropped, but when it is created afterwards, the randomized secret remains as well.</p>
majkrzak
<p>It looks like <a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#to-create-additional-api-tokens" rel="nofollow noreferrer">creating additional API token</a> is the only existing solution. You are able to reference an existing service account and controller will update it with the newly generated token as described below:</p> <blockquote> <p>To create additional API tokens for a service account, create a secret of type ServiceAccountToken with an annotation referencing the service account, and the controller will update it with a generated token.</p> </blockquote> <hr> <blockquote> <p>Unfortunately, the original randomly named token is still in the system, and can not be removed.</p> </blockquote> <p>So what happens when you try to delete it / invalidate it the way described <a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#to-delete-invalidate-a-service-account-token" rel="nofollow noreferrer">here</a> ?</p> <blockquote> <p>It will be recreated instantly. To avoid this, first it have to be removed from serviceaccount.secrests list. But it can not be complexly done via the yaml file. Or is there some api transaction that can be used during the config application?</p> </blockquote> <p>EDIT:</p> <p>There are two solutions you may use to obtain your goal. When you edit the default ServiceAccount token it will become not valid any more and it won't be automatically recreated as in case when removing it:</p> <p>1st is <strong>patching</strong> the token:</p> <pre><code>kubectl patch secret default-token-jrc6q -p '{"data":{"token": "c29tZW90aGVyc2hpdAo="}}' </code></pre> <p>2nd is <strong>editing</strong> it:</p> <pre><code>kubectl edit secret default-token-jrc6q # and change token to any value you want </code></pre>
mario
<p>Say we have the following deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: ... spec: replicas: 2 template: spec: containers: - image: ... ... resources: requests: cpu: 100m memory: 50Mi limits: cpu: 500m memory: 300Mi </code></pre> <p>And we also create a <code>HorizontalPodAutoscaler</code> object which automatically scales up/down the number of pods based on CPU average utilization. I know that the HPA will compute the number of pods based on the resource <strong>requests</strong>, but what if I want the containers to be able to request more resources before scaling horizontally?</p> <p>I have two questions:</p> <p>1) Are resource <strong>limits</strong> even used by K8s when a HPA is defined?</p> <p>2) Can I tell the HPA to scale based on resource <strong>limits</strong> rather than requests? Or as a means of implementing such a control, can I set the <code>targetUtilization</code> value to be more than 100%?</p>
mittelmania
<p>No, the HPA does not look at resource <strong>limits</strong> at all; it works against <strong>requests</strong> only. You can set the target utilization to any value, even higher than 100%.</p>
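<p>For illustration, a minimal <code>HorizontalPodAutoscaler</code> manifest with a target above 100% (the Deployment name here is just a placeholder) could look like this:</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 150
</code></pre> <p>With requests of 100m CPU per pod, this adds replicas only once average usage per pod exceeds roughly 150 millicores.</p>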
Vasili Angapov
<p>I am new to kubernetes and docker. I have been trying to install k3s on my Windows 10 system with the command mentioned on the website:</p> <pre><code>curl -sfL https://get.k3s.io | sh - </code></pre> <p>I already have <code>minikube</code>, <code>kubectl</code> and <code>docker</code> installed on my system, and all work as expected. However, when I run the above command, I get the following error message:</p> <pre><code>sh : The term 'sh' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:32 + curl -sfL https://get.k3s.io | sh - + ~~ + CategoryInfo : ObjectNotFound: (sh:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException </code></pre> <p>What am I doing wrong here?</p>
Sarthak Saxena
<p>For the time being <strong>k3s</strong> doesn't support <strong>Windows</strong>, but there is an <a href="https://github.com/k3s-io/k3s/issues/114" rel="nofollow noreferrer">open issue</a> on <strong>Github</strong> you can track.</p> <p>The <a href="https://rancher.com/docs/k3s/latest/en/quick-start/#install-script" rel="nofollow noreferrer">installation script</a> you are trying to run, simply won't work on <strong>Windows</strong> machine. If you take a closer look at the <a href="https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/" rel="nofollow noreferrer">installation requirements section</a> in the very same documentation, you will see the following information, regarding to the <a href="https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/#operating-systems" rel="nofollow noreferrer">supported Operating Systems</a>:</p> <blockquote> <h2>Operating Systems</h2> <p>K3s is expected to work on most modern Linux systems. 👈</p> <p>Some OSs have specific requirements:</p> <ul> <li>If you are using <strong>Raspbian Buster</strong>, follow <a href="https://rancher.com/docs/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster" rel="nofollow noreferrer">these steps</a> to switch to legacy iptables.</li> <li>If you are using <strong>Alpine Linux</strong>, follow <a href="https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup" rel="nofollow noreferrer">these steps</a> for additional setup.</li> <li>If you are using <strong>(Red Hat/CentOS) Enterprise Linux</strong>, follow <a href="https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-red-hat-centos-enterprise-linux" rel="nofollow noreferrer">these steps</a> for additional setup.</li> </ul> <p>For more information on which OSs were tested with Rancher managed K3s clusters, refer to the <a href="https://rancher.com/support-maintenance-terms/" rel="nofollow noreferrer">Rancher support and maintenance terms.</a></p> </blockquote> <p>So for running <strong>k3s</strong> on <strong>Windows</strong> you would need a <strong>Linux VM</strong> which can be provisioned using a hypervisor like <strong>Hyper-V</strong> or <strong>VirtualBox</strong> that can be run on your <strong>Windows host</strong>.</p> <p>Take a look at the following article that presents how it can be done by using <strong>Hyper-V</strong>:</p> <p><a href="https://jyeee.medium.com/rancher-2-4-14c31af12b7a" rel="nofollow noreferrer">Rancher 2.4 &amp; Kubernetes on your Windows 10 laptop with multipass &amp; k3s — Elasticsearch/Kibana in minutes!</a></p>
mario
<p>I'm trying to run Cadvisor on a Kubernetes cluster following this doc <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/</a></p> <p>Contents of the yaml file below:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: kube-system --- apiVersion: apps/v1 kind: DaemonSet metadata: name: cadvisor namespace: kube-system labels: name: cadvisor spec: selector: matchLabels: name: cadvisor template: metadata: labels: name: cadvisor spec: containers: - image: google/cadvisor:latest name: cadvisor ports: - containerPort: 8080 restartPolicy: Always status: {} </code></pre> <p>But when I try to deploy it :</p> <pre><code>kubectl apply -f cadvisor.daemonset.yaml </code></pre> <p>I get the output + error:</p> <p><em>error: error validating "cadvisor.daemonset.yaml": error validating data: [ValidationError(DaemonSet.status): missing required field "currentNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberMisscheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "desiredNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.apps.v1.DaemonSetStatus]; if you choose to ignore these errors, turn validation off with --validate=false</em></p> <p>But there is no infos about these required fields in the documentation or anywhere on Google :(</p>
Thür
<p>Do not pass <code>status: {}</code> in the yaml when creating resources. That field is only for status information returned from the API server.</p>
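<p>In practice that just means dropping the <code>status: {}</code> line at the end of the manifest from the question; the template spec then simply ends like this:</p> <pre><code>    spec:
      containers:
      - image: google/cadvisor:latest
        name: cadvisor
        ports:
        - containerPort: 8080
      restartPolicy: Always
</code></pre>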
Burak Serdar
<p>I am trying to connect a pod which is running in <strong>Kind</strong> with a local Postgres database which runs in a Docker container. I tried to add the following service but the pod still cannot connect when using the DNS name <code>postgres.dev.svc</code>.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: postgres namespace: dev spec: type: ExternalName externalName: 10.0.2.2 </code></pre> <p>Is there another way to connect these two components?</p>
ammerzon
<p>First of all, this is not the correct usage of the <code>ExternalName</code> service type. Putting an IP address in the <code>externalName</code> field is technically feasible, i.e. the resource will be created and you won't get any complaint from the kubernetes API server. ❗<strong>But this value is treated as a domain name comprised of digits, not as an IP address</strong>. You can read about it in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">the official kubernetes docs</a>:</p> <blockquote> <p><strong>Note:</strong> ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName is intended to specify a canonical DNS name. To hardcode an IP address, consider using <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless Services</a>.</p> </blockquote> <p>So what you really need here is a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">Service without a selector</a>:</p> <blockquote> <p>Services most commonly abstract access to Kubernetes Pods, but they can also abstract other kinds of backends. For example:</p> <ul> <li>You want to have an external database cluster in production, but in your test environment you use your own databases.</li> <li>You want to point your Service to a Service in a different <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces" rel="nofollow noreferrer">Namespace</a> or on another cluster.</li> <li>You are migrating a workload to Kubernetes. While evaluating the approach, you run only a portion of your backends in Kubernetes.</li> </ul> <p>In any of these scenarios you can define a Service <em>without</em> a Pod selector. For example:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service spec: ports: - protocol: TCP port: 80 targetPort: 9376 </code></pre> <p>Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoints object manually:</p> <pre><code>apiVersion: v1 kind: Endpoints metadata: name: my-service subsets: - addresses: - ip: 192.0.2.42 ports: - port: 9376 </code></pre> </blockquote> <p>In your particular case your <code>Service</code> definition may look as follows:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: postgres spec: ports: - protocol: TCP port: 5432 targetPort: 5432 </code></pre> <p>and the corresponding <code>Endpoints</code> object may look like this:</p> <pre><code>apiVersion: v1 kind: Endpoints metadata: name: postgres subsets: - addresses: - ip: 10.0.2.2 ports: - port: 5432 </code></pre> <p>Of course the IP address <code>10.0.2.2</code> must be reachable from within your kubernetes cluster.</p>
mario
<p>I have a Pod or Job yaml spec file (I can edit it) and I want to launch it from my local machine (e.g. using <code>kubectl create -f my_spec.yaml</code>)</p> <p>The spec declares a volume mount. There would be a file in that volume that I want to use as value for an environment variable.</p> <p>I want to make it so that the volume file contents ends up in the environment variable (without me jumping through hoops by somehow "downloading" the file to my local machine and inserting it in the spec).</p> <p>P.S. It's obvious how to do that if you have control over the <code>command</code> of the <code>container</code>. But in case of launching arbitrary image, I have no control over the <code>command</code> attribute as I do not know it.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: generateName: puzzle spec: template: spec: containers: - name: main image: arbitrary-image env: - name: my_var valueFrom: &lt;Contents of /mnt/my_var_value.txt&gt; volumeMounts: - name: my-vol path: /mnt volumes: - name: my-vol persistentVolumeClaim: claimName: my-pvc </code></pre>
Ark-kun
<p>You can create a deployment with a kubectl endless loop which will constantly poll the volume and update a configmap from it. After that you can mount the created configmap into your pod. It's a little bit hacky, but it will work and update your configmap automatically. The only requirement is that the PV must be ReadWriteMany or ReadOnlyMany (but in that case you can mount it in read-only mode to all pods).</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: cm-creator namespace: default --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: default name: cm-creator rules: - apiGroups: [""] resources: ["configmaps"] verbs: ["create", "update", "get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: cm-creator namespace: default subjects: - kind: User name: system:serviceaccount:default:cm-creator apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: cm-creator apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: name: cm-creator namespace: default labels: app: cm-creator spec: replicas: 1 selector: matchLabels: app: cm-creator template: metadata: labels: app: cm-creator spec: serviceAccountName: cm-creator containers: - name: cm-creator image: bitnami/kubectl command: - /bin/bash - -c args: - while true; do kubectl create cm myconfig --from-file=my_var=/mnt/my_var_value.txt --dry-run=client -o yaml | kubectl apply -f -; sleep 60; done volumeMounts: - name: my-vol mountPath: /mnt readOnly: true volumes: - name: my-vol persistentVolumeClaim: claimName: my-pvc </code></pre>
Vasili Angapov
<p>Kubernetes provides us two deployment strategies. One is <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a></strong> and another one is <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate</a></strong>. I should use Rolling Update when I don't want go off air. But when should I be using Recreate?</p>
Aditya Bhuyan
<p>+1 to <a href="https://stackoverflow.com/a/67606389/11714114">F1ko's answer</a>, however let me also add a few more details and some real world examples to what was already said.</p> <p>In a perfect world, where every application could be easily updated with <strong>no downtime</strong>, we would be fully satisfied having only the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a> strategy.</p> <p>But as the world isn't a perfect place and things don't always go as smoothly as we could wish, in certain situations there is also a need for the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate</a> strategy.</p> <p>Suppose we have a <strong>stateful application</strong>, running in a <strong>cluster</strong>, where individual instances need to communicate with each other. Imagine our application has recently undergone a <strong>major refactoring</strong> and this new version can't talk any more to instances running the old version. Moreover, we may not even want them to be able to form a cluster together, as we can expect that it may cause some unpredictable mess and in consequence neither the old instances nor the new ones will work properly when they become available at the same time. So sometimes it's in our best interest to be able to first shut down every old replica and, only once we make sure none of them is running, spawn a replica that runs the new version.</p> <p>It can be the case when there is a major migration, let's say a major change in database structure etc., and we want to make sure that no pod running the old version of our app is able to write any new data to the db during the migration.</p> <p>So I would say that in the majority of cases it is a very application-specific, individual scenario involving major migrations, legacy applications etc. which would require accepting a certain downtime and using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate</a> to replace all the pods at once, rather than updating them one by one as in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a> strategy.</p> <p>Another example comes to my mind. Let's say you have an extremely old version of Mongodb running in a replicaset consisting of 3 members and you need to migrate it to a modern, currently supported version. As far as I remember, individual members of the replicaset can form a cluster only if there is at most 1 major version difference between them. So, if the difference is 2 or more major versions, old and new instances won't be able to keep running in the same cluster anyway. Imagine that you have enough resources to run only 4 replicas at the same time. So a rolling update won't help you much in such a case. To have a quorum, so that the master can be elected, you need at least 2 members out of 3 available. If the new one won't be able to form a cluster with the old replicas, it's much better to schedule a maintenance window, shut down all the old replicas and have enough resources to start 3 replicas with the new version once the old ones are removed.</p>
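<p>For reference, switching a Deployment to this behaviour is just a matter of setting its <code>strategy</code> field (a minimal fragment; everything else in the Deployment stays unchanged):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: Recreate
  # selector and template as usual
</code></pre>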
mario
<p>If java/spring boot microservice A (deployed in its own container &amp; with its own Kubernetes Service of type ClusterIP) needs to send a REST request to Java/spring boot microservice B (has its own Kubernetes Service of type ClusterIP) in the same Kubernetes cluster, what's the best way for A to determine B's kubernetes service IP (especially if B is redeployed)? Note: internal call where B doesn't have a NodePort or LoadBalancer nor an Ingress. </p>
Meta
<p>The right way to do this is to have a <code>Service</code> for <code>B</code>, and have <code>A</code> use the name of that service to access <code>B</code>. This way, <code>B</code> will be accessible to <code>A</code> regardless of its address.</p>
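<p>For example (assuming B is exposed by a <code>Service</code> named <code>service-b</code> on port 80 in the same namespace), A can simply call:</p> <pre><code># short name works within the same namespace
curl http://service-b/some/endpoint

# fully qualified form works from any namespace
curl http://service-b.default.svc.cluster.local/some/endpoint
</code></pre> <p>The Service name stays stable across redeployments of B, so A never has to track B's Pod IPs or ClusterIP.</p>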
Burak Serdar
<p>We want to create e2e tests (integration tests) for our applications on k8s and we want to use minikube, but it seems that there is no proper (maintained or official) Dockerfile for minikube, at least I didn't find any. In addition I see <a href="https://k3s.io" rel="nofollow noreferrer">k3s</a> and I'm not sure which is better for running e2e tests on k8s.</p> <p>I found this Dockerfile, but when I build it, it fails with errors:</p> <p><a href="https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/" rel="nofollow noreferrer">https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/</a> </p> <p><code>e - –no-install-recommends error</code></p> <p>Any idea?</p>
Rayn D
<p>As to the problem you encountered when building image from this particular Dockerfile...</p> <blockquote> <p>I found this docker file but when I build it it fails with errors</p> <p><a href="https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/" rel="nofollow noreferrer">https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/</a></p> <p>e - –no-install-recommends error</p> <p>any idea ?</p> </blockquote> <p>notice that:</p> <pre><code>--no-install-recommends install </code></pre> <p>and</p> <pre><code>–no-install-recommends install </code></pre> <p>are two completely different strings. So that the error you get:</p> <pre><code>E: Invalid operation –no-install-recommends </code></pre> <p>is the result you've copied content of your Dockerfile from <a href="https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/" rel="nofollow noreferrer">here</a> and you should have rather copied it from <a href="https://gist.github.com/andrewjjenkins/798f5c736a187d616d256095662c0a76" rel="nofollow noreferrer">github</a> (you can even click <code>raw</code> button there to be 100% sure you copy totally plain text without any additional formatting, changed encoding etc.)</p>
mario
<p>In a kubernetes cluster for a microservice application, we need logs to diagnose issues.</p> <ol> <li>Is it a good idea to use an NFS persistent volume for all microservice logs?</li> <li>If yes, is it possible to apply a log rotation policy on the NFS persistent volume based on size or days?</li> <li>If we use the ELK stack with filebeat, it will need more resources and more learning for the customer to get the required logs.</li> </ol> <p>What would be the best approach, i.e. NFS, the ELK stack, or a mix of both?</p>
Mr.Pramod Anarase
<ol> <li>NFS is ok as long as it is able to offer required performance. </li> <li>You should apply lifecycle policy at Elasticsearch indices level. Modern Kibana has a nice interface for creation of lifecycle policies and overall monitoring of ES.</li> <li>Never worked with Filebeat. We use EFK stack - Elasticsearch, Fluentd and Kibana. It works pretty well and is installed only using Helm Charts.</li> </ol>
Vasili Angapov
<p>I am trying to understand how an ingress controller works in kubernetes.</p> <p>I have deployed the nginx ingress controller on a bare metal k8s cluster (I referred to the kind ingress docs); localhost now points to the nginx default page.</p> <p>I have deployed an app with an ingress resource with the host set to "foo.localhost". I can access my app on foo.localhost now.</p> <p>I would like to know how nginx was able to do this without any modification of the /etc/hosts file.</p> <p>I also want to access my app from a different machine over the same/a different network.</p> <p>I have used ngrok for this:</p> <p><code>ngrok http foo.localhost</code></p> <p>but it points to the nginx default page and not my app.</p> <p>How can I access it using ngrok if I don't want to use port forwarding or kube proxy?</p>
piby180
<p>On your machine, <code>localhost</code> and <code>foo.localhost</code> all resolve to the same address, 127.0.0.1. This is already there, it is not something nginx or k8s does. That's the reason why you cannot access that from another machine, because that name resolves to the localhost for that machine as well, not the one running your k8s ingress. When you exposed it using ngrok, it exposes it using a different name. When you try to access the ingress using that name, the request contains a <code>Host</code> header with the ngrok URL, which is not the same as <code>foo.localhost</code>, so the ingress thinks the request is for a different domain.</p> <p>Try exposing your localhost in the ingress using the ngrok url.</p>
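<p>One way to do that last part (a sketch only; <code>abc123.ngrok.io</code> is a made-up hostname, replace it with the one ngrok actually assigns, and the service name is hypothetical) is to add the ngrok hostname as an additional rule in the existing ingress:</p> <pre><code>  rules:
  - host: foo.localhost
    http:
      paths:
      - backend:
          serviceName: my-app-service   # hypothetical service name
          servicePort: 80
  - host: abc123.ngrok.io               # the hostname ngrok gives you
    http:
      paths:
      - backend:
          serviceName: my-app-service
          servicePort: 80
</code></pre>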
Burak Serdar
<p>Part of my deployment looks like this</p> <pre><code>client -- main service __ service 1 |__ service 2 </code></pre> <p><strong>NOTE:</strong> Each of these 4 services is a container and I'm trying to do this where each is in it's own Pod (without using multi container pod)</p> <p>Where main service must make a call to service 1, get results then send those results to service 2, get that result and send it back to the web client</p> <p>main service operates in this order</p> <ul> <li>receive request from web client pot :80</li> <li>make request to <a href="http://localhost:8000" rel="nofollow noreferrer">http://localhost:8000</a> (service 1)</li> <li>make request to <a href="http://localhost:8001" rel="nofollow noreferrer">http://localhost:8001</a> (service 2)</li> <li>merge results</li> <li>respond to web client with result</li> </ul> <p>My deployments for service 1 and 2 look like this</p> <p>SERVICE 1</p> <pre><code>apiVersion: v1 kind: Service metadata: name: serviceone spec: selector: run: serviceone ports: - port: 80 targetPort: 5050 --- apiVersion: apps/v1 kind: Deployment metadata: name: serviceone-deployment spec: replicas: 1 selector: matchLabels: run: serviceone template: metadata: labels: run: serviceone spec: containers: - name: serviceone image: test.azurecr.io/serviceone:v1 imagePullPolicy: IfNotPresent ports: - containerPort: 5050 </code></pre> <p>SERVICE 2</p> <pre><code>apiVersion: v1 kind: Service metadata: name: servicetwo spec: selector: run: servicetwo ports: - port: 80 targetPort: 5000 --- apiVersion: apps/v1 kind: Deployment metadata: name: servicetwo-deployment spec: replicas: 1 selector: matchLabels: run: servicetwo template: metadata: labels: run: servicetwo spec: containers: - name: servicetwo image: test.azurecr.io/servicetwo:v1 imagePullPolicy: IfNotPresent ports: - containerPort: 5000 </code></pre> <p>But I don't know what the service and deployment would look like for the main service that has to make request to two other services.</p> <p><strong>EDIT:</strong> This is my attempt at the service/deployment for main service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mainservice spec: selector: run: mainservice ports: - port: 80 # incoming traffic from web client pod targetPort: 80 # traffic goes to container port 80 selector: run: serviceone ports: - port: ? targetPort: 8000 # the port the container is hardcoded to send traffic to service one selector: run: servicetwo ports: - port: ? 
targetPort: 8001 # the port the container is hardcoded to send traffic to service two --- apiVersion: apps/v1 kind: Deployment metadata: name: mainservice-deployment spec: replicas: 1 selector: matchLabels: run: mainservice template: metadata: labels: run: mainservice spec: containers: - name: mainservice image: test.azurecr.io/mainservice:v1 imagePullPolicy: IfNotPresent ports: - containerPort: 80 </code></pre> <p><strong>EDIT 2:</strong> alternate attempt at the service after finding this <a href="https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services</a></p> <pre><code>apiVersion: v1 kind: Service metadata: name: mainservice spec: selector: run: mainservice ports: - name: incoming port: 80 # incoming traffic from web client pod targetPort: 80 # traffic goes to container port 80 - name: s1 port: 8080 targetPort: 8000 # the port the container is hardcoded to send traffic to service one - name: s2 port: 8081 targetPort: 8001 # the port the container is hardcoded to send traffic to service two </code></pre>
erotavlas
<p>The main service doesn't need to know anything about the services it calls other than their names. Simply access those services using the name of the <code>Service</code>, i.e. <code>serviceone</code> and <code>servicetwo</code> (<a href="http://serviceone:80" rel="nofollow noreferrer">http://serviceone:80</a>), and the requests will be forwarded to the correct pod.</p> <p>Reference: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
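<p>That also means the <code>mainservice</code> Service only needs to expose the main service's own port; it does not have to list ports for the services it calls (a sketch based on the manifests in the question):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
  - port: 80        # incoming traffic from the web client
    targetPort: 80  # container port of the main service
</code></pre> <p>Inside the main service's code the calls then go to <code>http://serviceone</code> and <code>http://servicetwo</code> (port 80) instead of <code>http://localhost:8000</code> and <code>http://localhost:8001</code>.</p>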
Burak Serdar
<p>I have 2 pods: a server pod and a client pod (basically the client hits port 8090 to interact with the server). I have created a service (which in turn creates an endpoint) but the client pod cannot reach that endpoint and therefore it crashes:</p> <blockquote> <p>Error :Error in client :rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp :8090: connect: connection refused")</p> </blockquote> <p>The client pod tries to access port 8090 in its host network. What I am hoping to do is that whenever the client hits 8090 through the service it connects to the server.</p> <p>I just cannot understand how I would connect these 2 pods and therefore require help.</p> <p>server pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: server-pod labels: app: grpc-app spec: containers: - name: server-pod image: image ports: - containerPort: 8090 </code></pre> <p>client pod :</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: client-pod labels: app: grpc-app spec: hostNetwork: true containers: - name: client-pod image: image </code></pre> <p>Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: server labels: app: grpc-app spec: type: ClusterIP ports: - port: 8090 targetPort: 8090 protocol: TCP selector: app: grpc-app </code></pre>
Tanmay Shrivastava
<p>Your service is selecting both the client and the server. You should change the labels so that the server has something like <code>app: grpc-server</code> and the client has <code>app: grpc-client</code>. The service selector should then be <code>app: grpc-server</code> so that it only exposes the server pod. Then, in your client app, connect to <code>server:8090</code>. You should also remove <code>hostNetwork: true</code>.</p>
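<p>A minimal sketch of the relevant fragments after that change (only the labels and selector are shown; everything else stays as in the question):</p> <pre><code># server pod
metadata:
  name: server-pod
  labels:
    app: grpc-server
---
# client pod
metadata:
  name: client-pod
  labels:
    app: grpc-client
---
# service
spec:
  selector:
    app: grpc-server
  ports:
  - port: 8090
    targetPort: 8090
    protocol: TCP
</code></pre> <p>The client then dials the Service by name, i.e. <code>server:8090</code>, instead of relying on the host network.</p>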
Burak Serdar
<p>While working through the tutorial <a href="https://docs.docker.com/get-started/part3/" rel="noreferrer">Get Started, Part 3: Deploying to Kubernetes</a> I stumbled over the Pod template within the deployment definition of the manifest file. There are no ports specified, neither in the pod nor in the container section.</p> <p>That led me to my initial question: How does the port publishing work from the docker container into the pod?</p> <p>The following quote sounds like kubernetes obtains an insight into the running container once started and gets the port from the service listening at 0.0.0.0:PORT and maps it to the same port in the pod environment (network namespace).</p> <blockquote> <p>Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core" rel="noreferrer">Source</a></p> </blockquote> <p>If my assumption goes in the right direction, what does this mean for pods with multiple containers? Does kubernetes only allow containers with internal services listening on different ports? Or is it possible to map the container internal ports to different ports in the pod environment (network namespace)?</p> <p>According to the following quote, I assume port mapping from container to pod is not possible. Indeed it does not make too much sens to specify two services within two containers with the same ports, just to change them via a mapping immediately following this.</p> <blockquote> <p>Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core" rel="noreferrer">Source</a></p> </blockquote> <hr> <p><strong>UPDATE 2019-10-15</strong></p> <p>As the following quote states, a docker container does not publish any port to the outside world by default. </p> <blockquote> <p>By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. <a href="https://docs.docker.com/config/containers/container-networking/" rel="noreferrer">Source</a> </p> </blockquote> <p>That means kubernetes must configuer the docker container running within a pod somehow, so that the container's ports are published to the pod.</p> <p>Regarding the following quote, is it possible that kubernetes runs the docker containers by using the <em>--network host</em> configuration? Assumed the pod is the docker host in kubernetes.</p> <blockquote> <p>If you use the host network mode for a container, that container’s network stack is not isolated from the Docker host [...] For instance, if you run a container which binds to port 80 and you use host networking, the container’s application is available on port 80 on the host’s IP address. <a href="https://docs.docker.com/network/host/" rel="noreferrer">Source</a></p> </blockquote>
Chris
<p>Containers running in a pod are similar to processes running on a node connected to a network. Pod gets a network address, and all containers share the same network address. They can talk to each other using <code>localhost</code>.</p> <p>A container running in a pod can listen to any port on that address. If there are multiple containers running in a pod, they cannot bind to the same port, only one of them can. Note that there is no requirement about publishing those ports. A container can listen to any port, and traffic will be delivered to it if a client connects to that port of the pod.</p> <p>So, port mapping from container to pod is not possible.</p> <p>The exposed ports of containers/pods are mainly informational, and tools use them to create resources.</p>
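<p>As a small illustration of that (names and images below are placeholders), two containers in one Pod each listen on their own port and reach each other over <code>localhost</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx            # listens on port 80
  - name: sidecar
    image: my-sidecar:1.0   # hypothetical image listening on port 9090;
                            # it can reach the web container at http://localhost:80
</code></pre> <p>If both containers tried to bind to port 80, the second one would fail, because they share the same network namespace.</p>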
Burak Serdar
<p>I need to rotate admin.conf for a cluster so old users who used that as their kubeconfig wouldn't be allowed to perform actions anymore. How can i do that?</p>
Julessoulfly
<p><em>This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.</em></p> <p>As <a href="https://stackoverflow.com/users/225016/mdaniel">mdaniel</a> wrote in his comment:</p> <blockquote> <p>the answer to your question is &quot;rekey the entire apiserver CA hierarchy&quot; or wait for <code>admin.conf</code> cert to expire, because those <code>admin.conf</code> credentials are absolute. Next time, use the provided <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="nofollow noreferrer">oidc mechanism</a> for user auth.</p> </blockquote> <p>For <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm based kubernetes cluster</a> please also refer to <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/" rel="nofollow noreferrer">Certificate Management with kubeadm</a>. For manual rotation of CA Certificates, please refer to <a href="https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/" rel="nofollow noreferrer">this section</a>. Pay special attention to point 7:</p> <blockquote> <ol start="7"> <li>Update certificates for user accounts by replacing the content of client-certificate-data and client-key-data respectively.</li> </ol> <p>For information about creating certificates for individual user accounts, see <a href="https://kubernetes.io/docs/setup/best-practices/certificates/#configure-certificates-for-user-accounts" rel="nofollow noreferrer">Configure certificates for user accounts</a>.</p> <p>Additionally, update the certificate-authority-data section in the kubeconfig files, respectively with Base64-encoded old and new certificate authority data</p> </blockquote>
mario
<p>I have set up an angular docker application (with nginx to serve it) in a Kubernetes cluster. In Kubernetes I have an ingress controller (the one from the kubernetes community) which shall route the traffic from the following domains:</p> <p>a) <a href="https://app.somedomain.com" rel="nofollow noreferrer">https://app.somedomain.com</a> and b) <a href="https://saas-customer.somedomain.com/app" rel="nofollow noreferrer">https://saas-customer.somedomain.com/app</a></p> <p>to the same angular container. The idea is that individual users can use a) and company users/customers can use b). I have set up the following ingress for a):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: frontend-ingress annotations: kubernetes.io/ingress.class: nginx spec: tls: - hosts: - app.somedomain.io secretName: tls-secret rules: - host: app.somedomain.io http: paths: - backend: serviceName: frontend-service servicePort: 80 </code></pre> <p>and this one for b):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: frontend-ingress-wild annotations: kubernetes.io/ingress.class: nginx spec: tls: - hosts: - somedomain.io - '*.somedomain.io' secretName: tls-secret rules: - host: '*.somedomain.io' http: paths: - path: /app backend: serviceName: frontend-service servicePort: 80 </code></pre> <p>a) is working as intended. With b) the problem is that the index.html is served, but it references assets from '/'. The base href is set to / in angular.</p> <pre><code>src="polyfills.7037a817a5bb670ed2ca.js" </code></pre> <p>But I cannot change the base href, because in one case it should be '/' and in the other '/app', while all assets should still be served from one location. I do not know if I need to add a specific route to angular for this to work, or try to fiddle with the ingress/nginx side to solve this challenge.</p>
Masiar Ighani
<p>You want to serve your assets like *.js files from "/"? You can do it using the following annotation in the second ingress:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | rewrite ^/app/(.*\.js)$ /$1 break; </code></pre> <p>This way your app will be served from the /app directory and your js files from the root, like this:</p> <pre><code>/app/index.html -&gt; /app/index.html /app/index.js -&gt; /index.js </code></pre> <p>Is this what you want to achieve?</p>
Vasili Angapov
<p>I was reading the example at <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">kubernetes hpa example</a>. In this example they run: <code>kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80</code>. So the pod will ask for 200m of cpu (0.2 of a core). After that they run the hpa with a target cpu of 50%: <code>kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10</code>, which means that the desired per-pod usage is 200m * 0.5 = 100m. They run a load test that pushes usage up to 305%, which means it auto-scales up to ceil((3.05 * 200m) / 100m) = 7 pods according to the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">hpa scaling algorithm</a>.</p> <p>This is all good, but we are experimenting with different values and I wonder if it's a good approach.</p> <p><a href="https://i.stack.imgur.com/VmuJ1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VmuJ1.png" alt="2 options"></a></p> <p>We opted for a target cpu of 500% (second option). For me, a target cpu >= 100% is a weird concept (maybe I misunderstand it, please correct me as I'm not that familiar with the whole concept), but it slows down scaling compared to the inverse (first option).</p>
ThePainnn
<p>The first approach is correct.</p> <p>The second one is not good for a few reasons:</p> <ol> <li><strong>The decision about the necessity of scaling out is taken too late</strong>, when the first Pod is already overloaded. If you give only <strong>100 millicores of CPU to one Pod</strong>, you allow a situation where it can use 5 times what is available before the decision about adding replicas is taken. Such a system isn't very efficient: with a load average of about 5 per core, while 1 process is being served at a given time, there are another 4 processes waiting for CPU time.</li> <li><strong>The same applies to scaling down. It isn't very effective either.</strong> Let's say the overall CPU usage of your workload decreased by more than 400 millicores but it is still not enough to remove one replica. In the first scenario 4 replicas would already have been removed.</li> </ol> <p><strong>Another very important thing</strong>: when planning your <strong>Horizontal Pod Autoscaler</strong>, consider the total amount of resources available in your cluster so you don't find yourself in a situation where you run out of resources.</p> <p><strong>Example:</strong> you have a system with a 2-core processor, which equals 2000 millicores available from the perspective of your cluster. Let's say you decided to create the following deployment:</p> <p><code>kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=500m --expose --port=80</code></p> <p>and then a <strong>Horizontal Pod Autoscaler</strong>:</p> <p><code>kubectl autoscale deployment php-apache --cpu-percent=100 --min=1 --max=5</code></p> <p>This means you allow more resources to be requested than you actually have available in your cluster, so in such a situation the 5th replica will never be created.</p>
mario
<p>I have a cluster in Google Kubernetes Engine, in that cluster there is a workload which runs every 4 hours, its a cron job that was set up by someone. I want to make that run whenever I need it. I am trying to achieve this by using the google Kubernetes API, sending requests from my app whenever a button is clicked to run that cron job, unfortunately the API has no apparent way to do that, or does not have a way at all. What would be some good advice to achieve my goal?</p>
Vlad Tanase
<p><em>This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.</em></p> <p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer"><code>CronJob</code></a> resource in <strong>kubernetes</strong> is not meant for one-off tasks that are run on demand. It is rather configured to run on a regular schedule.</p> <p><a href="https://stackoverflow.com/users/7424896/manuel-polacek">Manuel Polacek</a> has already mentioned that in his comment:</p> <blockquote> <p>For this scenario you don't need a cron job. A simple bare pod or a job would be enough, i would say. You can apply a resource on button push, for example with kubectl – Manuel Polacek Apr 24 at 19:25</p> </blockquote> <p>So rather than trying to find a way to run your <code>CronJobs</code> on demand, regardless of how they are originally scheduled (usually to be repeated at regular intervals), you should copy the code of such a <code>CronJob</code> and find a different way of running it. A <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer"><code>Job</code></a> fits such a use case ideally, as it is designed to run one-off tasks.</p>
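<p>A quick way to do that in practice (assuming the CronJob is named <code>my-cronjob</code> and lives in the <code>default</code> namespace) is to let kubectl spawn a one-off <code>Job</code> from the existing <code>CronJob</code> template; your app can trigger the equivalent call through the Kubernetes API when the button is clicked:</p> <pre><code>kubectl create job my-cronjob-manual-1 --from=cronjob/my-cronjob -n default
</code></pre>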
mario
<p>We are trying to get live configuration data from our kubernetes cluster. Therefore we would like to read the configmaps from each of our services.</p> <p>Is there a way to exctract this data with a spring microservice which runs alongside the rest of the services?</p> <p>Or are there other (better?) ways / tools to get this information?</p>
famabenda
<p>Using Kubernetes APIs you can get the configmaps you need. I am not familiar with the Java client, but here it is:</p> <p><a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a></p> <p>You can retrieve a list of configmaps and their contents using these APIs. Your application will need a cluster role and a cluster role binding to allow it reading from configmap resources if you're using RBAC.</p>
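<p>A rough sketch of the RBAC objects for that (the ServiceAccount name and namespace are placeholders, adjust them to your deployment):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: configmap-reader
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-configmaps
subjects:
- kind: ServiceAccount
  name: config-reader-sa      # hypothetical service account used by the Spring service
  namespace: default
roleRef:
  kind: ClusterRole
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>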
Burak Serdar
<p>I want to setup a in-cluster NFS-Server in Kubernetes to provide shares for my pods (nginx webroot etc.).</p> <p>In theory there should be a persistent volume, a volume claim and the NFS-Server, which, as I understand is a deployment.</p> <p>To use the PV and PVC I need to assign the NFS-Server's IP-Adress, which I don't know, because it automatically generated when I expose the NFS-Server with a service.</p> <p>The same problem appears if I want to deploy the nfs-server deployment itself, because I am using the PVC as volumes. But I can't deploy the PV and PVCs without giving them the NFS-Server IP.</p> <p>I think I am lost, maybe you can help me.</p> <ol> <li>PV</li> </ol> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-nfs-pv1 labels: type: local spec: storageClassName: manual capacity: storage: 1Gi accessModes: - ReadWriteMany nfs: path: "/exports/www" server: SERVER_NAME:PORT </code></pre> <ol start="2"> <li>PVC</li> </ol> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-nfs-pv1 labels: type: local spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 500Mi </code></pre> <ol start="3"> <li>NFS-Deployment</li> </ol> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nfs-server spec: replicas: 1 selector: matchLabels: role: nfs-server template: metadata: labels: role: nfs-server spec: containers: - name: nfs-server image: gcr.io/google_containers/volume-nfs:0.8 ports: - name: nfs containerPort: 2049 - name: mountd containerPort: 20048 - name: rpcbind containerPort: 111 securityContext: privileged: true volumeMounts: - mountPath: /exports/www name: pv-nfs-pv1 volumes: - name: pv-nfs-pv1 gcePersistentDisk: pdName: pv-nfs-pv1 # fsType: ext4 </code></pre>
John Jameson
<p>1) You create NFS-server deployment.</p> <p>2) You expose NFS-server deployment by creating service, say "nfs-server", exposing TCP port 2049 (assuming you use NFSv4). </p> <p>3) You create PV with the following information:</p> <pre><code> nfs: path: /exports/www server: nfs-server </code></pre> <p>4) You create PVC and mount it wherever you need it.</p>
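<p>For step 2, a sketch of such a Service (it reuses the <code>role: nfs-server</code> label from the Deployment in the question; the mountd/rpcbind ports are only needed if you are not on pure NFSv4):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    role: nfs-server
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
</code></pre> <p>The PV from step 3 then points at this Service by name instead of a hard-coded IP address.</p>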
Vasili Angapov
<p>I am a beginner to kubernetes. I am trying to install minikube because I want to run my application in kubernetes. I am using ubuntu 16.04.</p> <p>I have followed the installation instructions provided here <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy</a></p> <p>Issue 1: After installing kubectl, virtualbox and minikube I ran the command</p> <pre><code>minikube start --vm-driver=virtualbox </code></pre> <p>It fails with the following error:</p> <pre><code>Starting local Kubernetes v1.10.0 cluster... Starting VM... Getting VM IP address... Moving files into cluster... Setting up certs... Connecting to cluster... Setting up kubeconfig... Starting cluster components... E0912 17:39:12.486830 17689 start.go:305] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition </code></pre> <p>But when I check virtualbox I see the minikube VM running, and when I run kubectl</p> <pre><code>kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10 </code></pre> <p>I see the deployments</p> <pre><code> kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE hello-minikube 1 1 1 1 27m </code></pre> <p>I exposed the hello-minikube deployment as a service</p> <pre><code>kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-minikube LoadBalancer 10.102.236.236 &lt;pending&gt; 8080:31825/TCP 15m kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 19h </code></pre> <p>I got the url for the service</p> <pre><code>minikube service hello-minikube --url http://192.168.99.100:31825 </code></pre> <p>When I try to curl the url I get the following error</p> <pre><code>curl http://192.168.99.100:31825 curl: (7) Failed to connect to 192.168.99.100 port 31825: Connection refused </code></pre> <p>1) If the minikube cluster failed while starting, how was kubectl able to connect to minikube to do deployments and services? 2) If the cluster is fine, then why am I getting connection refused?</p> <p>I was looking at this proxy (<a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster</a>). What is my_proxy in this?</p> <p>Is this the minikube ip and some port?</p> <p>I have tried this</p> <p><a href="https://stackoverflow.com/questions/52300055/error-restarting-cluster-restarting-kube-proxy-waiting-for-kube-proxy-to-be-up">Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition</a></p> <p>but I do not understand how #3 (set proxy) in the solution is done. Can someone help me with instructions for the proxy?</p> <p>Adding the command output which was asked for in the comments</p> <pre><code>kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE etcd-minikube 1/1 Running 0 4m kube-addon-manager-minikube 1/1 Running 0 5m kube-apiserver-minikube 1/1 Running 0 4m kube-controller-manager-minikube 1/1 Running 0 6m kube-dns-86f4d74b45-sdj6p 3/3 Running 0 5m kube-proxy-7ndvl 1/1 Running 0 5m kube-scheduler-minikube 1/1 Running 0 5m kubernetes-dashboard-5498ccf677-4x7sr 1/1 Running 0 5m storage-provisioner 1/1 Running 0 5m </code></pre>
VSK
<blockquote> <p>I deleted minikube and removed all files under ~/.minikube and reinstalled minikube. Now it is working fine. I did not get the output before but I have attached it after it is working to the question. Can you tell me what does the output of this command tells ?</p> </blockquote> <p>It will be very difficult or even impossible to tell exactly what was wrong with your <strong>Minikube Kubernetes cluster</strong> when it is already removed and set up again.</p> <p>Basically there were a few things that you could do to properly troubleshoot or debug your issue.</p> <blockquote> <p>Adding the command output which was asked in the comments</p> </blockquote> <p>The output you posted is actually only part of the task that @Eduardo Baitello asked you to do. <code>kubectl get po -n kube-system</code> command simply shows you a list of <code>Pods</code> in <code>kube-system</code> namespace. In other words this is the list of system pods forming your Kubernetes cluster and, as you can imagine, proper functioning of each of these components is crucial. As you can see in your output the <code>STATUS</code> of your <code>kube-proxy</code> pod is <code>Running</code>:</p> <pre><code>kube-proxy-7ndvl 1/1 Running 0 5m </code></pre> <p>You were also asked in @Eduardo's question to check its logs. You can do it by issuing:</p> <pre><code>kubectl logs kube-proxy-7ndvl -n kube-system </code></pre> <p>It could tell you what was wrong with this particular pod at the time when the problem occurred. Additionally in such a case you may use the <code>describe</code> command to see other pod details (sometimes looking at pod events may be very helpful to figure out what's going on with it):</p> <pre><code>kubectl describe pod kube-proxy-7ndvl -n kube-system </code></pre> <p>The suggestion to check this particular <code>Pod</code> status and logs was most probably motivated by this fragment of the error messages shown during your Minikube startup process:</p> <pre><code>E0912 17:39:12.486830 17689 start.go:305] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition </code></pre> <p>As you can see this message clearly suggests that there is in short "something wrong" with <code>kube-proxy</code> so it made a lot of sense to check it first.</p> <p>There is one more thing you may not have noticed:</p> <pre><code>kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-minikube LoadBalancer 10.102.236.236 &lt;pending&gt; 8080:31825/TCP 15m kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 19h </code></pre> <p>Your <code>hello-minikube</code> service was not completely ready. In the <code>EXTERNAL-IP</code> column you can see that its state was <code>pending</code>. Just as you can use the <code>describe</code> command to describe <code>Pods</code>, you can do so to get details of the service. A simple:</p> <pre><code>kubectl describe service hello-minikube </code></pre> <p>could tell you quite a lot in such a case.</p> <blockquote> <p>1)If minikube cluster got failed while starting, how did the kubectl able to connect to minikube to do deployments and services? 2) If cluster is fine, then why am i getting connection refused ?</p> </blockquote> <p>Remember that <strong>Kubernetes Cluster</strong> is not a monolithic structure and consists of many parts that depend on one another. 
The fact that <code>kubectl</code> worked and you could create deployment doesn't mean that the whole cluster was working fine and as you can see in the error message it was suggesting that one of its components, namely <code>kube-proxy</code>, could actually not function properly.</p> <p>Going back to the beginning of your question...</p> <blockquote> <p>I have followed the installation instructions provided here <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy</a></p> <p>Issue1: After installing kubectl, virtualbox and minikube I have run the command</p> <pre><code>minikube start --vm-driver=virtualbox </code></pre> </blockquote> <p>as far as I understood you don't use the http proxy so you didn't follow instructions from this particular fragment of the docs that you posted, did you ? </p> <p>I have the impression that you mix 2 concepts. <code>kube-proxy</code> which is a <code>Kubernetes cluster</code> component and which is deployed as pod in <code>kube-system</code> space and <a href="https://en.wikipedia.org/wiki/Proxy_server#Web_proxy_servers" rel="noreferrer">http proxy server</a> mentioned in <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy" rel="noreferrer">this</a> fragment of documentation. </p> <blockquote> <p>I was looking at this proxy(<a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster</a>) what is my_proxy in this ?</p> </blockquote> <p>If you don't know what is your <a href="https://en.wikipedia.org/wiki/Proxy_server#Web_proxy_servers" rel="noreferrer">http proxy</a> address, most probably you simply don't use it and if you don't use it to connect to the Internet from your computer, <strong><em>it doesn't apply to your case in any way</em></strong>. </p> <p>Otherwise you need to set it up for your <strong>Minikube</strong> by <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy" rel="noreferrer">providing additional flags when you start it</a> as follows:</p> <pre><code>minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \ --docker-env https_proxy=https://$YOURPROXY:PORT </code></pre> <p>If you were able to start your <strong>Minikube</strong> and now it works properly only using the command:</p> <pre><code>minikube start --vm-driver=virtualbox </code></pre> <p>your issue was caused by something else and <em>you don't need to provide the above mentioned flags to tell your <strong>Minikube</strong> what is your http proxy server that you're using</em>.</p> <p>As far as I understand currently everything is up and running and you can access the url returned by the command <code>minikube service hello-minikube --url</code> without any problem, right ? You can also run the command <code>kubectl get service hello-minikube</code> and check if its output differs from what you posted before. As you didn't attach any yaml definition files it's difficult to tell if it was nothing wrong with your service definition. Also note that <code>Load Balancer</code> is a service type designed to work with external load balancers provided by cloud providers and minikube uses <code>NodePort</code> instead of it.</p>
mario
<p>We are trying to figure out a microservice architecture where we have an API Gateway (Zuul in this case). Would all the services that Zuul is redirecting requests to also need to be exposed externally? It seems counter-intuitive, as all these services can have private/local/cluster access and the gateway is the one that should be externally exposed. Is this a correct assessment? In what scenarios would you want these backend services to be exposed externally?</p>
user3380149
<p>Normally, you would not expose your backend services externally. The gateway (or the ingress) serves as the external gateway and proxies the requests to the internal network. </p> <p>I am familiar with one use case where I expose some services directly: I do not want to expose some admin services running on my cluster to the external world, but I want to expose them to my VPN, so I have an ingress forwarding traffic between the external network and the cluster, and nodePort services that expose admin apps to my VPN.</p>
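<p>As a minimal sketch (all names are made up), the backend services stay internal as plain <code>ClusterIP</code> services, and only the gateway gets an externally reachable service (or an Ingress in front of it):</p>
<pre><code># internal microservice - reachable only from inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  type: ClusterIP      # the default - no external exposure
  selector:
    app: orders
  ports:
    - port: 8080
---
# only the gateway is published to the outside world
apiVersion: v1
kind: Service
metadata:
  name: zuul-gateway
spec:
  type: LoadBalancer
  selector:
    app: zuul-gateway
  ports:
    - port: 80
      targetPort: 8080
</code></pre>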
Burak Serdar
<p>I have Keycloak behind Kong Ingress Controller. I 'm able to see keycloak welcome page at my {url}/auth/. However, when I click at Administration Console I am redirected to {url}:8443/auth/admin/master/console/</p> <p>When I click at Administration Console I should be redirect to {url}/auth/admin/master/console/</p> <p>When I install keycloak (with helm) on minikube exposing the the service as a NodePort service without using ingress and load balancer I'm able to access Administration Console page.</p> <p>I have detailed information about this problem in this link -> <a href="https://github.com/codecentric/helm-charts/issues/17" rel="nofollow noreferrer">https://github.com/codecentric/helm-charts/issues/17</a></p> <p>I'm stuck in this and have no idea how to solve the problem.</p>
J. Jhonys C. Camacho
<p>I have faced this issue may be a year ago, I remember that stupid redirect but I was not using Kong Ingress Controller, just a plain Kong. The problem I faced is that Kong runs as unprivileged user and cannot bind to low number ports. So Kong binds to 8443 ssl and places stupid redirect from 443 to 8443. I could not normally fix that and reinvented the wheel.</p> <p>I used ports 80 and 443 for Kong:</p> <pre><code> ports: - name: kong-proxy containerPort: 80 - name: kong-proxy-ssl containerPort: 443 - name: kong-admin containerPort: 8001 - name: kong-admin-ssl containerPort: 8444 </code></pre> <p>Then defined new ports and capability:</p> <pre><code>securityContext: capabilities: add: - NET_BIND_SERVICE env: - name: KONG_PROXY_LISTEN value: 0.0.0.0:80, 0.0.0.0:443 ssl - name: KONG_ADMIN_LISTEN value: 0.0.0.0:8001, 0.0.0.0:8444 ssl </code></pre> <p>After that that stupid redirect disappeared.</p> <p>Hope that helps.</p> <p><strong>UPDATE</strong></p> <p>Sorry, forgot to mention that for ports 80 and 443 to work I build custom Docker image with that lines:</p> <pre><code>FROM kong:1.1.1-centos RUN chown -R kong:kong /usr/local/kong \ &amp;&amp; setcap 'cap_net_bind_service=+ep' /usr/local/bin/kong \ &amp;&amp; setcap 'cap_net_bind_service=+ep' /usr/local/openresty/nginx/sbin/nginx </code></pre>
Vasili Angapov
<p>I am using K8S version 19.</p> <p>I tried to install second nginx-ingress controller on my server (I have already one for Linux so I tried to install for Windows as well)</p> <pre><code>helm install nginx-ingress-win ingress-nginx/ingress-nginx -f internal-ingress.yaml --set controller.nodeSelector.&quot;beta\.kubernetes\.io/os&quot;=windows --set defaultBackend.nodeSelector.&quot;beta\.kubernetes\.io/os&quot;=windows --set controller.admissionWebhooks.patch.nodeSelector.&quot;beta\.kubernetes\.io/os&quot;=windows --set tcp.9000=&quot;default/frontarena-ads-win-test:9000&quot; </code></pre> <p>This failed with &quot;Error: failed pre-install: timed out waiting for the condition&quot;.</p> <p>So I have run helm uninstall to remove that chart</p> <pre><code>helm uninstall nginx-ingress-win release &quot;nginx-ingress-win&quot; uninstalled </code></pre> <p>But I am getting Validation Webhook Pod created constantly</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE nginx-ingress-win-ingress-nginx-admission-create-f2qcx 0/1 ContainerCreating 0 41m </code></pre> <p>I delete pod with <code>kubectl delete pod</code> but it get created again and again.</p> <p>I tried also <code>kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-win-ingress-nginx-admission</code> but I am getting message <code>not found</code> for all combinations. How I can resolve this and how I can get rid off this? Thank you!!!</p>
vel
<p>If this <code>Pod</code> is managed by a <code>Deployment</code>,<code>StatefulSet</code>,<code>DaemonSet</code> etc., it will be automatically recreated every time you delete it, so trying to remove a <code>Pod</code> in most situations makes not much sense.</p> <p>If you want to check what controlls this <code>Pod</code>, run:</p> <pre><code>kubectl describe pod nginx-ingress-win-ingress-nginx-admission-create-f2qcx | grep Controlled </code></pre> <p>You would probably see some <code>ReplicaSet</code>, which is also managed by a <code>Deployment</code> or another object. Suppose I want to check what I should delete to get rid of my <code>nginx-deployment-574b87c764-kjpf6</code> <code>Pod</code>. I can do this as follows:</p> <pre><code>$ kubectl describe pod nginx-deployment-574b87c764-kjpf6 | grep -i controlled Controlled By: ReplicaSet/nginx-deployment-574b87c764 </code></pre> <p>then I need to run again <code>kubectl describe</code> on the name of the <code>ReplicaSet</code> we found:</p> <pre><code>$ kubectl describe rs nginx-deployment-574b87c764 | grep -i controlled Controlled By: Deployment/nginx-deployment </code></pre> <p>Finally we can see that it is managed by a <code>Deployment</code> named <code>nginx-deployment</code> and this is the resource we need to delete to get rid of our <code>nginx-deployment-574b87c764-kjpf6</code> <code>Pod</code>.</p>
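<p>In your particular case the <code>admission-create</code> <code>Pod</code> is usually owned by a <code>Job</code> that the chart creates as a helm hook, so once the <code>describe</code> output confirms the owner you would remove it with something along these lines (the exact name may differ - take it from the describe output):</p>
<pre><code>kubectl delete job nginx-ingress-win-ingress-nginx-admission-create
</code></pre>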
mario
<p>In namespace A, I have a service <code>nginx</code> running. In namespace B, I can use <code>nginx.A</code> or <code>nginx.A.svc.cluster.local</code> to get access to the <code>nginx</code> in namespace A. </p> <p>So what's the difference between these two? Which one is more recommended? Why?</p>
injoy
<p><strong>Both forms are considered to be correct</strong> (compare with <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces" rel="noreferrer">this</a> article) and in majority of cases work fine however I could find a few issues on <strong>github</strong> when people encountered some problems related only to short names resolution e.g.:</p> <p><a href="https://github.com/kubernetes/dns/issues/109" rel="noreferrer">https://github.com/kubernetes/dns/issues/109</a></p> <p><a href="https://github.com/kubernetes/kubernetes/issues/10014" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/10014</a></p> <p>As you can read in official <strong>Kubernetes</strong> documentation (<a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#namespaces-and-dns" rel="noreferrer">ref1</a>, <a href="https://kubernetes.io/docs/tasks/administer-cluster/namespaces/#understanding-namespaces-and-dns" rel="noreferrer">ref2</a>), it recommends use of the long form in case of reaching services across namespaces:</p> <blockquote> <p>When you create a Service, it creates a corresponding DNS entry. This entry is of the form <strong><code>&lt;service-name&gt;.&lt;namespace-name&gt;.svc.cluster.local</code></strong>, which means that if a container just uses <strong><code>&lt;service-name&gt;</code></strong>, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the <strong>fully qualified domain name (FQDN)</strong>.</p> </blockquote> <p>In my opinion it's much better to stick to <a href="https://en.wikipedia.org/wiki/Fully_qualified_domain_name" rel="noreferrer">FQDN (fully qualified domain name)</a> standard and often being explicit is considered to be a better practice than being implicit.</p>
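<p>For illustration, keeping the placeholder names from the question (real namespace names have to be lower-case), both of the following commands run from a <code>Pod</code> in namespace <code>B</code> resolve to the same <code>ClusterIP</code>, assuming the <code>nginx</code> service listens on port 80:</p>
<pre><code>curl http://nginx.A
curl http://nginx.A.svc.cluster.local
</code></pre>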
mario
<p>I want to know the recommendation set for pod size. I.e. when to put application within pod or at what size it will be better to use machine itself in place of pod.</p> <p>Ex. when to think of coming out of k8s and used as external service for some application, when pod required 8GB or 16GB or 32GB? Same for CPU intensive.</p> <p>Because if pod required 16GB or 16 CPU and we have a machine/node of the same size then I think there is no sense of running pod on that machine. If we run in that scenario then it will be like we will having 10 pods and which required 8 Nodes.</p> <p>Hopes you understand my concern.</p> <p>So if some one have some recommendation for that then please share your thoughts on that. Some references will be more better.</p> <p>Recommendation for ideal range:</p> <ol> <li>size of pods in terms of RAM and CPU</li> <li>Pods is to nodes ratio, i.e. number of pods per nodes </li> <li>Whether good for stateless or stateful or both type of application or not</li> </ol> <p>etc.</p>
Prakul Singhal
<p>Running a 16cpu/16gb pod on a 16cpu/16gb machine is normal. Why not? You may think of pods as tiny, but there is no such requirement. Pods can be gigantic; there is no issue with that. Remember that a container is just a process on a node, so why refuse to run a fat process on a fat node? Kubernetes adds a very nice orchestration level to containers, so why not make use of it?</p> <p>There is no such thing as a universal or recommended pod size. Asking for a recommended pod size is the same as asking for a recommended size for a VM or bare-metal server. It is totally up to your application. If your application requires 16 or 64 GB of RAM - then that is the recommended size for you.</p> <p>Regarding the pods-to-nodes ratio - the current upper limit of Kubernetes is 110 pods per node. Everything below that watermark is fine. The only thing is that the recommended master node size increases with the total number of pods. If you have around 1000 pods - you can go with small to medium sized master nodes. If you have over 10 000 pods - you should increase your master node size. </p> <p>Regarding statefulness - stateless applications generally survive better. But often state also has to be stored somewhere, and stored reliably. So if you plan your application as a set of microservices - create as many stateless apps as you can and as few stateful ones as you can. Ideally, only the relational databases should be truly stateful.</p>
Vasili Angapov
<p>I have two services deployed on the same k8s (minikube cluster). What is the url/approach I should use for one service to communicate with another service. I tried searching a bit on the web but most of them are communicating with an external db which is not what I'm after. This is what my deployments look like. I am looking for the goclient to be able to communicate with goserver. I know I need to go through the service but not sure what the url should look like. And is this dynamically discoverable? In addition to this if I expose goserver though <code>ingress</code> will this change ?</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: goserver namespace: golang-ns labels: app: goserver spec: replicas: 1 selector: matchLabels: app: goserver template: metadata: labels: app: goserver spec: containers: - name: goserver image: goserver:1.0.0 imagePullPolicy: Never ports: - containerPort: 8080 --- apiVersion: apps/v1 kind: Deployment metadata: name: goclient namespace: golang-ns labels: app: goclient spec: replicas: 1 selector: matchLabels: app: goclient template: metadata: labels: app: goclient spec: containers: - name: goclient image: goclient:1.0.0 imagePullPolicy: Never ports: - containerPort: 8081 --- apiVersion: v1 kind: Service metadata: name: goserver-service namespace: golang-ns spec: selector: app: goserver ports: - protocol: TCP port: 8080 targetPort: 8080 type: LoadBalancer --- apiVersion: v1 kind: Service metadata: name: goclient-service namespace: golang-ns spec: selector: app: goclient ports: - protocol: TCP port: 8081 targetPort: 8081 type: LoadBalancer </code></pre>
tmp dev
<p>Note that the term <em><strong>Service</strong></em> can be quite ambiguous when used in the context of <strong>kubernetes</strong>.</p> <p><strong>Service</strong> in your question is used to denote one of your <strong>microservices</strong>, deployed as containerized applications, running in <code>Pods</code>, managed by 2 separate <code>Deployments</code>.</p> <p><code>Service</code>, that was mentioned in David Maze's comment, refers to a specific resource type which is used for exposing your apps/microservices both inside and outside your <strong>kubernetes cluster</strong>. This resource type is called a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer"><code>Service</code></a>. But I assume you know that as such <code>Services</code> are also added in your examples.</p> <p>This is the reason why I prefer to use a term <strong><em>microservice</em></strong> if I really want to call <em>&quot;a service&quot;</em> one of the apps (clients, servers, whatever... ) deployed on my <strong>kubernetes cluster</strong>. And yes, this is really important distinction as talking about communication from one <code>Service</code> to another <code>Service</code> (kubernetes resource type) doesn't make any sense at all. Your <code>Pod</code> can communicate with a different <code>Pod</code> via a <code>Service</code> that exposes this second <code>Pod</code>, but <code>Services</code> don't communicate with each other at all. I hope this is clear.</p> <p>So in order to expose one of your <strong>microservices</strong> <strong>within your cluster</strong> and make it easily accessible for other <strong>microservices</strong>, running on the same cluster, use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer"><code>Service</code></a>. But what you really need in your case is it's simplest form. ❗<strong>There is no need for using <code>LoadBalancer</code> type here</strong>. In your case you want to expose your <code>Deployment</code> named <code>goserver</code> to make it accessible by <code>Pods</code> from second <code>Deployment</code>, named <code>goclient</code>, not by external clients, sending requests from the public Internet.</p> <p>Note that <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer"><code>LoadBalancer</code></a> type that you used in your <code>Service</code>'s yaml manifests has completely different purpose - it is used for exposing your app for clients reaching to it from outside your <strong>kubernetes cluster</strong> and is mainly applicable in cloud environments.</p> <p>So again, what you need in your case is the simplest <code>Service</code> (often called <code>ClusterIP</code> as it is the default <code>Service</code> type) which exposes a <code>Deployment</code> <strong>within</strong> the cluster. ⚠️ Remember that <code>ClusterIP</code> <code>Service</code> also has loadbalancing capabilities.</p> <p>OK, enough of explanations, let's move on to the practical part. As I said, it's really simple and it can be done with one command:</p> <pre><code>kubectl expose deployment goserver --port 8080 --namespace golang-ns </code></pre> <p>Yes! That's all! 
It will create a <code>Service</code> named <code>goserver</code> (there is no reason to name it differently than the <code>Deployment</code> it exposes) which will expose <code>Pods</code> belonging to <code>goserver</code> <code>Deployment</code> within your <strong>kubernetes cluster</strong>, making it easily accessible (and discoverable) via it's <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="noreferrer">DNS</a> name.</p> <p>If you prefer declarative <code>Service</code> definition, here it is as well:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: goserver namespace: golang-ns spec: selector: app: goserver ports: - port: 8080 </code></pre> <p>Your <code>golang-client</code> <code>Pods</code> need only the <code>Service</code> name i.e. <code>goserver</code> to access <code>goserver</code> <code>Pods</code> as they are deployed in the same namespace (<code>golang-ns</code>). If you need to access them from a <code>Pod</code> deployed to a different namespace, you need to use <code>&lt;servicename&gt;.&lt;namespace&gt;</code> i.e. <code>goserver.golang-ns</code>. You can also use <strong>fully quallified domain name (FQDN)</strong> (see the official docs <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="noreferrer">here</a>):</p> <pre><code>my-svc.my-namespace.svc.cluster-domain.example </code></pre> <p>which in your case may look as follows:</p> <pre><code>goserver.golang-ns.svc.cluster.local </code></pre> <p>As to:</p> <blockquote> <p>In addition to this if I expose goserver though ingress will this change ?</p> </blockquote> <p>❗Unless you want to expose your <code>goserver</code> to the external world, don't use <code>Ingress</code>, you don't need it.</p>
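<p>If you want to quickly verify the in-cluster connectivity, you can exec into one of the client <code>Pods</code> and call the server by its <code>Service</code> name (assuming the client image ships <code>wget</code> or <code>curl</code>; with a reasonably recent kubectl you can even target the Deployment directly):</p>
<pre><code>kubectl exec -it deploy/goclient -n golang-ns -- wget -qO- http://goserver:8080
</code></pre>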
mario
<p>I work with a dev shop as a contractor &amp; I'm pitching Kubernetes to my CTO.</p> <p>It's on the premise that they can deploy multiple websites and abstract away multi-server management.</p> <p>However, the one stipulation is that in this new cluster of resources they would be able to point multiple different domains at it and still be able to route requests accordingly.</p> <p>So my question is: how can i manage multiple domains on a single Kubernetes cluster?</p> <p>I don't know if this sort of thing is possible in Kubernetes, any help would be appreciated.</p>
Obinna
<p>You can use an ingress with multiple domain names:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p> <p>In the <code>rules</code> section, you can define multiple hosts like:</p> <pre><code>rules: - host: host1.com http: paths: ... - host: host2.com http: paths: ... </code></pre>
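<p>A more complete sketch (service names, ports and the API version are assumptions - adjust them to your cluster) could look like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-domain
spec:
  rules:
    - host: host1.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: website1
                port:
                  number: 80
    - host: host2.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: website2
                port:
                  number: 80
</code></pre>
<p>Each <code>host</code> entry routes the requests for that domain to its own backend service, so a single ingress controller (and a single external IP) can serve all the domains.</p>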
Burak Serdar
<p>I have a k8s deployment that pulls an image based on the <em>digest</em> rather than the <em>tag</em>.</p> <p>Why? I have multiple lower k8s namespaces which all pull from the same Docker repo. I don't want a bug fix for <em>ns-dv</em> to accidentally replace with an image pushed for <em>ns-qa</em>. So I'd like to keep both images around, even if they share a tag.</p> <p>And since imagePullPolicy is always, new dynamic pods in <em>ns-qa</em> may use the latest, incorrect image.</p> <pre><code>imagePullPolicy: Always </code></pre> <p>Thus, in my Docker repo (Mirantis) I'd like to keep multiple images per tag, one per digest.</p> <p>Is this possible?</p>
paiego
<p>A digest uniquely identifies an image. A tag points to a digest. So, you cannot have multiple images that have the same tag. The difference is, a tag may be updated to point to a different digest. Two different tags can point to the same digest.</p> <p>So you either have to use the digests, or different tags for each namespace (app-dev, app-qa, etc.). The different tags may point to the same image, or they may point to different images.</p> <p>When you promote a dev image to qa, for instance, you can simply tag the dev image as qa, so both app-dev and app-qa tags pull the same image. Then you can make updates to the dev image, and tag that as app-dev, so dev namespace updates, but qa namespace stays the same.</p>
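<p>A minimal sketch of such a promotion flow (registry, repository and tag names are made up):</p>
<pre><code># build and push the dev image
docker build -t registry.example.com/myapp:app-dev .
docker push registry.example.com/myapp:app-dev

# promote the exact same image to qa by adding a second tag
docker pull registry.example.com/myapp:app-dev
docker tag  registry.example.com/myapp:app-dev registry.example.com/myapp:app-qa
docker push registry.example.com/myapp:app-qa
</code></pre>
<p>Both tags now point to the same digest until you push a new build of <code>app-dev</code>, at which point only the dev namespace picks up the change.</p>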
Burak Serdar
<p>With stackdriver's kubernetes engine integration, I can view real-time information on my pods and services, including how many are ready. I can't find any way to monitor this, however.</p> <p>Is there a way to set up an alerting policy that triggers if no pods in a deployment or service are ready? I can set up a log-based metric, but this seems like a crude workaround for information that stackdriver logging seems to already have access to.</p>
Nick Johnson
<p>I am not sure about the Stackdriver support of this feature; however, you can try creating following alerts as a workaround:</p> <ol> <li>In Alerting policy creation user interface, select resource type as "k8s_container", also select a metric that always exists ( for example, 'CPU usage time').</li> <li>Define any "filter" or you can use "group by" which will trigger the alert conditions.</li> <li>In aggregation, choose "count" aggregator.</li> </ol>
userX
<p>I'm running a Kubernetes cluster in a public cloud (Azure/AWS/Google Cloud), and I have some non-HTTP services I'd like to expose for users.</p> <p>For HTTP services, I'd typically use an Ingress resource to expose that service publicly through an addressable DNS entry.</p> <p>For non-HTTP, TCP-based services (e.g, a database such as PostgreSQL) how should I expose these for public consumption?</p> <p>I considered using <code>NodePort</code> services, but this requires the nodes themselves to be publicly accessible (relying on <code>kube-proxy</code> to route to the appropriate node). I'd prefer to avoid this if possible.</p> <p><code>LoadBalancer</code> services seem like another option, though I don't want to create a dedicated cloud load balancer for <em>each</em> TCP service I want to expose.</p> <p>I'm aware that the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">NGINX Ingress controller supports exposing TCP and UDP services</a>, but that seems to require a static definition of the services you'd like to expose. For my use case, these services are being dynamically created and destroyed, so it's not possible to define these service mappings upfront in a static <code>ConfigMap</code>.</p>
cjheppell
<p>Maybe this workflow can help:</p> <p>(I make the assumption that the cloud provider is AWS)</p> <ul> <li><p><strong>AWS Console:</strong> Create a segregated VPC and create your Kubernetes ec2 instances (or autoscaling group) disabling the creation of public IPs. This makes it impossible to access the instances from the Internet; you can still access them through the private IP (e.g. 172.30.1.10) via a Site-to-Site VPN or through a secondary ec2 instance in the same VPC with a public IP. </p></li> <li><p><strong>Kubernetes:</strong> Create a service with a fixed NodePort (e.g. 35432 for Postgres).</p></li> <li><p><strong>AWS console:</strong> create a Classic or Layer 4 (Network) Load Balancer inside the same VPC as your nodes; in the Listeners tab open port 35432 (and any other ports that you might need) pointing to one or all of your nodes via a "Target Group". There is no extra charge based on the number of ports. </p></li> </ul> <p>At this point, I don't know how to automate the update of the currently living nodes in the Load Balancer's Target Group; this may be an issue if you use autoscaling... Maybe a cron job with a bash script pulling info from the AWS API and updating the Target Group?</p>
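<p>Regarding that last point, a rough and untested sketch of such a script (the target group ARN and the tag filter are placeholders) based on the AWS CLI could look like this:</p>
<pre><code>#!/bin/bash
# (re)register all currently running cluster nodes in the NLB target group
TG_ARN="arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/k8s-nodes/abc123"

IDS=$(aws ec2 describe-instances \
        --filters "Name=tag:kubernetes.io/cluster/my-cluster,Values=owned" \
                  "Name=instance-state-name,Values=running" \
        --query "Reservations[].Instances[].InstanceId" --output text)

for id in $IDS; do
  aws elbv2 register-targets --target-group-arn "$TG_ARN" --targets Id="$id"
done
</code></pre>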
Hugo V
<p>I just started to learn Kubernetes. I know what a rollback is, but I have never heard of rollout. Is "<strong>rollout</strong>" related to rollback in any way? Or "<strong>rollout</strong> is similar to deploying something? </p> <p><a href="https://i.stack.imgur.com/p77yB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p77yB.png" alt="("></a></p>
Jin Lee
<p>Rollout simply means rolling update of application. Rolling update means that application is updated gradually, gracefully and with no downtime. So when you push new version of your application's Docker image and then trigger rollout of your deployment Kubernetes first launches new pod with new image while keeping old version still running. When new pod settles down (passes its readiness probe) - Kubernetes kills old pod and switches Service endpoints to point to new version. When you have multiple replicas it will happen gradually until all replicas are replaced with new version.</p> <p>This behavior however is not the only one possible. You can tune Rolling Update settings in your deployments <code>spec.strategy</code> settings.</p> <p>Official docs even have interactive tutorial on Rolling Update feature, it perfectly explains how it works: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/</a></p>
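<p>For example (the numbers are only an illustration), the rollout behaviour can be tuned in the Deployment spec like this:</p>
<pre><code>spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired count during the update
      maxUnavailable: 0    # never drop below the desired count
</code></pre>
<p>You can watch an ongoing rollout with <code>kubectl rollout status deployment/&lt;name&gt;</code> and revert it with <code>kubectl rollout undo deployment/&lt;name&gt;</code> - which is exactly where the connection to rollback comes from.</p>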
Vasili Angapov
<p>Created Kubernetes cluster deployment with 3 Pods, and all are running fine, but when trying to run them cannot do it, tried doing curl the Ip (Internal)of the Pods in describe section i could see this error &quot;&quot; MountVolume.SetUp failed for volume &quot;default-token-twhht&quot; : failed to sync secret cache:</p> <p>errors below:</p> <pre><code>5m51s Normal RegisteredNode node/ip-10-1-1-4 Node ip-10-1-1-4 event: Registered Node ip-10-1-1-4 in Controller 57m Normal Scheduled pod/nginx-deployment-585449566-9bqp7 Successfully assigned default/nginx-deployment-585449566-9bqp7 to ip-10-1-1-4 57m Warning FailedMount pod/nginx-deployment-585449566-9bqp7 MountVolume.SetUp failed for volume &quot;default-token-twhht&quot; : failed to sync secret cache: timed out waiting for the condition 57m Normal Pulling pod/nginx-deployment-585449566-9bqp7 Pulling image &quot;nginx:latest&quot; 56m Normal Pulled pod/nginx-deployment-585449566-9bqp7 Successfully pulled image &quot;nginx:latest&quot; in 12.092210534s 56m Normal Created pod/nginx-deployment-585449566-9bqp7 Created container nginx 56m Normal Started pod/nginx-deployment-585449566-9bqp7 Started container nginx 57m Normal Scheduled pod/nginx-deployment-585449566-9hlhz Successfully assigned default/nginx-deployment-585449566-9hlhz to ip-10-1-1-4 57m Warning FailedMount pod/nginx-deployment-585449566-9hlhz MountVolume.SetUp failed for volume &quot;default-token-twhht&quot; : failed to sync secret cache: timed out waiting for the condition 57m Normal Pulling pod/nginx-deployment-585449566-9hlhz Pulling image &quot;nginx:latest&quot; 56m Normal Pulled pod/nginx-deployment-585449566-9hlhz Successfully pulled image &quot;nginx:latest&quot; in 15.127984291s 56m Normal Created pod/nginx-deployment-585449566-9hlhz Created container nginx 56m Normal Started pod/nginx-deployment-585449566-9hlhz Started container nginx 57m Normal Scheduled pod/nginx-deployment-585449566-ffkwf Successfully assigned default/nginx-deployment-585449566-ffkwf to ip-10-1-1-4 57m Warning FailedMount pod/nginx-deployment-585449566-ffkwf MountVolume.SetUp failed for volume &quot;default-token-twhht&quot; : failed to sync secret cache: timed out waiting for the condition 57m Normal Pulling pod/nginx-deployment-585449566-ffkwf Pulling image &quot;nginx:latest&quot; 56m Normal Pulled pod/nginx-deployment-585449566-ffkwf Successfully pulled image &quot;nginx:latest&quot; in 9.459864756s 56m Normal Created pod/nginx-deployment-585449566-ffkwf Created container nginx </code></pre>
Deep Kundu
<p>You can add an additional RBAC role permission to your Pod's service account; see references <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">1</a> <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#define-and-assign" rel="nofollow noreferrer">2</a> <a href="https://kubernetes.io/docs/reference/access-authn-authz/node/" rel="nofollow noreferrer">3</a>.</p> <p>Ensure as well that you have Workload Identity set up, reference <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_on_cluster" rel="nofollow noreferrer">4</a>.</p> <hr /> <p>This can also happen when the apiserver is under high load; you could use more, smaller nodes to spread your pods and increase your resource requests.</p>
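<p>A minimal sketch of such a Role/RoleBinding (namespace and service account name are placeholders - use the ones your Pods actually run with):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default        # the service account used by the Pods
    namespace: default
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>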
Toni
<p>Say we have a simple deployment.yml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: namespace: ikg-api-demo name: ikg-api-demo spec: selector: matchLabels: app: ikg-api-demo replicas: 3 template: metadata: labels: app: ikg-api-demo spec: containers: - name: ikg-api-demo imagePullPolicy: Always image: example.com/main_api:private_key ports: - containerPort: 80 </code></pre> <p>the problem is that this image/container depends on another image/container - it needs to cp data from the other image, or use some shared volume.</p> <p>How can I tell kubernetes to download another image, run it as a container, and then copy data from it to the container declared in the above file?</p> <p>It looks like <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">this article</a> explains how.</p> <p>but it's not 100% clear how it works. It looks like you create some shared volume, launch the two containers, using that shared volume?</p> <p>so I according to that link, I added this to my deployment.yml:</p> <pre><code>spec: volumes: - name: shared-data emptyDir: {} containers: - name: ikg-api-demo imagePullPolicy: Always volumeMounts: - name: shared-data mountPath: /nltk_data image: example.com/nltk_data:latest - name: ikg-api-demo imagePullPolicy: Always volumeMounts: - name: shared-data mountPath: /nltk_data image: example.com/main_api:private_key ports: - containerPort: 80 </code></pre> <p>my primary hesitation is that mounting /nltk_data as a shared volume will overwrite what might be there already.</p> <p>So I assume what I need to do is mount it at some other location, and then make the ENTRYPOINT for the source data container:</p> <pre><code>ENTRYPOINT ['cp', '-r', '/nltk_data_source', '/nltk_data'] </code></pre> <p>so that will write it to the shared volume, once the container is launched.</p> <p><strong>So I have two questions:</strong></p> <ol> <li><p>How to run one container and finish a job, before another container starts using kubernetes?</p> </li> <li><p>How to write to a shared volume without having that shared volume overwrite what's in your image? In other words, if I have /xyz in the image/container, I don't want to have to copy <code>/xyz</code> to <code>/shared_volume_mount_location</code> if I don't have to.</p> </li> </ol>
Alexander Mills
<h1>How to run one container and finish a job, before another container starts using kubernetes?</h1> <p>Use initContainers - updated your deployment.yml, assuming <code>example.com/nltk_data:latest</code> is your data image</p> <h1>How to write to a shared volume without having that shared volume overwrite?</h1> <p>As you know what is there in your image, you need to select an appropriate mount path. I would use <code>/mnt/nltk_data</code></p> <h1>Updated deployment.yml with init containers</h1> <pre><code>spec: volumes: - name: shared-data emptyDir: {} initContainers: - name: init-ikg-api-demo imagePullPolicy: Always # You can use command, if you don't want to change the ENTRYPOINT command: ['sh', '-c', 'cp -r /nltk_data_source /mnt/nltk_data'] volumeMounts: - name: shared-data mountPath: /mnt/nltk_data image: example.com/nltk_data:latest containers: - name: ikg-api-demo imagePullPolicy: Always volumeMounts: - name: shared-data mountPath: /nltk_data image: example.com/main_api:private_key ports: - containerPort: 80 </code></pre>
Prakash Krishna
<p>I am trying to figure out the minimum number of kubernetes master nodes in a master replica set for kubernetes that will allow the entire cluster to behave as normal. Their official docs mention that you need a of three master nodes. </p> <p>What happens when you lose 1 of the 3 master nodes? Can you lose two of the master nodes in a replica set and still have the cluster behave as normal?</p>
Dylan
<p>The Kubernetes API works as long as the Etcd cluster works. The Etcd cluster works as long as it has quorum, i.e. at least 2 of the 3 Etcd members are alive - so you can lose 1 of the 3 master nodes and the cluster still behaves normally. If only 1 of the 3 Etcd members is alive, the cluster goes into a read-only state in which no new pods can be scheduled and no resource creates/updates/deletes are allowed. </p>
Vasili Angapov
<p>When attempting to drain a node on an AKS K8s cluster using:</p> <pre><code>kubectl drain ${node_name} --ignore-daemonsets </code></pre> <p>I get the following error:</p> <pre><code> "The Node \"aks-agentpool-xxxxx-0\" is invalid: []: Forbidden: node updates may only change labels, taints, or capacity (or configSource, if the DynamicKubeletConfig feature gate is enabled)" </code></pre> <p>Is there something extra that needs to be done on AKS nodes to allow draining?</p> <p>(Context: This is part of an automation script I'm writing to drain a kubernetes node for maintenance operations without downtime, so the draining is definitely a prerequisite here)</p> <p>An additional troubleshooting note:</p> <p>This command is being run via Ansible's "shell" module, but when the command is run directly in BASH, it works fine.</p> <p>Further, the ansible is being run via a Jenkins pipeline. Debug statements seem to show:</p> <ul> <li>the command being correctly formed and executed.</li> <li>the context seems correct (so kubeconfig is accessible)</li> <li>pods can be listed (so kubeconfig is active and correct)</li> </ul>
Traiano Welcome
<blockquote> <p>This command is being run via Ansible's "shell" module, but when the command is run directly in BASH, it works fine.</p> <p>Further, the ansible is being run via a Jenkins pipeline.</p> </blockquote> <p>It's good that you added this information because it totally changes the perspective from which we should look at the issue you experience.</p> <p><strong>For debugging purposes instead of running your command, try to run:</strong></p> <pre><code>kubectl auth can-i drain node --all-namespaces </code></pre> <p>both directly in <code>bash shell</code> as well as via <strong>Ansible's</strong> <code>shell module</code></p> <p>It should at least give you an answer if this is not a permission issue.</p> <p>Other commands that you may use to debugging in this case are:</p> <pre><code>ls -l .kube/config cat .kube/config whoami </code></pre> <p>Last one to make sure that <strong>Ansible</strong> uses the same <code>user</code>. If you already know that it uses different <code>user</code>, try to run the script as the same <code>user</code> you use for running it in a <strong>bash shell</strong>.</p> <p>Once you check this, we can continue the debugging process.</p>
mario
<p>I am using <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">kubespray</a> for the deployment of a kubernetes cluster and want to set some API Server parameters for the deployment. In specific I want to configure the authentication via OpenID Connect (e.g set the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server" rel="nofollow noreferrer"><code>oidc-issuer-url</code></a> parameter). I saw that kubespray has some vars to set (<a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md</a>), but not the ones I am looking for.</p> <p>Is there a way to set these parameters via kubespray? I don't want to configure each master manually (e.g by editing the <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> files).</p> <p>Thanks for your help</p>
chresse
<p>At the bottom of the page you are referring to there is a description of how to define custom flags for various components of k8s:</p> <pre><code>kubelet_custom_flags: - "--eviction-hard=memory.available&lt;100Mi" - "--eviction-soft-grace-period=memory.available=30s" - "--eviction-soft=memory.available&lt;300Mi" </code></pre> <p>The possible vars are:</p> <pre><code>apiserver_custom_flags controller_mgr_custom_flags scheduler_custom_flags kubelet_custom_flags kubelet_node_custom_flags </code></pre>
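<p>For the OpenID Connect case from the question, a sketch using that mechanism could look like this (the issuer URL, client id and claim names are placeholders for your identity provider):</p>
<pre><code>apiserver_custom_flags:
  - "--oidc-issuer-url=https://idp.example.com"
  - "--oidc-client-id=kubernetes"
  - "--oidc-username-claim=email"
  - "--oidc-groups-claim=groups"
</code></pre>
<p>Depending on your kubespray version it may also ship dedicated <code>kube_oidc_*</code> variables in the cluster group_vars, so it is worth checking those before falling back to custom flags.</p>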
Vasili Angapov
<p>I am following instructions on <a href="https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/" rel="nofollow noreferrer">site</a> to spin up a multi-node kubernetes cluster using vagrant/ansible. Unfortunately, I get following error:</p> <pre><code>TASK [Configure node ip] ******************************************************* fatal: [k8s-master]: FAILED! =&gt; {"changed": false, "msg": "Destination /etc/default/kubelet does not exist !", "rc": 257} </code></pre> <p>The relevant passage in the Vagrantfile is:</p> <pre><code>- name: Install Kubernetes binaries apt: name: "{{ packages }}" state: present update_cache: yes vars: packages: - kubelet - kubeadm - kubectl - name: Configure node ip lineinfile: path: /etc/default/kubelet line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }} </code></pre> <p>Is it just the wrong path? Which one would it be then ?</p> <p>P.S.: I also get a warning beforehand indicating:</p> <pre><code>[WARNING]: Could not find aptitude. Using apt-get instead </code></pre> <p>Is it not installing the kubelet package and might that be the reason why it doesn't find the file ? How to fix it in that case ?</p>
Paul Rousseau
<p>Updating the node ip in the config file is not required. If you still want to change it for any specific reason, below is the solution.</p> <p>You can change the file to <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> as per this <a href="https://github.com/kubernetes/release/issues/654" rel="noreferrer">change</a>.</p> <p>Before you change it, please check that this file exists on the nodes.</p> <p><code>/etc/default/kubelet</code> is the extra-args environment file used on deb-based systems (on rpm/yum-based systems the equivalent is <code>/etc/sysconfig/kubelet</code>); it may simply not exist yet on a fresh install, which is exactly what the error message is telling you.</p>
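<p>If you would rather keep the original task, a hedged alternative (untested sketch) is to let <code>lineinfile</code> create the missing file, since the kubeadm drop-in sources it as an optional environment file:</p>
<pre><code>- name: Configure node ip
  lineinfile:
    path: /etc/default/kubelet
    line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
    create: yes
</code></pre>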
Prakash Krishna
<p>Currently we have a set of microservice hosted on kubernetes cluster. We are setting hpa values based on rough estimates. I am planning to monitor horizontal pod autoscaling behavior using grafana to ensure we are not over/under allocating the resources like CPU/memory and come up with possible cost optimization recommendation. Need directions on how to achieve this.</p> <p>I am new to Kubernetes world. Need directions on how to achieve this.</p>
Microsoft Tech Enthusiast
<p><strong>tl;dr</strong></p> <ol> <li>Monitor resource consumption of each pod.</li> <li>Monitor pod restarts and number of replicas.</li> <li>Use a load test.</li> </ol> <p><strong>Memory</strong></p> <p>As a starting point, you could monitor the CPU and memory consumption of each pod. For example you can do something like this:</p> <pre><code>sum by (pod) (container_memory_usage_bytes{container=&quot;...&quot;}) / sum by (pod) (kube_pod_container_resource_requests{resource=&quot;memory&quot;, container=&quot;...&quot;}) </code></pre> <p>If you follow the advice given in <a href="https://blog.kubecost.com/blog/requests-and-limits/" rel="nofollow noreferrer">A Practical Guide to Setting Kubernetes Requests and Limits</a>, the limit setting is related to the request setting. With such a query you can analyse whether the requested memory per pod is roughly realistic. Depending on the configuration of the autoscaler, this could be helpful. You could define some grafana alert rule that triggers an alarm if the ratio between used and requested memory exceeds some threshold.</p> <p><strong>Restarts</strong></p> <p>If the pod exceeds a given memory limit, the pod will crash and kubernetes will trigger a restart. With the following metric you can monitor restarts:</p> <pre><code>sum by (pod) (increase(kube_pod_container_status_restarts_total{...}[1h])) </code></pre> <p><strong>CPU</strong></p> <p>CPU usage is also relevant:</p> <pre><code>process_cpu_usage{container=&quot;...&quot;} </code></pre> <p>For additional queries, have a look at <a href="https://stackoverflow.com/q/55143656/11934850">Prometheus queries to get CPU and Memory usage in kubernetes pods</a>.</p> <p><strong>Replicas</strong></p> <p>Now, as you have basic metrics in place, what about the autoscaler itself? You'll be able to count the number of active pods like this:</p> <pre><code>kube_horizontalpodautoscaler_status_current_replicas{} </code></pre> <p>Note that you might need to filter this metric by the label <code>horizontalpodautoscaler</code>. But I recommend that you first run the metric without filters to get information about all running autoscalers.</p> <p>To have better cost control, autoscaling is usually limited to a maximum number of replicas. If you are running at the maximum, you might want to check if the given maximum is too low. With kubectl you can check the status like this:</p> <pre><code>kubectl describe hpa </code></pre> <p>Have a look at the condition <code>ScalingLimited</code>.</p> <p>With grafana:</p> <pre><code>kube_horizontalpodautoscaler_status_condition{condition=&quot;ScalingLimited&quot;} </code></pre> <p>A list of kubernetes metrics can be found at <a href="https://github.com/kubernetes/kube-state-metrics/tree/master/docs#exposed-metrics" rel="nofollow noreferrer">kube-state-metrics</a>. Have a look at <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/horizontalpodautoscaler-metrics.md" rel="nofollow noreferrer">Horizontal Pod Autoscaler Metrics</a> and <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/replicationcontroller-metrics.md" rel="nofollow noreferrer">ReplicationController metrics</a>.</p> <p><strong>Use a load test</strong></p> <p>In the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">HorizontalPodAutoscaler Walkthrough</a> there is a point where you need to increase the load on your application. 
There are several tools that you may use for this, such as Apache Bench or JMeter.</p> <p>In my experience, upscaling is easy to achieve; the tricky part is the downscaling. Therefore, you need to play with increasing <em>and decreasing</em> the load.</p>
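<p>For generating the load you could also use something as simple as the busybox loop from the walkthrough (service name and port are placeholders):</p>
<pre><code>kubectl run -i --tty load-generator --rm --image=busybox --restart=Never \
  -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://my-app:8080; done"
</code></pre>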
Sascha Doerdelmann
<p>For testing and learning purposes I tried to use <code>istio</code> in microk8s. <code>microk8s.enable istio</code> </p> <p>Then</p> <p><code>export MYHOST=$(microk8s.kubectl config view -o jsonpath={.contexts..namespace}).bookinfo.com</code></p> <p><code>microk8s.kubectl apply -l version!=v2,version!=v3 -f https://raw.githubusercontent.com/istio/istio/release-1.5/samples/bookinfo/platform/kube/bookinfo.yaml</code></p> <p><code>microk8s.kubectl get pods</code> shows running bookinfo containers.</p> <p>But when I try to get <code>gateway</code> it shows me nothing.</p> <p><code>microk8s.kubectl get gateway</code> </p> <blockquote> <p>No resources found in default namespace.</p> </blockquote> <p><code>microk8s.kubectl get all --all-namespaces</code> shows <code>pod/istio-engressgateway</code> and its IP address.</p> <p>But I can not access to that IP address, it shows not found.</p> <p>What am I missing here? I just started Kubernetes and microk8s.</p>
Sachith Muhandiram
<p>You also need to apply the bookinfo sample gateway yaml. To do that you must run:</p> <p><code>microk8s.kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.5/samples/bookinfo/networking/bookinfo-gateway.yaml</code></p> <p>That should work.</p>
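<p>After that the gateway resource should show up, and you can work out where to point your browser (the ingress gateway service lives in the <code>istio-system</code> namespace; which port to use depends on its service type, e.g. the HTTP NodePort when no external load balancer is available on microk8s):</p>
<pre><code>microk8s.kubectl get gateway
microk8s.kubectl get svc istio-ingressgateway -n istio-system
</code></pre>
<p>The bookinfo app itself is then served under the <code>/productpage</code> path.</p>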
Himsara Gallege
<p>I would like to copy the kube admin config file from the Kubernetes master host to the nodes using ansible synchronize but that fails due to a missing python interpreter, but I have already installed docker on all machines without any issue. </p> <p>See my task </p> <p><code>- name: admin conf file to nodes environment: ANSIBLE_PYTHON_INTERPRETER: python3 synchronize: src: /home/{{ansible_ssh_user}}/.kube/config dest: /home/{{ansible_ssh_user}} delegate_to: "{{item}}" loop: "{{groups.node}}" </code></p>
Herve Meftah
<p>You can use the synchronize module only when rsync is installed on the source server (the kube master in your case) and on the kube nodes.</p> <h3>Method 1: push from the master (rsync needed on the master)</h3> <p>Synchronize uses <code>push</code> mode by default </p> <pre><code>- hosts: nodes tasks: - name: Transfer file from master to nodes synchronize: src: /etc/kubernetes/admin.conf dest: $HOME/.kube/config delegate_to: "{{ master }}" </code></pre> <h3>Method 2: use the fetch and copy modules</h3> <pre><code> - hosts: all tasks: - name: Fetch the file from the master to ansible run_once: yes fetch: src=/etc/kubernetes/admin.conf dest=temp/ flat=yes when: ansible_hostname == 'master' - name: Copy the file from the ansible to nodes copy: src=temp/admin.conf dest=$HOME/.kube/config when: ansible_hostname != 'master' </code></pre> <p>Hope this helps. </p>
Prakash Krishna
<p>I am using Helm/Stable/Prometheus Server for my Metrics datasource and the Prometheus Server Dashboard is exposed using alb-ingress controller in AWS. Somehow the Prometheus webpage is not loading fully (few parts of the webpage are not getting loaded and throwing 404 errors). Here is the Ingress configuration:</p> <pre><code> ingress: ## If true, Prometheus server Ingress will be created ## enabled: true ## Prometheus server Ingress annotations ## annotations: kubernetes.io/ingress.class: 'alb' #kubernetes.io/tls-acme: 'true' alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=60' alb.ingress.kubernetes.io/certificate-arn: certname alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]' alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' service: annotations: alb.ingress.kubernetes.io/target-type: ip labels: {} path: /* hosts: - prometheus.company.com ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services. extraPaths: - path: /* backend: serviceName: ssl-redirect servicePort: use-annotation </code></pre> <p>When I access prometheus.company.com, its getting properly redirected to prometheus.company.com/graph (assuming the redirect is working fine). However, some parts (*.js &amp; *.css files) of the webpage is throwing 404 errors.</p> <p>How can I resolve this?</p>
devops_dummy
<p>I fixed the issue. Solution:</p>
<pre><code>hosts:
  - prometheus.company.com/*
</code></pre>
devops_dummy
<p>In Kubernetes job, there is a spec for .spec.activeDeadlineSeconds. If you don't explicitly set it, what will be the default value? 600 secs?</p> <p>here is the example from k8s doc</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi-with-timeout spec: backoffLimit: 5 activeDeadlineSeconds: 100 template: spec: containers: - name: pi image: perl command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;] restartPolicy: Never </code></pre> <p>assume I remove the line</p> <pre><code>activeDeadlineSeconds: 100 </code></pre>
xsqian
<p>By default, a Job will run uninterrupted. If you don't set <code>activeDeadlineSeconds</code>, the job will not have an active deadline limit. It means <code>activeDeadlineSeconds</code> doesn't have a default value.</p> <p>By the way, there are several ways to terminate the job. (Of course, when a Job completes, no more Pods are created.)</p> <ul> <li><p>Pod backoff failure policy (<code>.spec.backoffLimit</code>) You can set <code>.spec.backoffLimit</code> to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The back-off count is reset when a Job's Pod is deleted or successful without any other Pods for the Job failing around that time.</p> </li> <li><p>Setting an active deadline (<code>.spec.activeDeadlineSeconds</code>) The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded.</p> </li> </ul> <p>Note that a Job's .spec.activeDeadlineSeconds takes precedence over its .spec.backoffLimit. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds, even if the backoffLimit is not yet reached.</p>
James Wang
<p>I have a Kubernetes cluster on google cloud. I accidentally deleted a namespace which had a few pods running in it. Luckily, the pods are still running, but the namespace is in terminations state. </p> <p>Is there a way to restore it back to active state? If not, what would the fate of my pods running in this namespace be?</p> <p>Thanks </p>
Shadman Anwer
<p>A few interesting articles about <strong>backing up and restoring <code>Kubernetes cluster</code></strong> using various tools:</p> <p><a href="https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d" rel="nofollow noreferrer">https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d</a></p> <p><a href="https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487" rel="nofollow noreferrer">https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487</a></p> <p><a href="https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark</a></p> <p><a href="https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state" rel="nofollow noreferrer">https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state</a></p> <p>I guess they may be useful rather in future than in your current situation. If you don't have any backup, unfortunately there isn't much you can do.</p> <p>Please notice that in all of those articles they use <code>namespace deletion</code> to <strong>simulate disaster scenario</strong> so you can imagine what are the consequences of such operation. However the results may not be seen immediately and you may see your pods running for some time but eventually <strong>namespace deletion</strong> <em>removes all kubernetes cluster resources in a given namespace</em> including <code>LoadBalancers</code> or <code>PersistentVolumes</code>. It may take some time. Some resource may not be deleted because it is still used by another resource (e.g. <code>PersistentVolume</code> by running <code>Pod</code>).</p> <p>You can try and run <a href="https://gist.github.com/negz/c3ee465b48306593f16c523a22015bec" rel="nofollow noreferrer">this</a> script to dump all your resources that are still available to yaml files however some modification may be needed as you will not be able to list objects belonging to deleted namespace anymore. You may need to add <code>--all-namespaces</code> flag to list them.</p> <p>You may also try to dump any resource which is still available manually. If you still can see some resources like <code>Pods</code>, <code>Deployments</code> etc. and you can run on them <code>kubectl get</code> you may try to save their definition to a yaml file:</p> <pre><code>kubectl get deployment nginx-deployment -o yaml &gt; deployment_backup.yaml </code></pre> <p>Once you have your resources backed up you should be able to recreate your cluster more easily.</p>
mario
<p>we have an application deployed on AWS EKS, with these components:</p> <ul> <li>Apache Artemis JMS</li> <li>PostgreSQL</li> <li>Kafka</li> <li>and some application stateless pods made in node.js</li> </ul> <p>Which is the best approach to move the entire application from one nodegroup to another?</p> <p>We were thinking to use the &quot;kubectl drain&quot; command and move the EBS manually to the new node.</p> <p>Is there any better option?</p> <p>The reason behind this request is that we started with 2 xlarge nodes and we want to move to 4 large nodes, also to have the application on all 3 AWS zones, because we are worried that if a node dies, AWS may start the node on a different zone and EBS disks will not be mounted.</p> <p>Thanks for any advise</p>
Mario Stefanutti
<p>I would just add a nodeSelector (or node affinity) to the workloads and then delete the running pods, so they will be rescheduled on the correct nodes; see the sketch below.</p>
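<p>A minimal sketch of the idea, added to the pod template of each Deployment/StatefulSet (the node group label and value are assumptions; check the actual labels on your nodes with <code>kubectl get nodes --show-labels</code>):</p> <pre><code>spec:
  template:
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: large-nodes   # hypothetical managed node group name
</code></pre> <p>After applying it, <code>kubectl rollout restart deployment &lt;name&gt;</code> (or deleting the pods) will get them rescheduled onto the new node group.</p>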
Duncan
<p>I tried to deploy kubernetes using minikube using both from local docker image and from docker hub. But both doesn't work. </p> <p>method-1: Using save and load the tar file, created the image and it is available to kubectl.</p> <pre><code>root@arun-desktop-e470:/var/local/dprojects/elasticsearch# kubectl get pods --all-namespaces -o jsonpath="{..image}" |tr -s '[[:space:]]' '\n' |sort |uniq -c|grep elk 2 elk/elasticsearch:latest </code></pre> <p>Execute below commands to create the deployment:</p> <pre><code>kubectl run elastic --image=elk/elasticsearch:latest --port=9200 kubectl expose deployment elastic --target-port=9200 --type=NodePort minikube service elastic --url </code></pre> <p>From kubectl describe pod command,</p> <pre><code> Warning Failed 122m (x4 over 124m) kubelet, minikube Failed to pull image "elk/elasticsearch:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for elk/elasticsearch, repository does not exist or may require 'docker login' </code></pre> <p>Method-2: I did pushed the image to my docker hub repository, (<a href="https://hub.docker.com/r/get2arun/elk/tags" rel="nofollow noreferrer">https://hub.docker.com/r/get2arun/elk/tags</a>) and then login to docker hub in the terminal and created the deployment again. </p> <p>pushed to docker hub like below and hence I have permission to push and pull the images to my docker hub account. I have checked the "collaborators" under manage repositories and it has my docker hub id.</p> <pre><code>root@arun-desktop-e470:~# docker push get2arun/elk:elasticsearch_v1 The push refers to repository [docker.io/get2arun/elk] 19b7091eba36: Layer already exists 237c06a69e1c: Layer already exists c84fa0f11212: Layer already exists 6ca6c301e2ab: Layer already exists 76dd25653d9b: Layer already exists 602956e7a499: Layer already exists bde76be259f3: Layer already exists 2333287a7524: Layer already exists d108ac3bd6ab: Layer already exists elasticsearch_v1: digest: sha256:6f0b981b5dedfbe3f8e0291dc17fc09d32739ec3e0dab6195190ab0cc3071821 size: 2214 </code></pre> <p>kubectl run elasticsearch-v2 --image=get2arun/elk:elasticsearch_v1 --port=9200</p> <p>From kubectl describe pods command:</p> <pre><code> Normal BackOff 21s kubelet, minikube Back-off pulling image "get2arun/elk:elasticsearch_v1" Warning Failed 21s kubelet, minikube Error: ImagePullBackOff Normal Pulling 7s (x2 over 24s) kubelet, minikube Pulling image "get2arun/elk:elasticsearch_v1" Warning Failed 4s (x2 over 21s) kubelet, minikube Failed to pull image "get2arun/elk:elasticsearch_v1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for get2arun/elk, repository does not exist or may require 'docker login' </code></pre> <p>I removed the proxy settings and tried from open wifi account but still seeing permission denied.</p> <p>This error message is not sufficient to identify the issue and hoping there should be some way to narrow down these kind of issues.</p> <ol> <li>What happens in the background when Kubernetes is asked to use the local docker image or pull the image from docker hub? </li> <li>How to get all the log information when deployment is started ?</li> <li>What are the other sources for logs</li> </ol>
intechops6
<p>In method-1, as the image is not pushed to the repository, you have to use the imagePullPolicy.</p> <h3>Never try to pull the image</h3> <pre><code>imagePullPolicy: Never </code></pre> <h3>Try to pull the image, if it is not present</h3> <pre><code>imagePullPolicy: IfNotPresent </code></pre> <p>I think IfNotPresent is ideal, if you want to use local image / repository. Use as per your requirement.</p> <h3>kubectl</h3> <pre><code>kubectl run elastic --image=elk/elasticsearch:latest --port=9200 --image-pull-policy IfNotPresent </code></pre>
Prakash Krishna
<p><strong>Current flow:</strong></p> <blockquote> <p>incoming request (/sso-kibana) --> Envoy proxy --> /sso-kibana</p> </blockquote> <p><strong>Expected flow:</strong></p> <blockquote> <p>incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper --> keycloak </p> <p>--> If not logged in --> keycloak loging page --> /sso-kibana</p> <p>--> If Already logged in --> /sso-kibana</p> </blockquote> <p>I deployed keycloak-gatekeeper as a k8s cluster which has the following configuration:</p> <p><strong>keycloak-gatekeeper.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: keycloak-gatekeeper name: keycloak-gatekeeper spec: selector: matchLabels: app: keycloak-gatekeeper replicas: 1 template: metadata: labels: app: keycloak-gatekeeper spec: containers: - image: keycloak/keycloak-gatekeeper imagePullPolicy: Always name: keycloak-gatekeeper ports: - containerPort: 3000 args: - "--config=/keycloak-proxy-poc/keycloak-gatekeeper/gatekeeper.yaml" - "--enable-logging=true" - "--enable-json-logging=true" - "--verbose=true" volumeMounts: - mountPath: /keycloak-proxy-poc/keycloak-gatekeeper name: secrets volumes: - name: secrets secret: secretName: gatekeeper </code></pre> <p><strong>gatekeeper.yaml</strong> </p> <pre><code>discovery-url: https://keycloak/auth/realms/MyRealm enable-default-deny: true listen: 0.0.0.0:3000 upstream-url: https://kibana.k8s.cluster:5601 client-id: kibana client-secret: d62e46c3-2a65-4069-b2fc-0ae5884a4952 </code></pre> <p><strong>Envoy.yaml</strong></p> <pre><code>- name: kibana hosts: [{ socket_address: { address: keycloak-gatekeeper, port_value: 3000}}] </code></pre> <p><strong>Problem:</strong> </p> <p>I am able to invoke keycloak login on /Kibana but after login user is not going to /Kibana url i.e. Kibana dashboard is not loading.</p> <p><strong>Note:</strong> Kibana is also running as k8s cluster.</p> <p><strong>References:</strong><br> <a href="https://medium.com/@vcorreaniche/securing-serverless-services-in-kubernetes-with-keycloak-gatekeeper-6d07583e7382" rel="nofollow noreferrer">https://medium.com/@vcorreaniche/securing-serverless-services-in-kubernetes-with-keycloak-gatekeeper-6d07583e7382</a> </p> <p><a href="https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d" rel="nofollow noreferrer">https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d</a></p> <p><strong>Update 1:</strong></p> <p>I'm able to invoke keycloak login on /sso-kibana but after entering credentials its giving 404. 
The flow is following:</p> <p><strong>Step 1.</strong> Clicked on <a href="http://something/sso-kibana" rel="nofollow noreferrer">http://something/sso-kibana</a><br> <strong>Step 2.</strong> Keycloak login page opens at <a href="https://keycloak/auth/realms/THXiRealm/protocol/openid-connect/auth" rel="nofollow noreferrer">https://keycloak/auth/realms/THXiRealm/protocol/openid-connect/auth</a>?...<br> <strong>Step 3.</strong> After entering credentials redirected to this URL <a href="https://something/sso-kibana/oauth/callback?state=890cd02c-f" rel="nofollow noreferrer">https://something/sso-kibana/oauth/callback?state=890cd02c-f</a>...<br> <strong>Step 4.</strong> 404</p> <p><strong>Update 2:</strong></p> <p>404 error was solved after I added a new route in Envoy.yaml</p> <p><strong>Envoy.yaml</strong> </p> <pre><code> - match: { prefix: /sso-kibana/oauth/callback } route: { prefix_rewrite: "/", cluster: kibana.k8s.cluster } </code></pre> <p>Therefore, Expected flow (as shown below) is working fine now.</p> <blockquote> <p>incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper --> keycloak</p> <p>--> If not logged in --> keycloak loging page --> /sso-kibana</p> <p>--> If Already logged in --> /sso-kibana</p> </blockquote>
Aftab
<p>In your config you explicitly enabled <code>enable-default-deny</code> which is explained in the documentation as:</p> <blockquote> <p>enables a default denial on all requests, you have to explicitly say what is permitted (recommended)</p> </blockquote> <p>With that enabled, you will need to specify urls, methods etc. either via <code>resources</code> entries as shown in [1] or an commandline argument [2]. In case of Kibana, you can start with:</p> <pre><code>resources: - uri: /app/* </code></pre> <p>[1] <a href="https://www.keycloak.org/docs/latest/securing_apps/index.html#example-usage-and-configuration" rel="nofollow noreferrer">https://www.keycloak.org/docs/latest/securing_apps/index.html#example-usage-and-configuration</a></p> <p>[2] <a href="https://www.keycloak.org/docs/latest/securing_apps/index.html#http-routing" rel="nofollow noreferrer">https://www.keycloak.org/docs/latest/securing_apps/index.html#http-routing</a></p>
Joe
<p>I'am using the new <a href="https://kit.svelte.dev/docs" rel="nofollow noreferrer">SvelteKit</a> Framework with the <code>node-adapter</code></p> <p>and i have a problem of <code>undefined</code> Environment-Variables when using <code>process.env.APPLICATION_KEY_ID</code> Syntax in an endpoint in production build.</p> <p>When i use: <code>console.log(process.env)</code> i'am getting a list of all variables, including my <code>APPLICATION_KEY_ID</code></p> <pre><code>ALLUSERSPROFILE: 'C:\\ProgramData', APPDATA: 'C:\\Users\\user\\AppData\\Roaming', APPLICATION_KEY_ID: 'test', </code></pre> <p>But when i use <code>console.log(process.env.APPLICATION_KEY_ID)</code></p> <p>i'am getting <code>undefined</code></p> <p>Can someone give me a hint what i'am doing wrong?</p> <p>I'am running the app in kubernetes, this is my Dockerfile for building this image:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code># build the sapper app FROM mhart/alpine-node:14 AS build WORKDIR /app COPY . . RUN npm install RUN npm run build # install dependencies FROM mhart/alpine-node:14 AS deps WORKDIR /app COPY package.json . COPY --from=build /app/package-lock.json package-lock.json RUN npm ci --prod COPY --from=build /app/build build COPY --from=build /app/node_modules node_modules # copy node_modules/ and other build files over FROM mhart/alpine-node:slim-14 WORKDIR /app COPY --from=deps /app . EXPOSE 3000 CMD ["node", "build"] ENV HOST=0.0.0.0</code></pre> </div> </div> </p>
rubiktubik
<p>SvelteKit uses <a href="https://vitejs.dev/" rel="nofollow noreferrer">Vite</a> as its bundler. It is probably best to stick to how this package deals with environment variables, which is to say, all env variables prefixed with <code>VITE_</code> will be available in your code using <code>import.meta.env.VITE_xxx</code></p>
Stephane Vanraes
<p>I'm seeing the following error when running a pod. I matched with the documentation in the Kubernetes webpage and it is the code is same as the one i have written below but Istill end up with the below error.</p> <hr> <p>error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false</p> <hr> <pre><code>apiVersion: v1 kind: pod metadata: name: helloworld-deployment labels: app: helloworld spec: containers: - name: helloworld image: anishanil/kubernetes:node ports: containerPort: 3000 resources: limits: memory: "100Mi" cpu: "100m" </code></pre> <hr> <pre><code>Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6+IKS", GitCommit:"44b769243cf9b3fe09c1105a4a8749e8ff5f4ba8", GitTreeState:"clean", BuildDate:"2019-08-21T12:48:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <hr> <p>Any help is greatly appreciated</p> <p>Thank you</p>
anish anil
<blockquote> <p>I matched with the documentation in the Kubernetes webpage and it is the code is same as the one i have written below...</p> </blockquote> <p>Could you link the fragment of documentation with which you compare your code ? As other people already suggested in their answers and comments, your <code>yaml</code> is not valid. Are you sure you're not using some outdated tutorial or docs ?</p> <p><strong>Let's debug it together step by step:</strong></p> <ol> <li>When I use exactly the same code you posted in your question, the error message I got is quite different than the one you posted:</li> </ol> <blockquote> <p>error: error parsing pod.yml: error converting YAML to JSON: yaml: line 12: did not find expected key</p> </blockquote> <p>OK, so let's go to mentioned line 12 and check where can be the problem:</p> <pre><code> 11 ports: 12 containerPort: 3000 13 resources: 14 limits: 15 memory: "100Mi" 16 cpu: "100m" </code></pre> <p>Line 12 itself looks actually totally ok, so the problem should be elsewhere. Let's debug it further using <a href="http://www.yamllint.com/" rel="nofollow noreferrer">this</a> online yaml validator. It also suggests that this <code>yaml</code> is syntactically not correct however it pointed out different line:</p> <blockquote> <p>(): did not find expected key while parsing a block mapping at line 9 column 5</p> </blockquote> <p>If you look carefully at the above quoted fragment of code, you may notice that the indentation level in line 13 looks quite strange. When you remove one unnecessary space right before <code>resources</code> ( it should be on the same level as ports ) <strong><a href="http://www.yamllint.com/" rel="nofollow noreferrer">yaml validador</a></strong> will tell you that your <code>yaml</code> syntax is correct. Although it may already be a valid <code>yaml</code> it does not mean that it is a valid input for <strong>Kubernetes</strong> which requires specific structure following certain rules.</p> <ol start="2"> <li>Let's try it again... Now <code>kubectl apply -f pod.yml</code> returns quite different error:</li> </ol> <blockquote> <p>Error from server (BadRequest): error when creating "pod.yml": pod in version "v1" cannot be handled as a Pod: no kind "pod" is registered for version "v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:29"</p> </blockquote> <p>Quick search will give you an answer to that as well. Proper value of <code>kind:</code> key is <code>Pod</code> but not <code>pod</code>.</p> <ol start="3"> <li>Once we fixed that, let's run <code>kubectl apply -f pod.yml</code> again. Now it gives us back different error:</li> </ol> <blockquote> <p>error: error validating "pod.yml": error validating data: ValidationError(Pod.spec.containers[0].ports): invalid type for io.k8s.api.core.v1.Container.ports: got "map", expected "array";</p> </blockquote> <p>which is pretty self-explanatory and means that you are not supposed to use <strong>"map"</strong> in a place where an <strong>"array"</strong> was expected and the error message precisely pointed out where, namely: </p> <p><code>Pod.spec.containers[0].ports</code>. 
</p> <p>Let's correct this fragment:</p> <pre><code>11 ports: 12 containerPort: 3000 </code></pre> <p>In yaml formatting the <code>-</code> character implies the start of an array so it should look like this:</p> <pre><code>11 ports: 12 - containerPort: 3000 </code></pre> <ol start="4"> <li>If we run <code>kubectl apply -f pod.yml</code> again, we finally got the expected message:</li> </ol> <blockquote> <p>pod/helloworld-deployment created</p> </blockquote> <p>The final, correct version of the <code>Pod</code> definition looks as follows:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: helloworld-deployment labels: app: helloworld spec: containers: - name: helloworld image: anishanil/kubernetes:node ports: - containerPort: 3000 resources: limits: memory: "100Mi" cpu: "100m" </code></pre>
mario
<p>I have a web service running on a port on my local network exposed at port 6003. I also have a Kubernetes Cluster running on a different machine on the same network that uses and Nginx Ingress to proxy to all the services in the cluster. How can I set up an ingress to proxy to the machine? I had a set up that worked. But now, I am either getting DNS errors on the nginx pod or the response times out in the browser and nothing happens. </p> <p>Here is the manifest I have been using.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: myservice-service spec: type: ExternalName externalName: 192.xxx.xx.x ports: - name: myservice port: 80 protocol: TCP targetPort: 6003 --- apiVersion: v1 kind: Endpoints metadata: name: myservice-ip subsets: - addresses: # list all external ips for this service - ip: 192.xxx.xx.x ports: - name: myservice port: 6003 protocol: TCP --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: service.example.com annotations: nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" spec: rules: - host: service.example.com http: paths: - backend: serviceName: myservice-service servicePort: 80 path: / tls: - secretName: secret-prod-tls hosts: - service.example.com </code></pre> <p><strong>Edit for more information:</strong> This manifest does work. What I realized is that you <strong>must</strong> specify https even though the ingress has a tls block. This still is showing Lua DNS errors in the Nginx-ingress pod though.</p>
James Teague II
<p>You don't need ExternalName here. Usual headless service will do the job:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: external-ip spec: ports: - name: http port: 80 clusterIP: None type: ClusterIP --- apiVersion: v1 kind: Endpoints metadata: name: external-ip subsets: - addresses: - ip: 172.17.0.5 ports: - name: http port: 80 </code></pre>
Vasili Angapov
<p>We are using <code>jetstack/cert-manager</code> to automate certificate management in a k8s environment.</p> <p>Applying a Certificate with <code>kubectl apply -f cert.yaml</code> works just fine:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: test-cert spec: secretName: test-secret issuerRef: name: letsencrypt kind: Issuer dnsNames: - development.my-domain.com - production.my-domain.com </code></pre> <p>However, it fails when installing a Helm template:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: {{.Values.cert}} spec: secretName: {{.Values.secret}} issuerRef: name: letsencrypt kind: Issuer dnsNames: [{{.Values.dnsNames}}] </code></pre> <pre><code>E0129 09:57:51.911270 1 sync.go:264] cert-manager/controller/orders &quot;msg&quot;=&quot;failed to create Order resource due to bad request, marking Order as failed&quot; &quot;error&quot;=&quot;400 urn:ietf:params:acme:error:rejectedIdentifier: NewOrder request did not include a SAN short enough to fit in CN&quot; &quot;resource_kind&quot;=&quot;Order&quot; &quot;resource_name&quot;=&quot;test-cert-45hgz-605454840&quot; &quot;resource_namespace&quot;=&quot;default&quot; &quot;resource_version&quot;=&quot;v1&quot; </code></pre>
Vasyl Herman
<p>Try to inspect your Certificate object with <code>kubectl -n default describe certificate test-cert</code> and post the output here if you don't find any issues with it.</p> <p>Your Certificate object should look like the following:</p> <pre class="lang-yaml prettyprint-override"><code>Name: test-cert Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; API Version: cert-manager.io/v1 Kind: Certificate Metadata: Creation Timestamp: 2022-01-28T12:25:40Z Generation: 4 Managed Fields: API Version: cert-manager.io/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: .: f:kubectl.kubernetes.io/last-applied-configuration: f:spec: .: f:dnsNames: f:issuerRef: .: f:kind: f:name: f:secretName: Manager: kubectl-client-side-apply Operation: Update Time: 2022-01-28T12:25:40Z API Version: cert-manager.io/v1 Fields Type: FieldsV1 fieldsV1: f:status: .: f:conditions: f:lastFailureTime: f:notAfter: f:notBefore: f:renewalTime: f:revision: Manager: controller Operation: Update Subresource: status Time: 2022-01-29T09:57:51Z Resource Version: 344677 Self Link: /apis/cert-manager.io/v1/namespaces/istio-ingress/certificates/test-cert-2 UID: 0015cc16-06c3-4e33-bb99-0f336cf7b788 Spec: Dns Names: development.my-domain.com production.my-domain.com Issuer Ref: Kind: Issuer Name: letsencrypt Secret Name: test-secret </code></pre> <p>Pay close attention to the Spec.Dns Names values. Sometimes Helm's template engine renders them as a single string instead of a list due to misconfiguration.</p> <p>Also, it's good practice to inspect Helm charts with <code>helm template mychart</code> before installing.</p>
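<p>As a rough sketch (assuming <code>dnsNames</code> is defined as a YAML list in <code>values.yaml</code>, not a single comma-separated string), the values and template fragment could look like this:</p> <pre><code># values.yaml
dnsNames:
  - development.my-domain.com
  - production.my-domain.com

# templates/certificate.yaml (fragment)
  dnsNames:
{{ toYaml .Values.dnsNames | indent 4 }}
</code></pre> <p>Rendering it with <code>helm template mychart</code> should then show <code>dnsNames</code> as a proper YAML list, which is what cert-manager expects.</p>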
Vasyl Herman
<pre><code>gcloud compute ssl-certificates create flytime-google-cert --domains=flytime.io,api.flytime.io,socket.flytime.io --global </code></pre> <p>Above commands give the error of</p> <pre><code>ERROR: (gcloud.compute.ssl-certificates.create) Could not fetch resource: - Invalid value for field 'resource.managed.domains[0]': 'flytime.io api.flytime.io socket.flytime.io'. Invalid domain name specified. </code></pre> <p>What's the wrong with that?</p>
william007
<p>It will work if you put it inside the quote:</p> <pre><code>gcloud compute ssl-certificates create flytime-google-cert --domains=&quot;flytime.io,api.flytime.io,socket.flytime.io&quot; --global </code></pre>
user3279733
<p>I deployed a grpc service on eks and expose the service using ingress. I deployed an demo https app, it worked. However, I have a problem with the grpc app. The service is running but when I log the service I get an error.<br/> The grpc request does not even go to the server. The log is as following</p> <blockquote> <p>level=info msg="grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\"GET / HTTP/1.1\\r\\nHost: 19\\"\"" system=system</p> </blockquote> <p>It seems it should receive http2 but it just has <code>HTTP/1.1</code>??</p> <p>For ingress I tried </p> <pre><code> alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]' alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxxx alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true' </code></pre> <p>for service.yml</p> <pre><code> annotations: service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2", "http": "HTTP2"}' </code></pre> <p>For the service deployed it seems fine. Once I have ingress deployed it keeps have the error above.</p>
Stella
<p>I used the Istio service mesh to solve this problem. An Istio virtual service can route HTTP/1.1, HTTP/2 and gRPC traffic. By setting the Kubernetes service port name to <code>grpc</code>, or prefixing it with <code>grpc-</code>, Istio will configure the service with the HTTP/2 protocol; see the sketch below.</p>
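<p>A minimal sketch of the port naming convention Istio relies on (the service name, app label and port number are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service        # placeholder
spec:
  selector:
    app: my-grpc-app           # placeholder
  ports:
  - name: grpc                 # the "grpc" name/prefix tells Istio to treat this port as HTTP/2 + gRPC
    port: 9090
    targetPort: 9090
</code></pre>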
Stella
<p>I have been trying to run kafka/zookeeper on Kubernetes. Using helm charts I am able to install zookeeper on the cluster. However the ZK pods are stuck in pending state. When I issued describe on one of the pod "<code>didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.</code>" was the reason for scheduling failure. But when I issue describe on PVC , I am getting "<code>waiting for first consumer to be created before binding</code>". I tried to re-spawn the whole cluster but the result is same. Trying to use <a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/</a> as guide. </p> <p>Can someone please guide me here ? </p> <p><strong>kubectl get pods -n zoo-keeper</strong></p> <pre><code>kubectl get pods -n zoo-keeper NAME READY STATUS RESTARTS AGE zoo-keeper-zk-0 0/1 Pending 0 20m zoo-keeper-zk-1 0/1 Pending 0 20m zoo-keeper-zk-2 0/1 Pending 0 20m </code></pre> <p><strong>kubectl get sc</strong></p> <pre><code>kubectl get sc NAME PROVISIONER AGE local-storage kubernetes.io/no-provisioner 25m </code></pre> <p><strong>kubectl describe sc</strong></p> <pre><code>kubectl describe sc Name: local-storage IsDefaultClass: No Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"} Provisioner: kubernetes.io/no-provisioner Parameters: &lt;none&gt; AllowVolumeExpansion: &lt;unset&gt; MountOptions: &lt;none&gt; ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: &lt;none&gt; </code></pre> <p><strong>kubectl describe pod foob-zookeeper-0 -n zoo-keeper</strong></p> <pre><code>ubuntu@kmaster:~$ kubectl describe pod foob-zookeeper-0 -n zoo-keeper Name: foob-zookeeper-0 Namespace: zoo-keeper Priority: 0 PriorityClassName: &lt;none&gt; Node: &lt;none&gt; Labels: app=foob-zookeeper app.kubernetes.io/instance=data-coord app.kubernetes.io/managed-by=Tiller app.kubernetes.io/name=foob-zookeeper app.kubernetes.io/version=foob-zookeeper-9.1.0-15 controller-revision-hash=foob-zookeeper-5321f8ff5 release=data-coord statefulset.kubernetes.io/pod-name=foob-zookeeper-0 Annotations: foobar.com/product-name: zoo-keeper ZK foobar.com/product-revision: ABC Status: Pending IP: Controlled By: StatefulSet/foob-zookeeper Containers: foob-zookeeper: Image: repo.data.foobar.se/latest/zookeeper-3.4.10:1.6.0-15 Ports: 2181/TCP, 2888/TCP, 3888/TCP, 10007/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP Limits: cpu: 2 memory: 4Gi Requests: cpu: 1 memory: 2Gi Liveness: exec [zkOk.sh] delay=15s timeout=5s period=10s #success=1 #failure=3 Readiness: tcp-socket :2181 delay=15s timeout=5s period=10s #success=1 #failure=3 Environment: ZK_REPLICAS: 3 ZK_HEAP_SIZE: 1G ZK_TICK_TIME: 2000 ZK_INIT_LIMIT: 10 ZK_SYNC_LIMIT: 5 ZK_MAX_CLIENT_CNXNS: 60 ZK_SNAP_RETAIN_COUNT: 3 ZK_PURGE_INTERVAL: 1 ZK_LOG_LEVEL: INFO ZK_CLIENT_PORT: 2181 ZK_SERVER_PORT: 2888 ZK_ELECTION_PORT: 3888 JMXPORT: 10007 Mounts: /var/lib/zookeeper from datadir (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-nfcfx (ro) Conditions: Type Status PodScheduled False Volumes: datadir: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: datadir-foob-zookeeper-0 ReadOnly: false default-token-nfcfx: Type: Secret (a volume populated by 
a Secret) SecretName: default-token-nfcfx Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 69s (x4 over 3m50s) default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate. </code></pre> <p><strong>kubectl get pv</strong></p> <pre><code>ubuntu@kmaster:~$ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv 50Gi RWO Retain Available local-storage 10m ubuntu@kmaster:~$ </code></pre> <p><strong>kubectl get pvc local-claim</strong></p> <pre><code>ubuntu@kmaster:~$ kubectl get pvc local-claim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE local-claim Pending local-storage 8m9s ubuntu@kmaster:~$ </code></pre> <p><strong>kubectl describe pvc local-claim</strong> </p> <pre><code>ubuntu@kmaster:~$ kubectl describe pvc local-claim Name: local-claim Namespace: default StorageClass: local-storage Status: Pending Volume: Labels: &lt;none&gt; Annotations: &lt;none&gt; Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal WaitForFirstConsumer 2m3s (x26 over 7m51s) persistentvolume-controller waiting for first consumer to be created before binding Mounted By: &lt;none&gt; </code></pre> <p><strong>MY PV files:</strong></p> <p><strong>cat create-pv.yml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: local-pv spec: capacity: storage: 50Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage local: path: /mnt/kafka-mount nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kmaster </code></pre> <p><strong>cat pvc.yml</strong></p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-claim spec: accessModes: - ReadWriteOnce storageClassName: local-storage resources: requests: storage: 50Gi </code></pre>
monk
<p>It looks like you created your PV on the master node. By default the master node is marked unschedulable for ordinary pods using a so-called taint. To be able to run some service on the master node you have two options:</p> <p>1) Add a toleration to the service to allow it to run on the master node:</p> <pre><code>tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master </code></pre> <p>You may even specify that the service runs only on the master node:</p> <pre><code>nodeSelector: node-role.kubernetes.io/master: "" </code></pre> <p>2) You can remove the taint from the master node, so any pod can run on it. You should know that this is dangerous because it can make your cluster very unstable.</p> <pre><code>kubectl taint nodes --all node-role.kubernetes.io/master- </code></pre> <p>Read more about taints and tolerations here: <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</a></p>
Vasili Angapov
<p>Our docker images are pushed to nexus which is hosted on another gcp machine and our k8s cluster is GKE.</p> <p>is there a way wherein we can configure nexus as a private registry for gke cluster?</p>
Sugatur Deekshith S N
<p>Other users have tried to do this and there's a known issue:</p> <pre><code>Allow user to use 3rd party repository for GKE. Right now we got a x509 error saying the certificate is signed by unknown authority, meaning Nexus cannot trust a self signed certificate from GKE. To overcome that, we need to import the root ca certificate to GKE so both party has the same root of trust, but we are stuck because we can't import root ca to GKE master. </code></pre> <p>All the reproduction steps are documented in this <a href="https://issuetracker.google.com/195774978" rel="nofollow noreferrer">public thread</a>.</p> <p>NOTE: This public thread is currently being handled as a Feature Request, which means that Google Cloud is working on this to include it in future GKE versions.</p> <p>However, I have good news: there's a <a href="https://medium.com/google-cloud/gke-and-private-registries-with-self-signed-certificates-b37b5fd1f982" rel="nofollow noreferrer">public guide</a> that seems to provide a workaround to fix the known issue.</p> <p>I hope this information helps you to get your Nexus working as a private container registry.</p>
Gabo Licea
<p>I have copied rancher config file to local kube config, and once I tried to connect, get an error</p> <pre><code>Unable to connect to the server: x509: certificate signed by unknown authority </code></pre> <p>I'm not admin of this cluster, and can't really change settings. So I googled that I can add</p> <pre><code>insecure-skip-tls-verify: true </code></pre> <p>And removed certificates, leaving only username and token, and it starts to work.</p> <p>Can you explain me, is it safe to use it like so, and why do we need certs at all if it could work without it as well?</p>
ogbofjnr
<p>You may treat certificates as an additional layer of security. If you allow someone (in this case yourself) to connect to the cluster and manage it without needing a proper certificate, keep in mind that you allow it for everyone else as well.</p> <pre><code>insecure-skip-tls-verify: true </code></pre> <p>is pretty self-explanatory - yes, it's insecure as it skips TLS verification and it is not recommended in production. As you can read in the <a href="https://kubernetes.io/docs/reference/kubectl/kubectl/" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure</p> </blockquote> <p>The username and token provide some level of security as they are still required to be able to connect, but they have nothing to do with establishing a secure, trusted connection. By default that can only be done by clients who also have a proper certificate. If you don't want to skip TLS verification, you may want to try <a href="https://imti.co/kubectl-remote-x509-valid/" rel="nofollow noreferrer">this</a> solution. Only for kubernetes >= 1.15 use the command <code>kubeadm alpha certs renew all</code>.</p> <p>You can read more about managing TLS certificates in a Kubernetes cluster <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">here</a>.</p>
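<p>For reference, a trimmed kubeconfig sketch showing where the proper cluster CA would go instead of skipping verification (the server URL here is a placeholder):</p> <pre><code>clusters:
- name: my-cluster
  cluster:
    server: https://rancher.example.com/k8s/clusters/c-xxxxx      # placeholder URL
    certificate-authority-data: &lt;base64-encoded CA certificate&gt;  # use this instead of insecure-skip-tls-verify: true
</code></pre>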
mario
<p>I want to set up the Elastic Stack (Elasticsearch, Logstash, Beats and Kibana) for monitoring my Kubernetes cluster which is running on on-prem bare metal. I need some recommendations on the following 2 approaches, i.e. which one would be more robust, fault-tolerant and production grade. Let's say I have a K8s cluster named K8-abc.</p> <p>Approach 1 - Would it be good to set up the Elastic Stack outside the Kubernetes cluster? </p> <p>In this approach, all the logs from pods running in the kube-system namespace and user-defined namespaces would be fetched by Beats (running on K8-abc) and put into the ES cluster which is configured on Linux bare metal via Logstash (which is also running on VMs). And for fetching the Kubernetes node logs, the Beats running on the respective VMs (which are participating in forming the K8-abc) would fetch the logs and put them into the ES cluster which is configured on VMs. The thing to note here is that the VMs used for forming the ES cluster are not part of the K8-abc. </p> <p>Approach 2 - Would it be good to set up the Elastic Stack on the Kubernetes cluster K8-abc itself?</p> <p>In this approach, all the logs from pods running in the kube-system namespace and user-defined namespaces would be sent to the Elasticsearch cluster configured on the K8-abc via Logstash and Beats (both running on K8-abc). For fetching the K8-abc node logs, the Beats running on VMs (which are participating in forming the K8-abc) would put the logs into ES running on K8-abc via Logstash which is running on K8-abc.</p> <p>Can someone help me in evaluating the pros and cons of the aforementioned two approaches? It will be helpful even if relevant links to blogs and case studies are provided.</p>
Nitesh Ratnaparkhe
<p>I would be more inclined to the <strong>second solution</strong>. It has many advantages over the first one, although it may seem more complex when it comes to the initial setup. You can actually ask a similar question when it comes to migrating any other type of workload to <strong>Kubernetes</strong>. It has many advantages over VMs. To name just a few:</p> <ol> <li><strong><a href="https://www.stratoscale.com/blog/kubernetes/auto-healing-containers-kubernetes/" rel="nofollow noreferrer">self-healing cluster</a></strong>, </li> <li><strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">service discovery</a></strong> and integrated <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer"><strong>load balancing</strong></a>,</li> <li>Such a solution is much easier to scale (<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer"><strong>HPA</strong></a>) in comparison with VMs,</li> <li>Storage orchestration. <strong>Kubernetes</strong> allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">many more</a>, including the <strong><a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Volume Provisioning</a></strong> mechanism.</li> </ol> <p>All the above points could easily be applied to any other workload and may be seen as <strong>Kubernetes</strong> advantages in general, so let's look at why to use it for implementing the <strong>Elastic Stack</strong>:</p> <ol start="5"> <li>It looks like <strong>Elastic</strong> is actively promoting the use of <strong>Kubernetes</strong> on their <a href="https://www.elastic.co/elasticsearch-kubernetes" rel="nofollow noreferrer">website</a>. See also <a href="https://www.elastic.co/blog/alpha-helm-charts-for-elasticsearch-kibana-and-cncf-membership" rel="nofollow noreferrer">this</a> article.</li> <li>They also provide an official <strong><a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch" rel="nofollow noreferrer">elasticsearch helm chart</a></strong>, so it is already quite well supported by <strong>Elastic</strong>.</li> </ol> <p>There are probably many other reasons in favour of the <strong>Kubernetes</strong> solution that I didn't mention here. <a href="https://medium.com/faun/https-medium-com-thakur-vaibhav23-ha-es-k8s-7e655c1b7b61" rel="nofollow noreferrer">Here</a> you can find a hands-on article about setting up <strong>Highly Available and Scalable Elasticsearch on Kubernetes</strong>.</p>
mario
<p>I am trying to setup an IPv6 kubernetes cluster. I have two IPv6 interfaces and one docker interface (172.17.0.1). The docker interface is setup by docker itself.</p> <pre><code>kahou@kahou-master:~$ ip a 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens192: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:50:56:af:1d:25 brd ff:ff:ff:ff:ff:ff inet6 2001:420:293:242d:250:56ff:feaf:1d25/64 scope global dynamic mngtmpaddr noprefixroute valid_lft 2591949sec preferred_lft 604749sec inet6 fe80::250:56ff:feaf:1d25/64 scope link valid_lft forever preferred_lft forever 3: ens224: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:50:56:af:a5:15 brd ff:ff:ff:ff:ff:ff inet6 2000::250:56ff:feaf:a515/64 scope global dynamic mngtmpaddr noprefixroute valid_lft 2591933sec preferred_lft 604733sec inet6 2000::3/64 scope global valid_lft forever preferred_lft forever inet6 fe80::250:56ff:feaf:a515/64 scope link valid_lft forever preferred_lft forever 4: docker0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:53:f2:46:8c brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever 5: tunl0@NONE: &lt;NOARP,UP,LOWER_UP&gt; mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000 link/ipip 0.0.0.0 brd 0.0.0.0 </code></pre> <p>When I initialize my cluster thru kubeadm, all the hostnetwork pods IP are using the docker IP addresses:</p> <pre><code>etcd-kahou-master 1/1 Running 0 177m 172.17.0.1 kahou-master &lt;none&gt; kube-apiserver-kahou-master 1/1 Running 0 177m 172.17.0.1 kahou-master &lt;none&gt; kube-controller-manager-kahou-master 1/1 Running 0 177m 172.17.0.1 kahou-master &lt;none&gt; kube-proxy-pnq7g 1/1 Running 0 178m 172.17.0.1 kahou-master &lt;none&gt; kube-scheduler-kahou-master 1/1 Running 0 177m 172.17.0.1 kahou-master &lt;none&gt; </code></pre> <p>Is it possible to tell kubeadm which interface I use during the installation?</p> <p>Below is my api-server call (generated by kubeadm)</p> <pre><code>kube-apiserver --authorization-mode=Node,RBAC --bind-address=2001:420:293:242d:250:56ff:feaf:1d25 --service-cluster-ip-range=fd03::/112 --advertise-address=2001:420:293:242d:250:56ff:feaf:1d25 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key </code></pre> <p>This is my kubeadm config file:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha2 kind: MasterConfiguration api: advertiseAddress: 2001:420:293:242d:250:56ff:feaf:1d25 apiServerExtraArgs: bind-address: 2001:420:293:242d:250:56ff:feaf:1d25 service-cluster-ip-range: fd03::/112 controllerManagerExtraArgs: node-cidr-mask-size: "96" cluster-cidr: fd02::/80 service-cluster-ip-range: fd03::/112 networking: serviceSubnet: fd03::/112 nodeRegistration: node-ip: 2001:420:293:242d:250:56ff:feaf:1d25 </code></pre>
Kintarō
<p>A helpful note for configuring <code>node-ip</code> to be passed on to kubelet via kubeadm config file: according to <a href="https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1" rel="nofollow noreferrer">https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1</a> and some experimentation, it should be under <code>kubeletExtraArgs</code> of the <code>nodeRegistration</code> section (example using IP from your config file): </p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration nodeRegistration: kubeletExtraArgs: node-ip: 2001:420:293:242d:250:56ff:feaf:1d25 </code></pre>
IG_
<p>I have a GKE cluster set up with <a href="https://cloud.google.com/nat/docs/overview" rel="nofollow noreferrer">Cloud NAT</a>, so traffic from any node/container going outward would have the same external IP. (I needed this for whitelisting purposes while working with 3rd-party services).</p> <p>Now, if I want to deploy a proxy server onto this cluster that does basic traffic forwarding, how do I expose the proxy server "endpoint"? Or more generically, how do I expose a service if I deploy it to this GKE cluster? </p>
xiaolong
<p><strong>Proxy server running behind NAT ?</strong></p> <p>Bad idea, unless it is only for your <strong>kubernetes cluster</strong> workload, but you didn't specify anywhere that it should be reachable only by other <code>Pods</code> running in the same cluster.</p> <p>As you can read <a href="https://cloud.google.com/nat/docs/overview" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Cloud NAT does not implement unsolicited inbound connections from the internet. DNAT is only performed for packets that arrive as responses to outbound packets.</p> </blockquote> <p>So it is not meant to be reachable from outside.</p> <p>If you want to expose any application within your cluster, making it available for other <code>Pods</code>, use simple <code>ClusterIP</code> <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">Service</a> which is the default type and it will be created as such even if you don't specify its type at all.</p>
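<p>A minimal sketch of such a ClusterIP Service for the proxy (the names, labels and port are placeholders; adjust them to your proxy Deployment):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: forward-proxy            # placeholder
spec:
  selector:
    app: forward-proxy           # must match the labels on the proxy Pods
  ports:
  - port: 3128                   # assumed proxy listening port
    targetPort: 3128
</code></pre> <p>Other Pods in the cluster could then reach it at <code>forward-proxy.&lt;namespace&gt;.svc.cluster.local:3128</code>, and its outbound traffic would still leave through Cloud NAT with the shared external IP.</p>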
mario
<p>I create one pod with <code>label:appenv</code> and one service of type node port with a selector as <code>appenv</code>. But when I use <code>kubectl get ep service-name</code> it's showing "no endpoints"(means service is not connecting with that pod). <br>Here are my <code>pod.yaml</code> and <code>service.yaml</code><br> <br><code>pod.yaml</code></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: pod1 labels: app: appenv spec: containers: - name: app image: aathith/lbt:v1 ports: - name: app-port containerPort: 8082 restartPolicy: Never </code></pre> <p><br><code>service.yaml</code></p> <pre><code>apiVersion: v1 kind: Service metadata: name: app spec: selector: app: appenv type: NodePort ports: - name: http port: 8082 targetPort: 8082 nodePort: 30082 protocol: TCP </code></pre> <p>output for <code>kubectl get po --show-labels</code> <a href="https://i.stack.imgur.com/Hk04j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hk04j.png" alt="enter image description here"></a></p> <p>output for <code>kubectl get svc</code><a href="https://i.stack.imgur.com/i6EKR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i6EKR.png" alt="enter image description here"></a></p> <p>output for <code>kubectl get svc app</code></p> <p><a href="https://i.stack.imgur.com/edQDL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/edQDL.png" alt="enter image description here"></a></p> <ol> <li>Why am I not able to connect my pod to this service? </li> <li>How can I change the above files to connect with each other?</li> </ol>
AATHITH RAJENDRAN
<p>Your pod is in "Completed" state - that is the problem. It is not in the "Running" state. Why? Because the command in the container finished with exit code 0. In a normal situation the container's command should not exit unless it's a Job or CronJob. You see what I mean?</p>
Vasili Angapov
<p>How do I list batch Jobs by label selector? I want to list the Jobs with a certain label like <code>type: upgrade</code> or something else. I'm looking for the label selector field to use while querying Jobs from client-go.</p>
Abhishek Kumar
<p>I was making the mistake of trying to use the <code>.Get()</code> method to find Jobs by labelSelector, and thus working hard in the wrong direction.</p> <p>Here is how you can list all Jobs with a label selector:<br /> to get Jobs by label selector, we have to use the <code>.List()</code> method.</p> <pre class="lang-golang prettyprint-override"><code>label := &quot;type=upgrade,name=jiva-upgrade&quot;; jobs, err := k.K8sCS.BatchV1().Jobs(namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: label}) </code></pre>
Abhishek Kumar
<p>I have a minikube cluster with a running WordPress in one deployment, and MySQL in another. Both of the deployments have corresponding services. The definition for WordPress service looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: wordpress spec: selector: app: wordpress ports: - port: 80 type: LoadBalancer </code></pre> <p>The service works fine, and <code>minikube service</code> gives me a nice path, with an address of <code>minikube ip</code> and a random high port. The problem is <strong>WordPress needs a full URL in the name of the site</strong>. I'd rather not change it every single time and have local DNS name for the cluster.</p> <p>Is there a way to expose the LoadBalancer on an arbitrary port in minikube? I'll be fine with any port, as long as it's port is decided by me, and not minikube itself?</p>
mhaligowski
<p>Keep in mind that <strong>Minikube</strong> is unable to provide a real <strong>load balancer</strong> like cloud providers do, and it merely simulates one by using a simple <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>nodePort</code></a> <code>Service</code> instead.</p> <p>You can have full control over the port that is used. First of all you can specify it manually in the <code>nodePort</code> <code>Service</code> specification (remember it should be within the default range: 30000-32767):</p> <blockquote> <p>If you want a specific port number, you can specify a value in the <code>nodePort</code> field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that’s inside the range configured for NodePort use.</p> </blockquote> <p>Your example may look as follows:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: wordpress spec: selector: app: wordpress ports: - port: 80 targetPort: 80 nodePort: 30000 type: NodePort </code></pre> <p>You can also change this default range by providing your custom value after the <code>--service-node-port-range</code> flag when starting your <code>kube-apiserver</code>.</p> <p>When you use a <strong>kubernetes cluster</strong> set up by the <strong>kubeadm</strong> tool (<strong>Minikube</strong> also uses it as a default bootstrapper), you need to edit the <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> file and provide the required flag with your custom port range.</p>
mario
<p>I'm trying to figure out why a GKE "Workload" CPU usage is not equivalent to the sum of cpu usage of its pods.</p> <p>Following image shows a Workload CPU usage.</p> <p><a href="https://i.stack.imgur.com/b45rT.png" rel="nofollow noreferrer">Service Workload CPU Usage</a></p> <p>Following images show pods CPU usage for the above Workload.</p> <p><a href="https://i.stack.imgur.com/pPptu.png" rel="nofollow noreferrer">Pod #1 CPU Usage</a></p> <p><a href="https://i.stack.imgur.com/UBbEG.png" rel="nofollow noreferrer">Pod #2 CPU Usage</a></p> <p>For example, at 9:45, the Workload cpu usage was around 3.7 cores, but at the same time Pod#1 CPU usage was around 0.9 cores and Pod#2 CPU usage was around 0.9 cores too. It means, the service Workload CPU Usage should have been around 1.8 cores, but it wasn't.</p> <p>Does anyone have an idea of this behavior?</p> <p>Thanks.</p>
Danny Cadavid
<p>Danny,</p> <p>The CPU chart on the Workloads page is an aggregate of CPU usage for managed pods. The values are taken from the Stackdriver Monitoring metric container/cpu/usage_time, check this <a href="https://cloud.google.com/monitoring/api/metrics_gcp#gcp-container" rel="nofollow noreferrer">link</a>. That metric represents "Cumulative CPU usage on all cores in seconds. This number divided by the elapsed time represents usage as a number of cores, regardless of any core limit that might be set."</p> <p>Please let me know if you have further questions in regard to this.</p>
Gabo Licea
<p>I have to include the client script files as a ConfigMap and mount them into a pod. How do I create a ConfigMap for the below structure in values.yaml?</p> <pre><code>app: server: client-cli1.sh: | #!/bin/bash echo "Hello World" client-cli2.sh: | #!/bin/bash echo "Hello World" </code></pre> <p>This is the ConfigMap template:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: cli-config data: {{ range $key, $val:= .Values.app.server }} {{ $key }}: | {{ $val }} {{ end }} </code></pre> <p>I am getting the error "error converting YAML to JSON: yaml: line 14: could not find expected ':'". Note: I can't change the structure and can't use the Files function because the build happens somewhere else and only values.yaml will be provided.</p> <p>How do I parse this?</p>
user1184777
<p>Try this:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: cli-config data: {{ toYaml .Values.app.server | indent 2 }} </code></pre>
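<p>As a usage note, an equivalent and slightly tidier form uses <code>nindent</code>, which starts the block on a new line and avoids stray whitespace:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: cli-config
data:
  {{- toYaml .Values.app.server | nindent 2 }}
</code></pre> <p>Either way, <code>helm template</code> should render each script under <code>data</code> as a multi-line block, which you can then mount into the pod as files.</p>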
Vasili Angapov
<p>In this blog here: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details</a></p> <p>There is a blurb:</p> <blockquote> <p>For object metrics and external metrics, a single metric is fetched, which describes the object in question. This metric is compared to the target value, to produce a ratio as above. In the autoscaling/v2beta2 API version, this value can optionally be divided by the number of pods before the comparison is made.</p> </blockquote> <p>I need to do exactly this; divide my current metric by the current number of pods.</p> <p>Where can I find the specification for this API? I have googled frantically to see what the autoscaling yaml specification is to do this but I cannot find it. IE I need to write the autoscaler resource as part of our helm chart. </p>
Tommy
<p>The specification for k8s API can be found here: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/</a></p> <p>The above is for k8s version 1.18, you'll have to switch to the right version for you.</p> <p>The spec for HPA v2beta2 would be here: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#horizontalpodautoscaler-v2beta2-autoscaling" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#horizontalpodautoscaler-v2beta2-autoscaling</a></p>
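<p>For the specific "divide by the number of pods" behaviour, the relevant field is the metric target type: <code>AverageValue</code> divides the fetched metric by the current pod count, while <code>Value</code> compares it as-is. A rough sketch for an external metric (the metric name and numbers are placeholders):</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                   # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                     # placeholder
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready   # placeholder external metric
      target:
        type: AverageValue           # metric value is divided by the current number of pods
        averageValue: "30"
</code></pre>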
weichung.shaw
<p>I want to deploy a windows service in azure cluster. I have created the yaml file, which deploys the service, however when I run <code>kubectr get pods</code> I get the following,</p> <pre><code>NAME READY STATUS RESTARTS AGE windowsservice-deploy-5994764596-jfghj 0/1 ImagePullBackOff 0 39m </code></pre> <p>My yaml file looks as follows, </p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: windowsservice-deploy labels: app: windowsservice spec: replicas: 1 template: metadata: name: windowsservice labels: app: windowsservice spec: containers: - name: windowsservice image: windowskube.azurecr.io/windowsimage:v1 imagePullPolicy: IfNotPresent restartPolicy: Always selector: matchLabels: app: windowsservice --- apiVersion: v1 kind: Service metadata: name: windows-service spec: selector: app: windowsservice ports: - port: 80 type: LoadBalancer </code></pre> <p>Here is the output kubectl describe pod windowsservice-deploy-5994764596-jfghj</p> <pre><code>Name: windowsservice-deploy-5994764596-jfghj Namespace: default Priority: 0 Node: aks-nodepool1-41533414-vmss000000/10.240.0.4 Start Time: Mon, 15 Jun 2020 11:24:18 +0100 Labels: app=windowsservice pod-template-hash=5994764596 Annotations: &lt;none&gt; Status: Pending IP: 10.244.0.8 IPs: &lt;none&gt; Controlled By: ReplicaSet/windowsservice-deploy-5994764596 Containers: workerservice: Container ID: Image: windowskube.azurecr.io/windowsimage:v1 Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-zvwh8 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-zvwh8: Type: Secret (a volume populated by a Secret) SecretName: default-token-zvwh8 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Failed 18m (x330 over 93m) kubelet, aks-nodepool1-41533414-vmss000000 Error: ImagePullBackOff Normal BackOff 3m11s (x395 over 93m) kubelet, aks-nodepool1-41533414-vmss000000 Back-off pulling image "windowskube.azurecr.io/windowsimage:v1" </code></pre> <p>It's a windows service and I haven't deployed on before, am I missing something?</p> <p>Thanks</p>
Garry A
<p>Given <code>windowskube.azurecr.io/windowsimage:v1</code> seems to be your private Azure container registry, I think the missing piece is to provide kubernetes with the login credentials to the private registry.</p> <p>See: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p> <p>As has been mentioned, by doing <code>kubectl describe pod windowsservice-deploy-5994764596-jfghj</code> and scrolling down to the bottom to view the <code>Events</code>, you'll get a better error message describing why the image pull has failed.</p>
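<p>A sketch of the usual fix, assuming you pull from ACR with a service principal (the secret name and credentials are placeholders; on AKS you can alternatively attach the registry to the cluster with <code>az aks update --attach-acr</code>):</p> <pre><code>kubectl create secret docker-registry acr-secret \
  --docker-server=windowskube.azurecr.io \
  --docker-username=&lt;service-principal-id&gt; \
  --docker-password=&lt;service-principal-password&gt;
</code></pre> <p>Then reference it in the pod template of your Deployment:</p> <pre><code>spec:
  template:
    spec:
      imagePullSecrets:
      - name: acr-secret
      containers:
      - name: windowsservice
        image: windowskube.azurecr.io/windowsimage:v1
</code></pre>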
weichung.shaw
<p>I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same deployment to have different args used with the starting command.</p> <p>I have this application which runs spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and spark app name.</p> <p>I've read about StatefulSets and searched the documentation, but I didn't find a solution to my problem. Since every Pod needs to use a different port, I need that port to be declared in a service if I understood correctly, and also directly passed as an argument to the pod command in the args.</p> <p>Is there a way to achieve this without using multiple deployments, one for each pod I need to create? Because this is the only solution I can think of, but it can't be scaled after being deployed. I'm using Helm to deploy the application, so I can easily create as many deployments and / or services as needed, but I would like to find a solution which can scale at runtime, if possible.</p>
AndD
<p>Posting the solution I used since it could be useful for other people searching around.</p> <p>In the end I found a great configuration to solve my problem. I used a StatefulSet to declare the deployment of the Spark application. Associated with the StatefulSet, a headless Service which exposes each pod on a specific port.</p> <p>A StatefulSet can declare a property <code>spec.serviceName</code> which takes the name of a headless service to create a unique network name for each Pod, something like <code>&lt;pod_name&gt;.&lt;service_name&gt;</code>.</p> <p>Additionally, each Pod has a unique and not-changing name which is created using the application name and an ordinal starting from 0 for each replica Pod.</p> <p>Using a startup script in the Docker image and injecting the pod name from the metadata into each Pod's environment, I was able to use different configurations for each pod since, even with the same deployment, each pod has its own unique metadata name and I can use the StatefulSet service to obtain what I needed.</p> <p>This way, the StatefulSet is scalable at run time and works as expected.</p>
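<p>For reference, a rough sketch of this pattern (all names, the image and the port are placeholders; the startup script is assumed to read <code>POD_NAME</code> to derive its own port and Spark app name):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: spark-driver-headless   # headless service, placeholder name
spec:
  clusterIP: None
  selector:
    app: spark-driver
  ports:
  - name: driver
    port: 7077
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-driver
spec:
  serviceName: spark-driver-headless
  replicas: 3
  selector:
    matchLabels:
      app: spark-driver
  template:
    metadata:
      labels:
        app: spark-driver
    spec:
      containers:
      - name: driver
        image: my-spark-app:latest    # placeholder image
        env:
        - name: POD_NAME              # injected via the downward API
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
</code></pre> <p>Each replica is then reachable as <code>spark-driver-0.spark-driver-headless</code>, <code>spark-driver-1.spark-driver-headless</code>, and so on, and the startup script can map <code>$POD_NAME</code> to its own configuration.</p>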
AndD
<p>Full Minikube start command <code>minikube start --driver=docker --alsologtostderr -v=3</code></p> <p>Output:</p> <pre><code>minikube start --driver=docker --alsologtostderr -v=3 2022-10-10 15:03:12.668 | . I1010 13:03:11.656428 24138 out.go:296] Setting OutFile to fd 1 ... 2022-10-10 15:03:12.668 | . I1010 13:03:11.657466 24138 out.go:343] TERM=,COLORTERM=, which probably does not support color 2022-10-10 15:03:12.668 | . I1010 13:03:11.657488 24138 out.go:309] Setting ErrFile to fd 2... 2022-10-10 15:03:12.668 | . I1010 13:03:11.657526 24138 out.go:343] TERM=,COLORTERM=, which probably does not support color 2022-10-10 15:03:12.668 | . I1010 13:03:11.657771 24138 root.go:333] Updating PATH: /home/builder/.minikube/bin 2022-10-10 15:03:12.668 | . W1010 13:03:11.658273 24138 root.go:310] Error reading config file at /home/builder/.minikube/config/config.json: open /home/builder/.minikube/config/config.json: no such file or directory 2022-10-10 15:03:12.668 | . I1010 13:03:11.670761 24138 out.go:303] Setting JSON to false 2022-10-10 15:03:12.669 | . I1010 13:03:11.702854 24138 start.go:115] hostinfo: {&quot;hostname&quot;:&quot;vkvm1.eng.marklogic.com&quot;,&quot;uptime&quot;:359354,&quot;bootTime&quot;:1665072838,&quot;procs&quot;:189,&quot;os&quot;:&quot;linux&quot;,&quot;platform&quot;:&quot;redhat&quot;,&quot;platformFamily&quot;:&quot;rhel&quot;,&quot;platformVersion&quot;:&quot;7.9&quot;,&quot;kernelVersion&quot;:&quot;3.10.0-1160.76.1.el7.x86_64&quot;,&quot;kernelArch&quot;:&quot;x86_64&quot;,&quot;virtualizationSystem&quot;:&quot;&quot;,&quot;virtualizationRole&quot;:&quot;guest&quot;,&quot;hostId&quot;:&quot;8eb41075-ad46-4948-8bf3-6f56c8fc814f&quot;} 2022-10-10 15:03:12.669 | . I1010 13:03:11.702985 24138 start.go:125] virtualization: guest 2022-10-10 15:03:12.669 | . I1010 13:03:11.705293 24138 out.go:177] * minikube v1.27.0 on Redhat 7.9 (amd64) 2022-10-10 15:03:12.669 | . * minikube v1.27.0 on Redhat 7.9 (amd64) 2022-10-10 15:03:12.669 | . I1010 13:03:11.706999 24138 notify.go:214] Checking for updates... 2022-10-10 15:03:12.669 | . W1010 13:03:11.707409 24138 preload.go:295] Failed to list preload files: open /home/builder/.minikube/cache/preloaded-tarball: no such file or directory 2022-10-10 15:03:12.669 | . W1010 13:03:11.708033 24138 out.go:239] ! Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases. 2022-10-10 15:03:12.669 | . ! Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases. 2022-10-10 15:03:12.669 | . W1010 13:03:11.708170 24138 out.go:239] ! For more information, see: https://github.com/kubernetes/kubernetes/issues/112135 2022-10-10 15:03:12.669 | . ! For more information, see: https://github.com/kubernetes/kubernetes/issues/112135 2022-10-10 15:03:12.669 | . I1010 13:03:11.708304 24138 driver.go:365] Setting default libvirt URI to qemu:///system 2022-10-10 15:03:12.669 | . I1010 13:03:11.779753 24138 docker.go:137] docker version: linux-20.10.18 2022-10-10 15:03:12.669 | . I1010 13:03:11.780033 24138 cli_runner.go:164] Run: docker system info --format &quot;{{json .}}&quot; 2022-10-10 15:03:12.935 | . I1010 13:03:11.804722 24138 lock.go:35] WriteFile acquiring /home/builder/.minikube/last_update_check: {Name:mkfeeafdcd5b2a03a55be5c45e91f1633dbd4269 Clock:{} Delay:500ms Timeout:1m0s Cancel:&lt;nil&gt;} 2022-10-10 15:03:12.935 | . 2022-10-10 15:03:12.936 | . 
I1010 13:03:11.957361 24138 info.go:265] docker info: {ID:U2TF:AUHN:IGPM:LOIS:LYU5:UDYQ:NGVO:W6RZ:NRM4:ZUBC:VRBL:C54T Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:&lt;nil&gt; Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:&lt;nil&gt; Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-10-10 13:03:11.819053176 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1160.76.1.el7.x86_64 OperatingSystem:Red Hat Enterprise Linux Server 7.9 (Maipo) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:16654606336 GenericResources:&lt;nil&gt; DockerRootDir:/space/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vkvm1.eng.marklogic.com Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:&lt;nil&gt;} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:&lt;nil&gt; ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:&lt;nil&gt;}} 2022-10-10 15:03:12.936 | . I1010 13:03:11.957632 24138 docker.go:254] overlay module found 2022-10-10 15:03:12.936 | . I1010 13:03:11.959650 24138 out.go:177] * Using the docker driver based on user configuration 2022-10-10 15:03:12.936 | . * Using the docker driver based on user configuration 2022-10-10 15:03:12.936 | . I1010 13:03:11.960738 24138 start.go:284] selected driver: docker 2022-10-10 15:03:12.936 | . I1010 13:03:11.960804 24138 start.go:808] validating driver &quot;docker&quot; against &lt;nil&gt; 2022-10-10 15:03:12.936 | . I1010 13:03:11.960861 24138 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:&lt;nil&gt; Reason: Fix: Doc: Version:} 2022-10-10 15:03:12.936 | . I1010 13:03:11.961184 24138 cli_runner.go:164] Run: docker system info --format &quot;{{json .}}&quot; 2022-10-10 15:03:13.209 | . 
I1010 13:03:12.105432 24138 info.go:265] docker info: {ID:U2TF:AUHN:IGPM:LOIS:LYU5:UDYQ:NGVO:W6RZ:NRM4:ZUBC:VRBL:C54T Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:&lt;nil&gt; Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:&lt;nil&gt; Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-10-10 13:03:11.999402744 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1160.76.1.el7.x86_64 OperatingSystem:Red Hat Enterprise Linux Server 7.9 (Maipo) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:16654606336 GenericResources:&lt;nil&gt; DockerRootDir:/space/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vkvm1.eng.marklogic.com Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:&lt;nil&gt;} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:&lt;nil&gt; ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:&lt;nil&gt;}} 2022-10-10 15:03:13.209 | . I1010 13:03:12.105750 24138 start_flags.go:296] no existing cluster config was found, will generate one from the flags 2022-10-10 15:03:13.210 | . I1010 13:03:12.106519 24138 start_flags.go:377] Using suggested 3900MB memory alloc based on sys=15883MB, container=15883MB 2022-10-10 15:03:13.210 | . I1010 13:03:12.106744 24138 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true] 2022-10-10 15:03:13.210 | . I1010 13:03:12.109246 24138 out.go:177] * Using Docker driver with root privileges 2022-10-10 15:03:13.210 | . * Using Docker driver with root privileges 2022-10-10 15:03:13.210 | . I1010 13:03:12.110491 24138 cni.go:95] Creating CNI manager for &quot;&quot; 2022-10-10 15:03:13.210 | . I1010 13:03:12.110542 24138 cni.go:169] CNI unnecessary in this configuration, recommending no CNI 2022-10-10 15:03:13.210 | . 
I1010 13:03:12.110582 24138 start_flags.go:310] config: 2022-10-10 15:03:13.210 | . {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:&lt;nil&gt; ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/builder:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} 2022-10-10 15:03:13.210 | . I1010 13:03:12.112214 24138 out.go:177] * Starting control plane node minikube in cluster minikube 2022-10-10 15:03:13.210 | . * Starting control plane node minikube in cluster minikube 2022-10-10 15:03:13.210 | . I1010 13:03:12.113593 24138 cache.go:120] Beginning downloading kic base image for docker with docker 2022-10-10 15:03:13.210 | . I1010 13:03:12.114952 24138 out.go:177] * Pulling base image ... 2022-10-10 15:03:13.210 | . * Pulling base image ... ... (lots of logs from pulling images removed as I hit max char limit) 2022-10-10 15:04:10.436 | . * Creating docker container (CPUs=2, Memory=3900MB) ... 2022-10-10 15:04:10.436 | . I1010 13:04:09.178221 24138 start.go:159] libmachine.API.Create for &quot;minikube&quot; (driver=&quot;docker&quot;) 2022-10-10 15:04:10.436 | . I1010 13:04:09.178329 24138 client.go:168] LocalClient.Create starting 2022-10-10 15:04:10.436 | . I1010 13:04:09.179098 24138 client.go:171] LocalClient.Create took 685.72µs 2022-10-10 15:04:12.358 | . I1010 13:04:11.181303 24138 ssh_runner.go:195] Run: sh -c &quot;df -h /var | awk 'NR==2{print $5}'&quot; 2022-10-10 15:04:12.358 | . I1010 13:04:11.181622 24138 cli_runner.go:164] Run: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube 2022-10-10 15:04:12.358 | . W1010 13:04:11.224149 24138 cli_runner.go:211] docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube returned with exit code 1 2022-10-10 15:04:12.358 | . 
I1010 13:04:11.224495 24138 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for &quot;minikube&quot;: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube: exit status 1 2022-10-10 15:04:12.358 | . stdout: 2022-10-10 15:04:12.358 | . 2022-10-10 15:04:12.358 | . 2022-10-10 15:04:12.358 | . stderr: 2022-10-10 15:04:12.358 | . Error: No such container: minikube 2022-10-10 15:04:12.618 | . I1010 13:04:11.501210 24138 cli_runner.go:164] Run: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube 2022-10-10 15:04:12.618 | . W1010 13:04:11.542666 24138 cli_runner.go:211] docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube returned with exit code 1 2022-10-10 15:04:12.618 | . I1010 13:04:11.542848 24138 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for &quot;minikube&quot;: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube: exit status 1 (repeated logs removed here due to char limit) 2022-10-10 15:04:15.676 | . stdout: 2022-10-10 15:04:15.676 | . 2022-10-10 15:04:15.676 | . 2022-10-10 15:04:15.676 | . stderr: 2022-10-10 15:04:15.676 | . Error: No such container: minikube 2022-10-10 15:04:15.676 | . I1010 13:04:14.569874 24138 start.go:128] duration metric: createHost completed in 5.395396901s 2022-10-10 15:04:15.676 | . I1010 13:04:14.569895 24138 start.go:83] releasing machines lock for &quot;minikube&quot;, held for 5.396481666s 2022-10-10 15:04:15.676 | . W1010 13:04:14.569998 24138 start.go:602] error starting host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor 2022-10-10 15:04:15.676 | . I1010 13:04:14.570230 24138 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} 2022-10-10 15:04:15.676 | . W1010 13:04:14.609043 24138 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1 2022-10-10 15:04:15.676 | . I1010 13:04:14.609181 24138 delete.go:46] couldn't inspect container &quot;minikube&quot; before deleting: unknown state &quot;minikube&quot;: docker container inspect minikube --format={{.State.Status}}: exit status 1 2022-10-10 15:04:15.676 | . stdout: 2022-10-10 15:04:15.676 | . 2022-10-10 15:04:15.676 | . 2022-10-10 15:04:15.676 | . stderr: 2022-10-10 15:04:15.676 | . Error: No such container: minikube 2022-10-10 15:04:15.676 | . I1010 13:04:14.611918 24138 cli_runner.go:164] Run: sudo -n podman container inspect minikube --format={{.State.Status}} 2022-10-10 15:04:15.676 | . W1010 13:04:14.649788 24138 cli_runner.go:211] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 1 2022-10-10 15:04:15.676 | . I1010 13:04:14.649837 24138 delete.go:46] couldn't inspect container &quot;minikube&quot; before deleting: unknown state &quot;minikube&quot;: sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 1 2022-10-10 15:04:15.676 | . stdout: 2022-10-10 15:04:15.676 | . 2022-10-10 15:04:15.676 | . stderr: 2022-10-10 15:04:15.676 | . 
sudo: a password is required 2022-10-10 15:04:15.676 | . W1010 13:04:14.649922 24138 start.go:607] delete host: Docker machine &quot;minikube&quot; does not exist. Use &quot;docker-machine ls&quot; to list machines. Use &quot;docker-machine create&quot; to add a new one. 2022-10-10 15:04:15.676 | . W1010 13:04:14.650376 24138 out.go:239] ! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor 2022-10-10 15:04:15.676 | . ! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor 2022-10-10 15:04:15.676 | . I1010 13:04:14.650435 24138 start.go:617] Will try again in 5 seconds ... 2022-10-10 15:04:20.953 | . I1010 13:04:19.653290 24138 start.go:364] acquiring machines lock for minikube: {Name:mke10511c9cb3816f0997f9cfc8a1716887d51cb Clock:{} Delay:500ms Timeout:10m0s Cancel:&lt;nil&gt;} 2022-10-10 15:04:20.953 | . I1010 13:04:19.653711 24138 start.go:368] acquired machines lock for &quot;minikube&quot; in 296.18µs 2022-10-10 15:04:20.954 | . I1010 13:04:19.653797 24138 start.go:93] Provisioning new machine with config: &amp;{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:&lt;nil&gt; ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/builder:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &amp;{Name: IP: Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true} 2022-10-10 15:04:20.954 | . I1010 13:04:19.654129 24138 start.go:125] createHost starting for &quot;&quot; (driver=&quot;docker&quot;) 2022-10-10 15:04:20.954 | . I1010 13:04:19.656399 24138 out.go:204] * Creating docker container (CPUs=2, Memory=3900MB) ... 2022-10-10 15:04:20.954 | . 
* Creating docker container (CPUs=2, Memory=3900MB) ... 2022-10-10 15:04:20.954 | . I1010 13:04:19.656656 24138 start.go:159] libmachine.API.Create for &quot;minikube&quot; (driver=&quot;docker&quot;) 2022-10-10 15:04:20.954 | . I1010 13:04:19.656760 24138 client.go:168] LocalClient.Create starting 2022-10-10 15:04:20.954 | . I1010 13:04:19.656936 24138 client.go:171] LocalClient.Create took 155.782µs 2022-10-10 15:04:22.871 | . I1010 13:04:21.657412 24138 ssh_runner.go:195] Run: sh -c &quot;df -h /var | awk 'NR==2{print $5}'&quot; 2022-10-10 15:04:22.871 | . I1010 13:04:21.658965 24138 cli_runner.go:164] Run: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube 2022-10-10 15:04:22.871 | . W1010 13:04:21.696755 24138 cli_runner.go:211] docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube returned with exit code 1 2022-10-10 15:04:22.871 | . I1010 13:04:21.696968 24138 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for &quot;minikube&quot;: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube: exit status 1 2022-10-10 15:04:22.871 | . stdout: 2022-10-10 15:04:22.871 | . 2022-10-10 15:04:22.871 | . 2022-10-10 15:04:22.871 | . stderr: 2022-10-10 15:04:22.871 | . Error: No such container: minikube 2022-10-10 15:04:22.871 | . I1010 13:04:21.897543 24138 cli_runner.go:164] Run: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube 2022-10-10 15:04:22.871 | . W1010 13:04:21.936224 24138 cli_runner.go:211] docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube returned with exit code 1 2022-10-10 15:04:22.871 | . I1010 13:04:21.936374 24138 retry.go:31] will retry after 380.704736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for &quot;minikube&quot;: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube: exit status 1 (lines removed due to char limit) 2022-10-10 15:04:24.531 | . stdout: 2022-10-10 15:04:24.531 | . 2022-10-10 15:04:24.531 | . 2022-10-10 15:04:24.531 | . stderr: 2022-10-10 15:04:24.531 | . Error: No such container: minikube 2022-10-10 15:04:24.790 | . I1010 13:04:23.735213 24138 cli_runner.go:164] Run: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube 2022-10-10 15:04:24.791 | . W1010 13:04:23.774692 24138 cli_runner.go:211] docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube returned with exit code 1 2022-10-10 15:04:24.791 | . I1010 13:04:23.774864 24138 retry.go:31] will retry after 545.000538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for &quot;minikube&quot;: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube: exit status 1 2022-10-10 15:04:24.791 | . stdout: 2022-10-10 15:04:24.791 | . 2022-10-10 15:04:24.791 | . 2022-10-10 15:04:24.791 | . 
stderr: 2022-10-10 15:04:24.791 | . Error: No such container: minikube 2022-10-10 15:04:25.361 | . I1010 13:04:24.320916 24138 cli_runner.go:164] Run: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube 2022-10-10 15:04:25.361 | . W1010 13:04:24.359812 24138 cli_runner.go:211] docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube returned with exit code 1 2022-10-10 15:04:25.361 | . I1010 13:04:24.359995 24138 retry.go:31] will retry after 660.685065ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for &quot;minikube&quot;: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube: exit status 1 2022-10-10 15:04:26.188 | . stdout: 2022-10-10 15:04:26.188 | . 2022-10-10 15:04:26.188 | . 2022-10-10 15:04:26.188 | . stderr: 2022-10-10 15:04:26.188 | . Error: No such container: minikube 2022-10-10 15:04:26.188 | . 2022-10-10 15:04:26.188 | . W1010 13:04:25.061915 24138 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for &quot;minikube&quot;: docker container inspect -f &quot;'{{(index (index .NetworkSettings.Ports &quot;22/tcp&quot;) 0).HostPort}}'&quot; minikube: exit status 1 2022-10-10 15:04:26.188 | . stdout: 2022-10-10 15:04:26.188 | . 2022-10-10 15:04:26.188 | . 2022-10-10 15:04:26.188 | . stderr: 2022-10-10 15:04:26.188 | . Error: No such container: minikube 2022-10-10 15:04:26.188 | . I1010 13:04:25.061939 24138 start.go:128] duration metric: createHost completed in 5.407782504s 2022-10-10 15:04:26.188 | . I1010 13:04:25.061969 24138 start.go:83] releasing machines lock for &quot;minikube&quot;, held for 5.408232309s 2022-10-10 15:04:26.188 | . W1010 13:04:25.062386 24138 out.go:239] * Failed to start docker container. Running &quot;minikube delete&quot; may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor 2022-10-10 15:04:26.188 | . * Failed to start docker container. Running &quot;minikube delete&quot; may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor 2022-10-10 15:04:26.188 | . I1010 13:04:25.064746 24138 out.go:177] 2022-10-10 15:04:26.188 | . 2022-10-10 15:04:26.188 | . W1010 13:04:25.066259 24138 out.go:239] X Exiting due to GUEST_PROVISION_ACQUIRE_LOCK: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor 2022-10-10 15:04:26.188 | . X Exiting due to GUEST_PROVISION_ACQUIRE_LOCK: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor 2022-10-10 15:04:26.188 | . W1010 13:04:25.066434 24138 out.go:239] * Suggestion: Please try purging minikube using `minikube delete --all --purge` 2022-10-10 15:04:26.188 | . * Suggestion: Please try purging minikube using `minikube delete --all --purge` 2022-10-10 15:04:26.188 | . W1010 13:04:25.066568 24138 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11022 2022-10-10 15:04:26.188 | . 
* Related issue: https://github.com/kubernetes/minikube/issues/11022 2022-10-10 15:04:26.188 | . I1010 13:04:25.067905 24138 out.go:177] </code></pre> <p>I have tried running just the docker image and it works fine. I've tried purging minikube as it recommends and that has not solved the issue. I've also tried setting the MINIKUBE_HOME variable. I'm pretty new to K8s on CentOS 7, so any advice would be greatly appreciated</p>
Ilan Rosenbaum
<p>Fixed by setting the MINIKUBE_HOME variable to a path outside of the home directory. This GitHub issue helped in solving it: <a href="https://github.com/kubernetes/minikube/issues/11022#issuecomment-848387322" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/11022#issuecomment-848387322</a></p>
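<p>For anyone hitting the same <code>failed to acquire bootstrap client lock: ... bad file descriptor</code> error, the workaround boils down to something like the following; the path is only an example, the point is that it must be on a local filesystem that supports the file locking minikube needs (home directories on NFS reportedly do not):</p> <pre><code># pick any local, non-NFS directory for minikube's state
export MINIKUBE_HOME=/space/minikube

# clean up the previous failed attempt, then start again
minikube delete --all --purge
minikube start --driver=docker
</code></pre>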
Ilan Rosenbaum
<p>I have a Kubernetes cluster with more than 50 pods. I want to be alerted by email when an update happens to a pod or another Kubernetes resource, for example when someone does a manual deployment. How can I achieve this on Linux?</p>
Raghunath Babalsure
<p>If you have Prometheus then you can create an alert like <code>changes(kube_deployment_status_observed_generation[5m]) &gt; 0</code>, which means that a deployment was changed at least once in the last 5 minutes. </p> <p>If you don't have Prometheus, you can install it quite quickly using this repo: <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator</a></p>
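<p>If you go that route (the metric comes from kube-state-metrics), a rule using that expression could look roughly like the sketch below; the rule and alert names are arbitrary, and depending on how your Prometheus Operator is configured you may need extra labels on the <code>PrometheusRule</code> so it gets picked up. The actual email is then sent by an Alertmanager receiver configured with <code>email_configs</code>.</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: deployment-change-alerts      # arbitrary name
spec:
  groups:
  - name: deployment-changes
    rules:
    - alert: DeploymentChanged
      expr: changes(kube_deployment_status_observed_generation[5m]) &gt; 0
      labels:
        severity: info
      annotations:
        summary: Deployment {{ $labels.deployment }} in namespace {{ $labels.namespace }} was updated
</code></pre>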
Vasili Angapov
<p>I am trying to test a liveness probe while learning kubernetes. I have set up a minikube and configured a pod with a liveness probe.</p> <p>Testing the script (e.g. via docker exec) it seems to report success and failure as required.</p> <p>The probe leads to failure events which I can view via <em>kubectl describe podname</em> but it does not report recovery from failures.</p> <p><a href="https://stackoverflow.com/a/34599554/1569204">This answer</a> says that liveness probe successes are not reported by default.</p> <p>I have been trying to increase the log level with no success by running variations like:</p> <pre><code>minikube start --extra-config=apiserver.v=4 minikube start --extra-config=kubectl.v=4 minikube start --v=4 </code></pre> <p>As suggested <a href="https://stackoverflow.com/questions/55929593/run-kubernetes-api-server-in-minikube-in-verbose-mode">here</a> &amp; <a href="https://minikube.sigs.k8s.io/docs/reference/configuration/kubernetes/" rel="nofollow noreferrer">here</a>.</p> <p><strong>What is the proper way to configure the logging level for a kubelet?</strong></p> <p>Can it be modified without restarting the pod or minikube?</p> <p>An event will be reported if a failure causes the pod to be restarted. For kubernetes itself I understand that using it to decide whether to restart the pod is sufficient.</p> <p>Why aren't events recorded for recovery from a failure which does not require a restart? This is how I would expect probes to work in a health monitoring system.</p> <p>How would recovery be visible if the same probe was used in prometheus or similar? For an expensive probe I would not want it to be run multiple times. (Granted, one probe could cache the output to a file, making the second probe cheaper.)</p>
Bruce Adams
<blockquote> <p>I have been trying to increase the log level with no success by running variations like:</p> <pre><code>minikube start --extra-config=apiserver.v=4 minikube start --extra-config=kubectl.v=4 minikube start --v=4 </code></pre> </blockquote> <p>@Bruce, none of the options mentioned by you will work as they are related to other components of the Kubernetes cluster, and in the answer you referred to it was clearly said:</p> <blockquote> <p>The output of successful probes isn't recorded anywhere unless your <strong>Kubelet</strong> has a log level of at least --v=4, <strong><em>in which case it'll be in the Kubelet's logs</em></strong>.</p> </blockquote> <p>So you need to set <code>-v=4</code> specifically for the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a>. In the official docs you can see that it can be started with specific flags, including the one changing the default verbosity level of its logs:</p> <pre><code>-v, --v Level number for the log level verbosity </code></pre> <p><strong>Kubelet</strong> runs as a system service on each node so you can check its status by simply issuing:</p> <pre><code>systemctl status kubelet.service </code></pre> <p>and if you want to see its logs issue the command:</p> <pre><code>journalctl -xeu kubelet.service </code></pre> <p>Try:</p> <pre><code>minikube start --extra-config=kubelet.v=4 </code></pre> <p>however I'm not 100% sure if <strong>Minikube</strong> is able to pass this parameter, so you'll need to verify it on your own. If it doesn't work you should still be able to add it to the kubelet configuration file, specifying the parameters with which it is started (don't forget to restart your <code>kubelet.service</code> after submitting the changes; you simply need to run <code>systemctl restart kubelet.service</code>).</p> <p>Let me know if it helps and don't hesitate to ask additional questions if something is not completely clear.</p>
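<p>As an illustration of that last point: on a regular (e.g. kubeadm-based) node, one common place to add the flag is the kubelet environment file sourced by the systemd unit; the exact path differs between distributions, so treat the one below as an example only:</p> <pre><code># /etc/default/kubelet (Debian/Ubuntu) or /etc/sysconfig/kubelet (RPM-based distros)
KUBELET_EXTRA_ARGS=&quot;--v=4&quot;
</code></pre> <p>followed by <code>systemctl restart kubelet.service</code> as described above.</p>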
mario
<p>I'm running some stress tests in a cluster. I compile the Gatling code into a jar and run it in a dockerized environment. I was wondering if there is a way to upload the final Gatling report to S3. There is an <code>after</code> hook in the Gatling simulation class, but I think it is executed before the report is generated.</p>
Ahmad.Masood
<p>An easy way I can think of to do this, without even changing the gatling code or the jar, is to simply:</p> <ul> <li>Make the docker container running the tests run as an init container</li> <li>After the init container has run, a main container starts and can do the s3 upload, either with bash commands or with whatever you prefer (for example, with tinys3 - <a href="https://github.com/smore-inc/tinys3" rel="nofollow noreferrer">https://github.com/smore-inc/tinys3</a> )</li> </ul> <p>Just a general example:</p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: app.kubernetes.io/instance: gatling-stress-test app.kubernetes.io/name: gatling-stress-test name: gatling-stress-test spec: initContainers: - name: gatling-stress-test-runner image: gatling-docker-image:latest imagePullPolicy: Always volumeMounts: - mountPath: /full/path/to/gatling/reports name: reports containers: - name: stress-test-s3-copier image: ubuntu-image-with-tinys3:latest imagePullPolicy: Always volumeMounts: - mountPath: /full/path/to/gatling/reports name: reports volumes: - emptyDir: {} name: reports </code></pre>
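<p>The copier container then just needs a command that pushes the shared report directory to S3. For instance, if its image ships the AWS CLI and credentials are provided (via a secret or an IAM role - both assumptions here, as is the bucket name), its command could be something like:</p> <pre><code>aws s3 cp --recursive /full/path/to/gatling/reports s3://my-gatling-reports/
</code></pre>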
AndD
<p>I am trying to lock down my Kubernetes cluster. Cloudflare currently sits in front of it, so I am trying to whitelist Cloudflare's IPs.</p> <p>This is in my service yaml:</p> <pre><code>spec: type: LoadBalancer loadBalancerSourceRanges: - 130.211.204.1/32 - 173.245.48.0/20 - 103.21.244.0/22 - 103.22.200.0/22 - 103.31.4.0/22 - 141.101.64.0/18 - 108.162.192.0/18 - 190.93.240.0/20 - 188.114.96.0/20 - 197.234.240.0/22 - 198.41.128.0/17 - 162.158.0.0/15 - 104.16.0.0/12 - 172.64.0.0/13 - 131.0.72.0/22 </code></pre> <p>After applying this manifest, I can still access the load balancer URL from any browser! Is this feature not working, or perhaps I configured it incorrectly?</p> <p>Thanks.</p>
Jeryl Cook
<p>From <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service</a>:</p> <blockquote> <p>When using a Service with spec.type: LoadBalancer, you can specify the IP ranges that are allowed to access the load balancer by using spec.loadBalancerSourceRanges. This field takes a list of IP CIDR ranges, which Kubernetes will use to configure firewall exceptions. This feature is currently supported on Google Compute Engine, Google Kubernetes Engine, AWS Elastic Kubernetes Service, Azure Kubernetes Service, and IBM Cloud Kubernetes Service. This field will be ignored if the cloud provider does not support the feature.</p> </blockquote> <p>Maybe your cloud simply does not support it.</p> <p>You can use other things that allow blocking by source IP, like nginx or ingress-nginx. In ingress-nginx you just specify a list of allowed IPs in the annotation <code>ingress.kubernetes.io/whitelist-source-range</code>. </p> <p>If you want to go the Nginx or another proxy route, don't forget to change the Load Balancer Service <code>externalTrafficPolicy</code> to <code>Local</code>. Otherwise you will not see real client IPs.</p>
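<p>For the ingress-nginx route, the whitelist is set per Ingress through an annotation; a rough sketch below (the host, service name and shortened CIDR list are placeholders - include all the Cloudflare ranges from your Service - and note that newer controller versions use the <code>nginx.ingress.kubernetes.io/</code> annotation prefix):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 173.245.48.0/20,103.21.244.0/22,141.101.64.0/18
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
</code></pre>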
Vasili Angapov
<p>I have a perl command which I need to pass to a kubernetes job. The command computes and prints the value of pi to 2000 places.</p> <p>perl -Mbignum=bpi -wle 'print bpi(2000)'</p> <p>I've passed the command to a job yaml file as shown below. Via kubectl, the yaml file creates the job successfully and prints the value of pi. </p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi-job spec: template: spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never </code></pre> <p>When I try the command syntaxes below, I get an error. Could you please tell me why the below command syntax is incorrect? </p> <pre><code>command: ["perl", "-Mbignum=bpi", "-wle", "'print bpi(2000)'"] # placed inside single quote command: ["perl", "-Mbignum=bpi", "-wle", "print", " bpi(2000)"] # print split into two quotes. command: ["perl", "-Mbignum=bpi", "-wle", "print", "bpi(2000)"] # print split into two quotes. </code></pre> <p>Also when I pass the complete command in a single quote, it gives me an error.</p> <pre><code>command: ["perl -Mbignum=bpi -wle print bpi(2000)"] # complete command in single quotes. </code></pre>
Naseem Khan
<p>The <code>command:</code> key requires that you pass <strong>an array</strong> as its argument (compare with <a href="https://rollout.io/blog/yaml-tutorial-everything-you-need-get-started/" rel="nofollow noreferrer">this</a> example). You can specify an array in two equivalent forms:</p> <p>in a single line, using square brackets <code>[]</code>:</p> <blockquote> <pre><code>--- items: [ 1, 2, 3, 4, 5 ] names: [ "one", "two", "three", "four" ] </code></pre> </blockquote> <p>and in a multi-line fashion:</p> <blockquote> <pre><code>--- items: - 1 - 2 - 3 - 4 - 5 names: - "one" - "two" - "three" - "four" </code></pre> </blockquote> <p>You can use both double <code>"</code> and single <code>'</code> quotes, so the example below is also correct and will work:</p> <pre><code>command: ['perl', '-Mbignum=bpi', '-wle', 'print bpi(2000)'] </code></pre> <p>However you cannot use both at the same time. I don't know what you would like to achieve by putting <code>"'print bpi(2000)'"</code> there, but this syntax doesn't make any sense and is simply incorrect.</p> <p>You might as well ask why you can't run <code>'echo bla'</code> in <code>bash</code> while at the same time <code>echo bla</code> runs successfully, giving you the desired result.</p> <p>Note that when providing <code>command</code> this way in <strong>kubernetes</strong>, only the first element of the array is an actual <code>command</code> (an executable file searched in <code>$PATH</code>) and the following elements are its arguments. Keeping this in mind, you should notice that your next two examples also don't make any sense, as neither "print" nor "bpi(2000)" provided separately are valid arguments:</p> <blockquote> <pre><code>command: ["perl", "-Mbignum=bpi", "-wle", "print", " bpi(2000)"] # print split into two quotes. command: ["perl", "-Mbignum=bpi", "-wle", "print", "bpi(2000)"] # print split into two quotes. </code></pre> </blockquote> <p>To be able to fully understand what this command is doing, we need to dive a bit into basic <code>perl</code> documentation. I left only those options that were used in our example and are relevant to it:</p> <pre><code>$ perl --help Usage: perl [switches] [--] [programfile] [arguments] -e program one line of program (several -e's allowed, omit programfile) -l[octal] enable line ending processing, specifies line terminator -[mM][-]module execute "use/no module..." before executing program -w enable many useful warnings Run 'perldoc perl' for more help with Perl. </code></pre> <p>Now let's analyse our command step by step:</p> <pre><code>["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] </code></pre> <p>Each element of the array that follows the main command <code>"perl"</code> is a separate entity which is passed as <em>an argument of the main command</em>, <em>its flag</em>, or <em>an argument of the most recently provided flag</em>, which according to the documentation is required and should be provided in a very specific form.</p> <p>In our <code>-wle</code> set of flags, the <code>e</code> option is crucial as it must be followed by a specific argument:</p> <pre><code> -e program one line of program (several -e's allowed, omit programfile) </code></pre> <p>which in our example is:</p> <pre><code>print bpi(2000) </code></pre> <p>I want to emphasize it again. Every element of the array is treated as a separate entity.
By breaking it into two separate elements such as <code>"print", " bpi(2000)"</code> or <code>"print", "bpi(2000)"</code> you feed <code>perl -e</code> with only the <code>print</code> argument, which doesn't make any sense to it, as it requires a very specific command telling it what should be printed. It is exactly as if you ran in a bash shell:</p> <pre><code>perl -Mbignum=bpi -wle 'print' 'bpi(2000)' </code></pre> <p>which will result in the following error from the perl interpreter:</p> <pre><code>Use of uninitialized value $_ in print at -e line 1. </code></pre> <p>And finally when you run your last example:</p> <blockquote> <p>command: ["perl -Mbignum=bpi -wle print bpi(2000)"] # complete command in single quotes.</p> </blockquote> <p>you'll get a quite detailed message explaining why it doesn't work in the <code>Pod</code> events (<code>kubectl describe pod pi</code>):</p> <pre><code>Error: failed to start container "pi": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"perl -Mbignum=bpi -wle print bpi(2000)\": executable file not found in $PATH": unknown </code></pre> <p>It basically tries to find in $PATH an executable file named <code>"perl -Mbignum=bpi -wle print bpi(2000)"</code>, which of course cannot be found.</p> <p>If you want to familiarize yourself with different ways of defining a <code>command</code> and its <code>arguments</code> for a <code>container</code> in <strong>kubernetes</strong>, I recommend reading <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">this</a> section in the official kubernetes documentation.</p>
mario
<p>I'm developing a Java application that creates some Kubernetes Jobs using the official Kubernetes Java client. Each Job uses its own configuration directory, which already exists as a configMap in the cluster. (This configMap was created using <code>kubectl create configmap {name} --from-file=/... </code>)</p> <p>Using the Java client, I successfully created a <code>V1ConfigMap</code> object that refers to a specific configMap, and I also found that <code>V1Volume.setConfigMap()</code> will convert a <code>V1ConfigMapVolumeSource</code> into a <code>V1Volume</code> that can be mounted by a container.</p> <p>However, I couldn't find a way to map <code>V1ConfigMap</code> and <code>V1ConfigMapVolumeSource</code>.</p> <p>Here is my code:</p> <pre class="lang-java prettyprint-override"><code>public void setConfigMap(V1ConfigMap cm, String mountPath){ V1ConfigMapVolumeSource volSource = new V1ConfigMapVolumeSource(); //Some additional mappings are needed here. //volSource = ...(cm) //create V1Volume from V1ConfigMapVolumeSource String volName = &quot;appSetting&quot;; V1Volume settingVol = new V1Volume().name(volName); settingVol.setConfigMap(volSource); //create V1VolumeMount V1VolumeMount volumeMount = new V1VolumeMount(); volumeMount.setMountPath(mountPath); volumeMount.setName(volName); //set created objects to the Job job.getSpec().getTemplate().getSpec().addVolumesItem(settingVol); job.getSpec().getTemplate().getSpec().getContainers().get(0).addVolumeMountsItem(volumeMount); } </code></pre> <p>Does anyone know the way to solve this, or is my approach completely wrong?</p>
Daigo
<p>As stated in the <a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/V1ConfigMapVolumeSource.md" rel="nofollow noreferrer">documentation</a> for <code>V1ConfigMapVolumeSource</code>, there is a <code>name</code> parameter (type <code>String</code>) which is the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names" rel="nofollow noreferrer">name of the referent</a>.</p> <p>To link a <code>ConfigMap</code> inside a <code>ConfigMapVolumeSource</code>, just put the name of the <code>ConfigMap</code> in the name parameter; that should be enough.</p>
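<p>In the method from the question, that would be something along these lines (only the name wiring is the relevant part; the rest of the method stays as it is):</p> <pre class="lang-java prettyprint-override"><code>V1ConfigMapVolumeSource volSource = new V1ConfigMapVolumeSource();
// reference the existing ConfigMap by its name
volSource.setName(cm.getMetadata().getName());
</code></pre>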
AndD
<p>I'm using <a href="https://metacontroller.app" rel="nofollow noreferrer">Metacontroller</a> to implement a Kubernetes operator. </p> <p>My problem is the following:</p> <ol> <li>Metacontroller never stops calling my sync hook for my controller (a <a href="https://metacontroller.app/api/compositecontroller/" rel="nofollow noreferrer">composite controller</a> in this case), and </li> <li>the parent resource's <code>status.observedGeneration</code> field is getting updated continuously (from what I understand that means the resource was recreated).</li> </ol> <p>The composite controller documentation (specifically the <a href="https://metacontroller.app/api/compositecontroller/#sync-hook-response" rel="nofollow noreferrer">response documentation</a>) suggests that if there are no changes in the returned parent status or in the children collection, Metacontroller should stop calling the sync hook. </p> <p>I additionally removed <code>spec.resyncPeriodSeconds</code> and <code>spec.parentResource.revisionHistory</code> from the composite controller manifest (to not trigger any calls to the sync hook due to timer events or changes to the parent's <code>status</code> field).</p> <p>Sadly, none of this worked. How can I tell Metacontroller to stop calling the sync hook and stop recreating the resource?</p>
FK82
<p>You probably need to enable the "status" subresource for your CRD: <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#status-subresource" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#status-subresource</a></p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: myresource spec: ... subresources: status: {} </code></pre> <p>Without that, Metacontroller will treat status updates as normal resource updates which in turn creates a new <code>.metadata.resourceVersion</code> / <code>.metadata.generation</code> because Metacontroller always adds an updated <code>.status.observedGeneration</code> field. </p> <p>See here: <a href="https://github.com/GoogleCloudPlatform/metacontroller/blob/985572b9052a306f7e4d4fb84f2ced6f74247dd5/dynamic/clientset/clientset.go#L200" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/metacontroller/blob/985572b9052a306f7e4d4fb84f2ced6f74247dd5/dynamic/clientset/clientset.go#L200</a></p> <p>I have created an issue for this: <a href="https://github.com/GoogleCloudPlatform/metacontroller/issues/176" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/metacontroller/issues/176</a></p> <p>Hopefully this will make this situation more obvious in the future.</p>
Michael Grüner
<p>I'm thinking of a solution to do a rolling update on a schedule without really releasing something. I was thinking of an ENV variable change through kubectl patch to kick off the update in GKE.</p> <p>The context is we have containers that don't do garbage collection, and the temporary fix and easiest path forward is cycling out pods frequently, which we can schedule nightly.</p> <p>Our naive approach would be to schedule this through our build pipeline, but it seems like there are a lot of moving parts.</p> <p>I haven't looked at Cloud Functions, but I'm sure there's an API that could do this and I'm leaning towards automating this with Cloud Functions.</p> <p>Or is there already a GKE solution to do this?</p> <p>Any other approaches to this problem?</p> <p>I know AWS's EC2 has this feature for ASGs; I was looking for the same thing so I don't have to do a hacky ENV var change on the manifest.</p>
edmamerto
<p>I can think of two possibilities:</p> <ul> <li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs" rel="nofollow noreferrer">Cronjobs</a>. You can use CronJobs to run tasks at a specific time or interval. Suggested for automatic tasks, such as backups, reporting, sending emails, or cleanup tasks. More details <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">here</a>.</li> <li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/automated-deployment" rel="nofollow noreferrer">CI/CD with CloudBuild</a>. When you push a change to your repository, Cloud Build automatically builds and deploys the container to your GKE cluster.</li> </ul>
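<p>For the CronJob option specifically, a common pattern for "cycle the pods nightly" is a job that simply runs a rolling restart against the deployment. A rough sketch (the schedule, deployment name, image and the ServiceAccount are placeholders; the ServiceAccount needs RBAC permissions to patch deployments, which is not shown here):</p> <pre><code>apiVersion: batch/v1beta1          # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: nightly-restart
spec:
  schedule: &quot;0 3 * * *&quot;            # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: restart-sa     # placeholder; needs patch rights on deployments
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest     # any image that contains kubectl
            command:
            - kubectl
            - rollout
            - restart
            - deployment/my-app               # placeholder deployment name
</code></pre> <p><code>kubectl rollout restart</code> triggers an ordinary rolling update, so it cycles the pods without changing the manifest or faking an ENV variable change.</p>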
Judith Guzman
<p>I am trying to track down some bottlenecks in a GKE kubernetes cluster. I am trying to rule out resource (cpu and memory) limits. However, I am having trouble because there are resource limits set on the containers in the deployment yaml files, but there are also resource quotas set as defaults in the cluster that presumably don't apply when they are specified in the container definitions. But then I have also read that you can have pod resource limits and even limits on specific namespaces.</p> <p>Where is the authoritative listing of what limits are being applied to a container? If I kubectl Describe the Node that has the pods, should the limits in the details be the actual limits?</p>
Kyle
<p>I see your confusion, but reality is simpler. First of all, resource requests and limits are applied at the container level, not the pod level. Pod resources are defined as a simple sum of their container resources.</p> <ol> <li><a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Namespace quotas</a>. They influence the namespace as a whole and do not influence pods themselves. The only thing a namespace quota does is ensure that the sum of all pod resources in a given namespace does not exceed the configured limits. If you have a reasonably high namespace quota, you may forget that it even exists. But you should remember that once you define a namespace quota, every single container in the namespace must have resources specified. </li> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/#create-a-limitrange-and-a-pod" rel="nofollow noreferrer">LimitRange</a>. LimitRange works at the namespace level. You can configure default resource requests and limits both for pods and containers. Another thing it does is specify minimum and maximum resources for pods and containers in the namespace.</li> <li>Last but most important are the actual pod resources. I guess this thing is more or less self-explanatory in this context.</li> </ol> <p>Now, when you run "kubectl describe" on a node or pod, the <strong>actual</strong> pod requests and limits are displayed. Actual means those resources that matter. It does not matter where those resources came from - from the deployment/pod spec or from a LimitRange. What you see is what you get here, regardless of whether you specified any quotas or LimitRange or not.</p> <p>Does that answer your question?</p>
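<p>To illustrate point 2, a LimitRange that sets per-container defaults in a namespace looks roughly like this (the values and the namespace name are arbitrary examples):</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: my-namespace       # placeholder
spec:
  limits:
  - type: Container
    defaultRequest:             # used when a container specifies no requests
      cpu: 100m
      memory: 128Mi
    default:                    # used when a container specifies no limits
      cpu: 500m
      memory: 512Mi
</code></pre> <p>Whatever ends up applied, from these defaults or from the pod spec, is exactly what "kubectl describe" shows.</p>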
Vasili Angapov
<p>I have this section in my service template. If the variable publicIP is defined, I want to define loadBalancerIP.</p> <pre><code> {{- if .Values.publicIP }} loadBalancerIP: {{ .Values.publicIP }} {{- end }} </code></pre> <p>In my values.$environment.yml I define:</p> <p><code>publicIP: null</code></p> <p>And I need to set publicIP when I do helm install, for example:</p> <pre><code>helm install release release/path -f values.$environment.yml -—set publicIP=127.0.0.1 </code></pre> <p>But it isn't working. What can I do so that publicIP is set and used in my template?</p>
Win32Sector
<p>I am not sure I understand the question, but if you want to set an individual parameter you need to do it with</p> <blockquote> <p>&quot;--set&quot;</p> </blockquote> <p>eg : <code>helm install --set foo=bar ./mychart</code></p> <hr /> <p>What is the name of your values.yml? values.$environment.yml? Also try a command like this:</p> <pre><code>helm install release release/path --values {YOUR_VALUES_FILE}.yml --set publicIP=127.0.0.1 </code></pre> <p>and be sure to use a plain <code>--</code>; in the example you provided you seem to be using a weird special character that is much longer than a normal dash.</p> <p>And second, is <strong>publicIP</strong> a root parameter? If you have something nested like this in {YOUR_VALUES_FILE}.yml</p> <pre><code>root:
  node: null
</code></pre> <p>you need to set it like this:</p> <pre><code>helm install release release/path --values {YOUR_VALUES_FILE}.yml --set root.node=value </code></pre>
Opri
<p>I want to set up an Istio ingressgateway on a single node K8s cluster (taint nodes --all) hosted on private VMs for Dev purposes. Since I don't have a Load balancer, the istio-ingressgateway external IP is always in "Pending" state (which is normal). In this configuration I need to use, for example, ports 31380/31390 instead of 80/443.</p> <p>What's the best practice to bypass this behavior? Can I patch the external IP of the istio-ingressgateway? Initialize the ingressgateway with a different type (NodePort)? Redirect the traffic with a local LB or another Ingress controller?</p> <p>Thanks in advance for your feedback. A.</p>
alxsbn
<p>You may add externalIPs to your Service definition, e.g. add the nodes' IP addresses as externalIPs. Then once you hit node1_IP:443 it will forward you to the IngressGateway.</p> <p>Like this:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: my-service spec: selector: app: MyApp ports: - name: http protocol: TCP port: 80 targetPort: 9376 externalIPs: - node1_IP - node2_IP - node3_IP </code></pre> <p>Read more here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#external-ips</a></p> <p>Alternatively you can configure the IngressGateway pod to use hostNetwork. In that case it can also use ports 80 and 443, but only via the IP of the node it is running on.</p>
Vasili Angapov
<p>When the JVM crashes with <code>OutOfMemoryError</code> there are some options to store a dump:</p> <pre><code>-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./java_pid&lt;pid&gt;.hprof </code></pre> <p>but what happens in a kubernetes cluster after the crash? It seems kubernetes will clean up everything related to the crashed pod. How can I save the dump?</p>
J.J. Beam
<p>Just mount a volume at the dump path, for example an <code>emptyDir</code>:</p> <pre><code>volumeMounts: - name: heap-dumps mountPath: /dumps volumes: - name: heap-dumps emptyDir: {} </code></pre> <p><a href="https://danlebrero.com/2018/11/20/how-to-do-java-jvm-heapdump-in-kubernetes/" rel="nofollow noreferrer">How to do a java heap dump in K8s</a></p>
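<p>You also need to point the JVM at that mount path. One way - a sketch, adjust to however your image passes JVM flags - is the <code>JAVA_TOOL_OPTIONS</code> environment variable:</p> <pre><code>env:
- name: JAVA_TOOL_OPTIONS
  value: &quot;-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps&quot;
</code></pre> <p>Keep in mind that an <code>emptyDir</code> survives container restarts but is removed together with the pod, so copy the dump off (e.g. with <code>kubectl cp</code>) or use a persistent volume if the dump must outlive the pod.</p>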
Opri
<p>A cronjob we scheduled at night was started correctly but the image itself ran much later than scheduled. We suspect that there was a problem either pulling the image or requesting the resources from the cluster. Usually I can see such errors in the events section of the <code>kubectl describe job-name</code> output. In this section I can see events such as <code>pull image</code> <code>create container</code> etc. But after the job is finished, no events are shown anymore. </p> <p>Is it possible to see those events for a finished job?</p> <p>Or is there another way to investigate such a problem?</p>
mjspier
<p>The problem with storing events is wider than just cronjobs. Events in Kubernetes by default are stored only for 1 hour (--event-ttl flag for kube-apiserver). That means that if your cronjob was run two hours ago - you will not see events in "kubectl describe". </p> <p>In order to save events for later investigations you need to export them somewhere. For example, Google Kubernetes Engine stores events into Stackdriver. For vanilla Kubernetes you may store events in Prometheus using <a href="https://github.com/caicloud/event_exporter" rel="noreferrer">event_exporter</a> or to <a href="https://github.com/alauda/event-exporter" rel="noreferrer">Elasticsearch</a>. Does that answer your question?</p>
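<p>Within that one-hour window you can still pull the events directly, even after the job has finished - for example (the pod name is a placeholder):</p> <pre><code>kubectl get events --field-selector involvedObject.name=&lt;pod-name&gt; --sort-by=.lastTimestamp </code></pre>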
Vasili Angapov
<p>I want to set environment variables via Kubernetes for server.xml in tomcat. Here is my deployment.yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: tomcat-test-pod ... ... env: - name: hostName value: 'test.com' - name: localhost value: 'localhost.com' </code></pre> <p>And here is my server.xml:</p> <pre><code>&lt;?xml version='1.0' encoding='utf-8'?&gt; &lt;Resource auth=&quot;Container&quot; description=&quot;Global E-Mail Resource&quot; mail.debug=&quot;false&quot; mail.smtp.auth=&quot;false&quot; mail.smtp.ehlo=&quot;true&quot; mail.smtp.host=&quot;${hostName}&quot; mail.smtp.localhost=&quot;${localhost}&quot; mail.smtp.port=&quot;25&quot; mail.smtp.sendpartial=&quot;true&quot; mail.transport.protocol=&quot;smtp&quot; name=&quot;mail/Session&quot; type=&quot;javax.mail.Session&quot;/&gt; </code></pre> <p>From <a href="https://tomcat.apache.org/tomcat-9.0-doc/config/systemprops.html" rel="noreferrer">https://tomcat.apache.org/tomcat-9.0-doc/config/systemprops.html</a>, it says that I need to set <code>org.apache.tomcat.util.digester. PROPERTY_SOURCE</code> to <code>org.apache.tomcat.util.digester.EnvironmentPropertySource</code>, but I am not sure what I am supposed to do. Do I need to set it in setenv.sh or do I need to create another class? Any help will be appreciated..</p>
Jonathan Hagen
<p><code>org.apache.tomcat.util.digester.PROPERTY_SOURCE</code> is a Java system property so you can set it where system properties are accepted:</p> <ul> <li>you can add it to the command line options, e.g. by adding to <code>setenv.sh</code>:</li> </ul> <pre class="lang-sh prettyprint-override"><code>CATALINA_OPTS=&quot;$CATALINA_OPTS -Dorg.apache.tomcat.util.digester.PROPERTY_SOURCE=org.apache.tomcat.util.digester.EnvironmentPropertySource&quot;
</code></pre> <p>This will work only if you call <code>catalina.sh/startup.sh</code> to start Tomcat (directly or indirectly). For example it will not work on Windows, when starting Tomcat as a service.</p> <ul> <li>add the system property to <code>catalina.properties</code>:</li> </ul> <pre><code>org.apache.tomcat.util.digester.PROPERTY_SOURCE=org.apache.tomcat.util.digester.EnvironmentPropertySource </code></pre> <p>This always works.</p>
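<p>Since you are running in Kubernetes, you can also inject that flag through the pod spec - a sketch, assuming your image starts Tomcat via <code>catalina.sh</code> and therefore picks up <code>CATALINA_OPTS</code> from the environment:</p> <pre><code>env:
- name: CATALINA_OPTS
  value: &quot;-Dorg.apache.tomcat.util.digester.PROPERTY_SOURCE=org.apache.tomcat.util.digester.EnvironmentPropertySource&quot;
- name: hostName
  value: &quot;test.com&quot;
- name: localhost
  value: &quot;localhost.com&quot;
</code></pre>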
Piotr P. Karwasz
<p>When scheduling a kubernetes Job and Pod, if the Pod can't be placed the explanation available from <code>kubectl describe pods PODNAME</code> looks like:</p> <pre><code>Warning FailedScheduling &lt;unknown&gt; default-scheduler 0/172 nodes are available: 1 Insufficient pods, 1 node(s) were unschedulable, 11 Insufficient memory, 30 Insufficient cpu, 32 node(s) didn't match node selector, 97 Insufficient nvidia.com/gpu. </code></pre> <p>That's useful but a little too vague. I'd like more detail than that. </p> <blockquote> <p>Specifically can I list all nodes with the reason the pod wasn't scheduled to each particular node? </p> </blockquote> <p>I was recently changing labels and the node selector and want to determine if I made a mistake somewhere in that process or if the nodes I need really are just busy.</p>
David Parks
<p>You can find more details related to problems with scheduling a particular <code>Pod</code> in the <code>kube-scheduler</code> logs. If you set up your cluster with the <strong>kubeadm</strong> tool, <code>kube-scheduler</code>, as well as other key components of the cluster, is deployed as a system <code>Pod</code>. You can list such <code>Pods</code> with the following command:</p> <pre><code>kubectl get pods -n kube-system </code></pre> <p>which will show you, among others, your <code>kube-scheduler</code> <code>Pod</code>:</p> <pre><code>NAME READY STATUS RESTARTS AGE kube-scheduler-master-ubuntu-18-04 1/1 Running 0 2m37s </code></pre> <p>Then you can check its logs. In my example the command will look as follows:</p> <pre><code>kubectl logs kube-scheduler-master-ubuntu-18-04 -n kube-system </code></pre> <p>You should find the information you need there.</p> <hr> <p>One more thing...</p> <p>If you've already verified it, just ignore this tip.</p> <p>Let's start from the beginning...</p> <p>I've just created a simple job from the example you can find <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">here</a>:</p> <pre><code>kubectl apply -f https://k8s.io/examples/controllers/job.yaml job.batch/pi created </code></pre> <p>If I run:</p> <pre><code>kubectl get jobs </code></pre> <p>it shows me:</p> <pre><code>NAME COMPLETIONS DURATION AGE pi 0/1 17m 17m </code></pre> <p>Hmm... completions 0/1? Something definitely went wrong. Let's check it.</p> <pre><code>kubectl describe job pi </code></pre> <p>tells me basically nothing. In its events I can see only:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 18m job-controller Created pod: pi-zxp4p </code></pre> <p>as if everything went well... but we already know it didn't. So let's investigate further. As you probably know, the <code>job-controller</code> creates <code>Pods</code> that run to completion to perform a certain task. From the perspective of the <code>job-controller</code> everything went well (we've just seen it in its events):</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 23m job-controller Created pod: pi-zxp4p </code></pre> <p>It did its part of the task and reported that everything went fine. But it's just part of the whole task. It passed the actual <code>Pod</code> creation task further to the <code>kube-scheduler</code> controller, as being just a <code>job-controller</code> it isn't responsible (and doesn't even have enough privileges) to schedule the actual <code>Pod</code> on a particular node. If we run:</p> <pre><code>kubectl get pods </code></pre> <p>we can see one <code>Pod</code> in a <code>Pending</code> state:</p> <pre><code>NAME READY STATUS RESTARTS AGE pi-zxp4p 0/1 Pending 0 30m </code></pre> <p>Let's describe it:</p> <pre><code>kubectl describe pod pi-zxp4p </code></pre> <p>In the events we can see some very important and specific info:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 20s (x24 over 33m) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
</code></pre> <p>So now we know the actual reason why our Pod couldn't be scheduled.</p> <p>Pay attention to the different fields of the event:</p> <p><code>From</code>: <code>default-scheduler</code> - it means that the message originated from our <code>kube-scheduler</code>.</p> <p><code>Type</code>: <code>Warning</code>, which isn't as important as <code>Critical</code> or <code>Error</code>, so chances are that it may not appear in the <code>kube-scheduler</code> logs if the latter was started with the default level of log verbosity.</p> <p>You can read <a href="https://github.com/ravigadde/kube-scheduler/blob/master/docs/devel/logging.md" rel="nofollow noreferrer">here</a> that:</p> <blockquote> <p>As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in -v=X where X is the desired maximum level to log.</p> </blockquote>
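<p>On a kubeadm cluster, raising the scheduler's verbosity is a matter of editing the static pod manifest on the control-plane node - a sketch, assuming the default kubeadm manifest location; the kubelet restarts the scheduler automatically after the file changes:</p> <pre><code># on the control-plane node
sudo vi /etc/kubernetes/manifests/kube-scheduler.yaml

# add the flag to the container command, e.g.:
#   - --v=4
</code></pre>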
mario
<p>I have the following:</p> <p>2 pod replicas, load balanced. Each replica has 2 containers sharing a network.</p> <p>What I am looking for is a shared volume...</p> <p>I am looking for a solution where the 2 pods and each of the containers in the pods can share a directory with read+write access. So if one container from pod 1 writes to it, containers from pod 2 will be able to access the new data.</p> <p>Is this achievable with persistent volumes and PVCs? If so, what do I need, and what are pointers to more details around what FS would work best, static vs dynamic, and storage class?</p> <p>Can the volume be an S3 bucket?</p> <p>Thank you!</p>
Assaf Moldavsky
<p>There are several options depending on the price and effort needed:</p> <ol> <li>The simplest but somewhat more expensive solution is to use <a href="https://aws.amazon.com/efs/" rel="nofollow noreferrer">EFS</a> + <a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="nofollow noreferrer">NFS Persistent Volumes</a>. However, EFS has serious throughput limitations, read <a href="https://docs.aws.amazon.com/efs/latest/ug/performance.html" rel="nofollow noreferrer">here</a> for details.</li> <li>You can create a pod with an NFS server inside and again mount NFS Persistent Volumes into pods. See an example <a href="https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266" rel="nofollow noreferrer">here</a>. This requires more manual work and is not completely highly available. If the NFS-server pod fails, you will observe some (hopefully) short downtime before it gets recreated.</li> <li>For an HA configuration you can provision <a href="https://github.com/gluster/gluster-kubernetes" rel="nofollow noreferrer">GlusterFS</a> on Kubernetes. This requires the most effort but allows for great flexibility and speed.</li> <li>Although mounting S3 into pods is somewhat possible using ugly hacks, this solution has numerous drawbacks and overall is not production grade. You can do that for testing purposes.</li> </ol>
Vasili Angapov
<p>While working with minikube ingress, I have to write <code>nginx.ingress.kubernetes.io/rewrite-target: /$1</code>. I have been trying hard to understand why we need this annotation and how to use it.</p> <p>I know that the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="noreferrer">doc</a> says the following:</p> <blockquote> <p>In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.</p> </blockquote> <p>But I am not able to get the exact point of what exactly <code>the exposed URL in the backend service differs from the specified path in the Ingress rule</code> means. I am not able to get the idea clearly.</p> <p>Further, upon trying to execute ingress file with services:</p> <p>code 1:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress namespace: example-namespace annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: myexample.com http: paths: - path: / pathType: Prefix backend: service: name: example-service port: number: 80 </code></pre> <p>code 2:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress namespace: example-namespace annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: myexample.com http: paths: - path: / pathType: Prefix backend: service: name: example-service port: number: 80 </code></pre> <p>code 3:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress namespace: example-namespace annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: myexample.com http: paths: - path: /index pathType: Prefix backend: service: name: example-service port: number: 80 </code></pre> <p>what exactly is the difference between each pair of the above 3 code snippets with respect to rewrite-target and path mentioned above?</p> <p>PS: I'm new to minikube and trying to figure out the exact way things work. Please help.</p>
vagdevi k
<p>I don't know if things change with new versions of the Ingress resource or with new versions of the Nginx Ingress Controller, but this is how I think it works.</p> <p>Suppose I want to serve 2 different web applications under the same domain, with an Ingress.</p> <ul> <li>App A is expecting requests under <code>/</code></li> <li>App B is expecting requests under <code>/</code></li> </ul> <p>So, both Apps are expecting their requests directly under root, which (at first glance) seems to make them impossible to be served at the same domain.</p> <p>Except, with rewrite targets, I can. I could just serve them under a different path, but rewrite the target to <code>/</code></p> <ul> <li>Serve App A under <code>myexample.com/aaa/</code></li> <li>Serve App B under <code>myexample.com/bbb/</code></li> </ul> <p>And add a rewrite target to remove the first part of the path. This is just an example of what a rewrite target can be used for; it simply enables you to serve an application under a different path from the one expected by the app itself.</p> <p>Example of the ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress namespace: example-namespace annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: rules: - host: myexample.com http: paths: - path: /aaa(/|$)(.*) pathType: Prefix backend: service: name: app-a port: number: 80 - path: /bbb(/|$)(.*) pathType: Prefix backend: service: name: app-b port: number: 80 </code></pre> <p><strong>Notice that</strong>, while this works pretty well on Rest APIs and similar things, it may work less well on web pages, because a web page could try to load resources at a different path (for example if it does not use relative paths). This is why (usually) frontend applications need to know on which path they are being served under a domain.</p> <hr /> <p>Regarding the syntax of rewrite targets, I'll take as an example the Ingress I wrote above. There are a couple of things to take into consideration:</p> <ul> <li>path</li> <li>pathType</li> <li>rewrite-target</li> </ul> <p>Let's start with <code>path</code> and <code>pathType</code> interactions. With the path I can define where to serve my services. Depending on the <code>pathType</code>, it may be just a <code>Prefix</code> of the whole path, the <code>Exact</code> path, or it can depend on the Ingress Controller (aka <code>ImplementationSpecific</code>). Everything is nicely explained in the documentation with a long table of examples ( <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#examples" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#examples</a> )</p> <p>I can do almost everything just with path and pathType, except if the applications I want to serve are expecting to be served at different paths from the ones specified in the Ingress; this is when <code>rewrite-target</code> comes into play.</p> <p>Like in the example above, I can use <code>rewrite-target</code> to serve an application under a different path from the one expected, composing the url as I want. I can also use regex and capture groups (this is what <code>$1</code>, <code>$2</code> and so on are)</p> <p>For example, if I write <code>path: /bbb(/|$)(.*)</code> I mean that this path will match everything under /bbb, with or without a / after bbb.
And if I then write <code>rewrite-target: /$2</code>, I mean that requests will be rewritten to substitute <code>/bbb</code> with <code>/</code> followed by the second capture group (that is, the second regex expression, <code>(.*)</code>).</p> <p>The documentation explains it pretty well, even if it still makes use of the old Ingress resource ( <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a> )</p>
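<p>To make the capture groups concrete, with the example Ingress above a few (made up) requests would be rewritten like this:</p> <pre><code>myexample.com/aaa          -&gt;  app-a receives /
myexample.com/aaa/login    -&gt;  app-a receives /login
myexample.com/bbb/         -&gt;  app-b receives /
myexample.com/bbb/users/1  -&gt;  app-b receives /users/1
</code></pre>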
AndD
<p>What is the recommended approach to store the logs of applications deployed on Kubernetes? I read about the ELK stack, but I'm not sure about the pros and cons. Any recommendations?</p>
d2602766
<p>If you ask specifically about storing application logs in a <strong>kubernetes cluster</strong>, there are a few different approaches. First, I would recommend familiarizing yourself with <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">this article</a> in the official <strong>kubernetes documentation</strong>.</p>
mario
<p>I'm fairly new to k8s, so forgive me if I misuse k8s terminology. I'm hoping someone could point me in the right direction and advise the best way to do this.</p> <p>I have a k8s cluster running on a group of raspberry pis. I want to add a database volume accessible by all workers. I plan to use a usb external drive to store the database content.</p> <p>Do I want to mount the external drive to the master node?</p> <p>How is the external drive declared as a k8s resource?</p> <p>Once configured, how is this external drive accessible by pods in other k8s nodes?</p> <p>After reading through k8s Volumes page, it sounds like I might be looking for a Volume of "local" type. If I mount a local volume to the master node, will I be able to run a postgres container in a worker node and access the volume mounted on the master node?</p>
user9270368
<p>The easiest thing is to <a href="https://www.htpcguides.com/configure-nfs-server-and-nfs-client-raspberry-pi/" rel="nofollow noreferrer">set up an NFS server</a> on the master node, export your USB drive over NFS and then mount it as a Persistent Volume in the pods. For this you first need to create a PersistentVolume:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: nfs spec: capacity: storage: 5Gi accessModes: - ReadWriteMany nfs: server: master-node-ip path: /mnt/nfsserver </code></pre> <p>And then create a PersistentVolumeClaim of the same size:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs spec: accessModes: - ReadWriteMany storageClassName: "" resources: requests: storage: 5Gi </code></pre> <p>After that you can mount this PVC in all the pods that need it:</p> <pre><code> volumeMounts: - name: nfs mountPath: "/usr/share/nginx/html" volumes: - name: nfs persistentVolumeClaim: claimName: nfs </code></pre>
Vasili Angapov
<p>I'm currently writing a kubernetes operator in go using the <code>operator-sdk</code>. This operator creates two <code>StatefulSet</code> and two <code>Service</code> objects, with some business logic around them.</p> <p>I'm wondering what the CRD status is about? In my reconcile method I use the default client (i.e. <code>r.List(ctx, &amp;setList, opts...)</code>) to fetch data from the cluster; shall I instead store data in the status to use it later? If so, how reliable is this status? I mean, is it persisted? If the control plane dies, is it still available? What about disaster recovery - what if the persisted data disappears? Doesn't that case invalidate the use of the CRD status?</p>
JesusTheHun
<p>The status subresource of a CRD can be considered to have the same objective as that of non-custom resources. While the <strong>spec</strong> defines the desired state of that particular resource - basically I declare what I want - the <strong>status</strong> instead explains the current situation of the resource I declared on the cluster and should help in understanding what is different between the desired state and the actual state.</p> <p>Like a <strong>StatefulSet</strong> <strong>spec</strong> could say I want 3 replicas and its <strong>status</strong> say that right now only 1 of those replicas is ready and the next one is still starting, a custom resource <strong>status</strong> may tell me what is the current situation of whatever I declared in the specs.</p> <p>For example, using the Rook Operator, I could declare I want a CephCluster made in a certain way. Since a CephCluster is a pretty complex thing (made of several StatefulSets, Daemons and more), the status of the custom resource will tell me the current situation of the whole ceph cluster, whether its health is ok or something requires my attention, and so on.</p> <p>From my understanding of the Kubernetes API, you shouldn't rely on the status subresource to decide what your operator should do regarding a CRD, as it is way better and less prone to errors to always check the current situation of the cluster (at operator start or when a resource is defined, updated or deleted)</p> <p>Last, let me quote from the Kubernetes API conventions as they explain the convention pretty well ( <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status</a> )</p> <blockquote> <p>By convention, the Kubernetes API makes a distinction between the specification of the desired state of an object (a nested object field called &quot;<strong>spec</strong>&quot;) and the status of the object at the current time (a nested object field called &quot;<strong>status</strong>&quot;).</p> <p>The specification is a complete description of the desired state, including configuration settings provided by the user, default values expanded by the system, and properties initialized or otherwise changed after creation by other ecosystem components (e.g., schedulers, auto-scalers), and is persisted in stable storage with the API object. If the specification is deleted, the object will be purged from the system.</p> <p>The status summarizes the current state of the object in the system, and is usually persisted with the object by automated processes but may be generated on the fly. At some cost and perhaps some temporary degradation in behavior, the status could be reconstructed by observation if it were lost.</p> <p>When a new version of an object is POSTed or PUT, the &quot;<strong>spec</strong>&quot; is updated and available immediately. Over time the system will work to bring the &quot;<strong>status</strong>&quot; into line with the &quot;<strong>spec</strong>&quot;. The system will drive toward the most recent &quot;<strong>spec</strong>&quot; regardless of previous versions of that stanza. In other words, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the &quot;<strong>status</strong>&quot; to 3.
In other words, the system's behavior is level-based rather than edge-based. This enables robust behavior in the presence of missed intermediate state changes.</p> </blockquote>
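<p>As a side note, for the status to be writable independently of the spec, the CRD needs the status subresource enabled. A minimal sketch of that part of a CRD follows - names and schema are placeholders, and with operator-sdk/kubebuilder this section is normally generated from the <code>+kubebuilder:subresource:status</code> marker rather than written by hand:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myresources.example.com
spec:
  group: example.com
  names:
    kind: MyResource
    plural: myresources
    singular: myresource
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}   # allows updates through the /status endpoint
</code></pre>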
AndD
<p><strong>Problem: For some reason, helm release of <code>kube-prometheus-stack</code> is stuck in <code>Pending-install</code> status. What is the correct to install a helm release for this using <code>helm cli</code>?</strong></p> <p><strong>Details:</strong></p> <p>Due to Docker registry <code>k8s.gcr.io</code> getting frozen, I had to update the Docker image registry to <code>registry.k8s.io</code> for <code>kube-state-metrics</code> by updating the <code>values.yaml</code> as follows:</p> <pre class="lang-yaml prettyprint-override"><code>kube-state-metrics: prometheusScrape: true image: repository: registry.k8s.io/kube-state-metrics/kube-state-metrics tag: v1.9.8 pullPolicy: Always namespaceOverride: &quot;&quot; rbac: create: true podSecurityPolicy: enabled: true </code></pre> <p>After that, when I tried update the helm release for <code>kube-prometheus-stack</code> using same version of <code>14.9.0</code>, it failed with status <code>Failed</code> for helm release. Upon retrying, it deleted the previous helm release and created a new one. All the components by the new one created successfully but the helm release got stuck in the <code>Pending-install</code> status.</p> <p>I waited for almost 30 minutes but no success. I also tried deleting helm release, rollbacking helm release, deleting helm release secret but got no success.</p> <p>What could be the issue? How can I solve it?</p>
Abdullah Khawer
<p><strong>Solution:</strong> After some investigation, I found that there was a job named <code>kube-prometheus-stack-admission-patch</code> which was failing with <code>BackoffLimitExceeded</code> error. It was some kind of an initializing job. Deleting the job (not pod) fixed the issue and the helm release changed its status to <code>Deployed</code>.</p> <p><strong>Error Log in <code>kube-prometheus-stack-admission-patch</code> job:</strong></p> <pre><code>W0331 10:58:03.079451 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. {&quot;level&quot;:&quot;info&quot;,&quot;msg&quot;:&quot;patching webhook configurations 'kube-prometheus-stack-admission' mutating=true, validating=true, failurePolicy=Fail&quot;,&quot;source&quot;:&quot;k8s/k8s.go:39&quot;,&quot;time&quot;:&quot;2023-03-31T10:58:03Z&quot;} {&quot;err&quot;:&quot;the server could not find the requested resource&quot;,&quot;level&quot;:&quot;fatal&quot;,&quot;msg&quot;:&quot;failed getting validating webhook&quot;,&quot;source&quot;:&quot;k8s/k8s.go:48&quot;,&quot;time&quot;:&quot;2023-03-31T10:58:03Z&quot;} </code></pre>
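<p>For reference, finding and removing such a stuck job can be done roughly like this (the <code>monitoring</code> namespace is just an assumption - use whatever namespace the chart was installed into):</p> <pre><code>kubectl get jobs -n monitoring
kubectl logs job/kube-prometheus-stack-admission-patch -n monitoring
kubectl delete job kube-prometheus-stack-admission-patch -n monitoring
</code></pre>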
Abdullah Khawer
<p>I'm learning Kubernetes, and trying to setup a cluster that could handle a single Wordpress site with high traffic. From reading multiple examples online from both Google Cloud and Kubernetes.io - they all set the "accessMode" - "readWriteOnce" when creating the PVCs. </p> <p>Does this mean if I scaled the Wordpress Deployment to use multiple replicas, they all use the same single PVC to store persistent data - read/write data. (Just like they use the single DB instance?) </p> <p>The google example here only uses a single-replica, single-db instance - <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk</a></p> <p>My question is how do you handle persistent storage on a multiple-replica instance?</p>
user2006185
<p>ReadWriteOnce means all replicas will use the same volume and therefore they will all run on one node. This can be suboptimal.</p> <p>You can set up a ReadWriteMany storage class (NFS, GlusterFS, CephFS and others) that will allow multiple nodes to mount one volume.</p> <p>Alternatively you can run your application as a StatefulSet with a volumeClaimTemplate, which ensures that each replica will mount its own ReadWriteOnce volume (see the sketch below).</p>
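<p>A rough sketch of the StatefulSet approach - names, image tag and sizes are placeholders; each replica gets its own PVC created from the template:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress
spec:
  serviceName: wordpress
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:5.3-apache
        volumeMounts:
        - name: wp-data
          mountPath: /var/www/html
  volumeClaimTemplates:
  - metadata:
      name: wp-data
    spec:
      accessModes: [&quot;ReadWriteOnce&quot;]
      resources:
        requests:
          storage: 10Gi
</code></pre> <p>Keep in mind that with this approach each replica has an independent copy of the data (uploads written to one replica won't appear on the other), which is why the ReadWriteMany option above is usually preferred for WordPress.</p>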
Vasili Angapov
<p>I have a <code>filebeat</code> configured to send my k8s cluster logs to <code>Elasticsearch</code>.<br /> When I connect to the pod directly (<code>kubectl exec -it &lt;pod&gt; -- sh -c bash</code>),<br /> the generated output logs aren't being sent to the destination.</p> <p>Digging at k8s docs, I couldn't find how k8s is handling STDOUT from a running shell.</p> <p>How can I configure k8s to send live shell logs?</p>
itaied
<p>Kubernetes has (mostly) nothing to do with this, as logging is handled by the container environment used to support Kubernetes, which is usually docker.</p> <p>Depending on the docker version, container logs could be written to json-file, journald or others, with the default being a json file. You can run <code>docker info | grep -i logging</code> to check which Logging Driver is used by docker. If the result is json-file, logs are being written to a file in json format. If there's another value, logs are being handled in another way (and as there are various logging drivers, I suggest checking the documentation about them)</p> <p>If the logs are being written to a file, chances are that by using <code>docker inspect container-id | grep -i logpath</code>, you'll be able to see the path on the node.</p> <p>Filebeat simply harvests the logs from those files and it's docker that handles the redirection between the application STDOUT inside the container and one of those files, with its driver.</p> <p>Regarding exec commands not being in the logs, this is an open proposal ( <a href="https://github.com/moby/moby/issues/8662" rel="nofollow noreferrer">https://github.com/moby/moby/issues/8662</a> ) as not everything is redirected, just logs of the apps started by the entrypoint itself.</p> <p>There's a suggested workaround ( <a href="https://github.com/moby/moby/issues/8662#issuecomment-277396232" rel="nofollow noreferrer">https://github.com/moby/moby/issues/8662#issuecomment-277396232</a> )</p> <blockquote> <p>In the mean time you can try this little hack....</p> <p><code>echo hello &gt; /proc/1/fd/1</code></p> <p>Redirect your output into PID 1's (the docker container) file descriptor for STDOUT</p> </blockquote> <p>which works just fine but has the problem of requiring a manual redirect.</p>
AndD
<p>I'm trying to deploy a model on GKE with tensorflow model serving using a GPU. I created a container with docker and it works great on a cloud VM. I'm trying to scale using GKE but the deployment exits with the above error.</p> <p>I created the GKE cluster with only 1 node, with a GPU (Tesla T4). I installed the drivers according to the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers" rel="nofollow noreferrer">docs</a></p> <p>It seems to be successful as far as I understand (a pod named <code>nvidia-driver-installer-tckv4</code> was added to the pods list in the node, and it's running without errors)</p> <p>Next I created the deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: reph-deployment spec: replicas: 1 template: metadata: labels: app: reph spec: containers: - name: reph-container image: gcr.io/&lt;project-id&gt;/reph_serving_gpu resources: limits: nvidia.com/gpu: 1 ports: - containerPort: 8500 args: - "--runtime=nvidia" </code></pre> <p>Then I ran kubectl create -f d1.yaml and the container exited with the above error in the logs.</p> <p>I also tried to switch the OS from cos to ubuntu and run an example from the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#pods_gpus" rel="nofollow noreferrer">docs</a></p> <p>I installed the drivers as above, this time for ubuntu, and applied this yaml taken from the GKE docs (only changed the number of gpus to consume):</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-gpu-pod spec: containers: - name: my-gpu-container image: nvidia/cuda:10.0-runtime-ubuntu18.04 resources: limits: nvidia.com/gpu: 1 </code></pre> <p>This time I'm getting CrashLoopBackOff without anything more in the logs.</p> <p>Any idea what's wrong? I'm a total newcomer to kubernetes and docker so I may be missing something trivial, but I really tried to stick to the GKE docs.</p>
RT36
<p>ok, I think the docs aren't clear enough on this, but it seems that what was missing was including <code>/usr/local/nvidia/lib64</code> in the <code>LD_LIBRARY_PATH</code> environment variable. The following yaml file runs successfully:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: reph-deployment spec: replicas: 1 template: metadata: labels: app: reph spec: containers: - name: reph-container env: - name: LD_LIBRARY_PATH value: "$LD_LIBRARY_PATH:/usr/local/nvidia/lib64" image: gcr.io/&lt;project-id&gt;/reph_serving_gpu imagePullPolicy: IfNotPresent resources: limits: nvidia.com/gpu: 1 requests: nvidia.com/gpu: 1 ports: - containerPort: 8500 args: - "--runtime=nvidia" </code></pre> <p>Here's the relevant part in the GKE <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#cuda" rel="nofollow noreferrer">docs</a> </p>
RT36
<p>I have a <code>pod</code> which embeds an <code>initContainer</code> named <code>initdb</code>. Is there a <code>kubectl</code> command which returns <code>true</code> if <code>initdb</code> has started, or else <code>false</code>? I need it to display logs of <code>initdb</code> in GitHub Actions CI (<code>kubectl log &lt;pod&gt; -c initdb</code> crashes if <code>initdb</code> has not started yet).</p>
Fabrice Jammes
<p>If you have a single init container in the Pod, you could do something like the following:</p> <pre><code>k get pod pod-name --output=&quot;jsonpath={.status.initContainerStatuses[0].ready}&quot; </code></pre> <p>This will return true if the init container is in status Ready, but this only means that the init container is ready; it could be already terminated (because it completed its execution) or still running. (I'm not completely sure, but if an init container is ready, requesting its logs should work without errors.)</p> <p>You can use jsonpath to select specific sections of Pod definitions, exactly for the purpose of automating certain checks.</p> <p>To see the full definition of your Pod, just use:</p> <pre><code>k get pod pod-name -oyaml </code></pre> <p>and maybe select what you are interested in from there. If you want to wait until the init container is terminated or started, you could check its <code>state</code> section, which explains the current state in detail, and basically create a finer check on what you are expecting to see.</p>
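<p>For example, a rough sketch of such a finer check (the field names are from the Pod API; the index selects the first init container):</p> <pre><code># whole state object of the first init container
kubectl get pod pod-name -o jsonpath='{.status.initContainerStatuses[0].state}'

# non-empty output here means the init container is currently running
kubectl get pod pod-name -o jsonpath='{.status.initContainerStatuses[0].state.running}'
</code></pre>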
AndD
<p>I'm trying to deploy RabbitMQ on the Kubernetes cluster and using the initcontainer to copy a file from ConfigMap. However, the file is not copying after POD is in a running state. </p> <p>Initially, I have tried without using an initcontainer, but I was getting an error like "touch: cannot touch '/etc/rabbitmq/rabbitmq.conf': Read-only file system."</p> <pre><code>kind: Deployment metadata: name: broker01 namespace: s2sdocker labels: app: broker01 spec: replicas: 1 selector: matchLabels: app: broker01 template: metadata: name: broker01 labels: app: broker01 spec: initContainers: - name: configmap-copy image: busybox command: ['/bin/sh', '-c', 'cp /etc/rabbitmq/files/definitions.json /etc/rabbitmq/'] volumeMounts: - name: broker01-definitions mountPath: /etc/rabbitmq/files - name: pre-install mountPath: /etc/rabbitmq containers: - name: broker01 image: rabbitmq:3.7.17-management envFrom: - configMapRef: name: broker01-rabbitmqenv-cm ports: volumeMounts: - name: broker01-data mountPath: /var/lib/rabbitmq - name: broker01-log mountPath: /var/log/rabbitmq/log - name: broker01-definitions mountPath: /etc/rabbitmq/files volumes: - name: pre-install emptyDir: {} - name: broker01-data persistentVolumeClaim: claimName: broker01-data-pvc - name: broker01-log persistentVolumeClaim: claimName: broker01-log-pvc - name: broker01-definitions configMap: name: broker01-definitions-cm </code></pre> <p>The file "definitions.json" should be copied to /etc/reabbitmq folder. I have followed "<a href="https://stackoverflow.com/questions/49614034/kubernetes-deployment-read-only-filesystem-error">Kubernetes deployment read-only filesystem error</a>". But issue did not fix.</p>
ratnakar
<p>After making changes in the container's volumeMounts section, I was able to copy the file into the /etc/rabbitmq folder.</p> <p>Please find the modified code below.</p> <pre><code> - name: broker01 image: rabbitmq:3.7.17-management envFrom: - configMapRef: name: broker01-rabbitmqenv-cm ports: volumeMounts: - name: broker01-data mountPath: /var/lib/rabbitmq - name: broker01-log mountPath: /var/log/rabbitmq/log - name: pre-install mountPath: /etc/rabbitmq </code></pre>
ratnakar
<p>What triggers init container to be run?</p> <p>Will editing deployment descriptor (or updating it with helm), for example, changing the image tag, trigger the init container?</p> <p>Will deleting the pod trigger the init container?</p> <p>Will reducing replica set to null and then increasing it trigger the init container?</p> <p>Is it possible to manually trigger init container?</p>
9ilsdx 9rvj 0lo
<blockquote> <p>What triggers init container to be run?</p> </blockquote> <p>Basically <code>initContainers</code> are run every time a <code>Pod</code>, which has such containers in its definition, is created, and the reasons for creating a <code>Pod</code> can be quite different. As you can read in the official <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">documentation</a>, <strong><em>init containers</em></strong> <em>run before app containers in a <code>Pod</code></em> and they <em>always run to completion</em>. <em>If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds.</em> So one of the things that trigger starting an <code>initContainer</code> is, among others, a previous failed attempt to start it.</p> <blockquote> <p>Will editing deployment descriptor (or updating it with helm), for example, changing the image tag, trigger the init container?</p> </blockquote> <p>Yes, basically every change to the <code>Deployment</code> definition that triggers creation/re-creation of the <code>Pods</code> managed by it also triggers their <code>initContainers</code> to be run. It doesn't matter if you manage it by helm or manually. Some slight changes, like adding for example a new set of labels to your <code>Deployment</code>, don't make it re-create its <code>Pods</code>, but changing the container <code>image</code> for sure causes the controller (<code>Deployment</code>, <code>ReplicationController</code> or <code>ReplicaSet</code>) to re-create its <code>Pods</code>.</p> <blockquote> <p>Will deleting the pod trigger the init container?</p> </blockquote> <p>No, deleting a <code>Pod</code> will not trigger the init container. If you delete a <code>Pod</code> which is not managed by any controller, it will be simply gone and no automatic mechanism will care about re-creating it and running its <code>initContainers</code>. If you delete a <code>Pod</code> which is managed by a controller, let's say a <code>replicaSet</code>, it will detect that there are fewer <code>Pods</code> than declared in its yaml definition and it will try to create the missing <code>Pod</code> to match the desired/declared state. So I would like to highlight again that it is not the deletion of the <code>Pod</code> that triggers its <code>initContainers</code> to be run, but <code>Pod</code> creation, no matter whether manual or managed by a controller such as a <code>replicaSet</code>, which of course can be triggered by manual deletion of the <code>Pod</code> managed by such a controller.</p> <blockquote> <p>Will reducing replica set to null and then increasing it trigger the init container?</p> </blockquote> <p>Yes, because when you reduce the number of replicas to 0, you make the controller delete all <code>Pods</code> that fall under its management. When they are re-created, all their startup processes are repeated, including running the <code>initContainers</code> being part of such <code>Pods</code>.</p> <blockquote> <p>Is it possible to manually trigger init container?</p> </blockquote> <p>As @David Maze already stated in his comment <em>The only way to run an init container is by creating a new pod, but both updating a deployment and deleting a deployment-managed pod should trigger that.</em> I would say it depends what you mean by the term <em>manually</em>. If you ask whether it is possible to somehow trigger an <code>initContainer</code> without restarting / re-creating a <code>Pod</code> - no, it is not possible.
Starting <code>initContainers</code> is tightly related to <code>Pod</code> creation, or in other words to its startup process.</p> <p>Btw. everything you're asking in your question is quite easy to test. You have a lot of working examples in the <em>kubernetes official docs</em> that you can use for testing different scenarios, and you can also create a simple <code>initContainer</code> yourself, e.g. using the <code>busybox</code> image whose only task is to <code>sleep</code> for the required number of seconds (see the sketch below). Here you have some useful links from different k8s docs sections related to <em>initContainers</em>:</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">Init Containers</a></p> <p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-init-containers/" rel="noreferrer">Debug Init Containers</a></p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="noreferrer">Configure Pod Initialization</a></p>
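<p>A minimal sketch of such a test Deployment (image and sleep durations are arbitrary):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: init-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: init-test
  template:
    metadata:
      labels:
        app: init-test
    spec:
      initContainers:
      - name: wait-a-bit
        image: busybox
        command: ['sh', '-c', 'echo init started; sleep 30']
      containers:
      - name: main
        image: busybox
        command: ['sh', '-c', 'echo main started; sleep 3600']
</code></pre> <p>You can then, for example, change the image tag, scale the Deployment down to 0 and back up, or delete the Pod, and watch with <code>kubectl get pods -w</code> whether the init container runs again.</p>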
mario
<p>I have been wondering how an internal Kubernetes service distributes the load of requests made from within the cluster to all PODs associated with the service. For example, given the following simple service from <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">K8s docs</a>.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service spec: selector: app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 targetPort: 9376 </code></pre> <p>I understand that the default service type is <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">ClusterIP</a> when the type property is not specified. But I couldn't find any docs clearly stating how all requests for this kind of service are distributed across the selected PODs.</p> <p>The far I've got was to this <a href="https://groups.google.com/g/kubernetes-users/c/lvfyKzUf-Vg" rel="nofollow noreferrer">post</a> where there's a comment from <em>Tim Hockin</em> stating the following</p> <blockquote> <p>The load is random, but the distribution should be approximately equal for non-trivial loads. E.g. when we run tests for 1000 requests you can see it is close to equal.</p> </blockquote> <p>Is this the policy followed by <code>CluterIP</code> services? Can someone give more clarity on this topic?</p>
João Pedro Schmitt
<p>The request load distribution depends on which proxy mode is configured for the kube-proxy in your cluster. Often the chosen configuration is iptables. And according to the <a href="https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-mode-iptables" rel="nofollow noreferrer">documentation</a> on it:</p> <blockquote> <p>In this mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and EndpointSlice objects. For each Service, it installs iptables rules, which capture traffic to the Service's clusterIP and port, and redirect that traffic to one of the Service's backend sets. For each endpoint, it installs iptables rules which select a backend Pod. By default, kube-proxy in iptables mode chooses a backend at random.</p> </blockquote> <p>Usually this configuration is fine, as probability will spread your requests somewhat evenly across pods. But if you need more control over that, you can change the configuration to IPVS mode, where you can use round robin, least connections, and other options. More information on it can be seen <a href="https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-mode-iptables" rel="nofollow noreferrer">here</a>.</p> <p>I hope this helps.</p>
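<p>If you want to check (or change) which mode your cluster uses, on a kubeadm-style cluster the kube-proxy configuration usually lives in a ConfigMap - a sketch, assuming that layout:</p> <pre><code># see the current mode (empty or &quot;iptables&quot; means iptables, &quot;ipvs&quot; means IPVS)
kubectl get configmap kube-proxy -n kube-system -o yaml | grep -w mode

# to switch, edit the ConfigMap and set, for example:
#   mode: &quot;ipvs&quot;
#   ipvs:
#     scheduler: &quot;rr&quot;   # round robin; &quot;lc&quot; would be least connections
# then restart the kube-proxy pods so they pick up the change
</code></pre>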
Marcelo Canaparro