<p>Can I lint only the changed files or a pull request's changes instead of linting all the Kubernetes files every time I make a change in the Kubernetes folder?</p> <p>I was trying this: <a href="https://docs.kubelinter.io/#/?id=using-docker" rel="nofollow noreferrer">https://docs.kubelinter.io/#/?id=using-docker</a></p>
Priyanka Kumari
<p>I'm assuming that you are referring to the <a href="https://github.com/marketplace/actions/kube-linter" rel="nofollow noreferrer"><em>kube-linter</em> GitHub action</a>, because that's the one <a href="https://docs.kubelinter.io/#/?id=kubelinter-github-action" rel="nofollow noreferrer">referenced by the kube-linter documentation</a>.</p> <p>Yes, that action can be given individual files: the <code>directory</code> parameter can be a single file, even though the name doesn't suggest this.</p> <p>See the <a href="https://github.com/marketplace/actions/kube-linter#parameters" rel="nofollow noreferrer">documented parameters</a>:</p> <blockquote> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Parameter name</th> <th>Required?</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>directory</code></td> <td><strong>(required)</strong></td> <td><em><strong>Path of file</strong></em> or directory to scan, absolute or relative to the root of the repo.</td> </tr> </tbody> </table> </div></blockquote> <p>(Bold italics emphasis mine.)</p> <p>The parameter is simply passed to the <code>kube-linter</code> command line; see the <a href="https://github.com/stackrox/kube-linter-action/blob/3e0698d47a525061e50c1380af263c18824c748b/action.yml#L62-L68" rel="nofollow noreferrer">linting step in the <code>action.yml</code> definition file</a>:</p> <blockquote> <pre class="lang-bash prettyprint-override"><code>./kube-linter $CONFIG lint &quot;${{ inputs.directory }}&quot; --format &quot;${{ inputs.format }}&quot; </code></pre> </blockquote> <p><code>$CONFIG</code> is set to <code>--config &lt;filename&gt;</code> if you provided a <code>config</code> parameter.</p> <p>In short, it acts <em>exactly</em> like <a href="https://docs.kubelinter.io/#/using-kubelinter?id=running-locally" rel="nofollow noreferrer">running the tool locally</a>, which explicitly states that it can take either an individual file or a directory:</p> <blockquote> <ul> <li><p>The path to your Kubernetes <code>yaml</code> file:</p> <pre><code>kube-linter lint /path/to/yaml-file.yaml </code></pre> </li> <li><p>The path to a directory containing your Kubernetes <code>yaml</code> files:</p> <pre><code>kube-linter lint /path/to/directory/containing/yaml-files/ </code></pre> </li> </ul> </blockquote>
Martijn Pieters
<p>I am new to Kubernetes, so apologies in advance for any silly questions and mistakes. I am trying to set up external access through an ingress for ArgoCD. My setup is an AWS EKS cluster. I have set up the ALB controller following the guide <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/v2.2.3/helm/aws-load-balancer-controller" rel="noreferrer">here</a>. I have also set up the external-dns service as described <a href="https://github.com/kubernetes-sigs/external-dns/blob/v0.9.0/docs/tutorials/aws.md" rel="noreferrer">here</a>. I also followed the verification steps in that guide and was able to confirm that the DNS record got created, and I was able to access the foo service.</p> <p>For ArgoCD I installed the manifests via</p> <pre><code>kubectl create namespace argocd kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml -n argocd </code></pre> <p>The ArgoCD docs mention adding a service to split up HTTP and gRPC and an ingress setup <a href="https://argoproj.github.io/argo-cd/operator-manual/ingress/#aws-application-load-balancers-albs-and-classic-elb-http-mode" rel="noreferrer">here</a>. I followed that and installed those as well:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: alb.ingress.kubernetes.io/backend-protocol-version: HTTP2 external-dns.alpha.kubernetes.io/hostname: argocd.&lt;mydomain.com&gt; labels: app: argogrpc name: argogrpc namespace: argocd spec: ports: - name: &quot;443&quot; port: 443 protocol: TCP targetPort: 8080 selector: app.kubernetes.io/name: argocd-server sessionAffinity: None type: ClusterIP </code></pre> <pre><code>apiVersion: networking.k8s.io/v1 # Use extensions/v1beta1 for Kubernetes 1.18 and older kind: Ingress metadata: annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/backend-protocol: HTTPS alb.ingress.kubernetes.io/conditions.argogrpc: | [{&quot;field&quot;:&quot;http-header&quot;,&quot;httpHeaderConfig&quot;:{&quot;httpHeaderName&quot;: &quot;Content-Type&quot;, &quot;values&quot;:[&quot;application/grpc&quot;]}}] alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;:443}]' name: argocd namespace: argocd spec: rules: - host: argocd.&lt;mydomain.com&gt; http: paths: - backend: service: name: argogrpc port: number: 443 pathType: ImplementationSpecific - backend: service: name: argocd-server port: number: 443 pathType: ImplementationSpecific tls: - hosts: - argocd.&lt;mydomain.com&gt; </code></pre> <p>The definitions are applied successfully, but I don't see the DNS record created, nor any external IP listed. Am I missing any steps, or is there a misconfiguration here? Thanks in advance!</p>
Abhishek
<p>The Service type needs to be <code>NodePort</code>. In its default <code>instance</code> target mode the AWS ALB controller registers node ports as targets, so it can't route to a plain <code>ClusterIP</code> service.</p>
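<p>For illustration, a sketch of the <code>argogrpc</code> Service from the question with only the type changed (annotations omitted for brevity; if you instead set <code>alb.ingress.kubernetes.io/target-type: ip</code> on the Ingress, a <code>ClusterIP</code> Service would also work):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: argogrpc
  namespace: argocd
  labels:
    app: argogrpc
spec:
  type: NodePort          # changed from ClusterIP so the ALB can register node targets
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
</code></pre>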
Joey Guerra
<p>How do I make the <code>celery -A app worker</code> command consume only a single task and then exit?</p> <p>I want to run Celery workers as a Kubernetes Job that finishes after handling a single task.</p> <p>I'm using KEDA for autoscaling workers according to queue messages. I want to run Celery workers as Jobs for long-running tasks, as suggested in the documentation: <a href="https://keda.sh/docs/1.5/concepts/scaling-deployments/#long-running-executions" rel="nofollow noreferrer">KEDA long running execution</a></p>
WolfThreeFeet
<p>There's not really anything specific for this. You would have to hack in your own driver program, probably via a custom concurrency module. Are you trying to use Keda ScaledJobs or something? You would just use a ScaledObject instead.</p>
coderanger
<p>I am using this command to deploy the Kubernetes dashboard:</p> <pre><code> wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml kubectl create -f kubernetes-dashboard.yaml </code></pre> <p>and the result is:</p> <pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl -n kube-system get svc kubernetes-dashboard NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard ClusterIP 10.254.19.89 &lt;none&gt; 443/TCP 15s </code></pre> <p>Checking the pods:</p> <pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get pod --namespace=kube-system No resources found. </code></pre> <p>Is there any way to output logs from <code>kubectl create</code>, so I can see the status of the Kubernetes dashboard creation, find out where it is going wrong, and fix it? Right now I hardly know where it is going wrong or what I should do to fix the problem.</p> <pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get all -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kube-dns ClusterIP 10.43.0.10 &lt;none&gt; 53/UDP,53/TCP 102d service/kubernetes-dashboard ClusterIP 10.254.19.89 &lt;none&gt; 443/TCP 22h service/metrics-server ClusterIP 10.43.96.112 &lt;none&gt; 443/TCP 102d NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/kubernetes-dashboard 0/1 0 0 22h NAME DESIRED CURRENT READY AGE replicaset.apps/kubernetes-dashboard-7d75c474bb 1 1 0 9d </code></pre>
Dolphin
<p>Take a look at the file you downloaded. It defines several objects including a <code>Deployment</code> kind. Let's assume that you know that this is the one that does the creating, then you can do:</p> <pre><code>kubectl describe deployment kubernetes-dashboard -n kube-system </code></pre> <p>This will give you a list of events that will give more information about what is happening. A <code>Deployment</code> is responsible for creating <code>Pod</code>s.</p>
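<p>If the Deployment's events don't show the cause, two related commands (using the resource names from the question's own output) usually do:</p> <pre><code># Recent events in the namespace, sorted so the newest are last
kubectl get events -n kube-system --sort-by=.lastTimestamp

# The ReplicaSet the Deployment created; its events often explain why no Pod appears
kubectl describe replicaset kubernetes-dashboard-7d75c474bb -n kube-system
</code></pre>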
Jamie
<p>On step 8 of <em>Deploying the app to GKE</em> in <a href="https://cloud.google.com/python/django/kubernetes-engine" rel="nofollow noreferrer">Running Django on Kubernetes Engine</a>, it asks you to run this command:</p> <pre><code>kubectl create secret generic cloudsql-oauth-credentials --from-file=credentials.json=[PATH_TO_CREDENTIAL_FILE] </code></pre> <p>What is <code>PATH_TO_CREDENTIAL_FILE</code> supposed to be? I'm a bit lost here.</p>
Pablo Fernandez
<p>As it says in the previous line, it's the "location of the key you downloaded when you created your service account".</p>
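<p>For illustration, if the service-account key were saved to your home directory, the command would look something like this (the JSON filename below is a made-up example; use whatever name your downloaded key actually has):</p> <pre><code>kubectl create secret generic cloudsql-oauth-credentials \
    --from-file=credentials.json=/home/you/my-project-credentials.json
</code></pre>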
Daniel Roseman
<p>My goal is to create a <code>StatefulSet</code> in the <code>production</code> namespace and the <code>staging</code> namespace. I am able to create the production StatefulSet however when deploying one to the staging namespace, I receive the error:</p> <pre><code>failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017] </code></pre> <p>The YAML I am using for the staging setup is as so:</p> <p><strong>staging-service.yml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: mongodb-staging namespace: staging labels: app: ethereumdb environment: staging spec: ports: - name: http protocol: TCP port: 27017 targetPort: 27017 clusterIP: None selector: role: mongodb environment: staging </code></pre> <p><strong>staging-statefulset.yml</strong></p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: mongodb-staging namespace: staging labels: app: ethereumdb environment: staging annotations: prometheus.io.scrape: "true" spec: serviceName: "mongodb-staging" replicas: 1 template: metadata: labels: role: mongodb environment: staging spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: role operator: In values: - mongo - key: environment operator: In values: - staging topologyKey: "kubernetes.io/hostname" terminationGracePeriodSeconds: 10 containers: - name: mongo image: mongo command: - mongod - "--replSet" - rs0 - "--smallfiles" - "--noprealloc" - "--bind_ip_all" - "--wiredTigerCacheSizeGB=0.5" ports: - containerPort: 27017 volumeMounts: - name: mongo-persistent-storage mountPath: /data/db - name: mongo-sidecar image: cvallance/mongo-k8s-sidecar env: - name: MONGO_SIDECAR_POD_LABELS value: "role=mongodb,environment=staging" - name: KUBERNETES_MONGO_SERVICE_NAME value: "mongodb-staging" volumeClaimTemplates: - metadata: name: mongo-persistent-storage spec: accessModes: [ "ReadWriteOnce" ] storageClassName: fast-storage resources: requests: storage: 1Gi </code></pre> <p>The <code>production</code> namespace deployment differs only in:</p> <ul> <li><code>--replSet</code> value (<code>rs0</code> instead of <code>rs1</code>)</li> <li>Use of the name 'production' to describe values</li> </ul> <p>Everything else remains identical in both deployments.</p> <p>The only thing I can imagine is that it is not possible to run both these deployments on the port <code>27017</code> despite being in separate namespaces.</p> <p>I am stuck as to what is causing the <code>failed to connect to server</code> error described above.</p> <p><strong>Full log of the error</strong></p> <pre><code>Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017] at Pool.&lt;anonymous&gt; (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:336:35) at Pool.emit (events.js:182:13) at Connection.&lt;anonymous&gt; (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:280:12) at Object.onceWrapper (events.js:273:13) at Connection.emit (events.js:182:13) at Socket.&lt;anonymous&gt; (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:189:49) at Object.onceWrapper (events.js:273:13) at Socket.emit (events.js:182:13) at emitErrorNT (internal/streams/destroy.js:82:8) at emitErrorAndCloseNT (internal/streams/destroy.js:50:3) name: 'MongoError', message: 'failed to connect to server [127.0.0.1:27017] on 
first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]' } </code></pre>
Nick
<p>It seems like the error you are getting is from the mongo-sidecar container in the pod. As for why the mongo container is failing, can you obtain more detailed information? It could be something like a failed PVC.</p>
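<p>A few commands that usually surface that detail; the pod and PVC names below are derived from the StatefulSet in the question (replica <code>-0</code>), so adjust them if yours differ:</p> <pre><code># Events for the pod, including scheduling and volume-attach errors
kubectl describe pod mongodb-staging-0 -n staging

# Logs from the mongo container itself rather than the sidecar
kubectl logs mongodb-staging-0 -c mongo -n staging

# State of the PVC created from the volumeClaimTemplate
kubectl describe pvc mongo-persistent-storage-mongodb-staging-0 -n staging
</code></pre>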
Jamie
<p>Is there a way to disable WAL replay on crash for Prometheus?</p> <p>It takes a while for a pod to come back up due to WAL replay (see the log below). We can afford to lose some metrics if it means faster recovery after a crash.</p> <pre><code>level=info ts=2021-04-22T20:13:42.568Z caller=head.go:714 component=tsdb msg=&quot;WAL segment loaded&quot; segment=449 maxSegment=513 level=info ts=2021-04-22T20:13:57.555Z caller=head.go:714 component=tsdb msg=&quot;WAL segment loaded&quot; segment=450 maxSegment=513 level=info ts=2021-04-22T20:14:12.222Z caller=head.go:714 component=tsdb msg=&quot;WAL segment loaded&quot; segment=451 maxSegment=513 level=info ts=2021-04-22T20:14:25.491Z caller=head.go:714 component=tsdb msg=&quot;WAL segment loaded&quot; segment=452 maxSegment=513 level=info ts=2021-04-22T20:14:39.258Z caller=head.go:714 component=tsdb msg=&quot;WAL segment loaded&quot; segment=453 maxSegment=513 </code></pre>
Steve
<p>Not specifically that I'm aware of. You would have to <code>rm -rf wal/</code> before starting Prom. Usually better to run multiple via Thanos or Cortex than to go down this path.</p>
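<p>If you do decide to go down that path, one way to automate the <code>rm -rf wal/</code> is an init container in the Prometheus pod spec. This is only a sketch and assumes the TSDB lives at <code>/prometheus</code> on a volume named <code>prometheus-data</code>; adjust both to your actual <code>--storage.tsdb.path</code> and volume name:</p> <pre><code>initContainers:
- name: wipe-wal
  image: busybox
  # deletes the write-ahead log before Prometheus starts,
  # trading lost recent samples for faster startup
  command: ["sh", "-c", "rm -rf /prometheus/wal"]
  volumeMounts:
  - name: prometheus-data
    mountPath: /prometheus
</code></pre>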
coderanger
<p>I am new to Kubernetes and I am not really sure how to correctly implement a watch; in particular, I am not sure how to deal with the resourceVersion parameter.</p> <p>The goal is to watch for new pods with a specific label and, in case of an error or disconnection from the cluster, to be able to restart the watch from the last event that occurred.</p> <p>I am doing something like this:</p> <pre class="lang-java prettyprint-override"><code>// after setting up the connection and some parameters String lastResourceVersion = null; // at beginning version is unknown while (true) { try { Watch&lt;V1Pod&gt; watcher = Watch.createWatch( client, api.listNamespacedPodCall(namespace, pretty, fieldSelector, labelSelector, lastResourceVersion, forEver, true, null, null), new TypeToken&lt;Watch.Response&lt;V1Pod&gt;&gt;() {}.getType() ); for (Watch.Response&lt;V1Pod&gt; item : watcher) { //increment the version lastResourceVersion = item.object.getMetadata().getResourceVersion(); // do some stuff with the pod } } catch (ApiException apiException) { log.error(&quot;restarting the watch from &quot;+lastResourceVersion, apiException); } } </code></pre> <p>Is it correct to use the resourceVersion of a Pod to reinitialize the watch call? Is this number a kind of timestamp for all the events in the cluster, or will different APIs use different sequences?</p> <p>Do I need to watch for specific exceptions, e.g. in case the resourceVersion is too old?</p> <p>Thanks</p>
G. Bricconi
<p>Adam is right.</p> <p>This is best explained by <strong><a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="noreferrer">https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes</a></strong></p> <p>Quoting relevant parts (emphasis mine):</p> <blockquote> <p>When retrieving a <strong>collection of resources</strong> (either namespace or cluster scoped), the response from the server will contain a resourceVersion value that can be used to initiate a watch against the server. </p> </blockquote> <p>... snip ...</p> <blockquote> <p>When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code 410 Gone, clearing their local cache, <strong>performing a list operation, and starting the watch from the resourceVersion returned by that new list operation.</strong></p> </blockquote> <p>So before you call watch, you should list and pull the resourceVersion from the list (not the objects inside of it). Then start the watch with that resourceVersion. If the watch fails for some reason, you will have to list again and then use the resourceVersion from that list to re-establish the watch.</p>
krousey
<p>We want to create e2e tests (integration tests) for our applications on k8s and we want to use minikube, but it seems that there is no proper (maintained or official) Dockerfile for minikube; at least I didn't find any. In addition I see <a href="https://k3s.io" rel="nofollow noreferrer">k3s</a> and I'm not sure which is better for running e2e tests on k8s.</p> <p>I found this Dockerfile, but when I build it, it fails with errors:</p> <p><a href="https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/" rel="nofollow noreferrer">https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/</a></p> <p><code>--no-install-recommends</code> error</p> <p>Any idea?</p>
Rayn D
<p>Currently there's no official way to run minikube from within a container. Here's a two-month-old <a href="https://github.com/kubernetes/minikube/issues/3192#issuecomment-496186427" rel="nofollow noreferrer">quote</a> from one of minikube's contributors:</p> <blockquote> <p>It is on the roadmap. For now, it is VM based.</p> </blockquote> <p>If you decide to go with using a VM image containing minikube, there are some guides out there on how to do it. Here's one called "<a href="https://banzaicloud.com/blog/minikube-ci/" rel="nofollow noreferrer">Using Minikube as part of your CI/CD flow</a>".</p> <p>Alternatively, there's a project called <a href="https://microk8s.io/" rel="nofollow noreferrer">MicroK8S</a> backed by Canonical. In a <strong>Kubernetes Podcast <a href="https://kubernetespodcast.com/episode/039-minikube/" rel="nofollow noreferrer">ep. 39</a></strong> from February, <a href="https://github.com/dlorenc" rel="nofollow noreferrer">Dan Lorenc</a> mentions this:</p> <blockquote> <p>MicroK8s is really exciting. That's based on some new features of recent Ubuntu distributions to let you run a Kubernetes environment in an <strong>isolated fashion without using a virtual machine</strong>. So if you happen to be on one of those Ubuntu distributions and can take advantage of those features, then I would definitely recommend MicroK8s.</p> </blockquote> <p>I don't think he's referring to running minikube in a container though, but I am not fully sure: I'd enter an Ubuntu container, try to install microk8s as a package, then see what happens.</p> <p>That said, unless there's a compelling reason you want to run Kubernetes from within a container and you are ready to spend the time going down that possible rabbit hole, I think these days running minikube, k3s or microk8s from within a VM should be the safest bet if you want to get up and running with a CI/CD pipeline relatively quickly.</p>
oldhomemovie
<p>I recently encountered an issue where something (which I was not able to identify) deleted a PVC and the corresponding PV in my k8s cluster. The data can be recovered but I have two questions:</p> <ol> <li>Is there some hack to prevent the PVC from being deleted accidentally if someone issues a wrong command which deletes it?</li> <li>Is it possible to check what command caused the deletion of the PVC via some logs?</li> </ol>
Axel Chauvin
<p>For question 1, you can set the Reclaim Policy to <code>Retain</code>. This means that the PV and PVC can be deleted but the underlying storage volume will stick around forever (or until you delete it in whatever the underlying system is).</p> <p>For 2, yes if you have audit logging turned on. <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#audit-backends" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#audit-backends</a>. Otherwise not really.</p>
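<p>As a sketch, the reclaim policy of an existing PV can be changed in place (substitute your own PV name), or set up front in the StorageClass via <code>reclaimPolicy: Retain</code> so new volumes default to it:</p> <pre><code># Flip an existing PV so the backing volume survives PVC/PV deletion
kubectl patch pv &lt;your-pv-name&gt; -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
</code></pre>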
coderanger
<p>Is there a way to specify the nodeSelector when using the <code>kubectl run</code> command?</p> <p>I don't have a YAML file and I only want to override the nodeSelector.</p> <p>I tried the following, but it didn't work:</p> <pre><code>kubectl run myservice --image myserviceimage:latest --overrides='{ "nodeSelector": { "beta.kubernetes.io/os": "windows" } }' </code></pre>
nbilal
<p><code>nodeSelector</code> must be wrapped with a <code>spec</code>. Like so</p> <pre><code>kubectl run -ti --rm test --image=ubuntu:18.04 --overrides='{&quot;spec&quot;: { &quot;nodeSelector&quot;: {&quot;kubernetes.io/hostname&quot;: &quot;eks-prod-4&quot;}}}' </code></pre>
RubenLaguna
<p>In the output of <code>$ kubectl describe node ip-10-0-1-21</code> I receive the following annotations:</p> <pre><code>Annotations: node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true </code></pre> <p>Can you please tell me what they mean, and whether there is a universal guide for all of these annotations? I could not find them by googling.</p> <p>Is there any logic to how these annotations are created?</p>
yurasov
<p><code>node.alpha.kubernetes.io/ttl</code> is a tuning parameter for how long the Kubelet can cache objects, only rarely used for extreme high-density or high-scale clusters. <code>controller-managed-attach-detach</code> is a feature flag from long ago, Kubernetes 1.3. It was originally used to enable or disable the attach-detach-controller for specific nodes. From the code it looks like it probably still works though that controller has been the default mode for years so we should probably remove it some day.</p>
coderanger
<p>Is there a recommended way to use <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Kubernetes Secrets</a>? They can be exposed as environment variables or using a volume mount. Is one more secure than the other?</p>
Muhammad Rehan Saeed
<p><a href="https://www.oreilly.com/library/view/velocity-conference-2017/9781491985335/video316233.html" rel="noreferrer">https://www.oreilly.com/library/view/velocity-conference-2017/9781491985335/video316233.html</a></p> <p>Kubernetes secrets exposed by environment variables may be able to be enumerated on the host via /proc/. If this is the case it's probably safer to load them via volume mounts.</p>
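<p>For comparison, a minimal sketch of the volume-mount approach (the Secret name <code>my-secret</code> is a placeholder):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds   # each key of the Secret shows up as a file here
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: my-secret   # placeholder Secret name
</code></pre>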
tmc
<p>Apologies if this question has been asked before; I am new to Kubernetes.</p> <p>I am trying to access the k8s cluster through ingress-nginx, running on my machine as a proxy, from a React app running on localhost.</p> <p><a href="https://i.stack.imgur.com/KDrMn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KDrMn.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/RPEWt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RPEWt.jpg" alt="enter image description here" /></a></p> <p>I am getting a <strong>NET::ERR_CERT_AUTHORITY_INVALID</strong> error in my browser.</p> <p>I tried <a href="https://stackoverflow.com/questions/54903199/how-to-ignore-ssl-certificate-validation-in-node-requests/54903835">this</a> but it didn't work.</p> <p>How can I get around this?</p> <p>Thank you.</p>
Shreyas Chorge
<p>If you don't install a real TLS certificate, you're just getting the default, self-signed one that the ingress controller includes as a fallback. Check out cert-manager for a path forward or just ignore the error for now (but probably don't ignore it, that's bad).</p>
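<p>If you go the cert-manager route, a rough sketch of what that involves (this assumes cert-manager is already installed, and the issuer name, email, hostname and secret name are placeholders):</p> <pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
---
# Then reference the issuer from the Ingress and request a certificate:
#   metadata:
#     annotations:
#       cert-manager.io/cluster-issuer: letsencrypt
#   spec:
#     tls:
#     - hosts: [myapp.example.com]    # placeholder host
#       secretName: myapp-tls
</code></pre> <p>Note that an ACME HTTP-01 issuer only works if the cluster is reachable from the internet; for a purely local dev box you would have to trust the certificate manually or live with the warning.</p>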
coderanger
<p>Earlier today I increased my Docker Desktop resources, but ever since it restarted, Kubernetes has not been able to complete its startup. Whenever I try to run a kubectl command, I get <code>Unable to connect to the server: EOF</code> in response.</p> <p>I had thought that it started because I hadn't deleted a Helm chart before adjusting the resource values in Settings, so those resources were assigned to the pods instead of the Kubernetes API server. But I have not been able to fix this issue.</p> <p>This is what I have tried thus far:</p> <ul> <li>Restarting Docker again</li> <li>Resetting Kubernetes</li> <li>Resetting Docker to factory settings</li> <li>Deleting the VM in Hyper-V and restarting Docker</li> <li>Uninstalling and reinstalling Docker Desktop</li> <li>Deleting the pki folder and restarting Docker</li> <li>Setting the KUBECONFIG environment variable</li> <li>Deleting .kube/config and restarting</li> <li>Another clean reinstall of Docker Desktop</li> </ul> <p>But Kubernetes does not complete its startup, so I still get <code>Unable to connect to the server: EOF</code> in response.</p> <p>Is there anything I haven't tried yet?</p>
shenyongo
<p>I'll share that what solved this for me was Docker Desktop settings feature for &quot;<strong>reset kubernetes cluster</strong>&quot;. I know that @shenyongo said that a &quot;reset kubernetes&quot; didn't work, and I suppose they mean this.</p> <p>But <strong>for the sake of other readers who may find this</strong>, I had this same error message (with Docker Desktop on Windows 11, using wsl2), and the solution for me was indeed to do this:</p> <ol> <li>open the Settings page (in Docker Desktop--right-click on it in the status tray)</li> <li>then choose &quot;Kubernetes&quot; on the left</li> <li>then choose &quot;reset kubernetes cluster&quot;</li> </ol> <p>Yes, that warns that &quot;all stacks and kubernetes resources will be deleted&quot;, but as nothing else had worked for me (and I wasn't worried about losing much), I tried it, and it did the trick. In moments, all my k8s functionality was back to working.</p> <p>As background, k8s had been working fine for me for some time. It was just that one day I found I was getting this error. I searched and searched and found lots of folks asking about it but not getting answers, let alone this answer. To be clear, like the OP here I had tried restarting Docker Desktop, restarting the host machine, even downloading and installing an available DD update (I was only a bit behind), and none of those worked. I didn't proceed to ALL the steps shenyongo did, as I thought I'd try this first, and the reset worked.</p> <p>Hope that may help others. I realize some may fear losing something, but this helps stress the power of declarative vs imperative k8s configuration. It SHOULD be easy to recreate most everything if necessary. I realize it may not be so for everyone.</p>
charlie arehart
<p>I am trying to copy files from the pod to local using the following command:</p> <pre><code>kubectl cp /namespace/pod_name:/path/in/pod /path/in/local </code></pre> <p>But the <code>command terminates with exit code 126</code> and the copy doesn't take place.</p> <p>Similarly, when trying from local to pod using the following command:</p> <pre><code>kubectl cp /path/in/local /namespace/pod_name:/path/in/pod </code></pre> <p>it throws the following error:</p> <p><code>OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: &quot;tar&quot;: executable file not found in $PATH: unknown</code></p> <p>Please help me through this.</p>
kkpareek
<p><code>kubectl cp</code> is actually a very small wrapper around <code>kubectl exec whatever tar c | tar x</code>. A side effect of this is that you need a working <code>tar</code> executable in the target container, which you do not appear to have.</p> <p>In general <code>kubectl cp</code> is best avoided, it's usually only good for weird debugging stuff.</p>
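<p>If you just need to move a file in or out of a container that has no <code>tar</code>, a workaround sketch (pod name and paths are placeholders, and it assumes the container at least has <code>cat</code> and a shell):</p> <pre><code># Pod to local: stream the file over kubectl exec instead of kubectl cp
kubectl exec -n &lt;namespace&gt; &lt;pod_name&gt; -- cat /path/in/pod/file &gt; /path/in/local/file

# Local to pod: pipe the file into a shell inside the container
kubectl exec -i -n &lt;namespace&gt; &lt;pod_name&gt; -- sh -c 'cat &gt; /path/in/pod/file' &lt; /path/in/local/file
</code></pre>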
coderanger
<p>I have a simple wordpress site defined by the <code>ReplicationController</code> and <code>Service</code> below. Once the app is deployed and running happily, I enabled autoscaling on the instance group created by Kubernetes by going to the GCE console and enabling autoscaling with the same settings (max 5, cpu 10).</p> <p>Autoscaling the instances and the pods seem to work decent enough except that they keep going out of sync with each other. The RC autoscaling removes the pods from the CE instances but nothing happens with the instances so they start failing requests until the LB health check fails and removes them.</p> <h2>Is there a way to make kubernetes scale the pods AS WELL as scale the instances that they run on so this doesn't happen? Or is there a way to keep them in sync?</h2> <p>My process is as follows:</p> <p><em>Create the cluster</em></p> <p><code>$ gcloud container clusters create wordpress -z us-central1-c -m f1-micro</code></p> <p><em>Create the rc</em></p> <p><code>$ kubectl create -f rc.yml</code></p> <p><em>Create the service</em></p> <p><code>$ kubectl create -f service.yml</code></p> <p><em>Autoscale the rc</em></p> <p><code>$ kubectl autoscale rc frontend --max 5 --cpu-percent=10</code></p> <p>Then I enabled the autoscaling in the console and gave the servers load to make them scale.</p> <p><strong>rc.yml</strong></p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: frontend spec: replicas: 1 template: metadata: labels: app: wordpress spec: containers: - image: custom-wordpress-image name: wordpress ports: - containerPort: 80 hostPort: 80 </code></pre> <p><strong>service.yml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: name: frontend name: frontend spec: type: LoadBalancer ports: - port: 80 targetPort: 80 protocol: TCP selector: name: wordpress </code></pre> <hr /> <p><strong>Update for more information</strong></p> <p>If I don't use kubernetes autoscaler and instead set the replicas to the same number as the instance group autoscaler max instance count, I seem to get the desired result. As instances are added to the instance group, kubernetes provisions them, as they are removed kubernetes updates accordingly. At this point I wonder what the purpose of the Kubernetes autoscaler is for.</p>
nathanjosiah
<h2>TLDR;</h2> <p>In your usecase kubernetes is only giving you overhead. You are running 1 pod (docker container) on each instance in your instance group. You could also have your Docker container be deployed to App Engine flexible (former Managed VM's) <a href="https://cloud.google.com/appengine/docs/flexible/custom-runtimes/" rel="nofollow">https://cloud.google.com/appengine/docs/flexible/custom-runtimes/</a> and let the autoscaling of your instance group handle it.</p> <h2>Longer answer</h2> <p>It is not possible (yet) to link the instance scaling to the pod scaling in k8s. This is because they are two separate problems. The HPA of k8s is meant to have (small) pods scale to spread load over your cluster (big machines) so they will be scaling because of increased load.</p> <p>If you do not define any limits (1 pod per machine) you could set the max amount of pods to the max scaling of your cluster effectively setting all these pods in a <code>pending</code> state until another instance spins up.</p> <p>If you want your pods to let your nodes scale then the best way (we found out) is to have them 'overcrowd' an instance so the instance-group scaling will kick in. We did this by setting pretty low memory/cpu requirements for our pods and high limits, effectively allowing them to burst over the total available CPU/memory of the instance.</p> <pre><code>resources: requests: cpu: 400m memory: 100Mi limits: cpu: 1000m memory: 1000Mi </code></pre>
Mark van Straten
<p>I have a single node Kubernetes instance from <a href="https://microk8s.io/" rel="noreferrer">microk8s</a>. It is installed on Ubuntu Server 20.20 running on a Raspberry Pi 4.</p> <p>I am trying to set up an ingress resource but cannot get it working.</p> <p>When I run <code>kubectl describe ingress my-ingress</code> I get this output:</p> <pre><code>Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) </code></pre> <p>From what I found on the internet, <code>default-http-backend</code> is something that should have been there by default, but when I run <code>kubectl get pods -n kube-system</code> I don't see it.</p> <p><strong>Question:</strong> How do I enable <code>default-http-backend</code> in microk8s? Or more generally, how do I make ingress work?</p> <p>Note: the Ingress and DNS addons are enabled.</p>
Sasha Shpota
<p>The <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules" rel="noreferrer">default backend</a> is a fallback for when the ingress controller cannot match any of the rules.</p> <h2><code>apiVersion: networking.k8s.io/v1</code></h2> <pre class="lang-yaml prettyprint-override"><code>spec: defaultBackend: service: name: tea-svc port: number: 80 </code></pre> <p>Here is a complete example using <code>v1</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress spec: defaultBackend: service: name: tea-svc port: number: 80 rules: - host: cafe.example.com http: paths: - path: / pathType: Prefix backend: service: name: tea-svc port: number: 80 </code></pre> <h2><code>apiVersion: networking.k8s.io/v1beta1</code></h2> <p>Depending on the <code>apiVersion</code> of your yaml file, the default backend is specified in a different format. It looks like you are using the beta format.</p> <pre class="lang-yaml prettyprint-override"><code>spec: backend: serviceName: tea-svc servicePort: 80 </code></pre> <p>The NGINX Ingress Controller complains about <code>v1beta1</code>, so far it works in kubernetes 1.21.2, but as the warning says it won't soon:</p> <pre><code>networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress </code></pre>
Robert
<p>I have a 'UI' and an 'API' microservice that I'm deploying on k8s default namespace with Istio enabled. My k8s environment is a dev box and doesn't have an External Load Balancer.</p> <p>The UI's port configuration is 80(service port):80(container port in pod).<br /> The API's port configuration is 8000(service port):80(container port in pod)</p> <p>I have to expose both these microservices for external traffic, since some people might use the 'UI' and some people might directly call the 'API' (via postman) for their requests.</p> <p>When these microservices were running as simple docker containers without the k8s layer, users directly used the <code>host.example.com</code> for UI and <code>host.example.com:8000/api</code> for API calls (API calls are JSON-RPC).</p> <p>I have a Gateway and VirtualService set up for both these microservices:</p> <p>For UI:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ui-gateway spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: http protocol: HTTP hosts: - host.example.com --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: ui-vs spec: hosts: - host.example.com gateways: - ui-gateway http: - route: - destination: port: number: 80 host: ui --&gt; name of k8s svc </code></pre> <p>For API:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: api-gateway spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 80 name: http protocol: HTTP hosts: - host.example.com --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: api-vs spec: hosts: - host.example.com gateways: - api-gateway http: - route: - destination: host: api -&gt; name of api service port: number: 8000 </code></pre> <p>Now going by the Istio documentation (<a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#accessing-ingress-services-using-a-browser" rel="nofollow noreferrer">accessing on browser</a>) to access this UI in the browser I need to access it via <code>${INGRESS_HOST}:${INGRES_PORT}</code>. In my case:</p> <pre><code>INGRESS_HOST=host.example.com INGRESS_PORT=31165 </code></pre> <p>So accessing <a href="http://host.example.com:31165" rel="nofollow noreferrer">http://host.example.com:31165</a> loads the UI, how do I now access the API microservice externally on <code>host.example.com</code> via Postman etc? The 8000 API port is not accessible from outside. I guess it all has to go via 31165, but what route do I need to use to access the API directly? What changes do I need to do for this, if any, in my set-up? I have just started with Istio.</p>
user1452759
<p>one option is to add a host header.</p> <p>an easier way for local dev stuff is to use a <code>*.nip.io</code> address.</p> <p>If your ingress got an IP (look for an external IP in the result of <code>k get svc -n istio-system istio-ingressgateway</code>), then that's what you would use in the url.</p> <p>e.g.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: grafana-virtualservice namespace: monitoring spec: gateways: - grafana-gateway hosts: - grafana.192.168.87.2.nip.io http: - route: - destination: host: kube-prometheus-stack-grafana </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: grafana-gateway namespace: monitoring spec: selector: istio: ingressgateway servers: - hosts: - grafana.192.168.87.2.nip.io port: name: http number: 80 protocol: HTTP </code></pre> <p>HTTPS redirects work too if you create a certificate</p> <p>e.g.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: argocd-gateway namespace: argocd spec: selector: istio: ingressgateway servers: - hosts: - argocd.192.168.87.2.nip.io port: name: http number: 80 protocol: HTTP tls: httpsRedirect: true - hosts: - argocd.192.168.87.2.nip.io port: name: https number: 443 protocol: HTTPS tls: credentialName: argocd.192.168.87.3.nip.io mode: SIMPLE </code></pre>
Stand__Sure
<p>I have a nodejs pod running in kubernetes production environment. Additionally there is staging and review environment in the same cluster running the same app. I recently added --inspect to the start command in the dockerfile which gets deployed to all environments. My question is, if I enable debugging in production as well, will it impact performance or memory usage? Is it a good practice in general? Otherwise I'll need to create a separate dockerfile for production.</p>
Jayadeep KM
<blockquote> <p>will it impact performance or memory usage?</p> </blockquote> <p>Both are probably negligible if you just have the flag enabled; mileage may vary if you are actually live debugging.</p> <blockquote> <p>Is it good practice</p> </blockquote> <p>I would say no, and it does have <a href="https://nodejs.org/de/docs/guides/debugging-getting-started/#security-implications" rel="nofollow noreferrer">security implications</a>. Although this would only be a problem if you were to bind to a public IP; by default debugging is only permitted on localhost.</p> <p>My advice would be to create a separate Dockerfile for prod.</p>
James
<p>I have a Kubernetes cluster (installed on premise), and I deployed an application based on the WebSphere Liberty image (from Docker Hub).</p> <p>I configured session affinity (sticky sessions) for my service, so it keeps the session across requests (they reach the same pod). But now I want to keep the application session when a node or pod dies (for HA behind the LB). Can I do that in WebSphere Liberty? How do I set up a WebSphere Liberty cluster?</p>
taibc
<p>You can configure session persistence via hazelcast or via a traditional database running inside or outside of the cluster. This frees the application from being sensitive to scaling up/down.</p> <p><a href="https://openliberty.io/guides/sessions.html" rel="nofollow noreferrer">https://openliberty.io/guides/sessions.html</a> <a href="https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_admin_session_persistence.html" rel="nofollow noreferrer">https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_admin_session_persistence.html</a></p>
covener
<p>This question is about a k8s readiness probe. I am trying to make the readiness probe command curl the new pod that is being created.</p> <p>I mean that I want to check that the newly created pod is ready to accept traffic before the old one is terminated. I already have a command that is executed in the readiness probe, so it is not possible for me to add an httpGet like this:</p> <pre><code>readinessProbe: httpGet: path: /health </code></pre> <p>because I saw that there is an <a href="https://github.com/kubernetes/kubernetes/issues/37218" rel="nofollow noreferrer">issue</a> where it is not possible to have both an httpGet and a command to execute.</p> <p>Therefore, I must add this curl to the script that runs each time before a new pod is created.</p> <pre><code>status=$( curl -s -o -k /dev/null -w %{http_code} /health); echo "statusCode: $status" if [ "$status" -ne "200" ]; exit 1 fi </code></pre> <p>My problem is that it is not working, and using <code>kubectl describe po XXXXX</code> I see this output:</p> <pre><code> Readiness probe failed: statusCode: 000000 if [ 000000 -ne 200 ] </code></pre> <p>So I'm not sure how to make the request to the new pod, because the only thing that I know about the new pod at this level is that it includes an API named <strong>health</strong>.</p> <p>Am I making the request correctly?</p>
Sariel
<p>You are missing some semicolons and a <code>then</code>.</p> <p>You have:</p> <pre><code>status=$( curl -s -o -k /dev/null -w %{http_code} /health); echo "statusCode: $status" if [ "$status" -ne "200" ]; exit 1 fi </code></pre> <p>instead of:</p> <pre><code>status=$( curl -s -o -k /dev/null -w %{http_code} /health); echo "statusCode: $status"; if [ "$status" -ne "200" ]; then exit 1; fi </code></pre>
vladmihaisima
<p>Reading the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-patterns" rel="nofollow noreferrer">Kubernetes official docs</a> on Job parallel execution (e.g. one job, multiple pods with <code>parallelism</code> set to &gt; 1), under the section, &quot;<a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#parallel-jobs" rel="nofollow noreferrer">Parallel execution for Jobs / Parallel Jobs with a fixed completion count</a>,&quot; the documentation states:</p> <blockquote> <p><strong>not implemented yet:</strong> Each Pod is passed a different index in the range 1 to .spec.completions.</p> </blockquote> <p>This statement suggests that a future version of Kubernetes will be able to pass a unique counter to each pod in a job with <code>parallelism</code> &gt; 1.</p> <p>Since this is in the official documentation as of (17 Mar 2021) I would like to know if there is an official timeline or expected release version for this feature. It would alleviate a lot of pain for me.</p>
David Parks
<p>Yes, this feature will be released as alpha in 1.21 which is due in a few weeks. <a href="https://github.com/kubernetes/kubernetes/pull/98812" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/98812</a> has details and links to the KEP.</p>
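<p>For reference, a sketch of what an Indexed Job looks like under that KEP (this assumes a 1.21+ cluster with the <code>IndexedJob</code> alpha feature gate enabled, so field names could still shift):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 5
  parallelism: 2
  completionMode: Indexed          # each Pod gets its own completion index
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        # the index is exposed to the container as JOB_COMPLETION_INDEX
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
</code></pre>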
coderanger
<p>I am trying to deploy production-grade Elasticsearch 6.3.0 on Kubernetes.</p> <p>I came across a few articles, but I'm still not sure which is the best approach to go with.</p> <ol> <li><a href="https://github.com/pires/kubernetes-elasticsearch-cluster" rel="nofollow noreferrer">https://github.com/pires/kubernetes-elasticsearch-cluster</a></li> </ol> <p>It doesn't use a StatefulSet.</p> <ol start="2"> <li><a href="https://anchormen.nl/blog/big-data-services/elastic-search-deployment-kubernetes/" rel="nofollow noreferrer">https://anchormen.nl/blog/big-data-services/elastic-search-deployment-kubernetes/</a></li> </ol> <p>This is pretty old.</p> <p>I'm using Elasticsearch for app search.</p> <p>The Elasticsearch images are:</p> <pre><code>docker pull docker.elastic.co/elasticsearch/elasticsearch:6.3.0 docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.0 </code></pre> <p>I would like to go with the -oss image, which is the core Apache-licensed one.</p> <p>Is there any good documentation on setting up a production-grade 6.3.0 deployment on Kubernetes?</p>
user1578872
<p>One of the most promising new developments for running Elasticearch on Kubernetes is the <a href="https://github.com/upmc-enterprises/elasticsearch-operator" rel="nofollow noreferrer">Elasticsearch Operator</a>.</p> <p>Kubernetes <a href="https://coreos.com/operators/" rel="nofollow noreferrer">Operators</a> allow for more sophistication when it comes to dealing with the requirements of complex tools (and Elasticsearch is definitely one). Especially when considering the need to avoid losing Elasticsearch data, an operator is the way to go.</p>
orangejulius
<p>I have the following minimal example of a pod list:</p> <pre><code>{ &quot;items&quot;: [ { &quot;metadata&quot;: { &quot;name&quot;: &quot;app&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;image&quot;: &quot;some-registry/istio/proxyv2:new-version&quot;, &quot;name&quot;: &quot;istio-proxy&quot; }, { &quot;image&quot;: &quot;some-registry/app/app:latest&quot;, &quot;name&quot;: &quot;app&quot; } ] } }, { &quot;metadata&quot;: { &quot;name&quot;: &quot;another-app&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;image&quot;: &quot;some-registry/istio/proxyv2:old-version&quot;, &quot;name&quot;: &quot;istio-proxy&quot; }, { &quot;image&quot;: &quot;some-registry/another-app/another-app:latest&quot;, &quot;name&quot;: &quot;another-app&quot; } ] } }, { &quot;metadata&quot;: { &quot;name&quot;: &quot;no-sidecar-app&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;image&quot;: &quot;some-registry/no-sidecar-app/no-sidecar-app:latest&quot;, &quot;name&quot;: &quot;no-sidecar-app&quot; } ] } } ] } </code></pre> <p>Now I want a list of pod names that have a sidecar of <code>proxyv2:old-version</code>. So I tried filtering for the all pods that have such a sidecar and then try to filter out the ones that already have the new version. But I just can't find the right query.</p> <p>Using <code>.items[] | select(.spec.containers[].image | test(&quot;some-registry/istio/proxyv2:.*&quot;))</code> gives me a list that contains the pods having a sidecar but if I then try to filter the pods with the old sidecar like this <code>.items[] | select(.spec.containers[].image | test(&quot;some-registry/istio/proxyv2:.*&quot;)) | select(.spec.containers[].image | test(&quot;.*:old-version$&quot;) | not)</code> I suddenly get the first pod output twice instead of only the one pod that still runs the old sidecar.</p> <p>Can someone add the right filter/statement I'm missing?</p> <p><a href="https://jqplay.org/s/4COz0LNtLiF" rel="nofollow noreferrer">https://jqplay.org/s/4COz0LNtLiF</a></p>
micxer
<p>You are testing all images against the old version. As soon as a single image does not contain the old version string, you have a match. Your &quot;app&quot; images do not contain the old version, therefore your full item is matched.</p> <p>You want to use <code>all(… | not)</code> or <code>any(…) | not</code> instead:</p> <pre><code>.items[] | select(.spec.containers[].image | test(&quot;^some-registry/istio/proxyv2:&quot;)) | select(all(.spec.containers[].image; test(&quot;:old-version$&quot;) | not)) </code></pre> <pre><code>.items[] | select(.spec.containers[].image | test(&quot;^some-registry/istio/proxyv2:&quot;)) | select(any(.spec.containers[].image; test(&quot;:old-version$&quot;)) | not) </code></pre> <p>But maybe the better solution is checking your condition with <code>any</code> and <code>all</code>:</p> <pre><code>.items[] | select( .spec.containers | map(.image) | any(test(&quot;^some-registry/istio/proxyv2:&quot;)) and all(test(&quot;:old-version$&quot;)|not) ) </code></pre> <p>To me that feels a like a more natural description of the problem.</p>
knittl
<p>Please help me to understand one thing about <code>Prometheus</code> and <code>Prometheus operator</code> integration into Kubernetes.</p> <p>From the documentation I see that new, non-standard kinds of Kubernetes objects are used to configure the <code>Prometheus operator</code>. By standard kinds I mean <code>Pod</code>, <code>Service</code>, <code>ReplicaSet</code>, <code>Deployment</code>, etc. How are the new kinds like <code>PrometheusRule</code> and <code>Prometheus</code> created? Is there a point of integration here?</p> <p>The documentation which brings me to this question is here: <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/alerting.md" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/alerting.md</a></p> <p>An example of this kind of Kubernetes object in YAML:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: Prometheus metadata: name: example spec: replicas: 2 alerting: alertmanagers: - namespace: default name: alertmanager-example port: web serviceMonitorSelector: matchLabels: team: frontend ruleSelector: matchLabels: role: alert-rules prometheus: example </code></pre>
Alexey Usharovski
<p>This is a Kubernetes <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">Custom Resource</a>. The operator's install manifests register a <code>CustomResourceDefinition</code> (CRD) for each new kind with the API server; that is the point of integration, and once the CRDs exist, kinds like <code>Prometheus</code> and <code>PrometheusRule</code> can be created and watched just like the built-in ones.</p>
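<p>Heavily simplified, the registration that makes a kind like <code>Prometheus</code> exist looks roughly like the sketch below; the real CRD shipped with the operator also carries a full OpenAPI schema for validation:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: prometheuses.monitoring.coreos.com   # must be &lt;plural&gt;.&lt;group&gt;
spec:
  group: monitoring.coreos.com
  scope: Namespaced
  names:
    kind: Prometheus
    plural: prometheuses
    singular: prometheus
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
</code></pre>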
brian-brazil
<p>I'm trying to write a JSON Schema for a list of dictionaries (aka an array of objects) where I validate the keys in each dictionary. The labels in this example are what I'm interested in. I'd like to allow an arbitrary number of labels and to validate that the <code>name</code> and <code>value</code> fields always exist in each label dictionary. Here is an example input represented as YAML.</p> <pre><code>some_field1: "value_a" some_field2: "value_b" labels: - name: "bar" value: "foo" - name: "baz" value: "blah" </code></pre> <p>Here is what I've pieced together so far, but it doesn't validate the keys in the dictionaries. I'm not sure exactly how <code>additionalProperties</code> works in this case, but I found an example online.</p> <pre><code>properties: some_field1: type: string default: 'value_a' some_field2: type: string default: 'value_b' labels: type: array items: type: object additionalProperties: type: string </code></pre> <p>My use case is that I'm trying to create a Custom Resource Definition (CRD) for Kubernetes where I validate the input, and my understanding is that CRDs use OpenAPI 3 / JSON Schema validation to define their fields.</p> <p>I'm having trouble finding information about how to validate a list of dictionaries with specific keys. I'd appreciate any help that you have to offer.</p>
Joe J
<p>Known/fixed keys of a dictionary can be defined in <code>properties</code> and included in the <code>required</code> list:</p> <pre class="lang-yaml prettyprint-override"><code> labels: type: array items: type: object required: [name, value] properties: name: type: string value: type: string additionalProperties: type: string </code></pre>
Helen
<p>I have a master node that has disk pressure and is spamming the log full with endless messages like these:</p> <blockquote> <p>Mar 18 22:53:04 kubelet[7521]: W0318 22:53:04.413211 7521 eviction_manager.go:344] eviction manager: attempting to reclaim ephemeral-storage</p> <p>Mar 18 22:53:04 kubelet[7521]: I0318 22:53:04.413235 7521 container_gc.go:85] attempting to delete unused containers</p> <p>......................</p> <p>Mar 18 22:53:04 kubelet[7521]: E0318 22:53:04.429446 7521 eviction_manager.go:574] eviction manager: cannot evict a critical pod kube-controller-manager_kube-system(5308d5632ec7d3e588c56d9f0bca17c8) Mar 18 22:53:04 kubelet[7521]: E0318 22:53:04.429458 7521 eviction_manager.go:574] eviction manager: cannot evict a critical pod kube-apiserver_kube-system(9fdc5b37e61264bdf7e38864e765849a) Mar 18 22:53:04 kubelet[7521]: E0318 22:53:04.429464 7521 eviction_manager.go:574] eviction manager: cannot evict a critical pod kube-scheduler_kube-system(90280dfce8bf44f46a3e41b6c4a9f551) Mar 18 22:53:04 kubelet[7521]: E0318 22:53:04.429472 7521 eviction_manager.go:574] eviction manager: cannot evict a critical pod coredns-74ff55c5b-th722_kube-system(33744a13-8f71-4e36-8cfb-5955c5348a14) Mar 18 22:53:04 kubelet[7521]: E0318 22:53:04.429478 7521 eviction_manager.go:574] eviction manager: cannot evict a critical pod coredns-74ff55c5b-d45hd_kube-system(65a5684e-5013-4683-aa38-820114260d63) Mar 18 22:53:04 kubelet[7521]: E0318 22:53:04.429487 7521 eviction_manager.go:574] eviction manager: cannot evict a critical pod weave-net-wjs78_kube-system(f0f9a4e5-98a4-4df4-ac28-6bc1202ec06d) Mar 18 22:53:04 kubelet[7521]: E0318 22:53:04.429493 7521 eviction_manager.go:574] eviction manager: cannot evict a critical pod kube-proxy-8dvws_kube-system(c55198f4-38bc-4adf-8bd8-4a2ec2d8a46d) Mar 18 22:53:04 kubelet[7521]: E0318 22:53:04.429498 7521 eviction_manager.go:574] eviction manager: cannot evict a critical pod etcd_kube-system(e3f86cf1b5559dfe46a5167a548f8a4d) Mar 18 22:53:04 kubelet[7521]: I0318 22:53:04.429502 7521 eviction_manager.go:396] eviction manager: unable to evict any pods from the node</p> <p>..............</p> </blockquote> <p>This has been going on for months. I know that disk pressure is probably set the default value, but WHERE is it configured in the first place?</p> <p>I do know about this: <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/</a></p> <p>It is probably this setting that can be set:</p> <p><code>imagefs.available imagefs.available := node.stats.runtime.imagefs.available </code></p> <p>(according to the link above)</p> <p>But again, where? In <code>etcd</code>? How can I set this for all nodes to a default?</p> <p>It is true that there is less space available than the setting is set to, but this is the controlplane (there are no other pods on it) and not a productive system, it is for testing only and I can't see anything in the logs because kubernetes spams it full of garbage. Garbage because these messages make absolutely not sense: These pods are not supposed to be evicted ever, they are essential and they should not even be tried to evict.</p> <p>My questions:</p> <ul> <li>Also, what about the rate limiter?</li> <li>Of stopping after it failing 10 times?</li> <li>Crashloopbackoff?</li> <li>Also, I can't see the currently set values.</li> </ul>
Markus Bawidamann
<p>There are three ways to set Kubelet options. The first is <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">command line options</a> like <code>--eviction-hard</code>. The next is a <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">config file</a>. And the most recent is <a href="https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/" rel="nofollow noreferrer">dynamic configuration</a>.</p> <p>Of course, the better answer here is to free up some disk space.</p>
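<p>As an illustration of the config-file route, a sketch of a <code>KubeletConfiguration</code> that sets the eviction thresholds explicitly (the values are arbitrary examples, not recommendations; the file is passed to the kubelet with <code>--config</code>):</p> <pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  imagefs.available: "5%"
  nodefs.available: "5%"
  memory.available: "100Mi"
</code></pre>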
coderanger
<p>I have deployed my Kubernetes cluster on EKS. I have an ingress-nginx which is exposed via load balancer to route traffic to different services. In ingress-nginx first request goes to auth service for authentication and if it is a valid request then I allow it to move forward. This is done using ingress-nginx annotation <strong>nginx.ingress.kubernetes.io/auth-url</strong>. Auth service is developed using FastAPI. In case of <strong>401</strong> response from fastAPI look like this <a href="https://i.stack.imgur.com/uI2Za.png" rel="nofollow noreferrer">FASTAPI</a></p> <p>But when I use ingress-nginx the response look like this <a href="https://i.stack.imgur.com/wt5UA.png" rel="nofollow noreferrer">INGRESS_NGINX</a></p> <p>Is there a way to get JSON respone from Ingress-nginx? Ingress File</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: 'nginx' nginx.ingress.kubernetes.io/use-regex: 'true' nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/auth-response-headers: item_id nginx.ingress.kubernetes.io/auth-method: POST nginx.ingress.kubernetes.io/auth-url: http://pth-auth.default.svc.cluster.local:8000/item/1 # UPDATE THIS LINE ABOVE spec: rules: - http: paths: - path: /?(.*) # UPDATE THIS LINE ABOVE backend: serviceName: client-cluster-ip-service servicePort: 3000 - path: /api/?(.*) # UPDATE THIS LINE ABOVE backend: serviceName: server-cluster-ip-service servicePort: 5000 - path: /pth-auth/?(.*) # UPDATE THIS LINE ABOVE backend: serviceName: pth-auth servicePort: 8000 </code></pre>
Devendra Singh khurana
<p>Here's a solution that worked for me. It allows the auth service to return a custom error message for each request.</p> <p>The caveat is that because nginx can't access auth response body, the <code>pth-auth</code> service needs to put the data in <code>Pth-Auth-Error</code> header (base64-encoded).</p> <p>This example handles 401, 500, and a special case when <code>pth-auth</code> service is unavailable.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: 'nginx' nginx.ingress.kubernetes.io/use-regex: 'true' nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/auth-response-headers: item_id nginx.ingress.kubernetes.io/auth-method: POST nginx.ingress.kubernetes.io/auth-url: http://pth-auth.default.svc.cluster.local:8000/item/1 # UPDATE THIS LINE ABOVE nginx.ingress.kubernetes.io/configuration-snippet: | # Redirect auth errors to custom named locations error_page 401 = @ingress_service_custom_error_401; error_page 500 = @ingress_service_custom_error_500; # Grab data from auth error response auth_request_set $pth_auth_error $upstream_http_pth_auth_error; auth_request_set $pth_auth_error_content_type $upstream_http_content_type; auth_request_set $pth_auth_status $upstream_status; nginx.ingress.kubernetes.io/server-snippet: | location @ingress_service_custom_error_401 { internal; # Decode auth response header set_decode_base64 $pth_auth_error_decoded $pth_auth_error; # Return the error from pth-auth service if any if ($pth_auth_error_decoded != &quot;&quot;){ add_header Content-Type $pth_auth_error_content_type always; return 401 $pth_auth_error_decoded; } # Fall back to default nginx response return 401; } location @ingress_service_custom_error_500 { internal; # Decode auth response header set_decode_base64 $pth_auth_error_decoded $pth_auth_error; # Return the error from pth-auth service if any if ($pth_auth_error_decoded != &quot;&quot;){ add_header Content-Type $pth_auth_error_content_type always; return 500 $pth_auth_error_decoded; } # Return a hardcoded error in case no pth-auth pods are available if ($pth_auth_status = 503){ add_header Content-Type application/json always; return 503 &quot;{\&quot;msg\&quot;:\&quot;pth-auth service is unavailable\&quot;}&quot;; } # Fall back to default nginx response return 500; } spec: rules: - http: paths: - path: /?(.*) # UPDATE THIS LINE ABOVE backend: serviceName: client-cluster-ip-service servicePort: 3000 - path: /api/?(.*) # UPDATE THIS LINE ABOVE backend: serviceName: server-cluster-ip-service servicePort: 5000 - path: /pth-auth/?(.*) # UPDATE THIS LINE ABOVE backend: serviceName: pth-auth servicePort: 8000 </code></pre> <p>Inspired by: <a href="https://stackoverflow.com/a/31485557/99237">https://stackoverflow.com/a/31485557/99237</a></p> <h2>Troubleshooting tips:</h2> <ul> <li><a href="https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/template/nginx.tmpl" rel="nofollow noreferrer">Here's the template nginx ingress uses when transforming the ingress annotations into nginx config file.</a></li> <li>Connect to the ingress controller pod and look at <code>/etc/nginx/nginx.conf</code> to view the generated nginx config.</li> </ul>
Tereza Tomcova
<p>I've been doing a lot of digging on Kubernetes, and I'm liking what I see a lot! One thing I've been unable to get a clear idea about is what the exact distinctions are between the Deployment and StatefulSet resources and in which scenarios would you use each (or is one generally preferred over the other).</p>
SS781
<p>Deployments and ReplicationControllers are meant for stateless usage and are rather lightweight. <a href="https://kubernetes.io/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes/" rel="noreferrer">StatefulSets</a> are used when state has to be persisted. Therefore the latter use <code>volumeClaimTemplates</code> / claims on persistent volumes to ensure they can keep the state across component restarts.</p> <p>So if your application is stateful, or if you want to deploy stateful storage on top of Kubernetes, use a StatefulSet.</p> <p>If your application is stateless, or if state can be built up from backend systems during startup, then use Deployments.</p> <p>Further details about running stateful applications can be found in the <a href="https://kubernetes.io/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes/" rel="noreferrer">2016 Kubernetes blog entry about stateful applications</a>.</p>
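<p>To make the difference concrete, here is a minimal StatefulSet sketch (the name, image and storage size are made up for illustration). The <code>volumeClaimTemplates</code> section has no equivalent in a Deployment: it gives every replica its own PersistentVolumeClaim that survives restarts and rescheduling:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db       # headless Service giving each pod a stable network identity
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: example/db:1.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:         # one PVC per replica: data-example-db-0, data-example-db-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
</code></pre>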
pagid
<p>I am using Terraform to provision resources in Azure, one of which is a Postgres database. My Terraform module includes the following to generate a random password and output to console.</p> <pre><code>resource "random_string" "db_master_pass" { length = 40 special = true min_special = 5 override_special = "!-_" keepers = { pass_version = 1 } } # For postgres output "db_master_pass" { value = "${module.postgres.db_master_pass}" } </code></pre> <p>I am using Kubernetes deployment manifest to deploy the application to Azure managed Kubernetes service. Is there a way of passing the database password to Kubernetes in the deployment pipeline? I am using CircleCI for CICD. Currently, I'm copying the password, encoding it to base64 and pasting it to the secrets manifest before running the deployment.</p>
Confounder
<p>One solution is to generate the Kubernetes YAML from a template.</p> <p>The pattern uses the <a href="https://www.terraform.io/docs/configuration/functions/templatefile.html" rel="nofollow noreferrer">templatefile</a> function in Terraform 0.12, or the <a href="https://www.terraform.io/docs/providers/template/index.html" rel="nofollow noreferrer">template</a> provider in earlier versions, to read the template, and the <a href="https://www.terraform.io/docs/providers/local/r/file.html" rel="nofollow noreferrer">local_file</a> resource to write the result. For example:</p> <pre><code>data "template_file" "service_template" {
  template = "${file("${path.module}/templates/service.tpl")}"

  vars {
    postgres_password = "${module.postgres.db_master_pass}"
  }
}

resource "local_file" "template" {
  content  = "${data.template_file.service_template.rendered}"
  filename = "postgres_service.yaml"
}
</code></pre> <p>There are many other options, like using the <a href="https://www.terraform.io/docs/providers/kubernetes/guides/getting-started.html" rel="nofollow noreferrer">Kubernetes</a> provider, but I think this better matches your question.</p>
Giulio Vian
<p>I am currently provisioning my EKS cluster(s) using EKSCTL and I want to use Terraform to provision the cluster(s) instead. I am using the Terraform EKS module to create the cluster. I have used EKSCTL to create an identity mapping with the following command</p> <pre><code>eksctl create iamidentitymapping --region us-east-1 --cluster stage-cluster --arn arn:aws:iam::111222333444:role/developer --username dev-service
</code></pre> <p>I want to convert this command to Terraform with the following, but it is not the best way</p> <pre><code>resource &quot;null_resource&quot; &quot;eks-identity-mapping&quot; {
  depends_on = [
    module.eks,
    aws_iam_policy_attachment.eks-policy-attachment
  ]

  provisioner &quot;local-exec&quot; {
    command = &lt;&lt;EOF
eksctl create iamidentitymapping \
  --cluster ${var.eks_cluster_name} \
  --arn ${data.aws_iam_role.mwaa_role.arn} \
  --username ${var.mwaa_username} \
  --profile ${var.aws_profile} \
  --region ${var.mwaa_aws_region}
EOF
  }
}
</code></pre> <p>How can I use the Kubernetes provider to achieve this?</p>
Ruwan Vimukthi Mettananda
<p>I haven't found a clear matching for this particular command, but you can achieve something similar by setting the <code>aws-auth</code> config map in kubernetes, adding all of the users/roles and their access rights in one go.</p> <p>For example we use something like the following below to supply the list of admins to our cluster:</p> <pre class="lang-rb prettyprint-override"><code>resource &quot;kubernetes_config_map&quot; &quot;aws_auth&quot; { metadata { name = &quot;aws-auth&quot; namespace = &quot;kube-system&quot; } data = { mapRoles = &lt;&lt;CONFIGMAPAWSAUTH - rolearn: ${var.k8s-node-iam-arn} username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodes - rolearn: arn:aws:iam::111222333444:role/developer username: dev-service groups: - system:masters CONFIGMAPAWSAUTH } } </code></pre> <p>Note that this file contains all of the role mappings, so you should make sure <code>var.k8s-node-iam-arn</code> is set to the superuser of the cluster otherwise you can get locked out. Also you have to set what access these roles will get.</p> <p>You can also add specific IAM users instead of roles as well:</p> <pre class="lang-yaml prettyprint-override"><code>- userarn: arn:aws:iam::1234:user/user.first username: user.first groups: - system:masters </code></pre>
SztupY
<p>I’m looking for a way to differentiate between Prometheus metrics gathered from different dynamically discovered services running in a Kubernetes cluster (we’re using <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator</a>). E.g. for the metrics written into the db, I would like to understand from which service they actually came. I guess you can do this via a label from within the respective services, however, swagger-stats (<a href="http://swaggerstats.io/" rel="nofollow noreferrer">http://swaggerstats.io/</a>) which we’re using does not yet offer this functionality (to enhance this, there is an issue open: <a href="https://github.com/slanatech/swagger-stats/issues/50" rel="nofollow noreferrer">https://github.com/slanatech/swagger-stats/issues/50</a>). Is there a way to implement this over Prometheus itself, e.g. that Prometheus adds a service-specific label per time series after a scrape?</p> <p>Appreciate your feedback! </p>
Florian
<blockquote> <p>Is there a way to implement this over Prometheus itself, e.g. that Prometheus adds a service-specific label per time series after a scrape?</p> </blockquote> <p>This is how Prometheus is designed to be used, as a target doesn't know how the monitoring system views it and prefixing metric names makes cross-service analysis harder. Both setting labels across an entire target and prefixing metric names are considered anti-patterns.</p> <p>What you want is called a target label, these usually come from relabelling applied to metadata from service discovery.</p> <p>When using the Prometheus Operator, you can specify <code>targetLabels</code> as a list of labels to copy from the Kubernetes Service to the Prometheus targets.</p>
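<p>As a concrete sketch (the resource names and the <code>app</code> label key are placeholders), with the Prometheus Operator you put <code>targetLabels</code> on the <code>ServiceMonitor</code>; the listed labels are copied from the Kubernetes Service onto every time series scraped from its endpoints, so you can tell the originating service apart in queries:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app        # selects the Service to scrape
  targetLabels:
    - app                # copy the Service's "app" label onto all scraped series
  endpoints:
    - port: metrics      # name of the Service port exposing /metrics
</code></pre>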
brian-brazil
<p>Below is the report for liveness &amp; readiness after running <code>kubectl -n mynamespace describe pod pod1</code>:</p> <pre><code>Liveness: http-get http://:8080/a/b/c/.well-known/heartbeat delay=3s timeout=3s period=10s #success=1 #failure=3 Readiness: http-get http://:8080/a/b/c/.well-known/heartbeat delay=3s timeout=3s period=10s #success=1 #failure=3 </code></pre> <hr /> <ol> <li><p>Is this the valid(working) url? <code>http://:80/</code></p> </li> <li><p>What does <code>#success=1 #failure=3</code> mean?</p> </li> </ol>
overexchange
<p>The results are completely right:</p> <ul> <li>http://:8080 indicates that it will try an HTTP GET on port 8080 inside your pod</li> <li>#success=1 indicates a success threshold of 1 (the default), so the first successful response marks the pod as live or ready</li> <li>#failure=3 indicates a failure threshold of 3 (the default again), so after three consecutive failures the pod is marked unready (readiness probe) or restarted (liveness probe)</li> </ul> <p>See the official docs: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes</a></p> <p>You may try to execute this command to see how the probes are configured:</p> <pre class="lang-bash prettyprint-override"><code>kubectl -n mynamespace get pod pod1 -o yaml
</code></pre>
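<p>If you need different values, these thresholds are set per probe in the pod spec; a minimal sketch (the container name, image and numbers are just examples matching the output above):</p> <pre class="lang-yaml prettyprint-override"><code>containers:
  - name: app
    image: example/app:1.0
    livenessProbe:
      httpGet:
        path: /a/b/c/.well-known/heartbeat
        port: 8080
      initialDelaySeconds: 3   # shows up as delay=3s
      timeoutSeconds: 3        # timeout=3s
      periodSeconds: 10        # period=10s
      successThreshold: 1      # #success=1
      failureThreshold: 3      # #failure=3
</code></pre>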
jmservera
<p>In the <em><a href="https://rads.stackoverflow.com/amzn/click/com/B072TS9ZQZ" rel="nofollow noreferrer" rel="nofollow noreferrer">Kubernetes Book</a></em>, it says that it's poor form to run pods on the master node.</p> <p>Following this advice, I'd like to create a policy that runs a pod on all nodes, except the master if there are more than one nodes. However, to simplify testing and work in single-node environments, I'd also like to run my pod on the master node if there is just a single node in the entire system.</p> <p>I've been looking around, and can't figure out how to express this policy. I see that <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSets</a> have affinities and anti-affinities. I considered labeling the master node and adding an anti-affinity for that label. However, I didn't see how to require that at least a single pod would always come up (to ensure that things worked for single-node environment). Please let me know if I'm misunderstanding something. Thanks!</p>
Behram Mistree
<p>How about something like this:</p> <ol> <li>During node provisioning, assign a particular label to each node that should run the job. In a single node cluster, this would be the master. In a multi-node environment, it would be every node except the master(s).</li> <li>Create a DaemonSet whose pods tolerate the master taint, so they can also be scheduled on master nodes:</li> </ol> <pre><code>tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
</code></pre> <ol start="3"> <li>As described in that doc you linked, use <code>.spec.template.spec.nodeSelector</code> to select only nodes with your special label. (<a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">node selector docs</a>).</li> </ol> <p>How you assign the special label to nodes is probably a fairly manual process heavily dependent on how you are actually deploying your clusters, but that is the general plan I would follow (a full manifest sketch is shown below).</p> <p><strong>EDIT:</strong> Or I believe it may be simplest to just remove the master node taint from your single-node cluster. I believe most simple distributions like minikube will come this way by default.</p>
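<p>A minimal sketch of that plan put together (the label key <code>run-my-agent</code>, names and image are made up for illustration):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-agent
spec:
  selector:
    matchLabels:
      app: my-agent
  template:
    metadata:
      labels:
        app: my-agent
    spec:
      nodeSelector:
        run-my-agent: "true"          # label assigned during node provisioning
      tolerations:                     # lets the pod run on a tainted master in single-node clusters
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: agent
          image: example/agent:1.0
</code></pre>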
captncraig
<p>How via command line can I detect if a Kubernetes node is a master/control plane or not? Is there an environment variable I can check?</p>
Justin
<p>Kubernetes provides <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">labels and selectors</a> which can be used to select the role assigned to a node.</p> <p>To select controlplane nodes, use a selector to select that role:</p> <pre><code># kubectl get nodes --selector 'node-role.kubernetes.io/controlplane' NAME STATUS ROLES AGE VERSION cp01 Ready controlplane,etcd 90d v1.26.1 cp02 Ready controlplane,etcd 93d v1.26.1 cp03 Ready controlplane,etcd 93d v1.26.1 </code></pre> <p>To see a list of available labels, print out the configuration of the node in YAML format, and look at the <code>labels:</code> section:</p> <pre><code># kubectl get nodes cp01 -o yaml ... labels: kubernetes.io/os: linux node-role.kubernetes.io/controlplane: &quot;true&quot; node-role.kubernetes.io/etcd: &quot;true&quot; ... </code></pre>
Stefan Lasiewski
<p>I have a service on my Kubernetes cluster that generates massive assets on my machine's hard disk. Some of that information could also be served statically by a different service in my system. The save location is mapped to an actual folder on my disk.</p> <p>I already found that I can see some information about my &quot;ephemeral&quot; storage capacity and allocatability through <code>kubectl describe node</code>, but the data doesn't align with what I see when I run <code>df -h</code> in my machine's terminal. On the node, I can see that I could allocate 147GB, and in my terminal I can see that I could only allocate 98GB (this means we probably have some space reserved by Kubernetes in our deployments). I would like my metrics to reflect the actual state of the hard drive.</p> <p>My Question:<br /> How do I check through Kubernetes's python package what's the status of the storage on my machine <em><strong>without mounting my root path into the relevant container</strong></em>? Is there an API to the metrics service that shows me the status of my machine's actual storage? I tried looking at the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md" rel="nofollow noreferrer">Python API</a> and couldn't find it. What am I missing?</p>
Oren_C
<p>Kubernetes does not track overall storage available. It only knows things about emptyDir volumes and the filesystem backing those. If you're using a hostPath mount (which it sounds like you are), that is outside of Kube's view of the world. You can use something like node_exporter to gather those statistics yourself.</p>
coderanger
<pre><code>minikube start kubectl config use-context minikube kubectl create ns my-namespace </code></pre> <p>About half the time this succeeds and about half the time I get an error creating the namespace: <strong>Unable to connect to the server: dial tcp 192.168.99.100:8443: getsockopt: operation timed out</strong></p> <p>Any thoughts?</p>
Aliisa Roe
<p>There's a lot of configuration variation possible with minikube, so I'm going to have to make a bit of a leap and assume you're running pretty close to the default configuration.</p> <p>By default, minikube runs its VM on VirtualBox, using a dynamically allocated IP address. Frequently it will be assigned 192.168.99.100, but there's no guarantee that it will get this IP, and it can be something else.</p> <p>Run <code>minikube ip</code> and see if the IP minikube is using is something other than 192.168.99.100. If it is, then check your kubeconfig and see whether the IP address matches.</p> <p><code>minikube start</code> usually updates your kubeconfig with the correct IP, so try running that if there's a mismatch and it should fix your issue.</p>
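<p>A quick way to check for a mismatch (assuming the default profile/context name <code>minikube</code>):</p> <pre class="lang-bash prettyprint-override"><code># IP address the VM is actually using
minikube ip

# server URL(s) your kubeconfig points at
kubectl config view -o jsonpath='{.clusters[*].cluster.server}'

# if they differ, re-running this rewrites the kubeconfig entry
minikube start
</code></pre>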
Swiss
<p>Backstory: I was running an Airflow job on a daily schedule, with a <code>start_date</code> of July 1, 2019. The job requested each day's data from a third party, then loaded that data into our database.</p> <p>After running the job successfully for several days, I realized that the third party data source only refreshed their data once a month. As such, I was simply downloading the same data every day.</p> <p>At that point, I changed the <code>start_date</code> to a year ago (to get previous months' info), and changed the DAG's schedule to run once a month.</p> <p>How do I (in the Airflow UI) restart the DAG completely, such that it recognizes my new <code>start_date</code> and schedule, and runs a complete backfill as if the DAG is brand new?</p> <p>(I know this backfill can be requested via the command line. However, I don't have permissions for the command line interface and the admin is unreachable.)</p>
Ashley O
<p>Click on the green circle in the Dag Runs column for the job in question in the web interface. This will bring you to a list of all successful runs.</p> <p>Tick the check mark on the top left in the header of the list to select all instances, then in the menu above it choose "With selected" and then "Delete" in the drop down menu. This should clear all existing dag run instances.</p> <p>If catchup_by_default is not enabled on your Airflow instance, make sure <code>catchup=True</code> is set on the DAG until it has finished catching up.</p>
Lars Haugseth
<p>Team, I need to execute a shell script that is within a Kubernetes pod. However, the call needs to come from outside the pod. Below is the script for your reference:</p> <p><code>echo 'Enter Namespace: '; read namespace; echo $namespace;</code></p> <p><code>kubectl exec -it `kubectl get po -n $namespace|grep -i podName|awk '{print $1}'` -n $namespace --- {scriptWhichNeedToExecute.sh}</code></p> <p>Can anyone suggest how to do this?</p>
Sandeep Kumar
<p>There isn't really a good way. A simple option might be <code>cat script.sh | kubectl exec -i &lt;podname&gt; -- bash</code>, but that can have weird side effects. The more correct solution would be to use a debug container, but that feature is still in alpha right now.</p>
coderanger
<p>I have a pod which contains two containers. One container is a web application and the other stores some static data for this web application.</p> <p>The data here is a set of files stored in this container's <code>/data</code> folder; the container's only function is to store this data and expose it to the web application.</p> <p>I'm looking for a way to share the content of this folder with the web application container in this pod.</p> <p>If I use the YAML spec below, the folder in both containers is empty. Is there a way to share the data from the container's folder without cleaning it up?</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
    version: 1.2.3
spec:
  volumes:
    - name: my-app-data-volume
  containers:
    - name: my-app-server
      image: my-app-server-container-name
      volumeMounts:
        - name: my-app-data-volume
          mountPath: /data
      ports:
        - containerPort: 8080
    - name: my-app-data
      image: my-app-data-container-name
      volumeMounts:
        - name: my-app-data-volume
          mountPath: /data
</code></pre>
Alexey Usharovski
<p>You can use an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="noreferrer">EmptyDir</a> volume for this. Specify the container that contains the files as an <code>initContainer</code>, then copy the files into the EmptyDir volume. Finally, mount that volume in the web app container.</p>
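<p>A sketch of what that can look like for the pod in the question (the copy path is an assumption about where the data image keeps its files): the data image runs once as an init container and copies its files into a shared <code>emptyDir</code>, which the web container then sees under <code>/data</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-app-data-volume
      emptyDir: {}
  initContainers:
    - name: my-app-data
      image: my-app-data-container-name
      # copy the files baked into the image at /data into the shared volume
      command: ["sh", "-c", "cp -r /data/. /shared/"]
      volumeMounts:
        - name: my-app-data-volume
          mountPath: /shared
  containers:
    - name: my-app-server
      image: my-app-server-container-name
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: my-app-data-volume
          mountPath: /data
</code></pre>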
Jamie
<p>I'm trying to start kubernetes with an iscsi plugin inside rkt on CoreOS using the <a href="https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html#customizing-rkt-options" rel="nofollow">instruction here</a>. The problem is the iscsi daemon can't start, so I'm getting an error and can't mount the volume to the pod.</p> <pre><code>iscsi_util.go:112] iscsi: failed to sendtargets to portal 156.64.48.59:3260 error: iscsiadm: Failed to load module tcp: No such file iscsiadm: Could not load transport tcp.Dropping interface default. [disk_manager.go:50] failed to attach disk iscsi: failed to setup kubelet.go:1780] Unable to mount volumes for pod ... </code></pre> <p>I tried to mount the whole /dev/ inside the rkt container, but it doesn't help me.</p>
SerCe
<p>It doesn't look like they'll add it by default into CoreOS, but you can add it in the ignition config. The <code>iscsid-initiatorname.service</code> will create the name for you.</p> <pre><code>"storage": {
  "files": [{
    "filesystem": "root",
    "path": "/etc/modules-load.d/iscsi_tcp.conf",
    "contents": { "source": "data:iscsi_tcp" },
    "mode": 420
  }]
},
"systemd": {
  "units": [{
    "enable": true,
    "name": "iscsid-initiatorname.service"
  }]
}
</code></pre> <p>This only works on a fresh install or a fresh root disk, so if you don't want to start with a clean root, create the file, run <code>modprobe iscsi_tcp</code>, and run <code>systemctl start iscsid-initiatorname.service</code>.</p> <p>Then if you're using Kubernetes, just set up the volume mappings:</p> <pre><code>kubelet:
  extra_args:
    feature-gates: MountPropagation=true
  extra_binds:
    - /usr/sbin/iscsiadm:/usr/sbin/iscsiadm
    - /usr/sbin/iscsid:/usr/sbin/iscsid
    - /etc/iscsi/:/etc/iscsi/
</code></pre> <p>This got OpenEBS working on my baremetal CoreOS cluster.</p>
KRavEN
<p>I am using Rancher Pipelines and catalogs to run Helm Charts like this:</p> <p><code>.rancher-pipeline.yml</code></p> <pre><code>stages: - name: Deploy app-web steps: - applyAppConfig: catalogTemplate: cattle-global-data:chart-web-server version: 0.4.0 name: ${CICD_GIT_REPO_NAME}-${CICD_GIT_BRANCH}-serv targetNamespace: ${CICD_GIT_REPO_NAME} answers: pipeline.sequence: ${CICD_EXECUTION_SEQUENCE} ... - name: Another chart needs to wait until the previous one success ... </code></pre> <p>And in the <code>chart-web-server</code> app, it has a deployment:</p> <p><code>deployment.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ .Release.Name }}-dpy labels: {{- include &quot;labels&quot; . | nindent 4 }} spec: replicas: 1 selector: matchLabels: app: {{ .Release.Name }} {{- include &quot;labels&quot; . | nindent 6 }} template: metadata: labels: app: {{ .Release.Name }} {{- include &quot;labels&quot; . | nindent 8 }} spec: containers: - name: &quot;web-server-{{ include &quot;numericSafe&quot; .Values.git.commitID }}&quot; image: &quot;{{ .Values.harbor.host }}/{{ .Values.web.imageTag }}&quot; imagePullPolicy: Always env: ... ports: - containerPort: {{ .Values.web.port }} protocol: TCP resources: {{- .Values.resources | toYaml | nindent 12 }} </code></pre> <p>Now, I need the pipeline to be blocked until the deployment is upgraded since I want to do some server testing in the following stages.</p> <p>My idea is to use Helm hook: If I can create a <code>Job</code> hooking <code>post-install</code> and <code>post-upgrade</code> and waiting for the deployment to be completed, I can then block the whole pipeline until the deployment (a web server) is updated.</p> <p>Does this idea work? If so, how can I write such a blocking and detecting <code>Job</code>?</p>
Romulus Urakagi Ts'ai
<p>This does not appear to be supported, from what I can find of their code. It would appear they just shell out to <code>helm upgrade</code>; you would need it to run with the <code>--wait</code> flag for the upgrade to block until the rollout finishes.</p>
coderanger
<p>I had a &quot;stuck&quot; namespace that I had deleted, and it kept showing this eternal &quot;terminating&quot; status.</p>
ximbal
<p>Assuming you've already tried to force-delete resources like: <a href="https://stackoverflow.com/q/35453792">Pods stuck at terminating status</a>, and your at your wits' end trying to recover the namespace...</p> <p>You can force-delete the namespace (perhaps leaving dangling resources):</p> <pre><code>( NAMESPACE=your-rogue-namespace kubectl proxy &amp; kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' &gt;temp.json curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize ) </code></pre> <ul> <li><p>This is a refinement of the answer <a href="https://stackoverflow.com/a/52412965/86967">here</a>, which is based on the comment <a href="https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-408599873" rel="noreferrer">here</a>.</p></li> <li><p>I'm using the <code>jq</code> utility to programmatically delete elements in the finalizers section. You could do that manually instead.</p></li> <li><p><code>kubectl proxy</code> creates the listener at <code>127.0.0.1:8001</code> <em>by default</em>. If you know the hostname/IP of your cluster master, you may be able to use that instead.</p></li> <li><p>The funny thing is that this approach seems to work even when using <code>kubectl edit</code> making the same change has no effect.</p></li> </ul>
Brent Bradburn
<p>I have a simple setup that is using OAuth2 Proxy to handle authentication. It works fine locally using minikube but when I try to use GKE when the oauth callback happens I get a 403 status and the the following message...</p> <blockquote> <p>Login Failed: Unable to find a valid CSRF token. Please try again.</p> </blockquote> <p>The offending url is <code>http://ourdomain.co/oauth2/callback?code=J_6ao0AxSBRn4bwr&amp;state=r_aFqM9wsSpPvyKyyzE_nagGnpNKUp1pLyZafOEO0go%3A%2Fip</code></p> <p>What should be configured differently to avoid the CSRF error?</p>
Jackie
<p>In my case it was because I needed to set the cookie to <code>secure = false</code>. Apparently secure cookies still worked without a problem over plain HTTP with an IP address, but once I deployed behind a domain it failed.</p>
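<p>For reference, a sketch of where that flag lives when OAuth2 Proxy runs as a container (the image tag and upstream are examples); only keep secure cookies disabled while you are still serving plain HTTP:</p> <pre class="lang-yaml prettyprint-override"><code>containers:
  - name: oauth2-proxy
    image: quay.io/oauth2-proxy/oauth2-proxy:v7.2.0
    args:
      - --http-address=0.0.0.0:4180
      - --upstream=http://my-backend:80
      - --cookie-secure=false     # allow the CSRF/session cookies over plain HTTP
</code></pre>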
Jackie
<p>When using <a href="https://docs.gitlab.com/ee/topics/autodevops/" rel="nofollow noreferrer">GitLab Auto DevOps</a> to build and deploy applications from my repository to <a href="https://microk8s.io/" rel="nofollow noreferrer">microk8s</a>, the build jobs often take a long time to run, eventually timing out. The issue happens 99% of the time, but some builds run through. Often, the build stops at a different point in the build script.</p> <p>The projects do not contain a <code>.gitlab-ci.yml</code> file and fully rely on the Auto DevOps feature to do its magic.</p> <p>For Spring Boot/Java projects, the build often fails when downloading Gradle via the Gradle wrapper; other times it fails while downloading the dependencies themselves. The error message is very vague and not helpful at all:</p> <pre><code>Step 5/11 : RUN /bin/herokuish buildpack build
 ---&gt; Running in e9ec110c0dfe
-----&gt; Gradle app detected
-----&gt; Spring Boot detected
The command '/bin/sh -c /bin/herokuish buildpack build' returned a non-zero code: 35
</code></pre> <p>Sometimes, if you get lucky, the error is different:</p> <pre><code>Step 5/11 : RUN /bin/herokuish buildpack build
 ---&gt; Running in fe284971a79c
-----&gt; Gradle app detected
-----&gt; Spring Boot detected
-----&gt; Installing JDK 11... done
-----&gt; Building Gradle app...
-----&gt; executing ./gradlew build -x check
Downloading https://services.gradle.org/distributions/gradle-7.0-bin.zip
..........10%...........20%...........30%..........40%...........50%...........60%...........70%..........80%...........90%...........100%
To honour the JVM settings for this build a single-use Daemon process will be forked. See https://docs.gradle.org/7.0/userguide/gradle_daemon.html#sec:disabling_the_daemon.
Daemon will be stopped at the end of the build
&gt; Task :compileJava
&gt; Task :compileJava FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':compileJava'.
&gt; Could not download netty-resolver-dns-native-macos-4.1.65.Final-osx-x86_64.jar (io.netty:netty-resolver-dns-native-macos:4.1.65.Final)
&gt; Could not get resource 'https://repo.maven.apache.org/maven2/io/netty/netty-resolver-dns-native-macos/4.1.65.Final/netty-resolver-dns-native-macos-4.1.65.Final-osx-x86_64.jar'.
&gt; Could not GET 'https://repo.maven.apache.org/maven2/io/netty/netty-resolver-dns-native-macos/4.1.65.Final/netty-resolver-dns-native-macos-4.1.65.Final-osx-x86_64.jar'.
&gt; Read timed out
</code></pre> <p>For React/TypeScript projects, the symptoms are similar but the error itself manifests in a different way:</p> <pre><code>[INFO] Using npm v8.1.0 from package.json
/cnb/buildpacks/heroku_nodejs-npm/0.4.4/lib/build.sh: line 179: /layers/heroku_nodejs-engine/toolbox/bin/yj: Permission denied
ERROR: failed to build: exit status 126
ERROR: failed to build: executing lifecycle: failed with status code: 145
</code></pre> <p>The problem seems to occur mostly when the GitLab runners themselves are deployed in Kubernetes. microk8s uses <a href="https://projectcalico.docs.tigera.io/getting-started/kubernetes/" rel="nofollow noreferrer">Project Calico</a> to implement virtual networks.</p> <p>What gives? Why are the error messages so unhelpful? Is there a way to turn up verbose build logs or debug the build steps?</p>
knittl
<p>This seems to be a networking problem caused by incompatible MTU settings between the Calico network layer and Docker's network configuration (and an inability to autoconfigure the MTU correctly?). When the MTU values don't match, network packets get fragmented and the Docker runners fail to complete TLS handshakes. As far as I understand, this only affects DIND (docker-in-docker) runners.</p> <p>Even finding this out requires jumping through a few hoops. You have to:</p> <ol> <li>Start a CI pipeline and wait for the job to &quot;hang&quot;</li> <li><code>kubectl exec</code> into the current/active GitLab runner pod</li> <li>Find out the correct value for the <code>DOCKER_HOST</code> environment variable (e.g. by grepping through <code>/proc/$pid/environ</code>). Very likely, this will be <code>tcp://localhost:2375</code>.</li> <li>Export the value to be used by the <code>docker</code> client: <code>export DOCKER_HOST=tcp://localhost:2375</code></li> <li><code>docker ps</code> and then <code>docker exec</code> into the actual CI job container</li> <li>Use ping and other tools to find proper MTU values (but MTU for what? Docker, Calico, OS, router, …?). Use curl/openssl to verify that (certain) https sites cause problems from inside the DIND container.</li> </ol> <p>Execute</p> <pre><code>microk8s kubectl get -n kube-system cm calico-config -o yaml
</code></pre> <p>and look for the <code>veth_mtu</code> value, which will very likely be set to <code>1440</code>. DIND uses the same MTU and thus fails to send or receive certain network packets (each virtual network needs to add its own header to the network packet, which adds a few bytes at every layer).</p> <p>The naïve fix would be to change the Calico settings to a higher or lower value, but somehow this did not really work, even after re-applying the Calico deployment. Furthermore, the value seems to be reset to its original value from time to time, probably caused by automatic updates to microk8s (which comes as a <a href="https://snapcraft.io/" rel="nofollow noreferrer">Snap</a>).</p> <p>So what is a solution that actually works and is permanent? It is possible to override DIND settings for Auto DevOps by writing a custom <code>.gitlab-ci.yml</code> file that simply includes the Auto DevOps template:</p> <pre class="lang-yaml prettyprint-override"><code>build:
  services:
    - name: docker:20.10.6-dind # make sure to update version
      command: ['--tls=false', '--host=tcp://0.0.0.0:2375', '--mtu=1240']

include:
  - template: Auto-DevOps.gitlab-ci.yml
</code></pre> <p>The <code>build.services</code> definition is copied from the <a href="https://gitlab.com/gitlab-org/gitlab-foss/-/blob/master/lib/gitlab/ci/templates/Jobs/Build.gitlab-ci.yml" rel="nofollow noreferrer"><code>Jobs/Build.gitlab-ci</code></a> template and extended with an additional <code>--mtu</code> option.</p> <p>I've had good experience so far by setting the DIND MTU to 1240, which is 200 bytes lower than Calico's MTU. As an added bonus, it doesn't affect any other pods' network settings.
And for CI builds I can live with non-optimal network settings.</p> <p>References:</p> <ul> <li><a href="https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27300" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27300</a></li> <li><a href="https://projectcalico.docs.tigera.io/networking/mtu" rel="nofollow noreferrer">https://projectcalico.docs.tigera.io/networking/mtu</a></li> <li><a href="https://liejuntao001.medium.com/fix-docker-in-docker-network-issue-in-kubernetes-cc18c229d9e5" rel="nofollow noreferrer">https://liejuntao001.medium.com/fix-docker-in-docker-network-issue-in-kubernetes-cc18c229d9e5</a></li> <li><a href="https://kb.netgear.com/19863/Ping-Test-to-determine-Optimal-MTU-Size-on-Router" rel="nofollow noreferrer">https://kb.netgear.com/19863/Ping-Test-to-determine-Optimal-MTU-Size-on-Router</a></li> </ul>
knittl
<p>I'm writing a program that can deploy to Kubernetes. The main problem that I'm facing is &quot;Offline mode&quot; when I disconnect the computer from the router Kubernetes stops working because it needs the default route in the network interfaces.</p> <p>Does anyone know how to set up Kubernetes so it will work without the default network interface?</p> <p>I tried Minikube and MicroK8S without success.</p>
Nejc
<p>Few Kubernetes installers support air-gapped installation and doing it yourself is way out of scope for a new user. If this is for work, you'll want to talk to some of the major commercial distros (OpenShift I'm pretty sure has an air-gap installer, probably also Tanzu) but for new-user home use you should consider this not an option.</p>
coderanger
<p>Note: solution can use netcat or any other built-in Linux utility</p> <p>I need to implement an initContainer and liveness probe that confirms my redis pod is up for one of my redis dependent pods. I have attempted the netcat solution offered as the answer <a href="https://stackoverflow.com/questions/33243121/abuse-curl-to-communicate-with-redis">here</a> (<code>(printf "PING\r\n"; sleep 1) | nc 10.233.38.133 6379</code>) but I get <code>-NOAUTH Authentication required.</code> error in response. Any way around this? I am aware I could install redis-cli or make a management command in my Django code but would prefer not to. Nor do I want to implement a web server for my Redis instance and use curl command.</p>
bbmhmmad
<p>You could always send in your <code>AUTH</code> command as part of your probe, like:</p> <pre><code>`"AUTH ....\r\nPING\r\n"` </code></pre> <p>Unless you're getting <code>INFO</code> from the server, you don't seem to care about the nature of the response, so no auth is required, just test for <code>NOAUTH</code>.</p>
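<p>Wired into a Kubernetes liveness probe, this could look like the sketch below (assumptions on my part: the password comes from a <code>REDIS_PASSWORD</code> environment variable and <code>redis-service</code> is the Redis host name; adjust both to your setup):</p> <pre class="lang-yaml prettyprint-override"><code>livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      # AUTH first, then PING; succeed only if Redis answers +PONG
      - '(printf "AUTH $REDIS_PASSWORD\r\nPING\r\n"; sleep 1) | nc redis-service 6379 | grep -q "+PONG"'
  initialDelaySeconds: 5
  periodSeconds: 10
</code></pre>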
tadman
<p>I have 2 YAML files with configuration, certs and everything else from 2 different hyperscalers, used to access the Kubernetes clusters in each of them, and I wonder if I can add both of them to my actual .kube/config file. On my Mac I have kind clusters and also one in a VM, and everything is fine: I can see them configured in my config file (one cluster from kind and another running on my VM). But I don't know whether merging these YAMLs can break the file, forcing me to get the config files again.</p> <p>In short, I don't want to use <code>kubectl get ns --kubeconfig=configfile.yaml</code> every single time to access a context for a cluster; instead I want to put them in my config file.</p> <p>Any help will be very appreciated.</p>
Ray Escobar
<p><code>export KUBECONFIG=/path/to/first/config:/path/to/second/config</code></p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</a> has details.</p>
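<p>A sketch of how that looks in practice (the paths and context name are placeholders); if you want the merge to be permanent, you can flatten the merged view back into a single file:</p> <pre class="lang-bash prettyprint-override"><code># make kubectl see both files at once
export KUBECONFIG=$HOME/.kube/config:/path/to/other/config

kubectl config get-contexts          # contexts from both files are listed
kubectl config use-context my-other-cluster

# optionally write the merged result into one file
kubectl config view --flatten &gt; /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
</code></pre>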
coderanger
<p><strong>Summary</strong></p> <p>I have a flask application deployed to Kubernetes with python 2.7.12, Flask 0.12.2 and using requests library. I'm getting a SSLError while using requests.session to send a POST Request inside the container. When using requests sessions to connect to a https url , requests throws a SSLError</p> <p><strong>Some background</strong></p> <ul> <li>I have not added any certificates</li> <li>The project works when I run a docker image locally but after deployment to kubernetes, from inside the container - the post request is not being sent to the url verify=false does not work either</li> </ul> <p><strong>System Info</strong> - What I am using: Python 2.7.12, Flask==0.12.2, Kubernetes, python-requests-2.18.4</p> <p><strong>Expected Result</strong></p> <p>Get HTTP Response code 200 after sending a POST request</p> <p><strong>Error Logs</strong></p> <pre><code>r = adapter.send(request, **kwargs) File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 511, in send raise SSLError(e, request=request) SSLError: HTTPSConnectionPool(host='dev.domain.nl', port=443): Max retries exceeded with url: /ingestion?LrnDevEui=0059AC0000152A03&amp;LrnFPort=1&amp;LrnInfos=TWA_100006356.873.AS-1-135680630&amp;AS_ID=testserver&amp;Time=2018-06-22T11%3A41%3A08.163%2B02%3A00&amp;Token=1765b08354dfdec (Caused by SSLError(SSLEOFError(8, u'EOF occurred in violation of protocol (_ssl.c:661)'),)) </code></pre> <p>/usr/local/lib/python2.7/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: <a href="https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings" rel="nofollow noreferrer">https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings</a> InsecureRequestWarning)</p> <p><strong>Reproduction Steps</strong></p> <pre><code>import requests from flask import Flask, request, jsonify from requests import Request, Session sess = requests.Session() adapter = requests.adapters.HTTPAdapter(max_retries = 200) sess.mount('http://', adapter) sess.mount('https://', adapter) sess.cert ='/usr/local/lib/python2.7/site-packages/certifi/cacert.pem' def test_post(): url = 'https://dev.domain.nl/ingestion/?' header = {'Content-Type': 'application/json', 'Accept': 'application/json'} response = sess.post(url, headers= header, params= somepara, data= json.dumps(data),verify=True) print response.status_code return response.status_code def main(): threading.Timer(10.0, main).start() test_post() if __name__ == '__main__': main() app.run(host="0.0.0.0", debug=True, port=5001, threaded=True) </code></pre> <p>Docker File</p> <pre><code>FROM python:2.7-alpine COPY ./web /web WORKDIR /web RUN pip install -r requirements.txt ENV FLASK_APP app.py EXPOSE 5001 EXPOSE 443 CMD ["python", "app.py"] </code></pre>
StarJedi
<p>The problem may be in the Alpine Docker image that lacks CA certificates. On your laptop code works as it uses CA certs from you local workstation. I would think that running Docker image locally will fail too - so the problem is not k8s.</p> <p>Try to add the following line to the Dockerfile:</p> <pre><code>RUN apk update &amp;&amp; apk add ca-certificates &amp;&amp; rm -rf /var/cache/apk/* </code></pre> <p>It will install CA certs inside the container. </p>
lexsys
<p>From <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">Kubernetes API Concepts &gt; Efficient detection of changes</a>:</p> <blockquote> <p>When retrieving a collection of resources (either namespace or cluster scoped), the response from the API server contains a resourceVersion value. The client can use that resourceVersion to initiate a watch against the API server. When you send a watch request, the API server responds with a stream of changes. These changes itemize the outcome of operations (such as create, delete, and update) that occurred after the resourceVersion you specified as a parameter to the watch request. The overall watch mechanism allows a client to fetch the current state and then subscribe to subsequent changes, without missing any events.</p> </blockquote> <p>When I tried a watch operation (using kubernetes python client) I get a stream of kubernetes events, the events themselves <strong>do not have</strong> a <code>resourceVersion</code>, the object inside the event (<code>kind: Pod</code>) do have <code>resourceVersion</code>.</p> <pre><code>from kubernetes import client,config,watch config.load_kube_config(context='eks-prod') v1 = client.CoreV1Api() watcher = watch.Watch() namespace = 'my-namespace' last_resource_version=0 for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5): print(i['object'].metadata.resource_version) last_resource_version = i['object'].metadata.resource_version </code></pre> <p>The resource version are output in the order they are received and <strong>they are not monotonically increasing</strong> at least in the initial batch of events:</p> <pre><code>380744163 380641499 380710458 380775853 380525082 381044514 380676150 380641735 380566799 380806984 380566585 380710721 378885571 380524650 380710218 380806798 373502190 380566206 381044372 380524615 380676624 380806573 380775663 380605904 380743917 380606153 380676388 380744368 380641258 380775416 380606397 </code></pre> <p>But can I assume that if this watch is disconnected I can <strong>safely</strong> resume from the highest resource version I've seen? In the above case, can I safely resume from <code>381044514</code> (the highest) without missing events?</p>
RubenLaguna
<p>From <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions" rel="nofollow noreferrer">Resource Version Semantics</a></p> <blockquote> <p>You must <strong>not assume resource versions are numeric</strong> or collatable. API clients may only compare two resource versions for equality (this means that you must not compare resource versions for greater-than or less-than relationships).</p> </blockquote> <p>So in principle no you can't use the &quot;highest&quot; resource version because they are not really numeric or sortable. The best you can do is use the latest <code>resourceVersion</code> that you received as is , verbatim. And be prepared to get a <code>resource too old</code> that you are supposed to handle by <strong>retrying without specifying a resource version</strong>, in that case you must also handle the case where you will likely receive some events more than once.</p> <p>This scenario where the <code>resourceVersion</code> in the last event received is not the actual latest/most recent is easily reproducible in EKS 1.21 where the initial response to the watch will return the events in more or less random order. If I send two watch requests simultaneously I'll get the same 30 events but in different order.</p>
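<p>In practice that means treating the version as an opaque bookmark and handling the &quot;resource version too old&quot; (HTTP 410) error by relisting. A rough sketch with the Python client (<code>handle()</code> is a placeholder for your own processing, which should tolerate seeing the same event twice after a relist):</p> <pre><code>from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException

config.load_kube_config(context='eks-prod')
v1 = client.CoreV1Api()
namespace = 'my-namespace'

last_resource_version = None
while True:
    kwargs = {'timeout_seconds': 60}
    if last_resource_version:
        # resume verbatim from the last version seen; never sort or compare these
        kwargs['resource_version'] = last_resource_version
    try:
        for event in watch.Watch().stream(v1.list_namespaced_pod, namespace, **kwargs):
            last_resource_version = event['object'].metadata.resource_version
            handle(event)  # placeholder: must be idempotent, events can repeat
    except ApiException as e:
        if e.status == 410:
            # resource version expired: start over with a fresh list+watch
            last_resource_version = None
        else:
            raise
</code></pre>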
RubenLaguna
<p>I am using openshift with glusterfs as storage system. Dynamic provisioning works very well but always rounds the allocated capacity to the next GB value. E.g.: I request a volume of 400MB but a volume of 1GB is created.</p> <p>Is this behavior configurable? I setup openshift via the advanced installation with openshift/ansible. </p>
siavash9000
<p>It is how Kubernetes works underneath. Where you have static volumes defined, the allocation request is used to grab the best match available. So if there isn't one of the exact size, it will grab the next size up. It isn't able to split up a persistent volume and just give part of it to you. It also doesn't enforce any limit, so although you request 400MB, you will be able to use up to the 1GB the persistent volume provides.</p> <p>If you are trying to be economical with storage space, and provided the storage is of type ReadWriteMany, you could use one persistent volume claim for multiple applications, by specifying that a sub path from the volume should be mounted in each case into the respective containers. Just realise there is no quota to prevent one application from using up all the storage from the persistent volume, so be careful, for example, with sharing a persistent volume between a database and some other application which could run rampant and use all the space, as the last thing you want is to run out of space for the database.</p>
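<p>A small sketch of the sub-path idea (the claim name, sub-directories and images are made up): two containers mounting different directories of the same ReadWriteMany claim:</p> <pre class="lang-yaml prettyprint-override"><code>containers:
  - name: web
    image: example/web:1.0
    volumeMounts:
      - name: shared-storage
        mountPath: /var/www/uploads
        subPath: web-uploads          # directory inside the shared volume
  - name: worker
    image: example/worker:1.0
    volumeMounts:
      - name: shared-storage
        mountPath: /data
        subPath: worker-data
volumes:
  - name: shared-storage
    persistentVolumeClaim:
      claimName: shared-rwx-claim     # the ReadWriteMany claim shared by both
</code></pre>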
Graham Dumpleton
<p><em>This concerns applications which are programmed to use the Kubernetes API.</em></p> <p>Should we assume that OpenShift Container Platform, from a Kubernetes standpoint, matches all the standards that OpenShift Origin (and Kubernetes) does?</p> <p><em>Background</em></p> <p>Compatibility testing cloud native apps that are shipped can include a large matrix. It seems that, as a baseline, if OCP is meant to be a pure Kubernetes distribution with add-ons, then testing against it is trivial, so long as you are only using core Kubernetes features.</p> <p>Alternatively, if shipping an app with support on OCP means you must test OCP, that would, to me, imply that (1) the app uses OCP functionality or (2) the app uses kube functionality which may not work correctly in OCP, which should be considered a bug.</p>
jayunit100
<p>In practice you should be able to regard OpenShift Container Platform (OCP) as being the same as OKD (previously known as Origin). This is because it is effectively the same software and setup.</p> <p>In comparing both of these to plain Kubernetes there are a few things you need to keep in mind.</p> <p>The OpenShift distribution of Kubernetes is set up as a multi-tenant system, with a clear distinction between normal users and administrators. This means RBAC is set up so that a normal user is restricted in what they can do out of the box. A normal user cannot for example create arbitrary resources which affect the whole cluster. They also cannot run images that will only work if run as <code>root</code>, as they run under a default service account which doesn't have such rights. That default service account also has no access to the REST API. A normal user has no privileges to enable the ability to run such images as <code>root</code>. A user who is a project admin could allow an application to use the REST API, but what it could do via the REST API will be restricted to the project/namespace it runs in.</p> <p>So if you develop an application on Kubernetes where you have an expectation that you have full admin access, and can create any resources you want, or assume there is no RBAC/SCC in place that will restrict what you can do, you can have issues getting it running.</p> <p>This doesn't mean you can't get it working, it just means that you need to take extra steps so you or your application is granted extra privileges to do what it needs.</p> <p>This is the main area where people have issues and it is because OpenShift is set up to be more secure out of the box to suit a multi-tenant environment for many users, or even to separate different applications so that they cannot interfere with each other.</p> <p>The next thing worth mentioning is Ingress. When Kubernetes first came out, it had no concept of Ingress. To fill that hole, OpenShift implemented the concept of Routes. Ingress only came much later, and was based in part on what was done in OpenShift with Routes. That said, there are things you can do with Routes which I believe you still can't do with Ingress.</p> <p>Anyway, obviously, if you use Routes, that only works on OpenShift as a plain Kubernetes cluster only has Ingress. If you use Ingress, you need to be using OpenShift 3.10 or later. In 3.10, there is an automatic mapping of Ingress to Route objects, so I believe Ingress should work even though OpenShift actually implements Ingress under the covers using Routes and its haproxy router setup.</p> <p>There are obviously other differences as well. OpenShift has DeploymentConfig because Kubernetes never originally had Deployment. Again, there are things you can do with DeploymentConfig you can't do with Deployment, but the Deployment object from Kubernetes is supported. One difference with DeploymentConfig is how it works with ImageStream objects in OpenShift, which don't exist in Kubernetes. Stick with Deployment/StatefulSet/DaemonSet and don't use the OpenShift objects which were created when Kubernetes didn't have such features, and you should be fine.</p> <p>Do note though that OpenShift takes a conservative approach on some resource types and so they may not be enabled by default. This is for things that are still regarded as alpha, or are otherwise in very early development and subject to change.
You should avoid things which are still in development even if using plain Kubernetes.</p> <p>That all said, for the core Kubernetes bits, OpenShift is verified for conformance against CNCF tests for Kubernetes. So use what is covered by that and you should be okay.</p> <ul> <li><a href="https://www.cncf.io/certification/software-conformance/" rel="noreferrer">https://www.cncf.io/certification/software-conformance/</a></li> </ul>
Graham Dumpleton
<p>I read a bit about <code>Deployment</code> vs <code>StatefulSet</code> in Kubernetes. We usually need <code>StatefulSet</code> when we have a stateful app, so every pod can have its own volume.</p> <p>Now, I have a task to introduce persistence for <code>RabbitMq</code>. I will have only one pod replica of <code>RabbitMq</code>. Can I do it with <code>Deployment</code>? I don't see any problem with this. That one <code>RabbitMq</code> replica will have its own <code>PersistentVolume</code>(it will not share volume with other pods since I have only one replica). Also, I would say that if for any reason my <code>RabbitMq</code> pod gets restarted, it will continue to read and write from the same storage as before restart.</p> <p>Am I missing something?</p>
Spasoje Petronijević
<p>Even with 1 replica, a StatefulSet still gets you some things, like a stable network ID. You are right that most features stop mattering, but it's really up to your specific needs.</p>
coderanger
<p>I'm facing an issue with the deployment of my Node.js application on my Kubernetes cluster.</p> <p>The container is stuck crash-looping with the error &quot;Back-off restarting failed container&quot;, and as the error code I have &quot;Reason: Error - exit code: 243&quot;.</p> <p>I did a describe of the pod and found nothing except the &quot;Back-off restarting failed container&quot;. If someone could help, that would be great. Thanks!</p>
Blitz crank
<p>I'm not sure why this worked, but it seems to be something with using <code>npm run...</code> to start the node service. I experimented with changing my Docker file to launch the container using:</p> <p><code>CMD npm run start</code></p> <p>To just running the node command, using exactly what NPM should have been running, directly:</p> <p><code>CMD node ...</code></p> <p>EDIT:</p> <p>In our environment it was an access problem. To get NPM working, we had to chown all the directories:</p> <p><code>COPY --chown=${uid}:${gid} --from=builder /app .</code></p>
Jereme
<p>For some reason Kubernetes cannot pull an image from my private account on Docker Hub. I tried all possible ways of creating a secret (from config.json, by providing credentials directly on the command line) but still no success.</p> <p>Last time I did <code>docker login</code> and executed the following command to create the secret:</p> <pre><code>kubectl create secret docker-registry dockerhub-credentials --from-file=.dockerconfigjson=/home/myuser/.docker/config.json
</code></pre> <p>I also tried the following command (which is the same, but I thought there might be a bug in <code>kubectl</code> that doesn't recognize parameters correctly):</p> <pre><code>kubectl create secret generic dockerhub-credentials --from-file=.dockerconfigjson=/home/myuser/.docker/config.json --type=kubernetes.io/dockerconfigjson
</code></pre> <p>After the deployment I can see the following in the pod's YAML file:</p> <pre><code>spec:
  volumes:
  ...
  containers:
  - name: container-name
    image: 'username/projects:web_api_123'
    ports:
    - containerPort: 80
      protocol: TCP
    ...
    imagePullPolicy: IfNotPresent
  ...
  imagePullSecrets:
  - name: dockerhub-credentials
</code></pre> <p>The image name is correct (I verified), and a secret with Docker Hub credentials was correctly assigned to my pod. I even patched the default service account! But it still doesn't work.</p>
Volodymyr Usarskyy
<p>OK, the problem lies in namespaces: all my deployments, pods, services, etc. live inside a separate namespace, BUT the command that creates the secret does so in the 'default' namespace.</p> <p>For some reason, I thought that secrets in the 'default' namespace are visible from other namespaces, which is not the case. So, if you want to create a Docker config secret, you have to create it in the same namespace as your workloads, for example using YAML:</p> <pre><code>kind: Secret
apiVersion: v1
metadata:
  name: dockerhub-credentials
  namespace: your-namespace
data:
  .dockerconfigjson: base64-encoded-/.docker/config.json
type: kubernetes.io/dockerconfigjson
</code></pre>
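<p>Alternatively, the original <code>kubectl create secret docker-registry</code> command works too, as long as it targets the right namespace (a sketch using the namespace name from the YAML above):</p> <pre><code>kubectl create secret docker-registry dockerhub-credentials \
  --from-file=.dockerconfigjson=/home/myuser/.docker/config.json \
  -n your-namespace
</code></pre>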
Volodymyr Usarskyy
<p>I need to deploy Grafana in a Kubernetes cluster in a way so that I can have multiple persistent volumes stay in sync - similar to what they <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">did here</a>.</p> <p>Does anybody know how I can use the <a href="https://ishanul.medium.com/kubernetes-statefulsets-a90891ae769d" rel="nofollow noreferrer">master/slave architecture</a> so that only 1 pod writes while the others read? How would I keep them in sync? Do I need additional scripts to do that? Can I use Grafana's built-in sqlite3 database or do I have to set up a different one (Mysql, Postgres)?</p> <p>There's really not a ton of documentation out there about how to deploy statefulset applications other than Mysql or MongoDB.</p> <p>Any guidance, experience, or even so much as a simple suggestion would be a huge help. Thanks!</p>
FestiveHydra235
<ol> <li>StatefulSets are not what you think and have nothing to do with replication. They just handle the very basics of provisioning storage for each replica.</li> <li>The way you do this is as you said by pointing Grafana at a &quot;real&quot; database rather than local Sqlite.</li> <li>Once you do that, you use a Deployment because Grafana itself is then stateless like any other webapp.</li> </ol>
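<p>A sketch of points 2 and 3 combined (the env var names follow Grafana's <code>GF_&lt;SECTION&gt;_&lt;KEY&gt;</code> convention; the image tag, host, database and secret names are made up):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 2                          # all replicas share the external database
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:9.5.2
          env:
            - name: GF_DATABASE_TYPE
              value: postgres
            - name: GF_DATABASE_HOST
              value: grafana-db:5432
            - name: GF_DATABASE_NAME
              value: grafana
            - name: GF_DATABASE_USER
              value: grafana
            - name: GF_DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana-db-credentials
                  key: password
</code></pre>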
coderanger
<p>I created a microk8s cluster, pods could be listed by <code>get pod</code> command:</p> <pre><code>ubuntu@ip-172-31-16-34:~$ microk8s.kubectl get pod --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-f7867546d-mlsbm 1/1 Running 1 98m kube-system hostpath-provisioner-65cfd8595b-l2hjz 1/1 Running 1 98m kube-system tiller-deploy-758bcdc94f-cwbjd 1/1 Running 0 93m seldon-system seldon-controller-manager-54955d8675-72qxn 1/1 Running 0 33m </code></pre> <p>However, I tried to list containers with ctr, nothing showing</p> <pre><code>ubuntu@ip-172-31-16-34:~$ microk8s.ctr c ls CONTAINER IMAGE RUNTIME </code></pre> <hr> <p>also try image list</p> <pre><code>$ microk8s.ctr image list REF TYPE DIGEST SIZE PLATFORMS LABELS </code></pre> <p>nothing :P maybe I need to find which namespace it used ?</p>
qrtt1
<p>I found the correct namespace for <code>microk8s.ctr</code> from issue <a href="https://github.com/ubuntu/microk8s/issues/756" rel="noreferrer">https://github.com/ubuntu/microk8s/issues/756</a></p> <p>it works after adding <code>-n k8s.io</code></p> <pre><code>ubuntu@ip-172-31-16-34:~$ microk8s.ctr -n k8s.io c ls | head CONTAINER IMAGE RUNTIME 040bd2dcc65ecbd5cd6fc6621ed8059864d0b9f33ac1a5bac129ba3da9d45993 k8s.gcr.io/pause:3.1 io.containerd.runtime.v1.linux 04b368611ede93ad9bcc90c1cca2e0285697a85e51afb7a8acd60e73ee27dc2a k8s.gcr.io/pause:3.1 io.containerd.runtime.v1.linux 050b0a44da4f89b34a4d415c0b584dc6c01fad3ba4ad5676e291113efe889099 k8s.gcr.io/pause:3.1 io.containerd.runtime.v1.linux 0e807caf6967f11eff003fb4dd756b1c9665b3c72297903189b3478fe7b46bc1 k8s.gcr.io/pause:3.1 io.containerd.runtime.v1.linux 144f38f7bd30bdff65a79fd627f52545612cc8669e5851ca4e6d80b99004b546 k8s.gcr.io/pause:3.1 io.containerd.runtime.v1.linux 164bc117c9b128632be3466ce50408be5bf32e68bcc3fd6e062d7f1ec2ab89f6 k8s.gcr.io/pause:3.1 io.containerd.runtime.v1.linux 16fae375f02bc617dd99f102f0230954ec71a4850c3428b86215b05977679a24 k8s.gcr.io/pause:3.1 io.containerd.runtime.v1.linux 18389fce9a2c4bd4fab9a0e2d905592a9df8b73a7bdf1e42a314b7e7e557187e docker.io/jupyterhub/configurable-http-proxy:4.1.0 io.containerd.runtime.v1.linux 1e56ccf5a49df5b3acda2ca0634bc8661da91476c0a611deeb96cd2190b66985 docker.io/metacontroller/jsonnetd:0.1 io.containerd.runtime.v1.linux </code></pre>
qrtt1
<p>I get this log error for a pod like below but I updated kubernetes orchestrator, clusters, and nodes to kubernetes v1.21.2. Before updating it, they were v1.20.7. I found a reference that from v1.21, selfLink is completely removed. Why am I getting this error? How can I resolve this issue?</p> <p><strong>error log for kubectl logs (podname)</strong></p> <pre><code>... 2021-08-10T03:07:19.535Z INFO setup starting manager 2021-08-10T03:07:19.536Z INFO controller-runtime.manager starting metrics server {&quot;path&quot;: &quot;/metrics&quot;} E0810 03:07:19.550636 1 event.go:247] Could not construct reference to: '&amp;v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:&quot;&quot;, APIVersion:&quot;&quot;}, ObjectMeta:v1.ObjectMeta{Name:&quot;controller-leader-election-helper&quot;, GenerateName:&quot;&quot;, Namespace:&quot;kubestone-system&quot;, SelfLink:&quot;&quot;, UID:&quot;b01651ed-7d54-4815-a047-57b16d26cfdf&quot;, ResourceVersion:&quot;65956&quot;, Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764161639, loc:(*time.Location)(0x21639e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{&quot;control-plane.alpha.kubernetes.io/leader&quot;:&quot;{\&quot;holderIdentity\&quot;:\&quot;kubestone-controller-manager-f467b7c47-cv7ws_1305bc36-f988-11eb-81fc-a20dfb9758a2\&quot;,\&quot;leaseDurationSeconds\&quot;:15,\&quot;acquireTime\&quot;:\&quot;2021-08-10T03:07:19Z\&quot;,\&quot;renewTime\&quot;:\&quot;2021-08-10T03:07:19Z\&quot;,\&quot;leaderTransitions\&quot;:0}&quot;}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:&quot;&quot;, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:&quot;manager&quot;, Operation:&quot;Update&quot;, APIVersion:&quot;v1&quot;, Time:(*v1.Time)(0xc0000956a0), Fields:(*v1.Fields)(nil)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'kubestone-controller-manager-f467b7c47-cv7ws_1305bc36-f988-11eb-81fc-a20dfb9758a2 became leader' 2021-08-10T03:07:21.636Z INFO controller-runtime.controller Starting Controller {&quot;controller&quot;: &quot;kafkabench&quot;} ... 
</code></pre> <p><strong>kubectl get nodes to show kubernetes version: the node that the pod is scheduled is aks-default-41152893-vmss000000</strong></p> <pre><code>PS C:\Users\user&gt; kubectl get nodes -A NAME STATUS ROLES AGE VERSION aks-default-41152893-vmss000000 Ready agent 5h32m v1.21.2 aks-default-41152893-vmss000001 Ready agent 5h29m v1.21.2 aksnpwi000000 Ready agent 5h32m v1.21.2 aksnpwi000001 Ready agent 5h26m v1.21.2 aksnpwi000002 Ready agent 5h19m v1.21.2 </code></pre> <p><strong>kubectl describe pods (pod name: kubestone-controller-manager-f467b7c47-cv7ws)</strong></p> <pre><code>PS C:\Users\user&gt; kubectl describe pods kubestone-controller-manager-f467b7c47-cv7ws -n kubestone-system Name: kubestone-controller-manager-f467b7c47-cv7ws Namespace: kubestone-system Priority: 0 Node: aks-default-41152893-vmss000000/10.240.0.4 Start Time: Mon, 09 Aug 2021 23:07:16 -0400 Labels: control-plane=controller-manager pod-template-hash=f467b7c47 Annotations: &lt;none&gt; Status: Running IP: 10.240.0.21 IPs: IP: 10.240.0.21 Controlled By: ReplicaSet/kubestone-controller-manager-f467b7c47 Containers: manager: Container ID: containerd://01594df678a2c1d7163c913eff33881edf02e39633b1a4b51dcf5fb769d0bc1e Image: user2/imagename Image ID: docker.io/user2/imagename@sha256:aa049f135931192630ceda014d7a24306442582dbeeaa36ede48e6599b6135e1 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /manager Args: --enable-leader-election State: Running Started: Mon, 09 Aug 2021 23:07:18 -0400 Ready: True Restart Count: 0 Limits: cpu: 100m memory: 30Mi Requests: cpu: 100m memory: 20Mi Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jvjjh (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-jvjjh: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 23m default-scheduler Successfully assigned kubestone-system/kubestone-controller-manager-f467b7c47-cv7ws to aks-default-41152893-vmss000000 Normal Pulling 23m kubelet Pulling image &quot;user2/imagename&quot; Normal Pulled 23m kubelet Successfully pulled image &quot;user2/imagename&quot; in 354.899039ms Normal Created 23m kubelet Created container manager Normal Started 23m kubelet Started container manager </code></pre>
yunlee
<p>Kubestone has had no releases since 2019, it needs to upgrade its copy of the Kubernetes Go client. That said, this appears to only impact the event recorder system so probably not a huge deal.</p>
coderanger
<p>Currently I've deployed Spark to <code>minikube</code> in my local machine. Pod and its containers are up and running, and I've already checked that port <code>7077</code> is listening from the host machine (local machine).</p> <p>Now I want to <code>spark-submit</code> from the host machine. Thus, I've downloaded Spark's binaries and I've moved them to <code>c:\bin\spark-3.2.1-bin-hadoop3.2</code>, and I've added <code>c:\bin\spark-3.2.1-bin-hadoop3.2\bin</code> to the <code>PATH</code>.</p> <p>When I run <code>spark-submit</code>as follows...</p> <pre><code>spark-submit --class org.apache.spark.deploy.dotnet.DotnetRunner --master spark.local:7077 microsoft-spark-3-2_2.12-2.1.1.jar dotnet C:\projects\xxx\xxx-dotnet-solution\xx-services/infrastructure/etl-service/Spark/bin/Debug/netcoreapp3.1/xxx.xx.Services.Infraestructure.ETLService.Spark.dll </code></pre> <p>...I get the following error <code>org.apache.spark.SparkException: Could not parse Master URL: 'spark.local'</code>.</p> <p>I'm not sure if I'm mistaken, and maybe the issue is I can't <code>spark-submit</code> from my local machine to the remote Spark. Is this ever possible?</p>
Matías Fidemraizer
<p>According to the <a href="https://spark.apache.org/docs/latest/submitting-applications.html#master-urls" rel="nofollow noreferrer">master URL docs</a> that parameter accepts either some keywords like <code>local</code>, <code>yarn</code> or specific URL protocols, <code>spark://</code>, <code>mesos://</code>, <code>k8s://</code>. It can't handle machine or domain names.</p> <p>In the <a href="https://learn.microsoft.com/en-us/dotnet/spark/tutorials/get-started?tabs=windows" rel="nofollow noreferrer">.NET for Apache Spark tutorial</a> the command uses the <code>local</code> keyword, not a host name :</p> <pre><code>spark-submit ^ --class org.apache.spark.deploy.dotnet.DotnetRunner ^ --master local ^ microsoft-spark-3-0_2.12-&lt;version&gt;.jar ^ dotnet MySparkApp.dll &lt;path-of-input.txt&gt; </code></pre> <p>From the docs :</p> <blockquote> <p>The master URL passed to Spark can be in one of the following formats:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Master URL</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>local</td> <td>Run Spark locally with one worker thread (i.e. no parallelism at all).</td> </tr> <tr> <td>local[K]</td> <td>Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine).</td> </tr> <tr> <td>local[K,F]</td> <td>Run Spark locally with K worker threads and F maxFailures (see spark.task.maxFailures for an explanation of this variable).</td> </tr> <tr> <td>local[*]</td> <td>Run Spark locally with as many worker threads as logical cores on your machine.</td> </tr> <tr> <td>local[*,F]</td> <td>Run Spark locally with as many worker threads as logical cores on your machine and F maxFailures.</td> </tr> <tr> <td>local-cluster[N,C,M]</td> <td>Local-cluster mode is only for unit tests. It emulates a distributed cluster in a single JVM with N number of workers, C cores per worker and M MiB of memory per worker.</td> </tr> <tr> <td>spark://HOST:PORT</td> <td>Connect to the given Spark standalone cluster master. The port must be whichever one your master is configured to use, which is 7077 by default.</td> </tr> <tr> <td>spark://HOST1:PORT1,HOST2:PORT2</td> <td>Connect to the given Spark standalone cluster with standby masters with Zookeeper. The list must have all the master hosts in the high availability cluster set up with Zookeeper. The port must be whichever each master is configured to use, which is 7077 by default.</td> </tr> <tr> <td>mesos://HOST:PORT</td> <td>Connect to the given Mesos cluster. The port must be whichever one your is configured to use, which is 5050 by default. Or, for a Mesos cluster using ZooKeeper, use mesos://zk://.... To submit with --deploy-mode cluster, the HOST:PORT should be configured to connect to the MesosClusterDispatcher.</td> </tr> <tr> <td>yarn</td> <td>Connect to a YARN cluster in client or cluster mode depending on the value of --deploy-mode. The cluster location will be found based on the HADOOP_CONF_DIR or YARN_CONF_DIR variable.</td> </tr> <tr> <td>k8s://HOST:PORT</td> <td>Connect to a Kubernetes cluster in client or cluster mode depending on the value of --deploy-mode. The HOST and PORT refer to the Kubernetes API Server. It connects using TLS by default. In order to force it to use an unsecured connection, you can use k8s://http://HOST:PORT.</td> </tr> </tbody> </table> </div></blockquote>
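<p>If the intent really is to submit to the standalone master exposed on port 7077 (rather than running locally), the URL needs the <code>spark://</code> scheme in front of the host name, for example (host name taken from the question, shown only as an illustration):</p>

<pre><code>spark-submit --class org.apache.spark.deploy.dotnet.DotnetRunner --master spark://spark.local:7077 microsoft-spark-3-2_2.12-2.1.1.jar dotnet &lt;path-of-your-app.dll&gt;
</code></pre>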
Panagiotis Kanavos
<p>Somehow, I have 2 versions of fluentd running in my cluster:</p> <p><a href="https://i.stack.imgur.com/73Lt6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/73Lt6.png" alt="enter image description here"></a></p> <p>They end up fighting over the same port, they just keep cranking away, trying to start up on that port, and it saturates all the CPU in the cluster.</p> <p><code>unexpected error error_class=Errno::EADDRINUSE error="Address already in use - bind(2) for 0.0.0.0:24231</code> <code>/opt/google-fluentd/embedded/lib/ruby/2.6.0/socket.rb:201:in 'bind'</code></p> <p>I've tried deleting the daemon sets and deployments, they just keep coming back. Also tried ssh'ing into the machines and killing the process on that port. Nothing seems to work.</p> <p>Obviously, I only want one version of fluentd to run (and I'm not even sure which one).</p>
Mark
<p>I seem to have fixed it. I went to GCP dashboard cluster edit page, <code>Kubernetes Engine Monitoring</code> dropdown <strong>was blank</strong>. It seems not even the dropdown could decide what to display here.</p> <p>It seems the automated agent, or whatever, seriously messed up here, and had 2 versions of the logging and monitoring system running, fighting over a port, and crushing the CPU on every machine in the cluster. On top of that, I couldn't delete the daemon sets, pods, or deployments. It seems Google treats these as special somehow, maybe with some kind of automated agent, I don't know.</p> <p>From the dropdown, I just selected <code>System and workload logging and monitoring</code>, saved, and it applied the changes.</p> <p>Everything looking good so far, but this whole event has me worried, I didn't do anything. This just....happened.</p> <p>This is a dev cluster, but if it was a production cluster...</p>
Mark
<p>I am following argocd-autopilot <a href="https://github.com/argoproj-labs/argocd-autopilot/blob/main/docs/Getting-Started.md" rel="nofollow noreferrer">Getting Started</a> guide on windows 10 using powershell.</p> <p>I am creating these env variable:</p> <p>$env:GIT_TOKEN = ghp_oOaezyetwer345345</p> <p>$env:GIT_REPO = <a href="https://github.com/myorg/argocdinfra" rel="nofollow noreferrer">https://github.com/myorg/argocdinfra</a></p> <p>I made sure my kubernetes context is minikube and run Bootstrap Argo-CD command:</p> <p><strong>argocd-autopilot repo bootstrap</strong></p> <p>After executing this command I could see the repo $env:GIT_REPO being created with folders:</p> <ul> <li>apps</li> <li>bootstrap</li> <li>projects</li> </ul> <p>After clonning $env:GIT_REPO, here is the content of argo-cd\kustomization.yaml that, I think is supposed to manage argo-cd itself.</p> <p>After loging in to ArgoCD GUI I can see 3 apps, but Argo-CD app in in Uknown status and there is one error: ComparisonError rpc error: code = Unknown desc = Manifest generation error (cached): bootstrap\argo-cd: app path does not exist</p> <p>Content of: C:\argocdinfra\bootstrap\argo-cd\kustomization.yaml</p> <pre class="lang-html prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1 configMapGenerator: - behavior: merge literals: - | repository.credentials=- passwordSecret: key: git_token name: autopilot-secret url: https://github.com/ usernameSecret: key: git_username name: autopilot-secret name: argocd-cm kind: Kustomization namespace: argocd resources: - github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.2.28 </code></pre> <p><strong>I don't quit understand this customization and if I need to do anything else in either my git repo or argocd GUI after initial setup.</strong></p> <p><strong>What I tried was set repository.credentials url = <a href="https://github.com/myorg/argocdinfra" rel="nofollow noreferrer">https://github.com/myorg/argocdinfra</a> but I still get the same error.</strong></p> <p><strong>Is this related to the fact that I use a private github org? Note autopilot-bootstrap and root applications are OK</strong></p> <p>[<img src="https://i.stack.imgur.com/acsc7.png" alt="ArgoCD argo-cd application error2" /></p>
Rad
<p>I have responded to your <a href="https://github.com/argoproj-labs/argocd-autopilot/issues/454" rel="nofollow noreferrer">issue</a> in the argocd-autopilot issues page. I <em>think</em> you might be running an outdated version of the binary. The original problem was with the <code>argo-cd.yaml</code> file trying to reference <code>bootstrap\argo-cd</code>, while it should be <code>bootstrap/argo-cd</code> - this issue was resolved in v0.4.11.</p> <p>Please let us know if you are running an updated version and still encounter the same issue, so we can find the root cause and fix it.</p>
Noam Gal
<p>I need to set a custom error in traefik ingress on kubernetes so that when there is no endpoint or when the status is "404", or "[500-600]" it redirects to another error service or another custom error message I used the annotation as it's in the documentation in the ingress file as this (Note: this a helm template output of passing the annotation as a yaml in the values.yaml file)</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: frontend namespace: "default" annotations: external-dns.alpha.kubernetes.io/target: "domain.com" kubernetes.io/ingress.class: "traefik" traefik.ingress.kubernetes.io/error-pages: "map[/:map[backend:hello-world status:[502 503]]]" spec: rules: - host: frontend.domain.com http: paths: - backend: serviceName: frontend servicePort: 3000 path: / </code></pre>
yara mohamed
<p>The answer by ldez is correct, but there are a few caveats:</p> <ul> <li>First off, these annotations only work for traefik >= 1.6.x (earlier versions may support error pages, but not for the kubernetes backend)</li> <li>Second, the traefik backend <strong>must</strong> be configured through kubernetes. You cannot create a backend in a config file and use it with kubernetes, at least not in traefik 1.6.x </li> </ul> <p>Here's how the complete thing looks like. <code>foo</code> is just a name, as explained in the other answer, and can be anything:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: frontend namespace: "default" annotations: external-dns.alpha.kubernetes.io/target: "domain.com" kubernetes.io/ingress.class: "traefik" traefik.ingress.kubernetes.io/error-pages: |- foo: status: - "404" - "500" # See below on where "error-pages" comes from backend: error-pages query: "/{{status}}.html" spec: rules: # This creates an ingress on an non-existing host name, # which binds to a service. As part of this a traefik # backend "error-pages" will be created, which is the one # we use above - host: error-pages http: paths: - backend: serviceName: error-pages-service servicePort: https - host: frontend.domain.com http: # The configuration for your "real" Ingress goes here # This is the service to back the ingress defined above # Note that you can use anything for this, including an internal app # Also: If you use https, the cert on the other side has to be valid --- kind: Service apiVersion: v1 metadata: name: error-pages-service namespace: default spec: ports: - name: https port: 443 type: ExternalName externalName: my-awesome-errors.mydomain.test </code></pre> <p>If you use this configuration, and your app sends a 404, then <code>https://my-awesome-errors.mydomain.test/404.html</code> would be shown as the error page.</p>
averell
<p>I have created an image of my TWAS application and deployed it in a container inside an openshift POD. In my TWAS ND I use to go to the admin console WebSphere environment truststore on a node on a virtual machine and set up TLS certificates so my application can have communication with external API's in the secure communication channel HTTPS. These certificates are public certificates and don't have any private keys. They are .crt and .pem files. Now I am wondering how I can set up my third-party TLS certificates for my application running inside the POD as a container? I don't want to make any code changes to my J2EE application which I have migrated from on-prem VM to Openshift.</p> <p><strong>Note:</strong> I am using TWAS base runtime here and not liberty for my newly migrated J2EE app on openshift.</p>
Marcer
<p>When you build your application image, you can add a trusted signer and a short script into /work/ prior to configure.sh</p>

<p><a href="https://www.ibm.com/docs/en/was/9.0.5?topic=tool-signercertificatecommands-command-group-admintask-object#rxml_atsignercert__cmd1" rel="nofollow noreferrer">https://www.ibm.com/docs/en/was/9.0.5?topic=tool-signercertificatecommands-command-group-admintask-object#rxml_atsignercert__cmd1</a></p>

<pre><code>AdminTask.addSignerCertificate('[-keyStoreName NodeDefaultTrustStore -certificateAlias signer1 -certificateFilePath /work/signer.pem -base64Encoded true]')
AdminConfig.save()
</code></pre>

<p>The root signer might not be either the pem/crt you have; those could be the issued certificate and the intermediate signers. WebSphere allows you to set up the trust at any level, but it's ideal to trust the root CA that issued the cert.</p>
covener
<p>Is there a way to reference a secret value from a configmap?</p> <p>Example:</p> <p><strong>CONFIGMAP: app.properties</strong></p> <pre><code>context-path=/test-app1 dbhost=www.db123.com username=user1 password=[getValueFromSecret] </code></pre> <p>the value of password here is saved in k8s secret</p>
letthefireflieslive
<p>Not in core, but you can use the configmapsecrets operator for this. <a href="https://github.com/machinezone/configmapsecrets" rel="nofollow noreferrer">https://github.com/machinezone/configmapsecrets</a></p> <p>Helm also has accessors to do it client side.</p>
coderanger
<h2>Deployment overview</h2> <p>We are using the Azure Gateway Ingress Controller (AGIC) to automatically create listeners and back-ends on an app gateway for ingresses in our AKS cluster</p> <p><a href="https://argoproj.github.io/argo-cd/" rel="nofollow noreferrer">ArgoCD</a> is deployed to the K8s cluster to create applications. When ArgoCD creates an app, it pulls a helm chart from a git repo created for that instance of our app, and creates the app</p> <p>The app is created with a Persistent Volume Claim to an Azure Storage File folder to store user data. It also gets an Ingress for the app that is labelled so that AGIC creates it in the App Gateway.</p> <p>When everything works, all is well. I can access my argocd on one hostname, and each of my deployed apps on their hostnames - all through the App Gateway that is being maintained by AGIC</p> <h2>Problem description</h2> <p>When one of my pods fails to start (because the storage key used by the PVC is incorrect), then AGIC updates the app gateway to remove my argoCD backend, which still works correctly.</p> <p>AGIC <em>deletes</em> my working ARGOCD back-end.</p> <p>If I delete the failed pod, AGIC deploys my HTTP back-end for ArgoCD again on the app gateway.</p> <h2>Questions:</h2> <ol> <li>How can I troubleshoot <em>why</em> AGIC removes the ArgoCD back-end? Is there a log I can enable that will tell me in detail how it is making deployment decisions?</li> <li>Is there anything I can do on AKS to try and separate the ArgoCD from the pods so that AGIC doesn't remove the back-end for ArgoCD when a pod is broken? (they are already deployed in different namespaces)</li> </ol>
Joon
<p>There appears to be a bug in AGIC where when some back-ends are resolved, and some are not, as soon as the first back-end in the list is unresolved, the rest of the backends are not created.</p> <p>I have logged the following issue in Github to get it fixed: <a href="https://github.com/Azure/application-gateway-kubernetes-ingress/issues/1054" rel="nofollow noreferrer">https://github.com/Azure/application-gateway-kubernetes-ingress/issues/1054</a></p> <p>I found this by setting the logging parameter for AGIC to level 5, reviewing the logs and matching up the log messages to the AGIC source code in that repo.</p>
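<p>For reference, a sketch of how that was done, assuming AGIC was installed through its Helm chart (the chart and release names below are illustrative; adjust them to your installation). The chart exposes a <code>verbosityLevel</code> value that controls how much detail the controller logs:</p>

<pre><code>helm upgrade ingress-azure application-gateway-kubernetes-ingress/ingress-azure \
  --reuse-values --set verbosityLevel=5
</code></pre>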
Joon
<p>I have a Spring Boot application, dockerized, and deployed in a Kubernetes cluster. Is there any way to log the pod name and pod IP from the Spring Boot application inside the container?</p> <p>Thanks in advance.</p>
rocky
<p>One approach is to run a Fluentd agent on each cluster node. The agent collects all pod sysouts, decorates the logs with pod attributes and pipes them into ElasticSearch or some other searchable store. ala <a href="https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd" rel="nofollow noreferrer">kubernetes-fluentd</a></p>
MarkOfHall
<p>I have built two services in a k8s cluster. How can they interact with each other? If I want to make an HTTP request from one service to another, I know I can't use localhost, but how can I know the host when I am coding?</p>
John Wu
<p>Service objects are automatically exposed in DNS as <code>&lt;servicename&gt;.&lt;namespace&gt;.svc.&lt;clusterdomain&gt;</code> where <code>clusterdomain</code> is usually <code>cluster.local</code>. The default resolv.conf allows for relative lookups so if the service is in the same namespace you can use just the name, otherwise <code>&lt;servicename&gt;.&lt;namespace&gt;</code>.</p>
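<p>For example (a sketch, assuming a Service named <code>backend</code> exposing port 8080 in the namespace <code>team-a</code>), the other service can call it by name:</p>

<pre><code># from a pod in the same namespace
curl http://backend:8080/api/health

# from a pod in another namespace
curl http://backend.team-a:8080/api/health

# fully qualified
curl http://backend.team-a.svc.cluster.local:8080/api/health
</code></pre>

<p>So in your code you reference the service name (usually taken from configuration), not localhost or a pod IP.</p>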
coderanger
<p>I have a kubernetes cluster.</p> <p>On the master node, if I give the command <code>kubectl get nodes</code> it should show all the nodes.</p> <p>But if I give the same command on a worker node, it should not show the master node.</p> <p>Is this possible in kubernetes?</p> <p>Please help anyone. Thanks in advance.</p>
az rnd
<p>No, this is not possible. The kubernetes API will always respond to the same queries in the same way. <code>kubectl get nodes</code> is asking for information about all nodes, and the api will always answer an authorized user with all of the nodes. </p> <p>With RBAC it is possible to limit what a particular user or account has access to view or edit, but the <code>nodes</code> resource is not namespaced, and does not give granularity to restrict access to certain nodes.</p> <p>You can, however, filter the results of <code>kubectl get nodes</code> any way you like. <a href="https://stackoverflow.com/a/52434076/121660">This question</a> has some good examples of showing only worker nodes using the <code>-l</code> argument to kubectl.</p>
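<p>For example (a sketch; the exact label depends on how the cluster was provisioned, e.g. <code>node-role.kubernetes.io/master</code> on older kubeadm clusters or <code>node-role.kubernetes.io/control-plane</code> on newer ones), you can hide the masters from the output with a negative label selector:</p>

<pre><code>kubectl get nodes --selector='!node-role.kubernetes.io/master'
</code></pre>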
captncraig
<p>I am currently trying to execute a simple bash command on my kubernetes pod but seem to be getting some errors which do not make sense.</p>

<p>If I exec into the docker container and run the command plainly</p>

<pre><code>I have no name!@kafka-0:/tmp$ if [ $(comm -13 &lt;(sort selectedTopics) &lt;(sort topics.sh) | wc -l) -gt 0 ]; then echo &quot;hello&quot;; fi
</code></pre>

<p>I get Hello as output.</p>

<p>But if I execute the same from the outside as</p>

<pre><code>kubectl exec --namespace default kafka-0 -c kafka -- bash -c &quot;if [ $(comm -13 &lt;/tmp/selectedTopics &lt;/tmp/topics.sh| wc -l) -gt 0 ]; then echo topic does not exist &amp;&amp; exit 1; fi&quot;
</code></pre>

<p>Then I get an error message stating that <code>/tmp/topics.sh: No such file or directory</code></p>

<p>even though I am able to do this</p>

<pre><code>kubectl exec --namespace $namespace kafka-0 -c kafka -- bash -c &quot;cat /tmp/topics.sh&quot;
</code></pre>

<p>why is kubectl <code>exec</code> causing me problems?</p>
kafka
<p>When you write:</p>

<pre><code> kubectl ... &quot;$(cmd)&quot;
</code></pre>

<p><code>cmd</code> is executed on the local host to create the string that is used as the argument to <code>kubectl</code>. In other words, you are executing <code>comm -13 &lt;/tmp/selectedTopics &lt;/tmp/topics.sh| wc -l</code> on the local host, and not in the pod.</p>

<p>You should use single quotes if you want to avoid expanding locally:</p>

<pre><code>kubectl exec --namespace default kafka-0 -c kafka -- bash -c 'if [ $(comm -13 &lt;(sort /tmp/selectedTopics) &lt;(sort /tmp/topics.sh) | wc -l) -gt 0 ]; then echo &quot;topic does not exist&quot; &gt;&amp;2; exit 1; fi'
</code></pre>
William Pursell
<p>I am currently using ubuntu machines for creating a kubernetes cluster.</p> <p>All machines are on-prem.</p> <p>But adding / upgrading machines requires a lot of maintenance, like installing ubuntu, adding needed packages, open-ssh, then adding kubernetes and adding the machine to the cluster.</p> <p>Is there a better way to install and add machines to a kubernetes cluster?</p>
shrw
<p>There are many products and projects available for this. You'll just have to try some and see which you like. I couldn't list them all if I tried but a few I'm pretty sure are compatible with Ubuntu (in no particular order):</p> <ul> <li>kubespray</li> <li>Rancher (and RKE with it)</li> <li>Microk8s (uses Snaps)</li> <li>Charmed Kubernetes</li> </ul>
coderanger
<p>We have the following code (don't ask me why...even as none-javascript dev it doesn't look pretty to me), which throws error after Kubernetes upgrade:</p> <pre><code>module.exports.getReplicationControllers = async function getReplicationControllers(namespace) { const kubeConfig = (await getNamespacesByCluster()).get(namespace); if (!kubeConfig) throw new Error(`No clusters contain the namespace ${namespace}`) const kubeConfigEscaped = shellEscape([kubeConfig]); const namespaceEscaped = shellEscape([namespace]); const result = await cpp(`kubectl --kubeconfig ${kubeConfigEscaped} get replicationcontrollers -o json -n ${namespaceEscaped}`); console.error(result.stderr); /** @type {{items: any[]}} */ const resultParsed = JSON.parse(result.stdout); const serviceNames = resultParsed.items.map((item) =&gt; item.metadata.name); return serviceNames; } </code></pre> <blockquote> <p>ChildProcessError: stdout maxBuffer length exceeded kubectl --kubeconfig /kubeconfig-staging get replicationcontrollers -o json -n xxx (exited with error code ERR_CHILD_PROCESS_STDIO_MAXBUFFER)</p> </blockquote> <p>What I've tried so far is:</p> <pre><code> const result = await cpp(`kubectl --kubeconfig ${kubeConfigEscaped} get replicationcontrollers -o=jsonpath='{.items[*].metadata.name}' -n ${namespaceEscaped}`); console.error(result.stderr); const serviceNames = result.split(' '); return serviceNames; </code></pre> <p>Which returns</p> <blockquote> <p>TypeError: result.split is not a function</p> </blockquote> <p>I am not super versed with JavaScript, any help appreciated.</p>
Anton Kim
<p><strong>Answering the question in general</strong> (rather than getting you to switch to a different tool), for people who have this question and may be using other apps:</p>

<blockquote>
<p>RangeError [ERR_CHILD_PROCESS_STDIO_MAXBUFFER]: stdout maxBuffer length exceeded</p>
</blockquote>

<p><strong>The issue is caused by your command sending a lot of data (more than 1MB) to stdout or stderr.</strong></p>

<p>Increase the <code>maxBuffer</code> option in exec(), as described in <a href="https://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback" rel="nofollow noreferrer">the node docs for child_process.exec</a>:</p>

<pre><code>const { exec } = require('child_process');

exec(someCommand, {
  maxBuffer: 5 * 1024 * 1024, // allow up to 5 MiB of output on stdout/stderr
})
</code></pre>
mikemaccana
<p>I'm trying to use minikube and kitematic for testing kubernetes on my local machine. However, kubernetes fail to pull image in my local repository (<code>ImagePullBackOff</code>).</p> <p>I tried to solve it with this : <a href="https://stackoverflow.com/questions/38748717/can-not-pull-docker-image-from-private-repo-when-using-minikube">Can not pull docker image from private repo when using Minikube</a></p> <p>But I have no <code>/etc/init.d/docker</code>, I think it's because of kinematic ? (I am on OS X)</p> <p><strong>EDIT :</strong></p> <p>I installed <a href="https://github.com/docker/docker-registry" rel="noreferrer">https://github.com/docker/docker-registry</a>, and</p> <pre><code>docker tag local-image-build localhost:5000/local-image-build docker push localhost:5000/local-image-build </code></pre> <p>My kubernetes yaml contains :</p> <pre><code>spec: containers: - name: backend-nginx image: localhost:5000/local-image-build:latest imagePullPolicy: Always </code></pre> <p>But it's still not working... Logs :</p> <pre><code>Error syncing pod, skipping: failed to "StartContainer" for "backend-nginx" with ErrImagePull: "Error while pulling image: Get http://127.0.0.1:5000/v1/repositories/local-image-build/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused </code></pre> <p><strong>EDIT 2 :</strong></p> <p>I don't know if I'm on the good path, but I find this :</p> <p><a href="http://kubernetes.io/docs/user-guide/images/" rel="noreferrer">http://kubernetes.io/docs/user-guide/images/</a></p> <p>But I don't know what is my DOCKER_USER...</p> <pre><code>kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL </code></pre> <p><strong>EDIT 3</strong></p> <p>now I got on my pod :</p> <pre><code>Failed to pull image "local-image-build:latest": Error: image library/local-image-build not found Error syncing pod, skipping: failed to "StartContainer" for "backend-nginx" with ErrImagePull: "Error: image library/local-image-build not found" </code></pre> <p>Help me I'm going crazy.</p> <p><strong>EDIT 4</strong></p> <pre><code>Error syncing pod, skipping: failed to "StartContainer" for "backend-nginx" with ErrImagePull: "Error response from daemon: Get https://192.168.99.101:5000/v1/_ping: tls: oversized record received with length 20527" </code></pre> <p>I added :</p> <pre><code>EXTRA_ARGS=' --label provider=virtualbox --insecure-registry=192.168.99.101:5000 </code></pre> <p>to my docker config, but it's still don't work, the same message....</p> <p>By the way, I changed my yaml :</p> <pre><code> spec: containers: - name: backend-nginx image: 192.168.99.101:5000/local-image-build:latest imagePullPolicy: Always </code></pre> <p>And I run my registry like that :</p> <pre><code>docker run -d -p 5000:5000 --restart=always --name myregistry registry:2 </code></pre>
Xero
<p>Use minikube's Docker daemon instead of your local docker</p> <p><a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/#create-a-docker-container-image" rel="noreferrer">https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/#create-a-docker-container-image</a></p> <h2>Set docker to point to minikube</h2> <p><code>eval $(minikube docker-env)</code></p> <h2>Build the image against minikube's docker</h2> <p><code>docker build -t hello-node:v1 .</code></p> <h2>Set your deployment's imagePullPolicy to IfNotPresent</h2> <p>K8S defaults to "Always" for <code>:latest</code> images. Change it to "IfNotPresent" so the locally built image is used.</p> <p><code>imagePullPolicy: IfNotPresent</code></p> <p><a href="https://stackoverflow.com/questions/40144138/pull-a-local-image-to-run-a-pod-in-kubernetes">Related Issue</a></p>
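<p>A sketch of how the container spec from the question then looks (the image is referenced by the name/tag built inside minikube's Docker daemon, with no registry prefix):</p>

<pre><code>containers:
  - name: backend-nginx
    image: local-image-build:latest
    imagePullPolicy: IfNotPresent
</code></pre>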
Doug
<p>I am new to Docker/Kubernetes and inherited an application and I am looking to upgrade a JAR file on a pod.</p> <p>This is the pod:</p> <pre><code>Name: app-name-7c7fddfc7c-vthhr Namespace: default Node: ip-ip-address-goes-here.us-east-2.compute.internal/ip.address.goes.here Start Time: Sat, 06 Jul 2019 19:19:37 +0000 Labels: app=app-name pod-template-hash=3739889737 Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"app-name-7c7fddfc7c","uid":"d771243c-9992-11e8-ac11-0298f3... Status: Running IP: other.ip.address.here Created By: ReplicaSet/app-name-7c7fddfc7c Controlled By: ReplicaSet/app-name-7c7fddfc7c Containers: app-name: Container ID: docker://fefd826441f2d672c3e622727f6f3c26b9ece4e60c624b6dc96de6f8e97e336f Image: remoteserver.com/app-name:1.24.237 Image ID: docker-pullable://remoteserver.com/app-name@sha256:5ffc7926e0437f89e7308b09514ec17cf0679fb20dbf97d78b307d7ee4fb13e2 Port: 8080/TCP State: Running Started: Sat, 06 Jul 2019 19:19:52 +0000 Ready: True Restart Count: 0 Limits: memory: 1200Mi Requests: cpu: 200m memory: 900Mi Environment: ... Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-nvwhs (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-nvwhs: Type: Secret (a volume populated by a Secret) SecretName: default-token-nvwhs Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s node.alpha.kubernetes.io/unreachable:NoExecute for 300s Events: &lt;none&gt; </code></pre> <p>As far as I can tell, the ReplicaSet is replicating the servers and mounting volumes, which are Amazon snapshots.</p> <p>Would I just.. upload the file to the pod and due to the fact that it is a mounted volume (my assumption) - it will be updated forever? Am I understanding how this works accurately?</p> <p>If I am missing any information for anyone who is an expert to know my use-case, I am happy to include it. I just don't completely know what I don't know yet.</p>
Steven Matthews
<p>Pods are ephemeral. You know, "Cattle versus Pets". They're put to slaughter, not taken to the vet.</p> <p>When you want to add new code / new dependencies you build a new Docker image and deploy it to the cluster.</p> <p>Somewhere in your code / CI pipeline there is a Dockerfile that defines what dependencies are added to the Docker image and how. Start there, then move on to whatever CI / CD pipeline exists for deploying to the cluster. It may be as unsophisticated as a script calling kubectl to apply the image to the cluster.</p>
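<p>For example (a sketch, assuming the Deployment behind that ReplicaSet is named <code>app-name</code> and your build has pushed a new image tag), rolling the cluster onto the rebuilt image is a single command:</p>

<pre><code>kubectl set image deployment/app-name app-name=remoteserver.com/app-name:&lt;new-tag&gt;
</code></pre>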
MarkOfHall
<p>I'm relatively new (&lt; 1 year) to GCP, and I'm still in the process of mapping the various services onto my existing networking mental model.</p> <p>Once knowledge gap I'm struggling to fill is how HTTP requests are load balanced to services running in our GKE clusters.</p> <p>On a test cluster, I created a service in front of pods that serve HTTP:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: contour spec: ports: - port: 80 name: http protocol: TCP targetPort: 8080 - port: 443 name: https protocol: TCP targetPort: 8443 selector: app: contour type: LoadBalancer </code></pre> <p>The service is listening on node ports 30472 and 30816.:</p> <pre><code>$ kubectl get svc contour NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE contour LoadBalancer 10.63.241.69 35.x.y.z 80:30472/TCP,443:30816/TCP 41m </code></pre> <p>A GCP network load balancer is automatically created for me. It has its own public IP at 35.x.y.z, and is listening on ports 80-443:</p> <p><a href="https://i.stack.imgur.com/VAimU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VAimU.png" alt="auto load balancer"></a></p> <p>Curling the load balancer IP works:</p> <pre><code>$ curl -q -v 35.x.y.z * TCP_NODELAY set * Connected to 35.x.y.z (35.x.y.z) port 80 (#0) &gt; GET / HTTP/1.1 &gt; Host: 35.x.y.z &gt; User-Agent: curl/7.62.0 &gt; Accept: */* &gt; &lt; HTTP/1.1 404 Not Found &lt; date: Mon, 07 Jan 2019 05:33:44 GMT &lt; server: envoy &lt; content-length: 0 &lt; </code></pre> <p>If I ssh into the GKE node, I can see the <code>kube-proxy</code> is listening on the service nodePorts (30472 and 30816) and nothing has a socket listening on ports 80 or 443:</p> <pre><code># netstat -lntp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:20256 0.0.0.0:* LISTEN 1022/node-problem-d tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1221/kubelet tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1369/kube-proxy tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 297/systemd-resolve tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 330/sshd tcp6 0 0 :::30816 :::* LISTEN 1369/kube-proxy tcp6 0 0 :::4194 :::* LISTEN 1221/kubelet tcp6 0 0 :::30472 :::* LISTEN 1369/kube-proxy tcp6 0 0 :::10250 :::* LISTEN 1221/kubelet tcp6 0 0 :::5355 :::* LISTEN 297/systemd-resolve tcp6 0 0 :::10255 :::* LISTEN 1221/kubelet tcp6 0 0 :::10256 :::* LISTEN 1369/kube-proxy </code></pre> <p>Two questions:</p> <ol> <li>Given nothing on the node is listening on ports 80 or 443, is the load balancer directing traffic to ports 30472 and 30816?</li> <li>If the load balancer is accepting traffic on 80/443 and forwarding to 30472/30816, where can I see that configuration? Clicking around the load balancer screens I can't see any mention of ports 30472 and 30816.</li> </ol>
James Healy
<p>I think I found the answer to my own question - can anyone confirm I'm on the right track?</p> <p>The network load balancer redirects the traffic to a node in the cluster without modifying the packet - packets for port 80/443 still have port 80/443 when they reach the node.</p> <p>There's nothing listening on ports 80/443 on the nodes. However <code>kube-proxy</code> has written iptables rules that match packets <strong>to</strong> the load balancer IP, and rewrite them with the appropriate ClusterIP and port:</p> <p>You can see the iptables config on the node:</p> <pre><code>$ iptables-save | grep KUBE-SERVICES | grep loadbalancer -A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:http loadbalancer IP" -m tcp --dport 80 -j KUBE-FW-D53V3CDHSZT2BLQV -A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-J3VGAQUVMYYL5VK6 $ iptables-save | grep KUBE-SEP-ZAA234GWNBHH7FD4 :KUBE-SEP-ZAA234GWNBHH7FD4 - [0:0] -A KUBE-SEP-ZAA234GWNBHH7FD4 -s 10.60.0.30/32 -m comment --comment "default/contour:http" -j KUBE-MARK-MASQ -A KUBE-SEP-ZAA234GWNBHH7FD4 -p tcp -m comment --comment "default/contour:http" -m tcp -j DNAT --to-destination 10.60.0.30:8080 $ iptables-save | grep KUBE-SEP-CXQOVJCC5AE7U6UC :KUBE-SEP-CXQOVJCC5AE7U6UC - [0:0] -A KUBE-SEP-CXQOVJCC5AE7U6UC -s 10.60.0.30/32 -m comment --comment "default/contour:https" -j KUBE-MARK-MASQ -A KUBE-SEP-CXQOVJCC5AE7U6UC -p tcp -m comment --comment "default/contour:https" -m tcp -j DNAT --to-destination 10.60.0.30:8443 </code></pre> <p>An interesting implication is the the nodePort is created but doesn't appear to be used. That matches this comment in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">kube docs</a>:</p> <blockquote> <p>Google Compute Engine does not need to allocate a NodePort to make LoadBalancer work</p> </blockquote> <p>It also explains why GKE creates an automatic firewall rule that allows traffic from 0.0.0.0/0 towards ports 80/443 on the nodes. The load balancer isn't rewriting the packets, so the firewall needs to allow traffic from anywhere to reach iptables on the node, and it's rewritten there.</p>
James Healy
<p>I recently came across this canary deployment process, where it is said:</p> <p><em>Canary deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers. The canary deployment serves as an early warning indicator with less impact on downtime: if the canary deployment fails, the rest of the servers aren't impacted.</em></p> <p>Some articles mentioned it is a <em>TEST IN PRODUCTION</em> strategy.</p> <p>Does this mean the code is not being tested in lower environments (integration and performance testing)? If yes, how can these deployments be rolled out without confidence in the code?</p> <p>Please clarify. Thanks in advance</p>
pavan reddy
<p>Canary deployments are a way of gradually opening the requests firehose to a new server while continuing to respond to the majority of the requests with an already-deployed service. So yes, it is really a "test in production" strategy, but the idea is that if the canary falls over you don't deploy to to the whole cluster.</p> <p>The name comes from the idea that coal miners used to carry canaries, who are rather more sensitive than humans to the effects of carbon oxides (the monoxide is both toxic and potentially explosive, the dioxide will suffocate you if it excludes enough oxygen). If the canary keeled over the miners knew it was time to high-tail it.</p>
holdenweb
<p>I'm testing kubernetes behavior when pod getting error.</p> <p>I now have a pod in CrashLoopBackOff status caused by liveness probe failed, from what I can see in kubernetes events, pod turns into CrashLoopBackOff after 3 times try and begin to back off restarting, but the related Liveness probe failed events won't update?</p> <pre><code>➜ ~ kubectl describe pods/my-nginx-liveness-err-59fb55cf4d-c6p8l Name: my-nginx-liveness-err-59fb55cf4d-c6p8l Namespace: default Priority: 0 Node: minikube/192.168.99.100 Start Time: Thu, 15 Jul 2021 12:29:16 +0800 Labels: pod-template-hash=59fb55cf4d run=my-nginx-liveness-err Annotations: &lt;none&gt; Status: Running IP: 172.17.0.3 IPs: IP: 172.17.0.3 Controlled By: ReplicaSet/my-nginx-liveness-err-59fb55cf4d Containers: my-nginx-liveness-err: Container ID: docker://edc363b76811fdb1ccacdc553d8de77e9d7455bb0d0fb3cff43eafcd12ee8a92 Image: nginx Image ID: docker-pullable://nginx@sha256:353c20f74d9b6aee359f30e8e4f69c3d7eaea2f610681c4a95849a2fd7c497f9 Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 15 Jul 2021 13:01:36 +0800 Finished: Thu, 15 Jul 2021 13:02:06 +0800 Ready: False Restart Count: 15 Liveness: http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r7mh4 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-r7mh4: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 37m default-scheduler Successfully assigned default/my-nginx-liveness-err-59fb55cf4d-c6p8l to minikube Normal Created 35m (x4 over 37m) kubelet Created container my-nginx-liveness-err Normal Started 35m (x4 over 37m) kubelet Started container my-nginx-liveness-err Normal Killing 35m (x3 over 36m) kubelet Container my-nginx-liveness-err failed liveness probe, will be restarted Normal Pulled 31m (x7 over 37m) kubelet Container image &quot;nginx&quot; already present on machine Warning Unhealthy 16m (x32 over 36m) kubelet Liveness probe failed: Get &quot;http://172.17.0.3:8080/&quot;: dial tcp 172.17.0.3:8080: connect: connection refused Warning BackOff 118s (x134 over 34m) kubelet Back-off restarting failed container </code></pre> <p>BackOff event updated 118s ago, but Unhealthy event updated 16m ago?</p> <p>and why I'm getting only 15 times Restart Count while BackOff events with 134 times?</p> <p>I'm using minikube and my deployment is like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-nginx-liveness-err spec: selector: matchLabels: run: my-nginx-liveness-err replicas: 1 template: metadata: labels: run: my-nginx-liveness-err spec: containers: - name: my-nginx-liveness-err image: nginx imagePullPolicy: IfNotPresent ports: - containerPort: 80 livenessProbe: httpGet: path: / port: 8080 </code></pre>
Sean Yu
<p>I think you might be confusing Status Conditions and Events. Events don't &quot;update&quot;, they just exist. It's a stream of event data from the controllers for debugging or alerting on. The <code>Age</code> column is the relative timestamp to the most recent instance of that event type and you can see if does some basic de-duplication. Events also age out after a few hours to keep the database from exploding.</p> <p>So your issue has nothing to do with the liveness probe, your container is crashing on startup.</p>
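<p>If you want the raw event stream for the pod (rather than the de-duplicated view shown by <code>kubectl describe</code>), something like this works (a sketch; substitute your own pod name):</p>

<pre><code>kubectl get events --field-selector involvedObject.name=my-nginx-liveness-err-59fb55cf4d-c6p8l --sort-by=.metadata.creationTimestamp
</code></pre>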
coderanger
<p>I have installed a Postgres cluster with the Zalando postgres operator <a href="https://github.com/zalando/postgres-operator" rel="nofollow noreferrer">https://github.com/zalando/postgres-operator</a>. How can I get access to the Postgres database from outside? I tried to change the cluster service type from ClusterIP to NodePort, but it is overwritten automatically.</p>
NameOff
<p>The process is explained here: <a href="https://postgres-operator.readthedocs.io/en/latest/user/" rel="nofollow noreferrer">https://postgres-operator.readthedocs.io/en/latest/user/</a></p>

<p>You need 2 steps.</p>

<ol>
<li>A script that opens up and forwards a port to your local machine.</li>
</ol>

<p>I created a script called <code>set_dbforwarding.sh</code>. You need to change the names in the script to your cluster names and settings! So <code>cdf-cluster</code> should become <code>yourclustername</code>.</p>

<pre><code>#!/usr/bin/env bash
set -u  # crash on missing env variables
set -e  # stop on any error
set -x  # print what we are doing

export NAMESPACE=$1
export PGMASTER=$(kubectl -n $NAMESPACE get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=cdf-cluster,spilo-role=master)
# PGMASTER should now be the master node. There are cases under failover
# where you should connect to a different node in your cluster.
# If you want to change something you should always connect to the master,
# otherwise you get read-only errors on a replica.

# set up port forward
kubectl -n $NAMESPACE port-forward $PGMASTER 6432:5432

# get the password..it is printed in your terminal
# so you can use it in your db tool of choice.
export PGPASSWORD=$(kubectl -n $NAMESPACE get secret cdf.cdf-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
export PGSSLMODE=require
</code></pre>

<p>executed like:</p>

<pre><code>./set_dbforwarding.sh yourclusternamespace
</code></pre>

<ol start="2">
<li><p>connect to your cluster with the correct credentials. <code>restore_db.sh</code> script.</p>

<pre><code>#!/usr/bin/env bash
set -u  # crash on missing env variables
set -e  # stop on any error
set -x  # print what we are doing

export NAMESPACE=$1
export DATABASE=$2
export DATABASEDUMP=$3
export PGMASTER=$(kubectl -n $NAMESPACE get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=cdf-cluster,spilo-role=master)
export PGPASSWORD=$(kubectl -n $NAMESPACE get secret postgres.cdf-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
export PGSSLMODE=require

# examples you can run now with the above ENV variables set.
# psql -h 127.0.0.1 -U postgres -d cdf -p 6432
# cat ~/dumps/cbs_schema_wfs.sql | psql -h 127.0.0.1 -U postgres -d cdf -p 6432
# pg_restore -h 127.0.0.1 -U postgres -p 6432 -d $2 -c $3
# data only
# pg_restore -h 127.0.0.1 -U postgres -p 6432 -d $2 -a $3
# everything
pg_restore -h 127.0.0.1 -U postgres -p 6432 -d $2 $3
</code></pre>
</li>
</ol>

<p>used like</p>

<pre><code>./restore_db.sh namespace databasename backup.gz
</code></pre>

<ol start="3">
<li><p>Tip: if you are using a database tool like DBeaver, make sure to enable keep-alive (every 5 seconds or so), or the connection will be dropped. The setting is rather hidden in DBeaver:</p>

<p>edit connection -&gt; connection settings -&gt; initialization -&gt; Keep-Alive.</p>
</li>
</ol>
Stephan
<blockquote> <p>Updated with more information</p> </blockquote> <p>I am trying to set up OpenTSDB on Bigtable, following this guide: <a href="https://cloud.google.com/solutions/opentsdb-cloud-platform" rel="nofollow noreferrer">https://cloud.google.com/solutions/opentsdb-cloud-platform</a></p> <p>Works well, all good. </p> <p>Now I was trying to open the <code>opentsdb-write</code> service with a LoadBalancer (type). Seems to work well, too.</p> <p>Note: using a GCP load balancer.</p> <p>I am then using insomnia to send a POST to the <code>./api/put</code> endpoint - and I get a <code>204</code> as expected (also, using the <code>?details</code> shows no errors, neither does the <code>?sync</code>) (see <a href="http://opentsdb.net/docs/build/html/api_http/put.html" rel="nofollow noreferrer">http://opentsdb.net/docs/build/html/api_http/put.html</a>)</p> <p>When querying the data (GET on <code>./api/query</code>), I don't see the data (same effect in grafana). Also, I do not see any data added in the <code>tsdb</code> table in bigtable.</p> <p>My conclusion: no data is written to Bigtable, although tsd is returning 204. </p> <p>Interesting fact: the <strong>metric</strong> is created (I can see it in Bigtable (<code>cbt read tsdb-uid</code>) and also the autocomplete in the opentsdb-ui (and grafana) pick the metric up right away. But no data.</p> <p>When I use the Heapster-Example as in the tutorial, it all works.</p> <p>And the interesting part (to me):</p> <p>NOTE: It happened a few times, with massive delay or after stoping/restarting the kubernetes cluster, that the data appeared. Suddenly. I could not reproduce as of now.</p> <p>I must be missing something really simple. </p> <p>Note: I don't see any errors in the logs (stackdriver) and UI (opentsdb UI), neither bigtable, nor Kubernetes, nor anything I can think of.</p> <p>Note: the configs I am using are as linked in the tutorial.</p> <p>The put I am using (see the 204):</p> <p><a href="https://i.stack.imgur.com/pmaBR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pmaBR.png" alt="enter image description here"></a></p> <p>and if I add <code>?details</code>, it indicates success:</p> <p><a href="https://i.stack.imgur.com/cSsJO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cSsJO.png" alt="enter image description here"></a></p>
Pinguin Dirk
<p>My guess is that this relates to the opentsdb flush frequency. When a tsdb cluster is shutdown, there's an automatic flush. I'm not 100% sure, but I think that the <code>tsd.storage.flush_interval</code> configuration manages that process.</p> <p>You can reach the team that maintains the libraries via the google-cloud-bigtable-discuss group, which you can get to from the <a href="https://cloud.google.com/bigtable/docs/support/getting-support" rel="nofollow noreferrer">Cloud Bigtable support page</a> for more nuanced discussions.</p> <p>As an FYI, we (Google) are actively updating the <a href="https://cloud.google.com/solutions/opentsdb-cloud-platform" rel="nofollow noreferrer">https://cloud.google.com/solutions/opentsdb-cloud-platform</a> to the latest versions of OpenTSDB and AsyncBigtable which should improve performance at high volumes.</p>
Solomon Duskis
<p>I am currently learning Kubernetes, and i am facing a bit of a wall. I try to pass environmentalvariables from my YAML file definition to my container. But the variables seem not to be present afterwards. <code>kubectl exec &lt;pod name&gt; -- printenv</code> gives me the list of environmental variables. But the ones i defined in my YAML file is not present.</p> <p>I defined the environment variables in my deployment as shown below:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello-world-boot labels: app: hello-world-boot spec: selector: matchLabels: app: hello-world-boot template: metadata: labels: app: hello-world-boot containers: - name: hello-world-boot image: lightmaze/hello-world-spring:latest env: - name: HELLO value: &quot;Hello there&quot; - name: WORLD value: &quot;to the entire world&quot; resources: limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; ports: - containerPort: 8080 selector: app: hello-world-boot </code></pre> <p>Hopefully someone can see where i failed in the YAML :)</p>
Martin
<p>If I correct the errors in your <code>Deployment</code> configuration so that it looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello-world-boot labels: app: hello-world-boot spec: selector: matchLabels: app: hello-world-boot template: metadata: labels: app: hello-world-boot spec: containers: - name: hello-world-boot image: lightmaze/hello-world-spring:latest env: - name: HELLO value: &quot;Hello there&quot; - name: WORLD value: &quot;to the entire world&quot; resources: limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; ports: - containerPort: 8080 </code></pre> <p>And deploy it into my local <code>minikube</code> instance:</p> <pre><code>$ kubectl apply -f pod.yml </code></pre> <p>Then it seems to work as you intended:</p> <pre><code>$ kubectl exec -it hello-world-boot-7568c4d7b5-ltbbr -- printenv PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin HOSTNAME=hello-world-boot-7568c4d7b5-ltbbr TERM=xterm HELLO=Hello there WORLD=to the entire world KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443 KUBERNETES_PORT_443_TCP_PROTO=tcp KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1 KUBERNETES_SERVICE_HOST=10.96.0.1 KUBERNETES_SERVICE_PORT=443 KUBERNETES_SERVICE_PORT_HTTPS=443 KUBERNETES_PORT=tcp://10.96.0.1:443 LANG=C.UTF-8 JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk JAVA_VERSION=8u212 JAVA_ALPINE_VERSION=8.212.04-r0 HOME=/home/spring </code></pre> <p>If you look at the above output, you can see both the <code>HELLO</code> and <code>WORLD</code> environment variables you defined in your <code>Deployment</code>.</p>
larsks
<p>How can I replace the Image used in a Kubernetes Deployment manifest with jq?</p> <p>For example:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: myapp name: myapp-deployment spec: replicas: 1 template: spec: containers: - name: myapp image: myapp:v1 </code></pre> <p>I tried using something like this <code>jq '.spec.template.spec.containers[0].image = &quot;myapp:v2&quot;'</code>. However, it always ends with a syntax or parse error.</p>
Frederik
<p>Your <code>jq</code> expression itself looks fine; the parse errors most likely come from feeding YAML to <code>jq</code>, which only understands JSON. Using <a href="https://kislyuk.github.io/yq/" rel="nofollow noreferrer"><code>yq</code></a> (a YAML wrapper around <code>jq</code>), you can simply write:</p> <pre><code>yq -y '.spec.template.spec.containers[0].image = &quot;foo:latest&quot;' pod.yml </code></pre> <p>Which produces:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: myapp name: myapp-deployment spec: replicas: 1 template: spec: containers: - name: myapp image: foo:latest </code></pre> <p>But I would use <a href="https://kustomize.io/" rel="nofollow noreferrer">kustomize</a> for something like this, as @DavidMaze suggested.</p>
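<p>As a rough sketch of the kustomize route (assuming the Deployment above is saved as <code>deployment.yaml</code>; the file name is illustrative):</p> <pre><code># kustomization.yaml
resources:
  - deployment.yaml
images:
  - name: myapp   # must match the image name used in the manifest
    newTag: v2
</code></pre> <p>Rendering it with <code>kubectl kustomize .</code> (or applying with <code>kubectl apply -k .</code>) rewrites the container image to <code>myapp:v2</code> without editing the manifest in place.</p>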
larsks
<p>I need to define RBAC based on the audit log; this can be a regular process to onboard a team and provide access.</p> <p>I find the audit2rbac tool simple and clear to use.</p> <p>I need guidance on doing this with the Azure Kubernetes Service (AKS).</p>
atul sahu
<p>Here is an example query for getting audit logs from Azure Log Analytics.</p> <p>It removes some of the noise to try and give just logs for when a user has modified a resource in Kubernetes. The requestURI and requestObject fields will give you the most info about what the user was doing.</p> <pre><code>AzureDiagnostics | where Category == &quot;kube-audit&quot; | extend log_j=parse_json(log_s) | extend requestURI=log_j.requestURI | extend verb=log_j.verb | extend username=log_j.user.username | extend requestObject = parse_json(log_j.requestObject) | where verb !in (&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;&quot;) | where username !in (&quot;aksService&quot;, &quot;masterclient&quot;, &quot;nodeclient&quot;) | where username !startswith &quot;system:serviceaccount:kube-system&quot; | where requestURI startswith &quot;/api/&quot; | where requestURI !startswith &quot;/api/v1/nodes/&quot; | where requestURI !startswith &quot;/api/v1/namespaces/kube-system/&quot; | where requestURI !startswith &quot;/api/v1/namespaces/ingress-basic/&quot; </code></pre>
Tom Ferguson
<p>I have a requirement to implement Server Sent Events capability in a micro service. The application in question runs in a Kubernetes cluster of two pods. However, I don't believe Server Sent Events in a clustered environment will reliably notify all clients of events they have registered for, for the following reason. As an example, client 1 is registered for notifications with the application instance running in pod 1. Client 2 is registered for notifications with the application instance running in pod 2. The next time the event in question occurs, it is handled successfully by the application running in pod 1 (since for whatever reason that is where the load balancer directed it to). So client 1 will be notified of the event, but client 2 will not, since the application running in pod 2 has no idea of what just took place on pod 1. And so forth for subsequent events (next request might go to pod 2, for example).</p> <p>So I'm thinking of moving towards using reactive streams instead.</p> <p>Does anyone disagree with my take on implementing SSE's where the application will run in a clustered environment?</p>
user1608142
<p>The fundamental problem is with holding state information in the pods themselves.</p> <p>A server running in a Kubernetes environment needs to be completely stateless. Not just so that it doesn't matter which pod the load balancer sends the request to, but more fundamentally because Kubernetes expects to be able to shut down a pod and start a new one without any effect on the application. If state is held in the server, that state will be lost when the pod is shut down.</p> <p>Given this, a good way to implement event-driven processes is to use a message broker. Whenever an event occurs, a message is sent via the broker to all instances of the server. In your particular case this would trigger the relevant server instance(s) to send that event out via server-sent events.</p> <p>There's one nasty gotcha with using server-sent events in a Kubernetes environment, although it applies equally to any push-based mechanism where the client holds a connection open to the server (which is essentially all of them). When Kubernetes shuts down a pod, all clients connected to that pod will be disconnected, so the client needs to be prepared for this and to reconnect automatically and continue seamlessly where it left off.</p>
Ian Goldby
<p>I am trying to terminate the namespace argo in Kubernetes. In the past, I have successfully followed the directions found here: <a href="https://stackoverflow.com/questions/52954174/kubernetes-namespaces-stuck-in-terminating-status">Kubernetes Namespaces stuck in Terminating status</a></p> <p>This time, however, I am getting the following error message. What does it mean and how can I work around this?</p> <pre><code>{ &quot;kind&quot;: &quot;Status&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { }, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;namespaces \&quot;argo\&quot; is forbidden: User \&quot;system:anonymous\&quot; cannot update resource \&quot;namespaces/finalize\&quot; in API group \&quot;\&quot; in the namespace \&quot;argo\&quot;&quot;, &quot;reason&quot;: &quot;Forbidden&quot;, &quot;details&quot;: { &quot;name&quot;: &quot;argo&quot;, &quot;kind&quot;: &quot;namespaces&quot; }, &quot;code&quot;: 403 } </code></pre>
user3877654
<p>The error shows the request was made as <code>system:anonymous</code>, i.e. without any credentials. You need to make the call as an authenticated user that has permission to update the <code>namespaces/finalize</code> subresource (or, more often, one bound to a role with <code>*</code> permissions).</p>
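<p>One way to issue the same finalize call with your normal <code>kubectl</code> credentials instead of an anonymous <code>curl</code> (a sketch; adjust the namespace name and make sure removing the finalizers is really what you want):</p> <pre><code>kubectl get namespace argo -o json &gt; ns.json
# edit ns.json and remove the entries under spec.finalizers, then:
kubectl replace --raw &quot;/api/v1/namespaces/argo/finalize&quot; -f ./ns.json
</code></pre>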
coderanger
<p>I ran the command <code>poetry install</code> while following the instructions on <a href="https://docs.wire.com/how-to/install/kubernetes.html#ansible-kubernetes" rel="nofollow noreferrer">this page</a>.</p> <p>I received a <code>RuntimeError</code>; does anyone know how to solve it?</p> <pre><code>RuntimeError Poetry could not find a pyproject.toml file in /home/kodi66/wire-server-deploy/ansible or its parents at ~/.poetry/lib/poetry/_vendor/py3.8/poetry/core/factory.py:369 in locate 365│ if poetry_file.exists(): 366│ return poetry_file 367│ 368│ else: → 369│ raise RuntimeError( 370│ &quot;Poetry could not find a pyproject.toml file in {} or its parents&quot;.format( 371│ cwd 372│ ) 373│ ) </code></pre>
user2120882
<p>It looks as if those instructions are out-of-date; looking at the repository history, the use of poetry was removed in January in commit <a href="https://github.com/wireapp/wire-server-deploy/commit/567dcce8f66769ff5fec802e34015a0053c5cef7" rel="nofollow noreferrer">567dcce</a>. The commit message reads (partially):</p> <blockquote> <p>Use nix to provide hegemony binary dependencies and switch to git submodules for ansible dedencies (#404)</p> <ul> <li>remove poetry, use Nix to provide the ansible we need</li> </ul> <p>Also, set NIX_PATH when entering via direnv, so nix-shell does the right thing when in there.</p> <p>Move to ansible 2.9 [...]</p> </blockquote> <p>You probably want to file a bug against the <a href="https://github.com/wireapp/wire-docs" rel="nofollow noreferrer">wire-docs</a> repository.</p>
larsks
<p>Whenever I am trying to run the docker images, it is exiting in immediately.</p> <pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ae327a2bdba3 k8s-for-beginners:v0.0.1 &quot;/k8s-for-beginners&quot; 11 seconds ago Exited (1) 10 seconds ago focused_booth </code></pre> <p>As per Container Logs</p> <pre><code>standard_init_linux.go:228: exec user process caused: no such file or directory </code></pre> <p>I have created all the files in linux itself:</p> <pre><code>FROM alpine:3.10 COPY k8s-for-beginners / CMD [&quot;/k8s-for-beginners&quot;] </code></pre> <p>GO Code:</p> <pre><code>package main import ( &quot;fmt&quot; &quot;log&quot; &quot;net/http&quot; ) func main() { http.HandleFunc(&quot;/&quot;, handler) log.Fatal(http.ListenAndServe(&quot;0.0.0.0:8080&quot;, nil)) } func handler(w http.ResponseWriter, r *http.Request) { log.Printf(&quot;Ping from %s&quot;, r.RemoteAddr) fmt.Fprintln(w, &quot;Hello Kubernetes Beginners!&quot;) } </code></pre> <p>This is the first exercise from THE KUBERNETES WORKSHOP book.</p> <p>Commands I have used in this Process:</p> <pre><code>CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o k8s-for-beginners sudo docker build -t k8s-for-beginners:v0.0.1 . sudo docker run -p 8080:8080 -d k8s-for-beginners:v0.0.1 </code></pre> <p>Output of the command:</p> <pre class="lang-bash prettyprint-override"><code>sudo docker run k8s-for-beginners:v0.0.1 ldd /k8s-for-beginners </code></pre> <pre><code> /lib64/ld-linux-x86-64.so.2 (0x7f9ab5778000) libc.so.6 =&gt; /lib64/ld-linux-x86-64.so.2 (0x7f9ab5778000) Error loading shared library libgo.so.16: No such file or directory (needed by /k8s-for-beginners) Error loading shared library libgcc_s.so.1: No such file or directory (needed by /k8s-for-beginners) Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /k8s-for-beginners) Error relocating /k8s-for-beginners: crypto..z2frsa..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2fx509..import: symbol not found Error relocating /k8s-for-beginners: log..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2fmd5..import: symbol not found Error relocating /k8s-for-beginners: crypto..import: symbol not found Error relocating /k8s-for-beginners: bytes..import: symbol not found Error relocating /k8s-for-beginners: fmt.Fprintln: symbol not found Error relocating /k8s-for-beginners: crypto..z2felliptic..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2fx509..z2fpkix..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2frand..import: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fchacha20poly1305..import: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcurve25519..import: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fidna..import: symbol not found Error relocating /k8s-for-beginners: internal..z2foserror..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2fecdsa..import: symbol not found Error relocating /k8s-for-beginners: net..z2fhttp.HandleFunc: symbol not found Error relocating /k8s-for-beginners: io..import: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp2..z2fhpack..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2fcipher..import: symbol not found Error relocating /k8s-for-beginners: log.Fatal: symbol not found Error relocating 
/k8s-for-beginners: math..z2fbig..import: symbol not found Error relocating /k8s-for-beginners: runtime..import: symbol not found Error relocating /k8s-for-beginners: net..z2fhttp..import: symbol not found Error relocating /k8s-for-beginners: hash..z2fcrc32..import: symbol not found Error relocating /k8s-for-beginners: net..z2fhttp.ListenAndServe: symbol not found Error relocating /k8s-for-beginners: context..import: symbol not found Error relocating /k8s-for-beginners: fmt..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2ftls..import: symbol not found Error relocating /k8s-for-beginners: errors..import: symbol not found Error relocating /k8s-for-beginners: internal..z2ftestlog..import: symbol not found Error relocating /k8s-for-beginners: runtime.setIsCgo: symbol not found Error relocating /k8s-for-beginners: runtime_m: symbol not found Error relocating /k8s-for-beginners: encoding..z2fhex..import: symbol not found Error relocating /k8s-for-beginners: mime..import: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2funicode..z2fbidi..import: symbol not found Error relocating /k8s-for-beginners: internal..z2freflectlite..import: symbol not found Error relocating /k8s-for-beginners: compress..z2fgzip..import: symbol not found Error relocating /k8s-for-beginners: sync..import: symbol not found Error relocating /k8s-for-beginners: compress..z2fflate..import: symbol not found Error relocating /k8s-for-beginners: encoding..z2fbinary..import: symbol not found Error relocating /k8s-for-beginners: math..z2frand..import: symbol not found Error relocating /k8s-for-beginners: runtime_cpuinit: symbol not found Error relocating /k8s-for-beginners: internal..z2fpoll..import: symbol not found Error relocating /k8s-for-beginners: mime..z2fmultipart..import: symbol not found Error relocating /k8s-for-beginners: runtime.check: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcryptobyte..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2fsha512..import: symbol not found Error relocating /k8s-for-beginners: runtime.registerTypeDescriptors: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fchacha20..import: symbol not found Error relocating /k8s-for-beginners: runtime.setmodinfo: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2ftransform..import: symbol not found Error relocating /k8s-for-beginners: time..import: symbol not found Error relocating /k8s-for-beginners: encoding..z2fbase64..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2fsha256..import: symbol not found Error relocating /k8s-for-beginners: __go_go: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp..z2fhttpguts..import: symbol not found Error relocating /k8s-for-beginners: path..z2ffilepath..import: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2fsecure..z2fbidirule..import: symbol not found Error relocating /k8s-for-beginners: os..import: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp..z2fhttpproxy..import: symbol not found Error relocating /k8s-for-beginners: net..z2ftextproto..import: symbol not found Error relocating /k8s-for-beginners: encoding..z2fasn1..import: symbol not found Error relocating /k8s-for-beginners: runtime.requireitab: symbol not found Error relocating /k8s-for-beginners: 
golang.x2eorg..z2fx..z2fnet..z2fdns..z2fdnsmessage..import: symbol not found Error relocating /k8s-for-beginners: path..import: symbol not found Error relocating /k8s-for-beginners: io..z2fioutil..import: symbol not found Error relocating /k8s-for-beginners: sort..import: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2funicode..z2fnorm..import: symbol not found Error relocating /k8s-for-beginners: internal..z2fcpu..import: symbol not found Error relocating /k8s-for-beginners: runtime.ginit: symbol not found Error relocating /k8s-for-beginners: runtime.osinit: symbol not found Error relocating /k8s-for-beginners: runtime.schedinit: symbol not found Error relocating /k8s-for-beginners: bufio..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2finternal..z2frandutil..import: symbol not found Error relocating /k8s-for-beginners: runtime_mstart: symbol not found Error relocating /k8s-for-beginners: net..import: symbol not found Error relocating /k8s-for-beginners: strconv..import: symbol not found Error relocating /k8s-for-beginners: runtime.args: symbol not found Error relocating /k8s-for-beginners: runtime..z2finternal..z2fsys..import: symbol not found Error relocating /k8s-for-beginners: runtime.newobject: symbol not found Error relocating /k8s-for-beginners: syscall..import: symbol not found Error relocating /k8s-for-beginners: unicode..import: symbol not found Error relocating /k8s-for-beginners: net..z2fhttp..z2finternal..import: symbol not found Error relocating /k8s-for-beginners: encoding..z2fpem..import: symbol not found Error relocating /k8s-for-beginners: _Unwind_Resume: symbol not found Error relocating /k8s-for-beginners: reflect..import: symbol not found Error relocating /k8s-for-beginners: mime..z2fquotedprintable..import: symbol not found Error relocating /k8s-for-beginners: log.Printf: symbol not found Error relocating /k8s-for-beginners: runtime.typedmemmove: symbol not found Error relocating /k8s-for-beginners: crypto..z2fdsa..import: symbol not found Error relocating /k8s-for-beginners: crypto..z2fsha1..import: symbol not found Error relocating /k8s-for-beginners: bufio..types: symbol not found Error relocating /k8s-for-beginners: bytes..types: symbol not found Error relocating /k8s-for-beginners: compress..z2fflate..types: symbol not found Error relocating /k8s-for-beginners: compress..z2fgzip..types: symbol not found Error relocating /k8s-for-beginners: context..types: symbol not found Error relocating /k8s-for-beginners: crypto..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fcipher..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fdsa..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fecdsa..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2felliptic..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2finternal..z2frandutil..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fmd5..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2frand..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2frsa..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fsha1..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fsha256..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fsha512..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2ftls..types: symbol not found Error relocating 
/k8s-for-beginners: crypto..z2fx509..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fx509..z2fpkix..types: symbol not found Error relocating /k8s-for-beginners: encoding..z2fasn1..types: symbol not found Error relocating /k8s-for-beginners: encoding..z2fbase64..types: symbol not found Error relocating /k8s-for-beginners: encoding..z2fbinary..types: symbol not found Error relocating /k8s-for-beginners: encoding..z2fhex..types: symbol not found Error relocating /k8s-for-beginners: encoding..z2fpem..types: symbol not found Error relocating /k8s-for-beginners: errors..types: symbol not found Error relocating /k8s-for-beginners: fmt..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fchacha20..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fchacha20poly1305..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcryptobyte..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcurve25519..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fdns..z2fdnsmessage..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp..z2fhttpguts..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp..z2fhttpproxy..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp2..z2fhpack..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fidna..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2fsecure..z2fbidirule..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2ftransform..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2funicode..z2fbidi..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2funicode..z2fnorm..types: symbol not found Error relocating /k8s-for-beginners: hash..z2fcrc32..types: symbol not found Error relocating /k8s-for-beginners: internal..z2fcpu..types: symbol not found Error relocating /k8s-for-beginners: internal..z2foserror..types: symbol not found Error relocating /k8s-for-beginners: internal..z2fpoll..types: symbol not found Error relocating /k8s-for-beginners: internal..z2freflectlite..types: symbol not found Error relocating /k8s-for-beginners: internal..z2ftestlog..types: symbol not found Error relocating /k8s-for-beginners: io..types: symbol not found Error relocating /k8s-for-beginners: io..z2fioutil..types: symbol not found Error relocating /k8s-for-beginners: log..types: symbol not found Error relocating /k8s-for-beginners: math..z2fbig..types: symbol not found Error relocating /k8s-for-beginners: math..z2frand..types: symbol not found Error relocating /k8s-for-beginners: mime..types: symbol not found Error relocating /k8s-for-beginners: mime..z2fmultipart..types: symbol not found Error relocating /k8s-for-beginners: mime..z2fquotedprintable..types: symbol not found Error relocating /k8s-for-beginners: net..types: symbol not found Error relocating /k8s-for-beginners: net..z2fhttp..types: symbol not found Error relocating /k8s-for-beginners: net..z2fhttp..z2finternal..types: symbol not found Error relocating /k8s-for-beginners: net..z2ftextproto..types: symbol not found Error relocating /k8s-for-beginners: os..types: 
symbol not found Error relocating /k8s-for-beginners: path..types: symbol not found Error relocating /k8s-for-beginners: path..z2ffilepath..types: symbol not found Error relocating /k8s-for-beginners: reflect..types: symbol not found Error relocating /k8s-for-beginners: runtime..types: symbol not found Error relocating /k8s-for-beginners: runtime..z2finternal..z2fsys..types: symbol not found Error relocating /k8s-for-beginners: sort..types: symbol not found Error relocating /k8s-for-beginners: strconv..types: symbol not found Error relocating /k8s-for-beginners: sync..types: symbol not found Error relocating /k8s-for-beginners: syscall..types: symbol not found Error relocating /k8s-for-beginners: time..types: symbol not found Error relocating /k8s-for-beginners: unicode..types: symbol not found Error relocating /k8s-for-beginners: container..z2flist..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2faes..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fdes..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fed25519..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fed25519..z2finternal..z2fedwards25519..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fhmac..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2finternal..z2fsubtle..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2frc4..types: symbol not found Error relocating /k8s-for-beginners: crypto..z2fsubtle..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcryptobyte..z2fasn1..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fhkdf..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2finternal..z2fsubtle..types: symbol not found Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fpoly1305..types: symbol not found Error relocating /k8s-for-beginners: hash..types: symbol not found Error relocating /k8s-for-beginners: internal..z2fbytealg..types: symbol not found Error relocating /k8s-for-beginners: internal..z2ffmtsort..types: symbol not found Error relocating /k8s-for-beginners: internal..z2fnettrace..types: symbol not found Error relocating /k8s-for-beginners: internal..z2frace..types: symbol not found Error relocating /k8s-for-beginners: internal..z2fsingleflight..types: symbol not found Error relocating /k8s-for-beginners: internal..z2fsyscall..z2fexecenv..types: symbol not found Error relocating /k8s-for-beginners: internal..z2fsyscall..z2funix..types: symbol not found Error relocating /k8s-for-beginners: math..types: symbol not found Error relocating /k8s-for-beginners: math..z2fbits..types: symbol not found Error relocating /k8s-for-beginners: net..z2fhttp..z2fhttptrace..types: symbol not found Error relocating /k8s-for-beginners: net..z2furl..types: symbol not found Error relocating /k8s-for-beginners: runtime..z2finternal..z2fatomic..types: symbol not found Error relocating /k8s-for-beginners: runtime..z2finternal..z2fmath..types: symbol not found Error relocating /k8s-for-beginners: strings..types: symbol not found Error relocating /k8s-for-beginners: sync..z2fatomic..types: symbol not found Error relocating /k8s-for-beginners: unicode..z2futf16..types: symbol not found Error relocating /k8s-for-beginners: unicode..z2futf8..types: symbol not found Error relocating /k8s-for-beginners: runtime.strequal..f: symbol not found Error 
relocating /k8s-for-beginners: runtime.memequal64..f: symbol not found Error relocating /k8s-for-beginners: type...1reflect.rtype: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Align: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Align: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.AssignableTo: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.AssignableTo: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Bits: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Bits: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.ChanDir: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.ChanDir: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Comparable: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Comparable: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.ConvertibleTo: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.ConvertibleTo: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Elem: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Elem: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Field: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Field: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.FieldAlign: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.FieldAlign: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.FieldByIndex: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.FieldByIndex: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.FieldByName: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.FieldByName: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.FieldByNameFunc: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.FieldByNameFunc: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Implements: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Implements: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.In: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.In: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.IsVariadic: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.IsVariadic: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Key: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Key: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Kind: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Kind: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Len: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Len: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Method: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Method: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.MethodByName: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.MethodByName: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Name: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Name: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.NumField: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.NumField: 
symbol not found Error relocating /k8s-for-beginners: reflect.rtype.NumIn: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.NumIn: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.NumMethod: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.NumMethod: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.NumOut: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.NumOut: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Out: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Out: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.PkgPath: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.PkgPath: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Size: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.Size: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.String: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.String: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.common: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.common: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.rawString: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.rawString: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.uncommon..stub: symbol not found Error relocating /k8s-for-beginners: reflect.rtype.uncommon..stub: symbol not found Error relocating /k8s-for-beginners: reflect..reflect.rtype..d: symbol not found Error relocating /k8s-for-beginners: type...1net.IPAddr: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.Network: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.Network: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.String: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.String: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.family: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.family: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.isWildcard: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.isWildcard: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.sockaddr: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.sockaddr: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.toLocal: symbol not found Error relocating /k8s-for-beginners: net.IPAddr.toLocal: symbol not found Error relocating /k8s-for-beginners: net.IPAddr..d: symbol not found Error relocating /k8s-for-beginners: runtime.main: symbol not found Error relocating /k8s-for-beginners: runtime_iscgo: symbol not found Error relocating /k8s-for-beginners: runtime_isstarted: symbol not found Error relocating /k8s-for-beginners: runtime_isarchive: symbol not found Error relocating /k8s-for-beginners: __gcc_personality_v0: symbol not found Error relocating /k8s-for-beginners: io.Writer..d: symbol not found Error relocating /k8s-for-beginners: runtime.writeBarrier: symbol not found </code></pre>
UME
<p>In my particular case, this exact error was caused by a Bash entry script with incorrect Windows/DOS line endings.</p> <p>Add this to the Dockerfile:</p> <pre><code>RUN dos2unix /entrypoint.sh </code></pre> <p>If <code>dos2unix</code> is not installed, prefix with:</p> <pre><code># For Alpine Linux: RUN apk add dos2unix # For Ubuntu: RUN apt-get install dos2unix </code></pre>
Contango
<p>I want to deploy hyperkube in a Kubernetes pod. <br/> I already have a Kubernetes cluster. I tried a few Docker images from Docker Hub, but all the pods fail with various issues. <br> I am not able to deploy the hyperkube image in a Kubernetes pod. </p>
Nikhil Kumar Agrawal
<p><code>hyperkube</code> is the binary that runs the k8s components on the nodes. It is not intended to run inside the k8s cluster.</p> <p>You may want to start with the <code>busybox</code> image:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: busybox spec: containers: - image: busybox command: - sleep - "600" name: busybox </code></pre>
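<p>To try it out, save the manifest above as <code>busybox.yaml</code> (the file name is just an example) and then:</p> <pre><code>kubectl apply -f busybox.yaml
kubectl exec -it busybox -- sh
</code></pre>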
lexsys
<p>In <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="noreferrer">GKE Ingress documentation</a> it states that:</p> <blockquote> <p><em>When you create an Ingress object, the GKE Ingress controller creates a Google Cloud HTTP(S) Load Balancer and configures it according to the information in the Ingress and its associated Services.</em></p> </blockquote> <p>To me it seems that I can not have multiple <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">ingress resources</a> with single GCP ingress controller. Instead, GKE creates a new ingress controller for every ingress resource.</p> <p>Is this really so, or is it possible to have multiple ingress resources with a single ingress controller in GKE?</p> <p>I would like to have one GCP LoadBalancer as ingress controller with static IP and DNS configured, and then have multiple applications running in cluster, each application registering its own ingress resource with application specific host and/or path specifications.</p> <p>Please note that I'm very new to GKE, GCP and Kubernetes in general, so it might be that I have misunderstood something.</p>
Jarppe
<p>I think the question you're actually asking is slightly different than what you have written. You want to know if multiple Ingress resources can be linked to a single GCP Load Balancer, not GKE Ingress controller. Based on the <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="noreferrer">concept of a controller</a>, there is only one GKE Ingress controller in a cluster, which is responsible for fulfilling multiple resources and provisioning multiple load balancers.</p> <p>So, to answer the question directly (because I've been searching for a straight answer for a long time!):</p> <blockquote> <p>Combining multiple Ingress resources into a single Google Cloud load balancer is not supported.</p> </blockquote> <p>Source: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress</a></p> <p>Sad.</p> <p>However, using the nginx-ingress controller is one way to at least minimize the number of external (GCP) load balancers provisioned (it only provisions a single TCP load balancer), but since the load balancer is for TCP traffic, it cannot terminate SSL, or apply Firewall rules for you (Cloud Armor cannot be used, for instance).</p> <p>The only way I know of to have a single HTTPS load-balancer in GCP terminate SSL and route traffic to multiple services in GKE is to combine the ingresses into a single resource with all paths and certificates defined in one place.</p> <p>(If anybody figures out a way to do it with multiple separate ingress resources, I'd love to hear it!)</p>
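<p>For reference, the single combined Ingress described above might look roughly like this on a reasonably recent cluster (<code>networking.k8s.io/v1</code> syntax; the host names, service names and static-IP name are all placeholders, and TLS hosts/secrets would be added to the same resource):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: combined-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 80
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80
</code></pre>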
mltsy
<p>I wrote a k8s deployment yml sample, but it fails every time I apply it to the cluster; the log shows:</p> <blockquote> <p>standard_init_linux.go:228: exec user process caused: exec format error</p> </blockquote> <p>The yml file is as follows. I am new to Kubernetes and am stuck here now; I hope you can help me.</p> <pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: nub1 spec: selector: matchLabels: app: nub1 tier: backend replicas: 1 template: metadata: labels: app: nub1 tier: backend spec: containers: - name: nub1 image: johnhaha/nub1:latest ports: - containerPort: 3001 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 </code></pre> <p>The Dockerfile is:</p> <pre><code>FROM node:lts ADD index.js /index.js CMD node index.js </code></pre>
John Wu
<p><code>exec format error</code> means you're trying to run a binary on a platform other than the one for which it was compiled. Looking at your image, it appears the binaries are built for an ARM platform:</p> <pre><code>$ file bash bash: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=29b2624b1e147904a979d91daebc60c27ac08dc6, stripped </code></pre> <p>Your Kubernetes environment is probably an x86_64 environment and won't be able to run your ARM binaries. The <code>docker buildx</code> command (see <a href="https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images" rel="nofollow noreferrer">the docs</a>) is able to build multi-platform images, so that may be something worth investigating.</p> <hr /> <p>You need to build a Docker image appropriate for the platform on which you will be running it.</p>
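<p>As a sketch (the tag is just an example), building and pushing an image that targets the cluster's architecture with <code>buildx</code> could look like:</p> <pre><code>docker buildx build --platform linux/amd64 -t johnhaha/nub1:v0.0.2 --push .
</code></pre> <p>Alternatively, building the image on an x86_64 machine in the first place avoids the architecture mismatch entirely.</p>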
larsks
<p>While creating a secret:</p> <pre class="lang-bash prettyprint-override"><code>kubectl -n customspace create secret tls localhost.customspace.svc.customspace-tls-pair --cert=certs/webhook.crt --key=certs/webhook.key </code></pre> <p>I get the error:</p> <pre><code>error: failed to create secret the server could not find the requested resource (post secrets) </code></pre> <p>I have checked <a href="https://kubernetes.io/releases/version-skew-policy/" rel="nofollow noreferrer">https://kubernetes.io/releases/version-skew-policy/</a> but still am getting this error.</p> <pre><code>$ kubectl version -o=yaml clientVersion: buildDate: &quot;2022-09-14T19:49:27Z&quot; compiler: gc gitCommit: e4d4e1ab7cf1bf15273ef97303551b279f0920a9 gitTreeState: clean gitVersion: v1.25.1 goVersion: go1.19.1 major: &quot;1&quot; minor: &quot;25&quot; platform: linux/amd64 kustomizeVersion: v4.5.7 serverVersion: buildDate: &quot;2022-07-13T14:23:26Z&quot; compiler: gc gitCommit: aef86a93758dc3cb2c658dd9657ab4ad4afc21cb gitTreeState: clean gitVersion: v1.24.3 goVersion: go1.18.3 major: &quot;1&quot; minor: &quot;24&quot; platform: linux/amd64 </code></pre> <pre><code>$ echo $KUBECONFIG /home/vyom/.kube/config </code></pre> <p>Any help will be really appreciated, looked at a lot of similar questions, no luck.</p>
BeastMaster64
<p>This probably means your namespace (<code>customspace</code>) doesn't exist.</p>
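<p>You can verify and fix that quickly (the namespace name is taken from the command in the question):</p> <pre><code>kubectl get namespace customspace
kubectl create namespace customspace   # if the previous command shows it is missing
</code></pre>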
Mike Conigliaro
<p><strong>Given is the following scenario</strong></p> <ul> <li>Terraform is used to generate 2 Kubernetes namespaces</li> <li><strong>Namespace A</strong> contains RabbitMQ <ul> <li>RabbitMQ is installed via a Helm chart via Terraform</li> <li>The password secret for RabbitMQ is generated by the Helm chart if not set</li> </ul> </li> <li><strong>Namespace B</strong> contains applications which access RabbitMQ <ul> <li>Therefore the password secret needs to be copied from <strong>A -&gt; B</strong> (or generated in B)</li> </ul> </li> </ul> <hr /> <p>How do I get the secret into namespace B with <strong>pure Terraform logic</strong>?</p> <p><strong>My ideas:</strong></p> <ul> <li>simply copy the secret from <strong>A -&gt; B</strong> (was not able to find a solution for that)</li> <li>read the password value from the secret in <strong>A</strong> and create a new secret in <strong>B</strong> (was only able to find solutions where the secret was initially generated by Terraform)</li> <li>generate the password secret with the Terraform &quot;random_password&quot; resource and use it in the Helm chart <ul> <li>could work if I can add a conditional check: if a Terraform var is not set, generate; else use the var value</li> </ul> </li> </ul> <p>Are there good solutions for that problem?</p>
masterchris_99
<p>Terraform's kubernetes provider appears to have a kubernetes_secret data source: <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/secret" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/secret</a></p> <p>So theoretically at least, depending on the K8s credentials you are willing to provide to Terraform while running the config, you may be able to just add a data source to the config for Namespace B (note that Kubernetes object and namespace names cannot contain underscores, so the metadata values below use hyphens; the Terraform resource label can keep the underscore):</p> <pre><code>data &quot;kubernetes_secret&quot; &quot;rabbit_password&quot; { metadata { name = &quot;rabbit-password&quot; namespace = &quot;namespace-a&quot; } } </code></pre> <p>And then refer to the data where you need it as:</p> <pre><code>data.kubernetes_secret.rabbit_password.data[&quot;rabbitmq-password&quot;] </code></pre>
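<p>To complete the loop, a sketch of copying it into the second namespace with the same provider (the resource, secret and namespace names here are illustrative, and this assumes the chart stores the password under the <code>rabbitmq-password</code> key):</p> <pre><code>resource &quot;kubernetes_secret&quot; &quot;rabbit_password_copy&quot; {
  metadata {
    name      = &quot;rabbitmq-password&quot;
    namespace = &quot;namespace-b&quot;
  }

  data = {
    &quot;rabbitmq-password&quot; = data.kubernetes_secret.rabbit_password.data[&quot;rabbitmq-password&quot;]
  }
}
</code></pre>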
mltsy
<p>I am trying to delete a persistent volume in order to rebuild a used Kafka cluster in Kubernetes from scratch. <strong>I changed the reclaim policy from Retain to Delete.</strong> But I am not able to delete two of the three volumes:</p> <pre><code>[yo@machine kafka_k8]$ kubectl describe pv kafka-zk-pv-0 Name: kafka-zk-pv-0 Labels: type=local StorageClass: Status: Failed Claim: kafka-ns/datadir-0-poc-cp-kafka-0 Reclaim Policy: Delete Access Modes: RWO Capacity: 500Gi Message: host_path deleter only supports /tmp/.+ but received provided /mnt/disk/kafka Source: Type: HostPath (bare host directory volume) Path: /mnt/disk/kafka Events: {persistentvolume-controller } Warning VolumeFailedDelete host_path deleter only supports /tmp/.+ but received provided /mnt/disk/kafka </code></pre>
jacktrade
<p>I changed the reclaim policy from &quot;Retain&quot; to &quot;<strong>Recycle</strong>&quot; and the volume can now be recreated.</p> <pre class="lang-sh prettyprint-override"><code>kubectl patch pv kafka-zk-pv-0 -p '{&quot;spec&quot;:{&quot;persistentVolumeReclaimPolicy&quot;:&quot;Recycle&quot;}}' </code></pre>
jacktrade
<p>We have an application deployed on GKE that would benefit from having fast temporary storage on disk.</p> <p>The GKE <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd" rel="nofollow noreferrer">local SSD</a> feature is almost perfect, however we have multiple pod replicas and would ideally like to support multiple pods on the same node. Mounting the local SSD using <code>hostPath</code> makes that difficult.</p> <p><a href="https://stackoverflow.com/questions/38720828/mount-local-ssd-drive-in-container">This 2016 SO question</a> mentions the idea of mounting <code>emptyDir</code>s on the local SSD which would be perfect, but I understand still isn't an option.</p> <p>There is a <a href="https://groups.google.com/forum/#!topic/kubernetes-users/IVg6QasyxV0" rel="nofollow noreferrer">2017 mailing list thread</a> with the same idea, but the answer was still not positive.</p> <p>The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd" rel="nofollow noreferrer">GCP docs for local SSDs</a> were recently updated to describe using them via the <code>PersistentVolume</code> abstraction, which sounds sort of promising. Could I use that to achieve what I'm after?</p> <p>The examples seem to show mounting the full local SSD as a <code>PersistentVolume</code>, when my preference is to use an isolated part of it for each pod. We also don't need the data to be persistent - once the pod is deleted we'd be happy for the data to be deleted as well.</p>
James Healy
<p>Kubernetes 1.11 added an alpha feature called <a href="https://github.com/kubernetes/kubernetes/issues/48677" rel="nofollow noreferrer">Downward API support in volume subPath</a>, which allows volumeMount subpaths to be set using the downward API.</p> <p>I tested this by creating a GKE 1.11 alpha cluster:</p> <pre><code>gcloud container clusters create jh-test --enable-kubernetes-alpha --zone=asia-southeast1-a --cluster-version=1.11.3-gke.18 --local-ssd-count=1 --machine-type=n1-standard-2 --num-nodes=2 --image-type=cos --disk-type=pd-ssd --disk-size=20Gi --no-enable-basic-auth --no-issue-client-certificate --no-enable-autoupgrade --no-enable-autorepair </code></pre> <p>I then created a 2-replica deployment with the following config:</p> <pre><code> env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name volumeMounts: - name: scratch-space mountPath: /tmp/scratch subPath: $(POD_NAME) volumes: - name: scratch-space hostPath: path: "/mnt/disks/ssd0" </code></pre> <p>If I <code>kubectl exec</code>'d into each pod, I had a <code>/tmp/scratch</code> directory that was isolated and very performant.</p> <p>If I SSHd into the host, then I could see a directory for each pod:</p> <pre><code>$ ls -l /mnt/disks/ssd0/ drwx--x--x 14 root root 4096 Dec 1 01:49 foo-6dc57cb589-nwbjw drwx--x--x 14 root root 4096 Dec 1 01:50 foo-857656f4-dzzzl </code></pre> <p>I also tried applying the deployment to a non-alpha GKE 1.11 cluster, but the SSD content ended up looking like this:</p> <pre><code>$ ls -l /mnt/disks/ssd0/ drwxr-xr-x 2 root root 4096 Dec 1 04:51 '$(POD_NAME)' </code></pre> <p>Unfortunately it's not realistic to run our workload on an alpha cluster, so this isn't a pragmatic solution for us yet. We'll have to wait for the feature to reach beta and become available on standard GKE clusters. It does seem to be <a href="https://github.com/kubernetes/kubernetes/issues/64604" rel="nofollow noreferrer">slowly progressing</a>, although the API <a href="https://github.com/kubernetes/community/pull/2871" rel="nofollow noreferrer">will probably change slightly</a>.</p> <p>For kubernetes 1.14, the syntax for <code>volumeMounts</code> has changed to use a new <code>subPathExpr</code> field. The feature remains alpha-only:</p> <pre><code> env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name volumeMounts: - name: scratch-space mountPath: /tmp/scratch subPathExpr: $(POD_NAME) volumes: - name: scratch-space hostPath: path: "/mnt/disks/ssd0" </code></pre>
James Healy
<p>Using the example to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">set env variables</a> I can set them, but I cannot find documentation on where I can USE them in the manifest. Replacing a literal value with an env variable for hostAliases IP gives an error:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: envar-demo labels: purpose: demonstrate-envars spec: containers: - name: envar-demo-container image: gcr.io/google-samples/node-hello:1.0 env: - name: DEMO_GREETING value: &quot;Hello from the environment&quot; - name: DEMO_FAREWELL value: &quot;Such a sweet sorrow&quot; - name: HOST_ALIAS_IP value: &quot;127.0.0.1&quot; hostAliases: - ip: $(HOST_ALIAS_IP) hostnames: - &quot;desktop&quot; </code></pre> <blockquote> <p>C:&gt;kubectl apply -f envars.yaml<br /> The Pod &quot;envar-demo&quot; is invalid: spec.hostAliases.ip: Invalid value: &quot;$(HOST_ALIAS_IP)&quot;: must be valid IP address</p> </blockquote>
mm_sml
<p>You can't use environment variables in the manifest. You can only make use of them inside the pod. They describe the environment of the pod, not the environment in which your manifest is parsed.</p>
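<p>The manifest is never run through a shell, so nothing ever expands <code>$(HOST_ALIAS_IP)</code>. A common workaround (not something the Kubernetes API does for you) is to template the file before it reaches the cluster, for example with <code>envsubst</code>, using shell-style <code>${VAR}</code> syntax in the manifest instead of <code>$(VAR)</code>:</p> <pre><code>export HOST_ALIAS_IP=127.0.0.1
envsubst &lt; pod.yaml | kubectl apply -f -
</code></pre>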
larsks
<p>I have been reading about <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">liveness and readiness probes in kubernetes</a> and I would like to use them to check and see if a cluster has come alive.</p> <p>The question is how to configure a readiness probe for an entire statefulset, and not an individual pod/container.</p> <p>A simple HTTP check can be used to determine readiness, but the issue I'm running into is that the readinessCheck seems to apply to the container/pod and not to the set itself.</p> <p>For the software I'm using, the HTTP endpoint doesn't come up until the cluster forms; meaning that each individual pod would fail the readinessCheck until all three are up and find one another.</p> <p>The behavior I'm seeing in Kubernetes right now is that the first of 3 replicas is created, and Kubernetes does not even attempt to create replicas 2 and 3 until the first passes the readinessCheck, which never happens, because all three have to be up for it to have a chance to pass it.</p>
FrobberOfBits
<p>You need to change <code>.spec.podManagementPolicy</code> for a <code>StatefulSet</code> from <code>OrderedReady</code> to <code>Parallel</code> policy. </p> <p>This way K8S will start all your pods in parallel and won't wait for probes.</p> <p>From <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#statefulsetspec-v1-apps" rel="nofollow noreferrer">documentation</a></p> <blockquote> <p>podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.</p> </blockquote>
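<p>In the manifest it is a single field at the top level of the StatefulSet spec (the name below is a placeholder, and note that this field cannot be changed on an existing StatefulSet, so you may have to delete and recreate the object):</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-set
spec:
  podManagementPolicy: Parallel
  replicas: 3
  # ... selector, serviceName, template, etc. unchanged
</code></pre>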
lexsys
<p>I've built the following script:</p> <pre><code>import boto import sys import gcs_oauth2_boto_plugin def check_size_lzo(ds): # URI scheme for Cloud Storage. CLIENT_ID = 'myclientid' CLIENT_SECRET = 'mysecret' GOOGLE_STORAGE = 'gs' dir_file= 'date_id={ds}/apollo_export_{ds}.lzo'.format(ds=ds) gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET) uri = boto.storage_uri('my_bucket/data/apollo/prod/'+ dir_file, GOOGLE_STORAGE) key = uri.get_key() if key.size &lt; 45379959: raise ValueError('umg lzo file is too small, investigate') else: print('umg lzo file is %sMB' % round((key.size/1e6),2)) if __name__ == "__main__": check_size_lzo(sys.argv[1]) </code></pre> <p>It works fine locally, but when I try to run it on a Kubernetes cluster I get the following error:</p> <pre><code>boto.exception.GSResponseError: GSResponseError: 403 Access denied to 'gs://my_bucket/data/apollo/prod/date_id=20180628/apollo_export_20180628.lzo' </code></pre> <p>I have updated the .boto file on my cluster and added my OAuth client id and secret, but I am still having the same issue. </p> <p>Would really appreciate help resolving this issue.</p> <p>Many thanks!</p>
D_usv
<p>If it works in one environment and fails in another, I assume that you're getting your auth from a .boto file (or possibly from the OAUTH2_CLIENT_ID environment variable), but your kubernetes instance is lacking such a file. That you got a 403 instead of a 401 says that your remote server is correctly authenticating as somebody, but that somebody is not authorized to access the object, so presumably you're making the call as a different user.</p> <p>Unless you've changed something, I'm guessing that you're getting <a href="https://cloud.google.com/docs/authentication/production#obtaining_credentials_on_compute_engine_kubernetes_engine_app_engine_flexible_environment_and_cloud_functions" rel="nofollow noreferrer">the default Kubernetes Engine auth</a>, with means <a href="https://cloud.google.com/compute/docs/access/service-accounts#compute_engine_default_service_account" rel="nofollow noreferrer">a service account associated with your project</a>. That service account probably hasn't been granted read permission for your object, which is why you're getting a 403. Grant it read/write permission for your GCS resources, and that should solve the problem.</p> <p>Also note that by default the default credentials aren't scoped to include GCS, so <a href="https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes" rel="nofollow noreferrer">you'll need to add that as well</a> and then restart the instance.</p>
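<p>If it does turn out to be the default service account, granting it read access on the bucket can be done along these lines (the project number is a placeholder, and <code>roles/storage.objectViewer</code> is enough for read-only access):</p> <pre><code>gsutil iam ch serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:roles/storage.objectViewer gs://my_bucket
</code></pre>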
Brandon Yarbrough
<p>I have a docker image with the entrypoint below.</p> <pre><code>ENTRYPOINT [&quot;sh&quot;, &quot;-c&quot;, &quot;python3 -m myapp ${*}&quot;] </code></pre> <p>I tried to pass arguments to this image in my Kubernetes deployments so that <code>${*}</code> is replaced with them, but after checking the logs it seems that the first argument was ignored. I tried to reproduce the result regardless of the image, and applied the pod below:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: test image: postgres # or any image you may like command: [&quot;bash -c /bin/echo ${*}&quot;] args: - sth - serve - arg </code></pre> <p>When I check the logs, I just see <code>serve arg</code>, and <code>sth</code> is completely ignored. Any idea what went wrong, or what I should do to pass arguments to exec-style entrypoints instead?</p>
Soroush Vafaie Tabar
<p>First, your <code>command</code> has quoting problems -- you are effectively running <code>bash -c echo</code>.</p> <p>Second, you need to closely read the documentation for the <code>-c</code> option (emphasis mine):</p> <blockquote> <p>If the <code>-c</code> option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, <strong>the first argument is assigned to $0</strong> and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages.</p> </blockquote> <p>So you want:</p> <pre><code>command: [&quot;bash&quot;, &quot;-c&quot;, &quot;echo ${*}&quot;, &quot;bash&quot;] </code></pre> <p>Given your pod definition, this would set <code>$0</code> to <code>bash</code>, and then <code>$1</code> to <code>sth</code>, <code>$2</code> to <code>serve</code>, and <code>$3</code> to <code>arg</code>.</p>
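<p>You can see the quoting and <code>$0</code> behaviour locally without involving Kubernetes at all:</p> <pre><code>$ bash -c 'echo ${*}' bash sth serve arg
sth serve arg
$ bash -c 'echo ${*}' sth serve arg   # first argument becomes $0 and disappears
serve arg
</code></pre>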
larsks
<p>Just noticed that after I update my appname:latest tag to a new image, the command I expected to run an exact debugging clone of a terminated POD is actually pulling the latest! I've searched (briefly) in Kubernetes and Openshift references, but found nothing specific. Looks like a bug, or at least counter-intuitive to debugging. Is there a way to force it, other than using explicit image IDs instead of tags in DeploymentConfigs?</p>
ptrk
<p>The <code>oc debug</code> command would usually be run against the deployment config. Since there is no concept of versioning of resources such as deployment config, the command will use whatever image is matched by the deployment config at that time.</p> <p>If the way you have set up the deployment config uses an image stream, then to maintain multiple versions of images so you can rollback to prior images, you shouldn't use <code>latest</code> tag alone. Instead each time you build and have a good image, tag that specific image in the image stream and then update the deployment config to use that tagged image in the image stream.</p> <p>If that model was followed and you had incremented the tag version, then you could still go back to a prior version if you needed to debug it.</p> <p>If you aren't using an image stream but are hosting on a remote registry, you would still want to tag each separate image you use so you can do the same.</p> <p>I am not sure what you feel is a bug.</p>
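<p>As a sketch of that tagging flow with an image stream (all names here are placeholders), you could pin the current good image under a versioned tag and point the deployment config's trigger at it:</p> <pre><code>oc tag myapp:latest myapp:v5
oc set triggers dc/myapp --from-image=myproject/myapp:v5 -c myapp
</code></pre> <p>With the deployment config tracking <code>myapp:v5</code>, <code>oc debug dc/myapp</code> keeps using that exact image even after <code>latest</code> moves on.</p>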
Graham Dumpleton
<p>I'm currently trying to alert on Kubernetes pods stacking within an availability zone. I've managed to use two different metrics to the point where I can see how many pods for an application are running on a specific availability zone. However, due to scaling, I want the alert to be percentage based...so we can alert when a specific percentage of pods are running on one AZ (i.e. over 70%).</p> <p>My current query:</p> <pre><code>sum(count(kube_pod_info{namespace="somenamespace", created_by_kind="StatefulSet"}) by (created_by_name, node) * on (node) group_left(az_info) kube_node_labels) by (created_by_name, az_info) </code></pre> <p>And some selected output:</p> <pre><code>{created_by_name="some-db-1",az_info="az1"} 1 {created_by_name="some-db-1",az_info="az2"} 4 {created_by_name="some-db-2",az_info="az1"} 2 {created_by_name="some-db-2",az_info="az2"} 3 </code></pre> <p>For example, in the above output we can see that 4 db-1 pods are stacking on az2 as opposed to 1 pod on az1. In this scenario we would want to alert as 80% of db-1 pods are stacked on a single AZ.</p> <p>As the output contains multiple pods on multiple AZs, it feels like it may be difficult to get the percentage using a single Prometheus query, but wondered if anyone with more experience could offer a solution?</p> <p>Thanks!</p>
Alistair Webster
<pre><code> your_expression / ignoring(created_by_name) group_left sum without(created_by_name)(your_expression) </code></pre> <p>will give you the ratio of the whole for each, and then you can do <code>&gt; .8</code> on that.</p>
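<p>Plugging the query from the question in as <code>your_expression</code>, and choosing <code>az_info</code> as the label to aggregate away (since &quot;the whole&quot; you want to compare against is all pods of a given StatefulSet across zones), the alert expression would have roughly this shape:</p> <pre><code>&lt;your_expression&gt;
  / ignoring(az_info) group_left
sum without(az_info) (&lt;your_expression&gt;)
&gt; 0.8
</code></pre> <p>where <code>&lt;your_expression&gt;</code> stands for the full <code>sum(count(kube_pod_info{...}) by (created_by_name, node) * on (node) group_left(az_info) kube_node_labels) by (created_by_name, az_info)</code> query written out in both places.</p>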
brian-brazil
<p>Sorry if this sounds like I'm lazy, but I've searched around and around and couldn't find it!</p> <p>I'm looking for a reference that explains each of the fields that may exist in an OpenShift / Kubernetes template, e.g. what possible values there are.</p>
His
<p>The templates you get in OpenShift are OpenShift specific and not part of Kubernetes. If you mean the purpose of each of the possible fields you can specify for a parameter, you can run <code>oc explain template</code>. For example:</p> <pre><code>$ oc explain template.parameters RESOURCE: parameters &lt;[]Object&gt; DESCRIPTION: parameters is an optional array of Parameters used during the Template to Config transformation. Parameter defines a name/value variable that is to be processed during the Template to Config transformation. FIELDS: description &lt;string&gt; Description of a parameter. Optional. displayName &lt;string&gt; Optional: The name that will show in UI instead of parameter 'Name' from &lt;string&gt; From is an input value for the generator. Optional. generate &lt;string&gt; generate specifies the generator to be used to generate random string from an input value specified by From field. The result string is stored into Value field. If empty, no generator is being used, leaving the result Value untouched. Optional. The only supported generator is "expression", which accepts a "from" value in the form of a simple regular expression containing the range expression "[a-zA-Z0-9]", and the length expression "a{length}". Examples: from | value ----------------------------- "test[0-9]{1}x" | "test7x" "[0-1]{8}" | "01001100" "0x[A-F0-9]{4}" | "0xB3AF" "[a-zA-Z0-9]{8}" | "hW4yQU5i" name &lt;string&gt; -required- Name must be set and it can be referenced in Template Items using ${PARAMETER_NAME}. Required. required &lt;boolean&gt; Optional: Indicates the parameter must have a value. Defaults to false. value &lt;string&gt; Value holds the Parameter data. If specified, the generator will be ignored. The value replaces all occurrences of the Parameter ${Name} expression during the Template to Config transformation. Optional. </code></pre> <p>You can find more information in:</p> <ul> <li><a href="https://docs.openshift.org/latest/dev_guide/templates.html" rel="nofollow noreferrer">https://docs.openshift.org/latest/dev_guide/templates.html</a></li> </ul> <p>If that isn't what you mean, you will need to be more specific as to what you mean. If you are talking about fields on any resource object (templates are specific type of resource object in OpenShift), you can use <code>oc explain</code> on any of them, pass the name of the resource type as argument, and then a dotted path as you traverse into fields. If using plain Kubernetes, you can use <code>kubectl explain</code>.</p>
Graham Dumpleton