Columns: Question (string, 65-39.6k chars) | QuestionAuthor (string, 3-30 chars) | Answer (string, 38-29.1k chars) | AnswerAuthor (string, 3-30 chars)
<p>With a <code>CRD</code> we can extend the functionality of Kubernetes, but how can I know which <code>controller</code> handles a certain CRD? I mean, I know there is a CRD named <code>foo</code> registered in my Kubernetes cluster, but how can I find out which <code>controller/pod</code> does the reconciliation for it?</p>
vincent pli
<p>There is no way of knowing just by looking at the CRDs. Several different controllers could be watching the same CRD, it's not like there is a 1-1 relationship.</p> <p>If you really need to know, there would be ways of figuring this out, like enabling the audit log and inspecting the calls received by the k8s api.</p>
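<p>For illustration, a rough sketch of the audit-log approach, assuming audit logging is already enabled, events are written as JSON lines to <code>/var/log/kubernetes/audit.log</code> (the path varies by setup), and the CRD's plural/group are the hypothetical <code>foos</code>/<code>example.com</code>:</p> <pre class="lang-bash prettyprint-override"><code># List which clients have been reading/writing the custom resources;
# controllers typically show up as system:serviceaccount:&lt;namespace&gt;:&lt;name&gt;.
jq -r 'select(.objectRef.resource == &quot;foos&quot; and .objectRef.apiGroup == &quot;example.com&quot;) | .user.username' \
  /var/log/kubernetes/audit.log | sort | uniq -c
</code></pre>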
Jose Armesto
<p>I have the following dockerfile for a project that is hosted with Kubernetes and Openshift, and I am getting a vulnerability warning from Gitlab that line 10 should use an absolute path instead of a relative path for the sake of clarity and reliability. Is there something about a string path that the dockerfile or Gitlab doesn't like? I am not getting the warnings for lines 3, 6, or 17. NOTE: I've replaced the docker image and project names below with placeholders surrounded by brackets.</p> <pre><code>1 FROM {docker-image1}
2 HEALTHCHECK CMD curl --fail -s http://localhost:8080/liveliness || exit 1
3 WORKDIR /app
4
5 FROM {docker-image2} AS build
6 WORKDIR /app/src
7 COPY [&quot;{proj-path-string1}&quot;, &quot;{proj-path-string2}&quot;]
8 RUN dotnet restore --runtime linux-x64 &quot;{proj-path-string1}&quot;
9 COPY . .
10 WORKDIR &quot;/app/src/{directory-name}&quot;
11 RUN dotnet build --runtime linux-x64 &quot;{project-name}&quot; -c Release -o /app/build
12
13 FROM build AS publish
14 RUN dotnet publish --runtime linux-x64 &quot;{project-name}&quot; -c Release -o /app/publish
15
16 FROM base AS final
17 WORKDIR /app
18 COPY --from=publish /app/publish .
19
20 ENTRYPOINT [&quot;dotnet&quot;, &quot;{project-name}.dll&quot;]
</code></pre>
Reid P.
<p>Delete the quotes from that line, changing:</p> <pre><code>WORKDIR &quot;/app/src/{directory-name}&quot; </code></pre> <p>To:</p> <pre><code>WORKDIR /app/src/{directory-name} </code></pre>
BMitch
<p>I am building a new Helm chart (<strong>mychart</strong>) that I'm trying to install.</p> <p>A <code>values.yaml</code> exists and its contents specify the fullnameOverride:</p> <pre class="lang-yaml prettyprint-override"><code>fullnameOverride: &quot;myapp&quot; </code></pre> <p>I run the following command</p> <p><code>helm install --dry-run -f &quot;mychart-stack/values.yaml&quot; mychart-stack1 ./mychart-stack</code></p> <p>And it's giving me the error:</p> <blockquote> <p>template: mychart-stack/templates/persistentvolume.local-storage.range.yml:5:14: executing &quot;mychart-stack/templates/persistentvolume.local-storage.range.yml&quot; at &lt;include &quot;mychart-stack.fullname&quot; .&gt;: error calling include: template: mychart-stack/templates/_helpers.tpl:14:14: executing &quot;mychart-stack.fullname&quot; at &lt;.Values.fullnameOverride&gt;: nil pointer evaluating interface {}.fullnameOverride</p> </blockquote> <p>The <code>mychart-stack/templates/_helpers.tpl:14:14</code> is the pregenerated one when you're asking Helm to produce a Chart example.</p> <p>The error (14:14) is associated at the first line of the following auto generated code:</p> <pre class="lang-yaml prettyprint-override"><code>{{- if .Values.fullnameOverride }} {{- .Values.fullnameOverride | trunc 63 | trimSuffix &quot;-&quot; }} {{- else }} </code></pre> <hr /> <p>A little more context, as it's throwing an error while checking the persistentvolume.local-storage.range.yml, here are the contents of the file:</p> <pre class="lang-yaml prettyprint-override"><code>{{- range .Values.persistentVolume.localStorage }} --- apiVersion: v1 kind: PersistentVolume metadata: name: pv-{{ include &quot;mychart-stack.fullname&quot; }}-{{ .name }} spec: capacity: storage: 20Gi # le champ volumeMode requiert l'activation de la &quot;feature gate&quot; Alpha BlockVolume volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage-{{ include &quot;mychart-stack.fullname&quot; }}--{{ .name }} local: path: {{ .Values.persistentVolume.basePath }}/{{ .name }} nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - {{ .Values.hostName }} {{- end }} </code></pre> <p>I don't know what's wrong, the code seems to indicate that it's not defined properly. I tried to run it in --debug mode but it doesn't help (same error).</p>
Micaël Félix
<p>In the end the problem wasn't the values.yaml being set incorrectly, but rather the way it was used within the template.</p> <p>When using an <code>include</code> of a definition coming from a .tpl file (this one was the one auto-generated by Helm), we must be careful when we are inside a <code>range</code>.</p> <p>I was creating a range of assets, so the include runs in the context of the range.</p> <blockquote> <p>Your conditional logic is being evaluated inside a range loop. This means the <code>.</code> you're using to access Values is not the one you expect it to be, as it's overridden for each range iteration evaluation.</p> </blockquote> <p>ref: <a href="https://stackoverflow.com/questions/57475521/ingress-yaml-template-returns-error-in-renderring-nil-pointer-evaluating-int">ingress.yaml template returns error in renderring --&gt; nil pointer evaluating interface {}.service</a></p> <p>That means we should use the <code>$</code> notation instead of <code>.</code>, because <code>$</code> references the global scope.</p> <p>Example:</p> <pre class="lang-yaml prettyprint-override"><code>{{- include &quot;mychart-stack.fullname&quot; $ }}
</code></pre>
Micaël Félix
<p>Asked in <a href="https://stackoverflow.com/questions/65358738/is-there-any-way-to-configure-skaffold-to-build-images-on-my-local-docker-daemon/65395223?noredirect=1#comment120508501_65395223">a different question</a>:</p> <blockquote> <p>why does <code>skaffold</code> need two tags to the same image?</p> </blockquote>
Brian de Alwis
<p>During deployment, Skaffold rewrites the image references in the Kubernetes manifests being deployed to ensure that the cluster pulls the newly-built images and doesn't use stale copies (read about <code>imagePullPolicy</code> and some of the issues that it attempts to address). Skaffold can't just use the computed image tag, as many tag conventions do not produce unique tags and the tag can be overwritten by another developer and point to a different image. It's not unusual for a team of devs, or parallel tests, to push images into the same image repository and encounter tag clashes. For example, <code>latest</code> will be overwritten by the next build, and the default <code>gitCommit</code> tagger generates tags like <code>v1.17.1-38-g1c6517887</code>, which uses the most recent version tag and the current commit SHA and so isn't unique across uncommitted source changes.</p> <p>When pushing to a registry, Skaffold can use the image's <em>digest</em>, the portion after the <code>@</code> in <code>gcr.io/my-project/image:latest@sha256:xxx</code>. This digest is the hash of the image configuration and layers and uniquely identifies a specific image. A container runtime ignores the tag (<code>latest</code> here) when there is a digest.</p> <p>When loading an image into a Docker daemon, as happens when deploying to minikube, the Docker daemon does not maintain image digests. So Skaffold instead tags the image with a second tag using a <em>computed digest</em>. It's extremely unlikely that two <em>different</em> images will have the same computed digest.</p> <p>Tags are cheap: they're like symlinks, pointing to an image identifier.</p>
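<p>For example, the digests Docker records for images that have been pushed to or pulled from a registry can be listed with (the repository name below is just an illustration):</p> <pre class="lang-bash prettyprint-override"><code># Shows the sha256 digest alongside each tag, where known
docker images --digests gcr.io/my-project/image
</code></pre>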
Brian de Alwis
<p>I have a k8s cluster where I deploy some containers.</p> <p>The cluster is accessible at microk8s.hostname.internal.</p> <p>At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress.</p> <p>And this works great.</p> <p>Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal.</p> <p>How do I do this?</p> <p>Currently installed addons in microk8s:</p> <pre><code>aasa@bolsrv0891:/snap/bin$ microk8s status microk8s is running high-availability: no addons: enabled: dashboard # (core) The Kubernetes dashboard dns # (core) CoreDNS helm # (core) Helm - the package manager for Kubernetes helm3 # (core) Helm 3 - the package manager for Kubernetes ingress # (core) Ingress controller for external access metrics-server # (core) K8s Metrics Server for API access to service metrics </code></pre> <p>Update 1: If I portforward to my service it works. I have tried this ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minimal-ingress namespace: jupyter-notebook annotations: kubernetes.io/ingress.class: public spec: rules: - host: jupyter.microk8s.hostname.internal http: paths: - path: / pathType: Prefix backend: service: name: jupyter-service port: number: 7070 </code></pre> <p>But I cant access it nor ping it. Chrome says: jupyter.microk8s.hostname.internal’s server IP address could not be found.</p> <p>My service looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: jupyter-service namespace: jupyter-notebook spec: ports: - name: 7070-8888 port: 7070 protocol: TCP targetPort: 8888 selector: app: jupyternotebook type: ClusterIP status: loadBalancer: {} </code></pre> <p>I can of course ping microk8s.hostname.internal.</p> <p>Update 2:</p> <p>The ingress that is working today that has a context path: microk8s.boliden.internal/myapplication looks like this:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: public nginx.ingress.kubernetes.io/rewrite-target: /$1 name: jupyter-ingress namespace: jupyter-notebook spec: rules: - http: paths: - path: &quot;/jupyter-notebook/?(.*)&quot; pathType: Prefix backend: service: name: jupyter-service port: number: 7070 </code></pre> <p>This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.</p>
Viktor Eriksson
<p>To do this you would have to configure a Kubernetes Service and Ingress, and then configure your DNS.</p> <p>Adding an entry to the <code>hosts</code> file would allow DNS resolution of <code>otherapplication.microk8s.hostname.internal</code>.</p> <p>You could use <code>dnsmasq</code> to allow for wildcard resolution, e.g. <code>*.microk8s.hostname.internal</code>.</p> <p>You can test the DNS resolution using <code>nslookup</code> or <code>dig</code>.</p>
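<p>For example (the IP below is a placeholder for your ingress/node address):</p> <pre class="lang-bash prettyprint-override"><code># /etc/hosts on the client machine: a one-off entry for the new host name
192.168.1.50   otherapplication.microk8s.hostname.internal

# or, in dnsmasq configuration, resolve every subdomain of the cluster host
address=/microk8s.hostname.internal/192.168.1.50

# verify resolution before testing the ingress
nslookup otherapplication.microk8s.hostname.internal
</code></pre>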
Wayne Shelley
<p>How can I make <code>Skaffold</code> forward privileged/protected/special ports which have numbers below <code>1024</code>? In my <code>skaffold.yaml</code> I added:</p> <pre class="lang-yaml prettyprint-override"><code>portForward: - resourceType: service resourceName: foo port: 80 localPort: 80 </code></pre> <p>It works fine for all unprotected ports, but in case of port <code>80</code>, <code>Skaffold</code> automatically picks another unprotected port instead of <code>80</code>.</p> <p>According to the documentation <code>Skaffold</code> runs <code>kubectl port-forward</code> on each of user-defined ports, so I granted the <code>kubectl</code> binary the capability to open privileged ports with this command <code>sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/kubectl</code>.</p> <p>Everything works fine when directly running <code>kubectl port-forward services/foo 80:80</code>, but when I run <code>skaffold dev --port-forward</code> it still picks another unprotected port.</p> <p>I have been using <code>Skaffold v1.28.1</code> with <code>Minikube v1.22.0</code> on <code>Ubuntu 20.04</code>.</p>
adrihanu
<p>This should work. We changed Skaffold's behaviour to prevent it from allocating system ports (≤ 1024), but user-defined port-forwards with explicit <code>localPort</code>s will still be honoured.</p> <p>You didn't say which ports you were seeing being allocated, but I suspect they were ports 4503–4533, in which case you're hitting a bug (<a href="https://github.com/GoogleContainerTools/skaffold/issues/6312" rel="nofollow noreferrer">#6312</a>). This bug is now fixed and will be in the next release. You can also use the &quot;bleeding-edge&quot; build which is built from HEAD: the <a href="https://skaffold.dev/docs/install/" rel="nofollow noreferrer">installation instructions</a> have details on where to fetch these pre-built binaries.</p>
Brian de Alwis
<p>I'm running docker-registry inside a deployment with ingress setup (nginx-ingress), and I use cloudflare. I started getting issues when trying to push images larger then 1GB if a layer is bit larger then that I just get &quot;Retrying in x&quot;, and it begins from 0. Strange enough pushing any layer below that threshold just passes without issue and push succeeds.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: {{ .Values.name }} annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: {{ .Values.certManager.name }} nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; nginx.org/client-max-body-size: &quot;0&quot; nginx.ingress.kubernetes.io/proxy-buffering: &quot;off&quot; nginx.ingress.kubernetes.io/proxy-http-version: &quot;1.1&quot; nginx.ingress.kubernetes.io/proxy_ignore_headers: &quot;X-Accel-Buffering&quot; nginx.ingress.kubernetes.io/connection-proxy-header: &quot;keep-alive&quot; nginx.ingress.kubernetes.io/proxy-connect-timeout: &quot;600&quot; nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;600&quot; nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;600&quot; nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: &quot;600&quot; nginx.ingress.kubernetes.io/proxy-next-upstream-tries: &quot;10&quot; nginx.ingress.kubernetes.io/proxy-request-buffering: &quot;off&quot; nginx.ingress.kubernetes.io/proxy-body-size: &quot;8192m&quot; kubernetes.io/tls-acme: 'true' nginx.ingress.kubernetes.io/configuration-snippet: | more_set_headers &quot;proxy_http_version 1.1&quot;; more_set_headers &quot;X-Forwarded-For $proxy_add_x_forwarded_for&quot;; more_set_headers &quot;Host $http_host&quot;; more_set_headers &quot;Upgrade $http_upgrade&quot;; more_set_headers &quot;Connection keep-alive&quot;; more_set_headers &quot;X-Real-IP $remote_addr&quot;; more_set_headers &quot;X-Forwarded-For $proxy_add_x_forwarded_for&quot;; more_set_headers &quot;X-Forwarded-Proto: https&quot;; more_set_headers &quot;X-Forwarded-Ssl on&quot;;   labels: app: {{ .Values.name }} spec: tls: - hosts: {{- range .Values.certificate.dnsNames }} - {{ . }} {{- end}} secretName: {{ .Values.certificate.secretName }} rules: - host: {{ .Values.certManager.mainHost }} http: paths: - path: / pathType: Prefix backend: service: name: {{ .Values.service.name }} port: number: {{ .Values.service.port }} </code></pre> <p>I want to be able to upload any size image as long as storage is available.</p>
Immutable
<p>First, verify you are using nginx-ingress and not ingress-nginx, which uses a <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size" rel="nofollow noreferrer">different configuration for the body size</a>:</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/proxy-body-size: &quot;0&quot;
</code></pre> <p>Next, track down where the connection is getting dropped by checking the proxy and registry logs. This includes Cloudflare, the nginx pod, and the registry pod. There's no need to debug all three simultaneously, so figure out which one of these is rejecting the large put requests. If the issue is Cloudflare, e.g. if you don't see any logs in your nginx ingress instance or registry containers, then consider pushing directly to your nginx ingress rather than through Cloudflare. Those logs may also indicate if the issue is based on time rather than size, which would be a different setting to adjust.</p> <p>And finally, as a workaround if you can't push with one large blob, there is an option to do a chunked blob put to a registry, which breaks the upload up into smaller requests, each of which should be below the proxy limits. Docker by default does a chunked upload but with only a single chunk, and I'm not aware of any way to change its settings. My own project is <a href="https://github.com/regclient/regclient/" rel="nofollow noreferrer">regclient</a>, and it can copy images in an OCI layout or exported from the docker engine to a registry. With regclient/regctl working from an exported OCI layout or <code>docker save</code> output, that could be implemented with the following:</p> <pre class="lang-bash prettyprint-override"><code>regctl registry login $registry
# when a blob exceeds 500M, push it as 50M chunks, note each chunk is held in ram
regctl registry set --blob-max 500000000 --blob-chunk 50000000 $registry
regctl image import $registry/$repo:$tag $file_tar
</code></pre>
BMitch
<p>I have a multi-module maven project (Spring Boot), I generate the docker images in using the JIB Maven Plugin but how should I name the images in scaffold? Im pushing to local docker repo and Skaffold afaik does not support templating. What is the recommended was to reference these images in Skaffold?</p> <p>Keep in mind that for separate images per module I need to name them as:</p> <pre><code> ${image.registry.host}:${image.registry.port}/${project.artifact} </code></pre> <p>So no choice really but to parametrize them in the pom.</p> <p>Do I now need to put in host and port names into the skaffold file? Whats the best way to handle this atm? And how about the name in Kubernetes deployment descriptor?</p> <pre><code> &lt;plugin&gt; &lt;groupId&gt;com.google.cloud.tools&lt;/groupId&gt; &lt;artifactId&gt;jib-maven-plugin&lt;/artifactId&gt; &lt;version&gt;${jib-maven-plugin.version}&lt;/version&gt; &lt;configuration&gt; &lt;!--If you want custom base image and push registry, use below configuration replace above--&gt; &lt;from&gt; &lt;image&gt;openjdk:8-jdk-alpine&lt;/image&gt; &lt;/from&gt; &lt;to&gt; **&lt;image&gt;${image.registry.host}:${image.registry.port}/${project.artifactId}**:${project.version}&lt;/image&gt; &lt;/to&gt; &lt;container&gt; &lt;jvmFlags&gt; &lt;jvmFlag&gt;-Djava.security.egd=file:/dev/./urandom&lt;/jvmFlag&gt; &lt;jvmFlag&gt;-Xdebug&lt;/jvmFlag&gt; &lt;jvmFlag&gt;-Duser.timezone=GMT+08&lt;/jvmFlag&gt; &lt;/jvmFlags&gt; &lt;mainClass&gt;com.example.jib.JibApplication&lt;/mainClass&gt; &lt;ports&gt; &lt;port&gt;8080&lt;/port&gt; &lt;/ports&gt; &lt;/container&gt; &lt;allowInsecureRegistries&gt;true&lt;/allowInsecureRegistries&gt; &lt;/configuration&gt; &lt;/plugin&gt; What goes in the Scaffold.yml for image name? apiVersion: skaffold/v1beta4 kind: Config # Enforce SKaffold to use Jib build: local: push: false # Generated artifact artifacts: **- image: lvthillo/my-app. ??????????? HOW SHOULD I NAME THIS? image: module2/ ???????** # Use jibMaven jibMaven: {} # Execute deployment.yml deploy: kubectl: manifests: - kubernetes/deployment.yml </code></pre> <p>Here is Kubernetes deployment descriptor.</p> <p>What name should image have here???</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: spring-deployment spec: replicas: 1 selector: matchLabels: app: spring-boot-jib template: metadata: labels: app: spring-boot-jib spec: containers: - name: spring-boot-jib-pod **image: lvthillo/my-app. ????????? What name here???** imagePullPolicy: IfNotPresent ports: - name: http containerPort: 8080 </code></pre> <hr> <pre><code> apiVersion: v1 kind: Service metadata: name: spring-boot-jib-service spec: type: NodePort ports: - protocol: TCP port: 8080 nodePort: 32321 selector: app: spring-boot-jib </code></pre>
Steven Smart
<p>When using Jib with Skaffold, Skaffold is the master and overrides the image refs used by Jib. Skaffold will decide what image names should be used and will rewrite the image references in the Kubernetes manifests to match prior to deploying.</p> <p>If you're developing against a <em>local cluster</em> such as Minikube, then Skaffold causes the images to be loaded directly into Minikube's Docker daemon; they're never pushed to the actual registry. (In Jib terms, Skaffold will invoke <code>jib:dockerBuild</code> rather than <code>jib:build</code> to a registry.) So you can keep your production image refs in your <code>skaffold.yaml</code>.</p> <p>If you're pushing to registries, then you have three options depending on your setup:</p> <ol> <li><p>If you have local or per-developer setups, then keep the production image refs and have developers use Skaffold's <code>--default-repo</code> option: it transforms and replaces the <em>repository</em> portion of the image refs as specified in the <code>skaffold.yaml</code> to point to this new repository.</p></li> <li><p>If it's ok for developers to intermingle within the same registry and repository, then the <a href="https://skaffold.dev/docs/how-tos/taggers/" rel="nofollow noreferrer">Skaffold taggers</a> should ensure your developers' pushed images won't conflict. You could use the <code>envTemplate</code> tagger to generate different image tags based on environmental information like <code>$USER</code>.</p></li> <li><p>You can use <a href="https://skaffold.dev/docs/how-tos/profiles/" rel="nofollow noreferrer">Skaffold's <em>profiles</em></a> and <a href="https://skaffold.dev/docs/how-tos/profiles/#override-via-patches" rel="nofollow noreferrer">patches</a> to override images. Creating the right JSON patches can feel a bit hit or miss, but it's pretty powerful.</p></li> </ol> <p>I personally use the <code>--default-repo</code> approach.</p>
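<p>For example, with the <code>--default-repo</code> approach each developer can point the same <code>skaffold.yaml</code> at their own registry (the repository name below is just an illustration):</p> <pre class="lang-bash prettyprint-override"><code># Rewrites the repository portion of every image ref in skaffold.yaml
skaffold dev --default-repo gcr.io/my-dev-project

# The same setting can be supplied via an environment variable
export SKAFFOLD_DEFAULT_REPO=gcr.io/my-dev-project
skaffold dev
</code></pre>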
Brian de Alwis
<p>I'm trying to follow this guide: <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html</a> and was able to get as far as running <code>curl -u &quot;elastic:$PASSWORD&quot; -k &quot;https://localhost:9200&quot;</code> successfully. However, I tried running it again from step 1 on and am now unable to get past <code>kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'</code> and I get: <code>No resources found in default namespace.</code></p> <p>I thought cleaning up all the ECK stuff following this <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-uninstalling-eck.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-uninstalling-eck.html</a> and restarting would fix the issue, but it doesn't seem to. For example, I have a manifest file that I named <code>elasticsearch.yaml</code>:</p> <pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: name: my-elasticsearch spec: version: 7.12.1 nodeSets: - name: default count: 1 config: node.store.allow_mmap: false </code></pre> <p>and for some reason, I'm seeing this when running <code>kubectl get pods</code>:</p> <pre><code>my-elasticsearch-depl-6d6f76dd64-4v5q2 0/1 ImagePullBackOff 0 5m40s my-elasticsearch-depl-7fcfc47f59-sprsv 0/1 ImagePullBackOff 0 11h </code></pre> <p>When I try deleting either one of them with <code>kubectl delete pod my-elasticsearch-depl-6d6f76dd64-4v5q2 </code>, a new one gets automatically generated as well.</p> <p>I was wondering how to first get rid of these weird zombie pods and then how I might be able to get the basic ECK setup running. Thank you!</p>
reactor
<ol> <li>If you rename <code>metadata.name: quickstart</code> to <code>my-elasticsearch</code>, then you'll need to rename it in <code>kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'</code> as well.</li> <li>Just deleting a pod won't get rid of it while the operator is still running. Have you really run <code>kubectl delete elastic --all</code> to remove all resources created by the Elastic operator (and if you have been switching around namespaces, in the right namespace(s))?</li> </ol>
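<p>For example (the cluster name below matches the question's manifest and is illustrative):</p> <pre class="lang-bash prettyprint-override"><code># List everything the ECK operator still manages, across namespaces
kubectl get elastic --all-namespaces

# After renaming the cluster, the selector has to use the new name
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=my-elasticsearch'

# Remove all Elastic resources in the current namespace before reinstalling
kubectl delete elastic --all
</code></pre>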
xeraa
<p>I am currently attempting to get the logs from my Kubernetes cluster to an external ElasticSearch/Kibana. So far I have used <a href="https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml" rel="nofollow noreferrer">this</a> DaemonSet deployment to get Filebeat running and piping to my external server, but I am unable to figure out how to set the index to something meaningful. <a href="https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html" rel="nofollow noreferrer">This</a> documentation page tells me that I need to create an index key in the output.elasticsearch section, but I don't know what to put in the value.</p> <p>My desired output format would be something along the lines of <code>&lt;cluster-name&gt;-&lt;namespace&gt;-&lt;pod name&gt;</code></p> <p>e.g.: <code>devKube-frontend-publicAPI-123abc</code></p>
deef0000dragon1
<p>Precondition: You have enabled <a href="https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html" rel="nofollow noreferrer"><code>add_kubernetes_metadata: ~</code></a>.</p> <p>Then you can use that metadata in the index name like this:</p> <pre><code>output.elasticsearch: index: "%{[kubernetes.namespace]:filebeat}-%{[beat.version]}-%{+yyyy.MM.dd}" </code></pre> <ul> <li><code>%{[kubernetes.namespace]:filebeat}</code>: Use the Kubernetes namespace or if there is none fall back to <code>filebeat</code>.</li> <li>Adding <code>%{[beat.version]}</code> is highly recommended for the scenario when you upgrade Filebeat and there is a breaking change in the mapping. This should be limited to major version changes (if at all), but is an issue you can easily avoid with this setting.</li> <li>A time based index pattern like <code>%{+yyyy.MM.dd}</code> or even better would be an <a href="https://www.elastic.co/guide/en/beats/filebeat/current/ilm.html" rel="nofollow noreferrer">ILM policy</a> to have evenly and properly sized shards.</li> </ul> <p>PS: You have the Pod name and other details in fields from <code>add_kubernetes_metadata: ~</code>. I would be careful not to cut the indices into tiny pieces, since every shard has a certain amount of overhead. The default Filebeat ILM policy is 50GB per shard — if your shards are smaller than 10GB you will most likely run into issues at some point. Leave the indices a bit more coarse grained and just use a filter for a specific Pod instead.</p>
xeraa
<p><strong>Situation</strong></p> <p>At work, we have been using Kubernetes Service Mesh v1.22 for the past year, and all was fine. We lately debuted a 2nd environment, this time running v1.25. This introduced several security related changes that we had to overcome, but that is a separate issue.</p> <p><strong>Problem</strong></p> <p>My problem is that a script that works perfectly on 1.22 isn't working on 1.25. The script is ran on your local, logins into the 1.22 instance, kubectl execs -it into a pod, opens an interactive session, and the terminal session remains open. You are now free to navigate around the pod to your hearts content. The purpose of the script is to go straight from local terminal to pod, bypassing all the tedious steps to get there manually.</p> <p>I run that same script in 1.25 environment and I get the following error: &quot;unable to use a tty - input is not a terminal or the right kind of file&quot;. If I perform these steps manually (login to environment via password, kubectl exec -it into pod), everything is fine. I can console in via /bin/sh successfully. I just can't do it via this script from my local.</p> <p><strong>Code</strong></p> <pre><code>sshpass -p $password ssh -t $kubernetes-login &quot;pwd &amp;&amp; echo '$password' | sudo -S kubectl exec -it $pod -c $container -n $namespace -- /bin/sh&quot; </code></pre> <p><strong>Troubleshooting</strong></p> <p>-A previous <a href="https://stackoverflow.com/questions/65915849/understanding-stdin-true-tty-true-on-a-kubernetes-container">stack overflow thread</a> said to edit the helm chart under spec/container so that tty and stdin: true. Got same error</p> <p>-Instead of 'sudo -S kubectl exec -it', I try using 'sudo -S kubectl exec --stdin=true --tty'. got same error</p> <p><strong>Thoughts</strong></p> <p>-Could the fact that v1.25 is forcing sudo basically at all times and v1.22 didn't mean something? Maybe TTY has conflicts? Again, when I do these steps manually, including using the mandatory sudo, things are fine</p> <p>-The syntax of the code is correct. It's been thoroughly tested and used in other scripts, so the only problem segment is the -t after exec -i. Simply doing exec -i and replacing 'ls' for /bin/sh will show you the contents of the pod just fine. Just no interactive sessions.</p> <p>-The pwd is proof that you logged into the environment correctly and password got acknowledged.</p>
Michael Norton
<pre><code>sshpass -p $password ssh -t $kubernetes-login &quot;pwd &amp;&amp; echo '$password' | sudo -S kubectl exec -it $pod -c $container -n $namespace -- /bin/sh&quot;
</code></pre> <p>The input for the <code>kubectl</code> command is the pipe (<code>|</code>) from the <code>echo</code>, even though the content of that pipe is processed by sudo. That pipe is not a tty, so you can't format the command that way. You're going to need a different way of getting the password into <code>sudo</code> so the input isn't changed to the pipe.</p> <p>You can tell sudo to request the password from a helper program instead of stdin, using the askpass mechanism:</p> <pre><code>SUDO_ASKPASS=/path/to/passwd sudo -A kubectl exec -it $pod -c $container -n $namespace -- /bin/sh
</code></pre> <p>The <code>-A</code> flag makes sudo run the helper named by the <code>SUDO_ASKPASS</code> environment variable (or the <code>askpass</code> setting in <code>sudo.conf</code>); that helper must be an executable that prints the password to stdout, so you may need to make the password injection a small script.</p> <p>Another option, if sudo remembers a previous auth for some timeout, is to split the commands like:</p> <pre><code>sshpass -p $password ssh -t $kubernetes-login &quot;pwd &amp;&amp; (echo '$password' | sudo -S true) &amp;&amp; sudo kubectl exec -it $pod -c $container -n $namespace -- /bin/sh&quot;
</code></pre>
BMitch
<p>I'm having trouble deleting custom resource definition. I'm trying to upgrade kubeless from v1.0.0-alpha.7 to <strong>v1.0.0-alpha.8</strong>.</p> <p>I tried to remove all the created custom resources by doing </p> <pre class="lang-sh prettyprint-override"><code>$ kubectl delete -f kubeless-v1.0.0-alpha.7.yaml deployment "kubeless-controller-manager" deleted serviceaccount "controller-acct" deleted clusterrole "kubeless-controller-deployer" deleted clusterrolebinding "kubeless-controller-deployer" deleted customresourcedefinition "functions.kubeless.io" deleted customresourcedefinition "httptriggers.kubeless.io" deleted customresourcedefinition "cronjobtriggers.kubeless.io" deleted configmap "kubeless-config" deleted </code></pre> <p>But when I try,</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get customresourcedefinition NAME AGE functions.kubeless.io 21d </code></pre> <p>And because of this when I next try to upgrade by doing, I see,</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl create -f kubeless-v1.0.0-alpha.8.yaml Error from server (AlreadyExists): error when creating "kubeless-v1.0.0-alpha.8.yaml": object is being deleted: customresourcedefinitions.apiextensions.k8s.io "functions.kubeless.io" already exists </code></pre> <p>I think because of this mismatch in the function definition , the hello world example is failing.</p> <pre class="lang-sh prettyprint-override"><code>$ kubeless function deploy hellopy --runtime python2.7 --from-file test.py --handler test.hello INFO[0000] Deploying function... FATA[0000] Failed to deploy hellopy. Received: the server does not allow this method on the requested resource (post functions.kubeless.io) </code></pre> <p>Finally, here is the output of,</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl describe customresourcedefinitions.apiextensions.k8s.io Name: functions.kubeless.io Namespace: Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apiextensions.k8s.io/v1beta1","description":"Kubernetes Native Serverless Framework","kind":"CustomResourceDefinition","metadata":{"anno... API Version: apiextensions.k8s.io/v1beta1 Kind: CustomResourceDefinition Metadata: Creation Timestamp: 2018-08-02T17:22:07Z Deletion Grace Period Seconds: 0 Deletion Timestamp: 2018-08-24T17:15:39Z Finalizers: customresourcecleanup.apiextensions.k8s.io Generation: 1 Resource Version: 99792247 Self Link: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/functions.kubeless.io UID: 951713a6-9678-11e8-bd68-0a34b6111990 Spec: Group: kubeless.io Names: Kind: Function List Kind: FunctionList Plural: functions Singular: function Scope: Namespaced Version: v1beta1 Status: Accepted Names: Kind: Function List Kind: FunctionList Plural: functions Singular: function Conditions: Last Transition Time: 2018-08-02T17:22:07Z Message: no conflicts found Reason: NoConflicts Status: True Type: NamesAccepted Last Transition Time: 2018-08-02T17:22:07Z Message: the initial names have been accepted Reason: InitialNamesAccepted Status: True Type: Established Last Transition Time: 2018-08-23T13:29:45Z Message: CustomResource deletion is in progress Reason: InstanceDeletionInProgress Status: True Type: Terminating Events: &lt;none&gt; </code></pre>
smk
<p>So it turns out , the root cause was that Custom resources with finalizers can &quot;deadlock&quot;. The CustomResource &quot;functions.kubeless.io&quot; had a</p> <pre><code>Finalizers: customresourcecleanup.apiextensions.k8s.io </code></pre> <p>and this is can leave it in a bad state when deleting.</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/60538" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/60538</a></p> <p>I followed the steps mentioned in <a href="https://github.com/kubernetes/kubernetes/issues/60538#issuecomment-369099998" rel="noreferrer">this workaround</a> and it now gets deleted.</p>
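<p>The workaround boils down to clearing the stuck finalizer so the CRD can actually be removed, roughly like this (use with care, since it skips whatever cleanup the finalizer was meant to perform):</p> <pre class="lang-bash prettyprint-override"><code># Remove the finalizer that is blocking deletion; the CRD should then disappear
kubectl patch crd functions.kubeless.io -p '{&quot;metadata&quot;:{&quot;finalizers&quot;:[]}}' --type=merge
</code></pre>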
smk
<p>I have a number of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">Jobs</a> running on k8s. </p> <p>These jobs run a custom agent that copies some files and sets up the environment for a user (trusted) provided container to run. This agent runs on the side of the user container, captures the logs, waits for the container to exit and process the generated results. </p> <p>To achieve this, we mount Docker's socket <code>/var/run/docker.sock</code> and run as a privileged container, and from within the agent, we use <a href="https://github.com/docker/docker-py/tree/master/docker" rel="noreferrer">docker-py</a> to interact with the user container (setup, run, capture logs, terminate).</p> <p>This works almost fine, but I'd consider it a hack. Since the user container was created by calling docker directly on a node, k8s is not aware of it's existence. This has been causing troubles since our monitoring tools interact with K8s, and don't get visibility to these stand-alone user containers. It also makes pod scheduling harder to manage, since the limits (cpu/memory) for the user container are not accounted as the requests for the pod. </p> <p>I'm aware of <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init containers</a> but these don't quite fit this use case, since we want to keep the agent running and monitoring the user container until it completes. </p> <p><em>Is it possible for a container running on a pod, to request Kubernetes to add additional containers to the same pod the agent is running? And if so, can the agent also request Kubernetes to remove the user container at will (e.g. certain custom condition was met)?</em></p>
ButterDog
<p>From <a href="https://github.com/kubernetes/kubernetes/issues/37838#issuecomment-328853094" rel="nofollow noreferrer">this GitHub issue</a>, it seems that the answer is that adding or removing containers to a pod is not possible, since the container list in the pod spec is immutable. </p>
ButterDog
<p>I started learning about Kubernetes and I installed minikube and kubectl on Windows 7.</p> <p>After that I created a pod with command:</p> <pre><code>kubectl run firstpod --image=nginx </code></pre> <p>And everything is fine:</p> <p>[![enter image description here][1]][1]</p> <p>Now I want to go inside the pod with this command: <code>kubectl exec -it firstpod -- /bin/bash</code> but it's not working and I have this error:</p> <pre><code>OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: &quot;C:/Program Files/Git/usr/bin/bash.exe&quot;: stat C:/Program Files/Git/usr/bin/bash.exe: no such file or directory: unknown command terminated with exit code 126 </code></pre> <p>How can I resolve this problem?</p> <p>And another question is about this <code>firstpod</code> pod. With this command <code>kubectl describe pod firstpod</code> I can see information about the pod:</p> <pre><code>Name: firstpod Namespace: default Priority: 0 Node: minikube/192.168.99.100 Start Time: Mon, 08 Nov 2021 16:39:07 +0200 Labels: run=firstpod Annotations: &lt;none&gt; Status: Running IP: 172.17.0.3 IPs: IP: 172.17.0.3 Containers: firstpod: Container ID: docker://59f89dad2ddd6b93ac4aceb2cc0c9082f4ca42620962e4e692e3d6bcb47d4a9e Image: nginx Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: Mon, 08 Nov 2021 16:39:14 +0200 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9b8mx (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-9b8mx: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32m default-scheduler Successfully assigned default/firstpod to minikube Normal Pulling 32m kubelet Pulling image &quot;nginx&quot; Normal Pulled 32m kubelet Successfully pulled image &quot;nginx&quot; in 3.677130128s Normal Created 31m kubelet Created container firstpod Normal Started 31m kubelet Started container firstpod </code></pre> <p>So I can see it is a docker container id and it is started, also there is the image, but if I do <code>docker images</code> or <code>docker ps</code> there is nothing. Where are these images and container? Thank you! [1]: <a href="https://i.stack.imgur.com/xAcMP.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/xAcMP.jpg</a></p>
elvis
<p>One error for certain is gitbash adding Windows the path. You can disable that with a double slash:</p> <pre><code>kubectl exec -it firstpod -- //bin/bash </code></pre> <p>This command will only work if you have bash in the image. If you don't, you'll need to pick a different command to run, e.g. <code>/bin/sh</code>. Some images are distroless or based on scratch to explicitly not include things like shells, which will prevent you from running commands like this (intentionally, for security).</p>
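<p>A couple of variants of the same idea when running from Git Bash on Windows (<code>MSYS_NO_PATHCONV</code> is a Git-for-Windows setting, mentioned here as a possible alternative rather than something specific to kubectl):</p> <pre class="lang-bash prettyprint-override"><code># Double slash stops Git Bash from rewriting the path to a Windows one
kubectl exec -it firstpod -- //bin/sh

# Alternatively, disable path conversion just for this one command
MSYS_NO_PATHCONV=1 kubectl exec -it firstpod -- /bin/sh
</code></pre>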
BMitch
<p>I'm trying to understand why sometimes I update a <code>.php</code> in my project it completely rebuilds the image everytime and other times it doesn't seem to do anything. Actually regarding the latter it says <code>Syncing 1 files for ...</code>, but none of my changes are reflected.</p> <p>This is my project structure:</p> <pre><code>/app /admin /conf app.conf /src /lib lib.php index.php Dockerfile.dev /manifests /dev ingress.yaml admin.yaml skaffold.yaml </code></pre> <p>When I make changes to <code>./admin/conf/app.conf</code> or <code>./admin/src/index.php</code>, I just get the <code>Syncing 1 files for...</code>, but none of the changes are reflected in the application. I have to <code>CTRL+C</code> to kill Skaffold and restart it... just <code>CTRL+S</code> in a <code>.yaml</code> or <code>lib.php</code> to trigger a rebuild.</p> <p>When I make changes to <code>./admin/src/lib/lib.php</code>, it rebuilds the entire image from scratch.</p> <p>Here are my configs:</p> <pre><code># skaffold.yaml apiVersion: skaffold/v1beta15 kind: Config build: local: push: false artifacts: - image: postgres context: postgres docker: dockerfile: Dockerfile.dev sync: manual: - src: "***/*.sql" dest: . - image: testappacr.azurecr.io/test-app-admin context: admin docker: dockerfile: Dockerfile.dev sync: manual: - src: "***/*.php" dest: . - src: "***/*.conf" dest: . - src: "***/*.tbs" dest: . - src: "***/*.css" dest: . - src: "***/*.js" dest: . deploy: kubectl: manifests: - manifests/dev/ingress.yaml - manifests/dev/postgres.yaml - manifests/dev/admin.yaml </code></pre> <pre><code># Dockerfile.dev FROM php:7.3-fpm EXPOSE 4000 COPY . /app WORKDIR /app/src RUN apt-get update \ &amp;&amp; apt-get install -y libpq-dev zlib1g-dev libzip-dev \ &amp;&amp; docker-php-ext-install pgsql zip CMD ["php", "-S", "0.0.0.0:4000"] </code></pre> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: admin-deployment-dev spec: replicas: 1 selector: matchLabels: component: admin template: metadata: labels: component: admin spec: containers: - name: admin image: testappacr.azurecr.io/test-app-admin ports: - containerPort: 4000 env: - name: PGUSER valueFrom: secretKeyRef: name: test-app-dev-secrets key: PGUSER - name: PGHOST value: postgres-cluster-ip-service-dev - name: PGPORT value: "1423" - name: PGDATABASE valueFrom: secretKeyRef: name: test-app-dev-secrets key: PGDATABASE - name: PGPASSWORD valueFrom: secretKeyRef: name: test-app-dev-secrets key: PGPASSWORD - name: SECRET_KEY valueFrom: secretKeyRef: name: test-app-dev-secrets key: SECRET_KEY - name: SENDGRID_API_KEY valueFrom: secretKeyRef: name: test-app-dev-secrets key: SENDGRID_API_KEY - name: DOMAIN valueFrom: secretKeyRef: name: test-app-dev-secrets key: DOMAIN - name: DEBUG valueFrom: secretKeyRef: name: test-app-dev-secrets key: DEBUG # livenessProbe: # tcpSocket: # port: 4000 # initialDelaySeconds: 2 # periodSeconds: 2 # readinessProbe: # tcpSocket: # port: 4000 # initialDelaySeconds: 2 # periodSeconds: 2 volumeMounts: - mountPath: "/docs/" name: file-storage volumes: - name: file-storage persistentVolumeClaim: claimName: file-storage --- apiVersion: v1 kind: Service metadata: name: admin-cluster-ip-service-dev spec: type: ClusterIP selector: component: admin ports: - port: 4000 targetPort: 4000 </code></pre> <p>I guess I'm trying to understand a few things:</p> <ol> <li>Why is a complete rebuild being triggered in one case?</li> <li>Why are files being "Synced", but the changes aren't reflected until I trigger a rebuild?</li> <li>How can I get the 
my changes to reflect in the app without triggering a complete rebuild?</li> </ol> <p>Thanks!</p>
cjones
<p>So there are a few issues. First, your wildcards should be <code>**</code> not <code>***</code>. The globbing library used by Skaffold doesn't recognize <code>***</code> and so it treats it as a literal part of the path name. And since you have no directory literally named <code>***</code>, no sync rules are matched and so your file changes cause the image to be rebuilt.</p> <p>When I correct the wildcards, your setup still didn't work for me.</p> <p>First, I see a warning when I modify the <code>index.php</code>:</p> <pre><code>Syncing 1 files for testappacr.azurecr.io/test-app-admin:4c76dec58e1ef426b89fd44e3b340810db96b6961c5cacfdb76f62c9dc6725b8 WARN[0043] Skipping deploy due to sync error: copying files: didn't sync any files </code></pre> <p>Skaffold by default cuts logs off at the warning level. If I instead run <code>skaffold dev -v info</code> I get some more information:</p> <pre><code>INFO[0011] files modified: [admin/src/index.php] Syncing 1 files for testappacr.azurecr.io/test-app-admin:4c76dec58e1ef426b89fd44e3b340810db96b6961c5cacfdb76f62c9dc6725b8 INFO[0011] Copying files: map[admin/src/index.php:[/app/src/src/index.php]] to testappacr.azurecr.io/test-app-admin:4c76dec58e1ef426b89fd44e3b340810db96b6961c5cacfdb76f62c9dc6725b8 WARN[0011] Skipping deploy due to sync error: copying files: didn't sync any files </code></pre> <p>Note the destination being reported, <code>/app/src/src/index.php</code>. This double <code>src</code> arises as your image's <code>WORKDIR</code> is set to <code>/app/src</code>, and your PHP sync rule is preserving the path under <code>app/admin</code>. You can fix this by amending your <code>skaffold.yaml</code> to strip off the leading <code>src</code>:</p> <pre><code> - src: "src/**/*.php" dest: . strip: src </code></pre> <p>You might need to adjust your other rules too, and note that you can use <code>dest: ..</code> in your rules.</p> <p>(Side note: I was still seeing <code>didn't sync any files</code> error. I was actually running <code>skaffold dev --status-check=false</code> so as to prevent Skaffold from waiting on the deployment status — I figured the deployment would never succeed as I didn't have any valid PHP files. But it turns out the deployments were actually failing because I didn't have your persistent volume claim available, and so the pod failed to start. And since there were no running containers, the files were never synced, and so Skaffold reported that syncing failed. So the moral of the story is that file syncing only works to running containers.)</p>
Brian de Alwis
<p>I followed the instructions in the official repo on installing on kubernetes, however I get a 404 when I try to use the UI. Could anyone tell me what the issue might be?</p> <p>Repo: <a href="https://github.com/apache/incubator-airflow/tree/master/scripts/ci/kubernetes" rel="nofollow noreferrer">https://github.com/apache/incubator-airflow/tree/master/scripts/ci/kubernetes</a></p> <p>To clarify, the instructions I followed were:</p> <ul> <li>Point kubectl to the local minikube cluster (v1.10.0)</li> <li>Clone repo (commit 89c1f530da04088300312ad3cec9fa74c3703176)</li> <li>cd incubator-airflow/scripts/ci/kubernetes</li> <li>./docker/build.sh</li> <li>./kube/deploy.sh</li> </ul> <p><a href="https://i.stack.imgur.com/8wqDa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8wqDa.png" alt="kubectl get pods airflow"></a> <a href="https://i.stack.imgur.com/k0qBv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k0qBv.png" alt="airflow minikube 404"></a></p>
eamon1234
<p>nevermind... I must have missed the memo that the default username/password is airflow/airflow even though I thought that authenticate was set to False.</p> <p>Solution:</p> <p>Go to localhost:8080/login and enter username/password airflow/airflow.</p>
eamon1234
<p>Hoping there is some good insight into how to handle orchestration among microservices in an on-prem, smaller-company environment. Like the rest of the world :), we are currently looking to convert our systems from monolithic to microservices.</p> <p>The problem I'm having as an architect is justifying the big learning curve and server requirements given the resources we have at the moment. I can easily see us having 50-ish microservices, which I feel is right on the line of whether to use Kubernetes or not.</p> <p>The thing is, if we don't use it, how do we monitor everything on-prem? We do use Azure DevOps, so I'm wondering if that would suffice for the deployment parts.</p> <p>Thanks!</p>
gcoleman0828
<p>This comes down to a debate over essential vs accidental complexity. The verdict is in: companies find that k8s strikes a good balance, while Swarm and other orchestrators are barely talked about in the industry.</p> <p><a href="https://www.reactiveops.com/blog/is-kubernetes-overkill" rel="nofollow noreferrer">https://www.reactiveops.com/blog/is-kubernetes-overkill</a></p> <p>Platforms that build on Kubernetes to offer a simpler interface for those wanting a higher level of abstraction are still emerging, but aren't mature enough yet. GKE offers a very easy way to just deal with workloads; AKS is still maturing, so you will likely face some bugs, but it is tightly integrated with Azure DevOps.</p> <p>Microsoft is all-in on k8s, although their on-prem offering doesn't seem fully fledged yet. GKE On-Prem and OpenShift 4.1 offer fully managed on-prem (if using vSphere) for a list price of $1200/core/year. <a href="https://nedinthecloud.com/2019/02/19/azure-stack-kubernetes-cluster-is-not-aks/" rel="nofollow noreferrer">https://nedinthecloud.com/2019/02/19/azure-stack-kubernetes-cluster-is-not-aks/</a></p> <p>Other ways of deploying on-prem are emerging, so long as you're comfortable with managing the compute, storage and network yourself. Installing and upgrading are becoming easier (see e.g. <a href="https://github.com/kubermatic/kubeone" rel="nofollow noreferrer">https://github.com/kubermatic/kubeone</a>, which builds on the cluster-api abstraction). For bare metal, ambitious projects like Talos are making k8s-specific immutable OSes (<a href="https://github.com/talos-systems/talos" rel="nofollow noreferrer">https://github.com/talos-systems/talos</a>).</p> <p>AWS is still holding out hope for lock-in with ECS and Fargate, but it remains to be seen if that will succeed.</p>
eamon1234
<p>There are two kubelet nodes and each kubelet node contains several containers including server with wildfly. Even though I do not define containerPort &lt;&gt; hostPort, the management console can be reached with port 9990 from outside. I do not have any clue, why?</p> <pre><code>- name: server image: registry/server:develop-latest ports: - name: server-https containerPort: 8443 hostPort: 8443 </code></pre> <p>In docker container inspect &lt;container-id&gt; I see:</p> <pre><code>&quot;ExposedPorts&quot;: { &quot;9990/tcp&quot;: {}, ... </code></pre> <p>So,</p> <ul> <li>Why container port 9990 is exposed? and</li> <li>Why containerPort 9990 is mapped to hostPort and I can reach the port 9990 from outside?</li> </ul>
m19v
<p>You can expose the port in two places: when you run the container, and when you build the image. Typically you only do the latter, since exposing the port is documentation of what ports are likely listening for connections inside the container (it doesn't have any effect on networking).</p> <p>To see if the port was exposed at build time, you can run:</p> <pre class="lang-bash prettyprint-override"><code>docker image inspect registry/server:develop-latest
</code></pre> <p>And if that port wasn't exposed in your build, then it was likely exposed in your base image.</p>
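<p>For example, to print only the exposed ports recorded in the image config:</p> <pre class="lang-bash prettyprint-override"><code># Shows entries such as {&quot;8443/tcp&quot;:{},&quot;9990/tcp&quot;:{}}
docker image inspect --format '{{json .Config.ExposedPorts}}' registry/server:develop-latest
</code></pre>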
BMitch
<p>I am using GKE with istio add-on enabled. Myapp somehow gives 503 errors using when using websocket. I am starting to think that maybe the websocket is working but the database connection is not and that causes 503's, as the cloudsql-proxy logs give errors:</p> <pre><code>$ kubectl logs myapp-54d6696fb4-bmp5m cloudsql-proxy 2019/01/04 21:56:47 using credential file for authentication; [email protected] 2019/01/04 21:56:47 Listening on 127.0.0.1:5432 for myproject:europe-west4:mydatabase 2019/01/04 21:56:47 Ready for new connections 2019/01/04 21:56:51 New connection for "myproject:europe-west4:mydatabase" 2019/01/04 21:56:51 couldn't connect to "myproject:europe-west4:mydatabase": Post https://www.googleapis.com/sql/v1beta4/projects/myproject/instances/mydatabase/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: read tcp 10.44.11.21:60728-&gt;108.177.126.95:443: read: connection reset by peer 2019/01/04 22:14:56 New connection for "myproject:europe-west4:mydatabase" 2019/01/04 22:14:56 couldn't connect to "myproject:europe-west4:mydatabase": Post https://www.googleapis.com/sql/v1beta4/projects/myproject/instances/mydatabase/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: read tcp 10.44.11.21:36734-&gt;108.177.127.95:443: read: connection reset by peer </code></pre> <p>Looks like the required authentication details should be in the credentials of the proxy service account I created and thus is provided for:</p> <pre><code>{ "type": "service_account", "project_id": "myproject", "private_key_id": "myprivekeyid", "private_key": "-----BEGIN PRIVATE KEY-----\MYPRIVATEKEY-----END PRIVATE KEY-----\n", "client_email": "[email protected]", "client_id": "myclientid", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/proxy-user%40myproject.iam.gserviceaccount.com" } </code></pre> <p>My question: How do I get rid of the errors/ get a proper google sql config from GKE?</p> <p>At cluster creation I selected the mTLS 'permissive' option.</p> <p>My config: myapp_and_router.yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: myapp labels: app: myapp spec: ports: - port: 8089 # 'name: http' apparently does not work name: db selector: app: myapp --- apiVersion: apps/v1 kind: Deployment metadata: name: myapp labels: app: myapp spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: gcr.io/myproject/firstapp:v1 imagePullPolicy: Always ports: - containerPort: 8089 env: - name: POSTGRES_DB_HOST value: 127.0.0.1:5432 - name: POSTGRES_DB_USER valueFrom: secretKeyRef: name: mysecret key: username - name: POSTGRES_DB_PASSWORD valueFrom: secretKeyRef: name: mysecret key: password ## Custom healthcheck for Ingress readinessProbe: httpGet: path: /healthz scheme: HTTP port: 8089 initialDelaySeconds: 5 timeoutSeconds: 5 livenessProbe: httpGet: path: /healthz scheme: HTTP port: 8089 initialDelaySeconds: 5 timeoutSeconds: 20 - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=myproject:europe-west4:mydatabase=tcp:5432", "-credential_file=/secrets/cloudsql/credentials.json"] securityContext: runAsUser: 2 allowPrivilegeEscalation: false volumeMounts: - name: cloudsql-instance-credentials 
mountPath: /secrets/cloudsql readOnly: true volumes: - name: cloudsql-instance-credentials secret: secretName: cloudsql-instance-credentials --- ########################################################################### # Ingress resource (gateway) ########################################################################## apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: myapp-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 # 'name: http' apparently does not work name: db protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: myapp spec: hosts: - "*" gateways: - myapp-gateway http: - match: - uri: prefix: / route: - destination: host: myapp weight: 100 websocketUpgrade: true --- </code></pre> <p>EDIT 1: I had not enabled permissions (scopes) for the various google services when creating the cluster, see <a href="https://stackoverflow.com/questions/54145787/permissions-on-gke-cluster">here</a>. After creating a new cluster with the permissions I now get a new errormessage:</p> <pre><code>kubectl logs mypod cloudsql-proxy 2019/01/11 20:39:58 using credential file for authentication; [email protected] 2019/01/11 20:39:58 Listening on 127.0.0.1:5432 for myproject:europe-west4:mydatabase 2019/01/11 20:39:58 Ready for new connections 2019/01/11 20:40:12 New connection for "myproject:europe-west4:mydatabase" 2019/01/11 20:40:12 couldn't connect to "myproject:europe-west4:mydatabase": Post https://www.googleapis.com/sql/v1beta4/projects/myproject/instances/mydatabase/createEphemeral?alt=json: oauth2: cannot fetch token: 400 Bad Request Response: { "error": "invalid_grant", "error_description": "Invalid JWT Signature." } </code></pre> <p>EDIT 2: Looks like new error was caused by the Service Accounts keys no longer being valid. After making new ones I can connect to the database!</p>
musicformellons
<p>I saw similar errors but was able to get cloudsql-proxy working in my istio cluster on GKE by creating the following service entries (with some help from <a href="https://github.com/istio/istio/issues/6593#issuecomment-420591213" rel="nofollow noreferrer">https://github.com/istio/istio/issues/6593#issuecomment-420591213</a>):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: google-apis spec: hosts: - "*.googleapis.com" ports: - name: https number: 443 protocol: HTTPS --- apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: cloudsql-instances spec: hosts: # Use `gcloud sql instances list` to get the addresses of instances - 35.226.125.82 ports: - name: tcp number: 3307 protocol: TCP </code></pre> <p>Also, I still saw those connection errors during initialization until I added a delay in my app startup (<code>sleep 10</code> before running server) to give the istio-proxy and cloudsql-proxy containers time to get set up first. </p> <p>EDIT 1: Here are logs with the errors, then the successful "New connection/Client closed" lines once things are working:</p> <pre><code>2019/01/10 21:54:38 New connection for "my-project:us-central1:my-db" 2019/01/10 21:54:38 Throttling refreshCfg(my-project:us-central1:my-db): it was only called 44.445553175s ago 2019/01/10 21:54:38 couldn't connect to "my-project:us-central1:my-db": Post https://www.googleapis.com/sql/v1beta4/projects/my-project/instances/my-db/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp 108.177.112.84:443: getsockopt: connection refused 2019/01/10 21:54:38 New connection for "my-project:us-central1:my-db" 2019/01/10 21:54:38 Throttling refreshCfg(my-project:us-central1:my-db): it was only called 44.574562959s ago 2019/01/10 21:54:38 couldn't connect to "my-project:us-central1:my-db": Post https://www.googleapis.com/sql/v1beta4/projects/my-project/instances/my-db/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp 108.177.112.84:443: getsockopt: connection refused 2019/01/10 21:55:15 New connection for "my-project:us-central1:my-db" 2019/01/10 21:55:16 Client closed local connection on 127.0.0.1:5432 2019/01/10 21:55:17 New connection for "my-project:us-central1:my-db" 2019/01/10 21:55:17 New connection for "my-project:us-central1:my-db" 2019/01/10 21:55:27 Client closed local connection on 127.0.0.1:5432 2019/01/10 21:55:28 New connection for "my-project:us-central1:my-db" 2019/01/10 21:55:30 Client closed local connection on 127.0.0.1:5432 2019/01/10 21:55:37 Client closed local connection on 127.0.0.1:5432 2019/01/10 21:55:38 New connection for "my-project:us-central1:my-db" 2019/01/10 21:55:40 Client closed local connection on 127.0.0.1:5432 </code></pre> <p>EDIT 2: Ensure that Cloud SQL api is within scope of your cluster.</p>
gsf
<p>We are starting a project from scratch that will be managed on Google Cloud Services. I'd like to use Google Kubernetes Engine. Our application will have multiple environments (Dev, Staging, Production). Each environment is setup as a new Project on Google Cloud.</p> <p>What is unclear to me is how to parameterize our service/manifest files. For instance our deploy file below, anything in <code>{}</code> I'd like to pull from a list of variables per environment. In a previous post someone mentioned using Helm, but I cannot find much documentation supporting the use of helm this way.</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: webapp spec: replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxSurge: {max-surge} maxUnavailable: 0 selector: matchLabels: run: webapp template: metadata: labels: run: webapp spec: containers: - name: webapp image: {gcr-image-url} imagePullPolicy: Always ports: - containerPort: 3000 env: - name: DATABASE_URL valueFrom: secretKeyRef: name: app-secrets key: DATABASE_URL - name: SECRET_KEY_BASE valueFrom: secretKeyRef: name: app-secrets key: SECRET_KEY_BASE </code></pre> <p>What tools are available to manage my GKE environments? We'll use terraform for our infrastructure management, but again is there a larger wrapper I can use to set parameters per environment?</p>
hummmingbear
<p>Helm would work for this, as would kustomize. In the case of helm, you'll have separate values.yaml files (e.g. dev-values.yaml) with e.g.:</p> <pre><code>max-surge: 2 gcr-image-url: project-23456/test </code></pre> <p>And then reference them in the yaml. Note that keys containing hyphens can't be accessed with dot notation in Go templates, so use the <code>index</code> function (or use camelCase keys instead):</p> <pre><code>{{ index .Values "max-surge" }} </code></pre> <p>Then, when installing, you would use <code>helm upgrade --install my-app . --values=dev-values.yaml</code></p>
eamon1234
<p>I'm trying to install Kubernetes on CentOS 7.7, therefore, I have to install docker first. I followed <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker" rel="nofollow noreferrer">Kubernetes Documentation</a> to install docker-ce and modify daemon.json file.</p> <pre><code>$ yum install yum-utils device-mapper-persistent-data lvm2 $ yum-config-manager --add-repo \ https://download.docker.com/linux/centos/docker-ce.repo $ yum update &amp;&amp; yum install \ containerd.io-1.2.10 \ docker-ce-19.03.4 \ docker-ce-cli-19.03.4 $ mkdir /etc/docker $ cat &gt; /etc/docker/daemon.json &lt;&lt;EOF { &quot;exec-opts&quot;: [&quot;native.cgroupdriver=systemd&quot;], &quot;log-driver&quot;: &quot;json-file&quot;, &quot;log-opts&quot;: { &quot;max-size&quot;: &quot;100m&quot; }, &quot;storage-driver&quot;: &quot;overlay2&quot;, &quot;storage-opts&quot;: [ &quot;overlay2.override_kernel_check=true&quot; ] } EOF $ mkdir -p /etc/systemd/system/docker.service.d $ systemctl daemon-reload $ systemctl start docker </code></pre> <p>When started docker service, it said:</p> <pre><code>Job for docker.service failed because the control process exited with error code. See &quot;systemctl status docker.service&quot; and &quot;journalctl -xe&quot; for details. </code></pre> <pre><code>$ systemctl status -l docker.service ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled) Active: failed (Result: start-limit) since Tue 2020-01-07 14:44:11 UTC; 7min ago Docs: https://docs.docker.com Process: 9879 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE) Main PID: 9879 (code=exited, status=1/FAILURE) Jan 07 14:44:09 love61y2222c.mylabserver.com systemd[1]: Failed to start Docker Application Container Engine. Jan 07 14:44:09 love61y2222c.mylabserver.com systemd[1]: Unit docker.service entered failed state. Jan 07 14:44:09 love61y2222c.mylabserver.com systemd[1]: docker.service failed. Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: docker.service holdoff time over, scheduling restart. Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: Stopped Docker Application Container Engine. Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: start request repeated too quickly for docker.service Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: Failed to start Docker Application Container Engine. Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: Unit docker.service entered failed state. Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: docker.service failed. </code></pre> <pre><code>$ journalctl -xe . . -- Unit docker.service has begun starting up. 
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.722780008Z&quot; level=info msg=&quot;Starting up&quot; Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.728447514Z&quot; level=info msg=&quot;parsed scheme: \&quot;unix\&quot;&quot; module=grpc Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.728479813Z&quot; level=info msg=&quot;scheme \&quot;unix\&quot; not registered, fallback to default scheme&quot; module= Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.728510943Z&quot; level=info msg=&quot;ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/ Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.728526075Z&quot; level=info msg=&quot;ClientConn switching balancer to \&quot;pick_first\&quot;&quot; module=grpc Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.732325726Z&quot; level=info msg=&quot;parsed scheme: \&quot;unix\&quot;&quot; module=grpc Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.733844225Z&quot; level=info msg=&quot;scheme \&quot;unix\&quot; not registered, fallback to default scheme&quot; module= Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.733880664Z&quot; level=info msg=&quot;ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/ Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.733898044Z&quot; level=info msg=&quot;ClientConn switching balancer to \&quot;pick_first\&quot;&quot; module=grpc Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time=&quot;2020-01-07T15:28:25.743421350Z&quot; level=warning msg=&quot;Using pre-4.0.0 kernel for overlay2, mount failures may require Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: failed to start daemon: error initializing graphdriver: overlay2: the backing xfs filesystem is formatted without d_type Jan 07 15:28:25 love61y2223c.mylabserver.com systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Jan 07 15:28:25 love61y2223c.mylabserver.com systemd[1]: Failed to start Docker Application Container Engine. -- Subject: Unit docker.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit docker.service has failed. -- -- The result is failed. Jan 07 15:28:25 love61y2223c.mylabserver.com systemd[1]: Unit docker.service entered failed state. Jan 07 15:28:25 love61y2223c.mylabserver.com systemd[1]: docker.service failed. </code></pre> <p>Could anyone tell me why docker service start failed after modifying daemon.json file? And how to specify <code>cgroupdriver</code>, default <code>log-driver</code> and default <code>storage-driver</code> in the right way?</p> <p>Any suggestion will be greatly appreciated. Thanks.</p>
tan
<p>This error is pointing to an issue forcing docker to use overlay2 without the proper backing filesystem:</p> <pre><code>failed to start daemon: error initializing graphdriver: overlay2: the backing xfs filesystem is formatted without d_type </code></pre> <p>See docker's table for details on backing filesystem requirements for the different storage drivers: <a href="https://docs.docker.com/storage/storagedriver/#supported-backing-filesystems" rel="nofollow noreferrer">https://docs.docker.com/storage/storagedriver/#supported-backing-filesystems</a></p> <p>The fix is to remove the storage driver settings, or fix the backing filesystem with the needed options to support overlay2:</p> <pre><code> { "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" } } </code></pre> <p>For details on changing the xfs options, that appears to require rebuilding the filesystem. See <a href="https://superuser.com/a/1321963/587488">this answer</a> for more details on the needed steps.</p>
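<p>A quick way to confirm this is the problem (a small sketch; it assumes docker's data root is the default <code>/var/lib/docker</code>) is to check the xfs <code>ftype</code> setting of the backing filesystem:</p> <pre><code># look for "ftype=1" in the "naming" line; ftype=0 means d_type is not supported
xfs_info /var/lib/docker

# if it reports ftype=0, the filesystem needs to be recreated with, e.g.:
#   mkfs.xfs -n ftype=1 /dev/&lt;device&gt;
</code></pre>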
BMitch
<p>I'm using Airflow with kubernetes executor and the <code>KubernetesPodOperator</code>. I have two jobs:</p> <ul> <li>A: Retrieve data from some source up to 100MB</li> <li>B: Analyze the data from A.</li> </ul> <p>In order to be able to share the data between the jobs, I would like to run them on the same pod, and then A will write the data to a volume, and B will read the data from the volume.</p> <p>The <a href="https://airflow.apache.org/kubernetes.html" rel="nofollow noreferrer">documentation</a> states: </p> <blockquote> <p>The Kubernetes executor will create a new pod for every task instance.</p> </blockquote> <p>Is there any way to achieve this? And if not, what recommended way there is to pass the data between the jobs?</p>
matanper
<p>Sorry, this isn't possible - one job per pod.</p> <p>Your best option is to have task 1 put the data in a well-known location (e.g. a cloud bucket) and fetch it from the second task. Or just combine the two tasks.</p>
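<p>As a rough sketch of the bucket approach (the bucket name, object key, and use of the <code>google-cloud-storage</code> client here are assumptions, not something Airflow provides for you):</p> <pre><code># Illustrative only: code each task's image could run to exchange data via GCS.
from google.cloud import storage

BUCKET = "my-airflow-exchange-bucket"      # hypothetical bucket name
OBJECT = "runs/{run_id}/data.bin"          # object key both tasks agree on

def task_a_write(run_id: str, data: bytes) -> None:
    # Task A: upload the retrieved data to the shared bucket.
    blob = storage.Client().bucket(BUCKET).blob(OBJECT.format(run_id=run_id))
    blob.upload_from_string(data)

def task_b_read(run_id: str) -> bytes:
    # Task B: download the data written by task A and analyze it.
    blob = storage.Client().bucket(BUCKET).blob(OBJECT.format(run_id=run_id))
    return blob.download_as_bytes()
</code></pre>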
eamon1234
<p>I decided to use the rootless version of Buildkit to build and push Docker images to a GCR (Google Container Registry) from within a container in Kubernetes.</p> <p>I stumbled upon this error:</p> <pre><code>/moby.buildkit.v1.Control/Solve returned error: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to mount /home/user/.local/tmp/buildkit-mount859701112: [{Type:bind Source:/home/user/.local/share/buildkit/runc-native/snapshots/snapshots/2 Options:[rbind ro]}]: operation not permitted </code></pre> <p>I am running <code>buildkitd</code> as a <code>deployment</code> linked to a <code>service</code> as specified by the <a href="https://github.com/moby/buildkit/tree/master/examples/kubernetes#deployment--service" rel="nofollow noreferrer">buildkit documentation</a> Those resources are ran inside a Kubernetes Cluster hosted on the Google Kubernetes Engine.</p> <p>I am using the following YAML for the Deployment and Service</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: buildkitd name: buildkitd spec: replicas: 1 selector: matchLabels: app: buildkitd template: metadata: labels: app: buildkitd annotations: container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined spec: containers: - name: buildkitd image: moby/buildkit:master-rootless args: - --addr - unix:///run/user/1000/buildkit/buildkitd.sock - --addr - tcp://0.0.0.0:1234 - --oci-worker-no-process-sandbox readinessProbe: exec: command: - buildctl - debug - workers initialDelaySeconds: 5 periodSeconds: 30 livenessProbe: exec: command: - buildctl - debug - workers initialDelaySeconds: 5 periodSeconds: 30 securityContext: runAsUser: 1000 runAsGroup: 1000 ports: - containerPort: 1234 --- apiVersion: v1 kind: Service metadata: labels: app: buildkitd name: buildkitd spec: ports: - port: 1234 protocol: TCP selector: app: buildkitd </code></pre> <p>It is the same as <a href="https://github.com/moby/buildkit/blob/master/examples/kubernetes/deployment%2Bservice.rootless.yaml" rel="nofollow noreferrer">buildkit documentation</a>'s without the TLS certificates setup.</p> <p>From another Pod, I then contact the Buildkit Daemon using the following command:</p> <pre><code>./bin/buildctl \ --addr tcp://buildkitd:1234 \ build \ --frontend=dockerfile.v0 \ --local context=. \ --local dockerfile=. 
\ --output type=image,name=eu.gcr.io/$PROJECT_ID/test-image,push=true </code></pre> <p>The <code>buildkitd</code> container successfuly receives the request but throws the error above.</p> <p>The output of the <code>buildctl</code> command is the following:</p> <pre><code>#1 [internal] load .dockerignore #1 transferring context: 2B done #1 DONE 0.1s #2 [internal] load build definition from Dockerfile #2 transferring dockerfile: 120B done #2 DONE 0.1s error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: failed to mount /home/user/.local/tmp/buildkit-mount859701112: [{Type:bind Source:/home/user/.local/share/buildkit/runc-native/snapshots/snapshots/2 Options:[rbind ro]}]: operation not permitted </code></pre> <p>Which is the error from the daemon.</p> <p>What strikes me is that I am able to containerise <code>buildkitd</code> inside a <code>minikube</code> cluster using the exact same YAML file as such:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/buildkitd-5b46d94f5d-xvnbv 1/1 Running 0 36m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/buildkitd ClusterIP 10.100.72.194 &lt;none&gt; 1234/TCP 36m service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 36m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/buildkitd 1/1 1 1 36m NAME DESIRED CURRENT READY AGE replicaset.apps/buildkitd-5b46d94f5d 1 1 1 36m </code></pre> <p>I deploy the service and deployment inside <code>minikube</code> and forward the service port using the following command to be able to access the deployment outside <code>minikube</code>.</p> <pre><code>kubectl port-forward service/buildkitd 2000:1234 </code></pre> <p>And with that setup I am able to execute my <code>buildctl</code> command without any issue (Image building and push to GCR).</p> <p>I wish to understand why it works on <code>minikube</code> and not on the Google Kubernetes Engine.</p> <p>Here is the container startup log if that is of any help</p> <pre><code>auto snapshotter: using native NoProcessSandbox is enabled. Note that NoProcessSandbox allows build containers to kill (and potentially ptrace) an arbitrary process in the BuildKit host namespace. NoProcessSandbox should be enabled only when the BuildKit is running in a container as an unprivileged user. found worker \&quot;wdukby0uwmjyvf2ngj4e71s4m\&quot;, labels=map[org.mobyproject.buildkit.worker.executor:oci org.mobyproject.buildkit.worker.hostname:buildkitd-5b46d94f5d-xvnbv org.mobyproject.buildkit.worker.snapshotter:native], platforms=[linux/amd64 linux/386]&quot; rootless mode is not supported for containerd workers. disabling containerd worker. found 1 workers, default=\&quot;wdukby0uwmjyvf2ngj4e71s4m\&quot; currently, only the default worker can be used. TLS is not enabled for tcp://0.0.0.0:1234. enabling mutual TLS authentication is highly recommended running server on /run/user/1000/buildkit/buildkitd.sock running server on [::]:1234 </code></pre>
Diego ROJAS
<p>Rootless requires various preparation steps to be performed on the host (this would need to be done outside of Kubernetes on the VM host running the kubernetes node). See the <a href="https://docs.docker.com/engine/security/rootless/" rel="nofollow noreferrer">rootless documentation</a> for a full list of steps. Note that these steps vary by Linux distribution because different distributions have already performed some or all of these prerequisite steps.</p> <blockquote> <p>Ubuntu</p> <ul> <li><p>No preparation is needed.</p> </li> <li><p>overlay2 storage driver is enabled by default (Ubuntu-specific kernel patch).</p> </li> <li><p>Known to work on Ubuntu 16.04, 18.04, and 20.04.</p> </li> </ul> <p>Debian GNU/Linux</p> <ul> <li><p>Add kernel.unprivileged_userns_clone=1 to /etc/sysctl.conf (or /etc/sysctl.d) and run sudo sysctl --system.</p> </li> <li><p>To use the overlay2 storage driver (recommended), run sudo modprobe overlay permit_mounts_in_userns=1 (Debian-specific kernel patch, introduced in Debian 10). Add the configuration to /etc/modprobe.d for persistence.</p> </li> <li><p>Known to work on Debian 9 and 10. overlay2 is only supported since Debian 10 and needs modprobe configuration described above.</p> </li> </ul> <p>Arch Linux</p> <ul> <li>Add kernel.unprivileged_userns_clone=1 to /etc/sysctl.conf (or /etc/sysctl.d) and run sudo sysctl --system</li> </ul> <p>openSUSE</p> <ul> <li><p>sudo modprobe ip_tables iptable_mangle iptable_nat iptable_filter is required. This might be required on other distros as well depending on the configuration.</p> </li> <li><p>Known to work on openSUSE 15.</p> </li> </ul> <p>Fedora 31 and later</p> <ul> <li><p>Fedora 31 uses cgroup v2 by default, which is not yet supported by the containerd runtime. Run sudo grubby --update-kernel=ALL --args=&quot;systemd.unified_cgroup_hierarchy=0&quot; to use cgroup v1.</p> </li> <li><p>You might need sudo dnf install -y iptables.</p> </li> </ul> <p>CentOS 8</p> <ul> <li>You might need sudo dnf install -y iptables.</li> </ul> <p>CentOS 7</p> <ul> <li><p>Add user.max_user_namespaces=28633 to /etc/sysctl.conf (or /etc/sysctl.d) and run sudo sysctl --system.</p> </li> <li><p>systemctl --user does not work by default. Run the daemon directly without systemd: dockerd-rootless.sh --experimental --storage-driver vfs</p> </li> <li><p>Known to work on CentOS 7.7. Older releases require additional configuration steps.</p> </li> <li><p>CentOS 7.6 and older releases require COPR package vbatts/shadow-utils-newxidmap to be installed.</p> </li> <li><p>CentOS 7.5 and older releases require running sudo grubby --update-kernel=ALL --args=&quot;user_namespace.enable=1&quot; and a reboot following this.</p> </li> </ul> </blockquote>
BMitch
<p>When I change the index.js file inside the auth directory, skaffold gets stuck on <strong>watching for changes...</strong> I restarted it, but it gets stuck every time I make a change.</p> <p>Syncing 1 files for test/test-auth:941b197143f22988459a0484809ee213e22b4366264d163fd8419feb07897d99</p> <p>Watching for changes...</p> <pre><code>&gt; auth &gt; node_modules &gt; src &gt; signup signup.js index.js &gt; .dockerignore &gt; Dockerfile &gt; package-lock.json &gt; package.json &gt; infra &gt; k8s auth-depl.yaml ingress-srv.yaml &gt; skaffold.yaml </code></pre> <p>My skaffold.yaml file is</p> <pre><code>apiVersion: skaffold/v2alpha3 kind: Config deploy: kubectl: manifests: - ./infra/k8s/* build: local: push: false artifacts: - image: test/test-auth docker: dockerfile: Dockerfile context: auth sync: manual: - src: '***/*.js' dest: src </code></pre> <p>If I change signup.js or index.js, skaffold gets stuck. Please help me!</p>
Rishabh Soni
<p>Given the output you included above, I suspect that Skaffold is copying the file across:</p> <blockquote> <pre><code>Syncing 1 files for test/test-auth:941b197143f22988459a0484809ee213e22b4366264d163fd8419feb07897d99 Watching for changes... </code></pre> </blockquote> <p>but your app is not set up to respond to file changes. You need to use a tool like <code>nodemon</code> to watch for file changes and restart your app. The Skaffold <a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples/hot-reload/node" rel="nofollow noreferrer"><code>hot-reload</code> example</a> shows one way to set this up.</p>
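<p>As a minimal sketch (the entrypoint path and script name below are assumptions about your project layout), you could run the app under <code>nodemon</code> so the process restarts whenever Skaffold syncs a changed file, e.g. a package.json snippet like:</p> <pre><code>"scripts": {
  "start": "nodemon src/index.js"
},
"devDependencies": {
  "nodemon": "^2.0.0"
}
</code></pre> <p>with the Dockerfile ending in <code>CMD ["npm", "start"]</code> so the container uses that script.</p>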
Brian de Alwis
<p>Have a simple program as shown below</p> <pre><code>import pyspark builder = ( pyspark.sql.SparkSession.builder.appName(&quot;MyApp&quot;) .config(&quot;spark.sql.extensions&quot;, &quot;io.delta.sql.DeltaSparkSessionExtension&quot;) .config( &quot;spark.sql.catalog.spark_catalog&quot;, &quot;org.apache.spark.sql.delta.catalog.DeltaCatalog&quot;, ) ) spark = builder.getOrCreate() spark._jsc.hadoopConfiguration().set( &quot;fs.gs.impl&quot;, &quot;com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem&quot; ) spark._jsc.hadoopConfiguration().set( &quot;fs.AbstractFileSystem.gs.impl&quot;, &quot;com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS&quot; ) df = spark.read.format(&quot;delta&quot;).load( &quot;gs://org/delta/bronze/mongodb/registration/audits&quot; ) print(df.show()) </code></pre> <p>This is packaged into a container using the below Dockerfile</p> <pre><code>FROM varunmallya/spark-pi:3.2.1 USER root ADD gcs-connector-hadoop2-latest.jar $SPARK_HOME/jars WORKDIR /app COPY main.py . </code></pre> <p>This app is then deployed as a SparkApplication on k8s using the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#specifying-deployment-mode" rel="nofollow noreferrer">spark-on-k8s</a> operator</p> <p>I expected to see 20 rows of data but instead got this exception</p> <pre><code>java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.sql.catalyst.expressions.ScalaUDF.f of type scala.Function1 in instance of org.apache.spark.sql.catalyst.expressions.ScalaUDF </code></pre> <p>However when I run this in local jupyter notebook I can see the desired. I have added the necessary package - <em>io.delta:delta-core_2.12:1.2.0</em> via the crd and have also ensured the <em>gcs-connector-hadoop2-latest.jar</em> is made available.</p> <p>What could the issue be?</p>
Varun Mallya
<p>Could you try the following <code>Dockerfile</code>:</p> <pre><code>FROM datamechanics/spark:3.1.1-hadoop-3.2.0-java-8-scala-2.12-python-3.8-dm17 USER root WORKDIR /app COPY main.py . </code></pre> <p>And then try deploying the <code>SparkApplication</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: sparkoperator.k8s.io/v1beta2 kind: SparkApplication metadata: name: sparky-pi namespace: spark spec: type: Python mode: cluster pythonVersion: &quot;3&quot; image: &lt;YOUR_IMAGE_GOES_HERE&gt;:latest mainApplicationFile: local:///app/main.py sparkVersion: &quot;3.1.1&quot; restartPolicy: type: OnFailure onFailureRetries: 3 onFailureRetryInterval: 10 onSubmissionFailureRetries: 5 onSubmissionFailureRetryInterval: 20 driver: cores: 1 coreLimit: &quot;1200m&quot; memory: &quot;512m&quot; labels: version: 3.1.1 serviceAccount: spark executor: serviceAccount: spark cores: 1 instances: 1 memory: &quot;512m&quot; labels: version: 3.1.1 </code></pre> <p>I ran this on my Kubernetes cluster and was able to get:</p> <p><a href="https://i.stack.imgur.com/Iq5pE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iq5pE.png" alt="" /></a></p> <p>I think here the base image <code>datamechanics/spark:3.1.1-hadoop-3.2.0-java-8-scala-2.12-python-3.8-dm17</code> is key. Props to the folks who put it together!</p> <p>Source: <a href="https://towardsdatascience.com/optimized-docker-images-for-apache-spark-now-public-on-dockerhub-1f9f8fed1665" rel="nofollow noreferrer">https://towardsdatascience.com/optimized-docker-images-for-apache-spark-now-public-on-dockerhub-1f9f8fed1665</a></p>
Benjamin Tan Wei Hao
<p>I've the following docker file which is working for my application. I was able to access to the simple web app server.</p> <pre><code>FROM golang:1.14.7 AS builder RUN go get github.com/go-delve/delve/cmd/dlv RUN mkdir /app ADD . /app WORKDIR /app RUN CGO_ENABLED=0 GOOS=linux go build -gcflags=&quot;all=-N -l&quot; -o main ./... FROM alpine:3.12.0 AS production COPY --from=builder /app . EXPOSE 8000 40000 ENV PORT=8000 CMD [&quot;./main&quot;] </code></pre> <p>When I adopt it like following, I am not able to deploy it successfully to Kubernetes. The container crashed with some general error, not something that I can use.</p> <pre><code>standard_init_linux.go:190: exec user process caused &quot;no such file or directory&quot; </code></pre> <p><strong>This not working</strong></p> <pre><code>FROM golang:1.14.7 AS builder RUN go get github.com/go-delve/delve/cmd/dlv RUN mkdir /app ADD . /app WORKDIR /app RUN CGO_ENABLED=0 GOOS=linux go build -gcflags=&quot;all=-N -l&quot; -o main ./... FROM alpine:3.12.0 AS production COPY --from=builder /app . COPY --from=builder /go/bin/dlv / EXPOSE 8000 40000 ENV PORT=8000 CMD [&quot;/dlv&quot;, &quot;--listen=:40000&quot;, &quot;--headless=true&quot;, &quot;--api-version=2&quot;, &quot;--accept-multiclient&quot;, &quot;exec&quot;, &quot;./main&quot;] </code></pre> <p>If someone want to try it out this is the simple program (this is minimum reproducible example), if you take the first docker file it will work for you, for the second it does not.</p> <pre class="lang-golang prettyprint-override"><code>package main import ( &quot;fmt&quot; &quot;net/http&quot; &quot;os&quot; ) func main() { fmt.Println(&quot;app is starting!&quot;) var port string if port = os.Getenv(&quot;PORT&quot;); len(port) == 0 { port = &quot;8080&quot; } http.HandleFunc(&quot;/&quot;, handler) http.ListenAndServe(&quot;:&quot;+port, nil) } func handler(w http.ResponseWriter, r *http.Request) { fmt.Fprintf(w, &quot;Hi there, %s!&quot;, r.URL.Path[1:]) } </code></pre>
PJEM
<p>You need to compile <code>dlv</code> itself with the static linking flags. Without that, <code>dlv</code> will have dynamic links to libc which doesn't exist within an alpine image. Other options include switching your production image to be debian based (<code>FROM debian</code>) or change to golang image to be alpine based (<code>FROM golang:1.14.7-alpine</code>). To compile <code>dlv</code> without dynamic links, the following Dockerfile works:</p> <pre><code>FROM golang:1.14.7 AS builder RUN CGO_ENABLED=0 go get -ldflags '-s -w -extldflags -static' github.com/go-delve/delve/cmd/dlv RUN mkdir /app ADD . /app WORKDIR /app RUN CGO_ENABLED=0 GOOS=linux go build -gcflags=&quot;all=-N -l&quot; -o main ./... FROM alpine:3.12.0 AS production COPY --from=builder /app . COPY --from=builder /go/bin/dlv / EXPOSE 8000 40000 ENV PORT=8000 CMD [&quot;/dlv&quot;, &quot;--listen=:40000&quot;, &quot;--headless=true&quot;, &quot;--api-version=2&quot;, &quot;--accept-multiclient&quot;, &quot;exec&quot;, &quot;./main&quot;] </code></pre> <p>To see the dynamic links, build your builder image and run <code>ldd</code> against the output binaries:</p> <pre><code>$ docker build --target builder -t test-63403272 . [+] Building 4.6s (11/11) FINISHED =&gt; [internal] load build definition from Dockerfile 0.0s =&gt; =&gt; transferring dockerfile: 570B 0.0s =&gt; [internal] load .dockerignore 0.0s =&gt; =&gt; transferring context: 2B 0.0s =&gt; [internal] load metadata for docker.io/library/golang:1.14.7 0.2s =&gt; [builder 1/6] FROM docker.io/library/golang:1.14.7@sha256:1364cfbbcd1a5f38bdf8c814f02ebbd2170c93933415480480104834341f283e 0.0s =&gt; [internal] load build context 0.0s =&gt; =&gt; transferring context: 591B 0.0s =&gt; CACHED [builder 2/6] RUN go get github.com/go-delve/delve/cmd/dlv 0.0s =&gt; CACHED [builder 3/6] RUN mkdir /app 0.0s =&gt; [builder 4/6] ADD . /app 0.1s =&gt; [builder 5/6] WORKDIR /app 0.0s =&gt; [builder 6/6] RUN CGO_ENABLED=0 GOOS=linux go build -gcflags=&quot;all=-N -l&quot; -o main ./... 4.0s =&gt; exporting to image 0.2s =&gt; =&gt; exporting layers 0.2s =&gt; =&gt; writing image sha256:d2ca7bbc0bb6659d0623e1b8a3e1e87819d02d0c7f0a0762cffa02601799c35e 0.0s =&gt; =&gt; naming to docker.io/library/test-63403272 0.0s $ docker run -it --rm test-63403272 ldd /go/bin/dlv linux-vdso.so.1 (0x00007ffda66ee000) libpthread.so.0 =&gt; /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007faa4824d000) libc.so.6 =&gt; /lib/x86_64-linux-gnu/libc.so.6 (0x00007faa4808c000) /lib64/ld-linux-x86-64.so.2 (0x00007faa48274000) </code></pre> <p>Libc is a common missing library when switching to alpine since it uses musl by default.</p>
BMitch
<p>I am aware of the pros of having multiple containers in one pod, but what are the cons of having multiple containers? We have a requirement for 20k pods in production, and our current infra supports a maximum of 900 pods per namespace. Which approach is best suited to accommodate this request?</p>
Karthik Reddy
<p>I'm not a devops guy, but from a developer's perspective:</p> <p>If you have many containers in a pod:</p> <ul> <li><p>you should be aware of lifecycle issues - which container starts first, which ends first, and so forth</p></li> <li><p>how to evaluate the "operability" of a pod as a logical unit. If there is one container in the pod, then it's clear - if the container runs - that's ok. But if you have many, what kind of probes (readiness, liveness) do you define? What happens if one container is dead, is the whole pod still operational?</p></li> <li><p>potentially it's easier to find a place in the kubernetes cluster for a replica of a small pod than a big one (one with many containers, which therefore probably consumes much more resources)</p></li> <li><p>you sacrifice potential scalability flexibility. Say container A and B are in the same pod. What if you want to scale out only container A?</p></li> <li><p>the same item as above applies to release management flexibility (rollout, canary releases, fallbacks to previous version - all that stuff)</p></li> <li><p>you'll have to think about a solution for gathering logs/metrics from the containers inside the pod. This can be easy or tedious depending on the actual technology stack, but this is a point that you'll have to solve anyway. Arguably when you have multiple containers in the pod the solution might get more complicated.</p></li> <li><p>somewhat less convenient operability with <code>kubectl</code>. Ok this is minor, but still. You'll always have to add that <code>-c</code> flag. Want to see the logs of the pod? Add <code>-c &lt;container-name&gt;</code>. Want to do <code>kubectl exec -it</code> - again, <code>-c &lt;container-name&gt;</code>. </p></li> </ul> <p>Of course, sometimes running a sidecar container is a valid case (for service meshes, for example). But all-in-all it's a tool for a job.</p> <p>Interestingly, I've found <a href="https://banzaicloud.com/blog/k8s-sidecars/" rel="noreferrer">an article</a> that talks about giving sidecar containers a "special attitude". This can be somewhat relevant, although from the question I understand that you don't consider "sidecars", if I've got you right.</p>
Mark Bramnik
<p>I've created a Kubernetes job that has now failed. Where can I find the logs to this job?</p> <p>I'm not sure how to find the associated pod (I assume once the job fails it deletes the pod)?</p> <p>Running <code>kubectl describe job</code> does not seem to show any relevant information:</p> <pre><code>Name: app-raiden-migration-12-19-58-21-11-2018 Namespace: localdev Selector: controller-uid=c2fd06be-ed87-11e8-8782-080027eeb8a0 Labels: jobType=database-migration Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"labels":{"jobType":"database-migration"},"name":"app-raiden-migration-12-19-58-21-1... Parallelism: 1 Completions: 1 Start Time: Wed, 21 Nov 2018 12:19:58 +0000 Pods Statuses: 0 Running / 0 Succeeded / 1 Failed Pod Template: Labels: controller-uid=c2fd06be-ed87-11e8-8782-080027eeb8a0 job-name=app-raiden-migration-12-19-58-21-11-2018 Containers: app: Image: pp3-raiden-app:latest Port: &lt;none&gt; Command: php artisan migrate Environment: DB_HOST: local-mysql DB_PORT: 3306 DB_DATABASE: raiden DB_USERNAME: &lt;set to the key 'username' in secret 'cloudsql-db-credentials'&gt; Optional: false DB_PASSWORD: &lt;set to the key 'password' in secret 'cloudsql-db-credentials'&gt; Optional: false LOG_CHANNEL: stderr APP_NAME: Laravel APP_KEY: ABCDEF123ERD456EABCDEF123ERD456E APP_URL: http://192.168.99.100 OAUTH_PRIVATE: &lt;set to the key 'oauth_private.key' in secret 'laravel-oauth'&gt; Optional: false OAUTH_PUBLIC: &lt;set to the key 'oauth_public.key' in secret 'laravel-oauth'&gt; Optional: false Mounts: &lt;none&gt; Volumes: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 2m job-controller Created pod: app-raiden-migration-12-19-58-21-11-2018-pwnjn Warning BackoffLimitExceeded 2m job-controller Job has reach the specified backoff limit </code></pre>
Chris Stryczynski
<p>One other approach:</p> <ul> <li><code>kubectl describe job $JOB</code></li> <li>Pod name is shown under "Events"</li> <li><code>kubectl logs $POD</code></li> </ul>
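<p>If the pod has not been cleaned up, you can also find it directly through the <code>job-name</code> label that the job controller adds to its pods (using the job name and namespace from the question):</p> <pre><code>kubectl -n localdev get pods --selector=job-name=app-raiden-migration-12-19-58-21-11-2018
kubectl -n localdev logs &lt;pod-name-from-previous-output&gt;

# or let kubectl pick a pod that belongs to the job
kubectl -n localdev logs job/app-raiden-migration-12-19-58-21-11-2018
</code></pre>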
David Thomas
<p>What is the relationship between EXPOSE in the dockerfile, TARGETPORT in the service YAML, and the actual listening port in the Pod?</p> <p>In my dockerfile</p> <pre><code>expose 8080 </code></pre> <p>in my deployment</p> <pre><code>ports: - containerPort: 8080 </code></pre> <p>In my service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: xtys-web-admin spec: type: NodePort ports: - port: 8080 targetPort: 8080 selector: app: xtys-web-admin </code></pre> <p>In my pod</p> <pre><code>kubectl exec xtys-web-admin-7b79647c8d-n6rhk -- ss -tnl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 100 *:8332 *:* </code></pre> <p>So, the app in the pod is actually listening on 8332 (from some config file). My question is: how does it still work? It does work, but I have my doubts. Can someone clarify?</p>
adrian ding
<p>In the Dockerfile, <code>EXPOSE</code> is documentation by the image creator to those running the image for how they have configured the image. It sets metadata in the image that you can inspect, but otherwise does not impact how docker configures networking between containers. (Many will confuse this for publishing a port on the host, which is very different than exposing the port. Publishing a port in docker actually creates a mapping to allow the container to be externally accessed.)</p> <p>The value of <code>containerPort</code> is a runtime equivalent of <code>EXPOSE</code> to expose a port that was not specified in the image. This, again, is documentation only, but may be used by other tooling that inspects running images to self configure. I've mostly seen this used by reverse proxies that default to the exposed port if you do not specify a port to connect.</p> <p>It is possible for someone to configure an image to listen on a different port number than the image creator documented in their <code>EXPOSE</code>. For example, the nginx image will document that it listens on port 80 with its default configuration, but you could provide your own <code>nginx.conf</code> file and reconfigure it to listen on port 8080 inside the container instead (e.g. if you did not want to run nginx as root).</p> <hr> <p>Now for the service side:</p> <p>The value of <code>targetPort</code> in a Kubernetes service needs to refer to the port the running container is actually listening on. Typically this is the same as the exposed port, but if you reconfigure your application like in the example above, you would set <code>targetPort</code> to 8080 instead of 80.</p> <p>The value of <code>port</code> in a Kubernetes service is the port the service itself listens on. For inter-container communication, you need to connect on this port, and it will often be the same as the <code>targetPort</code> to reduce confusion.</p> <p>Lastly, the value of <code>nodePort</code> in a Kubernetes service is the port published on the nodes for you to externally access your container. By default, this goes in the ephemeral port range starting at 30000.</p>
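<p>If it helps to see this concretely, both pieces of information can be inspected (a sketch using the names from the question; substitute your image name):</p> <pre><code># the EXPOSE metadata recorded in the image (documentation only)
docker image inspect --format '{{.Config.ExposedPorts}}' &lt;your-image&gt;

# the pod IP and port the service actually forwards to; this reflects targetPort,
# so traffic only reaches the app on 8332 if targetPort is changed to 8332
kubectl get endpoints xtys-web-admin
</code></pre>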
BMitch
<p>I am using fabric8 to develop a cluster management layer on top of Kubernetes, and I am confused as to what the 'official' API is to obtain notifications of errors when things go wrong when instantiating pods/rep controllers &amp; services etc. </p> <p>In the section "Pod Deployment Code" I have a stripped down version of what we do for pods. In the event that everything goes correctly, our code is fine. We rely on setting 'watches' as you can see in the method <code>deployPodWithWatch</code>. All I do in the given <code>eventReceived</code> callback is to print the event, but our real code will break apart a notification like this:</p> <pre><code>got action: MODIFIED / Pod(apiVersion=v1, kind=Pod, metadata=...etc etc status=PodStatus( conditions=[ </code></pre> <p>and pick out the 'status' element of the Pod and when we get PodCondition(status=True, type=Ready), we know that our pod has been successfully deployed.</p> <p>In the happy path case this works great. And you can actually run the code supplied with variable k8sUrl set to the proper url for your site (hopefully your k8s installation does not require auth which is site specific so i didn't provide code for that).</p> <p>However, suppose you change the variable <code>imageName</code> to "nginBoo". There is no public docker image of that name, so after you run the code, set your kubernetes context to the namespace "junk", and do a </p> <pre><code> describe pod podboy </code></pre> <p>you will see two status messages at the end with the following values for Reason / Message </p> <pre><code>Reason message failedSync Error syncing pod, skipping... failed Failed to pull image "nginBoo": API error (500): Error parsing reference: "nginBoo" is not a valid repository/tag </code></pre> <p>I would like to implement a watch callback so that it catches these types of errors. However, the only thing that I see are 'MODIFIED' events wherein the Pod has a field like this:</p> <pre><code> state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting( reason=API error (500): Error parsing reference: "nginBoo" is not a valid repository/tag </code></pre> <p>I suppose I could look for a reason code that contained the string 'API error' but this seems to be very much an implementation-dependent hack -- it might not cover all cases, and maybe it will change under my feet with future versions. I'd like some more 'official' way of figuring out if there was an error, but my searches have come up dry -- so I humbly request guidance from all of you k8s experts out there. 
Thanks !</p> <p>Pod Deployment Code</p> <pre><code>import com.fasterxml.jackson.databind.ObjectMapper import scala.collection.JavaConverters._ import com.ning.http.client.ws.WebSocket import com.typesafe.scalalogging.StrictLogging import io.fabric8.kubernetes.api.model.{DoneableNamespace, Namespace, Pod, ReplicationController} import io.fabric8.kubernetes.client.DefaultKubernetesClient.ConfigBuilder import io.fabric8.kubernetes.client.Watcher.Action import io.fabric8.kubernetes.client.dsl.Resource import io.fabric8.kubernetes.client.{DefaultKubernetesClient, Watcher} object ErrorTest extends App with StrictLogging { // corresponds to --insecure-skip-tls-verify=true, according to io.fabric8.kubernetes.api.model.Cluster val trustCerts = true val k8sUrl = "http://localhost:8080" val namespaceName = "junk" // replace this with name of a namespace that you know exists val imageName: String = "nginx" def go(): Unit = { val kube = getConnection dumpNamespaces(kube) deployPodWithWatch(kube, getPod(image = imageName)) } def deployPodWithWatch(kube: DefaultKubernetesClient, pod: Pod): Unit = { kube.pods().inNamespace(namespaceName).create(pod) /* create the pod ! */ val podWatchWebSocket: WebSocket = /* create watch on the pod */ kube.pods().inNamespace(namespaceName).withName(pod.getMetadata.getName).watch(getPodWatch) } def getPod(image: String): Pod = { val jsonTemplate = """ |{ | "kind": "Pod", | "apiVersion": "v1", | "metadata": { | "name": "podboy", | "labels": { | "app": "nginx" | } | }, | "spec": { | "containers": [ | { | "name": "podboy", | "image": "&lt;image&gt;", | "ports": [ | { | "containerPort": 80, | "protocol": "TCP" | } | ] | } | ] | } |} """. stripMargin val replacement: String = "image\": \"" + image val json = jsonTemplate.replaceAll("image\": \"&lt;image&gt;", replacement) System.out.println("json:" + json); new ObjectMapper().readValue(json, classOf[Pod]) } def dumpNamespaces(kube: DefaultKubernetesClient): Unit = { val namespaceNames = kube.namespaces().list().getItems.asScala.map { (ns: Namespace) =&gt; { ns.getMetadata.getName } } System.out.println("namespaces are:" + namespaceNames); } def getConnection = { val configBuilder = new ConfigBuilder() val config = configBuilder. trustCerts(trustCerts). masterUrl(k8sUrl). build() new DefaultKubernetesClient(config) } def getPodWatch: Watcher[Pod] = { new Watcher[Pod]() { def eventReceived(action: Action, watchedPod: Pod) { System.out.println("got action: " + action + " / " + watchedPod) } } } go() } </code></pre>
Chris Bedford
<p>I'd suggest you to have a look at events, see <a href="http://kubernetes.io/docs/user-guide/introspection-and-debugging/#example-debugging-pending-pods" rel="nofollow">this topic</a> for some guidance. Generally each object should generate events you can watch and be notified of such errors.</p>
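<p>While experimenting, the same events can also be inspected from the command line (namespace and pod name taken from your example):</p> <pre><code># events related to the failing pod, including the image pull failure
kubectl get events --namespace junk --field-selector involvedObject.name=podboy

# or stream events as they arrive
kubectl get events --namespace junk --watch
</code></pre>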
soltysh
<p>Hi I am trying to expose 5 ports for an Informix Container which is within a statefulSet. It has a headless service attached, to allow other internal stateless sets communicate with it internally. </p> <p>I can ping the headless service <code>informix-set-service</code> from my <code>informix-0</code> pod and other pods however when I try <code>nmap -p 9088 informix-set-service</code> the port is listed as closed. I am assuming this is because my yaml is wrong but I can't for the life find out where it's wrong. </p> <p>It appears that the headless service is indeed attached and pointing at the correct stateful-set and within the minikube dashboard everything looks and appears to be correct.</p> <p><a href="https://i.stack.imgur.com/kdW5M.png" rel="nofollow noreferrer">Service minikube dash screenshot</a></p> <pre><code>informix@informix-0:/$ nmap -p 9088 informix-set-service Starting Nmap 6.47 ( http://nmap.org ) at 2019-08-20 03:50 UTC Nmap scan report for informix-set-service (172.17.0.7) Host is up (0.00011s latency). rDNS record for 172.17.0.7: informix-0.informix.default.svc.cluster.local PORT STATE SERVICE 9088/tcp closed unknown Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds informix@informix-0:/$ nmap -p 9088 localhost Starting Nmap 6.47 ( http://nmap.org ) at 2019-08-20 03:50 UTC Nmap scan report for localhost (127.0.0.1) Host is up (0.00026s latency). Other addresses for localhost (not scanned): 127.0.0.1 PORT STATE SERVICE 9088/tcp open unknown Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds </code></pre> <p>Anyone got any ideas?</p> <h1>Deployment yaml snippet:</h1> <pre><code>############################################################################### # Informix Container ############################################################################### # # Headless service for Informix container StatefulSet. # Headless service with clusterIP set to NULL # create DNS records for Informix container hosts. # apiVersion: v1 kind: Service metadata: name: informix-set-service labels: component: informix-set-service provider: IBM spec: clusterIP: None ports: - port: 9088 name: informix - port: 9089 name: informix-dr - port: 27017 name: mongo - port: 27018 name: rest - port: 27883 name: mqtt selector: component: informix-set-service --- # # Service for Informix container StatefulSet service. # This is used as an external entry point for # the ingress controller. # apiVersion: v1 kind: Service metadata: name: informix-service labels: component: informix-service provider: 4js spec: ports: - port: 9088 name: informix - port: 9089 name: informix-dr - port: 27017 name: mongo - port: 27018 name: rest - port: 27883 name: mqtt selector: component: informix-set-service --- # # StatefulSet for Informix cluster. # StatefulSet sets predictible hostnames,and external storage is bound # to the pods within StateFulSets for the life. # Replica count configures number of Informix Server containers. # apiVersion: apps/v1 kind: StatefulSet metadata: name: informix labels: app: informix component: db release: "12.10" provider: IBM spec: serviceName: informix #replicas: 2 #keep it simple for now... 
selector: matchLabels: component: informix-set-service template: metadata: labels: component: informix-set-service spec: containers: - name: informix image: ibmcom/informix-innovator-c:12.10.FC12W1IE tty: true securityContext: privileged: true env: - name: LICENSE value: "accept" - name: DBDATE value: "DMY4" - name: SIZE value: "custom" - name: DB_USER value: "db_root" - name: DB_NAME value: "db_main" - name: DB_PASS value: "db_pass123" ports: - containerPort: 9088 name: informix - containerPort: 9089 name: informix-dr - containerPort: 27017 name: mongo - containerPort: 27018 name: rest - containerPort: 27883 name: mqtt volumeMounts: - name: data mountPath: /opt/ibm/data - name: bind-dir-mnt mountPath: /mnt - name: bind-patch-informix-setup-sqlhosts mountPath: /opt/ibm/scripts/informix_setup_sqlhosts.sh - name: bind-file-dbexport mountPath: /opt/ibm/informix/bin/dbexport - name: bind-file-dbimport mountPath: /opt/ibm/informix/bin/dbimport - name: bind-file-ontape mountPath: /opt/ibm/informix/bin/ontape - name: bind-file-informix-config mountPath: /opt/ibm/data/informix_config.custom - name: bind-file-sqlhosts mountPath: /opt/ibm/data/sqlhosts volumes: - name: data persistentVolumeClaim: claimName: ifx-data - name: bind-dir-mnt hostPath: path: &lt;PROJECTDIR&gt;/resources/informix type: DirectoryOrCreate - name: bind-patch-informix-setup-sqlhosts hostPath: path: &lt;PROJECTDIR&gt;/containers/informix/resources/scripts/informix_setup_sqlhosts.sh type: File - name: bind-file-dbexport hostPath: path: &lt;PROJECTDIR&gt;/containers/informix/resources/bin/dbexport type: File - name: bind-file-dbimport hostPath: path: &lt;PROJECTDIR&gt;/containers/informix/resources/bin/dbimport type: File - name: bind-file-ontape hostPath: path: &lt;PROJECTDIR&gt;/containers/informix/resources/bin/ontape type: File - name: bind-file-informix-config hostPath: path: &lt;PROJECTDIR&gt;/containers/informix/resources/informix_config.custom type: File - name: bind-file-sqlhosts hostPath: path: &lt;PROJECTDIR&gt;/containers/informix/resources/sqlhosts.k8s type: File --- </code></pre> <p><strong>Edit 1: (added output of ss -lnt)</strong></p> <pre><code>informix@informix-0:/$ ss -lnt State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 0 127.0.0.1:9088 *:* LISTEN 0 0 127.0.0.1:9089 *:* LISTEN 0 0 172.17.0.7:27017 *:* LISTEN 0 0 172.17.0.7:27018 *:* LISTEN 0 0 172.17.0.7:27883 *:* LISTEN 0 0 *:22 *:* LISTEN 0 0 :::22 :::* </code></pre>
ryan4j
<p>From the <code>ss</code> output, you are listening on 127.0.0.1, rather than all interfaces:</p> <pre><code>informix@informix-0:/$ ss -lnt State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 0 127.0.0.1:9088 *:* LISTEN 0 0 127.0.0.1:9089 *:* </code></pre> <p>You need to adjust your application configuration to listen on something like <code>0.0.0.0</code> to enable it to be accessed from outside of the pod.</p>
BMitch
<p>I have a simple cronjob that runs every 10 minutes:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: myjob spec: schedule: &quot;*/10 * * * *&quot; #every 10 minutes successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 1 jobTemplate: spec: template: spec: containers: - name: job image: image imagePullPolicy: Always restartPolicy: OnFailure </code></pre> <p>It indeed runs every 10 minutes, but I would also like it to run a first time right when I deploy the cronjob. Is that possible?</p>
ArielB
<p>You could have a one-time Job trigger the scheduled CronJob:</p> <pre><code>kubectl create job --from=cronjob/&lt;name of cronjob&gt; &lt;name of job&gt; </code></pre> <p><a href="https://www.craftypenguins.net/how-to-trigger-a-kubernetes-cronjob-manually/" rel="nofollow noreferrer">Source</a></p> <p>The one-time Job would need to run after the scheduled CronJob has been created, and its image would need to include the kubectl binary. The API-server permissions needed to run kubectl within the container could be provided by linking a ServiceAccount to that Job.</p>
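<p>For that last point, a minimal RBAC sketch could look like the following (all names are placeholders; the Role only needs enough rights for <code>kubectl create job --from=cronjob/...</code>):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: cronjob-trigger
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cronjob-trigger
rules:
# read the CronJob to copy its job template
- apiGroups: ["batch"]
  resources: ["cronjobs"]
  verbs: ["get"]
# create the Job built from that template
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cronjob-trigger
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cronjob-trigger
subjects:
- kind: ServiceAccount
  name: cronjob-trigger
</code></pre> <p>The pod that runs kubectl would then set <code>serviceAccountName: cronjob-trigger</code> in its spec.</p>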
Gavin
<p>Imagine a scenario in which a producer is producing 100 messages per second, and we're working on a system where consuming messages ASAP matters a lot; even a 5-second delay might result in a decision not to handle that message anymore. Also, the order of messages does not matter.</p> <p>So I don't want to use a basic queue and a single pod listening on a single partition to consume messages, since in order to consume a message the consumer needs to make multiple remote API calls, and this might take time.</p> <p>In such a scenario, I'm thinking of a single Kafka topic with 100 partitions, and for each partition I'd have a separate machine (pod) listening, covering partitions 0 to 99.</p> <p>Am I thinking about this right? This is my first project with Kafka, and this approach seems a little weird to me.</p>
behz4d
<p>For your use case, think of partitions = max number of instances of the service consuming data. Don't create extra partitions if you'll have 8 instances. This will have a negative impact if consumers need to be rebalanced and probably won't give you any performance improvement. Also, 100 messages/s is very, very little; you can make this work with almost any technology.</p> <p>To get the maximum performance I would suggest:</p> <ul> <li>Use a <a href="https://kafka.apache.org/31/javadoc/org/apache/kafka/clients/producer/RoundRobinPartitioner.html" rel="nofollow noreferrer">round robin partitioner</a></li> <li>Find a Parallel consumer implementation for your platform (for the <a href="https://github.com/confluentinc/parallel-consumer" rel="nofollow noreferrer">jvm</a>)</li> </ul> <p>And there are a few producer and consumer properties that you'll need to change, but they depend on your environment. For example <code>batch.size</code>, <code>linger.ms</code>, etc. I would also reconsider the need to set <code>acks=all</code>, as it might be ok for you to lose data if a broker dies, given that old data is of no use.</p> <p>One warning: in Java, the standard kafka consumer is single threaded. This surprises many people and I'm not sure if the same is true for other platforms. So having 100s of partitions won't give any performance benefit with these consumers, and that's why it's important to use a Parallel Consumer.</p> <p>One more warning: Kafka is a complex broker. It's trivial to start using it, but it's a very bumpy journey to use it correctly.</p> <p>And a note: one of the benefits of Kafka is that it keeps messages rather than deleting them once they are consumed. If messages older than 5 seconds are useless for you, Kafka might be the wrong technology, and using a more traditional broker might be easier (ActiveMQ, RabbitMQ, or blazing fast ones like ZeroMQ).</p>
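<p>As a rough starting point for the producer side (the numbers are illustrative only and need tuning for your environment):</p> <pre><code># producer properties (illustrative values)
partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
# bytes batched per partition before a send is forced
batch.size=32768
# small wait to let batches fill up
linger.ms=5
# use acks=all only if you cannot afford to lose records
acks=1
</code></pre>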
Augusto
<p>I just switched from ForkPool to gevent with concurrency (5) as the pool method for Celery workers running in Kubernetes pods. After the switch I've been getting a non-recoverable error in the worker:</p> <p><code>amqp.exceptions.PreconditionFailed: (0, 0): (406) PRECONDITION_FAILED - delivery acknowledgement on channel 1 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more</code></p> <p>The broker logs give basically the same message:</p> <p><code>2021-11-01 22:26:17.251 [warning] &lt;0.18574.1&gt; Consumer None4 on channel 1 has timed out waiting for delivery acknowledgement. Timeout used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more</code></p> <p>I have <code>CELERY_ACK_LATE</code> set up, but was not familiar with the necessity to set a timeout for the acknowledgement period. And this never happened when using processes. Tasks can be fairly long (60-120 seconds sometimes), but I can't find a specific setting to allow that.</p> <p>I've read a post on another forum about a user who set the timeout on the broker configuration to a huge number (like 24 hours) and was still having the same problem, so that makes me think there may be something else related to the issue.</p> <p>Any ideas or suggestions on how to make the worker more resilient?</p>
lowercase00
<p>The accepted answer is the correct answer. However, if you have an existing RabbitMQ server running and do not want to restart it, you can dynamically set the configuration value by running the following command on the RabbitMQ server:</p> <p><code>rabbitmqctl eval 'application:set_env(rabbit, consumer_timeout, 36000000).'</code></p> <p>This will set the new timeout to 10 hrs (36000000ms). For this to take effect, you need to restart your workers though. Existing worker connections will continue to use the old timeout.</p> <p>You can check the current configured timeout value as well:</p> <p><code>rabbitmqctl eval 'application:get_env(rabbit, consumer_timeout).'</code></p> <p>If you are running RabbitMQ via Docker image, here's how to set the value: Simply add <code>-e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=&quot;-rabbit consumer_timeout 36000000&quot;</code> to your <code>docker run</code> OR set the environment <code>RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS</code> to <code>&quot;-rabbit consumer_timeout 36000000&quot;</code>.</p> <p>Hope this helps!</p>
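<p>If you would rather make the change persistent than apply it dynamically, the same value can also go into the broker configuration (recent RabbitMQ releases accept it directly in <code>rabbitmq.conf</code>; older ones need the Erlang-style <code>advanced.config</code>), for example for the same 10 hours:</p> <pre><code># rabbitmq.conf
consumer_timeout = 36000000
</code></pre>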
Sarang
<p>I'm trying to use the kubernetes-alpha provider in Terraform, but I get a &quot;Failed to construct REST client&quot; error message. I'm using tfk8s to convert my yaml file to terraform code.</p> <p>I make the same declaration for this provider as for the kubernetes provider, and my kubernetes provider works correctly.</p> <pre><code>provider &quot;kubernetes-alpha&quot; { host = &quot;https://${data.google_container_cluster.primary.endpoint}&quot; token = data.google_client_config.default.access_token cluster_ca_certificate = base64decode(data.google_container_cluster.primary.master_auth[0].cluster_ca_certificate) } provider &quot;kubernetes&quot; { host = &quot;https://${data.google_container_cluster.primary.endpoint}&quot; token = data.google_client_config.default.access_token cluster_ca_certificate = base64decode(data.google_container_cluster.primary.master_auth[0].cluster_ca_certificate) } </code></pre> <pre><code>resource &quot;kubernetes_manifest&quot; &quot;exemple&quot; { provider = kubernetes-alpha manifest = { # result of tfk8s } } </code></pre> <p><a href="https://i.stack.imgur.com/rJoT9.png" rel="nofollow noreferrer">the error message</a></p> <p>Can somebody help?</p>
握草行天下
<p>After some digging, I found that this resource requires a running kubernetes instance and config before the terraform plan will work properly. Best stated in github here: <a href="https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/199#issuecomment-832614387" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/199#issuecomment-832614387</a></p> <p>Basically, you have to have two steps to first terraform apply your main configuration to stand up kubernetes in your cloud, and then secondly terraform apply the CRD resource once that cluster has been established.</p> <p>EDIT: I'm still trying to learn good patterns/practices for managing terraform config and found this pretty helpful. <a href="https://stackoverflow.com/questions/47708338/how-to-give-a-tf-file-as-input-in-terraform-apply-command">How to give a .tf file as input in Terraform Apply command?</a>. I ended up just keeping the cert manager CRD as a standard kubernetes manifest yaml that I apply per-cluster with other application helm charts.</p>
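<p>A sketch of that two-step flow with a single configuration (the resource address below is hypothetical and depends on how your cluster is declared):</p> <pre><code># step 1: create only the cluster so a reachable API endpoint exists
terraform apply -target=google_container_cluster.primary

# step 2: apply everything else, including the kubernetes_manifest resources
terraform apply
</code></pre>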
mccrackend
<p>Istio on Kubernetes injects an Envoy sidecar to run alongside Pods and implement a service mesh, however Istio itself <a href="https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations" rel="nofollow noreferrer">cannot ensure traffic does not bypass this proxy</a>; if that happens Istio security policy is no longer applied.</p> <p>Therefore, I am trying to understand all ways in which this bypass could happen (assuming Envoy itself hasn't been compromised) and find ways to prevent them so that TCP traffic originating from a Pod's network namespace is guaranteed to have gone through Envoy (or at least is much more likely to have done):</p> <ol> <li>Since (at the time of writing) Envoy does not support UDP (it's <a href="https://github.com/envoyproxy/envoy/pull/9046" rel="nofollow noreferrer">nearly there</a>), UDP traffic won't be proxied, so use <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">NetworkPolicy</a> to ensure only TCP traffic is allowed to/from the Pod (e.g. to avoid TCP traffic being tunnelled out via a VPN over UDP)</li> <li>Drop NET_ADMIN capability to prevent the Pod from reconfiguring the IPTables rules in its network namespace that capture traffic</li> <li>Drop NET_RAW capability to prevent the Pod from opening raw sockets and bypassing the netfilter hook points that IPTables makes use of</li> </ol> <p>The only other attack vector I know of would be a kernel vulnerability - are there any others? Maybe there are other L3/4 protocols that IPTables doesn't recognise or ignores?</p> <p>I understand that <a href="https://www.youtube.com/watch?v=ER9eIXL2_14&amp;t=17m22s" rel="nofollow noreferrer">eBPF and Cilium</a> could be used to enforce this interception at the socket level, but I am interested in the case of using vanilla Istio on Kubernetes.</p> <p>EDIT: I am also assuming the workload does not have Kubernetes API server access</p>
dippynark
<p>Envoy is not designed to be used as a firewall. Service meshes that rely on it, such as Istio or Cilium, only consider it a bug if you can bypass the policies on the receiving end.</p> <p>For example, any pod can trivially bypass any Istio or Cilium policies by terminating its own Envoy with <code>curl localhost:15000/quitquitquit</code> and starting a custom proxy on port 15001 that allows everything before Envoy is restarted.</p> <p>You can patch up that particular hole, but since resisting such attacks <em>is not a design goal</em> for the service meshes, there are probably dozens of other ways to accomplish the same thing. New ways to bypass these policies may also be added in subsequent releases.</p> <p>If you want your security policies actually enforced on the end that initiates the connection and not only on the receiving end, consider using a network policy implementation for which it <em>is</em> a design goal, such as Calico.</p>
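<p>For the question's first point (allowing only TCP egress so UDP tunnels cannot sidestep the proxy), a minimal sketch of a standard Kubernetes NetworkPolicy, enforced by a CNI such as Calico, might look like this (the <code>app</code> label is hypothetical):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tcp-egress-only
spec:
  podSelector:
    matchLabels:
      app: my-app          # hypothetical workload label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP    # omitting the port allows any TCP port, but no UDP
</code></pre>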
Shnatsel
<p>In Kubernetes, there is deletionTimestamp to signal an ongoing deletion and there are finalizers to model tasks during the process of deletion. However, it could be, that during the deletion, the specification of a parent object changes in a way that would effective make cancelling the deletion the most desirable solution.</p> <p>I'd expect a clear and complete documentation of deletionTimestamp and finalization covering the entire lifecycle of deletionTimestamp. It seems that most people seem to assume that it is either zero or nonzero and cannot be changed while it is nonzero. However, there seems to be no documentation on that. I also do not want to &quot;just check&quot;, because just check is subject to change and may stop working tomorrow.</p>
Timm Felden
<p>The answer is no: once <code>deletionTimestamp</code> is set, the deletion cannot be cancelled.</p> <p>Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned. Documentation is <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/" rel="nofollow noreferrer">here</a>.</p> <p>One reason is that garbage collection relies on this marker: in foreground cascading deletion, the owner object you're deleting first enters a &quot;deletion in progress&quot; state. Read through <a href="https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion" rel="nofollow noreferrer">Foreground cascading deletion</a> for a detailed understanding.</p>
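<p>For illustration, an object mid-deletion looks roughly like the sketch below (the finalizer key is hypothetical). <code>deletionTimestamp</code> is set by the API server and cannot be cleared to &quot;cancel&quot; the deletion; only the finalizers list can be edited, which lets the deletion complete rather than reversing it:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: example
  finalizers:
    - example.com/cleanup                      # hypothetical finalizer key
  deletionTimestamp: &quot;2023-01-01T00:00:00Z&quot;    # set by the API server, read-only
data: {}
</code></pre>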
Bijendra
<p>I'm using the janusgraph docker image - <a href="https://hub.docker.com/r/janusgraph/janusgraph" rel="nofollow noreferrer">https://hub.docker.com/r/janusgraph/janusgraph</a> - in my kubernetes deployment to initialise the remote graph using a groovy script mounted to <code>docker-entrypoint-initdb.d</code>.</p> <p>This works as expected, but if the remote host is not ready the janusgraph container throws an exception and stays in the running state.</p> <p>Because of this kubernetes will not attempt to restart the container. Is there any way to configure this janusgraph container to terminate in case of any exception?</p>
Vivek
<p>A <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer">readinessProbe</a> could be employed here with a command like <code>janusgraph show-config</code> or something similar that exits with a non-zero code while the remote host is not reachable.</p> <pre><code>spec:
  containers:
  - name: liveness
    image: janusgraph/janusgraph:latest
    readinessProbe:
      exec:
        command:
        - janusgraph
        - show-config
</code></pre> <p>Kubernetes will stop routing traffic to the pod while the <code>readinessProbe</code> fails. If the container should actually be terminated and restarted when the remote host becomes unavailable, use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">livenessProbe</a> instead (or in addition).</p> <p>Consider <a href="https://docs.janusgraph.org/advanced-topics/monitoring/#configuring-metrics-reporting" rel="nofollow noreferrer">enabling</a> JanusGraph server metrics, which could then be used with Prometheus for additional monitoring or even with the <code>livenessProbe</code> itself.</p>
Gavin
<p>I have added a <code>pgbouncer-exporter</code> container to my deployment. It emits metrics on port <code>9100</code>. I want to add a scraper for these metrics so that they become visible in Prometheus. How can I do this using a Kubernetes <code>ServiceMonitor</code>?</p>
Dev
<p>I'm unfamiliar with <a href="https://github.com/prometheus-community/pgbouncer_exporter" rel="nofollow noreferrer"><code>pgbouncer-exporter</code></a> but the principles are consistent irrespective of technology.</p> <p>You'll need to:</p> <ol> <li>Ensure the <code>pgbouncer_exporter</code>'s port (default <code>9127</code>?) is published so that the <code>/metrics</code> are accessible beyond the Pod.</li> <li>Test <code>GET</code>'ting the endpoint (e.g. <code>kubectl port-forward</code> to the <code>Deployment</code>) to ensure that <code>/metrics</code> is accessible.</li> <li>Determine whether to use a <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#servicemonitor" rel="nofollow noreferrer"><code>ServiceMonitor</code></a> or <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/design.md#podmonitor" rel="nofollow noreferrer"><code>PodMonitor</code></a>. If you have a Service exposing <code>pgbouncer_exporter</code>, then use <code>ServiceMonitor</code>. Otherwise, use <code>PodMonitor</code></li> <li>Configure <code>*Monitor</code>'s <code>selector</code> and <code>port</code></li> <li>Apply the config to your cluster in the <code>Namespace</code> of your <code>Deployment</code>.</li> </ol>
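<p>A minimal sketch of the result, assuming a <code>Service</code> labelled <code>app: pgbouncer-exporter</code> that exposes the exporter's port <code>9100</code> under the name <code>metrics</code>, and a Prometheus Operator whose <code>serviceMonitorSelector</code> matches a <code>release: prometheus</code> label (all of these names are assumptions to adjust):</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pgbouncer-exporter
  labels:
    release: prometheus        # must match your Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: pgbouncer-exporter  # labels on the Service, not the Pod
  endpoints:
    - port: metrics            # the *named* Service port (9100 here)
      path: /metrics
      interval: 30s
</code></pre>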
DazWilkin
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/</a> According to the above document in order to use encryption configuration, we need to edit the <code>kube-apiserver.yaml</code> file. But in GCP, Azure or AWS we cannot view this the api-server as it is managed by the cloud provider. How can we use encryption configuration in this case? Has anyone managed to use encryption configuration to encrypt secrets in GCP,Azure and AWS?</p>
Ajinkya16
<p><a href="https://cloud.google.com/secret-manager" rel="nofollow noreferrer">Google Secret Manager (GSM)</a> is GCP’s flagship service for storing, rotating and retrieving secrets. A secret in GSM is stored in encrypted form. It supports IAM for authentication and fine-grained access controls.</p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-pod-security?WT.mc_id=docs-azuredevtips-azureappsdev" rel="nofollow noreferrer">Azure Key Vault FlexVolume</a> and, for AWS, Amazon Elastic Container Service for Kubernetes (EKS) integrations are the other tools that can be used.</p>
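<p>For example, storing a secret in GSM and granting a workload access might look like this (the secret name, service account and project are placeholders):</p> <pre><code># Create a secret and add a version
gcloud secrets create db-password --replication-policy=automatic
echo -n &quot;s3cr3t&quot; | gcloud secrets versions add db-password --data-file=-

# Grant the workload's service account read access
gcloud secrets add-iam-policy-binding db-password \
  --member=&quot;serviceAccount:my-app@my-project.iam.gserviceaccount.com&quot; \
  --role=&quot;roles/secretmanager.secretAccessor&quot;
</code></pre>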
Bijendra
<p>I have a dockerfile with a custom SQL server 2019 installation running a bashscript, which in turn calls another bash script:</p> <pre><code>FROM mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-16.04 ARG BuildConfiguration=Debug USER root # Install Unzip RUN apt-get update \ &amp;&amp; apt-get install unzip -y # Install SQLPackage for Linux and make it executable RUN wget -progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2143497 \ &amp;&amp; unzip -qq sqlpackage.zip -d /opt/sqlpackage \ &amp;&amp; chmod +x /opt/sqlpackage/sqlpackage USER mssql # Create a config directory RUN mkdir -p /usr/config WORKDIR /usr/config # Copy required source files COPY entrypoint.sh /usr/config COPY configure-db.sh /usr/config COPY setup.sql /usr/config COPY PrepareServer.sql /usr/config COPY tSQLt.class.sql /usr/config # Copy the dacpac, that we will be deploying. Make sure the project has built before you run the dockerfile! COPY ./bin/${BuildConfiguration}/Database.dacpac /usr/config ENTRYPOINT [&quot;/bin/bash&quot;, &quot;./entrypoint.sh&quot;] CMD [&quot;tail -f /dev/null&quot;] HEALTHCHECK --interval=15s CMD /opt/mssql-tools/bin/sqlcmd -U sa -P $MSSQL_SA_PASSWORD -Q &quot;select 1&quot; &amp;&amp; grep -q &quot;MSSQL CONFIG COMPLETE&quot; ./config.log </code></pre> <p>entrypoint.sh:</p> <pre><code>#!/bin/bash /opt/mssql/bin/sqlservr &amp; /bin/bash /usr/config/configure-db.sh eval $1 </code></pre> <p>configure-db.sh:</p> <pre><code>#!/bin/bash # wait for MSSQL server to start export STATUS=1 i=0 while [[ $STATUS -ne 0 ]] &amp;&amp; [[ $i -lt 30 ]]; do i=$i+1 /opt/mssql-tools/bin/sqlcmd -t 1 -U sa -P $MSSQL_SA_PASSWORD -Q &quot;select 1&quot; &gt;&gt; /dev/null STATUS=$? done if [ $STATUS -ne 0 ]; then echo &quot;Error: MSSQL SERVER took more than thirty seconds to start up.&quot; exit 1 fi echo &quot;======= MSSQL SERVER STARTED ========&quot; # Run the setup script to create the DB and the schema in the DB /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $MSSQL_SA_PASSWORD -d master -i setup.sql #install the tSQLt CLR for testing echo &quot;======= PREPARING FOR tSQLt ========&quot; /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $MSSQL_SA_PASSWORD -d master -i PrepareServer.sql echo &quot;======= PREPARATION FOR tSQLt FINISHED ========&quot; echo &quot;======= INSTALLING tSQLt ========&quot; /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $MSSQL_SA_PASSWORD -d master -i tSQLt.class.sql echo &quot;======= INSTALLING tSQLt FINISHED========&quot; echo &quot;======= Starting Deployment of Dacpac ========&quot; /opt/sqlpackage/sqlpackage /a:Publish /tsn:. /tdn:${MSSQL_DB} /tu:sa /tp:$MSSQL_SA_PASSWORD /sf:/usr/config/Database.dacpac echo &quot;======= Finished Deployment of Dacpac ========&quot; echo &quot;======= MSSQL CONFIG COMPLETE =======&quot; </code></pre> <p>This is then deployed to Kuberenetes, but upon startup I see the following lines in the log:</p> <pre><code>./entrypoint.sh: line 2: $'\r': command not found ./entrypoint.sh: line 3: $'\r': command not found ./entrypoint.sh: line 4: $'\r': command not found : No such file or directoryigure-db.sh ./entrypoint.sh: line 6: $'\r': command not found </code></pre> <p>I have searched for the carriage return in Notepad++ and VSC, but could not find any. 
Notepad++, as well as VSC, display that the EOL is set to Unix (LF).</p> <p>I have tried to manually run the file configure-db.sh with bash inside the pod and receive the following output:</p> <pre><code>mssql@sql-dev:/usr/config$ /bin/bash configure-db.sh
configure-db.sh: line 5: $'\r': command not found
configure-db.sh: line 33: syntax error: unexpected end of file
</code></pre> <p>Weirdly enough, running this with docker-compose works flawlessly. Is there something that I am doing wrong here?</p> <p><strong>Update</strong> Just to make sure I did not overlook anything by accident, I went ahead and opened up the solution folder on WSL / Ubuntu and created an entrypoint2.sh with <code>sudo chown +x ./entrypoint2.sh</code> and referenced this in the dockerfile. I made sure that the line endings were indeed LF / <code>\n</code> and <em>not</em> CR-LF, then checked the file in and deployed it. In addition, I ran the file through <code>dos2unix</code> again.</p> <p>The outcome is identical. It works using docker-compose, but throws an error using Kubernetes:</p> <pre><code>./entrypoint2.sh: line 2: $'\r': command not found
./entrypoint2.sh: line 4: $'\r': command not found
./entrypoint2.sh: line 5: $'\r': command not found
: No such file or directoryusr/config/configure-db.sh
./entrypoint2.sh: line 7: $'\r': command not found
</code></pre>
Marco
<p>The error isn't that bash isn't found, it's that <code>\r</code> isn't found. This indicates you've saved your script with Windows linefeeds and tried to run that script on a Linux platform. From your editor, save the script with Linux linefeeds (LF, not CR-LF). Or you can use a tool like dos2unix to strip the carriage returns from the script.</p>
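<p>A quick sketch of both options, run from the project directory before building the image (assuming <code>dos2unix</code> or <code>sed</code> is available):</p> <pre><code># Option 1: convert the scripts in place
dos2unix entrypoint.sh configure-db.sh

# Option 2: strip carriage returns with sed
sed -i 's/\r$//' entrypoint.sh configure-db.sh
</code></pre> <p>If the files keep reverting, check whether git is converting line endings on checkout (e.g. <code>core.autocrlf</code>); forcing LF for shell scripts with a <code>.gitattributes</code> rule such as <code>*.sh text eol=lf</code> would also explain why an image built from your local tree works with docker-compose while one built from a fresh checkout does not.</p>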
BMitch
<p>I am discovering Google Cloud Kubernetes being fairly new to the topic. I have created couple of clusters and later deleted them (that is what I thought). When I go to the console I see one new cluster:</p> <p><a href="https://i.stack.imgur.com/aYgFC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aYgFC.png" alt="enter image description here" /></a></p> <p>But when I run the command:</p> <pre><code>kubectl config view </code></pre> <p>I see other clusters defined</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: DATA+OMITTED server: https://34.68.77.89 name: gke_question-tracker_us-central1-c_hello-java-cluster - cluster: certificate-authority-data: DATA+OMITTED server: https://34.135.56.138 name: gke_quizdev_us-central1_autopilot-cluster-1 contexts: - context: cluster: gke_question-tracker_us-central1-c_hello-java-cluster user: gke_question-tracker_us-central1-c_hello-java-cluster name: gke_question-tracker_us-central1-c_hello-java-cluster - context: cluster: gke_quizdev_us-central1_autopilot-cluster-1 user: gke_quizdev_us-central1_autopilot-cluster-1 name: gke_quizdev_us-central1_autopilot-cluster-1 current-context: gke_quizdev_us-central1_autopilot-cluster-1 kind: Config preferences: {} users: - name: gke_question-tracker_us-central1-c_hello-java-cluster user: exec: apiVersion: client.authentication.k8s.io/v1beta1 args: null command: gke-gcloud-auth-plugin env: null installHint: Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke interactiveMode: IfAvailable provideClusterInfo: true - name: gke_quizdev_us-central1_autopilot-cluster-1 user: exec: apiVersion: client.authentication.k8s.io/v1beta1 args: null command: gke-gcloud-auth-plugin env: null installHint: Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke interactiveMode: IfAvailable provideClusterInfo: true </code></pre> <ol> <li><p>Where in the Google Cloud Dashboard I can see the clusters mentioned in the config file (gke_question-tracker_us-central1-c_hello-java-cluster, gke_quizdev_us-central1_autopilot-cluster-1)?</p> </li> <li><p>Where in the Google Cloud Dashboard I can see the users mentioned in the config file?</p> </li> <li><p>Why I do not see the questy-java-cluster after running the kubectl config view command?</p> </li> </ol>
fascynacja
<p>This <strong>is</strong> a tad confusing.</p> <p>There are 2 related but <strong>disconnected</strong> &quot;views&quot; of the clusters.</p> <p>The first view is Google Cloud's &quot;view&quot;. This is what you're seeing in Cloud Console. You would see the same (!) details using e.g. <code>gcloud container clusters list --project=quizdev</code> (see <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/list" rel="nofollow noreferrer">docs</a>). This is the current set of Kubernetes clusters resources (there's one cluster <code>questy-java-cluster</code> in the current project (<code>quizdev</code>).</p> <p><code>kubectl</code> generally (though you can specify the projects on the command line too) uses a so-called kubeconfig file (default Linux location: <code>~/.kube/config</code>) to hold the configuration information for clusters, contexts (combine clusters with user and possible more) with users. See <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="nofollow noreferrer">Organizing Cluster Access using kubeconfig files</a>.</p> <p>Now, it's mostly up to you (the developer) to keep the Google Cloud view and the <code>kubectl</code> view in sync.</p> <p>When you <code>gcloud container clusters create</code> (or use Cloud Console), <code>gcloud</code> creates the cluster (and IIRC) configures the default kubeconfig file for you. This is to make it easier to immediately use <code>kubectl</code> after creating the cluster. You can also always <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials" rel="nofollow noreferrer"><code>gcloud container clusters get-credentials</code></a> to repeat the credentials step (configuring kubeconfig).</p> <p>If you create clusters using Cloud Console, you <strong>must</strong> <code>gcloud container clusters get-credentials</code> manually in order to update your local kubeconfig file(s) with the cluster's credentials.</p> <p>I don't recall whether <code>gcloud container clusters delete</code> deletes the corresponding credentials in the default kubeconfig file; I think it <strong>doesn't</strong>.</p> <p>The result is that there's usually 'drift' between what the kubeconfig file contains and the clusters that exist; I create|delete clusters daily and periodically tidy my kubeconfig file for this reason.</p> <p>One additional complication is that (generally) there's one kubeconfig file (<code>~/.kube/config</code>) but you may also have multiple Google Cloud Projects. The clusters that you've <code>get-credentials</code> (either manually or automatically) that span multiple (!) Google Cloud Projects will all be present in the one local kubeconfig.</p> <p>There's a one-to-one mapping though between Google Cloud Projects, Locations and Cluster Names and kubeconfig <code>cluster</code>'s:</p> <pre><code>gke_{PROJECT}_{LOCATION}_{CLUSTER-NAME} </code></pre> <p>Lastly, if one (or more developers) use multiple hosts to access Kubernetes clusters, each host will need to reflect the kubeconfig configuration (<code>server</code>, <code>user</code>, <code>context</code>) for each cluster that it needs to access.</p> <p>GKE does a decent job in helping you manage kubeconfig configurations. The complexity|confusion arises because it does some of this configuration implicitly (<code>gcloud container clusters create</code>) and it would be better to make this more transparent. If you use any managed Kubernetes offering (AWS, Azure, Linode, Vultr etc. 
etc.), these all provide some analog of this process, either manual or automatic, to help manage the entries in kubeconfig.</p>
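<p>A practical sketch for this specific case (the location of <code>questy-java-cluster</code> is an assumption; check the console's Location column):</p> <pre><code># Add/refresh credentials for the cluster you created in Cloud Console
gcloud container clusters get-credentials questy-java-cluster \
  --region=us-central1 \
  --project=quizdev
# use --zone=... instead if the cluster is zonal

# Tidy stale kubeconfig entries left behind by deleted clusters
kubectl config delete-context gke_quizdev_us-central1_autopilot-cluster-1
kubectl config delete-cluster gke_quizdev_us-central1_autopilot-cluster-1
kubectl config unset users.gke_quizdev_us-central1_autopilot-cluster-1
</code></pre>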
DazWilkin
<p>I need to get all resources based on a <strong>label</strong>. I used the following code, <strong>which works</strong>; however, it takes <strong>too much time</strong> (~20sec) to get the response, even when I restrict it to only one namespace (vrf). Any idea what I'm doing wrong here?</p> <pre><code>resource.NewBuilder(flags).
    Unstructured().
    ResourceTypes(res...).
    NamespaceParam(&quot;vrf&quot;).AllNamespaces(false).
    LabelSelectorParam(&quot;a=b&quot;).SelectAllParam(selector == &quot;&quot;).
    Flatten().
    Latest().Do().Object()
</code></pre> <p><a href="https://pkg.go.dev/k8s.io/[email protected]/pkg/resource#Builder" rel="nofollow noreferrer">https://pkg.go.dev/k8s.io/[email protected]/pkg/resource#Builder</a></p> <p>As I'm already using <code>label</code> and <code>ns</code>, I'm not sure what else I should do in this case.</p> <p>I've checked the cluster connection and it seems that everything is ok; regular <code>kubectl</code> commands get a <strong>very fast</strong> response, just this query takes much time.</p>
PJEM
<p>The search may be heavy due to the sheer size of the resources the query has to search through. Have you looked into this possibility and tried to further reduce the result set with one more label or filter on top of the current one?</p> <p>Also check the performance of your Kubernetes API server while the operation is being performed, and optimize it if needed.</p>
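<p>As an illustrative sketch of both ideas (the extra label is hypothetical, and <code>RequestChunksOf</code> paginates large list responses so they don't arrive in one huge call):</p> <pre><code>obj, err := resource.NewBuilder(flags).
    Unstructured().
    ResourceTypes(res...).                       // fewer resource types = fewer API calls
    NamespaceParam(&quot;vrf&quot;).AllNamespaces(false).
    LabelSelectorParam(&quot;a=b,team=payments&quot;).     // hypothetical extra label to shrink the result set
    RequestChunksOf(500).                        // paginate large list responses
    Flatten().
    Latest().
    Do().
    Object()
if err != nil {
    // handle errors from the aggregated list calls
}
</code></pre>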
Bijendra
<p>We've just bought a docker hub pro user so that we don't have to worry about pull rate limits.</p> <p>Now, I'm currently having a problem trying to set the docker hub pro user. Is there a way to set the credentials for hub.docker.com globally?</p> <p>In the kubernetes docs I found the following article: <a href="https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry" rel="nofollow noreferrer">Kubernetes | Configure nodes for private registry</a></p> <p>On every node I executed a docker login with the credentials, copied the config.json to /var/lib/kubelet and restarted kubelet. But I'm still getting an ErrImagePull because of those rate limits.</p> <p>I've copied the config.json to the following places:</p> <ul> <li>/var/lib/kubelet/config.json</li> <li>/var/lib/kubelet/.dockercfg</li> <li>/root/.docker/config.json</li> <li>/.docker/config.json</li> </ul> <p>There is an option to use a secret for authentication. The problem is that we would need to edit hundreds of statefulsets, deployments and daemonsets. So it would be great to set the docker user globally.</p> <p>Here's the config.json:</p> <pre><code>{
  &quot;auths&quot;: {
    &quot;https://index.docker.io/v1/&quot;: {
      &quot;auth&quot;: &quot;[redacted]&quot;
    }
  },
  &quot;HttpHeaders&quot;: {
    &quot;User-Agent&quot;: &quot;Docker-Client/19.03.13 (linux)&quot;
  }
}
</code></pre> <p>To check if it actually logs in with the user I've created an access token in my account. There I can see the last login with said token. The last login was when I executed the docker login command. So the images that I try to pull aren't using those credentials.</p> <p>Any ideas?</p> <p>Thank you!</p>
Cédric Voit
<p>Kubernetes implements this using <a href="https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod" rel="nofollow noreferrer">image pull secrets</a>. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">This doc does a better job at walking through the process</a>.</p> <p>Using the Docker config.json:</p> <pre><code>kubectl create secret generic regcred \ --from-file=.dockerconfigjson=&lt;path/to/.docker/config.json&gt; \ --type=kubernetes.io/dockerconfigjson </code></pre> <p>Or you can pass the settings directly:</p> <pre><code>kubectl create secret docker-registry &lt;name&gt; --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL </code></pre> <p>Then use those secrets in your pod definitions:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: foo namespace: awesomeapps spec: containers: - name: foo image: janedoe/awesomeapp:v1 imagePullSecrets: - name: myregistrykey </code></pre> <p>Or to use the secret at a user level (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account" rel="nofollow noreferrer">Add image pull secret to service account</a>)</p> <ol> <li><p><code>kubectl get serviceaccounts default -o yaml &gt; ./sa.yaml</code></p> </li> <li><p>open the sa.yaml file, delete line with key resourceVersion, add lines with imagePullSecrets: and save.</p> <pre><code>kind: ServiceAccount metadata: creationTimestamp: &quot;2020-11-22T21:41:53Z&quot; name: default namespace: default selfLink: /api/v1/namespaces/default/serviceaccounts/default uid: afad07eb-f58e-4012-9ccf-0ac9762981d5 secrets: - name: default-token-gkmp7 imagePullSecrets: - name: regcred </code></pre> </li> <li><p>Finally replace the serviceaccount with the new updated sa.yaml file <code>kubectl replace serviceaccount default -f ./sa.yaml</code></p> </li> </ol>
BMitch
<p>I have an express application and I use it in my Kubernetes cluster.</p> <p>This application is the auth service for my microservice architecture study.</p> <p>I use the Skaffold dev command to deploy this app to Kubernetes.</p> <p>My Dockerfile looks like this:</p> <pre><code>FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD [&quot;npm&quot;, &quot;start&quot;]
</code></pre> <p>And I run it with the &quot;Skaffold dev&quot; command.</p> <p>And I get an error like this:</p> <pre><code>...
...
Deployments stabilized in 3.752 seconds
Watching for changes...
[auth] npm notice
[auth] npm notice New patch version of npm available! 7.5.1 -&gt; 7.5.4
[auth] npm notice Changelog: &lt;https://github.com/npm/cli/releases/tag/v7.5.4&gt;
[auth] npm notice Run `npm install -g [email protected]` to update!
[auth] npm notice
[auth] npm ERR! path /app
[auth] npm ERR! command failed
[auth] npm ERR! signal SIGTERM
[auth] npm ERR! command sh -c node server.js
[auth]
[auth] npm ERR! A complete log of this run can be found in:
[auth] npm ERR! /root/.npm/_logs/2021-02-19T16_46_28_956Z-debug.log
</code></pre> <p>And my package.json file:</p> <pre><code>{
  &quot;name&quot;: &quot;authservice&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;description&quot;: &quot;Auth Service&quot;,
  &quot;main&quot;: &quot;index.js&quot;,
  &quot;scripts&quot;: {
    &quot;test&quot;: &quot;echo \&quot;Error: no test specified\&quot; &amp;&amp; exit 1&quot;,
    &quot;start&quot;: &quot;node server.js&quot;
  },
  &quot;keywords&quot;: [
    &quot;auth&quot;,
    &quot;user&quot;
  ],
  &quot;author&quot;: &quot;&quot;,
  &quot;license&quot;: &quot;ISC&quot;,
  &quot;dependencies&quot;: {
    &quot;bcrypt&quot;: &quot;^5.0.0&quot;,
    &quot;body-parser&quot;: &quot;^1.19.0&quot;,
    &quot;dotenv&quot;: &quot;^8.2.0&quot;,
    &quot;express&quot;: &quot;^4.17.1&quot;,
    &quot;express-rate-limit&quot;: &quot;^5.2.3&quot;,
    &quot;jsonwebtoken&quot;: &quot;^8.5.1&quot;,
    &quot;mongoose&quot;: &quot;5.10.19&quot;,
    &quot;morgan&quot;: &quot;^1.10.0&quot;,
    &quot;multer&quot;: &quot;^1.4.2&quot;
  },
  &quot;devDependencies&quot;: {
    &quot;nodemon&quot;: &quot;^2.0.6&quot;
  }
}
</code></pre> <p>How can I solve this error?</p>
akasaa
<p>I assume you're either incorrectly specifying your script in the <code>package.json</code> or your script is not <code>server.js</code>.</p> <p>A minimal repro of your question works:</p> <p>Using the Node.JS Getting Started guide's example with one minor tweak: <a href="https://nodejs.org/en/docs/guides/getting-started-guide/" rel="nofollow noreferrer">https://nodejs.org/en/docs/guides/getting-started-guide/</a></p> <blockquote> <p><strong>NOTE</strong> Change <code>const hostname = '127.0.0.1';</code> to <code>const hostname = '0.0.0.0';</code> This is necessary to access the containerized app from the host.</p> </blockquote> <p>Adding a package.json because you have one and to show <code>npm start</code>:</p> <p><code>package.json</code>:</p> <pre><code>{ &quot;name&quot;: &quot;66281738&quot;, &quot;version&quot;: &quot;0.0.1&quot;, &quot;scripts&quot;: { &quot;start&quot;: &quot;node app.js&quot; } } </code></pre> <blockquote> <p><strong>NOTE</strong> I believe <code>npm start</code> defaults to <code>&quot;start&quot;: &quot;node server.js&quot;</code></p> </blockquote> <p>Using your <code>Dockerfile</code> and:</p> <pre class="lang-sh prettyprint-override"><code>QUESTION=&quot;66281738&quot; docker build --tag=${QUESTION} --file=./Dockerfile . docker run --interactive --tty --publish=7777:3000 ${QUESTION} </code></pre> <p>Yields:</p> <pre><code>&gt; [email protected] start &gt; node app.js Server running at http://0.0.0.0:3000/ </code></pre> <blockquote> <p><strong>NOTE</strong> <code>docker run</code> binds the container's <code>:3000</code> port to the host's <code>:7777</code> just to show these need not be the same.</p> </blockquote> <p>Then:</p> <pre class="lang-sh prettyprint-override"><code>curl --request GET http://localhost:3000/ </code></pre> <p>Yields:</p> <pre><code>Hello World </code></pre>
DazWilkin
<p>I am trying to push my docker image to Google Cloud Registry but get an x509 error saying the certificate is signed by an unknown authority. This never used to be a problem and I can't seem to fix the issue. Any help is appreciated.</p> <p><strong>I'm running</strong> </p> <p><code>docker -- push gcp.io/project/registry</code></p> <p><strong>Error</strong> </p> <p><code>Get https://gcp.io/v2/: x509: certificate signed by unknown authority</code></p> <p>I'm on Mac OS.</p>
GrepThis
<p>Update: you have a typo, you need to go to <code>gcr.io</code>, not <code>gcp.io</code>.</p> <hr> <p>[ Original answer ]</p> <p>Looks like a certificate issue on gcp.io:</p> <pre><code>$ openssl s_client -showcerts -connect gcp.io:443 &lt;/dev/null CONNECTED(00000003) depth=0 OU = Domain Control Validated, OU = PositiveSSL Wildcard, CN = *.gcp.io verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 OU = Domain Control Validated, OU = PositiveSSL Wildcard, CN = *.gcp.io verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:OU = Domain Control Validated, OU = PositiveSSL Wildcard, CN = *.gcp.io i:C = GB, ST = Greater Manchester, L = Salford, O = Sectigo Limited, CN = Sectigo RSA Domain Validation Secure Server CA -----BEGIN CERTIFICATE----- MIIF6jCCBNKgAwIBAgIRAPLbl+CLddoCWmMcKSzPAT8wDQYJKoZIhvcNAQELBQAw gY8xCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO BgNVBAcTB1NhbGZvcmQxGDAWBgNVBAoTD1NlY3RpZ28gTGltaXRlZDE3MDUGA1UE AxMuU2VjdGlnbyBSU0EgRG9tYWluIFZhbGlkYXRpb24gU2VjdXJlIFNlcnZlciBD QTAeFw0xOTExMjUwMDAwMDBaFw0yMDExMjQyMzU5NTlaMFUxITAfBgNVBAsTGERv bWFpbiBDb250cm9sIFZhbGlkYXRlZDEdMBsGA1UECxMUUG9zaXRpdmVTU0wgV2ls ZGNhcmQxETAPBgNVBAMMCCouZ2NwLmlvMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A MIIBCgKCAQEAozq94VQqkxLR0qUqz6IM5/lY411MkLgrQhOR8Sg17EioEpudFKCV FhC9N2Z8EFLpaGAxABpYM5JLWy1PpszOEETFswaS0Y/CpCnzW/SXtlH2ZOlGBXII 3LKP7ScfCgbwnndg820cA0XDNc54MUZcx2ebe2MZfFHKNhm+Lpqr4UZZu4ZaE8C6 9tcJaMC/znIWUpf+61aUJIQTYITL+NVB3zCeDhK0r29aLbz4K33TqN+9PJtwyTiS 8PZFTg93R8RCzdJD6x1lg3u7tAHGi6S3Omn7y3YtivTsA3iYbYIBm9i+0EHgpTOA Hp9Z3wX2TF/M6FiY7yo1tc8ft6i3wICaUwIDAQABo4ICeDCCAnQwHwYDVR0jBBgw FoAUjYxexFStiuF36Zv5mwXhuAGNYeEwHQYDVR0OBBYEFH7PsGxNUFi1gLyOjwje oEXcFIIuMA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMB0GA1UdJQQWMBQG CCsGAQUFBwMBBggrBgEFBQcDAjBJBgNVHSAEQjBAMDQGCysGAQQBsjEBAgIHMCUw IwYIKwYBBQUHAgEWF2h0dHBzOi8vc2VjdGlnby5jb20vQ1BTMAgGBmeBDAECATCB hAYIKwYBBQUHAQEEeDB2ME8GCCsGAQUFBzAChkNodHRwOi8vY3J0LnNlY3RpZ28u Y29tL1NlY3RpZ29SU0FEb21haW5WYWxpZGF0aW9uU2VjdXJlU2VydmVyQ0EuY3J0 MCMGCCsGAQUFBzABhhdodHRwOi8vb2NzcC5zZWN0aWdvLmNvbTAbBgNVHREEFDAS gggqLmdjcC5pb4IGZ2NwLmlvMIIBBAYKKwYBBAHWeQIEAgSB9QSB8gDwAHUAB7dc G+V9aP/xsMYdIxXHuuZXfFeUt2ruvGE6GmnTohwAAAFuouNwEgAABAMARjBEAiAB bpCCsd9bTM4mJMAEVf9WL4Mu3z+EaezOfJ+1N5MzEAIgHcuRkKk/tukyDAz0gZtu z1K87zVaw96FUdFbLQnZw0YAdwBep3P531bA57U2SH3QSeAyepGaDIShEhKEGHWW gXFFWAAAAW6i42//AAAEAwBIMEYCIQCL81nEy3BBlmVR5ehK+LgAvWUxlwWUoTtH +TLgft+usgIhALCoeeBaEkcMTPIU+fmQQ6FTp7tMvzN726bHJ/ODJZmEMA0GCSqG SIb3DQEBCwUAA4IBAQBBnaYdC4OjT0rjlVYRR5lqiRsHTgQiReJXVwXtYO6czPYU np1szzpF0xto3lTImJNzyyWl8Zt+4H/ABrOE3aKlnpQVZ/nBPqx8cLI/O8kEl6o4 rQxCXfVum3LTHqO0EtFSQfC3ALS137afCKUGa/e4PlFNTMqStP/anhv6byK+0bwh jiqd9xuhjLmttf6zDelcmZPAZFuSL34khKnILPiXBsbiKFULiY1yEdpc4IpNLvZD ys46g64+ss0sIqYR3vDPdoQmY3SUutxL7m2fwElGKGJIMFvkJ4+TUvNqAIsyQuEt sIp/puDi8aEFhExywY1zrAeUuj4CJrCsHKZ25IIg -----END CERTIFICATE----- 1 s:C = US, ST = New Jersey, L = Jersey City, O = The USERTRUST Network, CN = USERTrust RSA Certification Authority i:C = SE, O = AddTrust AB, OU = AddTrust External TTP Network, CN = AddTrust External CA Root -----BEGIN CERTIFICATE----- MIIFdzCCBF+gAwIBAgIQE+oocFv07O0MNmMJgGFDNjANBgkqhkiG9w0BAQwFADBv MQswCQYDVQQGEwJTRTEUMBIGA1UEChMLQWRkVHJ1c3QgQUIxJjAkBgNVBAsTHUFk ZFRydXN0IEV4dGVybmFsIFRUUCBOZXR3b3JrMSIwIAYDVQQDExlBZGRUcnVzdCBF eHRlcm5hbCBDQSBSb290MB4XDTAwMDUzMDEwNDgzOFoXDTIwMDUzMDEwNDgzOFow gYgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpOZXcgSmVyc2V5MRQwEgYDVQQHEwtK ZXJzZXkgQ2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMS4wLAYD 
VQQDEyVVU0VSVHJ1c3QgUlNBIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIICIjAN BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAgBJlFzYOw9sIs9CsVw127c0n00yt UINh4qogTQktZAnczomfzD2p7PbPwdzx07HWezcoEStH2jnGvDoZtF+mvX2do2NC tnbyqTsrkfjib9DsFiCQCT7i6HTJGLSR1GJk23+jBvGIGGqQIjy8/hPwhxR79uQf jtTkUcYRZ0YIUcuGFFQ/vDP+fmyc/xadGL1RjjWmp2bIcmfbIWax1Jt4A8BQOujM 8Ny8nkz+rwWWNR9XWrf/zvk9tyy29lTdyOcSOk2uTIq3XJq0tyA9yn8iNK5+O2hm AUTnAU5GU5szYPeUvlM3kHND8zLDU+/bqv50TmnHa4xgk97Exwzf4TKuzJM7UXiV Z4vuPVb+DNBpDxsP8yUmazNt925H+nND5X4OpWaxKXwyhGNVicQNwZNUMBkTrNN9 N6frXTpsNVzbQdcS2qlJC9/YgIoJk2KOtWbPJYjNhLixP6Q5D9kCnusSTJV882sF qV4Wg8y4Z+LoE53MW4LTTLPtW//e5XOsIzstAL81VXQJSdhJWBp/kjbmUZIO8yZ9 HE0XvMnsQybQv0FfQKlERPSZ51eHnlAfV1SoPv10Yy+xUGUJ5lhCLkMaTLTwJUdZ +gQek9QmRkpQgbLevni3/GcV4clXhB4PY9bpYrrWX1Uu6lzGKAgEJTm4Diup8kyX HAc/DVL17e8vgg8CAwEAAaOB9DCB8TAfBgNVHSMEGDAWgBStvZh6NLQm9/rEJlTv A73gJMtUGjAdBgNVHQ4EFgQUU3m/WqorSs9UgOHYm8Cd8rIDZsswDgYDVR0PAQH/ BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wEQYDVR0gBAowCDAGBgRVHSAAMEQGA1Ud HwQ9MDswOaA3oDWGM2h0dHA6Ly9jcmwudXNlcnRydXN0LmNvbS9BZGRUcnVzdEV4 dGVybmFsQ0FSb290LmNybDA1BggrBgEFBQcBAQQpMCcwJQYIKwYBBQUHMAGGGWh0 dHA6Ly9vY3NwLnVzZXJ0cnVzdC5jb20wDQYJKoZIhvcNAQEMBQADggEBAJNl9jeD lQ9ew4IcH9Z35zyKwKoJ8OkLJvHgwmp1ocd5yblSYMgpEg7wrQPWCcR23+WmgZWn RtqCV6mVksW2jwMibDN3wXsyF24HzloUQToFJBv2FAY7qCUkDrvMKnXduXBBP3zQ YzYhBx9G/2CkkeFnvN4ffhkUyWNnkepnB2u0j4vAbkN9w6GAbLIevFOFfdyQoaS8 Le9Gclc1Bb+7RrtubTeZtv8jkpHGbkD4jylW6l/VXxRTrPBPYer3IsynVgviuDQf Jtl7GQVoP7o81DgGotPmjw7jtHFtQELFhLRAlSv0ZaBIefYdgWOWnU914Ph85I6p 0fKtirOMxyHNwu8= -----END CERTIFICATE----- --- Server certificate subject=OU = Domain Control Validated, OU = PositiveSSL Wildcard, CN = *.gcp.io issuer=C = GB, ST = Greater Manchester, L = Salford, O = Sectigo Limited, CN = Sectigo RSA Domain Validation Secure Server CA --- No client certificate CA names sent Peer signing digest: SHA512 Peer signature type: RSA Server Temp Key: ECDH, P-256, 256 bits --- SSL handshake has read 3435 bytes and written 424 bytes Verification error: unable to verify the first certificate --- New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES128-GCM-SHA256 Session-ID: A1FB2B7B405094705F0DAFCAABA63B4E1ABDE5C122F2F3E5A7DE88ECB75AB617 Session-ID-ctx: Master-Key: A0FB112FC9A33BD96E2346627A4E99A03F5C8AA404B19215EA3226A487B034E17EAC38AE0BD79C6B51E882BDC0DECE90 PSK identity: None PSK identity hint: None SRP username: None Start Time: 1588527367 Timeout : 7200 (sec) Verify return code: 21 (unable to verify the first certificate) Extended master secret: no --- DONE </code></pre> <p>And I see a similar error from <code>curl -v https://gcp.io/v2/</code>. This will need to be resolved by Google.</p>
BMitch
<p>I am trying to use VSCode Cloud Studio plugin to deploy and debug a project in Kubernetes. When I use intellij and Cloud Studio plugin there, everything works perfect. My MongoDB is persistent with each deployment. When I use VSCode and Cloud Studio there, MongoDB is not persistent anymore. I would appreciate any tips to make it work in VSCode too.</p> <p>When I deploy via intellij it uses the same persistent volume claim. When I deploy via VSCode it creates a new persistent volume claim everytime.</p> <p>Here is the launch.json for VSCode:</p> <pre><code> { &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Kubernetes: Run/Debug&quot;, &quot;type&quot;: &quot;cloudcode.kubernetes&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;skaffoldConfig&quot;: &quot;${workspaceFolder}\\skaffold.yaml&quot;, &quot;watch&quot;: false, &quot;cleanUp&quot;: false, &quot;portForward&quot;: true, &quot;imageRegistry&quot;: &quot;XYZ&quot;, &quot;debug&quot;: [ { &quot;image&quot;: &quot;XYZ&quot;, &quot;containerName&quot;: &quot;XYZ&quot;, &quot;sourceFileMap&quot;: { &quot;${workspaceFolder}&quot;: &quot;/root/&quot; } } ] } ] } </code></pre> <p>Here is the workspace.xml from intellij:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;project version=&quot;4&quot;&gt; &lt;component name=&quot;ChangeListManager&quot;&gt; &lt;list default=&quot;true&quot; id=&quot;b5a077d4-323a-4042-8c4a-3bdd2d997e47&quot; name=&quot;Changes&quot; comment=&quot;&quot; /&gt; &lt;option name=&quot;SHOW_DIALOG&quot; value=&quot;false&quot; /&gt; &lt;option name=&quot;HIGHLIGHT_CONFLICTS&quot; value=&quot;true&quot; /&gt; &lt;option name=&quot;HIGHLIGHT_NON_ACTIVE_CHANGELIST&quot; value=&quot;false&quot; /&gt; &lt;option name=&quot;LAST_RESOLUTION&quot; value=&quot;IGNORE&quot; /&gt; &lt;/component&gt; &lt;component name=&quot;Git.Settings&quot;&gt; &lt;option name=&quot;RECENT_GIT_ROOT_PATH&quot; value=&quot;$PROJECT_DIR$&quot; /&gt; &lt;/component&gt; &lt;component name=&quot;MarkdownSettingsMigration&quot;&gt; &lt;option name=&quot;stateVersion&quot; value=&quot;1&quot; /&gt; &lt;/component&gt; &lt;component name=&quot;ProjectId&quot; id=&quot;2KV2OUqPUEf43q5Aj0UCGkKKm10&quot; /&gt; &lt;component name=&quot;ProjectViewState&quot;&gt; &lt;option name=&quot;hideEmptyMiddlePackages&quot; value=&quot;true&quot; /&gt; &lt;option name=&quot;showLibraryContents&quot; value=&quot;true&quot; /&gt; &lt;/component&gt; &lt;component name=&quot;PropertiesComponent&quot;&gt; &lt;property name=&quot;RunOnceActivity.OpenProjectViewOnStart&quot; value=&quot;true&quot; /&gt; &lt;property name=&quot;RunOnceActivity.ShowReadmeOnStart&quot; value=&quot;true&quot; /&gt; &lt;property name=&quot;WebServerToolWindowFactoryState&quot; value=&quot;false&quot; /&gt; &lt;property name=&quot;com.google.cloudcode.ide_session_index&quot; value=&quot;20230118_0001&quot; /&gt; &lt;property name=&quot;last_opened_file_path&quot; value=&quot;$PROJECT_DIR$&quot; /&gt; &lt;property name=&quot;nodejs_package_manager_path&quot; value=&quot;npm&quot; /&gt; &lt;property name=&quot;settings.editor.selected.configurable&quot; value=&quot;preferences.pluginManager&quot; /&gt; &lt;property name=&quot;ts.external.directory.path&quot; value=&quot;C:\Program Files\JetBrains\IntelliJ IDEA 2021.3.2\plugins\JavaScriptLanguage\jsLanguageServicesImpl\external&quot; /&gt; &lt;/component&gt; &lt;component name=&quot;RunDashboard&quot;&gt; &lt;option name=&quot;excludedTypes&quot;&gt; &lt;set&gt; &lt;option 
value=&quot;gcp-app-engine-local-run&quot; /&gt; &lt;/set&gt; &lt;/option&gt; &lt;/component&gt; &lt;component name=&quot;RunManager&quot;&gt; &lt;configuration name=&quot;Develop on Kubernetes&quot; type=&quot;google-container-tools-skaffold-run-config&quot; factoryName=&quot;google-container-tools-skaffold-run-config-dev&quot; show_console_on_std_err=&quot;false&quot; show_console_on_std_out=&quot;false&quot;&gt; &lt;option name=&quot;allowRunningInParallel&quot; value=&quot;false&quot; /&gt; &lt;option name=&quot;buildEnvironment&quot; value=&quot;Local&quot; /&gt; &lt;option name=&quot;cleanupDeployments&quot; value=&quot;false&quot; /&gt; &lt;option name=&quot;deployToCurrentContext&quot; value=&quot;true&quot; /&gt; &lt;option name=&quot;deployToMinikube&quot; value=&quot;false&quot; /&gt; &lt;option name=&quot;envVariables&quot; /&gt; &lt;option name=&quot;imageRepositoryOverride&quot; /&gt; &lt;option name=&quot;kubernetesContext&quot; /&gt; &lt;option name=&quot;mappings&quot;&gt; &lt;list /&gt; &lt;/option&gt; &lt;option name=&quot;moduleDeploymentType&quot; value=&quot;DEPLOY_MODULE_SUBSET&quot; /&gt; &lt;option name=&quot;projectPathOnTarget&quot; /&gt; &lt;option name=&quot;resourceDeletionTimeoutMins&quot; value=&quot;2&quot; /&gt; &lt;option name=&quot;selectedOptions&quot;&gt; &lt;list /&gt; &lt;/option&gt; &lt;option name=&quot;skaffoldConfigurationFilePath&quot; value=&quot;$PROJECT_DIR$/skaffold.yaml&quot; /&gt; &lt;option name=&quot;skaffoldModules&quot;&gt; &lt;list&gt; &lt;option value=&quot;XYZ&quot; /&gt; &lt;/list&gt; &lt;/option&gt; &lt;option name=&quot;skaffoldNamespace&quot; /&gt; &lt;option name=&quot;skaffoldProfile&quot; /&gt; &lt;option name=&quot;skaffoldWatchMode&quot; value=&quot;ON_DEMAND&quot; /&gt; &lt;option name=&quot;statusCheck&quot; value=&quot;true&quot; /&gt; &lt;option name=&quot;verbosity&quot; value=&quot;WARN&quot; /&gt; &lt;method v=&quot;2&quot; /&gt; &lt;/configuration&gt; &lt;/component&gt; &lt;component name=&quot;SpellCheckerSettings&quot; RuntimeDictionaries=&quot;0&quot; Folders=&quot;0&quot; CustomDictionaries=&quot;0&quot; DefaultDictionary=&quot;application-level&quot; UseSingleDictionary=&quot;true&quot; transferred=&quot;true&quot; /&gt; &lt;component name=&quot;TaskManager&quot;&gt; &lt;task active=&quot;true&quot; id=&quot;Default&quot; summary=&quot;Default task&quot;&gt; &lt;changelist id=&quot;b5a077d4-323a-4042-8c4a-3bdd2d997e47&quot; name=&quot;Changes&quot; comment=&quot;&quot; /&gt; &lt;created&gt;1674045398429&lt;/created&gt; &lt;option name=&quot;number&quot; value=&quot;Default&quot; /&gt; &lt;option name=&quot;presentableId&quot; value=&quot;Default&quot; /&gt; &lt;updated&gt;1674045398429&lt;/updated&gt; &lt;workItem from=&quot;1674045401219&quot; duration=&quot;2543000&quot; /&gt; &lt;/task&gt; &lt;servers /&gt; &lt;/component&gt; &lt;component name=&quot;TypeScriptGeneratedFilesManager&quot;&gt; &lt;option name=&quot;version&quot; value=&quot;3&quot; /&gt; &lt;/component&gt; &lt;/project&gt; </code></pre> <p>All other files are the same for the project of course.</p>
MilesDyson
<p>It looks like the Cloud Code for IntelliJ configuration restricts the deployment to the XYZ module, whereas the Cloud Code for VS Code configuration does not.</p>
Brian de Alwis
<p>I have a microservice application in one repo that communicates with another service that's managed by another repo. This is not an issue when deploying to the cloud; however, when developing locally, the other service needs to be deployed too.</p> <p>I've read this documentation: <a href="https://skaffold.dev/docs/design/config/#remote-config-dependency" rel="nofollow noreferrer">https://skaffold.dev/docs/design/config/#remote-config-dependency</a> and this seems like a clean solution, but I only want it to depend on the git skaffold config if deploying locally (i.e. the current context is &quot;minikube&quot;).</p> <p>Is there a way to do this?</p>
WIlliam
<p>Profiles can be <a href="https://skaffold.dev/docs/environment/profiles/#activation" rel="nofollow noreferrer">automatically activated</a> based on criteria such as environment variables, kube-context names, and the Skaffold command being run.</p> <p>Profiles are processed after resolving the config dependencies though. But you could have your remote config include a profile that is contingent on a <code>kubeContext: minikube</code>.</p> <p>Another alternative is to have several <code>skaffold.yaml</code>s: one for prod, one for dev.</p>
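<p>A minimal sketch of such a context-activated profile (the manifest path is hypothetical):</p> <pre><code>apiVersion: skaffold/v2beta9
kind: Config
profiles:
  - name: local
    activation:
      - kubeContext: minikube   # only active when the current context is minikube
    deploy:
      kubectl:
        manifests:
          - k8s/local-only.yaml # e.g. the dependency's resources for local dev
</code></pre>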
Brian de Alwis
<p>When I try to create a pod in kubernetes with my image from my Harbor registry, I get an ErrImagePull error, which looks like this:</p> <pre><code>Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  10s   default-scheduler  Successfully assigned test/test-pod to ubuntu-s-2vcpu-2gb-ams3-01-slave01
  Normal   Pulling    9s    kubelet            Pulling image &quot;my.harbor.com/test/nginx:1.18.0&quot;
  Warning  Failed     9s    kubelet            Failed to pull image &quot;my.harbor.com/test/nginx:1.18.0&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;my.harbor.com/test/nginx:1.18.0&quot;: failed to resolve reference &quot;my.harbor.com/test/nginx:1.18.0&quot;: failed to do request: Head https://my.harbor.com/v2/test/nginx/manifests/1.18.0: x509: certificate signed by unknown authority
  Warning  Failed     9s    kubelet            Error: ErrImagePull
  Normal   BackOff    8s    kubelet            Back-off pulling image &quot;my.harbor.com/test/nginx:1.18.0&quot;
  Warning  Failed     8s    kubelet            Error: ImagePullBackOff
</code></pre> <p>I think the crucial problem is the <code>x509: certificate signed by unknown authority</code> message, but I really don't know what's wrong, since I <strong>copied my CA to both the kubernetes master node and slave node</strong>, and <strong>they can both log in to harbor</strong> and run <code>docker pull my.harbor.com/test/nginx:1.18.0</code> to pull the image successfully.</p> <p>This has been bothering me for days; any reply would be appreciated.</p>
karasart
<blockquote> <p>I copied the ca.crt to /etc/docker/certs.d/my.harbor.com/</p> </blockquote> <p>This will make it work for the docker engine, which you've shown.</p> <blockquote> <p>along with my.harbor.cert and my.harbor.com.key</p> </blockquote> <p>I'd consider that a security violation and no longer trust the secret key for your harbor host. The private key should never need to be copied off of the host.</p> <blockquote> <p>and I also copied the ca.crt to /usr/local/share/ca-certificates/ and run command update-ca-certificates to update.</p> </blockquote> <p>That's the step that should have resolved this.</p> <p>You can verify that you loaded the certificate with:</p> <pre class="lang-bash prettyprint-override"><code>openssl s_client -connect my.harbor.com:443 -showcerts &lt;/dev/null </code></pre> <p>If the output for that doesn't include a message like <code>Verification: OK</code>, then you didn't configure the host certificates correctly and need to double check the steps for your Linux distribution. It's important to check this on each of your nodes. If you only update the manager and pull your images from a worker, that worker will still encounter TLS errors.</p> <p>If <code>openssl</code> shows a successful verification, then check your Kubernetes node. Depending on the CRI, it could be caching old certificate data and need to be restarted to detect the change on the host.</p> <blockquote> <p>As for CRI, I don't know what is it</p> </blockquote> <p>Container Runtime Interface, part of your Kubernetes install. By default, this is <code>containerd</code> on many Kubernetes distributions. <code>containerd</code> and other CRI's (except for <code>docker-shim</code>) will not look at the docker configuration.</p>
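<p>As a sketch, on a Debian/Ubuntu node the host trust update plus CRI restart would be roughly (assuming containerd as the CRI; adjust for your runtime):</p> <pre><code>sudo cp ca.crt /usr/local/share/ca-certificates/my.harbor.com.crt
sudo update-ca-certificates
sudo systemctl restart containerd
</code></pre>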
BMitch
<p>I am using skaffold to run my typescript application with helm in Kubernetes. Below is my skaffold build configuration.</p> <pre><code>apiVersion: skaffold/v2beta8
kind: Config
build:
  local:
    push: false
  tagPolicy:
    gitCommit:
      variant: CommitSha
      prefix: commit-
  artifacts:
    - image: my-app
      sync:
        infer:
          - '**/**/*.ts'
          - '**/**/*.json'
</code></pre> <p>As per this, whenever I start the application, skaffold syncs my ts and JSON files on update, and for other files it rebuilds the app. I have a 'build' folder in my root structure, which I have mounted on the Kubernetes pod, so whenever the app builds I get the latest build output locally, which helps to debug the application. But because of this the application rebuilds continuously, as skaffold detects the changes in the build folder. So, how can I ignore a folder/file for the skaffold watch? I tried to use buildpacks.dependencies but it doesn't work (it gives an error for the builder image definition). Can anyone help me, please?</p> <p>Thanks.</p>
aryan
<p>In your example, you're using Skaffold's <code>docker</code> builder. Skaffold's file watcher respects the values in the <code>.dockerignore</code> file.</p>
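<p>So one sketch of a fix, assuming the generated output lands in <code>build/</code>, is to list it in the <code>.dockerignore</code> next to your Dockerfile:</p> <pre><code># .dockerignore (also consulted by Skaffold's file watcher)
build/
node_modules/
*.log
</code></pre>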
Brian de Alwis
<p>How do I add a service_account.json file to kubernetes secrets? I tried</p> <p><code>kubectl create secret generic service_account.json -n sample --from-file=service_account=service_account.json</code></p> <p>but it returns an error: <code>failed to create secret Secret &quot;service_account.json&quot; is invalid: metadata.name</code></p>
Tony Stark
<p>You can't use <code>service_account.json</code> as the (metadata) name for a Kubernetes resource. Here's the documentation on permitted <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/names/" rel="nofollow noreferrer">Object Names and IDs</a></p> <p>You can use:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create secret generic foo \ --namespace=sample \ --from-file=key.json=service_account.json </code></pre> <blockquote> <p><strong>NOTE</strong> The secret is called <code>foo</code> and it creates a key called <code>key.json</code> whose value is the content of the file called <code>service_account.json</code>.</p> </blockquote> <blockquote> <p><strong>NOTE</strong> If you don't wish to rename the object name in the secret, you can omit it; I renamed the file <code>service_account.json</code> to <code>key.json</code> in the secret. To retain the original name, just use <code>--from-file=service_account.json</code>.</p> </blockquote> <p>You should then able to volume mount the secret in the Container where you need to use it:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: labels: control-plane: controller-manager name: system --- apiVersion: apps/v1 kind: Deployment metadata: {} spec: template: spec: containers: - name: my-container volumeMounts: - name: bar mountPath: /secrets volumes: - name: bar secret: secretName: foo </code></pre> <blockquote> <p><strong>NOTE</strong> The container can access the <code>foo</code> secret's content as <code>/secrets/key.json</code>.</p> </blockquote> <p>Intentionally distinct names <code>foo</code>, <code>bar</code> etc. for clarity</p>
DazWilkin
<p>I wanted to give a try to GCP's Anthos On-Premise GKE offering. </p> <p>For sake of my demo I setup a Kubernetes cluster in GCP itself using Google Compute Engine following instructions from (<a href="https://kubernetes.io/docs/setup/production-environment/turnkey/gce/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/turnkey/gce/</a>)</p> <p><a href="https://i.stack.imgur.com/9LHg5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9LHg5.png" alt="enter image description here"></a></p> <p>After this I followed Anthos documentation to register my cluster to Anthos. I was able to register the cluster and Login into it using both Token based and Basic authentication based mechanisms.</p> <p><a href="https://i.stack.imgur.com/jLARI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jLARI.png" alt="enter image description here"></a></p> <p>Now when I try to deploy anything from GCP console, I get following error</p> <p><a href="https://i.stack.imgur.com/pRMA7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pRMA7.png" alt="enter image description here"></a></p> <p>But the deployment succeeds, I can see deployment and associated pods in Running state on my cluster.</p> <p><a href="https://i.stack.imgur.com/mIjZW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mIjZW.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/3Y38L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Y38L.png" alt="enter image description here"></a></p> <p>Also when I try to deploy using Marketplace I get following error.</p> <p><a href="https://i.stack.imgur.com/NIkdW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NIkdW.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/d3Dt6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d3Dt6.png" alt="enter image description here"></a></p> <p>I wish to know if it is a bug in Anthos or my cluster has some missing configurations ? </p>
kaysush
<p>You're not running Anthos GKE On-Prem, you're running open-source Kubernetes on Google Cloud. Things designed for Anthos - the marketplace and connecting clusters to Cloud Console - are <strong>not supposed to work</strong> in your setup. The fact that they mostly work despite that is an accident (and a testament to the portability and compatibility of Kubernetes).</p> <p>To get Cloud Console integration and use the marketplace, you need to use either <a href="https://cloud.google.com/anthos/gke/" rel="nofollow noreferrer">Anthos GKE On-Prem</a> that runs on VMWare or regular <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">GKE</a>.</p>
Shnatsel
<p>I have built some docker images and pushed them to my dockerhub repo. That means that these docker images are also available on my local computer. Here is an example of a public docker image in my repo: <a href="https://hub.docker.com/repository/docker/vikash112/pathology/general" rel="nofollow noreferrer">https://hub.docker.com/repository/docker/vikash112/pathology/general</a></p> <p>I wrote a yaml file for a pod that uses this image and I keep getting the following error:</p> <pre><code>  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  7m39s                default-scheduler  Successfully assigned default/classification-pod to digi1036734
  Warning  Failed     30s                  kubelet            Failed to pull image &quot;vikash112/pathology:0.1.0&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/vikash112/pathology:0.1.0&quot;: failed to copy: httpReadSeeker: failed open: server message: invalid_token: authorization failed
  Warning  Failed     30s                  kubelet            Error: ErrImagePull
  Normal   BackOff    30s                  kubelet            Back-off pulling image &quot;vikash112/pathology:0.1.0&quot;
  Warning  Failed     30s                  kubelet            Error: ImagePullBackOff
  Normal   Pulling    16s (x2 over 7m38s)  kubelet            Pulling image &quot;vikash112/pathology:0.1.0&quot;
</code></pre> <p>Now, the interesting thing is that:</p> <p>Because the image is available locally, k8s should not try to download the image. So, in order to force k8s not to download the image, I used imagePullPolicy: Never and I get the following error, even though the image is clearly present, as shown:</p> <pre><code>Events:
  Type     Reason             Age               From               Message
  ----     ------             ----              ----               -------
  Normal   Scheduled          83s               default-scheduler  Successfully assigned default/classification-pod to digi1036734
  Warning  ErrImageNeverPull  5s (x9 over 82s)  kubelet            Container image &quot;vikash112/kftools:0.0.1&quot; is not present with pull policy of Never
  Warning  Failed             5s (x9 over 82s)  kubelet            Error: ErrImageNeverPull

(base) gupta@DIGI1036734:~/disk/Tools/kfp_pipeline_7/app/CAII/cai2package$ docker images | grep kftools
vikash112/kftools    0.0.2    90e48f4cee7b   4 months ago   13.4GB
vikash112/kftools    0.0.1    1f00fe6b3e82   4 months ago   13.4GB
</code></pre> <p>So, another thing I tried was to create a secret in k8s for my docker login, following the instructions here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p> <p>However, I am able to pull public repositories like mongodb, busybox etc. So, I don't know why my own public repositories have any problems while pulling.</p> <p>How can we reproduce it (as minimally and precisely as possible)? You can try it using the following yaml script:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: classification-pod
spec:
  restartPolicy: OnFailure
  containers:
  - name: caiiapp
    image: vikash112/hellok8s:v1
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        nvidia.com/gpu: 1
  nodeSelector:
    kubernetes.io/hostname: digi1036734
</code></pre> <p>Let's say you save the file as myscript.yaml. You can execute it as</p> <pre><code>kubectl apply -f myscript.yaml
</code></pre> <p>I don't expect k8s to pull any images if they are available locally. If it does try to pull, it should pull my images from docker hub just as it pulls other images like mongodb, busybox etc.</p>
screamingmamba
<blockquote> <p>Because the image is available locally, k8s should not try to download the image</p> </blockquote> <p>Docker and Kubernetes do not share the same image store (with one small exception of Docker Desktop running Kubernetes from there). Therefore you need to push the images to a registry and pull them from there.</p> <p>The image you have listed is 68 layers totaling over 7GB compressed. I would suggest trying to cut this way down to reduce the network and disk load on your system.</p> <blockquote> <p>invalid_token: authorization failed</p> </blockquote> <p>This error indicates that any authentication credentials you have provided are not correct. Double check that the credentials you have entered into kubernetes show the correct base64 encoded user/password (docker desktop does not place credentials in the config file by default). If you have 2fa enabled in Docker Hub, make sure you are using an access token rather than your password.</p>
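<p>A sketch of both checks (the secret name <code>regcred</code> and the email are placeholders):</p> <pre><code># Inspect what the existing pull secret actually contains
kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d

# Recreate it with a Docker Hub access token rather than the account password
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=vikash112 \
  --docker-password=&lt;ACCESS_TOKEN&gt; \
  --docker-email=&lt;EMAIL&gt;
</code></pre>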
BMitch
<p>I'm trying to use Skaffold, Dekorate and Spring Boot.</p> <p>I can't find any examples using the new buildpack feature of Spring Boot 2.3+</p> <pre><code>apiVersion: skaffold/v2beta9 kind: Config metadata: name: tellus-upgrade build: artifacts: - image: tellus-admin custom: buildCommand: ./mvnw -pl tellus-admin org.springframework.boot:spring-boot-maven-plugin:2.4.0:build-image -Dspring-boot.build-image.imageName=$IMAGE -Drevision=dev-SNAPSHOT -DskipTests=true dependencies: paths: - tellus-admin/src - tellus-admin/pom.xml - image: tellus-config-server custom: buildCommand: ./mvnw -pl tellus-config-server org.springframework.boot:spring-boot-maven-plugin:2.4.0:build-image -Dspring-boot.build-image.imageName=$IMAGE -Drevision=dev-SNAPSHOT -DskipTests=true dependencies: paths: - tellus-config-server/src - tellus-config-server/pom.xml deploy: kubectl: manifests: - kubernetes/defaults.yml - kubernetes/db/kubernetes.yml - kubernetes/dev/dnsutils.yml - kubernetes/kafka-connect/kubernetes.yml - tellus-admin/target/classes/META-INF/dekorate/kubernetes.yml - tellus-config-server/target/classes/META-INF/dekorate/kubernetes.yml </code></pre> <p>When I run skaffold dev I get the error: exiting dev mode because first build failed: the custom script didn't produce an image with tag [tellus-config-server:RELEASE_2020_2_0-226-g9be76a373-dirty]</p> <p>However from the logs it looks like the image was built...</p> <pre><code>[INFO] Successfully built image 'docker.io/library/tellus-config-server:RELEASE_2020_2_0-226-g9be76a373-dirty' [INFO] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 17.004 s [INFO] Finished at: 2020-11-15T22:31:59+11:00 [INFO] ------------------------------------------------------------------------ Building [tellus-admin]... exiting dev mode because first build failed: the custom script didn't produce an image with tag [tellus-config-server:RELEASE_2020_2_0-226-g9be76a373-dirty] </code></pre>
Rod McCutcheon
<p>The <code>spring-boot-maven-plugin:build-image</code> loads the image into your local Docker daemon, but does not push the image. I've never tried it, but you might be able to use the <code>com.spotify:dockerfile-maven-plugin:push</code> goal.</p> <p><em>Update</em>: here's a Skaffold custom build script that should do the right thing:</p> <pre><code>#!/bin/sh set -e cd &quot;$BUILD_CONTEXT&quot; mvn -pl &quot;$1&quot; -Drevision=dev-SNAPSHOT -DskipTests=true \ org.springframework.boot:spring-boot-maven-plugin:build-image \ -Dspring-boot.build-image.imageName=&quot;$IMAGE&quot; if [ &quot;$PUSH_IMAGE&quot; = true ]; then docker push &quot;$IMAGE&quot; fi </code></pre> <p>You could save that to a file <code>mvn-build-image.sh</code> and then modify your skaffold.yaml like:</p> <pre><code>artifacts: - image: tellus-admin custom: buildCommand: ./mvn-build-image.sh tellus-admin </code></pre> <hr /> <p>You might want to look at the <a href="https://skaffold.dev/docs/pipeline-stages/builders/jib/" rel="nofollow noreferrer">Skaffold's Jib integration</a> to simplify this process.</p>
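<p>One small thing to double-check (an assumption about your setup on my part): since the <code>buildCommand</code> invokes the script directly as <code>./mvn-build-image.sh</code>, the file needs its execute bit set, otherwise the build fails with a permission error:</p> <pre><code>chmod +x mvn-build-image.sh
</code></pre>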
Brian de Alwis
<pre><code>KUBECONFIG=&quot;$(find ~/.kube/configs/ -type f -exec printf '%s:' '{}' +)&quot; </code></pre> <p>This will construct a config file path for the environment var. I can see the contexts of my clusters and I can switch them. However when I want to get my nodes I get</p> <blockquote> <p>error: You must be logged in to the server (Unauthorized)</p> </blockquote> <p>How to solve, any ideas?</p>
Stephan Kristyn
<p>I suspect you either don't have a <code>current-context</code> set or your <code>current-context</code> points to a non-functioning cluster.</p> <p>If set (or exported) <code>KUBECONFIG</code> can reference a set of config files.</p> <p>The files' content will be merged. I think this is what you're attempting.</p> <p>But then, that variable must be exported for <code>kubectl</code> to use.</p> <p>Either:</p> <pre class="lang-sh prettyprint-override"><code>export KUBECONFIG=... kubectl ... </code></pre> <p>Or:</p> <pre class="lang-sh prettyprint-override"><code>KUBECONFIG=... kubectl ... </code></pre> <p>Then, you can:</p> <pre class="lang-sh prettyprint-override"><code># List contexts by NAME kubectl config get-contexts # Use one of them by NAME kubectl config use-context ${NAME} </code></pre>
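<p>A few commands that may help pin down which credentials are actually in effect (assuming the same <code>~/.kube/configs/</code> layout as in your question):</p> <pre><code>export KUBECONFIG="$(find ~/.kube/configs/ -type f -exec printf '%s:' '{}' +)"

# which context is active, and what does it resolve to?
kubectl config current-context
kubectl config view --minify     # only the active context's cluster/user
kubectl config view --flatten    # the fully merged result

# do the active credentials actually authenticate/authorize?
kubectl auth can-i list nodes
</code></pre> <p>If <code>kubectl auth can-i</code> also reports an authorization error, the problem is in the user/credential entry of the active context rather than in the merging itself.</p>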
DazWilkin
<p>I have a SpringBoot project with graceful shutdown configured. Deployed on k8s <code>1.12.7</code> Here are the logs,</p> <pre><code>2019-07-20 10:23:16.180 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Received shutdown event 2019-07-20 10:23:16.180 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Waiting for 30s to finish 2019-07-20 10:23:16.273 INFO [service,fd964ebaa631a860,75a07c123397e4ff,false] 1 --- [io-8080-exec-10] com.jay.resource.ProductResource : GET /products?id=59 2019-07-20 10:23:16.374 INFO [service,9a569ecd8c448e98,00bc11ef2776d7fb,false] 1 --- [nio-8080-exec-1] com.jay.resource.ProductResource : GET /products?id=68 ... 2019-07-20 10:23:33.711 INFO [service,1532d6298acce718,08cfb8085553b02e,false] 1 --- [nio-8080-exec-9] com.jay.resource.ProductResource : GET /products?id=209 2019-07-20 10:23:46.181 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Resumed after hibernation 2019-07-20 10:23:46.216 INFO [service,,,] 1 --- [ Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor' </code></pre> <p>Application has received the <code>SIGTERM</code> at <code>10:23:16.180</code> from Kubernetes. As per <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">Termination of Pods</a> <code>point#5</code> says that the terminating pod is removed from the endpoints list of service, but it is contradicting that it forwarded the requests for 17 seconds (until <code>10:23:33.711</code>) after sending <code>SIGTERM</code> signal. Is there any configuration missing?</p> <p><code>Dockerfile</code></p> <pre><code>FROM openjdk:8-jre-slim MAINTAINER Jay RUN apt update &amp;&amp; apt install -y curl libtcnative-1 gcc &amp;&amp; apt clean ADD build/libs/sample-service.jar / CMD ["java", "-jar" , "sample-service.jar"] </code></pre> <p><code>GracefulShutdown</code></p> <pre><code>// https://github.com/spring-projects/spring-boot/issues/4657 class GracefulShutdown(val waitTime: Long, val timeout: Long) : TomcatConnectorCustomizer, ApplicationListener&lt;ContextClosedEvent&gt; { @Volatile private var connector: Connector? = null override fun customize(connector: Connector) { this.connector = connector } override fun onApplicationEvent(event: ContextClosedEvent) { log.info("Received shutdown event") val executor = this.connector?.protocolHandler?.executor if (executor is ThreadPoolExecutor) { try { val threadPoolExecutor: ThreadPoolExecutor = executor log.info("Waiting for ${waitTime}s to finish") hibernate(waitTime * 1000) log.info("Resumed after hibernation") this.connector?.pause() threadPoolExecutor.shutdown() if (!threadPoolExecutor.awaitTermination(timeout, TimeUnit.SECONDS)) { log.warn("Tomcat thread pool did not shut down gracefully within $timeout seconds. 
Proceeding with forceful shutdown") threadPoolExecutor.shutdownNow() if (!threadPoolExecutor.awaitTermination(timeout, TimeUnit.SECONDS)) { log.error("Tomcat thread pool did not terminate") } } } catch (ex: InterruptedException) { log.info("Interrupted") Thread.currentThread().interrupt() } }else this.connector?.pause() } private fun hibernate(time: Long){ try { Thread.sleep(time) }catch (ex: Exception){} } companion object { private val log = LoggerFactory.getLogger(GracefulShutdown::class.java) } } @Configuration class GracefulShutdownConfig(@Value("\${app.shutdown.graceful.wait-time:30}") val waitTime: Long, @Value("\${app.shutdown.graceful.timeout:30}") val timeout: Long) { companion object { private val log = LoggerFactory.getLogger(GracefulShutdownConfig::class.java) } @Bean fun gracefulShutdown(): GracefulShutdown { return GracefulShutdown(waitTime, timeout) } @Bean fun webServerFactory(gracefulShutdown: GracefulShutdown): ConfigurableServletWebServerFactory { log.info("GracefulShutdown configured with wait: ${waitTime}s and timeout: ${timeout}s") val factory = TomcatServletWebServerFactory() factory.addConnectorCustomizers(gracefulShutdown) return factory } } </code></pre> <p><code>deployment file</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: k8s-app: service name: service spec: progressDeadlineSeconds: 420 replicas: 1 revisionHistoryLimit: 1 selector: matchLabels: k8s-app: service strategy: rollingUpdate: maxSurge: 2 maxUnavailable: 0 type: RollingUpdate template: metadata: labels: k8s-app: service spec: terminationGracePeriodSeconds: 60 containers: - env: - name: SPRING_PROFILES_ACTIVE value: dev image: service:2 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 20 httpGet: path: /actuator/health port: 8080 initialDelaySeconds: 60 periodSeconds: 30 timeoutSeconds: 5 name: service ports: - containerPort: 8080 protocol: TCP readinessProbe: failureThreshold: 60 httpGet: path: /actuator/health port: 8080 initialDelaySeconds: 100 periodSeconds: 10 timeoutSeconds: 5 </code></pre> <p><strong>UPDATE:</strong></p> <p>Added custom health check endpoint</p> <pre><code>@RestControllerEndpoint(id = "live") @Component class LiveEndpoint { companion object { private val log = LoggerFactory.getLogger(LiveEndpoint::class.java) } @Autowired private lateinit var gracefulShutdownStatus: GracefulShutdownStatus @GetMapping fun live(): ResponseEntity&lt;Any&gt; { val status = if(gracefulShutdownStatus.isTerminating()) HttpStatus.INTERNAL_SERVER_ERROR.value() else HttpStatus.OK.value() log.info("Status: $status") return ResponseEntity.status(status).build() } } </code></pre> <p>Changed the <code>livenessProbe</code>,</p> <pre><code> livenessProbe: httpGet: path: /actuator/live port: 8080 initialDelaySeconds: 100 periodSeconds: 5 timeoutSeconds: 5 failureThreshold: 3 </code></pre> <p>Here are the logs after the change,</p> <pre><code>2019-07-21 14:13:01.431 INFO [service,9b65b26907f2cf8f,9b65b26907f2cf8f,false] 1 --- [nio-8080-exec-2] com.jay.util.LiveEndpoint : Status: 200 2019-07-21 14:13:01.444 INFO [service,3da259976f9c286c,64b0d5973fddd577,false] 1 --- [nio-8080-exec-3] com.jay.resource.ProductResource : GET /products?id=52 2019-07-21 14:13:01.609 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Received shutdown event 2019-07-21 14:13:01.610 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Waiting for 30s to finish ... 
2019-07-21 14:13:06.431 INFO [service,002c0da2133cf3b0,002c0da2133cf3b0,false] 1 --- [nio-8080-exec-3] com.jay.util.LiveEndpoint : Status: 500 2019-07-21 14:13:06.433 INFO [service,072abbd7275103ce,d1ead06b4abf2a34,false] 1 --- [nio-8080-exec-4] com.jay.resource.ProductResource : GET /products?id=96 ... 2019-07-21 14:13:11.431 INFO [service,35aa09a8aea64ae6,35aa09a8aea64ae6,false] 1 --- [io-8080-exec-10] com.jay.util.LiveEndpoint : Status: 500 2019-07-21 14:13:11.508 INFO [service,a78c924f75538a50,0314f77f21076313,false] 1 --- [nio-8080-exec-2] com.jay.resource.ProductResource : GET /products?id=110 ... 2019-07-21 14:13:16.431 INFO [service,38a940dfda03956b,38a940dfda03956b,false] 1 --- [nio-8080-exec-9] com.jay.util.LiveEndpoint : Status: 500 2019-07-21 14:13:16.593 INFO [service,d76e81012934805f,b61cb062154bb7f0,false] 1 --- [io-8080-exec-10] com.jay.resource.ProductResource : GET /products?id=152 ... 2019-07-21 14:13:29.634 INFO [service,38a32a20358a7cc4,2029de1ed90e9539,false] 1 --- [nio-8080-exec-6] com.jay.resource.ProductResource : GET /products?id=191 2019-07-21 14:13:31.610 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Resumed after hibernation 2019-07-21 14:13:31.692 INFO [service,,,] 1 --- [ Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor' </code></pre> <p>With the <code>livenessProbe</code> of 3 failures, kubernetes served the traffic for 13 seconds after liveness failures i.e., from <code>14:13:16.431</code> to <code>14:13:29.634</code>.</p> <p><strong>UPDATE 2:</strong> The sequence of events (thanks to <code>Eamonn McEvoy</code>)</p> <pre><code>seconds | healthy | events 0 | ✔ | * liveness probe healthy 1 | ✔ | - SIGTERM 2 | ✔ | 3 | ✔ | 4 | ✔ | 5 | ✔ | * liveness probe unhealthy (1/3) 6 | ✔ | 7 | ✔ | 8 | ✔ | 9 | ✔ | 10 | ✔ | * liveness probe unhealthy (2/3) 11 | ✔ | 12 | ✔ | 13 | ✔ | 14 | ✔ | 15 | ✘ | * liveness probe unhealthy (3/3) .. | ✔ | * traffic is served 28 | ✔ | * traffic is served 29 | ✘ | * pod restarts </code></pre>
jaks
<p>SIGTERM isn't putting the pod into a terminating state immediately. You can see in the logs your application begins graceful shutdown at 10:23:16.180 and takes >20 seconds to complete. At this point, the container stops and pod can enter the terminating state.</p> <p>As far as kubernetes is concerned the pod looks ok during the graceful shutdown period. You need to add a liveness probe to your deployment; when it becomes unhealthy the traffic will stop.</p> <pre><code>livenessProbe: httpGet: path: /actuator/health port: 8080 initialDelaySeconds: 100 periodSeconds: 10 timeoutSeconds: 5 </code></pre> <p>Update:</p> <p>This is because you have a failure threshold of 3, so you are allowing traffic for up to 15 seconds after the sigterm;</p> <p>e.g.</p> <pre><code>seconds | healthy | events 0 | ✔ | * liveness probe healthy 1 | ✔ | - SIGTERM 2 | ✔ | 3 | ✔ | 4 | ✔ | 5 | ✔ | * liveness probe issued 6 | ✔ | . 7 | ✔ | . 8 | ✔ | . 9 | ✔ | . 10 | ✔ | * liveness probe timeout - unhealthy (1/3) 11 | ✔ | 12 | ✔ | 13 | ✔ | 14 | ✔ | 15 | ✔ | * liveness probe issued 16 | ✔ | . 17 | ✔ | . 18 | ✔ | . 19 | ✔ | . 20 | ✔ | * liveness probe timeout - unhealthy (2/3) 21 | ✔ | 22 | ✔ | 23 | ✔ | 24 | ✔ | 25 | ✔ | * liveness probe issued 26 | ✔ | . 27 | ✔ | . 28 | ✔ | . 29 | ✔ | . 30 | ✘ | * liveness probe timeout - unhealthy (3/3) | | * pod restarts </code></pre> <p>This is assuming that the endpoint returns an unhealthy response during the graceful shutdown. Since you have <code>timeoutSeconds: 5</code>, if the probe simply times out this will take much longer, with a 5 second delay between issuing a liveness probe request and receiving its response. It could be the case that the container actually dies before the liveness threshold is hit and you are still seeing the original behaviour</p>
Eamonn McEvoy
<p>I'm trying to find an optimal way to handle ongoing PostgreSQL transactions during the shutdown of a golang server running on Kubernetes.</p> <p>Does it make sense to wait for transactions to finish, when these transaction are serving requests initiated by a server that has already shutdown? And even if the transaction completes within the graceful shutdown timeout duration - will the server be able to send the response?</p> <p>Even if responding to ongoing requests during shutdown is not possible, I prefer to cancel the context of all running transaction so they don't continue to run on the database after the server terminates, adding unnecessary load. But whenever I wait for transactions to finish, it seems there's a trade-off: The longer I wait for ongoing transactions to finish - the longer the container exists with a non responsive server that would error on each incoming request.</p> <p>Here's some sample code that demonstrates this:</p> <pre class="lang-golang prettyprint-override"><code>import ( &quot;github.com/jackc/pgx/v5/pgxpool&quot; &quot;os/signal&quot; &quot;context&quot; &quot;net/http&quot; &quot;syscall&quot; &quot;time&quot; ) func main() { ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGINT) defer cancel() // db is used by the API handler functions db, err := pgxpool.NewWithConfig(ctx, &lt;some_config&gt;) if err != nil { logger.Error(&quot;server failed to Shutdown&quot;, err) } server := http.Server{&lt;some_values&gt;} serverErr := make(chan error) go func() { serverErr &lt;- server.ListenAndServe() }() select { case &lt;-ctx.Done(): if err := Shutdown(closeCtx, time.Second*10, server, db); err != nil { logger.Error(&quot;server failed to Shutdown&quot;, err) } case err := &lt;-serverErr: logger.Error(&quot;server failed to ListenAndServe&quot;, err) } } func Shutdown(ctx context.Context, timeout time.Duration, server *http.Server, db *pgxpool.Pool) error { closeCtx, cancel := context.WithTimeout(ctx, timeout) defer cancel() // first, shutdown the server to stop accepting new requests if err := server.Shutdown(closeCtx); err != nil { return err } // allow running transactions to finish, but if they don't finish within // ten seconds, cancel the context of all running transactions so that they // are forced to finish (albeit, with error) transactionsComplete := waitForTransacitons(time.Second*10, db) if !transactionsComplete { cancelContextOfEveryTransaction() } // since this call blocks until all transactions finished we must call it // only once we are sure that there are no more running transactions. db.Close(ctx) return nil } </code></pre> <p>Would the optimal graceful termination sequence be:</p> <ul> <li>Shutdown the server.</li> <li>Immediately cancel context of all ongoing requests (killing the transaction as soon as the database driver tries to do anything with it).</li> <li>Close the connection pool.</li> <li>Exit.</li> </ul> <p>[edit]: alternative termination sequence (more graceful):</p> <ul> <li>Termination signal is received.</li> <li>The pod is in 'terminating' state and is removed from the load balancer.</li> <li>Shutdown the server with some timeout N.</li> <li>Shutdown the connection pool - with a short timeout. <em>Reasoning: since <code>server.Shutdown</code> returned, no responses will be returned. 
The only reason to wait for ongoing transactions is for background workers to finish their work, such as writing logs to the database.</em></li> <li>If there are still open transaction that prevent the connection pool from closing - kill these transactions and try to close the pool again.</li> <li>Exit.</li> </ul>
Alechko
<p>Why reinvent the wheel instead of using one of the existing libraries that do the magic for you?</p> <p>In our production services, we have used <a href="https://github.com/TV4/graceful" rel="nofollow noreferrer">this graceful shutdown lib</a> a lot and never had issues with it. It waits until all in-flight HTTP requests are served (within a given timeout) and shuts down afterwards.</p> <p>The usage couldn't be simpler. After installing it</p> <pre><code>go mod download github.com/TV4/graceful </code></pre> <p>(or alternatively:</p> <pre><code>go get -u github.com/TV4/graceful </code></pre> <p>)</p> <p>you only need to import it:</p> <pre><code>import ( // ... &quot;github.com/TV4/graceful&quot; ) </code></pre> <p>and then you can replace all your code after instantiating a <code>server</code> (including your Shutdown function) with this one-liner:</p> <pre><code>server := ... graceful.LogListenAndServe(server, logger) </code></pre>
shadyyx
<p>I am trying to add ingress to my Kubernetes documentation. I was able to add the ingress.yaml file, and there is plenty of documentation on ingress.yaml, but I am using a skaffold.yaml to handle the nitty-gritty of the Kubernetes deployment. And I cannot find any documentation on creating a skaffold file for ingress. ( that simply uses googleCloudBuild, buildpacks, and minikube ) all the documentation I come across is for NGINX.</p> <p>My project looks like the following:</p> <pre><code>kubernetes-manifests: --- frontend_service.deployment.yaml --- frontend_service.service.yaml --- ingress.yaml --- login_service.deployment.yaml --- login_service.service.yaml --- recipes_service.deployment.yaml --- recipes_service.service.yaml </code></pre> <p>and my current skaffold file is the following:</p> <pre><code>apiVersion: skaffold/v2beta4 kind: Config build: tagPolicy: sha256: {} # defines where to find the code at build time and where to push the resulting image artifacts: - image: frontend-service context: src/frontend - image: login-service context: src/login - image: recipes-service context: src/recipes # defines the Kubernetes manifests to deploy on each run deploy: kubectl: manifests: - ./kubernetes-manifests/*.service.yaml - ./kubernetes-manifests/*.deployment.yaml profiles: # use the cloudbuild profile to build images using Google Cloud Build - name: cloudbuild build: googleCloudBuild: {} - name: buildpacks build: artifacts: - image: frontend-service context: src/frontend buildpack: builder: &quot;gcr.io/buildpacks/builder:v1&quot; - image: login-service context: src/login buildpack: builder: &quot;gcr.io/buildpacks/builder:v1&quot; - image: recipes-service context: src/recipes buildpack: builder: &quot;gcr.io/buildpacks/builder:v1&quot; </code></pre> <p>This current skaffold file does not deploy in an ingress architecture, it uses a backend and a frontend tier.</p>
ZanyCactus
<p>Ingress definitions are just Kubernetes Resources, so you just add your <code>ingress.yaml</code> into the manifests to be deployed:</p> <pre><code>deploy: kubectl: manifests: - ./kubernetes-manifests/ingress.yaml - ./kubernetes-manifests/*.service.yaml - ./kubernetes-manifests/*.deployment.yaml </code></pre>
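<p>Since you mention minikube, note that an Ingress object alone does nothing until an ingress controller is running; on minikube that controller ships as an addon. A quick, hedged sketch of the two checks I'd do:</p> <pre><code># minikube's NGINX-based ingress controller is an addon and may need enabling
minikube addons enable ingress

# after skaffold has deployed, confirm the Ingress object exists and gets an address
kubectl get ingress
</code></pre>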
Brian de Alwis
<p>I'm currently trying to create a GKE cluster and would like to be able to scale my PostgreSQL pods beyond one active instance, but I'm getting stuck on the read-write access mode of my volume. Is there a way to get ReadWriteMany to work in GKE Autopilot? A hot spare for my PostgreSQL pod would also be helpful, if that is possible. Thank you in advance for the advice.</p>
Alex Skotner
<p>IIUC you're unable to use Google's (Compute Engine's) Persistent Disks as <code>ReadWriteMany</code>. This may be documented on <a href="https://cloud.google.com" rel="nofollow noreferrer"><code>cloud.google.com</code> </a> but I was unable to find it.</p> <p>See Kubernetes' documentation for <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Access Modes</a> and specifically <code>GCEPersistentDisk</code>. These support <code>ReadWriteOnce</code> and <code>ReadOnlyMany</code> but <strong>not</strong> <code>ReadWriteMany</code>.</p> <p>There may be other ways but, one (Goole-supported) way you can get <code>ReadWriteMany</code> on GCP is to use Filestore. See <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver" rel="nofollow noreferrer">Using the Filestore CSI driver</a>.</p> <p>For completeness (!) another way would be to use a PostgreSQL service such as <a href="https://cloud.google.com/sql/docs/features#postgres" rel="nofollow noreferrer">Cloud SQL for PostgreSQL</a></p>
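<p>For illustration, a rough sketch of what the Filestore route can look like (the addon flag and the <code>standard-rwx</code> StorageClass name are what I'd expect on current GKE versions, but please verify them against the docs for your cluster; note Filestore instances also have a fairly large minimum size):</p> <pre><code># enable the Filestore CSI driver on an existing cluster (Autopilot clusters
# on recent versions may already have it enabled)
gcloud container clusters update CLUSTER_NAME \
  --update-addons=GcpFilestoreCsiDriver=ENABLED

# request a ReadWriteMany volume backed by Filestore
kubectl apply -f - &lt;&lt;EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti
EOF
</code></pre> <p>Whether PostgreSQL itself should share a single data directory across replicas is a separate question; for scaling Postgres you would normally use streaming replication (or the managed Cloud SQL option above) rather than a shared volume.</p>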
DazWilkin
<p>If I understand the documentation here correctly...</p> <p><a href="https://skaffold.dev/docs/workflows/debug/#java-and-other-jvm-languages" rel="nofollow noreferrer">Skaffold Debug</a></p> <p>...when I run my K8s project with 'skaffold debug' or with IntelliJ 'Develop on Kubernetes' in Debug mode, skaffold should insert an extra port for jdwp and the JAVA_TOOL_OPTIONS environment variable into my k8s deployment/service files. This is not happening for me.</p> <p>I am using a Helm chart to deploy my k8s artifacts and I don't see these things configured anywhere (as also mentioned in this <a href="https://github.com/GoogleCloudPlatform/cloud-code-intellij/issues/2912" rel="nofollow noreferrer">GitHub Issue</a>).</p> <p>If I configure my deployment/service yamls manually to insert port 5005 for jdwp and the JAVA_TOOL_OPTIONS environment variable, and port-forward 5005, then I can remotely attach to the process and debug, but skaffold is not able to manage this by itself (it is not even trying; I see no JAVA_TOOL_OPTIONS in my logs).</p> <p>Maybe it does not understand that I am running a JVM project, or maybe, because I created my Helm project with 'helm create' and there are several yaml files (configmap.yaml, deployment.yaml, hpa.yaml, ingress.yaml, service.yaml, serviceaccount.yaml), it is not able to find the correct file to manipulate.</p> <p>If I also understand correctly, the deployment/pod to be debugged must have the following annotation:</p> <p><a href="https://skaffold.dev/docs/workflows/debug/#workload-annotations" rel="nofollow noreferrer">Annotations</a></p> <blockquote> <p>debug.cloud.google.com/config</p> </blockquote> <p>which is missing completely; the only thing I see on the deployment is the following:</p> <pre><code>ide: idea ideVersion: 2021.1.1.0.0 ijPluginVersion: unknown skaffold.dev/run-id: d2420cca-f212-4349-b078-41f36ed51bd5 </code></pre> <p>Any idea what is going wrong here?</p> <p>The deployment itself is functioning correctly and my Pod reports Ok for the readiness check, but no debugging is started from skaffold/IntelliJ.</p>
posthumecaver
<p>There were some mismatches between the @posthumecaver's Helm chart and the <code>skaffold.yaml</code> that prevented Skaffold from configuring the image. I'll summarize the findings here for the benefit of those who stumble across this post.</p> <p>@posthumecaver is using Skaffold's Helm support. This requires that the <code>skaffold.yaml</code> and the Helm chart use a common key for referencing the image. There are three approaches used in Helm for referencing images:</p> <h3>Fully-Qualified Name (default)</h3> <p>Skaffold will configure Helm setting a key to the fully-tagged image reference.</p> <p>The <code>skaffold.yaml</code> setup:</p> <pre><code>build: artifacts: - image: gcr.io/my-project/my-image deploy: helm: releases: - name: my-chart chartPath: helm artifactOverrides: img: gcr.io/my-project/my-image </code></pre> <p>The chart template:</p> <pre><code>image: &quot;{{.Values.img}}&quot; </code></pre> <p>The <code>values.yaml</code> (note that Skaffold overrides this value):</p> <pre><code>img: gcr.io/other-project/other-image:latest </code></pre> <p>Skaffold will invoke</p> <pre><code>helm install &lt;chart&gt; &lt;chart-path&gt; --set-string img=gcr.io/my-project/my-image:generatedTag@sha256:digest </code></pre> <h3>Split Repository and Tag</h3> <p>Skaffold can be configured to provide Helm with a separate repository and tag. The key used in the <code>artifactOverrides</code> is used as base portion producing two keys <code>{key}.repository</code> and <code>{key}.tag</code>.</p> <p>The <code>skaffold.yaml</code> setup:</p> <pre><code>build: artifacts: - image: gcr.io/my-project/my-image deploy: helm: releases: - name: my-chart chartPath: helm artifactOverrides: img: gcr.io/my-project/my-image imageStrategy: helm: {} </code></pre> <p>The chart template:</p> <pre><code>image: &quot;{{.Values.img.repository}}:{{.Values.img.tag}}&quot; </code></pre> <p>The <code>values.yaml</code> (note that Skaffold overrides these value):</p> <pre><code>img: repository: gcr.io/other-project/other-image tag: latest </code></pre> <p>Skaffold will invoke</p> <pre><code>helm install &lt;chart&gt; &lt;chart-path&gt; --set-string img.repository=gcr.io/my-project/my-image,img.tag=generatedTag@sha256:digest </code></pre> <h3>Split Registry, Repository, and Tag</h3> <p>Skaffold can also be configured to provide Helm with a separate repository and tag. The key used in the <code>artifactOverrides</code> is used as base portion producing three keys: <code>{key}.registry</code>, <code>{key}.repository</code>, and <code>{key}.tag</code>.</p> <p>The <code>skaffold.yaml</code> setup:</p> <pre><code>build: artifacts: - image: gcr.io/my-project/my-image deploy: helm: releases: - name: my-chart chartPath: helm artifactOverrides: img: gcr.io/my-project/my-image imageStrategy: helm: explicitRegistry: true </code></pre> <p>The chart template:</p> <pre><code>image: &quot;{{.Values.img.registry}}/{{.Values.img.repository}}:{{.Values.img.tag}}&quot; </code></pre> <p>The <code>values.yaml</code> (note that Skaffold overrides these value):</p> <pre><code>img: registry: gcr.io repository: other-project/other-image tag: latest </code></pre> <p>Skaffold will invoke</p> <pre><code>helm install &lt;chart&gt; &lt;chart-path&gt; --set-string img.registry=gcr.io,img.repository=my-project/my-image,img.tag=generatedTag@sha256:digest </code></pre>
Brian de Alwis
<p>Been working fine for months and quit working two days ago. Don't recall changing anything in the <code>.yamls</code>. </p> <p>Basically, when I start up the <code>create-react-app</code>, the <code>create-react-app</code> client just starts, fails and restarts.</p> <p>I've tried:</p> <ul> <li>Reverting to a previous commit when it was working</li> <li>Downgrading/upgrading <code>skaffold</code></li> <li>Downgrading/upgrading <code>minikube</code></li> <li>Downgrading/upgrading <code>kubectl</code></li> <li>Testing Ubuntu 19.10, macOS 10.15.3, and Windows 10 (WSL2) and the issue persists in all of them</li> </ul> <p>It appears to be an issue with <code>skaffold</code> and <code>create-react-app</code> as the following still works fine:</p> <ul> <li>The <code>api</code> and <code>postgres</code> pods still launch and run perfectly fine</li> <li>The following works normally which, to me, indicates it isn't a <code>create-react-app</code> issue:</li> </ul> <pre><code>cd client npm install npm start </code></pre> <ul> <li>The following also works normally which, to me, indicates it isn't a <code>docker</code> issue:</li> </ul> <pre><code>cd client docker build -f Dockerfile.dev . docker run -it -p 3000:3000 &lt;image_id&gt; </code></pre> <ul> <li>I don't think it is a Kubernetes issue. I pushed to my staging branch, triggering the staging CI/CD pipeline, passed build and deployment, and it is operating normally at my staging URL.</li> </ul> <p>This is what I have for the configs:</p> <pre><code># client.yaml apiVersion: apps/v1 kind: Deployment metadata: name: client-deployment-dev spec: replicas: 1 selector: matchLabels: component: client template: metadata: labels: component: client spec: containers: - name: client image: client ports: - containerPort: 3000 --- apiVersion: v1 kind: Service metadata: name: client-cluster-ip-service-dev spec: type: ClusterIP selector: component: client ports: - port: 3000 targetPort: 3000 </code></pre> <pre><code>#skaffold.yaml apiVersion: skaffold/v1beta15 kind: Config build: local: push: false artifacts: - image: client context: client docker: dockerfile: Dockerfile.dev sync: manual: - src: "***/*.js" dest: . - src: "***/*.jsx" dest: . - src: "***/*.json" dest: . - src: "***/*.html" dest: . - src: "***/*.css" dest: . - src: "***/*.scss" dest: . deploy: kubectl: manifests: - manifests/dev/client.yaml </code></pre> <pre><code># Dockerfile.dev FROM node:13-alpine WORKDIR /app COPY ./package.json ./ RUN npm install COPY . . 
CMD ["npm", "start"] </code></pre> <p><code>-v DEBUG</code> log:</p> <pre><code> $ skaffold dev -v DEBUG INFO[0000] starting gRPC server on port 50051 INFO[0000] starting gRPC HTTP server on port 50052 INFO[0000] Skaffold &amp;{Version:v1.6.0-docs ConfigVersion:skaffold/v2beta1 GitVersion: GitCommit:b74e2f94f628b16a866abddc2ba8f05ce0bf956c GitTreeState:clean BuildDate:2020-03-25T00:09:12Z GoVersion:go1.14 Compiler:gc Platform:linux/amd64} DEBU[0000] config version (skaffold/v1beta15) out of date: upgrading to latest (skaffold/v2beta1) DEBU[0000] validating yamltags of struct SkaffoldConfig DEBU[0000] validating yamltags of struct Metadata DEBU[0000] validating yamltags of struct Pipeline DEBU[0000] validating yamltags of struct BuildConfig DEBU[0000] validating yamltags of struct Artifact DEBU[0000] validating yamltags of struct Sync DEBU[0000] validating yamltags of struct SyncRule DEBU[0000] validating yamltags of struct SyncRule DEBU[0000] validating yamltags of struct SyncRule DEBU[0000] validating yamltags of struct SyncRule DEBU[0000] validating yamltags of struct SyncRule DEBU[0000] validating yamltags of struct SyncRule DEBU[0000] validating yamltags of struct ArtifactType DEBU[0000] validating yamltags of struct DockerArtifact DEBU[0000] validating yamltags of struct TagPolicy DEBU[0000] validating yamltags of struct GitTagger DEBU[0000] validating yamltags of struct BuildType DEBU[0000] validating yamltags of struct LocalBuild DEBU[0000] validating yamltags of struct DeployConfig DEBU[0000] validating yamltags of struct DeployType DEBU[0000] validating yamltags of struct KubectlDeploy DEBU[0000] validating yamltags of struct KubectlFlags INFO[0000] Using kubectl context: minikube DEBU[0000] Using builder: local DEBU[0000] Running command: [minikube docker-env --shell none] DEBU[0000] Command output: [DOCKER_TLS_VERIFY=1 DOCKER_HOST=tcp://192.168.39.184:2376 DOCKER_CERT_PATH=/home/eoxdev/.minikube/certs MINIKUBE_ACTIVE_DOCKERD=minikube ] DEBU[0000] setting Docker user agent to skaffold-v1.6.0-docs Listing files to watch... - client DEBU[0000] Found dependencies for dockerfile: [{package.json /app true} {. /app true}] DEBU[0000] Skipping excluded path: node_modules INFO[0000] List generated in 1.684217ms Generating tags... - client -&gt; DEBU[0000] Running command: [git describe --tags --always] DEBU[0000] Command output: [3403aa6 ] DEBU[0000] Running command: [git status . --porcelain] DEBU[0000] Command output: [] client:3403aa6 INFO[0000] Tags generated in 3.085635ms Checking cache... DEBU[0000] Found dependencies for dockerfile: [{package.json /app true} {. /app true}] DEBU[0000] Skipping excluded path: node_modules - client: Found Locally INFO[0000] Cache check complete in 6.098469ms Tags used in deployment: - client -&gt; client:1319b715976becb303bd077717e754e52beaef72d44c7b09f5b6835b1afacae2 local images can't be referenced by digest. They are tagged and referenced by a unique ID instead Starting deploy... 
DEBU[0000] Running command: [kubectl version --client -ojson] DEBU[0000] Command output: [{ "clientVersion": { "major": "1", "minor": "18", "gitVersion": "v1.18.0", "gitCommit": "9e991415386e4cf155a24b1da15becaa390438d8", "gitTreeState": "clean", "buildDate": "2020-03-25T14:58:59Z", "goVersion": "go1.13.8", "compiler": "gc", "platform": "linux/amd64" } } ] DEBU[0000] Running command: [kubectl --context minikube create --dry-run -oyaml -f /home/eoxdev/Projects/issues/skaffold-cra-error/manifests/dev/client.yaml] DEBU[0000] Command output: [apiVersion: apps/v1 kind: Deployment metadata: name: client-deployment-dev namespace: default spec: replicas: 1 selector: matchLabels: component: client template: metadata: labels: component: client spec: containers: - image: client name: client ports: - containerPort: 3000 --- apiVersion: v1 kind: Service metadata: name: client-cluster-ip-service-dev namespace: default spec: ports: - port: 3000 targetPort: 3000 selector: component: client type: ClusterIP ], stderr: W0327 08:49:50.543847 16516 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client. DEBU[0000] manifests with tagged images: apiVersion: apps/v1 kind: Deployment metadata: name: client-deployment-dev namespace: default spec: replicas: 1 selector: matchLabels: component: client template: metadata: labels: component: client spec: containers: - image: client:1319b715976becb303bd077717e754e52beaef72d44c7b09f5b6835b1afacae2 name: client ports: - containerPort: 3000 --- apiVersion: v1 kind: Service metadata: name: client-cluster-ip-service-dev namespace: default spec: ports: - port: 3000 targetPort: 3000 selector: component: client type: ClusterIP DEBU[0000] manifests with labels apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/managed-by: skaffold-v1.6.0-docs skaffold.dev/builder: local skaffold.dev/cleanup: "true" skaffold.dev/deployer: kubectl skaffold.dev/docker-api-version: "1.40" skaffold.dev/run-id: 2ee04f07-3f07-4e75-bdba-dfac76d18bf0 skaffold.dev/tag-policy: git-commit skaffold.dev/tail: "true" name: client-deployment-dev namespace: default spec: replicas: 1 selector: matchLabels: component: client template: metadata: labels: app.kubernetes.io/managed-by: skaffold-v1.6.0-docs component: client skaffold.dev/builder: local skaffold.dev/cleanup: "true" skaffold.dev/deployer: kubectl skaffold.dev/docker-api-version: "1.40" skaffold.dev/run-id: 2ee04f07-3f07-4e75-bdba-dfac76d18bf0 skaffold.dev/tag-policy: git-commit skaffold.dev/tail: "true" spec: containers: - image: client:1319b715976becb303bd077717e754e52beaef72d44c7b09f5b6835b1afacae2 name: client ports: - containerPort: 3000 --- apiVersion: v1 kind: Service metadata: labels: app.kubernetes.io/managed-by: skaffold-v1.6.0-docs skaffold.dev/builder: local skaffold.dev/cleanup: "true" skaffold.dev/deployer: kubectl skaffold.dev/docker-api-version: "1.40" skaffold.dev/run-id: 2ee04f07-3f07-4e75-bdba-dfac76d18bf0 skaffold.dev/tag-policy: git-commit skaffold.dev/tail: "true" name: client-cluster-ip-service-dev namespace: default spec: ports: - port: 3000 targetPort: 3000 selector: component: client type: ClusterIP DEBU[0000] 2 manifests to deploy. 
2 are updated or new DEBU[0000] Running command: [kubectl --context minikube apply -f - --force --grace-period=0] - deployment.apps/client-deployment-dev created - service/client-cluster-ip-service-dev created INFO[0000] Deploy complete in 391.276171ms Waiting for deployments to stabilize DEBU[0000] getting client config for kubeContext: `` DEBU[0000] checking status default:deployment/client-deployment-dev DEBU[0000] Running command: [kubectl --context minikube rollout status deployment client-deployment-dev --namespace default --watch=false] DEBU[0000] Command output: [Waiting for deployment "client-deployment-dev" rollout to finish: 0 of 1 updated replicas are available... ] DEBU[0001] Running command: [kubectl --context minikube rollout status deployment client-deployment-dev --namespace default --watch=false] - default:deployment/client-deployment-dev Waiting for deployment "client-deployment-dev" rollout to finish: 0 of 1 updated replicas are available... DEBU[0001] Command output: [Waiting for deployment "client-deployment-dev" rollout to finish: 0 of 1 updated replicas are available... ] DEBU[0001] Running command: [kubectl --context minikube rollout status deployment client-deployment-dev --namespace default --watch=false] DEBU[0001] Command output: [Waiting for deployment "client-deployment-dev" rollout to finish: 0 of 1 updated replicas are available... ] DEBU[0001] Running command: [kubectl --context minikube rollout status deployment client-deployment-dev --namespace default --watch=false] DEBU[0001] Command output: [Waiting for deployment "client-deployment-dev" rollout to finish: 0 of 1 updated replicas are available... ] DEBU[0001] Running command: [kubectl --context minikube rollout status deployment client-deployment-dev --namespace default --watch=false] DEBU[0001] Command output: [Waiting for deployment "client-deployment-dev" rollout to finish: 0 of 1 updated replicas are available... ] DEBU[0002] Running command: [kubectl --context minikube rollout status deployment client-deployment-dev --namespace default --watch=false] DEBU[0002] Command output: [Waiting for deployment "client-deployment-dev" rollout to finish: 0 of 1 updated replicas are available... ] DEBU[0002] Running command: [kubectl --context minikube rollout status deployment client-deployment-dev --namespace default --watch=false] DEBU[0002] Command output: [deployment "client-deployment-dev" successfully rolled out ] - default:deployment/client-deployment-dev is ready. Deployments stabilized in 1.818029816s DEBU[0002] getting client config for kubeContext: `` INFO[0002] Streaming logs from pod: client-deployment-dev-58bdbf5664-fcc7k container: client DEBU[0002] Running command: [kubectl --context minikube logs --since=3s -f client-deployment-dev-58bdbf5664-fcc7k -c client --namespace default] [client-deployment-dev-58bdbf5664-fcc7k client] [client-deployment-dev-58bdbf5664-fcc7k client] &gt; [email protected] start /app [client-deployment-dev-58bdbf5664-fcc7k client] &gt; react-scripts start [client-deployment-dev-58bdbf5664-fcc7k client] DEBU[0002] Found dependencies for dockerfile: [{package.json /app true} {. /app true}] DEBU[0002] Change detected &lt;nil&gt; DEBU[0002] Skipping excluded path: node_modules Watching for changes... 
[client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: Project is running at http://172.17.0.8/ [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: webpack output is served from [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: Content not from webpack is served from /app/public [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: 404s will fallback to / [client-deployment-dev-58bdbf5664-fcc7k client] Starting the development server... [client-deployment-dev-58bdbf5664-fcc7k client] DEBU[0003] Found dependencies for dockerfile: [{package.json /app true} {. /app true}] DEBU[0003] Skipping excluded path: node_modules INFO[0004] Streaming logs from pod: client-deployment-dev-58bdbf5664-fcc7k container: client DEBU[0004] Running command: [kubectl --context minikube logs --since=4s -f client-deployment-dev-58bdbf5664-fcc7k -c client --namespace default] [client-deployment-dev-58bdbf5664-fcc7k client] [client-deployment-dev-58bdbf5664-fcc7k client] &gt; [email protected] start /app [client-deployment-dev-58bdbf5664-fcc7k client] &gt; react-scripts start [client-deployment-dev-58bdbf5664-fcc7k client] [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: Project is running at http://172.17.0.8/ [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: webpack output is served from [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: Content not from webpack is served from /app/public [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: 404s will fallback to / [client-deployment-dev-58bdbf5664-fcc7k client] Starting the development server... [client-deployment-dev-58bdbf5664-fcc7k client] INFO[0019] Streaming logs from pod: client-deployment-dev-58bdbf5664-fcc7k container: client DEBU[0019] Running command: [kubectl --context minikube logs --since=20s -f client-deployment-dev-58bdbf5664-fcc7k -c client --namespace default] [client-deployment-dev-58bdbf5664-fcc7k client] [client-deployment-dev-58bdbf5664-fcc7k client] &gt; [email protected] start /app [client-deployment-dev-58bdbf5664-fcc7k client] &gt; react-scripts start [client-deployment-dev-58bdbf5664-fcc7k client] [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: Project is running at http://172.17.0.8/ [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: webpack output is served from [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: Content not from webpack is served from /app/public [client-deployment-dev-58bdbf5664-fcc7k client] ℹ 「wds」: 404s will fallback to / [client-deployment-dev-58bdbf5664-fcc7k client] Starting the development server... [client-deployment-dev-58bdbf5664-fcc7k client] ^CCleaning up... DEBU[0021] Running command: [kubectl --context minikube create --dry-run -oyaml -f /home/eoxdev/Projects/issues/skaffold-cra-error/manifests/dev/client.yaml] DEBU[0021] Command output: [apiVersion: apps/v1 kind: Deployment metadata: name: client-deployment-dev namespace: default spec: replicas: 1 selector: matchLabels: component: client template: metadata: labels: component: client spec: containers: - image: client name: client ports: - containerPort: 3000 --- apiVersion: v1 kind: Service metadata: name: client-cluster-ip-service-dev namespace: default spec: ports: - port: 3000 targetPort: 3000 selector: component: client type: ClusterIP ], stderr: W0327 08:50:11.709935 16770 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client. 
DEBU[0021] Running command: [kubectl --context minikube delete --ignore-not-found=true -f -] - deployment.apps "client-deployment-dev" deleted - service "client-cluster-ip-service-dev" deleted INFO[0021] Cleanup complete in 187.851194ms </code></pre> <p>I have a test repo here you can try if you'd like:</p> <p><a href="https://github.com/eox-dev/skaffold-cra-error" rel="nofollow noreferrer">https://github.com/eox-dev/skaffold-cra-error</a></p> <p>Hopefully this is something I'm just overlooking, but like I've said, I've reverted the app back to when I know it was working and the issue still persists.</p> <p><strong>EDIT 4/1</strong> Was hopeful when I read this in the <code>v1.6.0-docs</code> release notes:</p> <blockquote> <p>Note: This release comes with a new config version <code>v2beta1</code>. To upgrade your <code>skaffold.yaml</code>, use <code>skaffold fix</code>. If you choose not to upgrade, <code>skaffold</code> will auto-upgrade as best as it can.</p> </blockquote> <p>Still having the same issue, however.</p>
cjones
<p>This is due to a change in <a href="https://github.com/facebook/create-react-app/" rel="noreferrer">facebook/create-react-app</a>, specifically <a href="https://github.com/facebook/create-react-app/issues/8739" rel="noreferrer">#8739</a> and <a href="https://github.com/facebook/create-react-app/issues/8688" rel="noreferrer">#8688</a>. You'll notice that your container immediately exits from docker if you run without allocating a TTY:</p> <pre><code>$ docker run --rm client; echo "&gt;&gt; $?" &gt; [email protected] start /app &gt; react-scripts start ℹ 「wds」: Project is running at http://172.17.0.4/ ℹ 「wds」: webpack output is served from ℹ 「wds」: Content not from webpack is served from /app/public ℹ 「wds」: 404s will fallback to / Starting the development server... &gt;&gt; 0 </code></pre> <p>There is a workaround posted to the issue, to set <code>CI=true</code> in your Dockerfile:</p> <pre><code>--- client/Dockerfile.dev +++ client/Dockerfile.dev @@ -1,6 +1,7 @@ FROM node:13-alpine WORKDIR /app COPY ./package.json ./ +ENV CI=true RUN npm install COPY . . </code></pre>
Brian de Alwis
<p>I would love to list deployments having <code>mongodb</code> environment value along with their phase status. Is there anyway to do so?</p> <p>With this command, I get the deployments name which carries a specific environment value</p> <pre><code>kubectl get deploy -o=custom-columns=&quot;NAME:.metadata.name,SEC:.spec.template.spec.containers[*].env[*].value&quot; | grep mongodb | cut -f 1 -d ' ' </code></pre> <p>Output:</p> <pre><code>app1 app2 app3 app4 </code></pre> <p>Output I want to get:</p> <pre><code>NAME READY UP-TO-DATE AVAILABLE AGE app1 1/1 1 1 125d app2 1/1 1 1 248d app3 1/1 1 1 248d app4 1/1 1 1 248d </code></pre> <p>Or it can be pods as well. I'd appreciate your help.</p> <p>Thank you!</p>
titanic
<p>I had a go at a solution using <code>kubectl</code> but was unsuccessful.</p> <p>I suspect (!?) you'll need to use additional tools to parse|process the results to get what you want. Perhaps using <a href="https://stedolan.github.io/jq/" rel="noreferrer"><code>jq</code></a>?</p> <p>For Deployments, you can filter the results based on environment variable names using, e.g.:</p> <pre class="lang-sh prettyprint-override"><code>FILTER=&quot;{.items[*].spec.template.spec.containers[*].env[?(@.name==\&quot;mongodb\&quot;)]}&quot; kubectl get deployments \ --namespace=${NAMESPACE} \ --output=jsonpath=&quot;${FILTER}&quot; </code></pre> <p>But this only returns the filtered path (i.e. <code>items[*].spec.template.spec.containers[*].env</code>).</p> <p>With JSONPath, you ought (!) to be able to apply the filter to the item but this isn't supported (by <code>kubectl</code>'s implementation) i.e.:</p> <pre class="lang-sh prettyprint-override"><code>FILTER=&quot;{.items[?(@.spec.template.spec.containers[?(@.env[?(@.name==\&quot;mongodb\&quot;)])])]}&quot; </code></pre> <p>With <code>jq</code>, I think you'll be able to select the <code>env.name</code>, return the <code>item</code>'s status and get the raw <code>status</code> values that you need. Something like:</p> <pre class="lang-sh prettyprint-override"><code>FILTER=' .items[] |select(.spec.template.spec.containers[].env[].name == &quot;mongodb&quot;) |{&quot;name&quot;:.metadata.name, &quot;ready&quot;:.status.readyReplicas} ' kubectl get deployments \ --output=json \ | jq -r &quot;${FILTER}&quot; </code></pre>
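<p>And if you want to end up with exactly the <code>kubectl get deploy</code>-style table from your question, one option (a sketch, untested against your cluster) is to use <code>jq</code> only to produce the matching names and feed them back to a plain <code>kubectl get</code>:</p> <pre><code># select Deployments with an env var named "mongodb"; swap .name for .value
# if you want to match on the variable's value instead (as your grep did)
NAMES=$(kubectl get deployments --output=json \
  | jq -r '.items[]
           | select(any(.spec.template.spec.containers[].env[]?; .name == "mongodb"))
           | .metadata.name')

# beware: with an empty $NAMES this lists *all* Deployments
kubectl get deployments ${NAMES}
</code></pre>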
DazWilkin
<p>We are submitting spark job into kubernetes cluster using cluster mode and with some more memory configurations. My job is finishing in about 5 mins but my executor pods are still running after 30 - 40 mins. Due to this the new jobs are pending as the resources are still bound to running pods.</p> <p>Below is spark submit command :</p> <p><code>/spark-2.4.4-bin-hadoop2.7/bin/spark-submit --deploy-mode cluster --class com.Spark.MyMainClass --driver-memory 3g --driver-cores 1 --executor-memory 12g --executor-cores 3 --master k8s://https://masterhost:6443 --conf spark.kubernetes.namespace=default --conf spark.app.name=myapp1 --conf spark.executor.instances=3 --conf spark.kubernetes.driver.pod.name=myappdriver1 --conf spark.kubernetes.container.image=imagePath --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.kubernetes.driver.container.image=imagePath --conf spark.kubernetes.executor.container.image=imagePath local:///opt/spark/jars/MyApp.jar</code></p>
Rajashekhar Meesala
<p>You need to add</p> <pre><code>sparkSession.stop() </code></pre> <p>at the end of your code</p>
Loic
<p>Currently I am trying to implement a CI/CD pipeline using DevOps automation tools like Jenkins and Kubernetes, and I am using these to deploy my microservices, which are created as Spring Boot and Maven projects.</p> <p>I have successfully deployed my Spring Boot microservices using Jenkins and Kubernetes, into different namespaces. When I commit, a post-commit hook fires from my SVN repository, and that post-commit hook triggers the Jenkins job.</p> <p><strong>My Confusion</strong></p> <p>While implementing the CI/CD pipeline, I read about adding feedback loops to the pipeline. My confusion is: if I need to implement feedback loops, what are the different ways I can do that here?</p> <p><strong>Can anyone point me to useful documentation/tutorials for implementing feedback loops in a CI/CD pipeline?</strong></p>
Mr.DevEng
<p>The method of getting deployment feedback depends on your service and your choice. For example, you can check if the container is up or check one of the REST URLs. </p> <p>I use this stage as a final stage to check the service: </p> <pre><code> stage('feedback'){ sleep(time:10,unit:"SECONDS") def get = new URL("http://192.168.1.1:8080/version").openConnection(); def getRC = get.getResponseCode(); println(getRC); if(getRC.equals(200)) { println(get.getInputStream().getText()); } else{ error("Service is not started yet.") } } </code></pre> <p>Jenkins can notify users about failed tests (jobs) by sending an email or a JSON notification. Read more: <a href="https://wiki.jenkins.io/display/JENKINS/Email-ext+plugin" rel="noreferrer">https://wiki.jenkins.io/display/JENKINS/Email-ext+plugin</a><br> <a href="https://wiki.jenkins.io/display/JENKINS/Notification+Plugin" rel="noreferrer">https://wiki.jenkins.io/display/JENKINS/Notification+Plugin</a><br> <a href="https://wiki.jenkins.io/display/JENKINS/Slack+Plugin" rel="noreferrer">https://wiki.jenkins.io/display/JENKINS/Slack+Plugin</a> </p> <p>If you want continuous monitoring for the deployed product, you need <strong>monitoring tools</strong>, which are different from Jenkins.</p> <p>This is a sample picture of some popular tools for each part of DevOps: <a href="https://i.stack.imgur.com/gTHxn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gTHxn.png" alt="enter image description here"></a></p>
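<p>Since you deploy to Kubernetes, another simple feedback gate (a sketch; <code>my-service</code> is a placeholder) is to let the pipeline stage fail if the rollout never becomes healthy:</p> <pre><code># fails the stage if the Deployment does not finish rolling out in time
kubectl rollout status deployment/my-service --timeout=120s
</code></pre>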
M-Razavi
<p>I am working on a web application with all the infrastructure based on Kubernetes. In my local environment, I am using Skaffold. I have two computers (Desktop and Laptop) with 8Gb of RAM each. By starting minikube (virtualbox driver) and <code>skaffold dev</code> the Deskop is freezing.</p> <p>So I decided to use the Laptop for coding and the Desktop for running minikube and everything related.</p> <p>I successfully managed to set up <strong>kubeconfig</strong> on the laptop to have a context with the minikube server.</p> <p>Actually, The issue is skaffold. When I run <code>skaffold dev</code>, it fails because minikube of the Deskop doesn't see the images build by skaffold on my laptop. <code>kubectl get po</code> returns <strong>ImagePullBackOff</strong>. That is because skaffold uses the local docker to build the image. The question is <em>how to make skaffold use the docker install in my Desktop</em>? I changed the docker context of my laptop so that it's linked to the Desktop context but it's still not working, skaffold is still using the default docker context installed in my laptop.</p> <p>How to make the images build by Skaffold being available on my Desktop? Is it possible for Skaffold to use a remote docker context? If yes, how?</p>
Mael Fosso
<p>Minikube uses its own Docker installation to power its cluster. This daemon runs in Minikube's VM (or container, if using the <code>docker</code> driver) and is completely independent from the host's Docker daemon (your Desktop). You can access to Minikube's daemon by setting the environment returned by <code>minikube docker-env</code>.</p>
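<p>For the single-machine case (skaffold running on the same host as minikube) this is just a matter of pointing the Docker client at that daemon before building; a minimal sketch:</p> <pre><code># make the current shell's docker CLI (and skaffold's builds) use minikube's daemon
eval "$(minikube docker-env)"
skaffold dev

# revert the shell back to the host daemon afterwards
eval "$(minikube docker-env --unset)"
</code></pre> <p>With two machines, the environment printed by <code>minikube docker-env</code> on the Desktop (host, port and certificate paths) would have to be made reachable from the Laptop, which is usually more trouble than pushing to a registry both machines can access.</p>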
Brian de Alwis
<p>I would like to know how to find the Service name from a Pod name in Kubernetes.</p> <p>Can anyone suggest a way to do this?</p>
Big Bansal
<p>Services (<code>spec.selector</code>) and Pods (<code>metadata.labels</code>) are bound through shared labels.</p> <p>So, you want to find all Services that include (some) of the Pod's labels.</p> <pre class="lang-sh prettyprint-override"><code>kubectl get services \ --selector=${KEY-1}=${VALUE-1},${KEY-2}=${VALUE-2},... --namespace=${NAMESPACE} </code></pre> <p>Where <code>${KEY}</code> and <code>${VALUE}</code> are the Pod's label(s) key(s) and values(s)</p> <p>It's challenging though because it's possible for the Service's <code>selector</code> labels to differ from Pod labels. You'd not want there to be no intersection but a Service's labels could well be a subset of any Pods'.</p> <p>The following isn't quite what you want but you may be able to extend it to do what you want. Given the above, it enumerates the Services in a Namespace and, using each Service's <code>selector</code> labels, it enumerates Pods that select based upon them:</p> <pre class="lang-sh prettyprint-override"><code> NAMESPACE=&quot;...&quot; SERVICES=&quot;$(\ kubectl get services \ --namespace=${NAMESPACE} \ --output=name)&quot; for SERVICE in ${SERVICES} do SELECTOR=$(\ kubectl get ${SERVICE} \ --namespace=${NAMESPACE}\ --output=jsonpath=&quot;{.spec.selector}&quot; \ | jq -r '.|to_entries|map(&quot;\(.key)=\(.value)&quot;)|@csv' \ | tr -d '&quot;') PODS=$(\ kubectl get pods \ --selector=${SELECTOR} \ --namespace=${NAMESPACE} \ --output=name) printf &quot;%s: %s\n&quot; ${SERVICE} ${PODS} done </code></pre> <blockquote> <p><strong>NOTE</strong> This requires <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a> because I'm unsure whether it's possible to use <code>kubectl</code>'s <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">JSONPath</a> to range over a Service's labels <strong>and</strong> reformat these as needed. Even using <code>jq</code>, my command's messy:</p> <ol> <li>Get the Service's <code>selector</code> as <code>{&quot;k1&quot;:&quot;v1&quot;,&quot;k2&quot;:&quot;v2&quot;,...}</code></li> <li>Convert this to <code>&quot;k1=v1&quot;,&quot;k2=v2&quot;,...</code></li> <li>Trim the extra (?) <code>&quot;</code></li> </ol> </blockquote> <p>If you want to do this for all Namespaces, you can wrap everything in:</p> <pre class="lang-sh prettyprint-override"><code>NAMESPACES=$(kubectl get namespaces --output=name) for NAMESPACE in ${NAMESPACE} do ... done </code></pre>
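<p>To build the <code>${KEY}=${VALUE}</code> selector string in the first command from the Pod itself, you can reuse the same <code>jq</code> trick (a sketch; <code>${POD}</code> is whatever Pod you start from):</p> <pre><code>POD="..."
NAMESPACE="..."

# print the Pod's labels as a comma-separated key=value selector string
kubectl get pod ${POD} \
  --namespace=${NAMESPACE} \
  --output=jsonpath='{.metadata.labels}' \
  | jq -r 'to_entries|map("\(.key)=\(.value)")|join(",")'
</code></pre>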
DazWilkin
<p>Is there a way to access existing validation specs? For example, I want to be able to set NodeAffinity on my CRD, and would like to just $ref: . I found the entire API here: <a href="https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json</a> OR kubectl proxy -> localhost:8001/openapi/v2 (from within my cluster)</p> <p>I could manually copy paste the api validation schema, but I was wondering if there was a way to automatically reference an existing OpenAPI Validation Spec from within my CRD with $ref. I imagine something like $ref: localhost:8001/openapi/v2/definitions/io.k8s.api.core.v1.NodeAffinity</p> <p>If this is even possible, will it resolve the inner $refs as well?</p> <p>For reference, here's what the nodeaffinity definition looks like in the API:</p> <pre><code>"io.k8s.api.core.v1.NodeAffinity": { "description": "Node affinity is a group of node affinity scheduling rules.", "properties": { "preferredDuringSchedulingIgnoredDuringExecution": { "description": "The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.", "items": { "$ref": "#/definitions/io.k8s.api.core.v1.PreferredSchedulingTerm" }, "type": "array" }, "requiredDuringSchedulingIgnoredDuringExecution": { "$ref": "#/definitions/io.k8s.api.core.v1.NodeSelector", "description": "If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node." } }, "type": "object" }, </code></pre> <p>(using Operator-SDK with Ansible, incase that matters)</p> <p>EDIT: (adding a full example to further explain)</p> <p>I have a CRD called Workshop, and I require validation on certain spec parameters.</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: workshops.k8s.example.tk spec: group: k8s.example.tk names: kind: Workshop listKind: WorkshopList plural: workshops singular: workshop scope: Namespaced subresources: status: {} validation: openAPIV3Schema: type: object properties: spec: type: object required: - workshopID properties: workshopID: # type: string description: Unique identifier for this particular virtual workshop example: d8e8fca2dc0f896fd7cb4cb0031ba249 </code></pre> <p>Now I need to add a nodeAffinity spec field that will be applied to any pods that live under this CustomResourceDefinition. 
The validation for it is going to be the exact same as the validation for nodeAffinity in pods.</p> <p>Let me pull the validation spec that is ALREADY WRITTEN in OpenApi from: <a href="https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json</a> and convert it to YAML then add it to my spec.</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: workshops.k8s.example.tk spec: group: k8s.example.tk names: kind: Workshop listKind: WorkshopList plural: workshops singular: workshop scope: Namespaced subresources: status: {} validation: openAPIV3Schema: type: object properties: spec: type: object required: - workshopID properties: workshopID: # type: string description: Unique identifier for this particular virtual workshop example: d8e8fca2dc0f896fd7cb4cb0031ba249 affinity: # type: object properties: nodeAffinity: # description: Node affinity is a group of node affinity scheduling rules. type: object properties: preferredDuringSchedulingIgnoredDuringExecution: description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. type: array items: description: An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). type: object required: - weight - preference properties: preference: description: A node selector term, associated with the corresponding weight. A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. type: object properties: matchExpressions: description: A list of node selector requirements by node's labels. type: array items: description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. type: object required: - key - operator properties: key: description: The label key that the selector applies to. type: string operator: description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. type: array items: type: string matchFields: description: A list of node selector requirements by node's fields. type: array items: description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
type: object required: - key - operator properties: key: description: The label key that the selector applies to. type: string operator: description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. type: array items: type: string weight: description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. type: integer format: int32 requiredDuringSchedulingIgnoredDuringExecution: description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. type: object required: - nodeSelectorTerms properties: nodeSelectorTerms: description: Required. A list of node selector terms. The terms are ORed. type: array items: description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. type: object properties: matchExpressions: description: A list of node selector requirements by node's labels. type: array items: description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. type: object required: - key - operator properties: key: description: The label key that the selector applies to. type: string operator: description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. type: array items: type: string matchFields: description: A list of node selector requirements by node's fields. type: array items: description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. type: object required: - key - operator properties: key: description: The label key that the selector applies to. type: string operator: description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. type: string values: description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
type: array items: type: string </code></pre> <p>Wow, for just one field (and its sub fields) to be validated, my CRD definition has grown by 100+ lines, all just to reimplement something that already exists in the Kubernetes-native pod api definition. It also took about 15 minutes to manually copy paste and resolve all the references in the Kubernetes spec by hand. Wouldn't it make so much sense to either:</p> <p>A) Store this long API spec in an external file, and use $ref: externalfile.json to pull it in to keep my CRD small and clean.</p> <p>OR BETTER YET</p> <p>B) Insert the actual Kubernetes-native validation spec that ALREADY EXISTS with a $ref tag like this:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: workshops.k8s.example.tk spec: group: k8s.example.tk names: kind: Workshop listKind: WorkshopList plural: workshops singular: workshop scope: Namespaced subresources: status: {} validation: openAPIV3Schema: type: object properties: spec: type: object required: - workshopID properties: workshopID: # type: string description: Unique identifier for this particular virtual workshop example: d8e8fca2dc0f896fd7cb4cb0031ba249 affinity: type: object properties: nodeAffinity: $ref: &lt;kubernetes-api&gt;/openapi/v2#/definitions/io.k8s.api.core.v1.NodeAffinity </code></pre> <p>Back down to 30 or so lines of code, AND the validation spec stays up-to-date with Kubernetes native validation, since it's pulling the information from Kubernetes API itself. According to this, $ref should be supported in doing this: <a href="https://swagger.io/docs/specification/using-ref/#syntax" rel="nofollow noreferrer">https://swagger.io/docs/specification/using-ref/#syntax</a></p>
user2896438
<p>There unfortunately isn't a way to do this nicely currently. We solved the problem by writing <a href="https://gist.github.com/danrspencer/f695a22b15b1e4e3b3cae75a7a8a93ec" rel="nofollow noreferrer">a horrible Bash script</a> to rip the definition out of Kubernetes and include it via Helm templating in our CRD.</p>
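<p>For illustration, here's a minimal sketch of that idea, assuming <code>jq</code> and a v4 <code>yq</code> are available; the file names are placeholders, and the extracted snippet will still contain nested <code>$ref</code>s that you'd need to resolve yourself (which is what the linked script takes care of):</p> <pre class="lang-sh prettyprint-override"><code># Fetch the OpenAPI spec from the API server (e.g. via `kubectl proxy` on :8001)
curl -s localhost:8001/openapi/v2 &gt; swagger.json

# Pull out a single definition and convert it to YAML
jq '.definitions[&quot;io.k8s.api.core.v1.NodeAffinity&quot;]' swagger.json \
  | yq -P '.' &gt; nodeaffinity.yaml

# Splice nodeaffinity.yaml into the CRD, e.g. via Helm templating or a script
</code></pre>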
Dan
<p>Until now I developed Python applications locally using docker and docker-compose. Now I'd like to change my development workflow to use <code>skaffold</code> with <code>docker</code> as builder, with <code>kubectl</code> as deployer and with <code>minikube</code> for managing the local kubernetes cluster.</p> <p>Let's say I've this docker based hello world for FastAPI:</p> <p>project structure:</p> <pre><code>app/app.py Dockerfile </code></pre> <p>app/app.py</p> <pre><code>from typing import Optional from fastapi import FastAPI app = FastAPI() @app.get(&quot;/&quot;) def read_root(): return {&quot;Hello&quot;: &quot;World&quot;} @app.get(&quot;/items/{item_id}&quot;) def read_item(item_id: int, q: Optional[str] = None): return {&quot;item_id&quot;: item_id, &quot;q&quot;: q} </code></pre> <p>Dockerfile:</p> <pre><code>FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7 COPY ./app /app </code></pre> <p>If I run <code>docker build -t hello-fastapi .</code> and <code>docker run -p 80:80 hello-fastapi</code> I can access the service via <code>0.0.0.0</code> or <code>localhost</code>. I skip the <code>docker-compose</code> things here cause it does not matter w.r.t. the skaffold setup.</p> <p>To use <code>skaffold</code> I have the exact same project structure and content but I added the skaffold + kubectl specific things (<code>skaffold.yaml</code>, <code>deployment.yaml</code>):</p> <p>project structure:</p> <pre><code>app/app.py k8s/deployment.yaml Dockerfile skaffold.yaml </code></pre> <p>k8s/deployment.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: fastapi-service labels: app: fastapi-service spec: clusterIP: None ports: - port: 80 name: fastapi-service selector: app: fastapi-service --- apiVersion: apps/v1 kind: Deployment metadata: name: fastapi-service labels: app: fastapi-service spec: replicas: 1 selector: matchLabels: app: fastapi-service template: metadata: labels: app: fastapi-service spec: containers: - name: fastapi-service image: fastapi-service ports: - containerPort: 80 </code></pre> <p>skaffold.yaml</p> <pre><code>apiVersion: skaffold/v2beta10 kind: Config build: artifacts: - image: fastapi-image deploy: kubectl: manifests: - k8s/* </code></pre> <p>If I run <code>skaffold dev</code> everything seems to be fine:</p> <pre><code>Listing files to watch... - fastapi-service Generating tags... - fastapi-service -&gt; fastapi-service:latest Some taggers failed. Rerun with -vdebug for errors. Checking cache... - fastapi-service: Found Locally Tags used in deployment: - fastapi-service -&gt; fastapi-service:17659a877904d862184d7cc5966596d46b0765f1995f7abc958db4b3f98b8a35 Starting deploy... - service/fastapi-service created - deployment.apps/fastapi-service created Waiting for deployments to stabilize... - deployment/fastapi-service is ready. Deployments stabilized in 2.165700782s Press Ctrl+C to exit Watching for changes... [fastapi-service] Checking for script in /app/prestart.sh [fastapi-service] Running script /app/prestart.sh [fastapi-service] Running inside /app/prestart.sh, you could add migrations to this file, e.g.: [fastapi-service] [fastapi-service] #! 
/usr/bin/env bash [fastapi-service] [fastapi-service] # Let the DB start [fastapi-service] sleep 10; [fastapi-service] # Run migrations [fastapi-service] alembic upgrade head [fastapi-service] [fastapi-service] [2020-12-15 19:02:57 +0000] [1] [INFO] Starting gunicorn 20.0.4 [fastapi-service] [2020-12-15 19:02:57 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1) [fastapi-service] [2020-12-15 19:02:57 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker [fastapi-service] [2020-12-15 19:02:57 +0000] [8] [INFO] Booting worker with pid: 8 ... </code></pre> <p>However I cannot access the service via my web browser. How can I access the service from my local machine via e.g. the web browser?</p> <p><strong>EDIT</strong>:</p> <p>According to <code>minikube service list</code> the service <code>fastapi-service</code> exists:</p> <pre><code>|----------------------|---------------------------|--------------|-----| | NAMESPACE | NAME | TARGET PORT | URL | |----------------------|---------------------------|--------------|-----| | default | fastapi-service | No node port | | default | kubernetes | No node port | | kube-system | kube-dns | No node port | | kubernetes-dashboard | dashboard-metrics-scraper | No node port | | kubernetes-dashboard | kubernetes-dashboard | No node port | |----------------------|---------------------------|--------------|-----| </code></pre> <p>But I cannot access it via <code>curl $(minikube service fastapi-service --url)</code>:</p> <pre><code>curl: (3) Failed to convert 😿 to ACE; string contains a disallowed character curl: (6) Could not resolve host: service curl: (6) Could not resolve host: default curl: (6) Could not resolve host: has curl: (6) Could not resolve host: no curl: (6) Could not resolve host: node curl: (6) Could not resolve host: port </code></pre> <p>Probably this is related to <a href="https://stackoverflow.com/questions/60556096/unable-to-get-clusterip-service-url-from-minikube">Unable to get ClusterIP service url from minikube</a> . If I change <code>deployment.yaml</code> to</p> <pre><code>apiVersion: v1 kind: Service metadata: name: fastapi-service labels: app: fastapi-service spec: type: NodePort ports: - targetPort: 80 port: 80 selector: app: fastapi-service --- apiVersion: apps/v1 kind: Deployment metadata: name: fastapi-service labels: app: fastapi-service spec: replicas: 1 selector: matchLabels: app: fastapi-service template: metadata: labels: app: fastapi-service spec: containers: - name: fastapi-service image: fastapi-service ports: - containerPort: 80 </code></pre> <p>accessing the service via <code>curl $(minikube service fastapi-service --url)</code> is successful:</p> <pre><code>{&quot;message&quot;:&quot;Hello world! From FastAPI running on Uvicorn with Gunicorn. Using Python 3.7&quot;} </code></pre> <p>However I cannot access the service via the web browser.</p>
thinwybk
<p>Use Skaffold <a href="https://skaffold.dev/docs/pipeline-stages/port-forwarding/" rel="nofollow noreferrer"><code>--port-forward</code></a> to enable automatic port forwarding of services and user-defined port-forwards.</p> <p>Skaffold normally tries to select a local port that is close to the remote port where possible. But since port 80 is a protected port, Skaffold will choose a different local port. You can explicitly <a href="https://skaffold.dev/docs/pipeline-stages/port-forwarding/#user-defined-port-forwarding" rel="nofollow noreferrer">configure a port forward in your <code>skaffold.yaml</code></a> to specify the local port. Using user-defined port-forwards can help avoid potential confusion should you add other services later on.</p> <pre><code>apiVersion: skaffold/v2beta10 kind: Config build: artifacts: - image: fastapi-service deploy: kubectl: manifests: - k8s/* portForward: - resourceType: deployment resourceName: fastapi-service port: 80 localPort: 9000 </code></pre> <p>Instead of running <code>skaffold dev</code> run with <code>skaffold dev --port-forward</code>. Now you can access the service via <code>localhost:9000</code>.</p> <p>You can also use <code>kubectl port-forward</code> to manually forward ports; Skaffold uses this same functionality to implement its port forwarding.</p>
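<p>For example, the manual equivalent with <code>kubectl</code> (using the Deployment name and ports from the manifests above) would be something like:</p> <pre class="lang-sh prettyprint-override"><code># Forward local port 9000 to port 80 of the Deployment's Pod
kubectl port-forward deployment/fastapi-service 9000:80

# Then, in another terminal
curl localhost:9000
</code></pre>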
Brian de Alwis
<p>I am trying to set a non-default Service Account to the node pool that I am creating.</p> <p>However, every time with the following code, the node pool shows as using the <code>default</code> service account.</p> <pre><code>resource &quot;google_container_node_pool&quot; &quot;node_pool&quot; { ... service_account = &quot;myserviceaccount@&lt;id&gt;.iam.gserviceaccount.com&quot; oauth_scopes = [ &quot;https://www.googleapis.com/auth/cloud-platform&quot; ] } </code></pre> <p>When I check on the GKE console it shows the Service Account as <code>default</code> rather than my specified account.</p> <p>I have confirmed in the console, that a node group can be manually created with <code>myserviceaccount</code> set as the Service Account for the node group. It is only with Terraform this is not working.</p> <p>How do I set my own service account when creating the node pool?</p> <p>Any help on this would be greatly appreciated!</p>
fuzzi
<p>It's unclear from your question whether your <code>service_account</code> is, as required, part of the <code>node_config</code> block, which is part of the resource.</p> <p>See <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#example-usage---with-a-separately-managed-node-pool-recommended" rel="nofollow noreferrer">example</a></p>
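<p>For illustration, a minimal sketch reusing the values from your snippet (everything outside <code>node_config</code> is elided):</p> <pre><code>resource &quot;google_container_node_pool&quot; &quot;node_pool&quot; {
  # ...

  node_config {
    service_account = &quot;myserviceaccount@&lt;id&gt;.iam.gserviceaccount.com&quot;
    oauth_scopes = [
      &quot;https://www.googleapis.com/auth/cloud-platform&quot;
    ]
  }
}
</code></pre>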
DazWilkin
<p>I have mounted a volume containing a username and password inside my pod. If I do:</p> <pre><code>kubectl exec -it my-app -- cat /mnt/secrets-store/git-token {&quot;USERNAME&quot;:&quot;usernameofgit&quot;,&quot;PASSWORD&quot;:&quot;dhdhfhehfhel&quot;} </code></pre> <p>I want to read this USERNAME and PASSWORD using Spring Boot.</p>
Chintamani
<p>Assuming:</p> <ul> <li>the file (git_token) format is fixed (JSON).</li> <li>the file may not have an extension suffix (.json).</li> </ul> <p>... we have some Problems!</p> <p>I tried <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config.files.importing-extensionless" rel="nofollow noreferrer">2.3.5. Importing Extensionless Files</a> like:</p> <pre><code>spring.config.import=/mnt/secrets-store/git-token[.json] </code></pre> <p>But it works only with YAML/.properties yet!(tested with spring-boot:2.6.1))</p> <p>Same applies to <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config.typesafe-configuration-properties" rel="nofollow noreferrer">2.8. Type-safe Configuration Properties</a>. ;(;(</p> <hr /> <p>In Spring-Boot we can (out-of-the box) provide JSON-config (only) as <code>SPRING_APPLICATION_JSON</code> environment/command line property, and <em>it has to be the json string</em>, and cannot be a path or file (yet).</p> <hr /> <p>The proposed (baeldung) article shows ways to &quot;enable JSON properties&quot;, but it is a long article with many details, shows much code and has decent lacks/outdates (@Component on @ConfigurationProperties is rather &quot;unconventional&quot;)..</p> <hr /> <p>I tried the following (on local machine, under the mentioned assumptions):</p> <pre class="lang-scala prettyprint-override"><code>package com.example.demo; import com.fasterxml.jackson.annotation.JsonProperty; import lombok.Data; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.CommandLineRunner; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.annotation.Bean; @SpringBootApplication public class DemoApplication { public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } @Value(&quot;&quot;&quot; #{@jacksonObjectMapper.readValue( T(java.nio.file.Files).newInputStream( T(java.nio.file.Path).of('/mnt/secrets-store/git-token')), T(com.example.demo.GitInfo) )}&quot;&quot;&quot; // watch out with @Value and text blocks! (otherwise: No converter found capable of converting from type [com.example.demo.GitInfo] to type [java.lang.String]) ) GitInfo gitInfo; @Bean CommandLineRunner runner() { return (String... args) -&gt; { System.out.println(gitInfo.getUsername()); System.out.println(gitInfo.getPassword()); }; } } @Data class GitInfo { @JsonProperty(&quot;USERNAME&quot;) private String username; @JsonProperty(&quot;PASSWORD&quot;) private String password; } </code></pre> <p>With (only) spring-boot-starter-web and lombok on board, it prints the expected output.</p> <hr /> <p>Solution outline:</p> <ul> <li>a pojo for this <ul> <li>the upper case is little problematic, but can be handled as shown.</li> </ul> </li> <li>a (crazy) <code>@Value</code> - (Spring-)Expression, involving: <ul> <li>(hopefully) auto-configured <code>@jacksonObjectMapper</code> bean. 
(alternatively: custom)</li> <li><a href="https://fasterxml.github.io/jackson-databind/javadoc/2.13/com/fasterxml/jackson/databind/ObjectMapper.html" rel="nofollow noreferrer">ObjectMapper#readValue</a> (alternatives possible)</li> <li><a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/nio/file/Files.html#newInputStream(java.nio.file.Path,java.nio.file.OpenOption...)" rel="nofollow noreferrer">java.nio.file.Files#newInputStream</a> (alternatives possible)</li> <li><a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/nio/file/Path.html#of(java.lang.String,java.lang.String...)" rel="nofollow noreferrer">java.nio.file.Path#of</a></li> </ul> </li> </ul>
xerx593
<p>I have a backend service description in skaffold.yaml similar to this:</p> <pre><code>... deploy: helm: releases: - name: 'myapp-backend' chartPath: myapp-chart-backend values: APP_IMAGE: ... namespace: myapp-ns recreatePods: true ... </code></pre> <p>After the cluster is up I list the pods, and <code>kubectl get pods</code> returns</p> <pre><code>... myapp-backend-7dbf4b6fb8-kw7zv myapp-backend-redis-646f454bcb-7thrc ... </code></pre> <p>I need the full name of the pod (<code>myapp-backend-7dbf4b6fb8-kw7zv</code>) to use it in a <code>kubectl cp</code> command, which requires the full name. I run this command in a bash script, so I need to get the full name <code>myapp-backend-7dbf4b6fb8-kw7zv</code> from the name <code>myapp-backend</code>.</p>
Log
<p>Assuming you know the name in the deployment ('myapp-backend' in this case), you can:</p> <pre><code>kubectl get pods --selector=app=myapp-backend -o jsonpath='{.items[*].metadata.name}' </code></pre> <p><strong>Update</strong></p> <p>Since I obviously don't have access to your environment, I've provided a general path to a solution; you can fiddle with this command but the idea will probably remain the same:</p> <ol> <li>call <code>kubectl get pods --selector=...</code> (it's possible that you need to add more selectors in your environment)</li> <li>Assume that the output is json. One nifty trick here is to examine the json by using: <code>kubectl get pods --selector=app=&lt;myapp-backend&gt; -o json</code>. You'll get a properly formatted json that you can inspect to see what part of it you actually want to get.</li> <li>query by jsonpath only the part of the json that you want by providing a jsonpath expression, for example <code>{.items[0].metadata.name}</code> will also work</li> </ol>
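<p>Putting it together with <code>kubectl cp</code> in a script might look like this (the paths are placeholders):</p> <pre class="lang-sh prettyprint-override"><code>POD=$(kubectl get pods --selector=app=myapp-backend \
  --output=jsonpath='{.items[0].metadata.name}')

kubectl cp &quot;${POD}:/path/in/container&quot; ./local-dir
</code></pre>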
Mark Bramnik
<p>We have a custom metric that gets exported only upon some error condition in the app.</p> <p>An alert rule uses that custom metric and gets registered with Prometheus' rule manager.</p> <p>Why does Prometheus not raise an error when this metric name is queried, despite the metric not being available in Prometheus yet?</p>
overexchange
<p>It seems correct that the absence of a signal is not treated as an error.</p> <p>However, it can cause problems with dashboards and alerting.</p> <p>See this presentation by one of Prometheus' creators: <a href="https://promcon.io/2017-munich/slides/best-practices-and-beastly-pitfalls.pdf" rel="nofollow noreferrer">Best Practices &amp; Beastly Pitfalls</a></p>
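<p>If you want to be alerted when the metric hasn't been exported at all, a common approach is Prometheus' <code>absent()</code> function. A sketch of such a rule (the metric name is a placeholder; adapt it to however you load rules, e.g. a <code>PrometheusRule</code> resource):</p> <pre class="lang-yaml prettyprint-override"><code>groups:
- name: example
  rules:
  - alert: MyCustomMetricAbsent
    expr: absent(my_custom_error_metric)
    for: 10m
</code></pre>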
DazWilkin
<p>We are taking the step to upgrade our infrastructure setup and are doing some R&amp;D with K8s.</p> <p>We believe k8s is the solution we want to implement, however I've hit a brick wall.</p> <p>I'm really struggling to get k8s to pull an image from a private registry that uses a hostname that does not exist.</p> <p>I have followed instructions online and have successfully added a host record to coredns - I have verified it resolves correctly using throwaway containers, yet it seems like whenever I try to pull an image, I get the same error:</p> <pre><code>Failed to pull image &quot;fake.host.uk/app&quot;: rpc error: code = Unknown desc = Error response from daemon: Get &quot;https://fake.host.uk/v2/&quot;: dial tcp: lookup fake.host.uk: no such host </code></pre> <p>Doing a <code>docker login fake.host.uk</code> works absolutely fine. I can also see my added hosts via</p> <p><code>kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools</code></p> <p>Then running <code>ping fake.host.uk</code> brings back the correct IP.</p> <p>However, trying to pull an image just doesn't work; how can I solve this problem?</p>
Matt
<p>DNS resolution needs to be set up for each node in your cluster, preferably by updating a common DNS server, but you can also update /etc/hosts on every node in the cluster. Kubernetes and docker pull images from the node and not from within a container, so they won't see the settings applied to things like coredns (it would create a circular dependency: how would you resolve the name of the coredns image's registry?).</p>
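<p>For example, as a quick workaround you could add the record to <code>/etc/hosts</code> on every node (the IP address is a placeholder; a proper DNS entry is preferable):</p> <pre class="lang-sh prettyprint-override"><code>echo &quot;10.0.0.5 fake.host.uk&quot; | sudo tee -a /etc/hosts
</code></pre>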
BMitch
<p>How can I share the Spring cache between replicated microservices in kubernetes? Which technology should I use? The solution should be able to support a highly available configuration (e.g.: master/slave; active-active DR and so on).</p> <p>Please provide the spring boot configuration as well if possible.</p>
Fabry
<p>Spring cache has a concept of cache providers - technologies standing behind the caching implementation.</p> <p>By default the cache is implemented in-memory, so it's not replicated in any manner. However you can configure it to run, say, with Redis. <a href="https://dzone.com/articles/quickstart-how-to-use-spring-cache-on-redis" rel="nofollow noreferrer">See this tutorial for technical implementation aspects</a></p> <p>In this case the same redis instance should be accessible from all kubernetes nodes running the pods of your application and you will have a distributed cache.</p> <p>Depending on your actual needs you can also check the integration with <a href="https://mkyong.com/spring/spring-caching-and-ehcache-example/" rel="nofollow noreferrer">EHcache</a> or <a href="https://hazelcast.com/blog/spring-boot/" rel="nofollow noreferrer">Hazelcast</a>. I'm sure other options also exist but this should give you a direction.</p>
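<p>A minimal sketch of the application side, assuming the Redis provider, <code>spring-boot-starter-data-redis</code> plus <code>spring-boot-starter-cache</code> on the classpath, <code>@EnableCaching</code> on a configuration class, and a Redis Service reachable as <code>redis</code> inside the cluster (the host name is an assumption; property names may differ slightly between Spring Boot versions):</p> <pre><code># application.properties
spring.cache.type=redis
spring.redis.host=redis
spring.redis.port=6379
</code></pre>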
Mark Bramnik
<p>So I have a kubernetes cronjob object set to run periodically.</p> <pre class="lang-py prettyprint-override"><code>NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE ticketing-job-lifetime-manager 45 */4 * * * False 0 174m 25d </code></pre> <p>and I know how to call it manually:</p> <pre class="lang-yaml prettyprint-override"><code># ticketing-job-manual-call will be the name of the job that runs kubectl create job --from=cronjobs/ticketing-job-lifetime-manager ticketing-job-manual-call </code></pre> <p><strong>BUT</strong> - what I want to do is call the job, but modify portions of it (shown below) before it is called. Specifically <code>items.metadata.annotations</code> and <code>items.spec.jobTemplate.spec.containers.args</code>.</p> <p>If this is possible on-the-fly, I'd be over the moon. If it requires creating a temporary object, then I'd appreciate an approach to doing this that is robust, performant - and safe. Thanks!</p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: v1 items: - apiVersion: batch/v1 kind: CronJob metadata: annotations: &lt;annotation-1&gt; &lt;- want to modify these &lt;annotation-2&gt; .. &lt;annotation-n&gt; creationTimestamp: &quot;2022-05-03T13:24:49Z&quot; labels: AccountID: foo FooServiceAction: &quot;true&quot; FooServiceManaged: &quot;true&quot; CronName: foo name: foo namespace: my-namespace resourceVersion: &quot;298013999&quot; uid: 57b2-4612-88ef-a0d5e26c8 spec: concurrencyPolicy: Replace jobTemplate: metadata: annotations: &lt;annotation-1&gt; &lt;- want to modify these &lt;annotation-2&gt; .. &lt;annotation-n&gt; creationTimestamp: null labels: AccountID: 7761777c38d93b TicketServiceAction: &quot;true&quot; TicketServiceManaged: &quot;true&quot; CronName: ticketing-actions-7761777c38d93b-0 name: ticketing-actions-7761777c38d93b-0 namespace: rias spec: containers: - args: - --accountid=something &lt;- want to modify these - --faultzone=something - --type=something - --cronjobname=something - --plans=something command: - ./ticketing-job env: - name: FOO_BAR &lt;- may want to modify these value: &quot;false&quot; - name: FOO_BAZ value: &quot;true&quot; </code></pre>
Oliver Williams
<p>The way to think about this is that Kubernetes resources are defined (definitively) by YAML|JSON config files. A useful advantage to having config files is that these can be checked into source control; you automatically audit your work if you create unique files for each resource (for every change).</p> <p>Kubernetes (<code>kubectl</code>) isn't optimized|designed to tweak Resources although you can use <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer"><code>kubectl patch</code></a> to update deployed Resources.</p> <p>I encourage you to consider a better approach that is applicable to any Kubernetes resource (not just <code>Job</code>s) and it focuses on using YAML|JSON files as the way to represent state:</p> <ol> <li><code>kubectl get</code> the resource and output it as YAML|JSON (<code>--output=json|yaml</code>) persisting the result to a file (that could be source-controlled)</li> <li>Mutate the file using any of many tools but preferably YAML|JSON processing tools (e.g. <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer"><code>yq</code></a> or <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a>)</li> <li><code>kubectl create</code> or <code>kubectl apply</code> the resulting file that reflects the intended configuration of the new resource.</li> </ol> <p>By way of example, assuming you use <code>jq</code>:</p> <pre class="lang-sh prettyprint-override"><code># Output the 'ticketing-job-lifetime-manager' CronJob as a JSON file
kubectl get cronjob/ticketing-job-lifetime-manager \
  --namespace=${NAMESPACE} \
  --output=json &gt; ${PWD}/ticketing-job-lifetime-manager.json

# E.g. replace '.metadata.annotations' entirely
jq '.metadata.annotations={&quot;foo&quot;:&quot;x&quot;,&quot;bar&quot;:&quot;y&quot;}' \
  ${PWD}/ticketing-job-lifetime-manager.json \
  &gt; ${PWD}/annotated-job.json

# E.g. append '--key=value' to the 'args' of the specific container 'foo'
jq '(.spec.jobTemplate.spec.containers[]|select(.name==&quot;foo&quot;)|.args) += [&quot;--key=value&quot;]' \
  ${PWD}/annotated-job.json \
  &gt; ${PWD}/new-job.json

# Etc.

# Apply
kubectl create \
  --filename=${PWD}/new-job.json \
  --namespace=${NAMESPACE}
</code></pre> <blockquote> <p><strong>NOTE</strong> You can pipe the output from the <code>kubectl get</code> through <code>jq</code> and into <code>kubectl create</code> if you wish but it's useful to keep a file-based record of the resource.</p> </blockquote> <p>Having to deal with YAML|JSON config files is a common issue with Kubernetes (and every other technology that uses them). There are other tools, e.g. <a href="https://jsonnet.org/" rel="nofollow noreferrer">jsonnet</a> and <a href="https://cuelang.org/docs/integrations/k8s/" rel="nofollow noreferrer">CUE</a>, that try to provide a more programmatic way to manage YAML|JSON.</p>
DazWilkin
<p>I use minikube with Docker driver on Linux. For a manual workflow I can enable registry addon in minikube, push there my images and refer to them in deployment config file simply as <code>localhost:5000/anything</code>. Then they are pulled to a minikube's environment by its Docker daemon and deployments successfully start in here. As a result I get all the base images saved only on my local device (as I build my images using my local Docker daemon) and minikube's environment gets cluttered only by images that are pulled by its Docker daemon.</p> <p>Can I implement the same workflow when use Skaffold? By default Skaffold uses minikube's environment for both building images and running containers out of them, and also it duplicates (sometimes even triplicates) my images inside minikube (don't know why).</p>
b.niki
<p>Skaffold builds directly to Minikube's Docker daemon as an optimization so as to avoid the additional retrieve-and-unpack required when pushing to a registry.</p> <p>I believe your duplicates are like the following:</p> <pre><code>$ (eval $(minikube docker-env); docker images node-example) REPOSITORY TAG IMAGE ID CREATED SIZE node-example bb9830940d8803b9ad60dfe92d4abcbaf3eb8701c5672c785ee0189178d815bf bb9830940d88 3 days ago 92.9MB node-example v1.17.1-38-g1c6517887 bb9830940d88 3 days ago 92.9MB </code></pre> <p>Although these images have different tags, those tags are just pointers to the same Image ID so there is a single image being retained.</p> <p>Skaffold normally cleans up left-over images from previous runs. So you shouldn't see the minikube daemon's space continuously growing.</p> <hr /> <p>An aside: even if those Image IDs were different, an image is made up of multiple <em>layers</em>, and those layers are shared across the images. So Docker's reported image sizes may not actually match the actual disk space consumed.</p>
Brian de Alwis
<p>I've tried everything I can think of. It's not in daemon.json; maybe it's in some Docker binary file.</p> <pre><code> # cat /etc/docker/daemon.json { &quot;registry-mirrors&quot;: [ &quot;https://d8b3zdiw.mirror.aliyuncs.com&quot; ], &quot;insecure-registries&quot;: [ &quot;https://ower.site.com&quot; </code></pre> <p>This is docker info</p> <pre><code>[root@k8s-master ~]# docker info Client: Context: default Debug Mode: false Plugins: app: Docker App (Docker Inc., v0.9.1-beta3) buildx: Docker Buildx (Docker Inc., v0.7.1-docker) scan: Docker Scan (Docker Inc., v0.12.0) Server: Containers: 25 Running: 20 Paused: 0 Stopped: 5 Images: 13 Server Version: 20.10.12 Storage Driver: overlay2 Backing Filesystem: xfs Supports d_type: true Native Overlay Diff: true userxattr: false Logging Driver: json-file Cgroup Driver: systemd Cgroup Version: 1 Plugins: Volume: local Network: bridge host ipvlan macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog Swarm: inactive Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc Default Runtime: runc Init Binary: docker-init containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d runc version: v1.0.2-0-g52b36a2 init version: de40ad0 Security Options: seccomp Profile: default Kernel Version: 3.10.0-1160.el7.x86_64 Operating System: CentOS Linux 7 (Core) OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 3.701GiB Name: k8s-master ID: P6VN:S2FI:AU6D:LBCO:PPX7:KREJ:7OIQ:2K2J:XISF:MZGT:YSDB:XFIG Docker Root Dir: /data/docker Debug Mode: false Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false </code></pre> <p>Disconnect the network, exec &quot;kubectl create configmap nginx-config --from-file=nginx.conf&quot;, then exec &quot;kubectl describe pod nginx1-5c9f6bbd8c-4jmx2&quot;. The messages display</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 34s default-scheduler Successfully assigned default/nginx1-5c9f6bbd8c-4jmx2 to k8s-master Warning Failed 33s kubelet Failed to pull image &quot;nginx&quot;: rpc error: code = Unknown desc = Error response from daemon: Get &quot;https://registry-1.docker.io/v2/&quot;: read tcp 10.15.29.150:51492-&gt;54.85.133.123:443: read: connection reset by peer </code></pre> <p>I want to know where &quot;registry-1.docker.io&quot; comes from.</p>
zhuwei
<p>You are showing an issue with kubernetes, but making your configuration on docker. While these both run containers, they have separate configurations. For configuring the mirroring, it depends on how you run your containers in kubernetes. The most popular runtime being containerd's CRI, which has <a href="https://github.com/containerd/containerd/blob/main/docs/cri/registry.md" rel="nofollow noreferrer">documentation on configuring their registry settings</a>:</p> <p><code>/etc/containerd/config.toml</code>:</p> <pre><code>version = 2 [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors] [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors.&quot;docker.io&quot;] endpoint = [&quot;https://registry-1.docker.io&quot;] </code></pre> <p>Which has been replaced with:</p> <p><code>/etc/containerd/config.toml</code>:</p> <pre><code>[plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry] config_path = &quot;/etc/containerd/certs.d&quot; </code></pre> <p>and</p> <p><code>/etc/containerd/certs.d/docker.io/hosts.toml</code>:</p> <pre><code>server = &quot;https://docker.io&quot; [host.&quot;https://registry-1.docker.io&quot;] # ... </code></pre> <p>As for where <code>registry-1.docker.io</code> comes from, it's the DNS name for the Docker Hub registry, which is the default when you do not specify a registry in your image name.</p>
BMitch
<p>I have one of my spec files like this</p> <pre class="lang-yaml prettyprint-override"><code> containers: - name: webserver image: PRIVATE_REPO/project/projectname:${TAG} imagePullPolicy: Always ports: - containerPort: 8080 name: http </code></pre> <p>I have the TAG value defined in an env file. When I did this with docker-compose it worked. I don't think I can do this directly with Kubernetes, so I was wondering if there is a way to do it.</p>
Bharat Ramaswamy Nandakumar
<p>You cannot do this the same way in Kubernetes.</p> <p>Docker Compose permits transformations to be applied to its YAML config files so that you can, for example, replace environment variables with values as you were doing. Essentially, the YAML file you provide to Docker Compose is a template that the tool transforms into the YAML that it uses to deploy containers.</p> <p>Kubernetes CLI (<code>kubectl</code>) does not support such transformations. You will need to ensure that the YAML files you provide to <code>kubectl</code> are a literal representation of the YAML that you want to be applied to the cluster.</p> <p>There are various ways to address this &quot;templating problem&quot; with Kubernetes but you will need to use additional tools to do this.</p> <p>Simplistically, you can use a Linux tool like <code>sed</code> to replace variables with their values. Because you'll likely be using YAML configs, you can use a tool like <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer"><code>yq</code></a> that is designed to process YAML and, because <code>yq</code> understands YAML structure, the tool is better suited than e.g. <code>sed</code>.</p> <p>Because this is a common need with Kubernetes, there are Kubernetes-specific tools for templating configurations files. See <a href="https://helm.sh" rel="nofollow noreferrer">Helm</a>, <a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a>, <a href="https://jsonnet.org/articles/kubernetes.html" rel="nofollow noreferrer">Jsonnet</a> and <a href="https://cuelang.org/docs/integrations/k8s/" rel="nofollow noreferrer">CUE</a> among others.</p>
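<p>For example, a simple sketch using <code>envsubst</code> (from GNU gettext) to substitute <code>${TAG}</code> before piping to <code>kubectl</code> (note it will substitute every <code>${...}</code> reference it finds, so keep the template free of other shell-style variables):</p> <pre class="lang-sh prettyprint-override"><code>export TAG=1.2.3
envsubst &lt; deployment.yaml | kubectl apply -f -
</code></pre>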
DazWilkin
<p>In the <a href="https://skaffold.dev/docs/environment/local-cluster/" rel="nofollow noreferrer">skaffold documentation</a> it says it will auto-detect a local cluster based upon the kubernetes context and, if it is not a local cluster, it will push to a container repo.</p> <p>I am running skaffold on a Mac, but I do not see that behavior. When I run it with <code>skaffold run</code> on minikube, it does what I expect. But when I change the context to my remote cluster, it does not push the image to the remote container registry. I'm somewhat new to skaffold so I would love any ideas on how to debug this or anything that might cause this behavior.</p> <p>Edit: adding my ~/.skaffold/config file</p> <pre class="lang-yaml prettyprint-override"><code>global: local-cluster: true survey: last-prompted: &quot;2021-01-18T14:06:13-05:00&quot; kubeContexts: - kube-context: minikube local-cluster: true </code></pre>
Mike Nishizawa
<p>Setting <code>local-cluster: true</code> in your <code>~/.skaffold/config</code> instructs Skaffold to treat that cluster as a local-cluster. When in the <code>global</code> section, Skaffold will treat <em>all</em> clusters as local.</p>
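<p>Given the config shown in the question, removing <code>local-cluster: true</code> from the <code>global</code> section (keeping it only under the <code>minikube</code> context) should restore the expected behavior; roughly:</p> <pre class="lang-yaml prettyprint-override"><code>global:
  survey:
    last-prompted: &quot;2021-01-18T14:06:13-05:00&quot;
kubeContexts:
- kube-context: minikube
  local-cluster: true
</code></pre>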
Brian de Alwis
<p>I have an mssql pod for which I need to use the <a href="https://github.com/free/sql_exporter" rel="nofollow noreferrer">sql_exporter</a> to export its metrics. I was able to set up this whole thing manually fine:</p> <ol> <li>download the binary</li> <li>install the package</li> <li>run ./sql_exporter on the pod to start listening on a port for metrics</li> </ol> <p>I tried to automate this using <code>kubectl exec -it ...</code> and was able to do steps 1 and 2. When I try to do step 3 with <code>kubectl exec -it &quot;$mssql_pod_name&quot; -- bash -c ./sql_exporter</code> the command just hangs, which I understand, as the server is just going to keep listening forever, but this stops the rest of my installation scripts.</p> <pre><code>I0722 21:26:54.299112 435 main.go:52] Starting SQL exporter (version=0.5, branch=master, revision=fc5ed07ee38c5b90bab285392c43edfe32d271c5) (go=go1.11.3, user=root@f24ba5099571, date=20190114-09:24:06) I0722 21:26:54.299534 435 config.go:18] Loading configuration from sql_exporter.yml I0722 21:26:54.300102 435 config.go:131] Loaded collector &quot;mssql_standard&quot; from mssql_standard.collector.yml I0722 21:26:54.300207 435 main.go:67] Listening on :9399 &lt;nothing else, never ends&gt; </code></pre> <p>Any tips on just silencing this and letting it run in the background (I cannot ctrl-c as that will stop the port-listening)? Or is there a better way to automate plugin install upon pod deployment? Thank you</p>
bZhang
<h5>To answer your question:</h5> <p>This <a href="https://stackoverflow.com/a/49245303/609290">answer</a> should help you. You should (!?) be able to use <code>./sql_exporter &amp;</code> to run the process in the background (when <strong>not</strong> using <code>--stdin --tty</code>). If that doesn't work, you can try <code>nohup</code> as described by the same answer.</p> <h5>To recommend a better approach:</h5> <p>Using <code>kubectl exec</code> is not a good way to program a Kubernetes cluster.</p> <p><code>kubectl exec</code> is best used for debugging rather than deploying solutions to a cluster.</p> <p>I assume someone has created a Kubernetes Deployment (or similar) for Microsoft SQL Server. You now want to complement that Deployment with the exporter.</p> <p>You have options:</p> <ol> <li>Augment the existing Deployment and add the <code>sql_exporter</code> as a sidecar (another container) in the Pod that includes the Microsoft SQL Server container. The exporter accesses the SQL Server via <code>localhost</code>. This is a common pattern when deploying functionality that complements an application (e.g. logging, monitoring)</li> <li>Create a new Deployment (or similar) for the <code>sql_exporter</code> and run it as a standalone Service. Configure it scrape one|more Microsoft SQL Server instances.</li> </ol> <p>Both these approaches:</p> <ul> <li>take more work but they're &quot;more Kubernetes&quot; solutions and provide better repeatability|auditability etc.</li> <li>require that you create a <a href="https://hub.docker.com/r/githubfree/sql_exporter" rel="nofollow noreferrer">container for <code>sql_exporter</code></a> <strike>(although I assume the exporter's authors already provide this)</strike>.</li> </ul>
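<p>As a sketch of option 1 (container names, image tags and the exporter's configuration are assumptions you'd need to fill in), the exporter simply becomes a second container in the SQL Server Pod's spec:</p> <pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
  - name: mssql
    image: mcr.microsoft.com/mssql/server      # your existing container
    ports:
    - containerPort: 1433
  - name: sql-exporter
    image: githubfree/sql_exporter             # see link above
    ports:
    - containerPort: 9399
    # plus a volume/ConfigMap mount for sql_exporter.yml pointing at localhost:1433
</code></pre>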
DazWilkin
<p>Recently, prometheus-operator has been promoted to stable helm chart (<a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/prometheus-operator</a>). </p> <p>I'd like to understand how to add a custom application to monitoring by prometheus-operator in a k8s cluster. An example for, say, gitlab runner, which by default provides metrics on 9252, would be appreciated (<a href="https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server" rel="noreferrer">https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server</a>).</p> <p>I have a rudimentary yaml that obviously doesn't work but also doesn't provide any feedback on <em>what</em> isn't working:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: gitlab-monitor # Change this to the namespace the Prometheus instance is running in namespace: default labels: app: gitlab-runner-gitlab-runner release: prometheus spec: selector: matchLabels: app: gitlab-runner-gitlab-runner namespaceSelector: # matchNames: # - default any: true endpoints: - port: http-metrics interval: 15s </code></pre> <p>This is the prometheus configuration:</p> <pre><code>&gt; kubectl get prometheus -o yaml ... serviceMonitorNamespaceSelector: {} serviceMonitorSelector: matchLabels: release: prometheus ... </code></pre> <p>So the selectors should match. By "not working" I mean that the endpoints do not appear in the prometheus UI.</p>
andig
<p>Thanks to Peter, who showed me that the idea in principle wasn't entirely incorrect, I've found the missing link. As a <code>servicemonitor</code> does monitor services (haha), I missed the part about creating a service, which isn't part of the gitlab helm chart. Finally this yaml did the trick for me and the metrics appear in Prometheus:</p> <pre><code># Service targeting gitlab instances apiVersion: v1 kind: Service metadata: name: gitlab-metrics labels: app: gitlab-runner-gitlab-runner spec: ports: - name: metrics # expose metrics port port: 9252 # defined in gitlab chart targetPort: metrics protocol: TCP selector: app: gitlab-runner-gitlab-runner # target gitlab pods --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: gitlab-metrics-servicemonitor # Change this to the namespace the Prometheus instance is running in # namespace: default labels: app: gitlab-runner-gitlab-runner release: prometheus spec: selector: matchLabels: app: gitlab-runner-gitlab-runner # target gitlab service endpoints: - port: metrics interval: 15s </code></pre> <p>Nice to know: the <code>metrics</code> <code>targetPort</code> is defined in the gitlab runner chart.</p>
andig
<p>I have the following skaffolding</p> <pre><code>build: tagPolicy: sha256: {} artifacts: - image : sdk context: docker docker: dockerfile: Dockerfile.sdk buildArgs: CONFIGURATION: Debug - image : app context: docker docker: dockerfile: Dockerfile.app buildArgs: CONFIGURATION: Debug - image: azu context: rt/azu/src docker: dockerfile: Dockerfile.worker buildArgs: VERSION : Debug </code></pre> <p>The first two images are built just fine, whereas the third fails. The second build depends on the first and the third depends on the second. However the third fails with <code>"MANIFEST_UNKNOWN: manifest unknown"</code> because it tries to retrieve the image from docker hub. If I change the context of the third build to docker (which will make the building of the image fail) skaffold finds the local image. What can I do to keep the correct context and make skaffold aware that it shouldn't pull from docker hub but use the locally built image?</p>
Rune FS
<p><em>(Aside: It's difficult to help if you don't include your full <code>skaffold.yaml</code> and your <code>Dockerfile</code>s. Just redact pieces that are private.)</em></p> <p>Skaffold &quot;just&quot; orchestrates the builds and has no influence over how the underlying builders resolve images. But Skaffold <em>does</em> instruct the underlying builders how the resulting image should be tagged. Skaffold provides a set of <a href="https://skaffold.dev/docs/pipeline-stages/taggers/" rel="nofollow noreferrer"><em>tagging policies</em></a>.</p> <p>So it sounds like your <code>rt/azu/src/Dockerfile.worker</code> has a <code>FROM</code> that is not referencing your <code>app</code> image — perhaps the image is being tagged differently than what you're referencing. Skaffold's default tagger uses the git commit, resulting in image references like <code>app:v1.12.0-37-g6af3198f3-dirty</code>.</p> <p>If my suspicion is true, then you'll want to use the <code>envTemplate</code> tagger so that your built images use a more controlled and predictable image tag that you can embed into your <code>Dockerfile</code>s.</p>
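<p>For example, a tag policy along these lines (the <code>TAG</code> environment variable name is an assumption; on older Skaffold versions the template may need to include <code>{{.IMAGE_NAME}}</code>) gives you a predictable tag that you can also reference in a <code>FROM</code> line:</p> <pre><code>build:
  tagPolicy:
    envTemplate:
      template: &quot;{{.TAG}}&quot;
</code></pre>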
Brian de Alwis
<p>Generally you can use <code>kops get secrets kube --type secret -oplaintext</code>, but I am not running on AWS and am using GCP. </p> <p>I read that <code>kubectl config view</code> should show you this info, but I see no such thing (wondering if this has to do with GCP serviceaccount setup, am also using GKE).</p> <p>The <code>kubectl config view</code> returns something like:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://MY_IP name: MY_CLUSTER_NAME contexts: - context: cluster: MY_CLUSTER_NAME user: MY_CLUSTER_NAME name: MY_CLUSTER_NAME current-context: MY_CONTEXT_NAME kind: Config preferences: {} users: - name: MY_CLUSTER_NAME user: auth-provider: config: access-token: MY_ACCESS_TOKEN cmd-args: config config-helper --format=json cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud expiry: 2019-02-27T03:20:49Z expiry-key: '{.credential.token_expiry}' token-key: '{.credential.access_token}' name: gcp </code></pre> <p>Neither <strong>Username</strong>=><code>Admin</code> or <strong>Username</strong>=><code>MY_CLUSTER_NAME</code> worked with <strong>Password</strong>=><code>MY_ACCESS_TOKEN</code></p> <p>Any ideas?</p>
Alexander Kleinhans
<p>Try:</p> <pre class="lang-sh prettyprint-override"><code>gcloud container clusters describe ${CLUSTER} \ --flatten="masterAuth" [--zone=${ZONE}|--region=${REGION}] \ --project=${PROJECT} </code></pre> <p>It's possible that your cluster has basic authentication (username|password) <em>disabled</em> as this authentication mechanism is discouraged.</p> <p>An alternative mechanism provided with Kubernetes Engine (as shown in your config) is to use your <code>gcloud</code> credentials to get you onto the cluster.</p> <p>The following command will configure <code>~/.kube/config</code> so that you may access the cluster using your <code>gcloud</code> credentials. It looks as though this step has been done and you can use <code>kubectl</code> directly.</p> <pre><code>gcloud container clusters get-credentials ${CLUSTER} \ [--zone=${ZONE}|--region=${REGION}] \ --project=${PROJECT} </code></pre> <p>As long as you're logged in using <code>gcloud</code> with an account that's permitted to use the cluster, you should be able to:</p> <pre><code>kubectl cluster-info kubectl get nodes </code></pre>
DazWilkin
<p>I understand <code>gcloud</code> uses the Dockerfile specified in the root directory of the source (<code>.</code>) as in the command: <code>gcloud builds submit --tag gcr.io/[PROJECT_ID]/quickstart-image .</code></p> <p>but I am trying to specify which Dockerfile to use to build the image. I have not found any resource on how to do that, and I don't know if it is possible.</p>
idrisadetunmbi
<p>The only way to specify a Dockerfile (i.e. other than <code>./Dockerfile</code>) would be to create a <code>cloudbuild.yaml</code> per techtabu@. This config could then use the <code>docker</code> builder and provide the specific Dockerfile, i.e.:</p> <pre><code>steps: - name: "gcr.io/cloud-builders/docker" args: - build - "--tag=gcr.io/$PROJECT_ID/quickstart-image" - "--file=./path/to/YourDockerFile" - . ... images: - "gcr.io/$PROJECT_ID/quickstart-image" </code></pre> <p>If you wish, you can also specify a name other than <code>cloudbuild.yaml</code> for the config file.</p> <p>The <code>./Dockerfile</code> assumption is presumably to ease the transition to Cloud Build. </p> <p>I recommend you switch to using <code>cloudbuild.yaml</code> for the flexibility it provides.</p>
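<p>You would then submit the build pointing at that config, e.g.:</p> <pre class="lang-sh prettyprint-override"><code># --config defaults to cloudbuild.yaml; it also lets you pick another file name
gcloud builds submit --config=cloudbuild.yaml .
</code></pre>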
DazWilkin
<p>I created an Alpine docker image for Nifi 1.14.0 and used that image in a stateful-set yaml file for the Nifi pods' deployment on Rancher. When I run the image locally on my VM, it runs without any errors and generates the HTTPS URL for the Nifi UI, but when the same image is deployed on Rancher via helm through the stateful-set file, its logs give a &quot;nifi.sensitive.prop.key&quot; not found error and &quot;there was an issue decrypting protected properties&quot;. How do I resolve this issue?</p>
Dolly
<p>Set the <code>NIFI_SENSITIVE_PROPS_KEY</code> environment variable. It should be at least 12 characters. You can see more details on how it is used in <a href="https://www.contemplatingdata.com/2017/08/28/apache-nifi-sensitive-properties-need-know/" rel="nofollow noreferrer">this article</a>.</p>
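<p>In the StatefulSet's Pod template that could look like the sketch below (assuming your image's start script reads the variable, as the official <code>apache/nifi</code> image does; the Secret name and key are placeholders):</p> <pre class="lang-yaml prettyprint-override"><code>containers:
- name: nifi
  image: my-nifi:1.14.0            # your image
  env:
  - name: NIFI_SENSITIVE_PROPS_KEY
    valueFrom:
      secretKeyRef:
        name: nifi-secrets
        key: sensitive-props-key
</code></pre>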
Dakshinamurthy Karra
<p>I have a pod and NodePort service running on GKE.</p> <p>In the Dockerfile for the container in my pod, I'm using <code>gosu</code> to run a command as a specific user:</p> <p>startup.sh</p> <pre><code>exec /usr/local/bin/gosu mytestuser &quot;$@&quot; </code></pre> <p>Dockerfile</p> <pre><code>FROM ${DOCKER_HUB_PUBLIC}/opensuse/leap:latest # Download and verify gosu RUN gpg --batch --keyserver-options http-proxy=${env.HTTP_PROXY} --keyserver hkps://keys.openpgp.org \ --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 &amp;&amp; \ curl -o /usr/local/bin/gosu -SL &quot;https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64&quot; &amp;&amp; \ curl -o /usr/local/bin/gosu.asc -SL &quot;https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64.asc&quot; &amp;&amp; \ gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu &amp;&amp; \ chmod +x /usr/local/bin/gosu # Add tini ENV TINI_VERSION v0.18.0 ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini RUN chmod +x /tini ENTRYPOINT [&quot;/tini&quot;, &quot;--&quot;, &quot;/startup/startup.sh&quot;] # Add mytestuser RUN useradd mytestuser # Run startup.sh which will use gosu to execute following `CMD` as `mytestuser` RUN /startup/startup.sh CMD [&quot;java&quot;, &quot;-Djava.security.egd=file:/dev/./urandom&quot;, &quot;-jar&quot;, &quot;/helloworld.jar&quot;] </code></pre> <p>I've just noticed that when I log into the container on GKE and look at the processes running, the java process that I would expect to be running as <code>mytestuser</code> is actually running as <code>chronos</code>:</p> <pre><code>me@gke-cluster-1-default-ool-1234 ~ $ ps aux | grep java root 9551 0.0 0.0 4296 780 ? Ss 09:43 0:00 /tini -- /startup/startup.sh java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar chronos 9566 0.6 3.5 3308988 144636 ? Sl 09:43 0:12 java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar </code></pre> <p>Can anyone explain what's happening, i.e. who is the <code>chronos</code> user, and why my process is not running as <code>mytestuser</code>?</p>
rmf
<p>When you run a useradd inside the container (or as part of the image build), it adds an entry to the <code>/etc/passwd</code> <em>inside the container</em>. The uid/gid will be in a shared namespace with the host, unless you enable user namespaces. However, the mapping of those ids to names will be specific to the filesystem namespace where the process is running. Therefore in this scenario, the uid of mytestuser inside the container happens to be the same uid as chronos on the host.</p>
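<p>You can confirm this by comparing the uid inside the container with the name that same uid maps to on the node, e.g.:</p> <pre class="lang-sh prettyprint-override"><code># Inside the container: which uid did useradd assign to mytestuser?
kubectl exec my-app-55d464dd78-7h7x7 -- id mytestuser

# On the GKE node: which name does that uid map to there?
getent passwd &lt;uid&gt;
</code></pre>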
BMitch
<p>I want to create some docker images that generate text files. However, since images are pushed to Container Registry in GCP, I am not sure where the files will be generated when I use kubectl run myImage. If I specify a path in the program, like '/usr/bin/myfiles', would they be downloaded to the VM instance where I am typing "kubectl run myImage"? I think this is probably not the case. What is the solution?</p> <p>Ideally, I would like all the files to be in one place. Thank you</p>
FranktheTank
<p>Container Registry and Kubernetes are mostly irrelevant to the issue of where a container will persist files it creates.</p> <p>A process running within a container that generates files will persist the files to the container instance's file system. Exceptions to this are <code>stdout</code> and <code>stderr</code>, which are both available without further ado.</p> <p>When you run container images, you can mount volumes into the container instance, and this provides possible solutions to your needs. When running Docker Engine, it's common to mount the host's file system into the container to share files between the container and the host: <code>docker run ... --volume=[host]:[container] yourimage ...</code>.</p> <p>On Kubernetes, there are many <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">types of volumes</a>. A seemingly obvious solution is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk" rel="nofollow noreferrer"><code>gcePersistentDisk</code></a>, but this has a limitation in that these disks may only be mounted for writing by one pod at a time. A more powerful approach may be an NFS-based solution such as <a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="nofollow noreferrer"><code>nfs</code></a> or <a href="https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs" rel="nofollow noreferrer"><code>gluster</code></a>. These should provide a means for you to consolidate files outside of the container instances.</p> <p>A good solution, though I'm unsure whether it is available to you, would be to write your files as Google Cloud Storage objects.</p> <p>A tenet of containers is that they should operate without making assumptions about their environment. Your containers should not make assumptions about running on Kubernetes and should not make assumptions about non-default volumes. By this I mean that your containers should simply write files to the container's file system. When you run the container, you apply the configuration that, e.g., provides an NFS volume mount or GCS bucket mount that actually persists the files beyond the container.</p> <p>HTH!</p>
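<p>To make the last point concrete, here is a minimal sketch (the image name, NFS server and export path are placeholder assumptions, not something from your setup) of a Pod whose program keeps writing to <code>/usr/bin/myfiles</code> while Kubernetes transparently backs that directory with an NFS share:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myimage
spec:
  containers:
    - name: myimage
      image: gcr.io/MY_PROJECT/myimage:latest   # placeholder image reference
      volumeMounts:
        - name: output
          mountPath: /usr/bin/myfiles           # the path the program writes to
  volumes:
    - name: output
      nfs:
        server: nfs.example.internal            # placeholder NFS server
        path: /exports/myfiles
</code></pre> <p>The program itself stays unaware of the volume; swapping the <code>nfs</code> block for another volume type changes where the files end up without touching the image.</p>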
DazWilkin
<p>I have created a docker image for the an app in which i am doing a copy of folder to the image like this:</p> <pre><code>COPY extra-addons/ /mnt/extra-addons/pos_item_price/ </code></pre> <p>but when i use that image using kubernetes and go to the /mnt/extra-addons folder</p> <pre><code>$ kubectl --insecure-skip-tls-verify --namespace my-app exec -it my-app-55d464dd78-7h7x7 -- /bin/bash root@my-app-55d464dd78-7h7x7:/# cd /mnt/extra-addons/ root@my-app-55d464dd78-7h7x7:/mnt/extra-addons# ls root@my-app-55d464dd78-7h7x7:/mnt/extra-addons# </code></pre> <p>i see nothing there</p> <p>but i do see that data is being copied when i am building the image</p> <pre><code>Step 19/26 : COPY extra-addons/ /mnt/extra-addons/pos_item_price/ ---&gt; 47fda7baba98 Step 20/26 : RUN ls -la /mnt/extra-addons/* ---&gt; Running in ab93cf423db5 total 12 drwxr-xr-x. 3 odoo root 4096 Apr 21 11:13 . drwxr-xr-x. 3 odoo root 4096 Apr 21 11:13 .. drwxrwxrwx. 7 root root 4096 Apr 21 11:13 pos_item_price Removing intermediate container ab93cf423db5 ---&gt; 645bc64741e0 Step 21/26 : RUN ls -la /mnt/extra-addons/pos_item_price/* ---&gt; Running in f6ad09d6d83c total 44 drwxrwxrwx. 7 root root 4096 Apr 21 11:13 . drwxr-xr-x. 3 odoo root 4096 Apr 21 11:13 .. -rw-rw-rw-. 1 root root 77 Apr 21 11:10 .git -rw-rw-rw-. 1 root root 579 Apr 21 11:10 .gitignore -rw-rw-rw-. 1 root root 45 Apr 21 11:10 __init__.py -rw-rw-rw-. 1 root root 571 Apr 21 11:10 __manifest__.py drwxrwxrwx. 2 root root 4096 Apr 21 11:13 data drwxrwxrwx. 2 root root 4096 Apr 21 11:13 models drwxrwxrwx. 2 root root 4096 Apr 21 11:13 security drwxrwxrwx. 3 root root 4096 Apr 21 11:13 static drwxrwxrwx. 2 root root 4096 Apr 21 11:13 views Removing intermediate container f6ad09d6d83c ---&gt; dc35af25b2a8 </code></pre> <p>I wonder why it is not persistant when i am copying it into the image, i would have expected the data to be present in the kubernetes pod?</p> <p>Full Dockerfile</p> <pre><code>FROM debian:stretch # Generate locale C.UTF-8 for postgres and general locale data ENV LANG C.UTF-8 # Install some deps, lessc and less-plugin-clean-css, and wkhtmltopdf RUN set -x; \ apt-get update \ &amp;&amp; apt-get install -y --no-install-recommends \ ca-certificates \ curl \ dirmngr \ fonts-noto-cjk \ gnupg \ libssl1.0-dev \ node-less \ python3-pip \ python3-pyldap \ python3-qrcode \ python3-renderpm \ python3-setuptools \ python3-vobject \ python3-watchdog \ xz-utils \ &amp;&amp; curl -o wkhtmltox.deb -sSL https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox_0.12.5-1.stretch_amd64.deb \ &amp;&amp; echo '7e35a63f9db14f93ec7feeb0fce76b30c08f2057 wkhtmltox.deb' | sha1sum -c - \ &amp;&amp; dpkg --force-depends -i wkhtmltox.deb\ &amp;&amp; apt-get -y install -f --no-install-recommends \ &amp;&amp; rm -rf /var/lib/apt/lists/* wkhtmltox.deb # install latest postgresql-client RUN set -x; \ echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' &gt; etc/apt/sources.list.d/pgdg.list \ &amp;&amp; export GNUPGHOME="$(mktemp -d)" \ &amp;&amp; repokey='B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8' \ &amp;&amp; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "${repokey}" \ &amp;&amp; gpg --armor --export "${repokey}" | apt-key add - \ &amp;&amp; gpgconf --kill all \ &amp;&amp; rm -rf "$GNUPGHOME" \ &amp;&amp; apt-get update \ &amp;&amp; apt-get install -y postgresql-client \ &amp;&amp; rm -rf /var/lib/apt/lists/* # Install rtlcss (on Debian stretch) RUN set -x;\ echo "deb http://deb.nodesource.com/node_8.x stretch main" &gt; 
/etc/apt/sources.list.d/nodesource.list \ &amp;&amp; export GNUPGHOME="$(mktemp -d)" \ &amp;&amp; repokey='9FD3B784BC1C6FC31A8A0A1C1655A0AB68576280' \ &amp;&amp; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "${repokey}" \ &amp;&amp; gpg --armor --export "${repokey}" | apt-key add - \ &amp;&amp; gpgconf --kill all \ &amp;&amp; rm -rf "$GNUPGHOME" \ &amp;&amp; apt-get update \ &amp;&amp; apt-get install -y nodejs \ &amp;&amp; npm install -g rtlcss \ &amp;&amp; rm -rf /var/lib/apt/lists/* # Install Odoo ENV ODOO_VERSION 12.0 ARG ODOO_RELEASE=20190128 ARG ODOO_SHA=9e34aaed2eb1e7697aaf36767247dbf335e9fe7a RUN set -x; \ curl -o odoo.deb -sSL http://nightly.odoo.com/${ODOO_VERSION}/nightly/deb/odoo_${ODOO_VERSION}.${ODOO_RELEASE}_all.deb \ &amp;&amp; echo "${ODOO_SHA} odoo.deb" | sha1sum -c - \ &amp;&amp; dpkg --force-depends -i odoo.deb \ &amp;&amp; apt-get update \ &amp;&amp; apt-get -y install -f --no-install-recommends \ &amp;&amp; rm -rf /var/lib/apt/lists/* odoo.deb # Copy entrypoint script and Odoo configuration file RUN pip3 install num2words xlwt COPY ./entrypoint.sh / COPY ./odoo.conf /etc/odoo/ RUN chown odoo /etc/odoo/odoo.conf # Mount /var/lib/odoo to allow restoring filestore and /mnt/extra-addons for users addons RUN mkdir -p /mnt/extra-addons/pos_item_price \ &amp;&amp; chown -R odoo /mnt/extra-addons VOLUME ["/var/lib/odoo", "/mnt/extra-addons"] RUN ls -la /mnt/extra-addons/* RUN echo "-------- Before LS END -----" COPY extra-addons/ /mnt/extra-addons/pos_item_price/ RUN ls -la /mnt/extra-addons/* RUN ls -la /mnt/extra-addons/pos_item_price/* # Expose Odoo services EXPOSE 8069 8071 # Set the default config file ENV ODOO_RC /etc/odoo/odoo.conf # Set default user when running the container USER odoo ENTRYPOINT ["/entrypoint.sh"] CMD ["odoo"] </code></pre>
Zeeshan Abbas
<p>I believe the issue you are facing is related to your volume, but not the one defined inside the Dockerfile (even though I personally dislike any volume defined in the Dockerfile because of issues they cause).</p> <p>To explain the issues resulting from the VOLUME in the Dockerfile, you can see the following example to test COPY, ADD, and RUN:</p> <pre><code>$ cat df.vol FROM busybox:latest VOLUME ["/data"] CMD find /data COPY sample-data/file.txt /data/file.txt COPY sample-data/dir /data/dir ADD sample-data/tar-file.tgz /data/tar-dir RUN echo "hello world" &gt;/data/run.txt \ &amp;&amp; find /data \ &amp;&amp; sleep 5m </code></pre> <p>Here's the sample-data directory used for the COPY and ADD commands:</p> <pre><code>$ ls -al sample-data/ total 32 drwxr-xr-x 3 bmitch bmitch 4096 Jan 22 2017 . drwxr-xr-x 34 bmitch bmitch 12288 Apr 17 15:16 .. drwxr-xr-x 2 bmitch bmitch 4096 Jan 22 2017 dir -rw-r--r-- 1 bmitch bmitch 14 Jan 22 2017 file2.txt -rw-r--r-- 1 bmitch bmitch 12 Jan 22 2017 file.txt -rw-r--r-- 1 bmitch bmitch 214 Jan 22 2017 tar-file.tgz </code></pre> <p>Lets run a build (without BUILDKIT since we want to be able to debug this):</p> <pre><code>$ DOCKER_BUILDKIT=0 docker build -f df.vol -t test-vol . Sending build context to Docker daemon 23.04kB Step 1/7 : FROM busybox:latest ---&gt; 59788edf1f3e Step 2/7 : VOLUME ["/data"] ---&gt; Using cache ---&gt; 14b4f1130806 Step 3/7 : CMD find /data ---&gt; Running in 75673363d1e3 Removing intermediate container 75673363d1e3 ---&gt; 262714d065fc Step 4/7 : COPY sample-data/file.txt /data/file.txt ---&gt; d781519c584e Step 5/7 : COPY sample-data/dir /data/dir ---&gt; 34b5b4a83b1e Step 6/7 : ADD sample-data/tar-file.tgz /data/tar-dir ---&gt; 3fc45f2e62a4 Step 7/7 : RUN echo "hello world" &gt;/data/run.txt &amp;&amp; find /data &amp;&amp; sleep 5m ---&gt; Running in d75794387274 /data /data/dir /data/dir/file1.txt /data/dir/file2.txt /data/run.txt /data/tar-dir /data/tar-dir/dir /data/tar-dir/dir/file1.txt /data/tar-dir/dir/file2.txt /data/tar-dir/file.txt /data/file.txt Removing intermediate container d75794387274 ---&gt; 5af322be539a Successfully built 5af322be539a Successfully tagged test-vol:latest </code></pre> <p>Note the <code>run.txt</code> file above. We also see the files from the COPY and ADD commands. However, if we ran another RUN command, or any time we use the resulting image, we'll see:</p> <pre><code>$ docker run -it --rm test-vol:latest /data /data/dir /data/dir/file1.txt /data/dir/file2.txt /data/tar-dir /data/tar-dir/dir /data/tar-dir/dir/file1.txt /data/tar-dir/dir/file2.txt /data/tar-dir/file.txt /data/file.txt </code></pre> <p>Only the files from COPY and ADD are there. The reason for that is easier to see if we look at the temporary container that docker uses for the RUN steps (this is why I had the <code>sleep 5m</code> during the build). 
Here's the output from another window during that 5 minute sleep:</p> <pre><code>$ docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d75794387274 3fc45f2e62a4 "/bin/sh -c 'echo \"h…" 1 second ago Created brave_dubinsky $ docker diff d75 $ docker inspect d75 [ { "Id": "d75794387274cc222391065c14581a29ff9fcc898ef367db64b9f145bd9325c7", "Created": "2019-04-21T18:19:19.449392301Z", "Path": "/bin/sh", "Args": [ "-c", "echo \"hello world\" &gt;/data/run.txt &amp;&amp; find /data &amp;&amp; sleep 5m" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 31620, "ExitCode": 0, "Error": "", "StartedAt": "2019-04-21T18:19:22.699031557Z", "FinishedAt": "0001-01-01T00:00:00Z" }, ... "Mounts": [ { "Type": "volume", "Name": "07b9d30dfdfcae91b820dc6fa249030fd8d7a4ad9c50ee928aaab104c07c8a9d", "Source": "/home/var-docker/volumes/07b9d30dfdfcae91b820dc6fa249030fd8d7a4ad9c50ee928aaab104c07c8a9d/_data", "Destination": "/data", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ], ... </code></pre> <p>What you see in the above commands is that docker runs the build with a temporary container and an anonymous volume. The <code>diff</code> output shows the changes to that container that will be captured by the build as a layer in the Dockerfile. In this case, nothing.</p> <p>Making the change to the volume doesn't modify the container filesystem, so you never see the change, but the ADD and COPY commands run directly against the image layers so you do see those changes.</p> <p>Will removing the VOLUME from the Dockerfile fix this issue? Probably not (unless your method of running the image is creating and reusing an anonymous volume from the image). Do I recommend removing the VOLUME anyway? Yes, this isn't needed to later specify a volume when you run your container, you can define a volume then for any directory, and having in the Dockerfile breaks attempts to extend the image later with a RUN command in non-intuitive ways.</p> <hr> <p>So, if it's not the VOLUME command interacting with your COPY, why else would you see your changes lost? The most likely cause is defining a volume when you <em>run</em> the container. We need to see your yml spec to know for sure. That volume, if it's a named volume, will be initialized with the contents of your image, but only once. After that, no matter what you change in your image, the volume will be persistent and show you the files in the volume from the last time you used that volume.</p> <p>If you want to update a volume based on changes to your image, then you need to configure an entrypoint in your image to copy files from a saved location in the image, to your volume location in the container. I have examples of doing this in my <a href="https://github.com/sudo-bmitch/docker-base" rel="nofollow noreferrer">docker-base</a> image. See the <code>save-volume</code> and <code>load-volume</code> scripts there.</p>
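<p>For reference, a stripped-down sketch of that entrypoint pattern (the <code>/mnt/extra-addons-saved</code> path is an illustrative assumption, and this is far less robust than the linked save-volume/load-volume scripts) could look like:</p> <pre><code>#!/bin/sh
# entrypoint.sh (sketch): seed the volume from files saved elsewhere in the image
# Assumes the Dockerfile did: COPY extra-addons/ /mnt/extra-addons-saved/pos_item_price/
cp -a /mnt/extra-addons-saved/. /mnt/extra-addons/
exec "$@"
</code></pre> <p>The idea is that the image build copies the addons to a location outside the volume, and the entrypoint copies them into the volume on every container start before handing off to the original command.</p>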
BMitch
<p>I'm using the <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">Jenkins Kubernetes Plugin</a>, which starts Pods in a Kubernetes cluster that serve as Jenkins agents. The pods contain 3 containers in order to provide the slave logic, a Docker socket as well as the <code>gcloud</code> command line tool.</p> <p>The usual workflow is that the slave does its job and notifies the master that it completed. Then the master terminates the pod. However, if the slave container crashes due to a lost network connection, the container terminates with error code 255 while the other two containers keep running, and so does the pod. This is a problem because the pods have large CPU requests; the setup is cheap as long as the slaves run only when they have to, but having multiple machines running for 24h or over the weekend causes noticeable financial damage.</p> <p>I'm aware that starting multiple containers in the same pod is not considered good Kubernetes practice, but it's OK if I know what I'm doing, and I assume I do. I'm sure it's hard to solve this differently given the way the Jenkins Kubernetes Plugin works.</p> <p>Can I make the pod terminate if one container fails, without it respawning? A solution with a timeout is acceptable as well, although less preferred.</p>
Kalle Richter
<p>Disclaimer: I have rather limited knowledge of kubernetes, but given the question:</p> <p>Maybe you can run a fourth container that exposes one simple "liveness" endpoint. It can run <code>ps -ef</code>, or use any other way to contact the 3 existing containers, just to make sure they're alive.</p> <p>This endpoint could return "OK" only if all the containers are running, and "ERROR" if at least one of them was detected as "crashed".</p> <p>Then you could set up a Kubernetes liveness probe so that it would stop the pod upon the error returned from that fourth container.</p> <p>Of course, if this 4th process crashes by itself for any reason (it shouldn't, unless there is a bug or something), then the liveness probe won't respond and Kubernetes is supposed to stop the pod anyway, which is probably what you really want to achieve.</p>
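<p>A rough sketch of how such a probe could be wired into the pod template (the sidecar image, port and path are illustrative assumptions). Note that, strictly speaking, a failing liveness probe makes the kubelet restart the probed container rather than delete the whole Pod, so full pod cleanup may still depend on how the Jenkins plugin is configured:</p> <pre><code>containers:
  - name: health-sidecar              # the extra container described above
    image: my-health-checker:latest   # illustrative image
    ports:
      - containerPort: 8090
    livenessProbe:
      httpGet:
        path: /liveness
        port: 8090
      periodSeconds: 10
      failureThreshold: 3
</code></pre>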
Mark Bramnik
<p>I created a yaml file to create a rabbitmq kubernetes cluster. I can see the pods. But when I run kubectl get deployment, I can't see it there. I also can't access the rabbitmq ui page.</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: rabbit name: rabbit spec: ports: - port: 5672 protocol: TCP name: mqtt - port: 15672 protocol: TCP name: ui type: NodePort --- apiVersion: apps/v1 kind: StatefulSet metadata: name: rabbit spec: serviceName: rabbit replicas: 3 selector: matchLabels: app: rabbit template: metadata: labels: app: rabbit spec: containers: - name: rabbitmq image: rabbitmq nodeSelector: rabbitmq: "clustered" </code></pre>
newUser
<p>@arghya-sadhu's answer is correct.</p> <p><strong>NB</strong> I'm unfamiliar with RabbitMQ but you may need to use a different image (see <a href="https://hub.docker.com/_/rabbitmq" rel="nofollow noreferrer">'Management Plugin'</a>) to include the UI.</p> <p>See below for more details.</p> <p>You should be able to hack your way to the UI on one (!) of the Pods via:</p> <pre class="lang-sh prettyprint-override"><code>PORT=8888 kubectl port-forward pod/rabbit-0 --namespace=${NAMESPACE} ${PORT}:15672 </code></pre> <p>And then browse <code>localhost:${PORT}</code> (if <code>8888</code> is unavailable, try another).</p> <p>I <strong>suspect</strong> (!) this won't work unless you use the image with the management plugin.</p> <h3>Plus</h3> <ul> <li>The <code>Service</code> needs to select the <code>StatefulSet</code>'s Pods</li> </ul> <p>Within the Service <code>spec</code> you should perhaps add:</p> <pre><code>selector: app: rabbit </code></pre> <ul> <li>Presumably (!?) you are using a private repo (because you have <code>imagePullSecrets</code>).</li> </ul> <p>If you don't and wish to use DockerHub, you may remove the <code>imagePullSecrets</code> section.</p> <ul> <li>It's useful to document (!) container ports, albeit not mandatory:</li> </ul> <p>In the <code>StatefulSet</code></p> <pre><code>ports: - containerPort: 5672 - containerPort: 15672 </code></pre> <h3>Debug</h3> <pre class="lang-sh prettyprint-override"><code>NAMESPACE="default" # Or ... </code></pre> <p>Ensure the StatefulSet is created:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get statefulset/rabbit --namespace=${NAMESPACE} </code></pre> <p>Check the Pods:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get pods --selector=app=rabbit --namespace=${NAMESPACE} </code></pre> <p>You can check that the Pods are bound to a (!) Service:</p> <pre class="lang-sh prettyprint-override"><code>kubectl describe endpoints/rabbit --namespace=${NAMESPACE} </code></pre> <p><strong>NB</strong> You should see 3 addresses (one per Pod)</p> <p>Get the NodePort with either:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get service/rabbit --namespace=${NAMESPACE} --output=json kubectl describe service/rabbit --namespace=${NAMESPACE} </code></pre> <p>You will <strong>need</strong> to use the NodePort to access both the MQTT endpoint and the UI.</p>
DazWilkin
<p>As per the <a href="https://docs.docker.com/compose/env-file/" rel="nofollow noreferrer">docker docs</a>, environment variables in a .env file are expected to be in key-val format as <code>VAR=VAL</code>, which works fine for a sample like <code>foo=bar</code>, but there is no mention of unavoidable special characters, e.g. '=' (which could be confused for the <code>key-val</code> separator) or <code>space</code>, both of which are part of a valid db connection string such as:</p> <p>secrets.env file:</p> <pre><code> connectionString=Data Source=some-server;Initial Catalog=db;User ID=uid;Password=secretpassword </code></pre> <p>which is referenced in the docker-compose.debug.yaml file as:</p> <pre><code>services: some-service: container_name: "service-name" env_file: - secrets.env ports: - "80:80" </code></pre> <p>This is then transformed into <code>docker-compose.yaml</code>, as shown in the complete flow below:</p> <p><a href="https://i.stack.imgur.com/yplMb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yplMb.png" alt="enter image description here"></a></p> <p>So the question is - how do you include a connection string which has <code>=</code> and <code>spaces</code> as part of the value?</p> <p><strong>Need</strong> - We have a few micro-services within a VS solution and want to avoid repeating the same connection strings that would otherwise be needed in each service spec of <code>docker-compose.yaml</code>.</p> <p><strong>Tried</strong> including the values in single/double quotes, but after transformation everything after <code>=</code> is treated as the value, including the quotes, similar to the kubernetes yaml file convention.</p>
AnilR
<p>I ran a test without any issues:</p> <pre><code>$ cat .env ENV=default USER_NAME=test2 SPECIAL=field=with=equals;and;semi-colons $ cat docker-compose.env.yml version: '2' services: test: image: busybox command: env environment: - SPECIAL $ docker-compose -f docker-compose.env.yml up Creating network "test_default" with the default driver Creating test_test_1_55eac1c3767c ... done Attaching to test_test_1_d7787ac5bfc0 test_1_d7787ac5bfc0 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin test_1_d7787ac5bfc0 | HOSTNAME=d249a16a8e09 test_1_d7787ac5bfc0 | SPECIAL=field=with=equals;and;semi-colons test_1_d7787ac5bfc0 | HOME=/root test_test_1_d7787ac5bfc0 exited with code 0 </code></pre>
BMitch
<p>I'm trying to incorporate the <code>kubectl auth can-i</code> <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/auth/cani.go#L236" rel="nofollow noreferrer">logic</a> into my code base, but while the code is working, the results are not what I expect.</p> <p>I have 2 users (<strong>minikube</strong>/<strong>jenny</strong>). <strong>minikube</strong> has full cluster wide access, but <strong>jenny</strong> is limited to a namespaced role/rolebinding:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create role &quot;jenny-pod-creator&quot; --verb=create --resource=pod -n &quot;jenny&quot; kubectl create rolebinding &quot;jenny-creator-binding&quot; --role=&quot;jenny-pod-creator&quot; --user=&quot;jenny&quot; --namespace=&quot;jenny&quot; </code></pre> <p>Using the cli, I get the results I expect:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl auth can-i create pod --context jenny -n jenny yes $ kubectl auth can-i create pod --context jenny -n default no - RBAC: role.rbac.authorization.k8s.io &quot;jenny-pod-creator&quot; not found </code></pre> <p>but in my code, <strong>jenny</strong> is not coming up with permission to create. <code>response.Status.Allowed</code> is always <code>false</code> for <strong>jenny</strong> <em>(always true for <strong>minikube</strong>)</em></p> <pre class="lang-golang prettyprint-override"><code>package main import ( &quot;context&quot; &quot;fmt&quot; &quot;log&quot; &quot;os&quot; &quot;path/filepath&quot; authorizationv1 &quot;k8s.io/api/authorization/v1&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/client-go/kubernetes&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; ) func main() { kubeconfig := filepath.Join( os.Getenv(&quot;HOME&quot;), &quot;.kube&quot;, &quot;config&quot;, ) config, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, kubeconfig) if err != nil { log.Fatal(err) } clientset, err := kubernetes.NewForConfig(config) if err != nil { log.Fatal(err) } a := clientset.AuthorizationV1().SelfSubjectAccessReviews() sar := &amp;authorizationv1.SelfSubjectAccessReview{ Spec: authorizationv1.SelfSubjectAccessReviewSpec{ ResourceAttributes: &amp;authorizationv1.ResourceAttributes{ Namespace: &quot;jenny&quot;, Verb: &quot;create&quot;, Resource: &quot;Pod&quot;, }, }, } response, err := a.Create(context.TODO(), sar, metav1.CreateOptions{}) if err != nil { log.Fatal(err) } fmt.Printf(&quot;create resource POD is %v \n&quot;, response.Status.Allowed) } </code></pre>
GrandVizier
<p>In Kubernetes there's a notion of Kinds and Resources. It's very well explained in <a href="https://stackoverflow.com/questions/52309496/difference-between-kubernetes-objects-and-resources">Difference between Kubernetes Objects and Resources</a> and in <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#types-kinds" rel="nofollow noreferrer">API Conventions</a>, if you're curious to learn more.</p> <p>In short:</p> <ol> <li><em>Kind</em> is the type which tells a client what kind of entity it represents, and this is always upper case, <code>Pod</code> for example.</li> <li><em>Resource</em> is the representation of such an entity sent via HTTP, and this is always lower case and plural.</li> </ol> <p>In your case, you're working with <em>Resource</em>, so you need to change <code>Pod</code> (<em>Kind</em>) to <code>pods</code> (<em>resource</em>), which should give you <code>true</code> for <code>jenny</code>. As for minikube, you're always getting <code>true</code> because that user is system:admin, which has full access to the cluster.</p>
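<p>Applied to your snippet, the only change needed is the resource name (lower case and plural):</p> <pre><code>sar := &amp;authorizationv1.SelfSubjectAccessReview{
    Spec: authorizationv1.SelfSubjectAccessReviewSpec{
        ResourceAttributes: &amp;authorizationv1.ResourceAttributes{
            Namespace: "jenny",
            Verb:      "create",
            Resource:  "pods", // resource, not Kind: lower case and plural
        },
    },
}
</code></pre>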
soltysh
<p>I'd like to create a Google Kubernetes Engine (GKE) alpha cluster and am facing the following problems:</p> <ul> <li>According to the <a href="https://cloud.google.com/kubernetes-engine/docs/release-notes" rel="nofollow noreferrer">release notes</a> the latest alpha version is <code>1.16.6-gke.4</code>, however <code>gcloud beta container clusters create blablu --machine-type=n1-standard-2 --cluster-version=1.16.6-gke.4 --no-enable-stackdriver-kubernetes --no-enable-autoupgrade --preemptible --enable-kubernetes-alpha --quiet --enable-pod-security-policy</code> fails due to <code>ERROR: (gcloud.beta.container.clusters.create) ResponseError: code=400, message=Node version "1.16.6-gke.4" is unsupported.</code></li> <li>Omitting the version and specifying <code>--enable-kubernetes-alpha</code> causes a cluster with version <code>1.14.10-gke.17</code> to be created - two major versions from the current 1.16.x alpha version which doesn't make sense.</li> <li>I tried about 20 1.16.x-gke.y versions with different values for x and y in a trial error approach without any luck.</li> </ul> <p>I tried these commands with <code>gcloud</code>, <code>gcloud beta</code> and <code>gcloud alpha</code>. All <code>gcloud alpha</code> commands fail due to <code>ERROR: (gcloud.alpha.container.clusters.create) ResponseError: code=404, message=Method not found.</code> which is not helpful at all.</p> <p>I created 1.16 Alpha clusters before using the version specified in the release notes.</p>
Kalle Richter
<p>A good way to "sanity check" things like this is to use the Console and have it show you the equivalent CLI command. I tried to repro your issue and the command the Console produces this:</p> <pre class="lang-sh prettyprint-override"><code>gcloud beta container clusters create ${CLUSTER} \ --project=${PROJECT} \ --region=${REGION} \ --release-channel="rapid" \ </code></pre> <p>Where <code>rapid</code>==<code>1.16.5-gke.2</code></p> <p>Even though:</p> <pre class="lang-sh prettyprint-override"><code>gcloud container get-server-config \ --project=${PROJECT} \ --region=${REGION} Fetching server config for ... defaultClusterVersion: 1.14.10-gke.17 defaultImageType: COS validImageTypes: - UBUNTU_CONTAINERD - COS - UBUNTU - COS_CONTAINERD validMasterVersions: - 1.15.9-gke.9 - 1.15.9-gke.8 ... validNodeVersions: - 1.15.9-gke.9 - 1.15.9-gke.8 ... </code></pre> <p>So I think you need to use one of the <code>--release-channel</code> flag to get the version you're seeking.</p> <p><strong>NB</strong> I know you're probably aware that different regions|zones also (occasionally) have different GKE version availability.</p>
DazWilkin
<p>With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.</p> <p>In kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as a file accessible in the file system.</p> <p>Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?</p> <p>I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.</p>
Kyle
<p>Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application as effectively a global variable.</p> <p>A file can be read directly into the application by the routine that needs it and handled as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.</p> <p>Secrets injected as environment variables into the configuration of the container are also visible to anyone that has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. Separating the secret into a separate object that is flagged for privacy allows you to more easily manage it differently than open configuration like environment variables.</p>
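<p>For illustration, a minimal Kubernetes sketch of the file-based approach (the Secret, container and mount names are placeholders) looks like this; the application then reads e.g. <code>/run/secrets/db/password</code> instead of an environment variable:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:latest              # placeholder image
      volumeMounts:
        - name: db-credentials
          mountPath: /run/secrets/db   # each Secret key appears here as a file
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials     # an existing Secret object
</code></pre>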
BMitch
<p>I'm trying to run a docker image once to execute a task using a popular <a href="https://hub.docker.com/r/minio/mc/" rel="nofollow noreferrer">S3 client minio</a> the environment I'm dealing with use Kubernetes.</p> <p>I can get shell access to execute tasks like so:</p> <pre><code>docker run -it minio/mc --restart=Never --rm /bin/sh </code></pre> <p>similarly, I'm able to run busybox image in my K8S cluster.</p> <pre><code>kubectl run busybox -i --tty --image=busybox --restart=Never --rm -- sh </code></pre> <p>However, I'm unable to get that mc client to work the same way it does with the previous example.</p> <pre><code>kubectl run minio -i --tty --image=minio/mc --restart=Never --rm -- /bin/sh </code></pre> <p>My shell would just exit, any ideas on how to keep the shell open? or how to pass bash commands to it before it dies?</p>
Deano
<p>This issue arises when container(s) in a Pod run some process(es) that complete. When its containers exit, the Pod completes. It is more common to have container(s) in a Pod that run continuously.</p> <p>A solution to this completing problem is thus to keep the container running:</p> <ol> <li>Run the container in a Pod:</li> </ol> <pre class="lang-sh prettyprint-override"><code>kubectl run minio \ --image=minio/mc \ --restart=Never \ --command \ -- /bin/sh -c 'while true; do sleep 5s; done' </code></pre> <blockquote> <p><strong>NOTE</strong> the Pod is kept running by the <code>while</code> loop in the container</p> </blockquote> <blockquote> <p><strong>NOTE</strong> the image's entrypoint is overridden by <code>--command</code> and <code>/bin/sh</code></p> </blockquote> <ol start="2"> <li>Exec into the container, e.g.:</li> </ol> <pre class="lang-sh prettyprint-override"><code>kubectl exec --stdin --tty minio -- mc --help </code></pre>
DazWilkin
<p>So I created EKS Cluster using example given in<br /> <a href="https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete" rel="nofollow noreferrer">Cloudposse eks terraform module</a></p> <p>On top of this, I created AWS S3 and Dynamodb for storing state file and lock file respectively and added the same in <a href="https://www.terraform.io/docs/language/settings/backends/s3.html" rel="nofollow noreferrer">terraform backend config</a>.</p> <p>This is how it looks :</p> <pre><code>resource &quot;aws_s3_bucket&quot; &quot;terraform_state&quot; { bucket = &quot;${var.namespace}-${var.name}-terraform-state&quot; # Enable versioning so we can see the full revision history of our # state files versioning { enabled = true } # Enable server-side encryption by default server_side_encryption_configuration { rule { apply_server_side_encryption_by_default { sse_algorithm = &quot;aws:kms&quot; } } } } resource &quot;aws_dynamodb_table&quot; &quot;terraform_locks&quot; { name = &quot;${var.namespace}-${var.name}-running-locks&quot; billing_mode = &quot;PAY_PER_REQUEST&quot; hash_key = &quot;LockID&quot; attribute { name = &quot;LockID&quot; type = &quot;S&quot; } } terraform { backend &quot;s3&quot; { bucket = &quot;${var.namespace}-${var.name}-terraform-state&quot; key = &quot;${var.stage}/terraform.tfstate&quot; region = var.region # Replace this with your DynamoDB table name! dynamodb_table = &quot;${var.namespace}-${var.name}-running-locks&quot; encrypt = true } } </code></pre> <p>Now when I try to delete EKS cluster using <code>terraform destroy</code> I get this error:</p> <pre><code>Error: error deleting S3 Bucket (abc-eks-terraform-state): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket. </code></pre> <p>This is the output of <code>terraform plan -destroy</code> after the cluster is partially destroyed because of s3 error</p> <pre><code>Changes to Outputs: - dynamodb_table_name = &quot;abc-eks-running-locks&quot; -&gt; null - eks_cluster_security_group_name = &quot;abc-staging-eks-cluster&quot; -&gt; null - eks_cluster_version = &quot;1.19&quot; -&gt; null - eks_node_group_role_name = &quot;abc-staging-eks-workers&quot; -&gt; null - private_subnet_cidrs = [ - &quot;172.16.0.0/19&quot;, - &quot;172.16.32.0/19&quot;, ] -&gt; null - public_subnet_cidrs = [ - &quot;172.16.96.0/19&quot;, - &quot;172.16.128.0/19&quot;, ] -&gt; null - s3_bucket_arn = &quot;arn:aws:s3:::abc-eks-terraform-state&quot; -&gt; null - vpc_cidr = &quot;172.16.0.0/16&quot; -&gt; null </code></pre> <p>I cannot manually delete the tfstate in s3 because that'll make terraform recreate everything, also I tried to remove s3 resource from tfstate but it gives me lock error(also tried to forcefully remove lock and with -lock=false)</p> <p>So I wanted to know is there a way to tell terraform to delete s3 at the end once everything is deleted. Or is there a way to use the terraform which is there in s3 locally?</p> <p>What's the correct approach to delete EKS cluster when your TF state resides in s3 backend and you have created s3 and dynamodb using same terraform.</p>
Savan
<p>Generally, it is not recommended to keep your S3 bucket that you use for Terraform's backend state management in the Terraform state itself (for this exact reason). I've seen this explicitly stated in Terraform documentation, but I've been unable to find it in a quick search.</p> <p>What I would do to solve this issue:</p> <ol> <li><a href="https://www.terraform.io/docs/cli/commands/force-unlock.html" rel="nofollow noreferrer">Force unlock</a> the Terraform lock (<code>terraform force-unlock LOCK_ID</code>, where <code>LOCK_ID</code> is shown in the error message it gives you when you try to run a command).</li> <li>Download the state file from S3 (via the AWS console or CLI).</li> <li>Create a new S3 bucket (manually, not in Terraform).</li> <li>Manually upload the state file to the new bucket.</li> <li>Modify your Terraform backend config to use the new bucket.</li> <li>Empty the old S3 bucket (via the AWS console or CLI).</li> <li>Re-run Terraform and allow it to delete the old S3 bucket.</li> </ol> <p>Since it's still using the same old state file (just from a different bucket now), it won't re-create everything, and you'll be able to decouple your TF state bucket/file from other resources.</p> <p>If, for whatever reason, Terraform refuses to force-unlock, you can go into the DynamoDB table via the AWS console and delete the lock manually.</p>
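<p>A rough CLI sketch of those steps (the bucket names, state key and lock ID are placeholders based on the question; adjust them to your own setup):</p> <pre><code># 1. release the stale lock (the ID is shown in Terraform's lock error message)
terraform force-unlock LOCK_ID

# 2. download the state file from the old bucket
aws s3 cp s3://abc-eks-terraform-state/staging/terraform.tfstate ./terraform.tfstate

# 3. create a new bucket outside Terraform
aws s3 mb s3://NEW_STATE_BUCKET

# 4. upload the state file to the new bucket
aws s3 cp ./terraform.tfstate s3://NEW_STATE_BUCKET/staging/terraform.tfstate

# 5. point the backend "s3" block at NEW_STATE_BUCKET, then re-initialise
terraform init -reconfigure

# 6./7. empty the old bucket and let Terraform delete it
aws s3 rm s3://abc-eks-terraform-state --recursive
terraform destroy
</code></pre> <p>Because the old bucket has versioning enabled, emptying it may also require deleting old object versions; the S3 console's "Empty bucket" action handles versions as well.</p>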
Jordan
<p>I am trying to set up <a href="https://cloud.google.com/debugger/docs/setup/go" rel="nofollow noreferrer">Stackdriver debugging</a> using Go. Using the article and this great <a href="https://medium.com/google-cloud/stackdriver-error-reporting-part-2-826f40e00886" rel="nofollow noreferrer">medium post</a> I came up with this <a href="https://github.com/roberson34/stackdriver-demo" rel="nofollow noreferrer">solution</a>.</p> <p>Key parts, in <code>cloudbuild.yaml</code></p> <pre><code>- name: gcr.io/cloud-builders/wget args: [ "-O", "go-cloud-debug", "https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug" ] ... </code></pre> <p>Dockerfile I have</p> <pre><code>... COPY gopath/bin/stackdriver-demo /stackdriver-demo ADD go-cloud-debug / ADD source-context.json / CMD ["/go-cloud-debug","-sourcecontext=./source-context.json", "-appmodule=go-errrep","-appversion=1.0","--","/stackdriver-demo"] ... </code></pre> <p>However the pods keeps crashing, the container logs show this error:</p> <pre><code>Error loading program: decoding dwarf section info at offset 0x0: too short </code></pre> <p>EDIT: Using <code>https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug</code> may be outdated as I haven't seen it used outside Daz's medium post. The official <a href="https://cloud.google.com/debugger/docs/setup/go" rel="nofollow noreferrer">docs</a> uses the package <code>cloud.google.com/go/cmd/go-cloud-debug-agent</code></p> <p>I have update cloudbuild.yaml file to install this package:</p> <pre><code>- name: 'gcr.io/cloud-builders/go' args: ["get", "-u", "cloud.google.com/go/cmd/go-cloud-debug-agent"] env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux'] - name: 'gcr.io/cloud-builders/go' args: ["install", "cloud.google.com/go/cmd/go-cloud-debug-agent"] env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux'] </code></pre> <p>And in the <code>Dockerfile</code> I can get access to the binary in <code>gopath/bin/go-cloud-debug-agent</code></p> <p>When I execute the <code>gopath/bin/go-cloud-debug-agent</code> with my own program as an argument:</p> <p><code>/go-cloud-debug-agent -sourcecontext=./source-context.json -appmodule=go-errrep -appversion=1.0 -- /stackdriver-demo</code></p> <p>I get another opaque error: </p> <pre><code>Error loading program: AttrStmtList not present or not int64 for unit 88 </code></pre> <p>So basically using the <code>cloud-debug</code> binary from <code>https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug</code> and <code>cloud-debug-agent</code> binary from the package <code>cloud.google.com/go/cmd/go-cloud-debug-agent</code> both don't work and give different errors.</p> <p>Would appreciate any tips on what I'm doing wrong and how to fix it.</p>
robertson
<p>OK :-)</p> <p>Yes, you should follow the current Stackdriver documentation, e.g. <code>go-cloud-debug-agent</code></p> <p>Unfortunately, there are now various issues with my post including a (currently broken) <code>gcr.io/cloud-builders/kubectl</code> for regions.</p> <p>I think your issue pertains to your use of <code>golang:alpine</code>. Alpine uses musl rather than the glibc that you find on most other Linux distro's and so, you really must compile for Alpine to ensure your binaries reference the correct libc.</p> <p>I'm able to get your solution working <em>primarily</em> by switching your Dockerfile to pull the Cloud Debug Agent while on Alpine and to compile your source on Alpine:</p> <pre><code>FROM golang:alpine RUN apk add git RUN go get -u cloud.google.com/go/cmd/go-cloud-debug-agent ADD main.go src RUN CGO_ENABLED=0 go build -gcflags=all='-N -l' src/main.go ADD source-context.json / CMD ["bin/go-cloud-debug-agent","-sourcecontext=/source-context.json", "-appmodule=stackdriver-demo","-appversion=1.0","--","main"] </code></pre> <p>I think that should get you beyond the errors that you documented and you should be able to deploy your container to Kubernetes.</p> <p>I've made my version of your image publicly available (and will retain it for a few days for you):</p> <pre><code>gcr.io/dazwilkin-190402-55473323/roberson34@sha256:17cb45f1320e2fe04e0681310506f4c229896429192b0d1c2c8dc20ed54adb0d </code></pre> <p>You may wish to reference it (by that digest) in your <code>deployment.yaml</code></p> <p><strong>NB</strong> For Error Reporting to be "interesting", your code needs to generate errors and, with your example, this is going to be challenging (usually a good thing). You may consider adding another errorful handler that always results in errors so that you may test the service.</p>
DazWilkin
<p>I created an ubuntu instance on gcloud and installed minikube and all the required dependencies on it. Now I can run <code>curl http://127.0.0.1:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/</code> from the node's terminal and I get the HTML response back.</p> <p>But I want to access this URL from my laptop browser. I tried opening these ports in the firewall of the instance node: tcp:8080,8085,443,80,8005,8006,8007,8009,8009,8010,7990,7992,7993,7946,4789,2376,2377</p> <p>But I am still unable to access the above mentioned URL when replacing it with my <strong>external IP (39.103.89.09)</strong>, i.e. <a href="http://39.103.89.09:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://39.103.89.09:8080/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/</a></p> <p>I believe I need to make some networking related changes but don't know what.</p> <p>I am very new to cloud computing and networking, so please help me.</p>
vikky
<p>I suspect that minikube binds to the VM's localhost interface making it inaccessible from a remote machine.</p> <p>There may be a way to run minikube such that it binds to <code>0.0.0.0</code> and then you may be able to use it remotely.</p> <p>Alternatively, you can keep the firewall limited to e.g. <code>22</code> and use SSH to port-forward the VM's port <code>8080</code> to your localhost. `gcloud' includes a helper for this too:</p> <ol> <li>Ensure minikube is running on the VM</li> <li><code>gcloud compute ssh ${INSTANCE} --project=${PROJECT} --zone=${ZONE} --ssh-flag=&quot;-L 8080:localhost:8080&quot;</code></li> <li>Try accessing Kubernetes endpoints from your <strong>local</strong> machine using <code>localhost:8080/api/v1/...</code></li> </ol> <h3>Update</h3> <p>OK, I created a Debian VM (<code>n1-instance-2</code>), installed <code>docker</code> and <code>minikube</code>.</p> <p>SSH'd into the instance:</p> <pre class="lang-sh prettyprint-override"><code>gcloud compute ssh ${INSTANCE} \ --zone=${ZONE} \ --project=${PROJECT} </code></pre> <p>Then <code>minikube start</code></p> <p>Then:</p> <pre class="lang-sh prettyprint-override"><code>minikube kubectl -- get namespaces NAME STATUS AGE default Active 14s kube-node-lease Active 16s kube-public Active 16s kube-system Active 16s </code></pre> <p>minikube appears (I'm unfamiliar it) to run as a Docker container called <code>minikube</code> and it exposes 4 ports to the VM's (!) localhost: <code>22</code>,<code>2376</code>,<code>5000</code>,<code>8443</code>. The latter is key.</p> <p>To determine the port mapping, either eyeball it:</p> <pre class="lang-sh prettyprint-override"><code>docker container ls \ --filter=name=minikube \ --format=&quot;{{.Ports}}&quot; \ | tr , \\n </code></pre> <p>Returns something like:</p> <pre><code>127.0.0.1:32771-&gt;22/tcp 127.0.0.1:32770-&gt;2376/tcp 127.0.0.1:32769-&gt;5000/tcp 127.0.0.1:32768-&gt;8443/tcp </code></pre> <p>In this case, the port we're interested in is <code>32768</code></p> <p>Or:</p> <pre class="lang-sh prettyprint-override"><code>docker container inspect minikube \ --format=&quot;{{ (index (index .NetworkSettings.Ports \&quot;8443/tcp\&quot;) 0).HostPort }}&quot; 32768 </code></pre> <p>Then, exit the shell and return using <code>--ssh-flag</code>:</p> <pre class="lang-sh prettyprint-override"><code>gcloud compute ssh ${INSTANCE} \ --zone=${ZONE} \ --project=${PROJECT} \ --ssh-flag=&quot;-L 8443:localhost:32768&quot; </code></pre> <blockquote> <p><strong>NOTE</strong> <code>8443</code> will be the port on the localhost; <code>32768</code> is the remote minikube port</p> </blockquote> <p>Then, from another shell on your local machine (and while the port-forwarding <code>ssh</code> continues in the other shell), pull the <code>ca.crt</code>, <code>client.key</code> and <code>client.crt</code>:</p> <pre class="lang-sh prettyprint-override"><code>gcloud compute scp \ $(whoami)@${INSTANCE}:./.minikube/profiles/minikube/client.* \ ${PWD} \ --zone=${ZONE} \ --project=${PROJECT} gcloud compute scp \ $(whoami)@${INSTANCE}:./.minikube/ca.crt \ ${PWD} \ --zone=${ZONE} \ --project=${PROJECT} </code></pre> <p>Now, create a config file, call it <code>kubeconfig</code>:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority: ./ca.crt server: https://localhost:8443 name: minikube contexts: - context: cluster: minikube user: minikube name: minikube current-context: minikube kind: Config preferences: {} users: - name: minikube user: client-certificate: ./client.crt client-key: 
./client.key </code></pre> <p>And, lastly:</p> <pre class="lang-sh prettyprint-override"><code>KUBECONFIG=./kubeconfig kubectl get namespaces </code></pre> <p>Should yield:</p> <pre><code>NAME STATUS AGE default Active 23m kube-node-lease Active 23m kube-public Active 23m kube-system Active 23m </code></pre>
DazWilkin
<p>I am new to Kubernetes. I have installed ubuntu-server on my raspberry pi and now I am trying to forward the port for the dashboard.</p> <p>I don't have any success; almost nothing happens and I can't see the dashboard in the cluster-info.</p> <p>I tried the following command:</p> <pre><code>microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443 </code></pre> <p>It freezes with the following printout:</p> <pre><code>Forwarding from 127.0.0.1:10443 -&gt; 8443 Forwarding from [::1]:10443 -&gt; 8443 </code></pre> <p>If I look up the cluster-info it says:</p> <pre><code>cluster-info Kubernetes master is running at https://127.0.0.1:16443 Heapster is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/heapster/proxy CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy Grafana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy InfluxDB is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy </code></pre> <p>Any idea what I am doing wrong?</p>
Dasma
<p>Nothing is frozen - the command for port-forward is running in the foreground. If you have setup the service properly with the right port number everything should be working fine.</p> <p>Try running the same as a background process, by adding &amp; at the end.</p> <blockquote> <p><code>microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443 &amp;</code></p> </blockquote> <p>If you want to kill it. Get the pid</p> <blockquote> <p><code>ps -aef</code></p> </blockquote> <p>and then kill it using the below command</p> <blockquote> <p><code>kill -9 pid-here</code></p> </blockquote>
Praveen Sripati
<p>I have service A which is a consumer from some queue.</p> <p>I can monitor and count every consumed message easily with Prometheus :)</p> <pre><code>from prometheus_client import start_http_server, Counter COUNTER_IN_MSGS = Counter('msgs_consumed', 'count consumed messages') start_http_server(8000) while(queue not empty): A.consume(queue) COUNTER_IN_MSGS.inc() </code></pre> <p>But then, one day I decide to duplicate my consumer into 10 consumers which do the same thing {A1, A2..., A10}, using the same code but running in 10 different docker containers (on K8s in my case).</p> <p>How can I monitor them using Prometheus? Should I change my code and add some id to each consumer as a label?</p> <p>What is the best practice so that I can sum them all together but also count each one on its own?</p>
dev ved
<p>Yes, you should consider using <a href="https://github.com/prometheus/client_python#labels" rel="nofollow noreferrer">labels</a> to disambiguate metrics (e.g. Counters) by instance.</p> <p>You'll need to determine a unique identifier to use.</p> <p>Kubernetes provides a <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">Downward API</a> that enables you to surface information from the Pod to a container. One of these values should be useful.</p> <p>You can then use PromQL <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/" rel="nofollow noreferrer">ignoring</a> to e.g. sum across Counters and ignore one or more labels.</p> <p>With this approach, you get to choose whether to sum by instance or across instances.</p>
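<p>A minimal sketch of both pieces, building on the snippet from the question (the <code>consumer</code> label name and the <code>POD_NAME</code> variable are choices assumed here, with the pod name injected via the Downward API):</p> <pre><code>env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
</code></pre> <pre><code>import os
from prometheus_client import start_http_server, Counter

POD_NAME = os.environ.get('POD_NAME', 'unknown')

# one Counter definition, disambiguated per instance by a label
COUNTER_IN_MSGS = Counter('msgs_consumed', 'count consumed messages', ['consumer'])

start_http_server(8000)

# inside your existing consume loop:
COUNTER_IN_MSGS.labels(consumer=POD_NAME).inc()
</code></pre> <p>You can then sum across all consumers with e.g. <code>sum without(consumer) (rate(msgs_consumed_total[5m]))</code>, or filter on the <code>consumer</code> label to look at a single instance.</p>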
DazWilkin