Question (string, 65-39.6k chars) | QuestionAuthor (string, 3-30 chars) | Answer (string, 38-29.1k chars) | AnswerAuthor (string, 3-30 chars)
<p>In one of my deployment files, I want to set an environment variable. The variable is <code>KUBE_VERSION</code> and values must be fetched from a ConfigMap.</p> <pre><code> kube_1_21: 1.21.10_1550 </code></pre> <p>This is part of ConfigMap where I want to set <code>1.21.10_1550</code> to <code>KUBE_VERSION</code>, but if the cluster is of IKS 1.20, then the key will be:</p> <pre><code>kube_1_20: 1.20.21_3456 </code></pre> <p><code>kube_</code> is always static. How can I set environment variable using a regex expression?</p> <p>Something of this sort:</p> <pre><code> - name: KUBE_VERSION valueFrom: configMapKeyRef: name: cluster-info key: &quot;kube_1*&quot; </code></pre>
ambikanair
<p>As far as I know, it is unfortunately not possible to use a regular expression here. The API itself tells you which regular expression is used to validate the entered data:</p> <blockquote> <p>regex used for validation is '[-._a-zA-Z0-9]+')</p> </blockquote> <p>It follows that <code>key</code> must be a literal alphanumeric string, optionally containing the characters <code>-</code>, <code>_</code> and <code>.</code>, so a regex cannot be used in this place.</p> <p>As a workaround you can write a custom script, e.g. in Bash, that looks up the matching key and replaces the proper line with a <a href="https://stackoverflow.com/questions/8822097/how-to-replace-a-whole-line-with-sed">sed command</a>.</p>
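<p>A minimal sketch of such a script, assuming the ConfigMap is called <code>cluster-info</code> in the current namespace, that <code>jq</code> is installed, and that the deployment manifest contains a hypothetical <code>__KUBE_VERSION__</code> placeholder (the placeholder and the file name are illustrative, not part of the original question):</p> <pre><code>#!/usr/bin/env bash
set -euo pipefail

# Pick the value of the first data key starting with &quot;kube_&quot; (e.g. kube_1_21 or kube_1_20).
KUBE_VERSION=$(kubectl get configmap cluster-info -o json \
  | jq -r '.data | to_entries[] | select(.key | startswith(&quot;kube_&quot;)) | .value' \
  | head -n1)

# Substitute the placeholder in the manifest and apply it.
sed &quot;s|__KUBE_VERSION__|${KUBE_VERSION}|&quot; deployment-template.yaml | kubectl apply -f -
</code></pre>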
Mikołaj Głodziak
<p>Is there a way to share a GPU between multiple pods, or do we need a specific model of NVIDIA GPU?</p>
I'm sK
<blockquote> <p>Short answer, yes :)</p> </blockquote> <p>Long answer below :)</p> <p>There is no &quot;built-in&quot; solution to achieve that, but you can use many tools (plugins) to control GPU sharing. First, look at the <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#v1-8-onwards" rel="nofollow noreferrer">official Kubernetes documentation</a>:</p> <blockquote> <p>Kubernetes includes <strong>experimental</strong> support for managing AMD and NVIDIA GPUs (graphical processing units) across several nodes.</p> <p>This page describes how users can consume GPUs across different Kubernetes versions and the current limitations.</p> </blockquote> <p>Also note the limitations:</p> <blockquote> <ul> <li>GPUs are only supposed to be specified in the <code>limits</code> section, which means: <ul> <li>You can specify GPU <code>limits</code> without specifying <code>requests</code> because Kubernetes will use the limit as the request value by default.</li> <li>You can specify GPU in both <code>limits</code> and <code>requests</code> but these two values must be equal.</li> <li>You cannot specify GPU <code>requests</code> without specifying <code>limits</code>.</li> </ul> </li> <li>Containers (and Pods) do not share GPUs. There's no overcommitting of GPUs.</li> <li>Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.</li> </ul> </blockquote> <p>As you can see, this covers scheduling GPUs across several nodes. You can find the <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#deploying-nvidia-gpu-device-plugin" rel="nofollow noreferrer">guide</a> on how to deploy the NVIDIA device plugin.</p> <p>Additionally, if you don't specify the GPU in the resource requests/limits at all, the <strong>containers</strong> from all pods will have full access to the GPU as if they were normal processes, and there is nothing more to configure in that case.</p> <p>For more details, look also at <a href="https://github.com/kubernetes/kubernetes/issues/52757" rel="nofollow noreferrer">this GitHub issue</a>.</p>
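<p>For completeness, a minimal sketch of how a whole GPU is requested once the device plugin from the guide above is deployed (the image name is just an example):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:11.0-base
      resources:
        limits:
          nvidia.com/gpu: 1   # whole devices only; fractions of a GPU cannot be requested
</code></pre>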
Mikołaj Głodziak
<p>My deployment had a readinessProbe configured like:</p> <pre><code> readinessProbe: port: 8080 path: /ready initialDelaySeconds: 30 failureThreshold: 60 periodSeconds: 10 timeoutSeconds: 15 </code></pre> <p>I want to remove the probe for some reason. However, after removing it from my YML file, my deployment is not successful because it looks like the pod is never considered ready. Checking in GCP, I discovered that the resulting YML file has a readiness probe that points to some &quot;default values&quot; that I haven't set anywhere:</p> <pre><code> readinessProbe: failureThreshold: 3 httpGet: path: /ready port: 80 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 </code></pre> <p>Is there a way to actually remove a ReadinessProbe for good?</p>
JorgeBPrado
<p>You need to set <code>readinessProbe</code> to a null value, like this:</p> <pre><code>readinessProbe: null </code></pre>
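<p>If the probe keeps reappearing in the live object, one way to drop it directly is a JSON patch (a sketch; the deployment name and the container index are placeholders you need to adjust):</p> <pre><code># Remove the readinessProbe from the first container of &quot;my-deployment&quot;.
kubectl patch deployment my-deployment --type=json \
  -p='[{&quot;op&quot;: &quot;remove&quot;, &quot;path&quot;: &quot;/spec/template/spec/containers/0/readinessProbe&quot;}]'
</code></pre>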
matthieu Bouamama
<p>We have a use case to monitor Kubernetes clusters, and I am trying to find the list of exceptions thrown by Kubernetes that reflect the status of the k8s server (in a namespace) while trying to submit a job from the UI.</p> <p>Example: if the k8s server throws a <code>ClusterNotFound</code> exception, that means we cannot submit any more jobs to that API server.</p> <p>Is there such a comprehensive list?</p> <p>I came across <a href="https://github.com/kubernetes/apimachinery/blob/master/pkg/api/errors/errors.go" rel="nofollow noreferrer">this</a> in Go. Will this be it? Does Java have something like this?</p>
chandu
<p>The file you are referencing is part of a Kubernetes library used by many Kubernetes components for validating API request fields. As all Kubernetes components are written in Go and I couldn't find any plans to port Kubernetes to Java, it's unlikely that a Java version of that file exists.</p> <p>However, there is an officially supported Kubernetes client library written in Java, so you can check for the proper modules to validate API requests and process API responses in the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">java-client repository</a> or on the <a href="https://javadoc.io/doc/io.kubernetes/client-java-api/latest/index.html" rel="nofollow noreferrer">javadoc site</a>.</p> <p>For example, the objects used to contain proper or improper HTTP replies from the Kubernetes apiserver are <a href="https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/models/V1Status.html" rel="nofollow noreferrer">V1Status</a> and <a href="https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/ApiException.html" rel="nofollow noreferrer">ApiException</a> <a href="https://github.com/kubernetes-client/java/blob/8b08ec39ab12542a3fed1f4a92d67b7e7a393e14/kubernetes/src/main/java/io/kubernetes/client/openapi/ApiException.java" rel="nofollow noreferrer">(repository link)</a>.</p> <p>Please check the java-client usage <a href="https://github.com/kubernetes-client/java/wiki/3.-Code-Examples" rel="nofollow noreferrer">examples</a> for a better understanding.</p> <p>A detailed Kubernetes RESTful API reference can be found on the <a href="https://kubernetes.io/docs/reference/kubernetes-api/" rel="nofollow noreferrer">official page</a>,<br /> for example the <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/#create-create-a-deployment" rel="nofollow noreferrer">Deployment create request</a>.</p> <p>If you are really interested in Kubernetes cluster monitoring and logging aspects, please consider reading the following articles first:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/" rel="nofollow noreferrer">Metrics For Kubernetes System Components</a></li> <li><a href="https://www.datadoghq.com/blog/kubernetes-control-plane-monitoring/" rel="nofollow noreferrer">Kubernetes Control Plane monitoring with Datadog</a></li> <li><a href="https://sysdig.com/blog/monitor-kubernetes-control-plane/" rel="nofollow noreferrer">How to monitor Kubernetes control plane</a></li> <li><a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">Logging Architecture</a></li> <li><a href="https://logz.io/blog/a-practical-guide-to-kubernetes-logging/" rel="nofollow noreferrer">A Practical Guide to Kubernetes Logging</a></li> </ul>
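<p>As a quick illustration of where those error reasons surface (a sketch; the deployment name is deliberately one that does not exist), the API server reports failures as a machine-readable <code>Status</code> object, which is exactly what the Java client wraps in <code>ApiException</code>/<code>V1Status</code>:</p> <pre><code># A missing object comes back as an HTTP 404 whose body is a Status object
# with reason=NotFound and code=404.
kubectl get --raw /apis/apps/v1/namespaces/default/deployments/does-not-exist
</code></pre>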
moonkotte
<p>At the very beginning of some of my StatefulSets, one of my initContainers needs to do a lot of operations.</p> <p>That is not a problem at all; all those tasks are optimized to take just a few seconds. The problem comes when it detects that it is an initialization from scratch. If so, it needs to download a database snapshot from a repository and put it in place in the database volume. That whole operation takes more than 7-8 minutes. It is a StatefulSet, so it is done only at the very beginning, as the volume is configured to persist.</p> <p>The problem here is that the initContainer doesn't finish and gets &quot;restarted&quot; (it is just marked as failed as the runtime exceeded 5 minutes or so). How can I raise that time to allow the initContainer to finish? To be honest, I am not sure if it is a specific timeout/max_runtime value assigned to the initContainer or globally to the pod initialization. I have been looking around for hours but can't manage to find what exactly is causing this. I assumed it could be something to be set or reconfigured in the kubelet but I couldn't find anything there. Please, help...</p>
Nullzone
<p>Hey,</p> <p>The default is 5 minutes, and you can adjust it to your needs by modifying the value of <code>--pod-eviction-timeout</code>.</p> <p>Let me know if it works. :)</p>
tpaz1
<p>My nginx-ingress-controller is in the <code>ingress-nginx</code> namespace and I've set the large-client-header-buffers to <code>4 16k</code>, <code>4 32k</code> etc.</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: nginx-configuration namespace: ingress-nginx data: proxy-buffer-size: "16k" large-client-header-buffers: "4 16k" </code></pre> <p>When I inspect the configuration in the nginx-controller pod I see:</p> <pre><code> kubectl exec -n ingress-nginx nginx-ingress-controller-65fd579494-jptxh cat /etc/nginx/nginx.conf | grep large_client_header large_client_header_buffers 4 16k; </code></pre> <p>So everything seems to be configured correctly, still I get the error message <code>400 Bad Request Request Header Or Cookie Too Large</code></p>
Peter Salomonsen
<p>There is a <a href="https://github.com/kubernetes/ingress-nginx/issues/319" rel="nofollow noreferrer">dedicated topic on GitHub</a> about this problem where you can find possible solutions. Based on <a href="https://github.com/helm/charts/issues/20901" rel="nofollow noreferrer">this issue</a>, the problem should already be completely fixed.</p> <p>Look also at these tutorials on how to solve the problem from the browser side:</p> <ul> <li><a href="https://www.minitool.com/news/request-header-or-cookie-too-large.html" rel="nofollow noreferrer">How to Fix the “Request Header Or Cookie Too Large” Issue [MiniTool News]</a></li> <li><a href="https://support.mozilla.org/gl/questions/918154" rel="nofollow noreferrer">400 Bad Request Request Header Or Cookie Too Large nginx - What does this error mean?</a></li> <li><a href="https://support.mozilla.org/en-US/questions/1327416" rel="nofollow noreferrer">HTTP Error 400. The size of the request headers is too long.</a></li> </ul>
Mikołaj Głodziak
<p>I am using a Kubernetes Deployment. I wish to start the pods one by one, not all at once. Is there any way to do this? I do not want to use a StatefulSet.</p> <pre><code>kind: Deployment metadata: name: my-deployment labels: app: my-app spec: replicas: 3 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-app image: my-container-image ports: - name: https containerPort: 8443 volumeMounts: - mountPath: /tmp name: app-vol restartPolicy: Always imagePullSecrets: - name: regcred volumes: - name: app-vol persistentVolumeClaim: claimName: app-vol </code></pre>
Hitesh Bajaj
<blockquote> <p>I wish to start the pods one by one, not all at once. Is there any way to do this? I do not want to use a StatefulSet.</p> </blockquote> <p>Unfortunately, you are trying to accomplish something with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>Deployment</code></a> that is meant to be achieved with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer"><code>StatefulSet</code></a>. Is it possible to achieve the desired effect with a Deployment? Something similar can be obtained, but it requires <a href="https://stackoverflow.com/questions/66365892/limit-first-time-deployment-of-pods-in-kubernetes-using-kind-deployment/66641268#66641268">creating a custom script</a> and it is a non-standard solution.</p> <p>However, this is not a recommended approach. A StatefulSet is designed for controlling the order in which pods start, and you shouldn't use anything else here.</p> <p>To sum up: either change your assumptions a bit and accept the StatefulSet, which will give you the result you want (see the sketch below), or do not control the order in which the pods are started.</p> <p>As <a href="https://stackoverflow.com/users/4551228/rohatgisanat" title="620 reputation">rohatgisanat</a> mentioned in the comment:</p> <blockquote> <p>Deployments do not guarantee ordered starts. StatefulSets are the way to achieve what you require.</p> </blockquote>
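<p>For reference, a minimal sketch of the same workload as a StatefulSet; it assumes a headless Service named <code>my-app</code> exists, and with the default <code>OrderedReady</code> pod management policy the pods are started one at a time:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app                  # headless service assumed to exist
  replicas: 3
  podManagementPolicy: OrderedReady    # default: start pod N only after pod N-1 is Ready
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-container-image
          ports:
            - name: https
              containerPort: 8443
</code></pre>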
Mikołaj Głodziak
<p>When deploying an AKS cluster into different availability zones (&quot;1,2,3&quot; in our case), a VM scale set is used for the default node pool deployment (not an availability set). Everything is pretty fine there, but the problem is that while using the default node pool scale set, it is put into only 1 fault domain, and I did not find a way to change that (despite the fact that the VM scale set should be deployed into 5 fault/update domains as per the documentation):</p> <p><a href="https://i.stack.imgur.com/y2hoh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y2hoh.png" alt="enter image description here" /></a></p> <p>Why is that so? How can I put the node pool into the default 5 fault/update domains in addition to the 3 availability zones (I mean 5 fault/update domains in each of the 3 availability zones)?</p> <p>P.S. - You can always deploy the AKS cluster's node pool into an availability set and have 5 update/fault domains, but then availability zones are not available when using the availability set.</p>
misfit
<p>Per this doc, <a href="https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains" rel="nofollow noreferrer">Choosing the right number of fault domains for virtual machine scale set</a>: for regions that support zonal deployment of virtual machine scale sets, when that option is selected, the default value of the fault domain count is 1 for each of the zones.</p> <p>You can also consider aligning the number of scale set fault domains with the number of Managed Disks fault domains.</p>
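<p>If you want to confirm the value on the node pool's scale set, one way is to query it with the Azure CLI (a sketch; the resource group and scale set names are placeholders, and AKS keeps the scale set in the <code>MC_*</code> node resource group):</p> <pre><code># Show the zones and the platform fault domain count of the node pool's scale set.
az vmss show \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name aks-nodepool1-12345678-vmss \
  --query &quot;{zones: zones, faultDomainCount: platformFaultDomainCount}&quot;
</code></pre>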
wallezzi
<p>I think lots of DevOps engineers have run into this issue. I come from a software background, so syntax-only explanations are not enough for me. The YAML below works in an Azure environment but does not work on EKS/AWS.</p> <p>Error:</p> <pre><code> error validating data: ValidationError(Deployment.spec): unknown field &quot;spec&quot; in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>My deployment YAML:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-flask spec: selector: matchLabels: app: my-flask replicas: 2 template: metadata: labels: app: my-flask spec: containers: - name: my-flask image: yusufkaratoprak/awsflaskeks:latest ports: - containerPort: 5000 </code></pre> <p><a href="https://i.stack.imgur.com/6adsp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6adsp.png" alt="enter image description here" /></a></p>
Penguen
<p>There is an indentation problem in your YAML: the second <code>spec</code> field belongs under <code>template</code>.<br /> I also encourage you to check the official <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes Deployment docs</a>.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-flask spec: selector: matchLabels: app: my-flask replicas: 2 template: metadata: labels: app: my-flask spec: containers: - name: my-flask image: yusufkaratoprak/awsflaskeks:latest ports: - containerPort: 5000 </code></pre>
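<p>A quick way to catch this class of error before deploying (the file name is a placeholder) is to validate the manifest without persisting anything:</p> <pre><code># Client-side schema validation, then server-side validation against the real API.
kubectl apply --dry-run=client -f deployment.yaml
kubectl apply --dry-run=server -f deployment.yaml
</code></pre>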
heheh
<p>I have an ingress controller working for a UI container service and a backend container service. My ingress configuration is as follows:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: testapp annotations: rules: - host: test1.example.com http: paths: - path: /static backend: serviceName: ui-service servicePort: 80 - path: /apicall backend: serviceName: backend-service servicePort: 8080 </code></pre> <p>This is working fine. Now I need to rewrite the ingress URL if it contains <code>?</code>.</p> <p>For example, if the URL is <code>test1.example.com/?productid=10001</code>, I need to add <code>static</code> before the <code>?</code> and forward it to <code>test1.example.com/static?productid=10001</code>.</p> <p>Is this behavior possible with the annotation below?</p> <pre><code> nginx.ingress.kubernetes.io/rewrite-target: / </code></pre> <p>If yes, how do I write that kind of regex so that when <code>?</code> is present in the URL, followed by any string/characters, the <code>static</code> keyword is added before it?</p>
iRunner
<p>First of all look at the YAML below to understand how works <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite" rel="nofollow noreferrer">rewrite rule</a>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 name: rewrite namespace: default spec: ingressClassName: nginx rules: - host: rewrite.bar.com http: paths: - path: /something(/|$)(.*) pathType: Prefix backend: service: name: http-svc port: number: 80 </code></pre> <blockquote> <p>In this ingress definition, any characters captured by <code>(.*)</code> will be assigned to the placeholder <code>$2</code>, which is then used as a parameter in the <code>rewrite-target</code> annotation.</p> <p>For example, the ingress definition above will result in the following rewrites:</p> <ul> <li><code>rewrite.bar.com/something</code> rewrites to <code>rewrite.bar.com/</code></li> <li><code>rewrite.bar.com/something/</code> rewrites to <code>rewrite.bar.com/</code></li> <li><code>rewrite.bar.com/something/new</code> rewrites to <code>rewrite.bar.com/new</code></li> </ul> </blockquote> <p>In your case, you will need to follow a similar procedure. But first, I suggest you do a separate ingress for each path. This will help you keep order later and additionally you will have an independent ingress for each path. Look at <a href="https://serverfault.com/questions/1065154/nginx-ingress-rewrite-rule/1065176#1065176">this question</a> to see why and how you should create a separate ingress.</p> <blockquote> <p>Now I need to forward this ingress URL if it contains ?. for eg if URL is test1.example.com/?productid=10001 i need to add static before this ? and forward it to test1.example.com/static?productid=10001.</p> </blockquote> <p>You will need to create an appropriate capture group. For the data you presented it will look like this:</p> <pre class="lang-yaml prettyprint-override"><code> - path: /(/|$)(.*) </code></pre> <p>and the annotation should be like this:</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/rewrite-target: /static/$2 </code></pre> <p>In this case, everything that gets captured from the URL after the <code>/</code> character will be rewritten to the new address, which will look like <code>/static</code> and the address that was before. There should be no problem with the question mark.</p> <p>If you need to create a regex that will act on a question mark, you will need to prefix it with a <code>\</code> character, which is described <a href="https://www.regular-expressions.info/characters.html#special" rel="nofollow noreferrer">here</a>.</p> <p>Also, consider that you are currently targeting root <code>/</code> path and want to create 2 ingresses. I also suggest creating appropriate endpoints depending on the situation, so that you can easily create 2 different ingresses responsible for directing different traffic.</p>
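<p>Putting the pieces above together, a sketch of what the extra ingress for the root path could look like, reusing the host and service names from the question and assuming the nginx ingress controller (the query string is not part of the matched path, so <code>?productid=10001</code> is carried over automatically):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testapp-root-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
    - host: test1.example.com
      http:
        paths:
          - path: /(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: ui-service
                port:
                  number: 80
</code></pre>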
Mikołaj Głodziak
<p>With default configurations, does istio-proxy (the sidecar) manipulate incoming/outgoing requests from the application container?</p>
RMNull
<p>The comment posted by <a href="https://stackoverflow.com/users/10008173/david-maze" title="82,113 reputation">David Maze</a> is a good starting point and could be part of an answer:</p> <blockquote> <p>There are a couple of cases like <a href="https://istio.io/latest/docs/tasks/observability/distributed-tracing/overview/#trace-context-propagation" rel="nofollow noreferrer">distributed tracing</a> and <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls" rel="nofollow noreferrer">mutual TLS</a> that add headers;</p> </blockquote> <p>These are two cases in which Istio itself touches the headers. Additionally, in the official documentation you can find a <a href="https://istio.io/v1.4/docs/tasks/policy-enforcement/control-headers/" rel="nofollow noreferrer">simple tutorial</a> with YAML files where you will learn how to manage headers, including how to create a <a href="https://istio.io/v1.4/docs/tasks/policy-enforcement/control-headers/#request-header-operations" rel="nofollow noreferrer">request header operation</a>. However, take into account that this tutorial is based on an outdated version of Istio, and you should use the latest supported version for your solution.</p>
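<p>On current Istio versions, the equivalent of that tutorial is the <code>headers</code> field on a <code>VirtualService</code> route. A minimal sketch (the host and header names are placeholders):</p> <pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app.default.svc.cluster.local
  http:
    - headers:
        request:
          add:
            x-custom-header: added-by-istio   # added to requests on this route
        response:
          remove:
            - x-internal-debug                # stripped from responses
      route:
        - destination:
            host: my-app.default.svc.cluster.local
</code></pre>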
Mikołaj Głodziak
<p>I have the following Ingress configuration:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: http-ingress spec: rules: - host: example-adress.com http: paths: - path: /apple pathType: Prefix backend: service: name: apple-service port: number: 80 - path: /banana pathType: Prefix backend: service: name: banana-service port: number: 80 tls: - hosts: - example-adress.com secretName: testsecret-tls </code></pre> <p>And i also created the Secret:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: testsecret-tls namespace: default data: tls.crt: path to .crt tls.key: Zpath to .key type: kubernetes.io/tls </code></pre> <p>But when i connect to one of my services and check the certificate it says that it uses a cert created by Kubernetes Ingress Controller Fake certificate. When i run microk8s kubectl describe ingress i get the following output:</p> <pre><code>Name: http-ingress Namespace: default Address: 127.0.0.1 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) TLS: testsecret-tls terminates example-adress.com Rules: Host Path Backends ---- ---- -------- example-adress.com /apple apple-service:80 (10.1.55.17:5678) /banana banana-service:80 (10.1.55.10:5678) Annotations: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 28m nginx-ingress-controller Ingress default/http-ingress Normal UPDATE 20m (x2 over 28m) nginx-ingress-controller Ingress default/http-ingress </code></pre> <p>What do i need to change to make my Ingress use my Cert instead of generating a new one everytime?</p>
timmmmmb
<p>Posting this out of the comments since it works.</p> <p>Based on your TLS secret YAML, you tried to add the certificate and private key as file paths, which is currently not supported (<a href="https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets" rel="nofollow noreferrer">reference</a>). Fragment from the reference:</p> <blockquote> <p>When using this type of Secret, the <code>tls.key</code> and the <code>tls.crt</code> key must be provided in the <code>data</code> (or <code>stringData</code>) field of the Secret configuration, although the API server doesn't actually validate the values for each key.</p> </blockquote> <p>Therefore there are two suggestions on how to move forward (see the sketch after this list):</p> <ul> <li>Add base64-encoded values for the key and certificate to the TLS secret</li> <li>Let Kubernetes do it for you with the following command: <code>kubectl create secret tls testsecret-tls --cert=tls.cert --key=tls.key</code></li> </ul>
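<p>A minimal sketch of the first option, assuming the certificate and key files are called <code>tls.crt</code> and <code>tls.key</code> and that GNU coreutils <code>base64</code> is available (<code>-w 0</code> disables line wrapping so the output can be pasted straight into the <code>data:</code> fields):</p> <pre><code># Produce single-line base64 values for data.tls.crt and data.tls.key.
base64 -w 0 tls.crt
base64 -w 0 tls.key
</code></pre>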
moonkotte
<p><strong>Environment</strong></p> <p><strong>Kubectl Version</strong></p> <pre class="lang-sh prettyprint-override"><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;18&quot;, GitVersion:&quot;v1.18.4&quot;, GitCommit:&quot;c96aede7b5205121079932896c4ad89bb93260af&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-06-18T02:59:13Z&quot;, GoVersion:&quot;go1.14.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20+&quot;, GitVersion:&quot;v1.20.4-80+89e0897d2cb807&quot;, GitCommit:&quot;89e0897d2cb8073fbb8f700258573f1478d4826a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-11-22T03:53:35Z&quot;, GoVersion:&quot;go1.15.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p><strong>Kubernetes Version (Kind Cluster)</strong></p> <pre class="lang-yaml prettyprint-override"><code>kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9 - role: worker image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9 - role: worker image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9 </code></pre> <p><strong>Kubebuilder Version</strong></p> <pre class="lang-sh prettyprint-override"><code>Version: main.version{KubeBuilderVersion:&quot;3.1.0&quot;, KubernetesVendor:&quot;1.19.2&quot;, GitCommit:&quot;92e0349ca7334a0a8e5e499da4fb077eb524e94a&quot;, BuildDate:&quot;2021-05-27T17:54:28Z&quot;, GoOs:&quot;darwin&quot;, GoArch:&quot;amd64&quot;} </code></pre> <p><strong>Os</strong></p> <pre><code>Macos Big Sur 11.6 </code></pre> <p>I use kubebuilder to define my own <code>CRD</code> like below, and it contains <code>VolumeClaimTemplates</code> filed which the type is <code>[]coreV1.PersistentVolumeClaim</code></p> <pre class="lang-golang prettyprint-override"><code>package v1alpha1 import ( apps &quot;k8s.io/api/apps/v1&quot; coreV1 &quot;k8s.io/api/core/v1&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/apimachinery/pkg/util/intstr&quot; ) type DatabaseSetSpec struct { ... // +optional VolumeClaimTemplates []coreV1.PersistentVolumeClaim `json:&quot;volumeClaimTemplates,omitempty&quot; protobuf:&quot;bytes,4,rep,name=volumeClaimTemplates&quot;` } // +kubebuilder:object:root=true // +kubebuilder:subresource:status // +kubebuilder:resource:shortName=ami-dbs // DatabaseSet is the Schema for the databasesets API type DatabaseSet struct { metav1.TypeMeta `json:&quot;,inline&quot;` metav1.ObjectMeta `json:&quot;metadata,omitempty&quot;` Spec DatabaseSetSpec `json:&quot;spec,omitempty&quot;` Status DatabaseSetStatus `json:&quot;status,omitempty&quot;` } // DatabaseSetStatus defines the observed state of DatabaseSet type DatabaseSetStatus struct { ... } //+kubebuilder:object:root=true // DatabaseSetList contains a list of DatabaseSet type DatabaseSetList struct { metav1.TypeMeta `json:&quot;,inline&quot;` metav1.ListMeta `json:&quot;metadata,omitempty&quot;` Items []DatabaseSet `json:&quot;items&quot;` } func init() { SchemeBuilder.Register(&amp;DatabaseSet{}, &amp;DatabaseSetList{}) } </code></pre> <p>But when I apply the CR like the below, I found that the <code>metadata</code> filed is empty</p> <pre><code>apiVersion: apps.analyticdb.aliyun.com/v1alpha1 kind: DatabaseSet metadata: name: databaseset-sample spec: ... 
volumeClaimTemplates: - metadata: name: pvc-test spec: accessModes: [ &quot;ReadWriteOnce&quot; ] storageClassName: &quot;manual&quot; resources: requests: storage: 3Gi </code></pre> <p>Here is the YAML retrieved from the k8s etcd; it can be seen that the metadata of the <code>volumeClaimTemplates</code> is empty.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps.analyticdb.aliyun.com/v1alpha1 kind: DatabaseSet metadata: creationTimestamp: &quot;2021-12-24T09:46:22Z&quot; generation: 1 name: databaseset-sample namespace: default resourceVersion: &quot;98727469&quot; uid: e64107f2-7a4b-473b-9275-39ab5e2e88dc spec: ... volumeClaimTemplates: - metadata: {} spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi storageClassName: manual </code></pre> <p>Does anyone know why?</p> <p>And when I mark the volumeClaimTemplates field with the comments below, <code>metadata</code> can be decoded correctly.</p> <pre class="lang-golang prettyprint-override"><code>// +kubebuilder:pruning:PreserveUnknownFields // +kubebuilder:validation:Schemaless </code></pre>
Huzhenyu
<p>I have posted a community wiki answer to summarise the topic.</p> <p>The same question was asked <a href="https://github.com/kubernetes-sigs/kubebuilder/issues/2460" rel="nofollow noreferrer">on GitHub</a>. The solution is what the OP mentioned:</p> <blockquote> <p>have found the solution, use the controller-gen crd option <code>crd:generateEmbeddedObjectMeta=true</code> will work</p> </blockquote> <p>It was also mentioned on GitHub:</p> <blockquote> <p>I found this option through <code>controller-gen -h</code>, and there is no mention of this option in the <a href="https://book.kubebuilder.io/reference/controller-gen.html" rel="nofollow noreferrer">official kubebuilder controller-gen CLI documentation</a>.</p> </blockquote> <p>Indeed, there is no mention of it (the official doc for this tool is slightly old - August 2019), but look at <a href="https://issueexplorer.com/issue/spotify/flink-on-k8s-operator/83" rel="nofollow noreferrer">this problem</a> and <a href="https://issueexplorer.com/issue/spotify/flink-on-k8s-operator/83#6496747" rel="nofollow noreferrer">this answer</a>:</p> <blockquote> <p>hi <a href="https://github.com/numbnut" rel="nofollow noreferrer">@numbnut</a> I haven't tested yet with v1.21.2 but I'll take a look.</p> <p>Using <code>[email protected]</code> was needed before to be able to run the operator with k8s <code>&gt;1.18 but that has been addressed in this fork at least up until</code> v1.20.x`</p> <p>The newer <code>0.6.1</code> introduces <code>generateEmbeddedObjectMeta</code> which is required specifically to add the metadata for the <code>PersistentVolumeClaims</code>; without it claims will not be deleted when the cluster is deleted;</p> <p><a href="https://github.com/clouddra" rel="nofollow noreferrer">@clouddra</a> check if your env is not using a previous version of <code>controller-tools</code></p> </blockquote> <p>You can also find the relevant lines in the <a href="https://github.com/kubernetes-sigs/controller-tools/blob/c796d037386bc209fde0ca657df6b6face0ec5b7/pkg/crd/gen.go#L74" rel="nofollow noreferrer">source code</a>.</p>
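<p>In practice, regenerating the CRDs with this option looks roughly like the sketch below (the paths follow the usual kubebuilder layout and may differ in your project; in scaffolded projects the flag is typically wired into the <code>manifests</code> target of the Makefile):</p> <pre><code># Requires controller-tools &gt;= 0.6.x
controller-gen crd:generateEmbeddedObjectMeta=true paths=&quot;./...&quot; output:crd:artifacts:config=config/crd/bases
</code></pre>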
Mikołaj Głodziak
<p>I have an Elasticsearch DB running on Kubernetes exposed to <code>my_domain.com/elastic</code> as an Istio virtual service, which I have no problem accessing via the browser (as in I get to login successfully to the endpoint). I can also query the DB with Python's Requests. But I can't access the DB with the official python client if I use <code>my_domain.com/elastic</code>. The LoadBalancer IP works perfectly well even with the client. What am I missing? I have SSL certificates set up for my_domain.com via Cert-Manager and CloudFlare.</p> <p>This works:</p> <pre><code>import requests import os data = ' { &quot;query&quot;: { &quot;match_all&quot;: {} } }' headers = {'Content-Type': 'application/json'} auth= ('elastic', os.environ['ELASTIC_PASSWORD']) response = requests.post('https://mydomain.cloud/elastic/_search', auth=auth, data=data, headers=headers) print(response.text) </code></pre> <p>This doesn't work (I have tried a number of different parameters):</p> <pre><code>from datetime import datetime import os from elasticsearch import Elasticsearch, RequestsHttpConnection es = Elasticsearch(, [{'host': 'mydomain.cloud', 'port': 443, 'url_prefix': 'elastic', 'use_ssl': True}], http_auth=('elastic', os.environ['ELASTIC_PASSWORD']), # 1Password or kubectl get secret elastic-cluster-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -n elastic-system schema='https'#, verify_certs=False, # use_ssl=True, # connection_class = RequestsHttpConnection, # port=443, ) # if not es.ping(): # raise ValueError(&quot;Connection failed&quot;) doc = { 'author': 'kimchy', 'text': 'Elasticsearch: cool. bonsai cool.', 'timestamp': datetime.now(), } res = es.index(index=&quot;test-index&quot;, id=1, document=doc) print(res['result']) res = es.get(index=&quot;test-index&quot;, id=1) print(res['_source']) es.indices.refresh(index=&quot;test-index&quot;) res = es.search(index=&quot;test-index&quot;, query={&quot;match_all&quot;: {}}) print(&quot;Got %d Hits:&quot; % res['hits']['total']['value']) for hit in res['hits']['hits']: print(&quot;%(timestamp)s %(author)s: %(text)s&quot; % hit[&quot;_source&quot;]) </code></pre> <p>The resulting error:</p> <pre><code>elasticsearch.exceptions.RequestError: RequestError(400, 'no handler found for uri [//test-index/_doc/1] and method [PUT]', 'no handler found for uri [//test-index/_doc/1] and method [PUT]') </code></pre> <p>cluster.yaml</p> <pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: name: elastic-cluster namespace: elastic-system spec: version: 7.15.2 http: # tls: # selfSignedCertificate: # disabled: true service: spec: type: LoadBalancer nodeSets: - name: master-nodes count: 2 config: node.roles: [&quot;master&quot;] volumeClaimTemplates: - metadata: name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path. spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi storageClassName: local-storage - name: data-nodes count: 2 config: node.roles: [&quot;data&quot;] volumeClaimTemplates: - metadata: name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path. 
spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage podTemplate: # metadata: # annotations: # traffic.sidecar.istio.io/includeInboundPorts: &quot;*&quot; # traffic.sidecar.istio.io/excludeOutboundPorts: &quot;9300&quot; # traffic.sidecar.istio.io/excludeInboundPorts: &quot;9300&quot; spec: # automountServiceAccountToken: true containers: - name: elasticsearch resources: requests: memory: 4Gi cpu: 3 limits: memory: 4Gi cpu: 3 </code></pre> <p>virtual-service.yaml</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: elastic-vts namespace: elastic-system spec: hosts: - &quot;mydomain.cloud&quot; gateways: - istio-system/gateway http: - match: - port: 443 - uri: prefix: /elastic rewrite: uri: / route: - destination: host: elastic-cluster-es-http.elastic-system.svc.cluster.local port: number: 9200 </code></pre> <p>destination-rule.yaml</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: elastic-destination-rule namespace: elastic-system spec: host: elastic-cluster-es-http.elastic-system.svc.cluster.local trafficPolicy: tls: mode: SIMPLE </code></pre> <p>gateway.yaml</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway namespace: istio-system spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - 'mydomain.cloud' tls: httpsRedirect: true - port: number: 443 name: https protocol: HTTPS hosts: - 'mydomain.cloud' tls: mode: SIMPLE credentialName: letsencrypt-staging-tls </code></pre>
Minsky
<p>I have reproduced your problem and the solution is as follows. First, pay attention to your YAML file:</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: elastic-vts namespace: elastic-system spec: hosts: - &quot;mydomain.cloud&quot; gateways: - istio-system/gateway http: - match: - port: 443 - uri: prefix: /elastic &lt;---- here is the problem rewrite: uri: / ... </code></pre> <p>The error that appears looks like this:</p> <pre><code>elasticsearch.exceptions.RequestError: RequestError(400, 'no handler found for uri [//test-index/_doc/1] and method [PUT]', 'no handler found for uri [//test-index/_doc/1] and method [PUT]') </code></pre> <p>The problem is right there: <code>[//test-index/_doc/1]</code> (note the duplicated <code>/</code> character). I think it's a similar problem to the one mentioned <a href="https://github.com/elastic/elasticsearch-js/issues/572" rel="nofollow noreferrer">here</a>. To fix it, I suggest adding a trailing <code>/</code> to the line <code>prefix: /elastic</code>, so that your YAML looks like this example:</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: elastic-vts namespace: elastic-system spec: hosts: - &quot;mydomain.cloud&quot; gateways: - istio-system/gateway http: - match: - port: 443 - uri: prefix: /elastic/ &lt;---- here rewrite: uri: / ... </code></pre> <p>At this point the response from Elasticsearch looks like:</p> <pre><code>Got 1 Hits: 2021-12-30T09:20:12.038004 kimchy: Elasticsearch: cool. bonsai cool. </code></pre>
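<p>A quick way to verify the rewrite from outside the cluster (a sketch reusing the hostname and credentials from the question) is to hit a simple API endpoint through the ingress prefix:</p> <pre><code># Should return cluster health JSON instead of a 400 &quot;no handler found&quot; error.
curl -u &quot;elastic:${ELASTIC_PASSWORD}&quot; https://mydomain.cloud/elastic/_cluster/health
</code></pre>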
Mikołaj Głodziak
<p>Hi I know this might be a possible duplicate, but I cannot get the answer from this <a href="https://stackoverflow.com/questions/55044486/waitforfirstconsumer-persistentvolumeclaim-waiting-for-first-consumer-to-be-crea">question</a>.</p> <p>I have a prometheus deployment and would like to give it a persistent volume.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: prometheus-deployment namespace: monitoring labels: app: prometheus-server spec: replicas: 1 selector: matchLabels: app: prometheus-server template: metadata: labels: app: prometheus-server spec: containers: - name: prometheus image: prom/prometheus args: - &quot;--storage.tsdb.retention.time=60d&quot; - &quot;--config.file=/etc/prometheus/prometheus.yml&quot; - &quot;--storage.tsdb.path=/prometheus/&quot; ports: - containerPort: 9090 resources: requests: cpu: 500m memory: 500M limits: cpu: 1 memory: 1Gi volumeMounts: - name: prometheus-config-volume mountPath: /etc/prometheus/ - name: prometheus-storage-volume mountPath: /prometheus/ volumes: - name: prometheus-config-volume configMap: defaultMode: 420 name: prometheus-server-conf - name: prometheus-storage-volume persistentVolumeClaim: claimName: prometheus-pv-claim --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: prometheus-pv-claim spec: storageClassName: default volumeMode: Filesystem accessModes: - ReadWriteOnce resources: requests: storage: 4Gi </code></pre> <p>Now both the pvc and the deployment cannot be scheduled because the pvc waits for the deployment and the other way around. As far as I am concerned we have a cluster with automatic provisioning, thus I cannot just create a pv. How can I solve this problem, because other deployments do use pvc in the same style and it works.</p>
msts1906
<p>It's because of the namespace. A PVC is a namespaced object (<a href="https://stackoverflow.com/questions/35364367/share-persistent-volume-claims-amongst-containers-in-kubernetes-openshift/35366775#35366775">you can look here</a>). Your PVC is in the <code>default</code> namespace; moving it to the <code>monitoring</code> namespace should work.</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: prometheus-pv-claim namespace: monitoring spec: storageClassName: default volumeMode: Filesystem accessModes: - ReadWriteOnce resources: requests: storage: 4Gi </code></pre>
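<p>After applying it, you can confirm that the claim is created and bound in the same namespace as the deployment:</p> <pre><code># Both the PVC and its events now live in the monitoring namespace.
kubectl get pvc prometheus-pv-claim -n monitoring
kubectl describe pvc prometheus-pv-claim -n monitoring
</code></pre>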
heheh
<p>I deployed a k3s cluster into 2 raspberry pi 4. One as a master and the second as a worker using the script k3s offered with the following options:</p> <p>For the master node:</p> <pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --bind-address 192.168.1.113 (which is the master node ip)' sh - </code></pre> <p>To the agent node:</p> <pre><code>curl -sfL https://get.k3s.io | \ K3S_URL=https://192.168.1.113:6443 \ K3S_TOKEN=&lt;master-token&gt; \ INSTALL_K3S_EXEC='agent' sh- </code></pre> <p>Everything seems to work, but <code>kubectl top nodes</code> returns the following:</p> <pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% k3s-master 137m 3% 1285Mi 33% k3s-node-01 &lt;unknown&gt; &lt;unknown&gt; &lt;unknown&gt; &lt;unknown&gt; </code></pre> <p>I also tried to deploy the k8s dashboard, according to what is written in <a href="https://rancher.com/docs/k3s/latest/en/installation/kube-dashboard/" rel="nofollow noreferrer">the docs</a> but it fails to work because it can't reach the metrics server and gets a timeout error:</p> <pre><code>&quot;error trying to reach service: dial tcp 10.42.1.11:8443: i/o timeout&quot; </code></pre> <p>and I see a lot of errors in the pod logs:</p> <pre><code>2021/09/17 09:24:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. 2021/09/17 09:25:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. 2021/09/17 09:26:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. 2021/09/17 09:27:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. </code></pre> <p>logs from the <code>metrics-server</code> pod:</p> <pre><code>elet_summary:k3s-node-01: unable to fetch metrics from Kubelet k3s-node-01 (k3s-node-01): Get https://k3s-node-01:10250/stats/summary?only_cpu_and_memory=true: dial tcp 192.168.1.106:10250: connect: no route to host E0917 14:03:24.767949 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:k3s-node-01: unable to fetch metrics from Kubelet k3s-node-01 (k3s-node-01): Get https://k3s-node-01:10250/stats/summary?only_cpu_and_memory=true: dial tcp 192.168.1.106:10250: connect: no route to host E0917 14:04:24.767960 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:k3s-node-01: unable to fetch metrics from Kubelet k3s-node-01 (k3s-node-01): Get https://k3s-node-01:10250/stats/summary?only_cpu_and_memory=true: dial tcp 192.168.1.106:10250: connect: no route to host </code></pre>
Assaf Sapir
<p>Moving this out of comments for better visibility.</p> <hr /> <p>After creating a small cluster, I wasn't able to reproduce this behaviour: <code>metrics-server</code> worked fine for both nodes, and <code>kubectl top nodes</code> showed information and metrics about both available nodes (though it took some time to start collecting the metrics).</p> <p>Which leads to the troubleshooting steps for why it doesn't work. Checking the <code>metrics-server</code> logs is the most efficient way to figure this out:</p> <pre><code>$ kubectl logs metrics-server-58b44df574-2n9dn -n kube-system </code></pre> <p>Based on the logs, the next steps will differ; for instance, in the comments above (see also the connectivity checks sketched below):</p> <ul> <li>first it was <code>no route to host</code>, which is a network issue - the hostname could not be reached or resolved;</li> <li>then <code>i/o timeout</code>, which means the route exists but the service did not respond back. This may happen due to a firewall blocking certain ports/sources, <code>kubelet</code> not running (it listens on port <code>10250</code>), or - as it turned out for the OP - an issue with <code>ntp</code> which affected certificates and connections;</li> <li>errors may be different in other cases; it's important to find the error and troubleshoot further based on it.</li> </ul>
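<p>A couple of quick checks from the master node that cover the two error patterns above (a sketch; the node name is taken from the question, and <code>nc</code> plus SSH access to the worker are assumed):</p> <pre><code># Is the worker's kubelet port reachable at all?
nc -vz k3s-node-01 10250

# Do the clocks agree? Significant skew breaks TLS certificate validation.
date &amp;&amp; ssh k3s-node-01 date
</code></pre>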
moonkotte
<p>I am getting this error:</p> <pre><code>mysql-c INFO Trying to connect to MySQL server </code></pre> <p>So basically, going to <a href="http://blog.example.com/ghost" rel="nofollow noreferrer">http://blog.example.com/ghost</a> shows me &quot;service unavailable&quot; Here are some logs:</p> <p><strong>kubectl get events --all-namespaces</strong></p> <pre><code>NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE example 31m Warning Unhealthy pod/example-blog-ghost-ccf6cb4c4-7p7v6 Readiness probe failed: Get http://10.244.0.105:2368/: dial tcp 10.244.0.105:2368: connect: connection refused example 26m Warning BackOff pod/example-blog-ghost-ccf6cb4c4-7p7v6 Back-off restarting failed container example &lt;unknown&gt; Warning FailedScheduling pod/example-blog-ghost-ccf6cb4c4-fc94s 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory. example &lt;unknown&gt; Warning FailedScheduling pod/example-blog-ghost-ccf6cb4c4-fc94s 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory. example &lt;unknown&gt; Warning FailedScheduling pod/example-blog-ghost-ccf6cb4c4-fc94s running &quot;VolumeBinding&quot; filter plugin for pod &quot;example-blog-ghost-ccf6cb4c4-fc94s&quot;: pod has unbound immediate PersistentVolumeClaims example &lt;unknown&gt; Normal Scheduled pod/example-blog-ghost-ccf6cb4c4-fc94s Successfully assigned example/example-blog-ghost-ccf6cb4c4-fc94s to cluster-name example 18m Normal SuccessfulAttachVolume pod/example-blog-ghost-ccf6cb4c4-fc94s AttachVolume.Attach succeeded for volume &quot;pvc-b6cd4ca6-b410-4af2-ae2f-24b6169caa45&quot; example 18m Normal Pulled pod/example-blog-ghost-ccf6cb4c4-fc94s Container image &quot;docker.io/bitnami/ghost:3.33.0-debian-10-r6&quot; already present on machine example 18m Normal Created pod/example-blog-ghost-ccf6cb4c4-fc94s Created container ghost example 18m Normal Started pod/example-blog-ghost-ccf6cb4c4-fc94s Started container ghost example 3m35s Warning Unhealthy pod/example-blog-ghost-ccf6cb4c4-fc94s Readiness probe failed: Get http://10.244.0.37:2368/: dial tcp 10.244.0.37:2368: connect: connection refused example 16m Warning Unhealthy pod/example-blog-ghost-ccf6cb4c4-fc94s Liveness probe failed: Get http://10.244.0.37:2368/: dial tcp 10.244.0.37:2368: connect: connection refused example 19m Normal SuccessfulCreate replicaset/example-blog-ghost-ccf6cb4c4 Created pod: example-blog-ghost-ccf6cb4c4-fc94s example 23m Normal DeletingLoadBalancer service/example-blog-ghost Deleting load balancer example 23m Normal DeletedLoadBalancer service/example-blog-ghost Deleted load balancer example 19m Normal ExternalProvisioning persistentvolumeclaim/example-blog-ghost waiting for a volume to be created, either by external provisioner &quot;provisioner-id&quot; or manually created by system administrator example 19m Normal Provisioning persistentvolumeclaim/example-blog-ghost External provisioner is provisioning volume for claim &quot;example/example-blog-ghost&quot; example 16m Normal EnsuringLoadBalancer service/example-blog-ghost Ensuring load balancer example 19m Normal ScalingReplicaSet deployment/example-blog-ghost Scaled up replica set example-blog-ghost-ccf6cb4c4 to 1 example 19m Warning BadConfig ingress/example-blog-ghost Could not determine issuer for ingress due to bad annotations: failed to determine issuer name to be used for ingress resource example 19m Warning SyncLoadBalancerFailed service/example-blog-ghost Error syncing load balancer: failed to ensure load balancer: load-balancer is not yet active (current status: 
new) example 19m Warning SyncLoadBalancerFailed service/example-blog-ghost Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID id: PUT url-loadbalancer: 403 (request &quot;id-1&quot;) Load Balancer can't be updated while it processes previous actions example 19m Normal ProvisioningSucceeded persistentvolumeclaim/example-blog-ghost Successfully provisioned volume pvc-b6cd4ca6-b410-4af2-ae2f-24b6169caa45 example 18m Warning SyncLoadBalancerFailed service/example-blog-ghost Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID id: PUT url-loadbalancer: 403 (request &quot;id-2&quot;) Load Balancer can't be updated while it processes previous actions example 18m Warning SyncLoadBalancerFailed service/example-blog-ghost Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID id: PUT url-loadbalancer: 403 (request &quot;id-3&quot;) Load Balancer can't be updated while it processes previous actions example 17m Warning SyncLoadBalancerFailed service/example-blog-ghost Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID id: PUT url-loadbalancer: 403 (request &quot;id-4&quot;) Load Balancer can't be updated while it processes previous actions example 16m Normal EnsuredLoadBalancer service/example-blog-ghost Ensured load balancer cert-manager 4m34s Warning Unhealthy pod/cert-manager-webhook-567c5c769b-hxzzj Liveness probe failed: Get http://10.244.0.51:6080/livez: net/http: request canceled (Client.Timeout exceeded while awaiting headers) cert-manager 8m7s Warning Unhealthy pod/cert-manager-webhook-567c5c769b-hxzzj Readiness probe failed: Get http://10.244.0.51:6080/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) databases 73s Warning Unhealthy pod/mariadb-master-0 Readiness probe failed: Get http://10.244.0.121:9104/metrics: net/http: request canceled (Client.Timeout exceeded while awaiting headers) databases 4m34s Warning Unhealthy pod/mariadb-master-0 Liveness probe failed: Get http://10.244.0.121:9104/metrics: net/http: request canceled (Client.Timeout exceeded while awaiting headers) databases 4m41s Warning Unhealthy pod/mariadb-slave-0 Readiness probe failed: Get http://10.244.0.92:9104/metrics: net/http: request canceled (Client.Timeout exceeded while awaiting headers) databases 4m33s Warning Unhealthy pod/mariadb-slave-0 Liveness probe failed: Get http://10.244.0.92:9104/metrics: net/http: request canceled (Client.Timeout exceeded while awaiting headers) databases 4m40s Warning Unhealthy pod/mariadb-slave-1 Liveness probe failed: Get http://10.244.0.72:9104/metrics: net/http: request canceled (Client.Timeout exceeded while awaiting headers) databases 8m9s Warning Unhealthy pod/mariadb-slave-1 Readiness probe failed: Get http://10.244.0.72:9104/metrics: net/http: request canceled (Client.Timeout exceeded while awaiting headers) default 22m Warning VolumeFailedDelete persistentvolume/pvc-19869bc1-5a90-474e-a7e4-93c8a98fc47e rpc error: code = Unknown desc = DELETE id: 409 (request &quot;req1&quot;) failed to delete volume: attached volume cannot be deleted default 22m Warning VolumeFailedDelete persistentvolume/pvc-19869bc1-5a90-474e-a7e4-93c8a98fc47e rpc error: code = Unknown desc = DELETE id: 409 (request &quot;re2&quot;) failed to delete volume: attached volume cannot be deleted default 22m Warning VolumeFailedDelete persistentvolume/pvc-19869bc1-5a90-474e-a7e4-93c8a98fc47e 
rpc error: code = Unknown desc = DELETE id: 409 (request &quot;req3&quot;) failed to delete volume: attached volume cannot be deleted kube-system 70s Warning Unhealthy pod/cilium-operator-6d8c6cd8c4-98pdq Liveness probe failed: Get http://127.0.0.1:9234/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) routing 66s Warning Unhealthy pod/traefik-7bfff8d8f6-wjlcc Readiness probe failed: Get http://10.244.0.40:9000/ping: net/http: request canceled (Client.Timeout exceeded while awaiting headers) routing 15m Warning Unhealthy pod/traefik-7bfff8d8f6-wjlcc Liveness probe failed: Get http://10.244.0.40:9000/ping: net/http: request canceled (Client.Timeout exceeded while awaiting headers) </code></pre> <p><strong>kubectl get pods -n example</strong></p> <pre><code>NAME READY STATUS RESTARTS AGE example-blog-ghost-ccf6cb4c4-7p7v6 0/1 Running 1 5m21s </code></pre> <p><strong>kubectl get ingress -A</strong></p> <pre><code>NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE example example-blog-ghost &lt;none&gt; blog.example.com 80, 443 6m13s </code></pre> <p><strong>kubectl logs example-blog-ghost-ccf6cb4c4-7p7v6 -n example</strong></p> <pre><code>Welcome to the Bitnami ghost container Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-ghost Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-ghost/issues nami INFO Initializing mysql-client nami INFO mysql-client successfully initialized nami INFO Initializing ghost handler WARN Using /opt/bitnami/ghost as working directory mysql-c INFO Trying to connect to MySQL server </code></pre> <p>I ensured that the username and password are correct and I logged in via phpmyadmin via port-forward to check that another chart can access the mariadb database. Also I ensured the database table was created that is referenced in the ghost value file.</p> <p><strong>database: example_blog</strong></p> <p>My phpmyadmin helm chart value to connect to the mariadb are:</p> <pre><code>port: 3306 host: mariadb </code></pre> <p>The helm chart is from here:</p> <pre><code>https://hub.helm.sh/charts/bitnami/phpmyadmin </code></pre> <p>The mariadb helm chart is from here:</p> <pre><code>https://hub.helm.sh/charts/bitnami/mariadb </code></pre> <p>Also when I go to <a href="https://blog.example.com" rel="nofollow noreferrer">https://blog.example.com</a> (https version) I get a 404 page not found</p> <p>In the value file I have:</p> <pre><code>ghostProtocol: https ghostHost: blog.example.com ghostPort: 443 ghostPath: / </code></pre> <p>My values.yml:</p> <pre><code>## Global Docker image parameters ## Please, note that this will override the image parameters, including dependencies, configured to use the global value ## Current available global Docker image parameters: imageRegistry and imagePullSecrets ## # global: # imageRegistry: myRegistryName # imagePullSecrets: # - myRegistryKeySecretName # storageClass: myStorageClass ## Bitnami Ghost image version ## ref: https://hub.docker.com/r/bitnami/ghost/tags/ ## image: registry: docker.io repository: bitnami/ghost tag: 3.33.0-debian-10-r6 ## Specify a imagePullPolicy ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## pullPolicy: IfNotPresent ## Optionally specify an array of imagePullSecrets. ## Secrets must be manually created in the namespace. 
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ ## # pullSecrets: # - myRegistryKeySecretName ## String to partially override ghost.fullname template (will maintain the release name) ## # nameOverride: ## String to fully override ghost.fullname template ## # fullnameOverride: ## Init containers parameters: ## volumePermissions: Change the owner of the persist volume mountpoint to RunAsUser:fsGroup ## volumePermissions: image: registry: docker.io repository: bitnami/minideb tag: buster pullPolicy: Always ## Optionally specify an array of imagePullSecrets. ## Secrets must be manually created in the namespace. ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ ## # pullSecrets: # - myRegistryKeySecretName ## Ghost protocol, host, port and path to create application URLs ## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration ## ghostProtocol: https ghostHost: blog.example.com ghostPort: 443 ghostPath: / ## User of the application ## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration ## ghostUsername: example ## Application password ## Defaults to a random 10-character alphanumeric string if not set ## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration ## # ghostPassword: ## Admin email ## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration ## ghostEmail: [email protected] ## Ghost Blog name ## ref: https://github.com/bitnami/bitnami-docker-ghost#environment-variables ## ghostBlogTitle: example ## Set to `true` to allow the container to be started with blank passwords ## ref: https://github.com/bitnami/bitnami-docker-wordpress#environment-variables allowEmptyPassword: false ## SMTP mail delivery configuration ## ref: https://github.com/bitnami/bitnami-docker-redmine/#smtp-configuration ## smtpHost: example smtpPort: example smtpUser: example smtpPassword: &quot;&quot; smtpFromAddress: &quot;'example' &lt;[email protected]&gt;&quot; smtpService: example ## Use an existing secrets which already store your password data ## # existingSecret: # ## Name of the existing secret # ## # name: mySecret # ## Key mapping where &lt;key&gt; is the value which the deployment is expecting and # ## &lt;value&gt; is the name of the key in the existing secret. # ## # keyMapping: # ghost-password: myGhostPasswordKey # smtp-password: mySmtpPasswordKey ## Configure extra options for liveness and readiness probes ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) ## livenessProbe: enabled: true initialDelaySeconds: 120 periodSeconds: 10 timeoutSeconds: 5 failureThreshold: 6 successThreshold: 1 readinessProbe: enabled: true initialDelaySeconds: 30 periodSeconds: 5 timeoutSeconds: 3 failureThreshold: 6 successThreshold: 1 ## ## External database configuration ## externalDatabase: ## All of these values are only used when mariadb.enabled is set to false ## Database host host: mariadb ## non-root Username for Wordpress Database user: example ## Database password password: ## Database name database: example_blog ## Database port number port: 3306 ## ## MariaDB chart configuration ## ## https://github.com/bitnami/charts/blob/master/bitnami/mariadb/values.yaml ## mariadb: ## Whether to deploy a mariadb server to satisfy the applications database requirements. 
To use an external database set this to false and configure the externalDatabase parameters enabled: false ## Disable MariaDB replication replication: enabled: false ## Create a database and a database user ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-user-on-first-run ## db: name: bitnami_ghost user: bn_ghost ## If the password is not specified, mariadb will generates a random password ## # password: ## MariaDB admin password ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#setting-the-root-password-on-first-run ## # rootUser: # password: ## Enable persistence using Persistent Volume Claims ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ ## master: persistence: enabled: true ## mariadb data Persistent Volume Storage Class ## If defined, storageClassName: &lt;storageClass&gt; ## If set to &quot;-&quot;, storageClassName: &quot;&quot;, which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is ## set, choosing the default provisioner. (gp2 on AWS, standard on ## GKE, AWS &amp; OpenStack) ## # storageClass: &quot;-&quot; accessMode: ReadWriteOnce size: 8Gi ## Kubernetes configuration ## For minikube, set this to NodePort, elsewhere use LoadBalancer ## service: type: LoadBalancer ## HTTP Port port: 80 ## Extra ports to expose (normally used with the `sidecar` value) ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services ## extraPorts: [] ## Specify the loadBalancerIP value for LoadBalancer service types. ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer ## loadBalancerIP: ## nodePorts: ## http: &lt;to set explicitly, choose port between 30000-32767&gt; nodePorts: http: &quot;&quot; ## Enable client source IP preservation ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip ## externalTrafficPolicy: Cluster ## Service annotations. Evaluated as a template ## annotations: {} ## Pod Security Context ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ ## securityContext: enabled: true fsGroup: 1001 runAsUser: 1001 ## Enable persistence using Persistent Volume Claims ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ ## persistence: enabled: true ## ghost data Persistent Volume Storage Class ## If defined, storageClassName: &lt;storageClass&gt; ## If set to &quot;-&quot;, storageClassName: &quot;&quot;, which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is ## set, choosing the default provisioner. (gp2 on AWS, standard on ## GKE, AWS &amp; OpenStack) ## # storageClass: &quot;-&quot; accessMode: ReadWriteOnce size: 8Gi path: /bitnami ## Configure resource requests and limits ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: requests: memory: 512Mi cpu: 300m ## Configure the ingress resource that allows you to access the ## Ghost installation. Set up the URL ## ref: http://kubernetes.io/docs/user-guide/ingress/ ## ingress: ## Set to true to enable ingress record generation enabled: true ## Set this to true in order to add the corresponding annotations for cert-manager certManager: true ## Ingress annotations. Evaluated as a template. 
## For a full list of possible ingress annotations, please see ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md ## ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: &quot;true&quot; will automatically be set ## If certManager is set to true, annotation kubernetes.io/tls-acme: &quot;true&quot; will automatically be set annotations: # kubernetes.io/ingress.class: nginx kubernetes.io/ingress.class: traefik ## The list of hostnames to be covered with this ingress record. ## Most likely this will be just one host, but in the event more hosts are needed, this is an array hosts: - name: blog.example.com path: / ## Set this to true in order to enable TLS on the ingress record tls: true ## Optionally specify the TLS hosts for the ingress record ## Useful when the Ingress controller supports www-redirection ## If not specified, the above host name will be used # tlsHosts: # - www.ghost.local # - ghost.local ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS tlsSecret: example-ghost.local-tls secrets: ## If you're providing your own certificates, please use this to add the certificates as secrets ## key and certificate should start with -----BEGIN CERTIFICATE----- or ## -----BEGIN RSA PRIVATE KEY----- ## ## name should line up with a tlsSecret set further up ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set ## ## It is also possible to create and manage the certificates outside of this helm chart ## Please see README.md for more information # - name: ghost.local-tls # key: # certificate: ## Node selector for pod assignment ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector ## nodeSelector: {} ## Affinity for pod assignment (evaluated as a template) ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## affinity: {} ## Additional pod annotations ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ ## podAnnotations: {} ## Add sidecars to the pod ## For example: ## sidecars: ## - name: your-image-name ## image: your-image ## imagePullPolicy: Always ## ports: ## - name: portname ## containerPort: 1234 sidecars: [] ## Add init containers to the pod ## For example: ## initContainers: ## - name: your-image-name ## image: your-image ## imagePullPolicy: Always ## initContainers: [] ## Array to add extra volumes ## extraVolumes: [] ## Array to add extra mounts (normally used with extraVolumes) ## extraVolumeMounts: [] ## An array to add extra env vars ## For example: ## extraEnvVars: ## - name: MY_ENV_VAR ## value: env_var_value ## extraEnvVars: [] ## Name of a ConfigMap containing extra env vars ## extraEnvVarsConfigMap: ## Name of a Secret containing extra env vars ## extraEnvVarsSecret: </code></pre> <p>So I am kind of unsure how to figure out what to try next? Any ideas?</p>
Jacob
<p>Because the Ghost release runs in a different namespace than MariaDB, the <code>externalDatabase.host</code> value has to reference the service by its namespace-qualified name. Putting the following in the helm values file fixes the connection:</p> <pre><code>host: mariadb.databases </code></pre>
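<p>For reference, a minimal sketch of the relevant values block, assuming the MariaDB chart was installed in a namespace called <code>databases</code> and the cluster uses the default cluster domain (the short form <code>mariadb.databases</code> and the fully qualified service DNS name are equivalent inside the cluster):</p> <pre><code>externalDatabase:
  host: mariadb.databases.svc.cluster.local
  port: 3306
  user: example
  database: example_blog
</code></pre>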
Jacob
<p>I am looking for a solution here on Terraform for creating role assignment and selecting the principal ids based on region.. If I am running the code to china, the variable should be &quot;local.principal_ids_cn&quot; and if global, then it has to be &quot;local.principal_ids&quot;.. I do have a env variable where the geo will be set based on cluster-name.. so &quot;if geo = cn use local.principal_ids_cn, else use local.principal_ids&quot; How can this be incorporated in terraform?</p> <p>This is my input file:</p> <pre><code> &quot;applications&quot; : [ { &quot;principal_id&quot; : &quot;00000000-000000-global-000000000000&quot;, &quot;principal_id_cn&quot; : &quot;00000000-000000-china-000000000000&quot;, } ] } </code></pre> <p>My resource block looks like this:</p> <pre><code>locals { # get json role_data = jsondecode(file(var.inputfile)) principal_ids = distinct([for principal in local.role_data.applications : principal.principal_id]) principal_ids_cn = distinct([for principal_cn in local.role_data.applications : principal.principal_id_cn]) } data &quot;azurerm_subscription&quot; &quot;primary&quot; {} resource &quot;azurerm_role_assignment&quot; &quot;custom&quot; { for_each = toset(local.principal_ids) scope = data.azurerm_subscription.primary.id role_definition_name = var.custom_role principal_id = each.key } resource &quot;azurerm_role_assignment&quot; &quot;builtin&quot; { for_each = toset(local.principal_ids) scope = data.azurerm_subscription.primary.id role_definition_name = var.builtin_role principal_id = each.key } </code></pre> <p>variables.tf:</p> <pre><code>variable &quot;custom_role&quot; { type = string description = &quot;custom role&quot; default = &quot;READER&quot; } variable &quot;builtin_role&quot; { type = string description = &quot;builtin role&quot; default = &quot;My_built_in_role&quot; } </code></pre> <p>If there is a possibility to switch over the local variables based on the regions(china and global)? Any suggestions ate ideas how this can be achieved?</p>
pk_dhruv
<p>You can use a conditional expression in Terraform to implement the logic &quot;if geo = cn use local.principal_ids_cn, else use local.principal_ids&quot;. Note that both <code>azurerm_role_assignment</code> resources must then iterate over the new <code>local.principal</code> value, otherwise the selected list is never used.</p> <p>Terraform code for your resource block:</p> <pre><code>locals { # get json role_data = jsondecode(file(var.inputfile)) principal_ids = distinct([for principal in local.role_data.applications : principal.principal_id]) principal_ids_cn = distinct([for principal_cn in local.role_data.applications : principal_cn.principal_id_cn]) principal = (var.geo == &quot;cn&quot; ? local.principal_ids_cn : local.principal_ids) } data &quot;azurerm_subscription&quot; &quot;primary&quot; {} resource &quot;azurerm_role_assignment&quot; &quot;custom&quot; { for_each = toset(local.principal) scope = data.azurerm_subscription.primary.id role_definition_name = var.custom_role principal_id = each.key } resource &quot;azurerm_role_assignment&quot; &quot;builtin&quot; { for_each = toset(local.principal) scope = data.azurerm_subscription.primary.id role_definition_name = var.builtin_role principal_id = each.key } </code></pre> <p><a href="https://www.terraform.io/docs/language/expressions/conditionals.html" rel="nofollow noreferrer">https://www.terraform.io/docs/language/expressions/conditionals.html</a></p>
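<p>The conditional assumes a <code>geo</code> variable exists. As a minimal sketch (the variable name and default are assumptions, not part of the original question), it could be declared like this and set per environment, for example with <code>-var=&quot;geo=cn&quot;</code> or from the cluster-name-derived environment value mentioned in the question:</p> <pre><code>variable &quot;geo&quot; {
  type        = string
  description = &quot;Deployment geography: cn for China, global otherwise&quot;
  default     = &quot;global&quot;
}
</code></pre>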
Andriy Bilous
<p>I have 2 pods in a Kubernetes namespace. One uses <code>TCP</code> and the other uses <code>UDP</code> and both are exposed using <code>ClusterIP</code> services via external IP. Both services use the same external IP.</p> <p>This way I let my users access both the services using the same IP. I want to remove the use of <code>spec.externalIPs</code> but be able to allow my user to still use a single domain name/IP to access both the <code>TCP</code> and <code>UDP</code> services.</p> <p>I do not want to use <code>spec.externalIPs</code>, so I believe clusterIP and NodePort services cannot be used. ​Load balancer service does not allow me to specify both <code>TCP</code> and <code>UDP</code> in the same service.</p> <p>I have experimented with NGINX Ingress Controller. But even there the Load Balancer service needs to be created which cannot support both <code>TCP</code> and <code>UDP</code> in the same service.</p> <p>Below is the cluster IP service exposing the apps currently using external IP:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: tcp-udp-svc name: tcp-udp-service spec: externalIPs: - &lt;public IP- IP2&gt; ports: - name: tcp-exp port: 33001 protocol: TCP targetPort: 33001 - name: udp-exp port: 33001 protocol: UDP targetPort: 33001 selector: app: tcp-udp-app sessionAffinity: None type: ClusterIP </code></pre> <p>The service shows up like below</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) tcp-udp-service ClusterIP &lt;internal IP IP1&gt; &lt;public IP- IP2&gt; 33001/TCP,33001/UDP </code></pre> <p>Using the above set up, both the <code>TCP</code> and <code>UDP</code> apps on port 33001 is accessible externally just fine using IP2.</p> <p>As you can see I've used:</p> <pre><code>spec: externalIPs: - &lt;public IP- IP2&gt; </code></pre> <p>In the service to make it accessible externally.</p> <p>However I do not want to use this set up, ie. I am looking for a set up without using the <code>spec.externalIPs</code>.</p> <p>When using a load balancer service to expose the apps, I see that both <code>TCP</code> and <code>UDP</code> cannot be added in the same load balancer service. So I have to create one load balancer service for <code>TCP</code> and add another load balancer service for <code>UDP</code> like below</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) tcp-service LoadBalancer &lt;internal IP IP1&gt; &lt;public IP- IP2&gt; 33001/TCP udp-service LoadBalancer &lt;internal IP IP3&gt; &lt;public IP- IP4&gt; 33001/UDP --- apiVersion: v1 kind: Service metadata: name: tcp-service spec: externalTrafficPolicy: Cluster ports: - name: tcp-svc port: 33001 protocol: TCP targetPort: 33001 selector: app: tcp-udp-app sessionAffinity: None type: LoadBalancer --- apiVersion: v1 kind: Service metadata: name: udp-service spec: externalTrafficPolicy: Cluster ports: - name: udp-svc port: 33001 protocol: UDP targetPort: 33001 selector: app: tcp-udp-app sessionAffinity: None type: LoadBalancer </code></pre> <p>But the problem is that each of these services get individual IPs assigned (IP2 &amp; IP4). But I want to be able to access both the TCP &amp; UDP apps using the same IP. When testing out with nginx ingress controller too, I am faced the same issue as above.</p> <p>Is there any other possible way to achieve what I am looking for, ie. to expose both TCP and UDP services on the same IP, but without using the <code>spec.externalIPs</code>?</p>
anas
<p>Unfortunately, you will not be able to achieve your desired result with a LoadBalancer Service type in any way for UDP traffic, because according to the <a href="https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas#lbaas_limitations" rel="nofollow noreferrer">following documentation</a> the UDP protocol is not supported by any of the VPC load balancer types.</p> <p>You could theoretically define a portable public IP address for the LoadBalancer Service type by using the <code>loadBalancerIP</code> annotation, but this portable public IP address has to be available in a portable public subnet upfront, and the cloud provider's LB needs to support the UDP protocol. See <a href="https://cloud.ibm.com/docs/containers?topic=containers-cs_loadbalancer_fails" rel="nofollow noreferrer">this doc</a>.</p> <p>Workaround for non-prod setups:</p> <p>You can use <code>hostPort</code> to <a href="https://kubernetes.io/docs/concepts/configuration/overview/" rel="nofollow noreferrer">expose</a> TCP &amp; UDP ports directly on worker nodes. It can be used together with Ingress controllers that support TCP &amp; UDP services, such as NGINX Ingress. For more, see <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">this documentation</a> and the sketch below.</p>
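<p>A minimal sketch of that NGINX Ingress approach, assuming the controller runs in the <code>ingress-nginx</code> namespace, was started with the <code>--tcp-services-configmap</code> and <code>--udp-services-configmap</code> flags, and the application service lives in the <code>default</code> namespace (adjust names to your setup):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # &lt;exposed port&gt;: &quot;&lt;namespace&gt;/&lt;service&gt;:&lt;service port&gt;&quot;
  &quot;33001&quot;: &quot;default/tcp-udp-service:33001&quot;
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  &quot;33001&quot;: &quot;default/tcp-udp-service:33001&quot;
</code></pre> <p>Both ConfigMaps point at the same backing service, so TCP and UDP traffic on port 33001 end up behind the single address of the ingress controller. Note that the controller's own Service also needs to expose port 33001 for both protocols.</p>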
Mikołaj Głodziak
<p>How do you change the <code>name</code> of a user in a kube config file with kubectl (no text editor)?</p> <p>Example kube config file <code>stage_config.yaml</code>:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://127.0.0.1:6443 name: cluster.local contexts: - context: cluster: cluster.local user: kubernetes-admin name: [email protected] current-context: [email protected] kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: REDACTED client-key-data: REDACTED </code></pre> <p>I want to do something like <code>kubectl config rename-user --kubeconfig ~/.kube/stage_config.yaml kubernetes-admin kubernetes-admin-1</code></p> <p>With the output like:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://127.0.0.1:6443 name: cluster.local contexts: - context: cluster: cluster.local user: kubernetes-admin name: [email protected] current-context: [email protected] kind: Config preferences: {} users: - name: kubernetes-admin-1 user: client-certificate-data: REDACTED client-key-data: REDACTED </code></pre> <p>I've tried <code>kubectl config set</code> but receive the following error:</p> <pre><code>kubectl config set --kubeconfig ~/.kube/stage_config.yaml users.name.kubernetes-admin kubernetes-admin-1 error: can't set a map to a value: map[kubernetes-admin:0xc000c53100] </code></pre> <p><code>kubectl config --help</code> shows that the <code>rename-context</code> command exists, but nothing like <code>rename-user</code> nor <code>rename-cluster</code> exists.</p>
SlyGuy
<p>Use jq to do the magic (just make sure your kubeconfig is in JSON). I'm sure it is possible to do it with yq for YAML, but I'll let someone else fill that one in. To rename the user, set <code>USER_NAME</code> to your desired value and rewrite both the user entry and the context that references it; the same pattern works for clusters via <code>.clusters[0].name</code> and <code>.contexts[0].context.cluster</code>.</p> <pre class="lang-bash prettyprint-override"><code>jq '.users[0].name=&quot;'$USER_NAME'&quot; | .contexts[0].context.user=&quot;'$USER_NAME'&quot;' source/kubeconfig &gt; target/kubeconfig </code></pre>
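<p>Since kubeconfig files are normally YAML, here is a rough equivalent assuming mikefarah's yq v4 is installed (flag and expression syntax differ in older yq versions); it edits the file from the question in place:</p> <pre><code>yq -i '.users[0].name = &quot;kubernetes-admin-1&quot; | .contexts[0].context.user = &quot;kubernetes-admin-1&quot;' ~/.kube/stage_config.yaml
</code></pre>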
anon_coward
<p>I recently learned that Intel SGX processors are able to encrypt enclaves for persistent storage to disk. After this, I started to write my first SGX apps and now I am wondering if there is any opportunity to deploy them on Kubernetes?</p>
jayare
<p>Your question can be split into multiple steps:</p> <ol> <li>Having a Kubernetes cluster that exposes SGX to your apps</li> </ol> <p>You'll need Kubernetes nodes with SGX-capable CPUs. The way Kubernetes handles &quot;special devices&quot; such as SGX is through <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/" rel="nofollow noreferrer">Device Plugins</a>. Multiple SGX device plugins exist for Kubernetes:</p> <ul> <li>Intel: <a href="https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/sgx_plugin/README.html" rel="nofollow noreferrer">https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/sgx_plugin/README.html</a></li> <li>Azure: <a href="https://github.com/Azure/aks-engine/blob/master/docs/topics/sgx.md" rel="nofollow noreferrer">https://github.com/Azure/aks-engine/blob/master/docs/topics/sgx.md</a></li> </ul> <p>Once you've equipped a node with such a plugin, it provides a mechanism to expose the SGX device to your containers (a pod-level sketch of requesting SGX resources follows at the end of this answer).</p> <ol start="2"> <li>Building SGX apps for Kubernetes and accessing SGX resources</li> </ol> <p>You'll need to bundle your enclave into a container and write the Kubernetes resource definitions. The most common language for cloud-native applications is probably Go. There is a great example of a confidential microservice application based on the EdgelessRT Go runtime and SDK, which uses the Azure device plugin for exposing SGX to the containers: <a href="https://github.com/edgelesssys/emojivoto" rel="nofollow noreferrer">https://github.com/edgelesssys/emojivoto</a></p> <ol start="3"> <li>Managing attestation, sealing, etc. for your SGX app</li> </ol> <p>Probably the most interesting part of deploying SGX apps on Kubernetes is the SGX-specific orchestration. While Kubernetes handles all the general orchestration, SGX-specific tasks such as remote attestation, migration, and secrets management of your deployments need to be handled separately. The Marblerun service mesh addresses those tasks, namely:</p> <ul> <li>Attestation of your services: <a href="https://marblerun.sh/docs/features/attestation/" rel="nofollow noreferrer">https://marblerun.sh/docs/features/attestation/</a></li> <li>Migration and Recovery: <a href="https://marblerun.sh/docs/features/recovery/" rel="nofollow noreferrer">https://marblerun.sh/docs/features/recovery/</a></li> <li>Secrets Management: <a href="https://marblerun.sh/docs/features/secrets-management/" rel="nofollow noreferrer">https://marblerun.sh/docs/features/secrets-management/</a></li> </ul>
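<p>As a rough sketch of what requesting SGX looks like from the workload's point of view: with the Intel device plugin installed, containers ask for SGX via extended resources in their limits. The resource names and quantities below follow the Intel plugin's documentation and may differ for other plugins or versions, and the image is a placeholder:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: sgx-demo
spec:
  containers:
    - name: enclave-app
      image: registry.example.com/my-sgx-app:latest  # placeholder image
      resources:
        limits:
          sgx.intel.com/enclave: 1      # access to the SGX enclave device
          sgx.intel.com/epc: &quot;512Ki&quot;    # enclave page cache memory
</code></pre>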
Jonathan
<p>I have stuck resources after deleting a jitsi stack on my master node. The only pending resources are these two <code>statefulsets.apps</code>; no pods are running.</p> <p><a href="https://i.stack.imgur.com/rZqnr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rZqnr.png" alt="My issue" /></a></p> <p>If I execute the command:</p> <pre><code>kubectl delete statefulsets shard-0-jvb -n jitsi --force --grace-period=0 --cascade=orphan </code></pre> <p>the console freezes for hours and the resources are not removed.</p> <p>Is there any other way to force the destroy process?</p> <p>The stack was created with Kustomize.</p> <p><a href="https://i.stack.imgur.com/0zPdH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0zPdH.png" alt="enter image description here" /></a></p>
Miguel Conde
<p>Posting the answer as community wiki, feel free to edit and expand.</p> <hr /> <p><strong>Stuck objects in general</strong></p> <p>Sometimes objects can't be deleted due to <code>finalizer</code>(s); you can find them by viewing the whole object, e.g. <code>kubectl get pod pod-name -o json</code>.</p> <p>Then there are two options:</p> <ol> <li><p>fix whatever prevents the dependent object from being deleted (for instance it was the metrics server - see <a href="https://stackoverflow.com/a/68319002/15537201">another answer on SO</a>)</p> </li> <li><p>if it's not possible to fix, the <code>finalizer</code> should be removed manually with <code>kubectl edit resource_type resource_name</code> or a patch (see the sketch below)</p> </li> </ol> <p><strong>Stuck statefulsets</strong></p> <p>Kubernetes documentation has two parts related to deleting statefulsets (it's a bit more complicated since they usually have persistent volumes as well).</p> <p>Useful links:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/" rel="nofollow noreferrer">Delete a StatefulSet</a></li> <li><a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="nofollow noreferrer">Force Delete StatefulSet Pods</a></li> </ul>
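<p>A small sketch of the manual finalizer removal for the resources in the question (names taken from the question; clearing the finalizer list should only be done once you are sure nothing else needs to act on the object):</p> <pre><code># inspect the finalizers first
kubectl get statefulset shard-0-jvb -n jitsi -o jsonpath='{.metadata.finalizers}'

# then clear them so the deletion can complete
kubectl patch statefulset shard-0-jvb -n jitsi --type=merge -p '{&quot;metadata&quot;:{&quot;finalizers&quot;:null}}'
</code></pre>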
moonkotte
<p>With:</p> <p><code>kubectl apply -f web.yaml --server-dry-run --validate=false -o yaml</code></p> <p>I get an error:</p> <pre><code>Error: unknown flag: --server-dry-run See 'kubectl apply --help' for usage. </code></pre> <p>And even with:</p> <p><code>kubectl apply -f web.yaml --dry-run=server --validate=false -o yaml</code></p> <p>I get another error:</p> <pre><code>Warning: resource deployments/web is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. Error from server (Conflict): error when applying patch: {&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;kubectl.kubernetes.io/last-applied-configuration&quot;:&quot;{\&quot;apiVersion\&quot;:\&quot;apps/v1\&quot;,\&quot;kind\&quot;:\&quot;Deployment\&quot;,\&quot;metadata\&quot;:{\&quot;annotations\&quot;:{},\&quot;creationTimestamp\&quot;:\&quot;2021-12-30T08:51:06Z\&quot;,\&quot;generation\&quot;:1,\&quot;labels\&quot;:{\&quot;app\&quot;:\&quot;web\&quot;},\&quot;name\&quot;:\&quot;web\&quot;,\&quot;namespace\&quot;:\&quot;default\&quot;,\&quot;resourceVersion\&quot;:\&quot;1589\&quot;,\&quot;uid\&quot;:\&quot;c2a4c20e-f55b-4113-b8e6-d2c19bb3e91c\&quot;},\&quot;spec\&quot;:{\&quot;progressDeadlineSeconds\&quot;:600,\&quot;replicas\&quot;:1,\&quot;revisionHistoryLimit\&quot;:10,\&quot;selector\&quot;:{\&quot;matchLabels\&quot;:{\&quot;app\&quot;:\&quot;web\&quot;}},\&quot;strategy\&quot;:{\&quot;rollingUpdate\&quot;:{\&quot;maxSurge\&quot;:\&quot;25%\&quot;,\&quot;maxUnavailable\&quot;:\&quot;25%\&quot;},\&quot;type\&quot;:\&quot;RollingUpdate\&quot;},\&quot;template\&quot;:{\&quot;metadata\&quot;:{\&quot;creationTimestamp\&quot;:null,\&quot;labels\&quot;:{\&quot;app\&quot;:\&quot;web\&quot;}},\&quot;spec\&quot;:{\&quot;containers\&quot;:[{\&quot;image\&quot;:\&quot;nginx\&quot;,\&quot;imagePullPolicy\&quot;:\&quot;Always\&quot;,\&quot;name\&quot;:\&quot;nginx\&quot;,\&quot;resources\&quot;:{},\&quot;terminationMessagePath\&quot;:\&quot;/dev/termination-log\&quot;,\&quot;terminationMessagePolicy\&quot;:\&quot;File\&quot;}],\&quot;dnsPolicy\&quot;:\&quot;ClusterFirst\&quot;,\&quot;restartPolicy\&quot;:\&quot;Always\&quot;,\&quot;schedulerName\&quot;:\&quot;default-scheduler\&quot;,\&quot;securityContext\&quot;:{},\&quot;terminationGracePeriodSeconds\&quot;:30}}},\&quot;status\&quot;:{}}\n&quot;},&quot;resourceVersion&quot;:&quot;1589&quot;}} to: Resource: &quot;apps/v1, Resource=deployments&quot;, GroupVersionKind: &quot;apps/v1, Kind=Deployment&quot; Name: &quot;web&quot;, Namespace: &quot;default&quot; for: &quot;web.yaml&quot;: Operation cannot be fulfilled on deployments.apps &quot;web&quot;: the object has been modified; please apply your changes to the latest version and try again </code></pre> <p>What should I do?</p> <p>I'm using docker-desktop and my kubectl version is:</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.4&quot;, GitCommit:&quot;b695d79d4f967c403a96986f1750a35eb75e75f1&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-11-17T15:48:33Z&quot;, GoVersion:&quot;go1.16.10&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.4&quot;, GitCommit:&quot;b695d79d4f967c403a96986f1750a35eb75e75f1&quot;, 
GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-11-17T15:42:41Z&quot;, GoVersion:&quot;go1.16.10&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>and my cluster version is <code>1.22.4</code></p>
Ali
<blockquote> <p>I get an error:</p> </blockquote> <pre class="lang-yaml prettyprint-override"><code>Error: unknown flag: --server-dry-run See 'kubectl apply --help' for usage. </code></pre> <p>That's expected. The <code>--server-dry-run</code> flag was deprecated and has since been removed; you need to use the <code>--dry-run=server</code> flag instead. For more, look <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create" rel="nofollow noreferrer">at this site</a>.</p> <p>As for the second problem, this is expected behaviour on the Kubernetes side. You can find the <a href="https://github.com/kubernetes/kubernetes/issues/84430#issuecomment-638376994" rel="nofollow noreferrer">explanation here</a>. To resolve it, remove server-populated fields such as <code>creationTimestamp</code> and <code>resourceVersion</code> from the manifest before applying it (a sketch follows below). It is well explained in <a href="https://stackoverflow.com/questions/51297136/kubectl-error-the-object-has-been-modified-please-apply-your-changes-to-the-la">this question</a>.</p>
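<p>A minimal sketch of cleaning the manifest, assuming it was originally exported with <code>kubectl get ... -o yaml</code> and that mikefarah's yq v4 is available (any other YAML tool or a text editor works just as well):</p> <pre><code>kubectl get deployment web -o yaml \
  | yq 'del(.metadata.resourceVersion) | del(.metadata.uid) | del(.metadata.creationTimestamp) | del(.status)' \
  &gt; web.yaml

kubectl apply -f web.yaml --dry-run=server --validate=false -o yaml
</code></pre>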
Mikołaj Głodziak
<p>quick question as I am sure its an easy fix, just cannot seem to figure it out!</p> <p>We have a site that runs on <a href="http://www.awesomeapp.com" rel="nofollow noreferrer">www.awesomeapp.com</a> - all working perfectly on ingress routing</p> <p>However I want to also redirect the route domain to <a href="http://www.awesomeapp.com" rel="nofollow noreferrer">www.awesomeapp.com</a> for example awesomeapp.com =&gt; <a href="http://www.awesomeapp.com" rel="nofollow noreferrer">www.awesomeapp.com</a></p> <p>I added this annotation nginx.ingress.kubernetes.io/permanent-redirect: <a href="https://www.awesomeapp.com" rel="nofollow noreferrer">https://www.awesomeapp.com</a></p> <p>Again all working perfectly.</p> <p>My question is how do I also get the path added as well. For example</p> <p>user enters in awesomeapp.com/myawesomepage and what I want is it to not only redirect but append the path as well so it ends up like this</p> <p><a href="http://www.blahblah.com/myawesomepage" rel="nofollow noreferrer">www.blahblah.com/myawesomepage</a></p> <p>I added this ingress.kubernetes.io/rewrite-target: /$1$2 but this does not work, and that is in conjunction with this - path: /(/|$)(.*)</p> <p>Full yaml config as follows:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: root-routing labels: app: awesomeapp annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/permanent-redirect: https://www.awesomeapp.com ingress.kubernetes.io/rewrite-target: /$1$2 spec: tls: - hosts: - awesomeapp.com secretName: awesomeapp-com-tlscert rules: - host: awesomeapp.com http: paths: - path: /(/|$)(.*) pathType: Prefix backend: service: name: awesomeapp port: number: 80 </code></pre> <p>Thanks John</p>
John Hyde
<p>Solved it by swapping the <code>permanent-redirect</code> annotation for a <code>server-snippet</code>, which preserves the request path because <code>$request_uri</code> is appended to the target: <code>nginx.ingress.kubernetes.io/server-snippet: | return 301 https://www.awesomeapp.com$request_uri;</code> (formatted in context below).</p>
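<p>For readability, this is how the annotation sits in the ingress manifest from the question (only the annotations block changes; the redirect target is the www host from the question):</p> <pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/server-snippet: |
      return 301 https://www.awesomeapp.com$request_uri;
</code></pre>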
John Hyde
<p>Im new to kubernetes. In the yaml file to create services, i define externalIPs value in order to access services from outside the cluster:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: mytestservice spec: type: ClusterIP clusterIP: 10.96.1.113 externalIPs: - 172.16.80.117 ports: - name: tcp-8088 protocol: TCP port: 8088 targetPort: 8088: selector: service-app: mytestservice </code></pre> <p>and it works just fine, i can call to my service by using externalIp:port (this case 172.16.80.117:8088). But i heard people talking about ingress controller (and some API gateways) that provide access from outside. I do read about them a bit but still cant tell what're the differences!? and does my cluster need them or not?</p> <p>(According to the accepted answer i found here <a href="https://stackoverflow.com/questions/44110876/kubernetes-service-external-ip-pending">Kubernetes service external ip pending</a></p> <p><code>With the Ingress Controller you can setup a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress Controller.</code></p> <p>My cluster is a custom K8s Cluster too, using kubeadm. If i have no need of using domain name, just address my services directly by externalip and port then i can totally ignore ingress controller. Am i right?</p> <p>Thanks!</p>
Left Click
<p>Welcome to the community.</p> <p><strong>Short answer:</strong></p> <p>At this point the answer to your question is yes: for simple cases you may completely ignore ingress. It becomes a good option when it's time to go to production.</p> <hr /> <p><strong>A bit more detail:</strong></p> <p>The main reason to look at <code>ingress</code> is that it manages incoming traffic: it works with HTTP/HTTPS requests, provides path-based routing, supports TLS/SSL and can perform <a href="https://en.wikipedia.org/wiki/TLS_termination_proxy" rel="nofollow noreferrer">TLS termination</a>, among other things.</p> <p>There are different ingress controllers available; the most common is <code>nginx ingress</code>. It has almost all the features regular <code>nginx</code> has. You can browse the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">nginx ingress annotations</a> to see what it can do.</p> <p>For example, in a microservices application each service would otherwise need its own load balancer, while with ingress everything can be directed to a single entry point and routed on to the individual services (see the sketch below and the examples in the useful links).</p> <p>If you are only experimenting with Kubernetes and a single service, there is no need for a load balancer at all; a <code>nodePort</code> or <code>externalIP</code> is enough.</p> <p>Also, with <code>ingress</code> deployed there's no need to specify a port in the URL, since ingress usually listens on <code>80</code> and <code>443</code>.</p> <p>I'd say it's worth trying: it makes routing and service management cleaner.</p> <hr /> <p><strong>Useful links:</strong></p> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes ingress - description and examples</a></li> <li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/" rel="nofollow noreferrer">basic usage - host based routing</a></li> </ul>
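<p>To make this concrete, here is a minimal sketch of an Ingress resource routing to the <code>mytestservice</code> Service from the question. The hostname is a placeholder, and the manifest assumes an NGINX ingress controller is already installed in the cluster:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mytestservice-ingress
spec:
  ingressClassName: nginx               # matches the installed ingress controller
  rules:
    - host: mytestservice.example.com   # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mytestservice
                port:
                  number: 8088
</code></pre> <p>Clients then reach the application on ports 80/443 of the ingress controller, and the controller forwards traffic to port 8088 of the Service.</p>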
moonkotte
<p>The <a href="https://book.kubebuilder.io" rel="nofollow noreferrer">Kubebuilder V3 documentation</a> explains that it talks about &quot;How to batch multiple events into a single reconciliation call&quot;. However, I could not find any information about event management in this documentation.</p> <p>Could you please provide information/code sample about how to send Events with <code>Kubebuilder-v3/operator-sdk</code>?</p>
Fabrice Jammes
<p><a href="https://book-v1.book.kubebuilder.io/basics/simple_controller.html" rel="nofollow noreferrer">This part</a> from the official documentation should answer your question:</p> <blockquote> <p><strong>This business logic of the Controller is implemented in the <code>Reconcile</code> function. This function takes the Namespace and Name of a ContainerSet, allowing multiple Events to be batched together into a single Reconcile call.</strong> The function shown here creates or updates a Deployment using the replicas and image specified in ContainerSet.Spec. Note that it sets an OwnerReference for the Deployment to enable garbage collection on the Deployment once the ContainerSet is deleted.</p> <ol> <li>Read the ContainerSet using the NamespacedName</li> <li>If there is an error or it has been deleted, return</li> <li>Create the new desired DeploymentSpec from the ContainerSetSpec</li> <li>Read the Deployment and compare the Deployment.Spec to the ContainerSet.Spec</li> <li>If the observed Deployment.Spec does not match the desired spec - Deployment was not found: create a new Deployment - Deployment was found and changes are needed: update the Deployment</li> </ol> </blockquote> <p>There you can also find example with the code.</p>
Mikołaj Głodziak
<h1>TLDR;</h1> <p>It's possible to connect 2 pods in Kubernetes as they were in the same local-net with all the ports opened?</p> <h1>Motivation</h1> <p>Currently, we have airflow implemented in a Kubernetes cluster, and aiming to use TensorFlow Extended we need to use Apache beam. For our use case Spark would be the appropriate runner to be used, and as airflow and TensorFlow are coded in python we would need to use the Apache Beam's Portable Runner (<a href="https://beam.apache.org/documentation/runners/spark/#portability" rel="nofollow noreferrer">https://beam.apache.org/documentation/runners/spark/#portability</a>).</p> <h1>The problem</h1> <p>The communication between the airflow pod and the job server pod is resulting in transmitting errors (probably because of some random ports used by the job server).</p> <h1>Setup</h1> <p>To follow a good isolation practice and to imitate the Spark in Kubernetes common setup (using the driver inside the cluster in a pod), the job server was implemented as:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: beam-spark-job-server labels: app: airflow-k8s spec: selector: matchLabels: app: beam-spark-job-server replicas: 1 template: metadata: labels: app: beam-spark-job-server spec: restartPolicy: Always containers: - name: beam-spark-job-server image: apache/beam_spark_job_server:2.27.0 args: [&quot;--spark-master-url=spark://spark-master:7077&quot;] resources: limits: memory: &quot;1Gi&quot; cpu: &quot;0.7&quot; env: - name: SPARK_PUBLIC_DNS value: spark-client ports: - containerPort: 8099 protocol: TCP name: job-server - containerPort: 7077 protocol: TCP name: spark-master - containerPort: 8098 protocol: TCP name: artifact - containerPort: 8097 protocol: TCP name: java-expansion apiVersion: v1 kind: Service metadata: name: beam-spark-job-server labels: app: airflow-k8s spec: type: ClusterIP selector: app: beam-spark-job-server ports: - port: 8099 protocol: TCP targetPort: 8099 name: job-server - port: 7077 protocol: TCP targetPort: 7077 name: spark-master - port: 8098 protocol: TCP targetPort: 8098 name: artifact - port: 8097 protocol: TCP targetPort: 8097 name: java-expansion </code></pre> <h1>Development/Errors</h1> <p>If I execute the command <code>python -m apache_beam.examples.wordcount --output ./data_test/ --runner=PortableRunner --job_endpoint=beam-spark-job-server:8099 --environment_type=LOOPBACK</code> from the airflow pod I get no logs on the job server and I get this error on the terminal:</p> <pre><code>INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds. INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds. INFO:oauth2client.client:Timeout attempting to reach GCE metadata service. WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information. Connecting anonymously. INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:46569 WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter. 
INFO:root:Default Python SDK image for environment is apache/beam_python3.7_sdk:2.27.0 Traceback (most recent call last): File &quot;/usr/local/lib/python3.7/runpy.py&quot;, line 193, in _run_module_as_main &quot;__main__&quot;, mod_spec) File &quot;/usr/local/lib/python3.7/runpy.py&quot;, line 85, in _run_code exec(code, run_globals) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/examples/wordcount.py&quot;, line 99, in &lt;module&gt; run() File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/examples/wordcount.py&quot;, line 94, in run ERROR:grpc._channel:Exception iterating requests! Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.7/site-packages/grpc/_channel.py&quot;, line 195, in consume_request_iterator request = next(request_iterator) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/artifact_service.py&quot;, line 355, in __next__ raise self._queue.get() File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/pipeline.py&quot;, line 561, in run return self.runner.run_pipeline(self, self._options) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/portable_runner.py&quot;, line 421, in run_pipeline job_service_handle.submit(proto_pipeline) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/portable_runner.py&quot;, line 115, in submit prepare_response.staging_session_token) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/portable_runner.py&quot;, line 214, in stage staging_session_token) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/artifact_service.py&quot;, line 241, in offer_artifacts for request in requests: File &quot;/home/airflow/.local/lib/python3.7/site-packages/grpc/_channel.py&quot;, line 416, in __next__ return self._next() File &quot;/home/airflow/.local/lib/python3.7/site-packages/grpc/_channel.py&quot;, line 803, in _next raise self grpc._channel._MultiThreadedRendezvous: &lt;_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = &quot;Unknown staging token job_b6f49cc2-6732-4ea3-9aef-774e3d22867b&quot; debug_error_string = &quot;{&quot;created&quot;:&quot;@1613765341.075846957&quot;,&quot;description&quot;:&quot;Error received from peer ipv4:127.0.0.1:8098&quot;,&quot;file&quot;:&quot;src/core/lib/surface/call.cc&quot;,&quot;file_line&quot;:1067,&quot;grpc_message&quot;:&quot;Unknown staging token job_b6f49cc2-6732-4ea3-9aef-774e3d22867b&quot;,&quot;grpc_status&quot;:3}&quot; &gt; output | 'Write' &gt;&gt; WriteToText(known_args.output) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/pipeline.py&quot;, line 582, in __exit__ self.result = self.run() File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/pipeline.py&quot;, line 561, in run return self.runner.run_pipeline(self, self._options) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/portable_runner.py&quot;, line 421, in run_pipeline job_service_handle.submit(proto_pipeline) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/portable_runner.py&quot;, line 115, in submit prepare_response.staging_session_token) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/portable_runner.py&quot;, line 214, in stage 
staging_session_token) File &quot;/home/airflow/.local/lib/python3.7/site-packages/apache_beam/runners/portability/artifact_service.py&quot;, line 241, in offer_artifacts for request in requests: File &quot;/home/airflow/.local/lib/python3.7/site-packages/grpc/_channel.py&quot;, line 416, in __next__ return self._next() File &quot;/home/airflow/.local/lib/python3.7/site-packages/grpc/_channel.py&quot;, line 803, in _next raise self grpc._channel._MultiThreadedRendezvous: &lt;_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = &quot;Unknown staging token job_b6f49cc2-6732-4ea3-9aef-774e3d22867b&quot; debug_error_string = &quot;{&quot;created&quot;:&quot;@1613765341.075846957&quot;,&quot;description&quot;:&quot;Error received from peer ipv4:127.0.0.1:8098&quot;,&quot;file&quot;:&quot;src/core/lib/surface/call.cc&quot;,&quot;file_line&quot;:1067,&quot;grpc_message&quot;:&quot;Unknown staging token job_b6f49cc2-6732-4ea3-9aef-774e3d22867b&quot;,&quot;grpc_status&quot;:3}&quot; </code></pre> <p>Which indicates an error transmitting the job. If I implement the Job Server in the same pod as airflow I get a full working communication between these two containers, I would like to have the same behavior but with them in different pods.</p>
Giovani Merlin
<p>You need to deploy the two containers in one pod, i.e. run the Beam job server as a sidecar of the Airflow container. The job server hands back artifact/staging endpoints on <code>127.0.0.1</code> (visible in your error as <code>ipv4:127.0.0.1:8098</code>), so they are only reachable when both containers share the pod's network namespace; that is also why your test with both containers in the same pod worked.</p>
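<p>A minimal sketch of such a deployment, reusing the job server image and arguments from the question; the Airflow image and resource names are placeholders, since the original Airflow manifest isn't shown:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: airflow-with-beam-job-server    # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: airflow-with-beam-job-server
  template:
    metadata:
      labels:
        app: airflow-with-beam-job-server
    spec:
      containers:
        - name: airflow                 # your existing Airflow container
          image: apache/airflow:2.1.0   # placeholder image/tag
        - name: beam-spark-job-server   # sidecar from the question
          image: apache/beam_spark_job_server:2.27.0
          args: [&quot;--spark-master-url=spark://spark-master:7077&quot;]
          ports:
            - containerPort: 8099   # job endpoint
            - containerPort: 8098   # artifact endpoint
            - containerPort: 8097   # expansion service
</code></pre> <p>With this layout the pipeline can be submitted with <code>--job_endpoint=localhost:8099</code>, and the artifact endpoint on <code>127.0.0.1:8098</code> resolves inside the same pod.</p>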
Dipen
<p>I am using this <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/multibases/README.md" rel="noreferrer">example</a>:</p> <pre><code>├── base │   ├── kustomization.yaml │   └── pod.yaml ├── dev │   └── kustomization.yaml ├── kustomization.yaml ├── production │   └── kustomization.yaml └── staging └── kustomization.yaml </code></pre> <p>and in <code>kustomization.yaml</code> file in root:</p> <pre><code>resources: - ./dev - ./staging - ./production </code></pre> <p>I also have the image transformer code in <code>dev, staging, production</code> kustomization.yaml:</p> <pre><code>images: - name: my-app newName: gcr.io/my-platform/my-app </code></pre> <p>To build a single deployment manifest, I use:</p> <pre><code>(cd dev &amp;&amp; kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 &amp;&amp; kustomize build .) </code></pre> <p>which simply works!</p> <p>to build deployment manifest for all overlays (dev, staging, production), I use:</p> <pre><code>(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 &amp;&amp; kustomize build .) </code></pre> <p>which uses the <code>kustomization.yaml</code> in root which contains all resources(dev, staging, production).</p> <p>It does work and the final build is printed on console but without the image tag.</p> <p>It seems like the <code>kusotmize edit set image</code> only updates the <code>kustomizaion.yaml</code> of the current dir.</p> <p>Is there anything which can be done to handle this scenario in an easy and efficient way so the final output contains image tag as well for all deployments?</p> <p><a href="https://github.com/D-GC/kustomize-multibase" rel="noreferrer">To test please use this repo</a></p>
Arian
<p>It took some time to realise what happens here. I'll explain step by step what happens and how it should work.</p> <h2>What happens</h2> <p>Firstly I re-created the same structure:</p> <pre><code>$ tree . ├── base │   ├── kustomization.yaml │   └── pod.yaml ├── dev │   └── kustomization.yaml ├── kustomization.yaml └── staging └── kustomization.yaml </code></pre> <p>When you run this command for single deployment:</p> <pre><code>(cd dev &amp;&amp; kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 &amp;&amp; kustomize build .) </code></pre> <p>you change working directory to <code>dev</code>, manually override image from <code>gcr.io/my-platform/my-app</code> and adding tag <code>0.0.2</code> and then render the deployment.</p> <p>The thing is previously added <code>transformer code</code> gets overridden by the command above. You can remove <code>transformer code</code>, run the command above and get the same result. And after running the command you will find out that your <code>dev/kustomization.yaml</code> will look like:</p> <pre><code>resources: - ./../base namePrefix: dev- apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization images: - name: my-app newName: gcr.io/my-platform/my-app newTag: 0.0.2 </code></pre> <p><strong>Then</strong> what happens when you run this command from main directory:</p> <pre><code>(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 &amp;&amp; kustomize build .) </code></pre> <p><code>kustomize</code> firstly goes to overlays and do <code>transformation code</code> which is located in <code>overlays/kustomization.yaml</code>. When this part is finished, image name is <strong>not</strong> <code>my-app</code>, but <code>gcr.io/my-platform/my-app</code>.</p> <p>At this point <code>kustomize edit</code> command tries to find image with name <code>my-app</code> and can't do so and therefore does NOT apply the <code>tag</code>.</p> <h2>What to do</h2> <p>You need to use transformed image name if you run <code>kustomize edit</code> in main working directory:</p> <pre><code>$ kustomize edit set image gcr.io/my-platform/my-app=*:0.0.4 &amp;&amp; kustomize build . apiVersion: v1 kind: Pod metadata: labels: app: my-app name: dev-myapp-pod spec: containers: - image: gcr.io/my-platform/my-app:0.0.4 name: my-app --- apiVersion: v1 kind: Pod metadata: labels: app: my-app name: stag-myapp-pod spec: containers: - image: gcr.io/my-platform/my-app:0.0.4 name: my-app </code></pre>
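<p>If the goal is simply to bump the tag everywhere in one step, an alternative sketch is to run the edit inside each overlay (so the un-transformed name <code>my-app</code> still matches) and then build from the root; the overlay directory names are taken from the question:</p> <pre><code>for d in dev staging production; do
  (cd &quot;$d&quot; &amp;&amp; kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.4)
done
kustomize build .
</code></pre>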
moonkotte
<p>I installed nginx-ingress using helm. After that I notice the default <code>controller.kind</code> is <code>deployment</code> rather than <code>daemonset</code>, as I found in the <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">official doc</a>.</p> <p>So how can I update the <code>controller.kind</code> from <code>deployment</code> to <code>daemonset</code> without reinstalling from the very beginning?</p> <pre><code>helm install nginx-ingress nginx-stable/nginx-ingress --set controller.service.type=NodePort --set controller.service.httpPort.nodePort=30000 --set controller.service.httpsPort.nodePort=30443 </code></pre>
mainframer
<p>Set <code>controller.kind=daemonset</code>. Since the release already exists, apply it with <code>helm upgrade</code> (repeating the values you originally installed with) rather than another <code>helm install</code>:</p> <pre><code>helm upgrade nginx-ingress nginx-stable/nginx-ingress --set controller.service.type=NodePort --set controller.service.httpPort.nodePort=30000 --set controller.service.httpsPort.nodePort=30443 --set controller.kind=daemonset </code></pre>
quoc9x
<p>I have a RKE2 kube installation, 3 nodes, I install MariaDB from BitNami repository:</p> <pre><code>- name: mariadb repository: https://charts.bitnami.com/bitnami version: 10.3.2 </code></pre> <p>It boots up correctly in my kube installation, but I need to access it from outside the cluster, let's say with my Navicat client as example.</p> <p>This is my <strong>values.yaml</strong>:</p> <pre><code>mariadb: clusterDomain: a4b-kube.local auth: rootPassword: &quot;password&quot; replicationPassword: &quot;password&quot; architecture: replication primary: service: type: LoadBalancer loadBalancerIP: mariadb.acme.com secondary: replicaCount: 2 </code></pre> <p>Listing the services I see:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE a4b-test-mariadb-primary LoadBalancer 10.43.171.45 &lt;pending&gt; 3306:31379/TCP 48m </code></pre> <p>And the external IP never gets updated, I also try specifing an IP instead of dns, in my case was 192.168.113.120 but I got same result. What am I missing?</p>
NiBE
<p>You might consider using NodePort</p> <pre><code>mariadb: clusterDomain: a4b-kube.local auth: rootPassword: &quot;password&quot; replicationPassword: &quot;password&quot; architecture: replication primary: service: type: NodePort nodePort: 32036 secondary: replicaCount: 2 </code></pre> <p><code>nodePort: 32036</code> you can choose in range 30000 - 32767 (default)<br /> Then, you can access via <code>nodeIP:nodePort</code></p>
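<p>Once the NodePort is in place, any node's address can be used from outside the cluster. A quick sketch, assuming a MySQL-compatible client (Navicat or the <code>mysql</code> CLI) on your workstation and that port 32036 is reachable through any firewall in front of the nodes:</p> <pre><code># pick any node IP from the cluster
kubectl get nodes -o wide

# then connect with a client, e.g. Navicat or the mysql CLI
mysql -h &lt;node-ip&gt; -P 32036 -u root -p
</code></pre>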
quoc9x
<p>kubernetes cannot pull a public image. Standard images like nginx are downloading successfully, but my pet project is not downloading. I'm using minikube for launch kubernetes-cluster</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: api-gateway-deploumnet labels: app: api-gateway spec: replicas: 3 selector: matchLabels: app: api-gateway template: metadata: labels: app: api-gateway spec: containers: - name: api-gateway image: creatorsprodhouse/api-gateway:latest imagePullPolicy: Always ports: - containerPort: 80 </code></pre> <p>when I try to create a deployment I get an error that kubernetes cannot download my public image.</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl get pods </code></pre> <p>result:</p> <pre class="lang-js prettyprint-override"><code>NAME READY STATUS RESTARTS AGE api-gateway-deploumnet-599c784984-j9mf2 0/1 ImagePullBackOff 0 13m api-gateway-deploumnet-599c784984-qzklt 0/1 ImagePullBackOff 0 13m api-gateway-deploumnet-599c784984-csxln 0/1 ImagePullBackOff 0 13m </code></pre> <pre class="lang-bash prettyprint-override"><code>$ kubectl logs api-gateway-deploumnet-599c784984-csxln </code></pre> <p>result</p> <pre><code>Error from server (BadRequest): container &quot;api-gateway&quot; in pod &quot;api-gateway-deploumnet-86f6cc5b65-xdx85&quot; is waiting to start: trying and failing to pull image </code></pre> <p>What could be the problem? The standard images are downloading but my public one is not. Any help would be appreciated.</p> <p><strong>EDIT 1</strong></p> <pre><code>$ api-gateway-deploumnet-599c784984-csxln </code></pre> <p>result:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 8m22s default-scheduler Successfully assigned default/api-gateway-deploumnet-849899786d-mq4td to minikube Warning Failed 3m8s kubelet Failed to pull image &quot;creatorsprodhouse/api-gateway:latest&quot;: rpc error: code = Unknown desc = context deadline exceeded Warning Failed 3m8s kubelet Error: ErrImagePull Normal BackOff 3m7s kubelet Back-off pulling image &quot;creatorsprodhouse/api-gateway:latest&quot; Warning Failed 3m7s kubelet Error: ImagePullBackOff Normal Pulling 2m53s (x2 over 8m21s) kubelet Pulling image &quot;creatorsprodhouse/api-gateway:latest&quot; </code></pre> <p><strong>EDIT 2</strong></p> <p>If I try to download a separate docker image, it's fine</p> <pre><code>$ docker pull creatorsprodhouse/api-gateway:latest </code></pre> <p>result:</p> <pre><code>Digest: sha256:e664a9dd9025f80a3dd60d157ce1464d4df7d0f8a00538e6a137d44f9f9f12aa Status: Downloaded newer image for creatorsprodhouse/api-gateway:latest docker.io/creatorsprodhouse/api-gateway:latest </code></pre> <p><strong>EDIT 3</strong> After advice to restart minikube</p> <pre><code>$ minikube stop $ minikube delete --purge $ minikube start --cni=calico </code></pre> <p>I started the pods.</p> <pre><code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m28s default-scheduler Successfully assigned default/api-gateway-deploumnet-849899786d-bkr28 to minikube Warning FailedCreatePodSandBox 4m27s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container &quot;7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1&quot; network for pod &quot;api-gateway-deploumnet-849899786d-bkr28&quot;: networkPlugin cni failed to set up pod &quot;api-gateway-deploumnet-849899786d-bkr28_default&quot; network: failed to 
set bridge addr: could not add IP address to &quot;cni0&quot;: permission denied, failed to clean up sandbox container &quot;7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1&quot; network for pod &quot;api-gateway-deploumnet-849899786d-bkr28&quot;: networkPlugin cni failed to teardown pod &quot;api-gateway-deploumnet-849899786d-bkr28_default&quot; network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.34 -j CNI-57e7da7379b524635074e6d0 -m comment --comment name: &quot;crio&quot; id: &quot;7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1&quot; --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-57e7da7379b524635074e6d0':No such file or directory Try `iptables -h' or 'iptables --help' for more information. </code></pre>
MiyRon
<p>I could not solve the problem in the ways I was suggested. However, it worked when <strong>I ran minikube with a different driver</strong></p> <pre><code>$ minikube start --driver=none </code></pre> <p><em><strong>--driver=none</strong></em> means that the cluster will run on your host instead of the standard <em><strong>--driver=docker</strong></em> which runs the cluster in docker.</p> <p>It is better to run minikube with <em><strong>--driver=docker</strong></em> as it is safer and easier, but it didn't work for me as I could not download my images. For me personally it is ok to use <em><strong>--driver=none</strong></em> although it is a bit dangerous.</p> <p>In general, if anyone knows what the problem is, please answer my question. In the meantime you can try to run minikube cluster on your host with the command I mentioned above.</p> <p>In any case, thank you very much for your attention!</p>
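<p>For anyone who wants to stay on the default docker driver, a couple of hedged things to try (both commands exist in recent minikube releases; whether they help depends on what is blocking the pull, often DNS or a proxy inside the minikube node):</p> <pre><code># check whether the pull also fails inside the minikube node itself
minikube ssh -- docker pull creatorsprodhouse/api-gateway:latest

# or side-load the image that already pulls fine on the host
minikube image load creatorsprodhouse/api-gateway:latest
</code></pre>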
MiyRon
<p>When running a sidecar container along with the main container in my kubernets deployment I'm getting an error with the timeout when mounting the volume.</p> <p><strong>MountVolume.SetUp failed for volume &quot;initdb&quot; : failed to sync configmap cache: timed out waiting for the condition</strong></p> <p><strong>kubectl describe pod mypod</strong></p> <pre><code>. . . client: Container ID: docker://xx Image: docker.io/golang:xx Image ID: docker-xxx Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 14 Jun 2022 15:44:22 +0200 Finished: Tue, 14 Jun 2022 15:44:22 +0200 Ready: False Restart Count: 3 Environment: env_a: &lt;set to the key 'env_a' of config map 'unleash'&gt; Optional: false token: &lt;set to the key 'token' in secret 'mysecrets'&gt; Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from xxx (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: initdb: Type: ConfigMap (a volume populated by a ConfigMap) Name: initdb Optional: false kube-api-access-rrrsv: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: xxx ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m44s default-scheduler Successfully xxx to minikube Warning FailedMount 6m42s kubelet MountVolume.SetUp failed for volume &quot;initdb&quot; : failed to sync configmap cache: timed out waiting for the condition Normal Pulled 3m23s (x5 over 6m) kubelet Container image &quot;xxx/postgres:xx&quot; already present on machine Normal Created 3m21s (x5 over 5m56s) kubelet Created container init-db Normal Started 3m14s (x5 over 5m45s) kubelet Started container init-db Warning BackOff 3m12s (x9 over 5m1s) kubelet Back-off restarting failed container Normal Pulled 94s kubelet Container image &quot;xx/unleash-server:xxx&quot; already present on machine </code></pre> <p>This is the deployment file that I'm trying to deploy:</p> <p>I did not attach all the yaml file as it's working fine except when including the sidecarcontainer.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/name: mydeployment name: mydeployment spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: mydeployment template: metadata: labels: app.kubernetes.io/name: mydeployment spec: volumes: - name: volume-db configMap: name: volume-db initContainers: - name: datbase image: &quot;myrepo/postgres:xxx&quot; env: . . . # environment variables to configure the postgres database. Not attached here # . . . volumeMounts: - mountPath: /etc/postgresql name: volume-db readOnly: true containers: - name: maincontainer env: . . . # environment variables to configure the maincontainer. Not attached here # . . . 
image: &quot;myrepo/maincontainer:xxx&quot; imagePullPolicy: IfNotPresent ports: - name: http containerPort: 8080 protocol: TCP livenessProbe: httpGet: path: localhost/maincontainer/health port: http initialDelaySeconds: 20 timeoutSeconds: 250 readinessProbe: httpGet: path: localhost/maincontainer/health port: http initialDelaySeconds: 20 timeoutSeconds: 250 successThreshold: 5 periodSeconds: 10 - name: sidecarcontainer env: . . . #environment variables to configure the sidecar container. Not attached here# . . . image: &quot;myrepo/sidecarcontainer:xxx&quot; imagePullPolicy: IfNotPresent </code></pre> <p>Any idea why is throwing the timeout error when mounting the volume?</p> <p>Is it something related to the timeout when including the sidecar container and needs to be incresed?</p> <p>If it's something related on how I'm including the sidecar container a clear example will be appreciated.</p>
X T
<p>I've already fixed the issue.</p> <p>Since the sidecar container is running a Go image, it's necessary to run some command so that the container won't finish its execution.</p> <p>Adding these lines at the end of the sidecar container definition in the deployment yaml file fixed the issue:</p> <pre><code>- name: sidecarcontainer env: . . . #environment variables to configure the sidecar container. Not attached here# . . . image: &quot;myrepo/sidecarcontainer:xxx&quot; imagePullPolicy: IfNotPresent command: [&quot;bash&quot;] args: [&quot;-c&quot;, &quot;while true; do echo hello; sleep 300; done&quot;] </code></pre>
X T
<p>I am getting this error in clusterissuer (cert-manager version 1.7.1):</p> <p>&quot;Error getting keypair for CA issuer: error decoding certificate PEM block&quot;</p> <p>I have the ca.crt, tls.crt and tls.key stored in a Key Vault in Azure.</p> <p><strong>kubectl describe clusterissuer ca-issuer</strong></p> <pre><code> Ca: Secret Name: cert-manager-secret Status: Conditions: Last Transition Time: 2022-02-25T11:40:49Z Message: Error getting keypair for CA issuer: error decoding certificate PEM block Observed Generation: 1 Reason: ErrGetKeyPair Status: False Type: Ready Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ErrGetKeyPair 3m1s (x17 over 58m) cert-manager Error getting keypair for CA issuer: error decoding certificate PEM block Warning ErrInitIssuer 3m1s (x17 over 58m) cert-manager Error initializing issuer: error decoding certificate PEM block </code></pre> <p><strong>kubectl get clusterissuer</strong></p> <pre><code>NAME READY AGE ca-issuer False 69m </code></pre> <ul> <li>This is the clusterissuer yaml file:</li> </ul> <p><strong>ca-issuer.yaml</strong></p> <pre><code>apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: ca-issuer namespace: cert-manager spec: ca: secretName: cert-manager-secret </code></pre> <p>This is the KeyVault yaml file to retrieve the ca.crt, tls.crt and tls.key</p> <p><strong>keyvauls.yaml</strong></p> <pre><code>apiVersion: spv.no/v2beta1 kind: AzureKeyVaultSecret metadata: name: secret-akscacrt namespace: cert-manager spec: vault: name: kv-xx # name of key vault object: name: akscacrt # name of the akv object type: secret # akv object type output: secret: name: cert-manager-secret # kubernetes secret name dataKey: ca.crt # key to store object value in kubernetes secret --- apiVersion: spv.no/v2beta1 kind: AzureKeyVaultSecret metadata: name: secret-akstlscrt namespace: cert-manager spec: vault: name: kv-xx # name of key vault object: name: akstlscrt # name of the akv object type: secret # akv object type output: secret: name: cert-manager-secret # kubernetes secret name dataKey: tls.crt # key to store object value in kubernetes secret --- apiVersion: spv.no/v2beta1 kind: AzureKeyVaultSecret metadata: name: secret-akstlskey namespace: cert-manager spec: vault: name: kv-xx # name of key vault object: name: akstlskey # name of the akv object type: secret # akv object type output: secret: name: cert-manager-secret # kubernetes secret name dataKey: tls.key # key to store object value in kubernetes secret --- </code></pre> <p>and these are the certificates used:</p> <pre><code>--- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: argocd-xx namespace: argocd spec: secretName: argocd-xx issuerRef: name: ca-issuer kind: ClusterIssuer commonName: &quot;argocd.xx&quot; dnsNames: - &quot;argocd.xx&quot; privateKey: size: 4096 --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: sonarqube-xx namespace: sonarqube spec: secretName: &quot;sonarqube-xx&quot; issuerRef: name: ca-issuer kind: ClusterIssuer commonName: &quot;sonarqube.xx&quot; dnsNames: - &quot;sonarqube.xx&quot; privateKey: size: 4096 </code></pre> <p>I can see that I can retrive the secrets for the certificate from key vault:</p> <p><strong>kubectl get secret -n cert-manager cert-manager-secret -o yaml</strong></p> <pre><code>apiVersion: v1 data: ca.crt: XXX tls.crt: XXX tls.key: XXX </code></pre> <p>Also, another strange thing is that I am getting other secrets in sonarqube/argocd namespace which I deployed previously but are not 
any more in my deployment file. I cannot delete them, when I try to delete them, they are re-created automatically. Looks like they are stored in some kind of cache. Also I tried to delete the namespace akv2k8s/cert-manager and delete the cert-manager/akv2k8s controllers and re-install them again but same issue after re-installing and applying the deployment...</p> <pre><code>kubectl get secret -n sonarqube NAME TYPE DATA AGE cert-manager-secret Opaque 3 155m default-token-c8b86 kubernetes.io/service-account-token 3 2d1h sonarqube-xx-7v7dh Opaque 1 107m sql-db-secret Opaque 2 170m kubectl get secret -n argocd NAME TYPE DATA AGE argocd-xx-7b5kb Opaque 1 107m cert-manager-secret-argo Opaque 3 157m default-token-pjb4z kubernetes.io/service-account-token 3 3d15h </code></pre> <p><strong>kubectl describe certificate sonarqube-xxx -n sonarqube</strong></p> <pre><code>Status: Conditions: Last Transition Time: 2022-02-25T11:04:08Z Message: Issuing certificate as Secret does not exist Observed Generation: 1 Reason: DoesNotExist Status: False Type: Ready Last Transition Time: 2022-02-25T11:04:08Z Message: Issuing certificate as Secret does not exist Observed Generation: 1 Reason: DoesNotExist Status: True Type: Issuing Next Private Key Secret Name: sonarqube-xxx-7v7dh Events: &lt;none&gt; </code></pre> <p>Any idea?</p> <p>Thanks.</p>
X T
<p>I figured it out by uploading the certificate info <strong>ca.crt</strong>, <strong>tls.crt</strong> and <strong>tls.key</strong> <strong>in plain text, without BASE64 encoding</strong>, to the Key Vault secrets in Azure.</p> <p>When AKV2K8S retrieves the secrets from the Key Vault and stores them in Kubernetes, they are automatically encoded in BASE64.</p> <p>Regards,</p>
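<p>If someone hits the same error, a quick generic check is to decode what cert-manager actually sees in the secret and look for the PEM header:</p> <pre><code>kubectl get secret cert-manager-secret -n cert-manager \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | head -n 1
# expected output:
# -----BEGIN CERTIFICATE-----
</code></pre> <p>If that prints another base64-looking blob instead of the PEM header, the value was double-encoded and the issuer will keep failing with the &quot;error decoding certificate PEM block&quot; message.</p>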
X T
<p>My setup (running locally in two minikubes) is I have two k8s clusters:</p> <ol> <li>frontend cluster is running a golang api-server,</li> <li>backend cluster is running an ha bitnami postgres cluster (used bitnami postgresql-ha chart for this)</li> </ol> <p>Although if i set the pgpool service to use nodeport and i get the ip + port for the node that the pgpool pod is running on i can hardwire this (host + port) to my database connector in the api-server (in the other cluster) this works. However what i haven't been able to figure out is how to generically connect to the other cluster (e.g. to pgpool) without using the ip address?</p> <p>I also tried using Skupper, which also has an example of connecting to a backend cluster with postgres running on it, but their example doesn't use bitnami ha postgres helm chart, just a simple postgres install, so it is not at all the same.</p> <p>Any ideas?</p>
Jim Smith
<p>For those times when you have to, or purposely want to, connect pods/deployments across multiple clusters, Nethopper (<a href="https://www.nethopper.io/" rel="nofollow noreferrer">https://www.nethopper.io/</a>) is a simple and secure solution. The postgresql-ha scenario above is covered under their free tier. There is a two cluster minikube 'how to' tutorial at <a href="https://www.nethopper.io/connect2clusters" rel="nofollow noreferrer">https://www.nethopper.io/connect2clusters</a> which is very similar to your frontend/backend use case. Nethopper is based on skupper.io, but the configuration is much easier and user friendly, and is centralized so it scales to many clusters if you need to.</p> <p>To solve your specific use case, you would:</p> <ol> <li>First install your api server in the frontend and your bitnami postgresql-ha chart in the backend, as you normally would.</li> <li>Go to <a href="https://mynethopper.com/" rel="nofollow noreferrer">https://mynethopper.com/</a> and <ul> <li>Register</li> <li>Clouds -&gt; define both clusters (clouds), frontend and backend</li> <li>Application Network -&gt; create an application network</li> <li>Application Network -&gt; attach both clusters to the network</li> <li>Application Network -&gt; install nethopper-agent in each cluster with copy paste instructions.</li> <li>Objects -&gt; import and expose pgpool (call the service 'pgpool') in your backend.</li> <li>Objects -&gt; distribute the service 'pgpool' to frontend, using a distribution rule.</li> </ul> </li> </ol> <p>Now, you should see 'pgpool' service in the frontend cluster</p> <blockquote> <p>kubectl get service</p> </blockquote> <p>When the API server pods in the frontend request service from pgpool, they will connect to pgpool in the backend, magically. It's like the 'pgpool' pod is now running in the frontend.</p> <p>The nethopper part should only take 5-10 minutes, and you do NOT need IP addresses, TLS certs, K8s ingresses or loadbalancers, a VPN, or an istio service mesh or sidecars.</p>
cmunford
<p>I have a Kubernetes cluster (v. 1.22) and inside it I have Nginx ingress controller deployed. I have found I could <a href="https://kubernetes.github.io/ingress-nginx/how-it-works/#when-a-reload-is-required" rel="noreferrer">reload my ingress</a> in several situations: The next list describes the scenarios when a reload is required:</p> <blockquote> <ul> <li>New Ingress Resource Created.</li> <li>TLS section is added to existing Ingress.</li> <li>Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload.</li> <li>A path is added/removed from an Ingress.</li> <li>An Ingress, Service, Secret is removed.</li> <li>Some missing referenced object from the Ingress is available, like a Service or Secret.</li> <li>A Secret is updated.</li> </ul> </blockquote> <p>My ingress now using only HTTP traffic and I want to add TLS section to existing Ingress.</p> <p><strong>So, my question is: What should I exactly do to reload my ingress?</strong></p> <p>I cannot find any information in docs or other places. Any suggestion is appreciated!</p>
Halt_07
<blockquote> <p><strong>What should I exactly do to reload my ingress?</strong></p> </blockquote> <p>You just need to update the ingress, in your case you just need to add the TLS section is to existing Ingress.</p> <p>Then (automatically) the ingress controller should find the differences (as <a href="https://stackoverflow.com/users/11344502/anemyte">anemyte</a> says in its answer) and update the ingress. From now on, you will be able to use TLS.</p> <p>In general, this should all happen automatically. In theory, this could also be done manually, although it is not recommended. It is described <a href="https://github.com/kubernetes/ingress-nginx/issues/2612" rel="noreferrer">in this topic</a>.</p> <hr /> <p><strong>EDIT:</strong></p> <p>I have reproduced this situation. First I have created simple ingress with following <code>ingress.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ing-1 spec: ingressClassName: nginx rules: - host: www.example.com http: paths: - backend: service: name: app-1 port: number: 80 path: / pathType: Prefix </code></pre> <p>Then I have run <code>kubectl get ingress</code> and here is the output:</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE ing-1 nginx www.example.com 35.X.X.X 80 3m </code></pre> <p>In this step I had working ingress without TLS (only working port 80). Then I have created <code>tls.yaml</code> for TLS (I have used self signed certs, you need to use your certs and domain):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Secret metadata: name: tls data: tls.crt: | &lt;my cert&gt; tls.key: | &lt;my key&gt; type: kubernetes.io/tls </code></pre> <p>I have run in by <code>kubectl apply -f tls.yaml</code> and then I had changed <code>ingress.yaml</code> as below:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ing-1 spec: ingressClassName: nginx rules: - host: www.example.com http: paths: - backend: service: name: app-1 port: number: 80 path: / pathType: Prefix # This section is only required if TLS is to be enabled for the Ingress tls: - hosts: - www.example.com secretName: tls </code></pre> <p>I have added the TLS section. Then I have run <code>kubectl apply -f ingress.yaml</code> and after few second I could see this output when running <code>kubectl get ingress</code>:</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE ing-1 nginx www.example.com 35.239.7.126 80, 443 18m </code></pre> <p>TLS is working. In the logs I can see this message:</p> <pre><code>Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;default&quot;, Name:&quot;ing-1&quot;, UID:&quot;84966fae-e135-47bb-8110-bf372de912c8&quot;, APIVersion:&quot;networking.k8s.io/v1&quot;, ResourceVersion:&quot;11306&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync </code></pre> <p>Ingress reloaded automatically :)</p>
Mikołaj Głodziak
<p>Currently, I choose what image to push to the registry, and I use a &quot;complex&quot; method to set the image tag in the manifest before pushing the files to the Git repo.</p> <p>This is my code:</p> <pre><code>stage(&quot;Push to Repo &quot;){ steps { script { def filename = 'Path/to/file/deploy.yaml' def data = readYaml file: filename data.spec[0].template.spec.containers[0].image = &quot;XXXXXXXXXXXXXXXXXXXXX:${PROJECT_VERSION}&quot; sh &quot;rm $filename&quot; writeYaml file: filename, data: data sh &quot;sed -ie 's/- apiVersion/ apiVersion/g' Path/to/file/deploy.yaml &quot; sh &quot;sed -i '/^ - c.*/a ---' Path/to/file/deploy.yaml &quot; sh ''' cd Path/to/file/ git add . git commit -m &quot;[0000] [update] update manifest to version: ${PROJECT_VERSION} &quot; git push -u origin HEAD:branche_name ''' }}} </code></pre> <p>I'm looking for another way to write the image tag directly into the manifest.</p> <p>Is there a Jenkins plugin to do that?</p>
Quentin Merlin
<p>I use the YQ tool to do this; it's an image used to edit yaml files.<br /> Example (just <code>docker run</code>):</p> <pre><code>docker run --rm --user=&quot;root&quot; -e TAG=dev-123456 -v &quot;${PWD}&quot;:/workspace -w /workspace mikefarah/yq eval '.spec.spec.containers.image.tag = strenv(TAG)' -i values.yaml </code></pre> <p>This replaces the current tag in the values file with dev-123456.<br /> I wrote it on multiple lines to make it easier to read; you can write it on one line if you want.<br /> Link for details:<br /> <a href="https://hub.docker.com/r/mikefarah/yq" rel="nofollow noreferrer">https://hub.docker.com/r/mikefarah/yq</a></p>
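<p>As a sketch of how this could look for the <code>deploy.yaml</code> from the question — the yq path below mirrors the <code>readYaml</code> expression and is an assumption you may need to adjust to your manifest structure:</p> <pre><code>docker run --rm -e PROJECT_VERSION=&quot;${PROJECT_VERSION}&quot; \
  -v &quot;${PWD}&quot;:/workspace -w /workspace mikefarah/yq \
  eval '.spec[0].template.spec.containers[0].image = &quot;XXXXXXXXXXXXXXXXXXXXX:&quot; + strenv(PROJECT_VERSION)' \
  -i Path/to/file/deploy.yaml
</code></pre> <p>Run from a Jenkins <code>sh</code> step, this replaces the whole readYaml/writeYaml/sed dance with a single command; no dedicated plugin is needed.</p>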
quoc9x
<p>I am running</p> <pre><code>kubectl get nodes -o yaml | grep ExternalIP -C 1 </code></pre> <p>But am not finding any ExternalIP. There are various comments showing up about problems with non-cloud setups.</p> <p>I am following this doc <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/</a></p> <p>with microk8s on a desktop.</p>
mathtick
<p>If you set up a k8s cluster on a cloud provider, Kubernetes will auto-detect the ExternalIP for you; it will be a load balancer IP address. But if you set it up on premise or on your desktop, there is no such integration, so you can provide external IP addresses by deploying your own load balancer, such as MetalLB. You can get it <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">here</a>.</p>
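<p>As an illustration, with a recent MetalLB release (0.13+) the address pool is configured with custom resources roughly like this — the address range is only a placeholder for free IPs in your network:</p> <pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # placeholder, use free IPs from your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
</code></pre> <p>Note that this assigns external IPs to <code>LoadBalancer</code> Services; the ExternalIP field on the nodes themselves is filled in by a cloud provider integration, so on a bare-metal or desktop setup it usually stays empty and that is expected.</p>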
pmbibe
<p>I am trying to apply kubernetes code that was given for me. I am getting an error:</p> <pre><code>Error from server (BadRequest): error when creating &quot;infra/ecr-creds/rendered.yml&quot;: Secret in version &quot;v1&quot; cannot be handled as a Secret: illegal base64 data at input byte 0 </code></pre> <p>From what I understand, the error is coming from this:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: aws-ecr-creds-secret namespace: whatever labels: app.kubernetes.io/name: aws-multi-ecr-credentials helm.sh/chart: aws-multi-ecr-credentials-1.4.3 app.kubernetes.io/instance: aws-ecr-creds-novisign app.kubernetes.io/version: &quot;1.4.3&quot; app.kubernetes.io/managed-by: Helm type: Opaque data: AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} </code></pre> <p>I tried to set the <code>AWS_ACCESS_KEY_ID</code> variable using:</p> <pre><code>export AWS_ACCESS_KEY_ID=$(echo &quot;...code...&quot; | base64) </code></pre> <p>but it doesn't work. What is the proper way to do it?</p>
justadev
<p>You should use <code>stringData</code>:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: aws-ecr-creds-secret namespace: whatever labels: app.kubernetes.io/name: aws-multi-ecr-credentials helm.sh/chart: aws-multi-ecr-credentials-1.4.3 app.kubernetes.io/instance: aws-ecr-creds-novisign app.kubernetes.io/version: &quot;1.4.3&quot; app.kubernetes.io/managed-by: Helm type: Opaque stringData: AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} </code></pre> <p>Link references:<br /> <a href="https://hub.docker.com/repository/docker/cuongquocvn/aws-cli-kubectl" rel="noreferrer">https://hub.docker.com/repository/docker/cuongquocvn/aws-cli-kubectl</a></p>
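<p>If you prefer to keep the <code>data:</code> field instead, make sure the value really is base64 of the key itself — a common pitfall is the trailing newline added by <code>echo</code>, or the <code>${...}</code> placeholders never being substituted before <code>kubectl apply</code>. A sketch of that route:</p> <pre><code># encode without a trailing newline
export AWS_ACCESS_KEY_ID=$(echo -n &quot;AKIA...&quot; | base64 -w0)
export AWS_SECRET_ACCESS_KEY=$(echo -n &quot;...&quot; | base64 -w0)

# substitute the variables into the manifest before applying it
envsubst &lt; infra/ecr-creds/rendered.yml | kubectl apply -f -
</code></pre> <p>With <code>stringData</code> as above, no encoding is needed at all, which is why it is the simpler option here.</p>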
quoc9x
<p>I see istio is adding <code>x-b3-traceid</code>, <code>x-b3-spanid</code> and other headers to the incoming request when tracing is enabled. But none of them are returned to the caller.</p> <p>I am able to capture the <code>x-b3-traceid</code> in the log and can find it out in Tempo/Grafana. I can see the <code>traceid</code> at the istio envoy proxy (sidecar), I am able to access the header using <code>EnvoyFilter</code>.</p> <p>Can someone let me know where it is filtered?</p>
user2095730
<p><strong>TL;DR</strong> these are the headers so that <code>jaeger</code> or <code>zipkin</code> can track individual requests. This application is responsible for their proper propagation. Additionally, they are <strong>request headers</strong> and <strong>not response header</strong>, so everything works fine.</p> <hr /> <p>Explanation:</p> <p>It looks okay. First, let's start with <a href="https://docs.oracle.com/cd/F17999_01/docs.10/SCP/GUID-3DD925E4-A4C9-49F1-A6D8-323DB637056F.htm" rel="nofollow noreferrer">what these requests are</a>:</p> <pre><code> Field Name Request/ Response Type Description x-request-id request The x-request-idheader is used by Envoy to uniquely identify a request as well as perform stable access logging and tracing x-b3-traceid request The x-b3-traceidHTTP header is used by the Zipkin tracer in Envoy. The TraceId is 64-bit in length and indicates the overall ID of the trace. Every span in a trace shares this ID x-b3-spanid request The x-b3-spanidHTTP header is used by the Zipkin tracer in Envoy. The SpanId is 64-bit in length and indicates the position of the current operation in the trace tree x-b3-sampled request The x-b3-sampledHTTP header is used by the Zipkin tracer in Envoy. When the Sampled flag is either not specified or set to 1, the span will be reported to the tracing system </code></pre> <p>On the github you can find the question: <a href="https://github.com/Nike-Inc/riposte/issues/54" rel="nofollow noreferrer">What is the usage of X-B3-TraceId, traceId, parentSpanId and spanId ?</a>:</p> <blockquote> <p>http headers contain only a small amount of data: IDs and sampling flags these go synchronously with your application requests, and allow the other side to continue the trace. <a href="https://github.com/openzipkin/b3-propagation#overall-process" rel="nofollow noreferrer">https://github.com/openzipkin/b3-propagation#overall-process</a> If zipkin is enabled, details including these IDs and other data like duration send asynchronously to zipkin after a request completes. you can see a diagram about that here under &quot;Example Flow&quot; <a href="http://zipkin.io/pages/architecture.html" rel="nofollow noreferrer">http://zipkin.io/pages/architecture.html</a></p> </blockquote> <blockquote> <blockquote> <p>X-B3-TraceId is same or different from every call of the same client? different per overall request. each top-level request into your system has</p> </blockquote> <p>a different trace id. Each step in that request has a different span id</p> </blockquote> <blockquote> <blockquote> <p>X-B3-SpanId is not send back to the caller, then how could i set the parent(which show be the X-B3-SpanId of the the present call) of the next call? Here is a response shows the absent of X-B3-SpanId in the header: I don't quite understand. The parent is only used when creating a span.</p> </blockquote> <p><strong>The span is created before a request is sent. So, there's no relevance to response headers in span creation.</strong></p> </blockquote> <p>In <a href="https://istio.io/latest/docs/tasks/observability/distributed-tracing/overview/" rel="nofollow noreferrer">this doc</a> you can find information about headers from istio site:</p> <blockquote> <p>Distributed tracing enables users to track a request through mesh that is distributed across multiple services. 
This allows a deeper understanding about request latency, serialization and parallelism via visualization.</p> <p>Istio leverages Envoy’s distributed tracing feature to provide tracing integration out of the box. Specifically, Istio provides options to install various tracing backend and configure proxies to send trace spans to them automatically. See Zipkin, Jaeger and Lightstep task docs about how Istio works with those tracing systems.</p> </blockquote> <p>If you want to full understand how it works you should read <a href="https://www.envoyproxy.io/docs/envoy/v1.12.0/intro/arch_overview/observability/tracing" rel="nofollow noreferrer">this envoyproxy documentation</a>:</p> <blockquote> <p>Distributed tracing allows developers to obtain visualizations of call flows in large service oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency. Envoy supports three features related to system wide tracing:</p> <ul> <li><p><strong>Request ID generation</strong>: Envoy will generate UUIDs when needed and populate the <a href="https://www.envoyproxy.io/docs/envoy/v1.12.0/configuration/http/http_conn_man/headers#config-http-conn-man-headers-x-request-id" rel="nofollow noreferrer">x-request-id</a> HTTP header. Applications can forward the x-request-id header for unified logging as well as tracing.</p> </li> <li><p><strong>Client trace ID joining</strong>: The <a href="https://www.envoyproxy.io/docs/envoy/v1.12.0/configuration/http/http_conn_man/headers#config-http-conn-man-headers-x-client-trace-id" rel="nofollow noreferrer">x-client-trace-id</a> header can be used to join untrusted request IDs to the trusted internal <a href="https://www.envoyproxy.io/docs/envoy/v1.12.0/configuration/http/http_conn_man/headers#config-http-conn-man-headers-x-request-id" rel="nofollow noreferrer">x-request-id</a>.</p> </li> <li><p><strong>External trace service integration</strong>: Envoy supports pluggable external trace visualization providers, that are divided into two subgroups:</p> </li> <li><p>External tracers which are part of the Envoy code base, like <a href="https://lightstep.com/" rel="nofollow noreferrer">LightStep</a>, <a href="https://zipkin.io/" rel="nofollow noreferrer">Zipkin</a> or any Zipkin compatible backends (e.g. <a href="https://github.com/jaegertracing/" rel="nofollow noreferrer">Jaeger</a>), and <a href="https://datadoghq.com/" rel="nofollow noreferrer">Datadog</a>.</p> </li> <li><p>External tracers which come as a third party plugin, like <a href="https://www.instana.com/blog/monitoring-envoy-proxy-microservices/" rel="nofollow noreferrer">Instana</a>.</p> </li> </ul> </blockquote> <p>Answering your question:</p> <blockquote> <p>Can someone let me know where it is filtered?</p> </blockquote> <p>They do this by default. It is the application (zipkin, jaeger) that is responsible for acting on these headers. Additionally, they are <strong>request headers</strong> and <strong>not response header</strong>, so everything works fine.</p>
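<p>If you do want the caller to receive the trace id, one option that does not depend on the tracing backend is to copy the request header onto the response at the sidecar. Below is a minimal, untested sketch using an <code>EnvoyFilter</code> with the Lua filter — the resource name, namespace and the <code>x-trace-id</code> response header are my own choices, and exact field names can differ between Istio/Envoy versions:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: return-trace-id          # hypothetical name
  namespace: istio-system        # root namespace = applies mesh-wide
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          &quot;@type&quot;: type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inline_code: |
            function envoy_on_request(request_handle)
              -- stash the trace id so the response path can see it
              local trace_id = request_handle:headers():get(&quot;x-b3-traceid&quot;)
              if trace_id then
                request_handle:streamInfo():dynamicMetadata():set(&quot;envoy.lua&quot;, &quot;trace_id&quot;, trace_id)
              end
            end
            function envoy_on_response(response_handle)
              local meta = response_handle:streamInfo():dynamicMetadata():get(&quot;envoy.lua&quot;)
              if meta and meta[&quot;trace_id&quot;] then
                response_handle:headers():add(&quot;x-trace-id&quot;, meta[&quot;trace_id&quot;])
              end
            end
</code></pre> <p>The simpler alternative is to do it in the application itself: read <code>x-b3-traceid</code> from the incoming request and set it on the response before returning, which is exactly what the propagation rules above leave to the application.</p>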
Mikołaj Głodziak
<p>The scenario: I'm having Apache Pulsar v2.6.0 deployed in Rancher Kubernetes, together with jetstack/cert-manager:</p> <pre><code>helm install cert-manager jetstack/cert-manager --namespace cert-manager --set installCRDs=true helm install --values ./values.yaml pulsar apache/pulsar </code></pre> <p>I also configured TLS for all Pulsar components as follows (values.yaml):</p> <pre><code> tls: enabled: true # common settings for generating certs common: keySize: 2048 # settings for generating certs for proxy proxy: enabled: true # settings for generating certs for broker broker: enabled: true # settings for generating certs for bookies bookie: enabled: true # settings for generating certs for zookeeper zookeeper: enabled: true </code></pre> <p>However, cert-manager generates the secrets names with a hash suffix, so when creating e.g. a Zookeeper pod, Kubernetes complains that it cannot find the tls secret and fails with the event 'MountVolume.SetUp failed for volume &quot;zookeeper-certs&quot; : secret &quot;pulsar-tls-zookeeper&quot; not found'.</p> <p>Any idea how to handle this scenario?</p>
user14321182
<p>I found the answer here: <a href="https://github.com/jetstack/cert-manager/issues/3283" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager/issues/3283</a></p> <p>The secrets that were generated (including the hash suffix) are used as next private keys for key rotation. The correct secrets containing both the current key and the certificate (which the others do not contain) would be generated next. However, due to a an error during ca cert generation for the issuer, this does not happen.</p>
user14321182
<p>Does anyone know how to use SSL on Spring Boot application to connect with ElasticSearch which is deployed at Openshift in the form of https? I have a config.java in my Spring Boot application like the following:</p> <pre><code>@Configuration @EnableElasticsearchRepositories(basePackages = &quot;com.siolbca.repository&quot;) @ComponentScan(basePackages = &quot;com.siolbca.services&quot;) public class Config { @Bean public RestHighLevelClient client() { ClientConfiguration clientConfiguration = ClientConfiguration.builder() .connectedTo(&quot;elasticsearch-siol-es-http.siolbca-dev.svc.cluster.local&quot;) .usingSsl() .withBasicAuth(&quot;elastic&quot;,&quot;G0D1g6TurJ79pcxr1065pU0U&quot;) .build(); return RestClients.create(clientConfiguration).rest(); } @Bean public ElasticsearchOperations elasticsearchTemplate() { return new ElasticsearchRestTemplate(client()); } } </code></pre> <p>However, when I run it with Postman to run elasticsearch, an error appears like this:</p> <pre><code>javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target </code></pre> <p>I've seen some tutorials on the internet that say it's a certificate issue but I can't get a clue how to implement it in my code because I'm a beginner to Java &amp; Spring Boot. <a href="https://stackoverflow.com/questions/47334476/using-elasticsearch-java-rest-api-with-self-signed-certificates">using-elasticsearch-java-rest-api-with-self-signed-certificates</a> <a href="https://stackoverflow.com/questions/56598798/how-to-connect-spring-boot-2-1-with-elasticsearch-6-6-with-cluster-node-https">how-to-connect-spring-boot-2-1-with-elasticsearch-6-6-with-cluster-node-https</a></p> <p>And here’s my configuration for elasticsearch.yml:</p> <pre><code>cluster: name: elasticsearch-siol routing: allocation: awareness: attributes: k8s_node_name discovery: seed_providers: file http: publish_host: ${POD_NAME}.${HEADLESS_SERVICE_NAME}.${NAMESPACE}.svc network: host: &quot;0&quot; publish_host: ${POD_IP} node: attr: attr_name: attr_value k8s_node_name: ${NODE_NAME} name: ${POD_NAME} roles: - master - data store: allow_mmap: false path: data: /usr/share/elasticsearch/data logs: /usr/share/elasticsearch/logs xpack: license: upload: types: - trial - enterprise security: authc: realms: file: file1: order: -100 native: native1: order: -99 reserved_realm: enabled: &quot;false&quot; enabled: &quot;true&quot; http: ssl: certificate: /usr/share/elasticsearch/config/http-certs/tls.crt certificate_authorities: /usr/share/elasticsearch/config/http-certs/ca.crt enabled: true key: /usr/share/elasticsearch/config/http-certs/tls.key transport: ssl: certificate: /usr/share/elasticsearch/config/node-transport-cert/transport.tls.crt certificate_authorities: - /usr/share/elasticsearch/config/transport-certs/ca.crt - /usr/share/elasticsearch/config/transport-remote-certs/ca.crt enabled: &quot;true&quot; key: /usr/share/elasticsearch/config/node-transport-cert/transport.tls.key verification_mode: certificate </code></pre> <p>Does anyone know how to use the provided certificate in my Spring Boot application? Thank you.</p>
Achmad Fathur Rizki
<p>I solved my problem by ignoring SSL certificate verification while connecting to elasticsearch from my Backend (Spring Boot). I followed some instruction from website below:</p> <p><a href="https://stackoverflow.com/questions/62270799/ignore-ssl-certificate-verfication-while-connecting-to-elasticsearch-from-spring">Ignore SSL Certificate Verification</a></p> <p>I also modified the code by adding basic authentication as follows:</p> <pre><code>@Configuration @EnableElasticsearchRepositories(basePackages = &quot;com.siolbca.repository&quot;) @ComponentScan(basePackages = &quot;com.siolbca.services&quot;) public class Config { @Bean public RestHighLevelClient createSimpleElasticClient() throws Exception { try { final CredentialsProvider credentialsProvider = new BasicCredentialsProvider(); credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(&quot;elastic&quot;,&quot;G0D1g6TurJ79pcxr1065pU0U&quot;)); SSLContextBuilder sslBuilder = SSLContexts.custom() .loadTrustMaterial(null, (x509Certificates, s) -&gt; true); final SSLContext sslContext = sslBuilder.build(); RestHighLevelClient client = new RestHighLevelClient(RestClient .builder(new HttpHost(&quot;elasticsearch-siol-es-http.siolbca-dev.svc.cluster.local&quot;, 9200, &quot;https&quot;)) //port number is given as 443 since its https schema .setHttpClientConfigCallback(new HttpClientConfigCallback() { @Override public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { return httpClientBuilder .setSSLContext(sslContext) .setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE) .setDefaultCredentialsProvider(credentialsProvider); } }) .setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() { @Override public RequestConfig.Builder customizeRequestConfig( RequestConfig.Builder requestConfigBuilder) { return requestConfigBuilder.setConnectTimeout(5000) .setSocketTimeout(120000); } })); System.out.println(&quot;elasticsearch client created&quot;); return client; } catch (Exception e) { System.out.println(e); throw new Exception(&quot;Could not create an elasticsearch client!!&quot;); } } } </code></pre>
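<p>Disabling verification is fine for getting unblocked, but for completeness: with ECK the operator normally stores the HTTP-layer CA in a secret, so the certificate can be trusted instead of ignored. A rough sketch — the secret name follows the usual <code>&lt;cluster-name&gt;-es-http-certs-public</code> convention and is an assumption here:</p> <pre><code># pull the CA out of the secret created for the HTTP layer
kubectl get secret elasticsearch-siol-es-http-certs-public -n siolbca-dev \
  -o go-template='{{index .data &quot;ca.crt&quot; | base64decode}}' &gt; ca.crt

# import it into a Java truststore the Spring Boot app can load
keytool -importcert -noprompt -alias es-ca -file ca.crt \
  -keystore truststore.jks -storepass changeit
</code></pre> <p>Then build the <code>SSLContext</code> from <code>truststore.jks</code> instead of trusting everything.</p>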
Achmad Fathur Rizki
<p>Firstly, this is my folder:</p> <p><a href="https://i.stack.imgur.com/0zvFX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0zvFX.png" alt="enter image description here" /></a></p> <p>This is my Dockerfile:</p> <pre><code>FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env WORKDIR /app COPY *.csproj ./ RUN dotnet restore COPY . ./ RUN dotnet publish -c Release -o out FROM mcr.microsoft.com/dotnet/aspnet:5.0 WORKDIR /app COPY --from=build-env /app/out . ENTRYPOINT [&quot;dotnet&quot;, &quot;PlatformService.dll&quot;] </code></pre> <p>platforms-depl.yaml (deployment file)</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: platforms-depl spec: replicas: 1 selector: matchLabels: app: platformservice template: metadata: labels: app: platformservice spec: containers: - name: platformservice image: hao14102000/platformservice:latest </code></pre> <p>platforms-np-srv.yaml (NodePort Service file)</p> <pre><code>apiVersion: v1 kind: Service metadata: name: platformnpservice-srv spec: type: NodePort selector: app: platformservice ports: - name: platformservice protocol: TCP port: 80 targetPort: 80 </code></pre> <p>When I apply 2 files this is what I see:</p> <p><a href="https://i.stack.imgur.com/Tf5KP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Tf5KP.png" alt="enter image description here" /></a></p> <p>When I try to connect on port <code>31023</code> using both options below:</p> <pre><code>http://localhost:31023/api/platforms http://10.109.215.230:31023/api/platforms </code></pre> <p>It doesn't work. This happens:</p> <p><a href="https://i.stack.imgur.com/IR7Xg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IR7Xg.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/mjW6Y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mjW6Y.png" alt="enter image description here" /></a></p> <p>I don't know what wrong with this...</p>
Kelvin
<h2>What happens with Linux containers:</h2> <p>Kubernetes on Windows Docker Desktop by default runs its components in WSL2 (Windows subsystem for Linux), it's separate virtual machine with its own IP address and <code>localhost</code>. This is the reason why service is not reachable on <code>localhost</code> from host OS (in this case Windows).</p> <p>Another option is to disable <code>using WSL2 based engine</code> in settings, instead <code>hyper-v</code> will be used and virtual machine will be created however in Docker Desktop it's said that preferably WSL2 should be used for performance benefits.</p> <h2>Available options how to access the service using WSL2:</h2> <ol> <li>Fastest and easiest (loadbalancer)</li> </ol> <p>Set up a <code>service</code> with <code>LoadBalancer</code> type. <code>EXTERNAL-IP</code> will be localhost which solves all questions immediately. For example:</p> <pre><code>kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 11m nginx LoadBalancer 10.110.15.53 localhost 8080:30654/TCP 4m10s </code></pre> <p>Nginx is available in browser on <code>localhost:8080</code>.</p> <ol start="2"> <li>Using virtual machine's IP and nodeport</li> </ol> <p>Another option is to find <code>WSL</code> virtual machine and then access service on this IP and <code>nodeport</code>.</p> <p>To find WSL VM address, you need to run <code>wsl</code> command to connect to this VM and then find its IP address:</p> <pre><code>wsl # ip a | grep eth0 6: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq state UP group default qlen 1000 inet 172.19.xxx.yyy/20 brd 172.19.xxx.yyy scope global eth0 </code></pre> <p>Nginx is available in browser on <code>172.19.xxx.yyy:30654</code>.</p> <ol start="3"> <li>Port-forward - for testing purposes</li> </ol> <p><code>Port-forward</code> is useful for testing purposes, but it shouldn't be used on production systems.</p> <p>To start the proxy to the service, run following command:</p> <pre><code>kubectl port-forward service/nginx 8080:80 &amp; </code></pre> <p>Nginx is available in browser on <code>localhost:8080</code></p> <h2>Assumptions when Hyper-V is used</h2> <p>First <code>hyper-v</code> should be installed on host machine. Note that not all versions of Windows are supported. Please refer to documentation on which versions and how to enable <code>hyper-v</code> <a href="https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v" rel="noreferrer">here</a>.</p> <p>When <code>using WSL2 based engine</code> is deselected, <code>hyper-v</code> is used to work with containers. It creates a separate virtual machine which can be found in <code>Hyper-v Manager</code>.</p> <ul> <li><code>nodeport</code> works on localhost + nodeport</li> <li><code>loadbalancer</code> doesn't work, you can't connect to <code>localhost</code> with service port even though <code>External-IP</code> shows <code>localhost</code>.</li> </ul> <h2>Windows containers on Windows Docker Desktop</h2> <p>It's also possible to run Windows containers on Windows Docker Desktop.</p> <p>It's required to change daemon which will be used. In tray select on <code>switch to Windows containers</code>. 
<a href="https://docs.docker.com/desktop/windows/#switch-between-windows-and-linux-containers" rel="noreferrer">Switch between linux and windows containers</a>.</p> <p>However <code>kubernetes</code> option will become unavailable, because <code>control plane</code> components are designed to be run on <code>linux</code> host.</p> <h2>Environment:</h2> <p><strong>OS</strong>: Windows 10 Enterprise, build: 19041.1165</p> <p><strong>Docker Desktop</strong>: 4.0.0 (67817)</p> <p><strong>Engine</strong>: 20.10.8</p> <p><strong>Kubernetes</strong>: 1.21.4</p> <h2>Useful links:</h2> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="noreferrer">Service types in Kubernetes</a></li> <li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">Kubernetes port-forwarding</a></li> <li><a href="https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/" rel="noreferrer">Hyper-V</a></li> <li><a href="https://docs.docker.com/desktop/windows/#switch-between-windows-and-linux-containers" rel="noreferrer">Docker Desktop for Windows users</a></li> </ul>
moonkotte
<p>I have Celery workers running on Kubernetes 1.20 cluster on AWS EKS using AWS Elasticache Redis as the broker. Because of the nature of the project ~80% of the time celery workers are running idle so the logical thing was to have them scale automatically. Scaling based on CPU/memory works ok. At about 4 workers node scaling also needs to kick in and that works ok as well. An obvious problem is that it takes some time for a new node to start and get fully operational before that node can start taking on new celery worker pods. So some waiting to scale up is expected.</p> <p>Somewhere in all that waiting a fresh celery worker pod gets started and starts accepting new tasks and executing them, but for some unknown reason the <code>startupProbe</code> does not complete. Because <code>startupProbe</code> was not successful the whole pod is killed potentially in the middle of a running task.</p> <p><strong>Question</strong>: Can I prevent celery from taking on tasks before the <code>startupProbe</code> is considered successful?</p> <p>Celery config</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: celery-worker labels: app: celery-worker spec: selector: matchLabels: app: celery-worker progressDeadlineSeconds: 900 template: metadata: labels: app: celery-worker spec: containers: - name: celery-worker image: -redacted- imagePullPolicy: Always command: [&quot;./scripts/celery_worker_entrypoint_infra.sh&quot;] env: - name: CELERY_BROKER_URL valueFrom: secretKeyRef: name: celery-broker-url-secret key: broker-url startupProbe: exec: command: [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;celery -q -A app inspect -d celery@$HOSTNAME --timeout 10 ping&quot;] initialDelaySeconds: 20 timeoutSeconds: 10 successThreshold: 1 failureThreshold: 30 periodSeconds: 10 readinessProbe: exec: command: [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;celery -q -b $CELERY_BROKER_URL inspect -d celery@$HOSTNAME --timeout 10 ping&quot;] periodSeconds: 120 timeoutSeconds: 10 successThreshold: 1 failureThreshold: 3 livenessProbe: exec: command: [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;celery -q -b $CELERY_BROKER_URL inspect -d celery@$HOSTNAME --timeout 10 ping&quot;] periodSeconds: 120 timeoutSeconds: 10 successThreshold: 1 failureThreshold: 5 resources: requests: memory: &quot;384Mi&quot; cpu: &quot;250m&quot; limits: memory: &quot;1Gi&quot; cpu: &quot;500m&quot; terminationGracePeriodSeconds: 2400 </code></pre> <p>Celery HPA config</p> <pre><code>kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2beta2 metadata: name: celery-worker spec: minReplicas: 2 maxReplicas: 40 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: celery-worker metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 90 </code></pre> <p>Celery startup script</p> <pre><code>python manage.py check exec celery --quiet -A app worker \ --loglevel info \ --concurrency 1 \ --uid=nobody \ --gid=nogroup </code></pre> <p>I'm sharing complete configs including <code>readinessProbe</code> and <code>livenessProbe</code> whose values are a bit inflated, but are a consequence of various try-and-error scenarios.</p> <p><strong>Edit:</strong> This is a catch-22 situation.</p> <p>I have defined <code>startupProbe</code> to check if celery is running in current host and that will only be true if celery worker is running. And if celery worker is running it will accept tasks. 
And if it will accept tasks <code>celery inspect</code> command might take too long causing the <code>startupProbe</code> to hang and failing. If <code>startupProbe</code> fails too many times it will kill the pod.</p> <p>Furthermore, if I call <code>celery inspect</code> without destination (host) defined, <code>startupProbe</code> will fail on initial deployment.</p> <p>Conclusion: <code>celery inspect</code> is not a good <code>startupProbe</code> candidate.</p>
mislavcimpersak
<blockquote> <p>Conclusion: <code>celery inspect</code> is not a good <code>startupProbe</code> candidate.</p> </blockquote> <p>I agree.</p> <p>You also mentioned that you have an active <code>healtcheck</code></p> <blockquote> <p>I have in another container a web service that uses Django framework and exposes health check at <code>/health</code></p> </blockquote> <p>I think it is worth using it to create <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes" rel="nofollow noreferrer">startup probe</a>:</p> <blockquote> <p>Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe. The trick is to set up a startup probe with the same command, HTTP or TCP check, with a <code>failureThreshold * periodSeconds</code> long enough to cover the worse case startup time.</p> </blockquote> <p>You also mention an example attack:</p> <blockquote> <p>Hmm, I'm not sure that is a smart thing to do. If for instance my web service that serves <code>/health</code> gets DDoS-ed my celery workers would also fail.</p> </blockquote> <p>However, this shouldn't be a problem. Startup probe will only run when the container is started. The chance that someone will attack you while the environment is being launched is practically zero. You should use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes" rel="nofollow noreferrer">readiness or liveness</a> probe to check if your container is alive while the application is running.</p> <blockquote> <p>The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.</p> </blockquote> <p>Here you can find example yaml</p> <pre class="lang-yaml prettyprint-override"><code>ports: - name: liveness-port containerPort: 8080 hostPort: 8080 livenessProbe: httpGet: path: /healthz port: liveness-port failureThreshold: 1 periodSeconds: 10 startupProbe: httpGet: path: /healthz port: liveness-port failureThreshold: 30 periodSeconds: 10 </code></pre> <p>In this case <code>startupProbe</code> will execute on their first initialization, then will be used <code>livenessProbe</code>.</p>
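<p>If you would rather not depend on an HTTP endpoint of another container, another common pattern — a sketch that assumes you add the marker file yourself, for example from Celery's <code>worker_ready</code> signal — is to have the worker touch a file once it is ready and probe for that file instead of calling <code>celery inspect</code>:</p> <pre><code># assumes the worker creates /tmp/celery_ready when worker_ready fires
startupProbe:
  exec:
    command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;test -f /tmp/celery_ready&quot;]
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 30
</code></pre> <p>This avoids the catch-22: the probe no longer needs the worker to answer <code>inspect ping</code> while it may already be busy with a long task.</p>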
Mikołaj Głodziak
<p>I'm trying to access minikube dashboard from host OS (Windows 10).</p> <p>Minikube is running on my virtual machine Ubuntu 20.04 server.</p> <p>The host is Windows 10 and I use VirtualBox to run my VM.</p> <p>These are the commands I ran on Ubuntu:</p> <pre><code>tomas@ubuntu20:~$ minikube start * minikube v1.22.0 on Ubuntu 20.04 (vbox/amd64) * Using the docker driver based on existing profile * Starting control plane node minikube in cluster minikube * Pulling base image ... * Updating the running docker &quot;minikube&quot; container ... * Preparing Kubernetes v1.21.2 on Docker 20.10.7 ... * Verifying Kubernetes components... - Using image gcr.io/k8s-minikube/storage-provisioner:v5 - Using image kubernetesui/dashboard:v2.1.0 - Using image kubernetesui/metrics-scraper:v1.0.4 * Enabled addons: storage-provisioner, default-storageclass, dashboard * kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A' * Done! kubectl is now configured to use &quot;minikube&quot; cluster and &quot;default&quot; namespace by default tomas@ubuntu20:~$ kubectl get po -A Command 'kubectl' not found, but can be installed with: sudo snap install kubectl tomas@ubuntu20:~$ minikube kubectl -- get po -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-558bd4d5db-9p9ck 1/1 Running 2 72m kube-system etcd-minikube 1/1 Running 2 72m kube-system kube-apiserver-minikube 1/1 Running 2 72m kube-system kube-controller-manager-minikube 1/1 Running 2 72m kube-system kube-proxy-xw766 1/1 Running 2 72m kube-system kube-scheduler-minikube 1/1 Running 2 72m kube-system storage-provisioner 1/1 Running 4 72m kubernetes-dashboard dashboard-metrics-scraper-7976b667d4-r9k7t 1/1 Running 2 54m kubernetes-dashboard kubernetes-dashboard-6fcdf4f6d-c7kwf 1/1 Running 2 54m </code></pre> <p>And then I open another terminal window and I run:</p> <pre><code>tomas@ubuntu20:~$ minikube dashboard * Verifying dashboard health ... * Launching proxy ... * Verifying proxy health ... * Opening http://127.0.0.1:36337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser... http://127.0.0.1:36337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ </code></pre> <p>Now on my Windows 10 host machine I go to web browser type in:</p> <pre><code>http://127.0.0.1:36337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ </code></pre> <p>But I get error:</p> <pre><code>This site can’t be reached 127.0.0.1 refused to connect. </code></pre> <p>How can I access minikube dashboard from my host OS web browser?</p>
Tomas.R
<h2>Reproduction</h2> <p>I reproduced this behaviour on Windows 10 and ubuntu 18.04 LTS virtual machine running using <code>VirtualBox</code>.</p> <p>I have tried both <code>minikube drivers</code>: docker and none (last one means that all kubernetes components will be run on localhost) and behaviour is the same.</p> <h2>What happens</h2> <p>Minikube is designed to be used on localhost machine. When <code>minikube dashboard</code> command is run, minikube downloads images (metrics scraper and dashboard itsefl), launches them, test if they are healthy and then create proxy which is run on <code>localhost</code>. It can't accept connections outside of the virtual machine (in this case it's Windows host to ubuntu VM).</p> <p>This can be checked by running <code>netstat</code> command (cut off some not useful output):</p> <pre><code>$ minikube dashboard 🔌 Enabling dashboard ... 🚀 Launching proxy ... 🤔 Verifying proxy health ... 👉 http://127.0.0.1:36317/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ $ sudo netstat -tlpn Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:36317 0.0.0.0:* LISTEN 461195/kubectl </code></pre> <h2>How to resolve it</h2> <p>Once <code>minikube dashboard</code> command has been run, kubernetes dashboard will remain running in <code>kubernetes-dashboard</code> namespace.</p> <p>Proxy to it should be open manually with following command:</p> <pre><code>kubectl proxy --address='0.0.0.0' &amp; </code></pre> <p>Or if you don't have <code>kubectl</code> installed on your machine:</p> <pre><code>minikube kubectl proxy -- --address='0.0.0.0' &amp; </code></pre> <p>It will start a proxy to kubernetes api server on port <code>8001</code> and will serve on all addresses (it can be changed to default Virtual box NAT address <code>10.2.0.15</code>).</p> <p><strong>Next step</strong> is to add <code>port-forwarding</code> in VirtualBox. Go to your virtual machine -&gt; settings -&gt; network -&gt; NAT -&gt; advanced -&gt; port-forwarding</p> <p>Add a new rule:</p> <ul> <li>host IP = 127.0.0.1</li> <li>host port = any free one, e.g. I used 8000</li> <li>guest IP = can be left empty</li> <li>guest port = 8001 (where proxy is listening to)</li> </ul> <p>Now you can go to your browser on Windows host, paste the URL, correct the port which was assigned in <code>host port</code> and it will work:</p> <pre><code>http://127.0.0.1:8000/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ </code></pre> <h2>Useful links:</h2> <ul> <li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#proxy" rel="nofollow noreferrer">kubectl proxy command</a></li> <li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">Kubernetes dashboard</a></li> </ul>
moonkotte
<p>Our company has a Kubernetes cluster on Alibaba Cloud, version v1.14.0. I've found that worker nodes in NotReady status are removed from the cluster every day at 22:00. At first I thought that was because we have cluster-autoscaler deployed in the cluster, so I checked its config, but I didn't find any setting that would automatically remove nodes from the cluster. The information for our cluster-autoscaler is listed below.</p> <pre><code>Images:registry.cn-hangzhou.aliyuncs.com/acs/autoscaler:v1.3.1-be4edda command: - command: - ./cluster-autoscaler - --v=5 - --stderrthreshold=info - --cloud-provider=alicloud - --scan-interval=10s - --scale-down-delay-after-add=10m - --scale-down-delay-after-failure=1m - --scale-down-unready-time=1m - --ok-total-unready-count=1000 - --max-empty-bulk-delete=50 - --expander=least-waste - --leader-elect=false - --scale-down-unneeded-time=10m - --scale-down-utilization-threshold=0.05 - --scale-down-gpu-utilization-threshold=0.3 - --skip-nodes-with-local-storage=false - --nodes=0:100:private_information </code></pre> <p>Any help would be appreciated.</p>
Eli
<p>Problem solved. Somebody had set up a cron job that cleans up nodes in NotReady status.</p>
Eli
<p>so I am currently connected to a contabo hosted kubernetes cluster. On there I have running kafka and opensearch/opensearch dashboards deployements. I am trying to run logstash so that I can get the data from a kafka topic to opensearch, <a href="https://hub.docker.com/r/opensearchproject/logstash-oss-with-opensearch-output-plugin" rel="nofollow noreferrer">https://hub.docker.com/r/opensearchproject/logstash-oss-with-opensearch-output-plugin</a> this is the image that I use for logstash (<a href="https://justpaste.it/47676" rel="nofollow noreferrer">https://justpaste.it/47676</a> this is my logstash configuration). And the following is my opensearch configuration <a href="https://justpaste.it/a090p" rel="nofollow noreferrer">https://justpaste.it/a090p</a> When I deploy logstash, I successfully get the data from the kafka topic, so my input plugin is working as expected, but the output is not, I am failing to output data to opensearch from logstash. The following is the logs from the logstash pod: <a href="https://justpaste.it/620g4" rel="nofollow noreferrer">https://justpaste.it/620g4</a> .</p> <p>This is the output of &quot;kubectl get services&quot;</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dashboards-opensearch-dashboards ClusterIP 10.96.114.252 &lt;none&gt; 5601/TCP 5d20h grafana ClusterIP 10.107.83.28 &lt;none&gt; 3000/TCP 44h logstash-service LoadBalancer 10.102.132.114 &lt;pending&gt; 5044:31333/TCP 28m loki ClusterIP 10.99.30.246 &lt;none&gt; 3100/TCP 43h loki-headless ClusterIP None &lt;none&gt; 3100/TCP 43h my-cluster-kafka-0 NodePort 10.101.196.50 &lt;none&gt; 9094:32000/TCP 53m my-cluster-kafka-1 NodePort 10.96.247.75 &lt;none&gt; 9094:32001/TCP 53m my-cluster-kafka-2 NodePort 10.98.203.5 &lt;none&gt; 9094:32002/TCP 53m my-cluster-kafka-bootstrap ClusterIP 10.111.178.24 &lt;none&gt; 9091/TCP,9092/TCP,9093/TCP 53m my-cluster-kafka-brokers ClusterIP None &lt;none&gt; 9090/TCP,9091/TCP,9092/TCP,9093/TCP 53m my-cluster-kafka-external-bootstrap NodePort 10.109.134.74 &lt;none&gt; 9094:32100/TCP 53m my-cluster-zookeeper-client ClusterIP 10.98.157.173 &lt;none&gt; 2181/TCP 54m my-cluster-zookeeper-nodes ClusterIP None &lt;none&gt; 2181/TCP,2888/TCP,3888/TCP 54m opensearch-cluster-master ClusterIP 10.98.55.121 &lt;none&gt; 9200/TCP,9300/TCP 19h opensearch-cluster-master-headless ClusterIP None &lt;none&gt; 9200/TCP,9300/TCP 19h prometheus-operated ClusterIP None &lt;none&gt; 9090/TCP 25m prometheus-operator ClusterIP None &lt;none&gt; 8080/TCP 50m </code></pre> <p>What am I doing wrong and how do I establish this connection?</p>
MarkoFire
<p>I figured it out. I think that it was expecting an SSL certificate and that is why it was refusing the connection. The way that I &quot;fixed&quot; this (because I don't need certificate verification for this project for now) is that I changed the logstash configuration in this way:</p> <pre><code> logstash.conf: | input { kafka{ codec =&gt; json bootstrap_servers =&gt; &quot;10.111.178.24:9092&quot; topics =&gt; [&quot;t_events&quot;] } } output { opensearch { hosts =&gt; [&quot;https://10.102.102.109:9200&quot;] ssl_certificate_verification =&gt; false user =&gt; &quot;admin&quot; password =&gt; &quot;admin&quot; index =&gt; &quot;logstash-logs-%{+YYYY.MM.dd}&quot; } } </code></pre> <p>So I added the &quot;ssl_certificate_verification =&gt; false&quot; line to the config, and that enabled me to connect from logstash to opensearch and send the data. I still get encryption in transit by using the https protocol, but I am lacking server certificate verification, which I am fine with for this project.</p>
MarkoFire
<p>Recently I was searching for ways to reduce our cloud bill and came across a company named <code>CAST.AI</code>.</p> <p>To run a savings report you need to install their agent into your cluster, and they claim it is <code>read-only</code>.</p> <p>How do I check if this is true?</p> <p>This comes from the <a href="https://pastebin.com/pLbYAEGP" rel="nofollow noreferrer">yaml file they provide</a> (too long to paste the whole manifest here).</p>
Bob
<h2 id="short-answer">Short answer</h2> <p>Based on <code>cast.io</code> manifest <strong>it's indeed <code>read-only</code> and safe to say it won't mess up anything in the cluster</strong></p> <h2 id="detailed-answer">Detailed answer</h2> <p>In short words manifest will create: namespace, serviceaccount, clusterole with read-only permissions, clusterrolebinding (where mapping between service account and cluster role happens), secret and deployment with pod which will collect cluster's data.</p> <p><code>ClusterRole</code> means that service account linked to this <code>ClusterRole</code> will have access with given verbs within <strong>all namespaces</strong> (which is fine for resource audit).</p> <p>Below is <code>ClusterRole</code> from manifest (added several comments at the beginning, structure is the same):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: castai-agent labels: &quot;app.kubernetes.io/name&quot;: castai-agent rules: # --- # Required for cost savings estimation features. # --- - apiGroups: # api group to look in - &quot;&quot; resources: # resources where this ClusterRole will have access to - pods - nodes - replicationcontrollers - persistentvolumeclaims - persistentvolumes - services verbs: # what this cluster role is allowed to do - get - list - watch - apiGroups: - &quot;&quot; resources: - namespaces verbs: - get - apiGroups: - &quot;apps&quot; resources: - deployments - replicasets - daemonsets - statefulsets verbs: # what this cluster role is allowed to do with resources above - get - list - watch - apiGroups: - &quot;storage.k8s.io&quot; resources: - storageclasses - csinodes verbs: # what this cluster role is allowed to do - get - list - watch - apiGroups: - &quot;batch&quot; resources: - jobs verbs: # what this cluster role is allowed to do - get - list - watch </code></pre> <p>All actions that <code>ClusterRole</code> is allowed to perform are: <code>get</code>, <code>list</code> and <code>watch</code> which are harmless.</p> <p>Here is a list of all available verbs:</p> <ul> <li>get</li> <li>list</li> <li>create</li> <li>update</li> <li>patch</li> <li>watch</li> <li>delete</li> <li>deletecollection</li> </ul> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#review-your-request-attributes" rel="nofollow noreferrer">list of all available attributes, including verbs</a></p> <h2 id="resources-and-limits">Resources and limits</h2> <p>Worst case scenario <code>cast.io</code> pod will consume resources by its limit (this part in deployment), however with today's clusters it shouldn't be an issue:</p> <pre><code> resources: requests: cpu: 100m memory: 64Mi limits: cpu: 1000m memory: 256Mi </code></pre> <p><strong>Requests</strong> means that this amount of resources are required for <code>kubelet</code> to run this pod on the node.</p> <p><strong>Limits</strong> as it's named limits maximum possible resources allocation for pod. 
If it tries to consume more, it will be evicted and rescheduled again to be created.</p> <p><strong>Useful links:</strong></p> <ul> <li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes RBAC - Role Base Access Control</a></li> <li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">Kubernetes autorization overview</a></li> <li><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">Resources and limits</a></li> </ul>
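<p>If you want to double-check what the agent's service account is actually allowed to do after applying the manifest, <code>kubectl</code> can impersonate it and list its permissions. This is only a verification sketch: it assumes the namespace and service account are both named <code>castai-agent</code> like the ClusterRole, so adjust the names to whatever the manifest actually creates:</p>
<pre><code># list everything the agent's service account may do
kubectl auth can-i --list --as=system:serviceaccount:castai-agent:castai-agent

# spot-check that write operations are denied (expected answer: no)
kubectl auth can-i delete pods -A --as=system:serviceaccount:castai-agent:castai-agent
kubectl auth can-i create deployments -A --as=system:serviceaccount:castai-agent:castai-agent
</code></pre>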
moonkotte
<p>I have 1 master and 2 workers in my k8s cluster. It's bare metal and I can't use any cloud provider; I can only use a DNS load balancer. I want to expose standard ports (like 80 and 443) on my nodes, and because of that I can't use NodePort. What is the best solution?</p> <p>My only idea so far was to install Nginx on all of my nodes and proxy the ports to my ClusterIP services. I don't know whether this is a good solution or not.</p> <p><a href="https://i.stack.imgur.com/xldfX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xldfX.jpg" alt="enter image description here" /></a></p>
heydar dasoomi
<p>Things that you are already doing right:</p> <ol> <li>ClusterIP service - if you don't want your services to be invoked from outside the cluster, ClusterIP is the right choice instead of NodePort or LoadBalancer.</li> </ol> <p>Things that you can do:</p> <ol> <li>Create an Ingress controller and an Ingress resource for your cluster, which will listen on ports 80 and 443 and proxy the requests to your services according to the routes defined in the Ingress.</li> <li>You can create the nginx-ingress controller using this link: <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></li> <li>Then create an Ingress resource using this link: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a> (a minimal example is sketched below).</li> </ol>
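<p>As a rough sketch of what such an Ingress resource could look like (the host name and the service name/port are placeholders, not values from your cluster):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com          # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-clusterip-service   # your existing ClusterIP service
            port:
              number: 80
</code></pre>
<p>For ports 80/443 to be reachable directly on the nodes of a bare-metal cluster, the ingress controller itself is typically exposed either with host ports / <code>hostNetwork: true</code> or via a tool such as MetalLB, as described in the bare-metal section of the ingress-nginx deployment guide.</p>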
Gautam Rajotya
<p>I would like to sort pods by their CPU usage. I know there are a lot of monitoring tools that can do this, but I would like to find it using the 'kubectl top' command.</p> <p>Any help is appreciated - thanks.</p>
Gowtham
<p>You can run the below command to sort all pods across all namespaces by their CPU utilization.</p> <pre><code>kubectl top po -A --sort-by=cpu </code></pre>
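<p>Note that <code>kubectl top</code> needs the metrics-server (or another Metrics API provider) installed in the cluster. If you want to sort by memory instead, the same flag accepts <code>memory</code>:</p>
<pre><code>kubectl top po -A --sort-by=memory
</code></pre>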
Akhila. G.Krishnan
<p>I have deployed <a href="https://github.com/monzo/egress-operator#egress-operator" rel="nofollow noreferrer">egress-operator</a> on my Ubuntu machine and this operator internally uses the <a href="https://www.envoyproxy.io/" rel="nofollow noreferrer">envoy</a> proxy to control egress traffic.</p>
<p>The idea is to allow only <a href="https://github.com/monzo/egress-operator#usage" rel="nofollow noreferrer">whitelisted domains</a> for egress from the test pod. I have applied the external service <code>yaml</code> of this operator but it's giving the opposite result: instead of allowing <code>google.com</code> it is blocking google.com and allowing other calls. What could I possibly be doing wrong?</p>
<p><strong>My ExternalService.yaml</strong></p>
<pre><code> apiVersion: egress.monzo.com/v1 kind: ExternalService metadata: name: google spec: dnsName: google.com # optional, defaults to false, instructs dns server to rewrite queries for dnsName hijackDns: true ports: - port: 80 - port: 443 protocol: TCP minReplicas: 1 maxReplicas: 3 </code></pre>
<p><strong>My testpod.yaml</strong></p>
<pre><code>apiVersion: v1 kind: Pod metadata: name: nginx namespace: testNs-system labels: egress.monzo.com/allowed-gateway: google spec: containers: - image: nginx:1.14.2 command: - &quot;sleep&quot; - &quot;604800&quot; imagePullPolicy: IfNotPresent name: nginx restartPolicy: Always </code></pre>
<p>From the test pod, <code>curl -v https://google.com</code> is blocked and other URLs are allowed. As per the operator's Readme, I also need a <a href="https://github.com/monzo/egress-operator#blocking-non-gateway-traffic" rel="nofollow noreferrer">default-deny-Egress K3s policy</a>, therefore I applied that too. But after the <code>default-deny-Egress</code> policy, all egress calls including google.com (the whitelisted one) are blocked from the test pod.</p>
<p><strong>Default-Deny-All-Egress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny-all-egress namespace: testNs-system spec: podSelector: matchLabels: app: nginx egress.monzo.com/allowed-gateway: google policyTypes: - Egress egress: [] </code></pre>
<p>How can I route the egress traffic through the egress-operator pod or egress-operator gateway?</p>
solveit
<p>Posting this answer as a community wiki, feel free to edit and expand.</p> <hr /> <p><code>Istio</code> can be used as a solution for this case. It's an open-source project, so there is nothing to pay for its usage.</p> <p>Istio has very good documentation with examples of how to achieve different results. The documentation is much better for <code>istio</code> than for the <code>monzo</code> operator, and a lot of big companies use it, so it's a reliable solution.</p> <hr /> <p>Accessing external services and how it works:</p> <blockquote> <p>Because all outbound traffic from an Istio-enabled pod is redirected to its sidecar proxy by default, accessibility of URLs outside of the cluster depends on the configuration of the proxy. By default, Istio configures the Envoy proxy to pass through requests for unknown services. Although this provides a convenient way to get started with Istio, configuring stricter control is usually preferable.</p> </blockquote> <p>Please find the <code>istio</code> documentation and covered use cases with the same goal as yours (a short sketch of the relevant resource follows the links):</p> <ul> <li><a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/" rel="nofollow noreferrer">Using egress control</a></li> <li><a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/" rel="nofollow noreferrer">Using egress gateway</a></li> </ul>
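<p>To give an idea of what the whitelisting could look like in Istio, here is a minimal sketch. It assumes the mesh is configured with <code>outboundTrafficPolicy.mode: REGISTRY_ONLY</code> (so unknown external hosts are blocked) and that a <code>ServiceEntry</code> is created per allowed domain; the name and namespace are placeholders:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: allow-google
  namespace: my-namespace        # placeholder namespace
spec:
  hosts:
  - google.com
  - www.google.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: tls
    protocol: TLS
  - number: 80
    name: http
    protocol: HTTP
</code></pre>
<p>With this in place, pods whose sidecars are part of the mesh can reach <code>google.com</code>, while egress to hosts without a matching <code>ServiceEntry</code> is rejected. The linked &quot;egress control&quot; task describes exactly this setup.</p>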
moonkotte
<p>I'm using Elastic Heartbeat in a Kubernetes cluster.</p>
<p>I'm trying to set up the Google Cloud Platform module for Heartbeat, and the documentation says:</p>
<pre><code> metricbeat.modules: - module: googlecloud metricsets: - compute region: &quot;us-&quot; project_id: &quot;your project id&quot; credentials_file_path: &quot;your JSON credentials file path&quot; exclude_labels: false period: 1m </code></pre>
<p>I have my credentials.json file to access GCP, however I can't get these credentials into the Kubernetes pod running Heartbeat.</p>
<p>I tried with a Kubernetes secret, but the module configuration does not allow this. It only allows specifying a path.</p>
<p>How can I put these credentials into my Heartbeat pod?</p>
<p>Thanks!</p>
Alejandro Sotillo
<p>Solved!</p>
<p>I created a secret with my credentials.json file and mounted the secret as a volume in the pod.</p>
<p>Configuration:</p>
<p>secret.yaml:</p>
<pre><code>apiVersion: v1 kind: Secret metadata: name: credentials-secret type: Opaque stringData: sa_json: | { &quot;type&quot;: &quot;service_account&quot;, &quot;project_id&quot;: &quot;erased&quot;, &quot;private_key_id&quot;: &quot;erased&quot;, &quot;private_key&quot;: &quot;-----BEGIN PRIVATE KEY-----erased-----END PRIVATE KEY-----\n&quot;, &quot;client_email&quot;: &quot;erased&quot;, &quot;client_id&quot;: &quot;erased&quot;, &quot;auth_uri&quot;: &quot;https://accounts.google.com/o/oauth2/auth&quot;, &quot;token_uri&quot;: &quot;https://oauth2.googleapis.com/token&quot;, &quot;auth_provider_x509_cert_url&quot;: &quot;https://www.googleapis.com/oauth2/v1/certs&quot;, &quot;client_x509_cert_url&quot;: &quot;https://www.googleapis.com/robot/v1/metadata/x509/xxxxx.iam.gserviceaccount.com&quot; } </code></pre>
<p>deployment.yaml:</p>
<pre><code>--- volumeMounts: - mountPath: /etc/gcp name: service-account-credentials-volume readOnly: true --- --- --- volumes: - name: service-account-credentials-volume secret: secretName: credentials-secret items: - key: sa_json path: credentials.json </code></pre>
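<p>For completeness, with the secret mounted at <code>/etc/gcp</code> as above, the module configuration from the question would then simply point at the mounted file (this is the same snippet with the path filled in; the region and project id remain placeholders):</p>
<pre><code>metricbeat.modules:
  - module: googlecloud
    metricsets:
      - compute
    region: &quot;us-&quot;
    project_id: &quot;your project id&quot;
    credentials_file_path: &quot;/etc/gcp/credentials.json&quot;
    exclude_labels: false
    period: 1m
</code></pre>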
Alejandro Sotillo
<p>I have the following ingress.yaml file</p>
<pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: in annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600 alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/certificate-arn: xxxx alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;: 80}, {&quot;HTTPS&quot;: 443}]' spec: rules: - http: paths: - path: /api/bulk-api/* backend: serviceName: dg-bulk-api servicePort: 5000 - path: /api/adjuster-selection backend: serviceName: dg-adjuster-selection servicePort: 5050 - path: /api/cockpit/* backend: serviceName: dg-cockpit servicePort: 5050 - path: /api/regression/* backend: serviceName: dg-regression servicePort: 5005 - path: /api/lp/task-details* backend: serviceName: lp-task-detail servicePort: 5050 - path: /api/tool-setup/* backend: serviceName: dg-tool-setup servicePort: 5000 - path: /api/guideline/* backend: serviceName: dg-guideline servicePort: 5050 - path: /* backend: serviceName: dg-ui servicePort: 80 </code></pre>
<p>The above yaml creates an ALB with listeners at 80 and 443 for all the routes. However, I want the port 80 listener for all routes except the dg-ui service, and 443 for the dg-ui service only. Let me know how this can be done.</p>
Gautam Rajotya
<p>I have been able to solve the issue. Thought it would be helpful for everyone:</p> <ol> <li>Updated my ALB Ingress Controller to v2.1. Instructions can be found at: <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/deploy/installation/" rel="nofollow noreferrer">AWS LoadBalancer Controller</a></li> <li>Created separate Ingress YAMLs for the HTTP and HTTPS listener rules.</li> <li>Added the annotation <code>alb.ingress.kubernetes.io/group.name: my-team.awesome-group</code> to both Ingresses. This creates 2 Ingresses and attaches the rules to 1 common ALB (see the sketch after this list). More on this annotation can be found here: <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/ingress/annotations/#group.name" rel="nofollow noreferrer">IngressGroups</a></li> </ol>
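<p>As a sketch of how the two Ingress resources could be split, values below are copied from the question's manifest where possible; treat it as illustrative rather than a tested configuration:</p>
<pre><code># Ingress 1 - HTTP (port 80) listener for the API routes
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: in-api
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: my-team.awesome-group
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;: 80}]'
spec:
  rules:
  - http:
      paths:
      - path: /api/bulk-api/*
        backend:
          serviceName: dg-bulk-api
          servicePort: 5000
      # ... the remaining /api/ paths from the original ingress go here
---
# Ingress 2 - HTTPS (port 443) listener for dg-ui only
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: in-ui
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: my-team.awesome-group
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: xxxx
    alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;: 443}]'
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: dg-ui
          servicePort: 80
</code></pre>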
Gautam Rajotya
<p>Friends, I am trying to implement a readinessProbe here like this:</p>
<pre><code>readinessProbe: exec: command: [&quot;mysql&quot;, &quot;-u root&quot;, &quot;-p $(MYSQL_ROOT_PASSWORD)&quot;, &quot;SHOW DATABASES;&quot;] initialDelaySeconds: 5 periodSeconds: 2 timeoutSeconds: 1 </code></pre>
<p>But I am getting an access denied error: <strong>ERROR 1045 (28000): Access denied for user ' root'@'localhost' (using password: YES)</strong></p>
<p>When I exec inside my pod, I can connect to the database normally, so I think I am doing something wrong in the way the connection command is executed. Any idea how I can solve this problem?</p>
marcelo
<p>It worked for me like this:</p>
<pre><code>readinessProbe: exec: command: - &quot;bash&quot; - &quot;-c&quot; - &quot;mysql --user=${MYSQL_USER} --password=${MYSQL_PASSWORD} --execute=\&quot;SHOW DATABASES;\&quot;&quot; </code></pre>
<p>But I still don't know how to &quot;translate&quot; this if I want to use the bracket (inline array) syntax.</p>
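<p>For what it's worth, a sketch of the same probe in bracket syntax is shown below. The likely reason the original form failed is that <code>&quot;-u root&quot;</code> passes the whole string (including the space) as one argument, so MySQL sees the user name <code>&quot; root&quot;</code> with a leading space - exactly what the error message shows. Combining everything into a single <code>bash -c</code> string avoids that:</p>
<pre><code>readinessProbe:
  exec:
    command: [&quot;bash&quot;, &quot;-c&quot;, &quot;mysql --user=${MYSQL_USER} --password=${MYSQL_PASSWORD} --execute='SHOW DATABASES;'&quot;]
  initialDelaySeconds: 5
  periodSeconds: 2
  timeoutSeconds: 1
</code></pre>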
marcelo
<p>I have one .NET application running on a K8s cluster which serves service requests. I have a requirement to spin up pods when a custom resource (CR) is available (the CR in turn has to be created during the service call). I have implemented spinning up a pod at runtime using the operator pattern (GoLang) and it's working fine. As of now I am applying the custom resources manually, but this is not going to work in the long run, as I have to spin up N pods for N requests and would have to apply N custom resources to achieve this. I am aware that when the endpoint is hit I can call the Kubernetes CRUD APIs to create the custom resource, but I am not willing to touch our application code; instead I would like to develop the logic in the custom operator code itself. However, I am stuck, as I have no idea how to achieve this. Every input is appreciated.</p>
Nanda K S
<p>Maybe you can try a work queue to schedule/queue requests that perform jobs: <a href="https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/</a></p>
Ayush Sharma
<p>I have tried so many times to run skaffold from my project directory. It keeps me returning the same error: 1/1 deployment(s) failed</p> <p><a href="https://i.stack.imgur.com/WL3DP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WL3DP.png" alt="deployment and service creates or configures but Waiting for deployments to stabilize... in here it gives message deployment/auth-depl failed. Error: container auth is waiting to start: toufiqurr/auth:032c18c37052fbb11c28f36414f079c0562dcea8fd96070a55ecd98d31060fdb can't be pulled." /></a></p> <p>Skaffold.yaml file:</p> <pre><code>apiVersion: skaffold/v2alpha3 kind: Config deploy: kubectl: manifests: - ./infra/k8s/* build: local: push: false artifacts: - image: ankan00/auth context: auth docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . </code></pre> <p>Created a docker image of ankan00/auth by docker build -t ankan00/auth .</p> <p>It ran successfully when I was working with this project. But I had to uninstall docker for some reason and then when I reinstalled docker built the image again(after deleting the previous instance of the image in docker desktop), then skaffold is not working anymore. I tried to delete skaffold folder and reinstall skaffold but the problem remains the same. Everytime it ends up in cleaning up and throwing 1/1 deployment(s) failed.</p> <p>My Dockerfile:</p> <pre><code>FROM node:alpine WORKDIR /app COPY package.json . RUN npm install COPY . . CMD [&quot;npm&quot;, &quot;start&quot;] </code></pre> <p>my auth-depl.yaml file which is in infra\k8s directory</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: auth-depl spec: replicas: 1 selector: matchLabels: app: auth template: metadata: labels: app: auth spec: containers: - name: auth image: ankan00/auth --- apiVersion: v1 kind: Service metadata: name: auth-srv spec: selector: app: auth ports: - name: auth protocol: TCP port: 3000 targetPort: 3000 </code></pre>
Toufiqur Rahman
<p>Okay! I resolved the isses by re-installing the docker desktop and not enabling Kubernetes in it. I installed Minikube and then I ran <code>skaffold dev</code> and this time it was not giving error in <em><strong>deployments to stabilize...</strong></em> stage. Looks like Kubernetes desktop is the culprit? I am not sure though because I ran it successfully before.</p> <p><strong>New Update!!!</strong> I worked again on the Kubernetes desktop. I deleted Minikube because Minicube uses the same port that the ingress-Nginx server uses to run the project. So, I had decided to put back Kubernetes desktop, also Google cloud Kubernetes engine. And scaffold works perfectly this time.</p>
Toufiqur Rahman
<p>I have the <code>minikube</code> environment as follows:</p> <ul> <li>Host OS: <code>CentOS Linux release 7.7.1908 (Core)</code></li> <li>Docker: <code>Docker Engine - Community 20.10.7</code></li> <li>minikube: <code>minikube version: v1.20.0</code></li> </ul>
<p>I would like to add some additional host mappings (5+ IPs and names) to the <code>/etc/hosts</code> inside the <code>minikube</code> container. I used <code>minikube ssh</code> to enter the shell and tried <code>echo &quot;172.17.x.x my.some.host&quot; &gt;&gt; /etc/hosts</code>. This fails with <code>-bash: /etc/hosts: Permission denied</code>, since the user who logs in to this shell is <code>docker</code>, not <code>root</code>.</p>
<p>I also found, using <code>docker container ls</code>, that there is a Docker container named <code>minikube</code> running on the host machine. I can even enter this container as <code>root</code> by using <code>docker exec -it -u root minikube /bin/bash</code>. I understand that this is a kind of tweak and may be bad practice. It is also too many manual steps.</p>
<p><code>docker</code> and <code>docker-compose</code> provide <code>--add-host</code> and <code>extra_hosts</code> respectively to add hostname mappings - does <code>minikube</code> provide something similar? Is there any good practice, from a system administrator's point of view, to achieve this with <code>minikube</code>?</p>
<h4>Edit 1</h4>
<p>After <code>echo 172.17.x.x my.some.host &gt; ~/.minikube/files/etc/hosts</code> and starting <code>minikube</code>, there are errors like the following:</p>
<pre class="lang-sh prettyprint-override"><code>[kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp: lookup localhost on 8.8.8.8:53: no such host. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' </code></pre>
<p>Then I used <code>vi</code> to create a whole <code>hosts</code> file at <code>~/.minikube/files/etc/hosts</code> as follows:</p>
<pre><code>127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.17.x.x my.some.host1 172.17.x.y my.some.host2 ... </code></pre>
<p>This time <code>minikube</code> starts properly.</p>
Charlee Chitsuk
<p>Minikube has a <a href="https://minikube.sigs.k8s.io/docs/handbook/filesync/" rel="nofollow noreferrer">built-in sync mechanism</a> that could deploy a desired /etc/hosts with the following example:</p> <pre><code>mkdir -p ~/.minikube/files/etc echo 127.0.0.1 localhost &gt; ~/.minikube/files/etc/hosts minikube start </code></pre> <p>Then go and check if it's working:</p> <pre><code>minikube ssh </code></pre> <p>Once you are inside the container, you can view the contents of the /etc/hosts file using:</p> <pre><code>cat /etc/hosts </code></pre>
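<p>Tying this together with the &quot;Edit 1&quot; in the question: since the synced file replaces <code>/etc/hosts</code> entirely, it should contain the usual default entries plus the extra mappings, not only the custom lines. A sketch (the <code>172.17.x.x</code> addresses and host names are placeholders):</p>
<pre><code>mkdir -p ~/.minikube/files/etc
cat &gt; ~/.minikube/files/etc/hosts &lt;&lt;'EOF'
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
172.17.x.x      my.some.host1
172.17.x.y      my.some.host2
EOF
minikube start
</code></pre>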
Pit
<p>I have a new Nuxt 3 application. In my config.ts, I have the following variables defined:</p> <pre class="lang-js prettyprint-override"><code> runtimeConfig: { // Private keys are only available on the server // Public keys that are exposed to the client public: { baseUrl: process.env.NUXT_PUBLIC_BASE_URL || '&lt;url&gt;', communityId: process.env.NUXT_ENV_COMMUNITY_ID || &lt;id&gt;, } }, </code></pre> <p>I have this application built to a docker image, then deployed to Kubernetes as a pod/deployment, alongside a configmap which holds the 2 variables I defined earlier.</p> <p>When I SSH into the pod, and run <code>echo $NUXT_ENV_COMMUNITY_ID</code>, I can see the ID I set in the configmap, and same story with the base url, however when I try to access the variables in Nuxt, they default to their default options. If I remove the defaults, the app crashes/returns a 500 error since they are not defined.</p> <p>Any suggestions?</p> <p>--EDIT YAML--</p> <p>Deployment</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: &lt;app name&gt;-deployment namespace: default labels: app: &lt;app name&gt; spec: replicas: 2 selector: matchLabels: app: &lt;app name&gt; template: metadata: labels: app: &lt;app name&gt; spec: imagePullSecrets: - name: regcred containers: - name: &lt;container name&gt; image: &lt;my-image&gt; ports: - containerPort: 3000 envFrom: - configMapRef: name: &lt;app name&gt;-config </code></pre> <p>Configmap</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: &lt;app name&gt;-config namespace: default data: NUXT_PUBLIC_BASE_URL: &quot;&lt;url&gt;&quot; NUXT_ENV_COMMUNITY_ID: &quot;2&quot; </code></pre>
Flinty926
<p>See <a href="https://nuxt.com/docs/guide/going-further/runtime-config#environment-variables" rel="nofollow noreferrer">https://nuxt.com/docs/guide/going-further/runtime-config#environment-variables</a>.</p>
<p>You can set your variables as empty strings in <code>runtimeConfig.public</code>. I also notice there's no <code>_PUBLIC_</code> prefix on the second one, which might be the issue.</p>
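<p>Assuming the standard Nuxt 3 convention that public runtime config keys are overridden by environment variables named <code>NUXT_PUBLIC_&lt;KEY&gt;</code>, the ConfigMap keys would need to match the config keys like this (a sketch based on the names in the question):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: &lt;app name&gt;-config
  namespace: default
data:
  NUXT_PUBLIC_BASE_URL: &quot;&lt;url&gt;&quot;
  NUXT_PUBLIC_COMMUNITY_ID: &quot;2&quot;   # was NUXT_ENV_COMMUNITY_ID, which Nuxt 3 does not map to runtimeConfig.public.communityId
</code></pre>
<p>Note that these overrides are read when the Nuxt server starts, so the pods have to be restarted after the ConfigMap changes for the new values to be picked up.</p>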
Agénor Debriat
<p>I have a workflow on Github action that builds, tests, and pushes a container to GKE. I followed the steps outlined in <a href="https://docs.github.com/en/actions/guides/deploying-to-google-kubernetes-engine" rel="nofollow noreferrer">https://docs.github.com/en/actions/guides/deploying-to-google-kubernetes-engine</a> but my build keeps on failing. The failure comes from the Kustomization stage of the build process.</p> <p>This is what the error looks like:</p> <pre><code>Run ./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA ./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA ./kustomize build . | kubectl apply -f - kubectl rollout status deployment/$DEPLOYMENT_NAME kubectl get services -o wide shell: /usr/bin/bash -e ***0*** env: PROJECT_ID: *** GKE_CLUSTER: codematictest GKE_ZONE: us-east1-b DEPLOYMENT_NAME: codematictest IMAGE: codematictest CLOUDSDK_METRICS_ENVIRONMENT: github-actions-setup-gcloud KUBECONFIG: /home/runner/work/codematic-test/codematic-test/fb7d2ebb-4c82-4d43-af10-5b0b62bab1fd Error: Missing kustomization file 'kustomization.yaml'. Usage: kustomize edit set image [flags] Examples: The command set image postgres=eu.gcr.io/my-project/postgres:latest my-app=my-registry/my-app@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3 will add images: - name: postgres newName: eu.gcr.io/my-project/postgres newTag: latest - digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3 name: my-app newName: my-registry/my-app to the kustomization file if it doesn't exist, and overwrite the previous ones if the image name exists. The command set image node:8.15.0 mysql=mariadb alpine@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3 will add images: - name: node newTag: 8.15.0 - name: mysql newName: mariadb - digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3 name: alpine to the kustomization file if it doesn't exist, and overwrite the previous ones if the image name exists. Flags: -h, --help help for image Error: Process completed with exit code 1. </code></pre> <p>My GitHub workflow file looks like this:</p> <pre><code>name: gke on: push env: PROJECT_ID: ${{ secrets.GKE_PROJECT }} GKE_CLUSTER: codematictest GKE_ZONE: us-east1-b DEPLOYMENT_NAME: codematictest IMAGE: codematictest jobs: setup-build-publish-deploy: name: Setup, Build, Publish, and Deploy defaults: run: working-directory: api runs-on: ubuntu-latest environment: production steps: - name: Checkout uses: actions/checkout@v2 # Setup gcloud CLI - uses: google-github-actions/[email protected] with: service_account_key: ${{ secrets.GKE_SA_KEY }} project_id: ${{ secrets.GKE_PROJECT }} # Configure Docker to use the gcloud command-line tool as a credential # helper for authentication - run: |- gcloud --quiet auth configure-docker # Get the GKE credentials so we can deploy to the cluster - uses: google-github-actions/[email protected] with: cluster_name: ${{ env.GKE_CLUSTER }} location: ${{ env.GKE_ZONE }} credentials: ${{ secrets.GKE_SA_KEY }} # Build the Docker image - name: Build run: |- docker build \ --tag &quot;gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA&quot; \ --build-arg GITHUB_SHA=&quot;$GITHUB_SHA&quot; \ --build-arg GITHUB_REF=&quot;$GITHUB_REF&quot; \ . 
# Push the Docker image to Google Container Registry - name: Publish run: |- docker push &quot;gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA&quot; # Set up kustomize - name: Set up Kustomize run: |- curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64 chmod u+x ./kustomize # Deploy the Docker image to the GKE cluster - name: Deploy run: |- ./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA ./kustomize build . | kubectl apply -f - kubectl rollout status deployment/$DEPLOYMENT_NAME kubectl get services -o wide </code></pre>
Bash
<p>The kustomization file, as explained in its <a href="https://github.com/kubernetes-sigs/kustomize#usage" rel="nofollow noreferrer">repository</a>, should be part of a file structure like this:</p> <pre><code>~/someApp ├── deployment.yaml ├── kustomization.yaml └── service.yaml </code></pre>
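<p>In other words, the error means there is no <code>kustomization.yaml</code> in the directory the workflow runs <code>./kustomize edit</code> in (here the <code>api</code> working directory). A minimal one could look like the sketch below - the manifest file names are placeholders for whatever Kubernetes manifests the repo actually contains:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
</code></pre>
<p>The <code>./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=...</code> step from the workflow will then add or overwrite the <code>images:</code> section in this file automatically, as described in the command's help output shown in the error.</p>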
Pit
<p>I'm getting the following error when running <code>skaffold dev</code> on my microservices project. I literally taken the code straight out of a tutorial on microservices, but am still getting the error:</p> <pre><code>The Deployment &quot;orders-mongo-depl&quot; is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{&quot;app&quot;:&quot;orders-mongo&quot;}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable kubectl apply: exit status 1 </code></pre> <p>Here is my &quot;orders-mongo-depl.yaml&quot; file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: orders-mongo-depl spec: replicas: 1 selector: matchLabels: app: orders-mongo template: metadata: labels: app: orders-mongo spec: containers: - name: orders-mongo image: mongo --- apiVersion: v1 kind: Service metadata: name: orders-mongo-srv spec: selector: app: orders-mongo ports: - name: db protocol: TCP port: 27017 targetPort: 27017 </code></pre> <p>Here is my skaffold.yaml file</p> <pre><code>apiVersion: skaffold/v2alpha3 kind: Config deploy: kubectl: manifests: - ./infra/k8s/* build: local: push: false artifacts: - image: stephengrider/auth context: auth docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . - image: stephengrider/client context: client docker: dockerfile: Dockerfile sync: manual: - src: '**/*.js' dest: . - image: stephengrider/tickets context: tickets docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . - image: stephengrider/orders context: orders docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . </code></pre> <p>I have tried restarting skaffold, deleting and restarting Minikube, changing the minikube driver between virtualbox and docker, and restarting my computer. I am on the latest version of ubuntu and have up to date minikube, kubernetes and docker.</p>
Patrick Gorman
<p>Posting this as an answer out of the comments since it resolved the issue.</p>
<p><strong>Short answer</strong></p>
<p>To clean up all deployments and objects, the following command should be issued:</p>
<p><code>skaffold delete</code></p>
<p><strong>A bit more detail</strong></p>
<p>During development and testing, objects are created. When changes are then made to immutable parts of the config or the objects themselves, an error is fired, e.g.</p>
<pre><code>The Deployment &quot;orders-mongo-depl&quot; is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{&quot;app&quot;:&quot;orders-mongo&quot;}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable kubectl apply: exit status 1 </code></pre>
<p>A short test in Kubernetes showed that changing the <code>selector</code> in a service or deployment produces exactly the same error, which means you need to either correct the manifests/objects or reset the deployments via <code>skaffold</code> if it's not clear where the discrepancy comes from.</p>
<p><a href="https://skaffold.dev/docs/tutorials/artifact-dependencies/#cleanup" rel="nofollow noreferrer">Skaffold cleanup reference</a></p>
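<p>If you would rather not tear down everything skaffold created, deleting just the conflicting object also works; the error message names it, so something along these lines should be enough before re-running skaffold:</p>
<pre><code>kubectl delete deployment orders-mongo-depl
skaffold dev
</code></pre>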
moonkotte
<p>I have installed a kubernetes cluster on <code>Azure</code> with kubespray <code>2.13.2</code>. But after I have installed some pods of my data platform components, I have noticed that the pods running on the same node cannot access to each other through service.</p> <p>For example, my presto coordinator has to access hive metastore. Let's see the services in my namespace:</p> <pre><code>kubectl get svc -n ai-developer NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metastore ClusterIP 10.233.12.66 &lt;none&gt; 9083/TCP 4h53m </code></pre> <p>Hive Metastore service is called <code>metastore</code>, through which my presto coordinator has to access hive metastore pod. Let's see the following pods in my namespace:</p> <pre><code>kubectl get po -n ai-developer -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES metastore-5544f95b6b-cqmkx 1/1 Running 0 9h 10.233.69.20 minion-3 &lt;none&gt; &lt;none&gt; presto-coordinator-796c4c7bcd-7lngs 1/1 Running 0 5h32m 10.233.69.29 minion-3 &lt;none&gt; &lt;none&gt; presto-worker-0 1/1 Running 0 5h32m 10.233.67.52 minion-1 &lt;none&gt; &lt;none&gt; presto-worker-1 1/1 Running 0 5h32m 10.233.70.24 minion-4 &lt;none&gt; &lt;none&gt; presto-worker-2 1/1 Running 0 5h31m 10.233.68.24 minion-2 &lt;none&gt; &lt;none&gt; presto-worker-3 1/1 Running 0 5h31m 10.233.71.27 minion-0 &lt;none&gt; &lt;none&gt; </code></pre> <p>Take a look at that the hive metastore pod <code>metastore-5544f95b6b-cqmkx </code> which is running on the node <code>minion-3</code> on which presto coordinator pod <code>presto-coordinator-796c4c7bcd-7lngs</code> also is running.</p> <p>I have configured hive metastore url of <code>thrift://metastore:9083</code> to hive properties for hive catalog in presto coordinator. When the presto pods are running on that same node where hive metastore pod is running, they cannot access to my hive metastore, but the pod running on other node where hive metastore is not running can access to the hive metastore through <code>service</code> very well.</p> <p>I have mentioned just one example, but I have experienced several other cases like this example for now.</p> <p><code>kubenet</code> is installed as network plugin in my kubernetes cluster installed with kubespray on <code>azure</code>:</p> <pre><code>/usr/local/bin/kubelet --logtostderr=true --v=2 --node-ip=10.240.0.4 --hostname-override=minion-3 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/etc/kubernetes/kubelet-config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --pod-infra-container-image=k8s.gcr.io/pause:3.1 --runtime-cgroups=/systemd/system.slice --hairpin-mode=promiscuous-bridge --network-plugin=kubenet --cloud-provider=azure --cloud-config=/etc/kubernetes/cloud_config </code></pre> <p>Any idea?</p>
mykidong
<p>Please check whether the iptables FORWARD chain default policy is ACCEPT. In my case, after setting the FORWARD chain default policy from DROP to ACCEPT, the communication between nodes works well.</p>
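<p>For reference, a quick way to check and (temporarily, until a reboot or an iptables reload) change the policy on each node could look like this:</p>
<pre><code># show the current default policy of the FORWARD chain
sudo iptables -S FORWARD | head -1     # prints e.g. &quot;-P FORWARD DROP&quot;

# set the default policy to ACCEPT
sudo iptables -P FORWARD ACCEPT
</code></pre>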
Paul
<p>I have set up a kubernetes cluster based on three VMs Centos 8 and I deployed a pod with nginx.</p> <p><strong>IP addresses of the VMs:</strong></p> <pre><code>kubemaster 192.168.56.20 kubenode1 192.168.56.21 kubenode2 192.168.56.22 </code></pre> <p>On each VM the interfaces and routes are defined as following:</p> <pre><code>ip addr: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 2: enp0s3: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 08:00:27:d2:1b:97 brd ff:ff:ff:ff:ff:ff inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3 valid_lft 75806sec preferred_lft 75806sec 3: enp0s8: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 08:00:27:df:77:05 brd ff:ff:ff:ff:ff:ff inet 192.168.56.22/24 brd 192.168.56.255 scope global noprefixroute enp0s8 valid_lft forever preferred_lft forever 4: virbr0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 52:54:00:ff:47:9a brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 valid_lft forever preferred_lft forever 5: virbr0-nic: &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000 link/ether 52:54:00:ff:47:9a brd ff:ff:ff:ff:ff:ff 6: docker0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:19:52:19:b1 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever 7: flannel.1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1450 qdisc noqueue state UNKNOWN group default link/ether 22:b8:b4:5a:5a:26 brd ff:ff:ff:ff:ff:ff inet 10.244.2.0/32 brd 10.244.2.0 scope global flannel.1 valid_lft forever preferred_lft forever ip route: default via 10.0.2.2 dev enp0s3 proto dhcp metric 100 default via 192.168.56.1 dev enp0s8 proto static metric 101 10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100 10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.22 metric 101 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown </code></pre> <p>On each VM I have two network adapters, one NAT for internet access (enp0s3) and one Host only Network for the 3 VMs to communicate (enp0s8) with each other (it is ok I tested it with ping command).</p> <p>On each VM I applied the following firewall rules:</p> <pre><code>firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API server firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API firewall-cmd --permanent --add-port=10250/tcp # Kubelet API firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager firewall-cmd --permanent --add-port=8285/udp # Flannel firewall-cmd --permanent --add-port=8472/udp # Flannel firewall-cmd --add-masquerade –permanent firewall-cmd --reload </code></pre> <p>finally I deployed the cluster and nginx with the following commands:</p> <pre><code>sudo kubeadm init --apiserver-advertise-address=192.168.56.20 
--pod-network-cidr=10.244.0.0/16 (for Flannel CNI) kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml kubectl create deployment nginx --image=nginx kubectl create service nodeport nginx --tcp=80:80 </code></pre> <p>More general information of my cluster:</p> <p><strong>kubectl get nodes -o wide</strong></p> <pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME kubemaster Ready master 3h8m v1.19.2 192.168.56.20 &lt;none&gt; CentOS Linux 8 (Core) 4.18.0-193.19.1.el8_2.x86_64 docker://19.3.13 kubenode1 Ready &lt;none&gt; 3h6m v1.19.2 192.168.56.21 &lt;none&gt; CentOS Linux 8 (Core) 4.18.0-193.19.1.el8_2.x86_64 docker://19.3.13 kubenode2 Ready &lt;none&gt; 165m v1.19.2 192.168.56.22 &lt;none&gt; CentOS Linux 8 (Core) 4.18.0-193.19.1.el8_2.x86_64 docker://19.3.13 </code></pre> <p><strong>kubectl get pods --all-namespaces -o wide</strong></p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default nginx-6799fc88d8-mrvsg 1/1 Running 0 3h 10.244.1.3 kubenode1 &lt;none&gt; &lt;none&gt; kube-system coredns-f9fd979d6-6qxk9 1/1 Running 0 3h9m 10.244.1.2 kubenode1 &lt;none&gt; &lt;none&gt; kube-system coredns-f9fd979d6-bj2fd 1/1 Running 0 3h9m 10.244.0.2 kubemaster &lt;none&gt; &lt;none&gt; kube-system etcd-kubemaster 1/1 Running 0 3h9m 192.168.56.20 kubemaster &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-kubemaster 1/1 Running 0 3h9m 192.168.56.20 kubemaster &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-kubemaster 1/1 Running 0 3h9m 192.168.56.20 kubemaster &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-fdv4p 1/1 Running 0 166m 192.168.56.22 kubenode2 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-vvhsz 1/1 Running 0 3h6m 192.168.56.21 kubenode1 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-vznl5 1/1 Running 0 3h6m 192.168.56.20 kubemaster &lt;none&gt; &lt;none&gt; kube-system kube-proxy-45tmz 1/1 Running 0 3h9m 192.168.56.20 kubemaster &lt;none&gt; &lt;none&gt; kube-system kube-proxy-nb7jt 1/1 Running 0 3h7m 192.168.56.21 kubenode1 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-tl9n5 1/1 Running 0 166m 192.168.56.22 kubenode2 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-kubemaster 1/1 Running 0 3h9m 192.168.56.20 kubemaster &lt;none&gt; &lt;none&gt; </code></pre> <p><strong>kubectl get service -o wide</strong></p> <pre><code>kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3h10m &lt;none&gt; nginx NodePort 10.102.152.25 &lt;none&gt; 80:30086/TCP 179m app=nginx </code></pre> <p><strong>Kubernetes version:</strong></p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.2&quot;, GitCommit:&quot;f5743093fd1c663cb0cbc89748f730662345d44d&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-09-16T13:41:02Z&quot;, GoVersion:&quot;go1.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.2&quot;, GitCommit:&quot;f5743093fd1c663cb0cbc89748f730662345d44d&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-09-16T13:32:58Z&quot;, GoVersion:&quot;go1.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p><strong>iptables version:</strong></p> <pre><code>iptables v1.8.4 (nf_tables) </code></pre> <p><strong>Results and issue:</strong></p> <ul> <li>If I do curl 192.168.56.21:30086 from any VM -&gt; OK I 
get the nginx code.</li> <li>If I try other IPs (e.g., curl 192.168.56.22:30086), it fails... (curl: (7) Failed to connect to 192.168.56.22 port 30086: Connection time out)</li> </ul> <p><strong>What I tried to debug:</strong></p> <pre><code>sudo netstat -antup | grep kube-proxy o tcp 0 0 0.0.0.0:30086 0.0.0.0:* LISTEN 4116/kube-proxy o tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 4116/kube-proxy o tcp 0 0 192.168.56.20:49812 192.168.56.20:6443 ESTABLISHED 4116/kube-proxy o tcp6 0 0 :::10256 :::* LISTEN 4116/kube-proxy </code></pre> <p>Thus on each VM it seems the kube-proxy listens on port 30086 which is OK.</p> <p>I tried to apply this rule on each node (found on another ticket) without any success:</p> <pre><code>iptables -A FORWARD -j ACCEPT </code></pre> <p>Do you have any idea why I can't reach the service from master node and node 2?</p> <p><strong>First update:</strong></p> <ul> <li>It seems Centos 8 is not compatible with kubeadm. I changed for Centos 7 but still have the issue;</li> <li>The flannel pods created are using the wrong interface (enp0s3) instead of enp0s8. I modified the kube-flannel.yaml file and added the argument (--iface=enp0s8). Now my pods are using the correct interface.</li> </ul> <pre><code>kubectl logs kube-flannel-ds-nn6v4 -n kube-system: I0929 06:19:36.842149 1 main.go:531] Using interface with name enp0s8 and address 192.168.56.22 I0929 06:19:36.842243 1 main.go:548] Defaulting external address to interface address (192.168.56.22) </code></pre> <p>Even by fixing these two things I still have the same issue...</p> <p><strong>Second update:</strong></p> <p>The final solution was to flush iptables on each VM with the following commands:</p> <pre><code>systemctl stop kubelet systemctl stop docker iptables --flush iptables -tnat --flush systemctl start kubelet systemctl start docker </code></pre> <p>Now it is working correctly :)</p>
Antoine
<p>I finally found the solution after switching to CentOS 7 and fixing the Flannel configuration (see the other comments). I then noticed some issues in the pods where coredns is running. Here is an example of what happens inside one of these pods:</p>
<pre><code>kubectl logs coredns-f9fd979d6-8gtlp -n kube-system: E0929 07:09:40.200413 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get &quot;https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0&quot;: dial tcp 10.96.0.1:443: connect: no route to host [INFO] plugin/ready: Still waiting on: &quot;kubernetes&quot; </code></pre>
<p>The final solution was to flush iptables on each VM with the following commands:</p>
<pre><code>systemctl stop kubelet systemctl stop docker iptables --flush iptables -tnat --flush systemctl start kubelet systemctl start docker </code></pre>
<p>Then I could access the deployed service from each VM :)</p>
<p>I am still not sure I clearly understand what the issue was. Here is some information:</p>
<ul> <li><a href="https://github.com/kubernetes/kubeadm/issues/193" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/issues/193</a></li> <li><a href="https://www.developertyrone.com/blog/kubernetes-administrator-notes-coredns-fix-on-centos-no-route-to-host-networking-issues/" rel="nofollow noreferrer">https://www.developertyrone.com/blog/kubernetes-administrator-notes-coredns-fix-on-centos-no-route-to-host-networking-issues/</a></li> </ul>
<p>I will keep investigating and post more information here.</p>
Antoine
<p>as documented <a href="https://plugins.jenkins.io/kubernetes/#plugin-content-using-the-pipeline-step" rel="nofollow noreferrer">here</a>, by default, commands in Jenkins agents will run in the jnlp container.</p> <p>And yes, when I run my jenkins pipeline using this code, it will run on my main container -</p> <pre><code>node('node-agent'){ container('main'){ sh &quot;ls -la&quot; } } </code></pre> <p>I want my jobs to run on 'main' container by default.</p> <p>Like if I write this pipeline -&gt;</p> <pre><code>node('node-agent'){ sh &quot;ls -la&quot; } </code></pre> <p>It will run on main instead of JNLP!</p> <p>My jenkins as a code configuration -</p> <pre><code>Jenkins:cluster: non-prod Jenkins:secrets: create: true secretsList: - name: jenkins-github-token-non-prod value: /us-west-2-non-prod/jenkins/secrets/github-token - name: jenkins-slack-token-non-prod value: /us-west-2-non-prod/jenkins/secrets/slack-token Jenkins:config: chart: jenkins namespace: default repo: https://charts.jenkins.io values: agent: enabled: true podTemplates: jenkins-slave-pod: | - name: jenkins-slave-pod label: jenkins-slave-pod containers: - name: main image: '805787217936.dkr.ecr.us-west-2.amazonaws.com/aba-jenkins-slave:ecs-global-node_master_57' command: &quot;sleep&quot; args: &quot;30d&quot; privileged: true master.JCasC.enabled: true master.JCasC.defaultConfig: true kubernetesConnectTimeout: 5 kubernetesReadTimeout: 15 maxRequestsPerHostStr: &quot;32&quot; namespace: default image: &quot;805787217936.dkr.ecr.us-west-2.amazonaws.com/aba-jenkins-slave&quot; tag: &quot;ecs-global-node_master_57&quot; workingDir: &quot;/home/jenkins/agent&quot; nodeUsageMode: &quot;NORMAL&quot; # name of the secret to be used for image pulling imagePullSecretName: componentName: &quot;eks-global-slave&quot; websocket: false privileged: false runAsUser: runAsGroup: resources: requests: cpu: &quot;512m&quot; memory: &quot;512Mi&quot; limits: cpu: &quot;512m&quot; memory: &quot;512Mi&quot; podRetention: &quot;Never&quot; volumes: [ ] workspaceVolume: { } envVars: [ ] # - name: PATH # value: /usr/local/bin command: args: &quot;${computer.jnlpmac} ${computer.name}&quot; # Side container name sideContainerName: &quot;jnlp&quot; # Doesn't allocate pseudo TTY by default TTYEnabled: true # Max number of spawned agent containerCap: 10 # Pod name podName: &quot;jnlp&quot; # Allows the Pod to remain active for reuse until the configured number of # minutes has passed since the last step was executed on it. 
idleMinutes: 0 # Timeout in seconds for an agent to be online connectTimeout: 100 serviceAccount: annotations: {} controller: numExecutors: 1 additionalExistingSecrets: [] JCasC: securityRealm: | local: allowsSignup: false users: - id: &quot;aba&quot; password: &quot;aba&quot; # securityRealm: | # saml: # binding: &quot;urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect&quot; # displayNameAttributeName: &quot;http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name&quot; # groupsAttributeName: &quot;http://schemas.xmlsoap.org/claims/Group&quot; # idpMetadataConfiguration: # period: 0 # url: &quot;https://aba.onelogin.com/saml/metadata/34349e62-799f-4378-9d2a-03b870cbd965&quot; # maximumAuthenticationLifetime: 86400 # usernameCaseConversion: &quot;none&quot; # authorizationStrategy: |- # roleBased: # forceExistingJobs: true configScripts: credentials: | credentials: system: domainCredentials: - credentials: - string: scope: GLOBAL id: slack-token description: &quot;Slack access token&quot; secret: &quot;${jenkins-slack-token-non-prod-value}&quot; - usernamePassword: id: &quot;github-credentials&quot; password: &quot;aba&quot; scope: GLOBAL username: &quot;aba&quot; plugin-config: | jenkins: disabledAdministrativeMonitors: - &quot;hudson.model.UpdateCenter$CoreUpdateMonitor&quot; - &quot;jenkins.diagnostics.ControllerExecutorsNoAgents&quot; security: updateSiteWarningsConfiguration: ignoredWarnings: - &quot;core-2_263&quot; - &quot;SECURITY-2617-extended-choice-parameter&quot; - &quot;SECURITY-2170&quot; - &quot;SECURITY-2796&quot; - &quot;SECURITY-2169&quot; - &quot;SECURITY-2332&quot; - &quot;SECURITY-2232&quot; - &quot;SECURITY-1351&quot; - &quot;SECURITY-1350&quot; - &quot;SECURITY-2888&quot; unclassified: slackNotifier: teamDomain: &quot;superops&quot; baseUrl: &quot;https://superops.slack.com/services/hooks/jenkins-ci/&quot; tokenCredentialId: &quot;slack-token&quot; globalLibraries: libraries: - defaultVersion: &quot;master&quot; allowVersionOverride: true name: &quot;aba-jenkins-library&quot; implicit: true retriever: modernSCM: scm: git: credentialsId: &quot;github-credentials&quot; id: &quot;shared-library-creds&quot; remote: &quot;https://github.com/aba-aba/aba-jenkins-library.git&quot; traits: - &quot;gitBranchDiscovery&quot; - &quot;cleanBeforeCheckoutTrait&quot; - &quot;ignoreOnPushNotificationTrait&quot; additionalPlugins: - junit:1119.1121.vc43d0fc45561 - prometheus:2.0.11 - saml:4.352.vb_722786ea_79d - role-strategy:546.ve16648865996 - blueocean-web:1.25.5 - github-branch-source:1677.v731f745ea_0cf - git-changelog:3.23 - scriptler:3.5 - sshd:3.249.v2dc2ea_416e33 - rich-text-publisher-plugin:1.4 - matrix-project:785.v06b_7f47b_c631 - build-failure-analyzer:2.3.0 - testng-plugin:555.va0d5f66521e3 - allure-jenkins-plugin:2.30.2 - timestamper:1.18 - ws-cleanup:0.42 - build-timeout:1.21 - slack:616.v03b_1e98d13dd - email-ext:2.91 - docker-commons:1.19 - docker-workflow:521.v1a_a_dd2073b_2e - rundeck:3.6.11 - parameter-separator:1.3 - extended-choice-parameter:346.vd87693c5a_86c - uno-choice:2.6.3 adminPassword: &quot;&quot; ingress: enabled: true hostName: jenkins.non-prod.us-west-2.int.isappcloud.com ingressClassName: nginx-int installPlugins: - kubernetes:3883.v4d70a_a_a_df034 - workflow-aggregator:590.v6a_d052e5a_a_b_5 - git:5.0.0 - configuration-as-code:1569.vb_72405b_80249 jenkinsUrlProtocol: https prometheus: enabled: true resources: limits: cpu: &quot;4&quot; memory: 8Gi requests: cpu: &quot;2&quot; memory: 4Gi sidecars: configAutoReload: resources: requests: cpu: 128m 
memory: 256Mi statefulSetAnnotations: pulumi.com/patchForce: &quot;true&quot; Name: eks-non-prod-us-west-2-jenkins department: aba division: enterprise environment: non-prod owner: devops project: eks-non-prod-us-west-2-jenkins team: infra tag: 2.362-jdk11 version: 4.1.13 Jenkins:stackTags: Name: eks-non-prod-us-west-2-jenkins department: aba division: enterprise environment: non-prod owner: devops project: eks-non-prod-us-west-2-jenkins team: infra aws:region: us-west-2 </code></pre>
EilonA
<p>I would say that a more convenient way would be to use a <a href="https://plugins.jenkins.io/kubernetes/#plugin-content-declarative-pipeline" rel="nofollow noreferrer">declarative pipeline</a> with the <code>defaultContainer</code> directive. Then you can provide your executor definition as a standard <code>pod</code> definition file (put this in the app repo or in shared libraries) and reference it by <code>name</code>. This is an example from the docs:</p>
<pre><code>pipeline { agent { kubernetes { defaultContainer 'maven' yamlFile 'KubernetesPod.yaml' } } stages { stage('Run maven') { steps { sh 'mvn -version' } } } } </code></pre>
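<p>For completeness, the <code>KubernetesPod.yaml</code> referenced above is just a regular pod template. A sketch adapted to the setup in the question might look like this (the image and the sleep command are taken from the question's pod template; the rest is illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
spec:
  containers:
  - name: main
    image: 805787217936.dkr.ecr.us-west-2.amazonaws.com/aba-jenkins-slave:ecs-global-node_master_57
    command:
    - sleep
    args:
    - 30d
</code></pre>
<p>With <code>defaultContainer 'main'</code> in the pipeline, any <code>sh</code> step that is not wrapped in an explicit <code>container(...)</code> block runs in <code>main</code> instead of <code>jnlp</code>.</p>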
Michał Lewndowski
<p>I've deployed my first app on my Kubernetes prod cluster a month ago.</p> <p>I could deploy my 2 services (front / back) from gitlab registry.</p> <p>Now, I pushed a new docker image to gitlab registry and would like to redeploy it in prod:</p> <p>Here is my deployment file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: reloader.stakater.com/auto: "true" labels: app: espace-client-client name: espace-client-client namespace: espace-client spec: replicas: 1 strategy: {} template: metadata: labels: app: espace-client-client spec: containers: - envFrom: - secretRef: name: espace-client-client-env image: registry.gitlab.com/xxx/espace_client/client:latest name: espace-client-client ports: - containerPort: 3000 resources: {} restartPolicy: Always imagePullSecrets: - name: gitlab-registry </code></pre> <p>I have no clue what is inside <code>gitlab-registry</code>. I didn't do it myself, and the people who did it left the crew :( Nevertheless, I have all the permissions, so, I only need to know what to put in the secret, and maybe delete it and recreate it.</p> <p>It seems that secret is based on my .docker/config.json</p> <pre><code>➜ espace-client git:(k8s) ✗ kubectl describe secrets gitlab-registry Name: gitlab-registry Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Type: kubernetes.io/dockerconfigjson Data ==== .dockerconfigjson: 174 bytes </code></pre> <p>I tried to delete existing secret, logout with </p> <pre><code>docker logout registry.gitlab.com kubectl delete secret gitlab-registry </code></pre> <p>Then login again:</p> <pre><code>docker login registry.gitlab.com -u myGitlabUser Password: Login Succeeded </code></pre> <p>and pull image with:</p> <pre><code>docker pull registry.gitlab.com/xxx/espace_client/client:latest </code></pre> <p>which worked.</p> <p>file: <code>~/.docker/config.json</code> is looking weird:</p> <pre><code>{ "auths": { "registry.gitlab.com": {} }, "HttpHeaders": { "User-Agent": "Docker-Client/18.09.6 (linux)" }, "credsStore": "secretservice" } </code></pre> <p>It doesn't seem to contain any credential... </p> <p>Then I recreate my secret</p> <pre><code>kubectl create secret generic gitlab-registry \ --from-file=.dockerconfigjson=/home/julien/.docker/config.json \ --type=kubernetes.io/dockerconfigjson </code></pre> <p>I also tried to do : </p> <pre><code>kubectl create secret docker-registry gitlab-registry --docker-server=registry.gitlab.com --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; </code></pre> <p>and deploy again:</p> <pre><code>kubectl rollout restart deployment/espace-client-client -n espace-client </code></pre> <p>but I still have the same error:</p> <pre><code>Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client-6c8b88f795-wcrlh" is waiting to start: trying and failing to pull image </code></pre>
Juliatzin
<p>Another reason that could lead to failing image pulls is a deviation in the system time of your machine. This deviation can lead to an error in the tls certificate verification that is performed in image pulls. You can check the current system time: <code>date</code>.</p> <p>A quick fix is to manually set the correct time: <code>sudo date -s '2023-10-03 12:34:56'</code>. Alternatively, you can set up <a href="https://ubuntu.com/server/docs/network-ntp" rel="nofollow noreferrer">time synchronisation</a>.</p> <p>To check whether this is actually the problem, you can also try to pull images using your container runtime CLI e.g. <code>crictl pull hello-world</code>. If this throws an error regarding tls failing to verify the certificate, it is likely that a deviation in system time is the cause.</p>
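<p>On systemd-based nodes, checking and enabling time synchronisation might look like the following (assuming <code>systemd-timesyncd</code> or chrony is available on the host):</p>
<pre><code># show current time and whether NTP sync is active
timedatectl status

# enable NTP synchronisation
sudo timedatectl set-ntp true
</code></pre>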
Sören Metje
<p>I have already setup a service in a k3s cluster using:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: myservice namespace: mynamespace labels: app: myapp spec: type: LoadBalancer selector: app: myapp ports: - port: 9012 targetPort: 9011 protocol: TCP </code></pre> <blockquote> <p>kubectl get svc -n mynamespace</p> </blockquote> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE minio ClusterIP None &lt;none&gt; 9011/TCP 42m minio-service LoadBalancer 10.32.178.112 192.168.40.74,192.168.40.88,192.168.40.170 9012:32296/TCP 42m </code></pre> <blockquote> <p>kubectl describe svc myservice -n mynamespace</p> </blockquote> <pre><code>Name: myservice Namespace: mynamespace Labels: app=myapp Annotations: &lt;none&gt; Selector: app=myapp Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.32.178.112 IPs: 10.32.178.112 LoadBalancer Ingress: 192.168.40.74, 192.168.40.88, 192.168.40.170 Port: &lt;unset&gt; 9012/TCP TargetPort: 9011/TCP NodePort: &lt;unset&gt; 32296/TCP Endpoints: 10.42.10.43:9011,10.42.10.44:9011 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>I assume from the above that I sould be able to access the minIO console from: <a href="http://192.168.40.74:9012" rel="nofollow noreferrer">http://192.168.40.74:9012</a> but it is not possible.</p> <p>Error message:</p> <blockquote> <p>curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out</p> </blockquote> <p>Furthemore, If I execute</p> <blockquote> <p>kubectl get node -o wide -n mynamespace</p> </blockquote> <pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME antonis-dell Ready control-plane,master 6d v1.21.2+k3s1 192.168.40.74 &lt;none&gt; Ubuntu 18.04.1 LTS 4.15.0-147-generic containerd://1.4.4-k3s2 knodeb Ready worker 5d23h v1.21.2+k3s1 192.168.40.88 &lt;none&gt; Raspbian GNU/Linux 10 (buster) 5.4.51-v7l+ containerd://1.4.4-k3s2 knodea Ready worker 5d23h v1.21.2+k3s1 192.168.40.170 &lt;none&gt; Raspbian GNU/Linux 10 (buster) 5.10.17-v7l+ containerd://1.4.4-k3s2 </code></pre> <p>As it is shown above the INTERNAL-IPs of nodes are the same as the EXTERNAL-IPs of Load Balancer. Am I doing something wrong here?</p>
e7lT2P
<h2 id="k3s-cluster-initial-configuration">K3S cluster initial configuration</h2> <p>To reproduce the environment I created a two node <code>k3s</code> cluster following next steps:</p> <ol> <li><p>Install k3s control-plane on required host:</p> <pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh - </code></pre> </li> <li><p>Verify it works:</p> <pre><code>k8s kubectl get nodes -o wide </code></pre> </li> <li><p>To add a worker node, this command should be run on a worker node:</p> <pre><code>curl -sfL https://get.k3s.io | K3S_URL=https://control-plane:6443 K3S_TOKEN=mynodetoken sh - </code></pre> </li> </ol> <p>Where <code>K3S_URL</code> is a control-plane URL (with IP or DNS)</p> <p><code>K3S_TOKEN</code> can be got by:</p> <pre><code>sudo cat /var/lib/rancher/k3s/server/node-token </code></pre> <p>You should have a running cluster:</p> <pre><code>$ k3s kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME k3s-cluster Ready control-plane,master 27m v1.21.2+k3s1 10.186.0.17 &lt;none&gt; Ubuntu 18.04.5 LTS 5.4.0-1046-gcp containerd://1.4.4-k3s2 k3s-worker-1 Ready &lt;none&gt; 18m v1.21.2+k3s1 10.186.0.18 &lt;none&gt; Ubuntu 18.04.5 LTS 5.4.0-1046-gcp containerd://1.4.4-k3s2 </code></pre> <h2 id="reproduction-and-testing">Reproduction and testing</h2> <p>I created a simple deployment based on <code>nginx</code> image by:</p> <pre><code>$ k3s kubectl create deploy nginx --image=nginx </code></pre> <p>And exposed it:</p> <pre><code>$ k3s kubectl expose deploy nginx --type=LoadBalancer --port=8080 --target-port=80 </code></pre> <p>This means that <code>nginx</code> container in pod is listening to port <code>80</code> and <code>service</code> is accessible on port <code>8080</code> within the cluster:</p> <pre><code>$ k3s kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kubernetes ClusterIP 10.43.0.1 &lt;none&gt; 443/TCP 29m &lt;none&gt; nginx LoadBalancer 10.43.169.6 10.186.0.17,10.186.0.18 8080:31762/TCP 25m app=nginx </code></pre> <p>Service is accessible on both IPs or <code>localhost</code> AND port <code>8080</code> or <code>NodePort</code> as well.</p> <p><strong>+</strong> taking into account the error you get <code>curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out</code> means that service is configured, but it doesn't respond properly (it's not 404 from ingress or <code>connection refused</code>).</p> <h2 id="answer-on-second-question-loadbalancer">Answer on second question - Loadbalancer</h2> <p>From <a href="https://rancher.com/docs/k3s/latest/en/networking/#how-the-service-lb-works" rel="noreferrer">rancher k3s official documentation about LoadBalancer</a>, <a href="https://github.com/k3s-io/klipper-lb" rel="noreferrer">Klipper Load Balancer</a> is used. From their github repo:</p> <blockquote> <p>This is the runtime image for the integrated service load balancer in klipper. This works by using a host port for each service load balancer and setting up iptables to forward the request to the cluster IP.</p> </blockquote> <p>From <a href="https://rancher.com/docs/k3s/latest/en/networking/#how-the-service-lb-works" rel="noreferrer">how the service loadbalancer works</a>:</p> <blockquote> <p>K3s creates a controller that creates a Pod for the service load balancer, which is a Kubernetes object of kind Service.</p> <p>For each service load balancer, a DaemonSet is created. 
The DaemonSet creates a pod with the svc prefix on each node.</p> <p>The Service LB controller listens for other Kubernetes Services. After it finds a Service, it creates a proxy Pod for the service using a DaemonSet on all of the nodes. This Pod becomes a proxy to the other Service, so that for example, requests coming to port 8000 on a node could be routed to your workload on port 8888.</p> <p>If the Service LB runs on a node that has an external IP, it uses the external IP.</p> </blockquote> <p>In other words yes, this is expected that loadbalancer has the same IP addresses as hosts' <code>internal-IP</code>s. Every service with loadbalancer type in k3s cluster will have its own <code>daemonSet</code> on each node to serve direct traffic to the initial service.</p> <p>E.g. I created the second deployment <code>nginx-two</code> and exposed it on port <code>8090</code>, you can see that there are two pods from two different deployments AND four pods which act as a loadbalancer (please pay attention to names - <code>svclb</code> at the beginning):</p> <pre><code>$ k3s kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-6799fc88d8-7m4v4 1/1 Running 0 47m 10.42.0.9 k3s-cluster &lt;none&gt; &lt;none&gt; svclb-nginx-jc4rz 1/1 Running 0 45m 10.42.0.10 k3s-cluster &lt;none&gt; &lt;none&gt; svclb-nginx-qqmvk 1/1 Running 0 39m 10.42.1.3 k3s-worker-1 &lt;none&gt; &lt;none&gt; nginx-two-6fb6885597-8bv2w 1/1 Running 0 38s 10.42.1.4 k3s-worker-1 &lt;none&gt; &lt;none&gt; svclb-nginx-two-rm594 1/1 Running 0 2s 10.42.0.11 k3s-cluster &lt;none&gt; &lt;none&gt; svclb-nginx-two-hbdc7 1/1 Running 0 2s 10.42.1.5 k3s-worker-1 &lt;none&gt; &lt;none&gt; </code></pre> <p>Both services have the same <code>EXTERNAL-IP</code>s:</p> <pre><code>$ k3s kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx LoadBalancer 10.43.169.6 10.186.0.17,10.186.0.18 8080:31762/TCP 50m nginx-two LoadBalancer 10.43.118.82 10.186.0.17,10.186.0.18 8090:31780/TCP 4m44s </code></pre>
moonkotte
<p>I am using Github plugin in Jenkins. My jenkins runs on K8S pod using Kubernets plugin.</p> <p>Trying to trigger a push event by pushing to my branch fails on Github side. <code>We couldn’t deliver this payload: failed to connect to host</code></p> <p>I'm able to reach <a href="http://www.github.com" rel="nofollow noreferrer">www.github.com</a> within the pod, and even using organization folder (scanning the github organization) gives me all the branches and webhook registers on github automatically using www..com/github-webhook</p> <p>I'm using access token of an account with Admin permissions in the repo in Github.</p> <p>Triggering manual hook from Jenkins gives me in the Jenkins logs - <code>Created hook https://api.github.com/repos/xxx-xxx/xxx-xxx/hooks/404803105 (events: [PULL_REQUEST, PUSH, REPOSITORY])</code></p> <p>But when I'm doing build, nothing shows in <a href="http://www.jenkins.com/logs.." rel="nofollow noreferrer">www.jenkins.com/logs..</a>. <a href="https://i.stack.imgur.com/yBBis.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yBBis.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/scKMP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/scKMP.png" alt="enter image description here" /></a></p>
EilonA
<p>From your description it looks like you have a network-related problem. It seems that your <code>Jenkins</code> instance is not reachable from the internet, and that is why <code>GitHub</code>, which is a public service, is not able to send the payload to your host. Please make sure that at least this endpoint, <code>$JENKINS_BASE_URL/github-webhook/</code>, is exposed to the internet so it can receive the trigger message.</p>
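<p>If Jenkins runs inside the cluster, one common way to expose that endpoint is an Ingress in front of the Jenkins service. Below is only a minimal sketch - the ingress class, hostname, service name and port are assumptions and must be adjusted to your setup (and the hostname must resolve publicly so GitHub can reach it):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  namespace: jenkins
  annotations:
    kubernetes.io/ingress.class: nginx   # assumes an NGINX ingress controller
spec:
  rules:
  - host: jenkins.example.com            # placeholder, must be reachable from GitHub
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins                # assumed Service name created for Jenkins
            port:
              number: 8080
</code></pre> <p>With something like this in place, the webhook URL configured on the GitHub side would be <code>https://jenkins.example.com/github-webhook/</code>.</p>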
Michał Lewndowski
<p>I would like to scale up my mysql and mongo database with hpa !</p> <p>I wonder if i should use <strong>Statefulsets</strong>, <strong>Operators</strong> or both.</p> <p>Also i can't understand the difference between StatefulSets and Operators.</p> <p>Could someone help me?</p> <p>Thank you very much!!</p>
alex
<p>Statefulsets and Operators are not that similar.</p> <p>A Statefulset is a Kubernetes resource that handles pods that need to hold state. Normally a pod would get a new name if it is killed and respawned by Kubernetes, but if it is managed by a Statefulset it respawns with the same name. You would often use a Statefulset if you want your application to have some persistence.</p> <p>Operators, on the other hand, are a pattern used in Kubernetes to extend the normal functionality by adding custom resource definitions (CRDs) that are handled by a given operator.</p> <p>I think you would use Statefulsets if you want to implement your own solution, and use an Operator if you want to use an existing one.</p> <p>There are multiple MongoDB Kubernetes Operators out there, but you could look into the <a href="https://www.mongodb.com/try/download/community-kubernetes-operator" rel="nofollow noreferrer">MongoDB Community Kubernetes Operator</a>.</p>
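<p>If you go the plain Statefulset route, a minimal sketch for MongoDB could look like the one below. Treat it only as a starting point: the image tag, storage size and the existence of a headless Service named <code>mongodb</code> are all assumptions, and a real replica set needs additional configuration (keyfile, replica set initialisation, etc.):</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb        # requires a matching headless Service
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:5.0      # placeholder tag
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:       # gives each replica its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi        # placeholder size
</code></pre> <p>Also note that scaling a database with an HPA is rarely a plain &quot;add replicas&quot; operation - new members usually have to join the replica set or cluster, which is exactly the kind of logic an Operator automates for you.</p>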
CrowDev
<p>I have around 20 container app and for each one I have <code>deployment.yaml</code> and now let's say I want to put different <code>replicas</code> for each one. What below image suggest that I need to match <code>metadata:name</code>.</p> <p>Is this means I need to create 20 <code>overlay.yaml</code> each for one container app? Can I managed all app with SINGLE overlay file?</p> <p><a href="https://i.stack.imgur.com/AAB16.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AAB16.png" alt="enter image description here" /></a></p>
user584018
<p>This can be resolved by using <code>patches</code>. You will be able to manage all deployments with one overlay file per environment where you can set explicitly number of replicas for each deployment.</p> <hr /> <p><strong>Here's a simplified example of it:</strong></p> <p>I have two environments: <code>dev</code> and <code>stage</code>. Both have <code>kustomization.yaml</code> with <code>patches</code> for specific deploys and numbers of replicas (different for both envs).</p> <p><code>tree command:</code></p> <pre><code>. ├── base │   ├── app-1.yaml │   ├── app-2.yaml │   └── kustomization.yaml └── overlays ├── dev │   └── kustomization.yaml └── stage └── kustomization.yaml </code></pre> <p>Deployment - <code>app-1.yaml:</code> (2nd is almost identical, different name and image)</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: example-1 spec: template: spec: containers: - name: example-1 image: example:1.0 </code></pre> <p>Below a snippet from <code>overlays/dev/kustomization.yaml</code>:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ../../base patches: - patch: |- - op: replace path: /spec/replicas value: 2 target: group: apps version: v1 kind: Deployment name: example-1 - patch: |- - op: replace path: /spec/replicas value: 3 target: group: apps version: v1 kind: Deployment name: example-2 </code></pre> <p>Actual result:</p> <pre><code>$ kubectl kustomize overlays/dev apiVersion: apps/v1 kind: Deployment metadata: name: example-1 spec: replicas: 2 template: spec: containers: - image: example:1.0 name: example-1 --- apiVersion: apps/v1 kind: Deployment metadata: name: example-2 spec: replicas: 3 template: spec: containers: - image: example:2.0 name: example-2 </code></pre> <hr /> <p><strong>Useful links:</strong></p> <ul> <li><a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/" rel="nofollow noreferrer">Kustomize patches</a></li> <li><a href="https://kubectl.docs.kubernetes.io/guides/config_management/components/" rel="nofollow noreferrer">Kustomize components</a> (this one is not directly connected, but idea is common)</li> </ul>
moonkotte
<p>I'm in the process of learning helm and trying to deploy mariadb on my k8s cluster running on DigitalOcean with 1 master and 2 worker nodes, using the following command:</p> <pre><code>helm install my-mariadb bitnami/mariadb --version 12.2.2 </code></pre> <p>This in turn created multiple resources on the cluster including the PersistentVolumeClaim, which failed to spin up and got stuck in the Pending state with the following error: <code>no persistent volumes available for this claim and no storage class is set</code>. This error in turn keeps all the pods in the Pending state as well.</p> <p>Why does this resource even get created? After looking into the templates of this helm chart <a href="https://artifacthub.io/packages/helm/bitnami/mariadb" rel="nofollow noreferrer">here</a> I can't even find its definition there.</p> <p>If it's actually fine that I can't see it, then why is there no StorageClass or PersistentVolume among the deployed resources? It doesn't seem right to create them manually.</p>
Konstantin
<p>So, it turns out that this problem arises only on the k8s cluster that's set up with kubeadm. I've fixed this issue by creating my own PersistentVolume and PersistentVolumeClaim as described in the link provided in the comments, but then faced another problem: the mariadb pod's container crashes as it can't create the <code>/bitnami/mariadb/data</code> folder due to a permission error.</p> <p>I was able to make this work on a local single-node k8s cluster without any additional steps and decided to abandon trying to resolve this issue on the initial cluster, as I'm doing this only for studying.</p>
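<p>For anyone hitting the same <code>no persistent volumes available</code> error on a kubeadm cluster without a default StorageClass, a minimal hostPath PersistentVolume that lets the chart's pending claim bind could look roughly like the sketch below. The path and size are placeholders - the capacity and access mode have to satisfy whatever the chart's claim requests:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-pv
spec:
  capacity:
    storage: 8Gi              # must be &gt;= the size requested by the chart's PVC
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/mariadb   # placeholder path on the node
</code></pre> <p>The follow-up permission error is typically caused by the hostPath directory being owned by root while Bitnami images run as a non-root user (UID 1001), so pre-creating the directory on the node and chowning it to that UID is a common workaround.</p>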
Konstantin
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: django-k8-web-deployment labels: app: django-k8-web-deployment spec: replicas: 3 selector: matchLabels: app: django-k8-web-deployment template: metadata: labels: app: django-k8-web-deployment spec: containers: - name: django-k8s-web image: registry.digitalocean.com/chrisocean/django-k8s-web:latest envFrom: - secretRef: name: django-k8s-web-prod-env env: - name: PORT value: &quot;8001&quot; ports: - containerPort: 8001 imagePullSecrets: - name: oceandev </code></pre> <p>the above yaml file above is what I want to apply in kubernetes. I ran the folowing command on my terminal</p> <pre class="lang-bash prettyprint-override"><code>kubectl apply -f k8s/apps/django-k8s-web.yaml </code></pre> <p>then I go the following error on the terminal</p> <pre class="lang-bash prettyprint-override"><code>kubectl apply -f k8s/apps/django-k8s-web.yaml service/django-k8-web-service unchanged Error from server (BadRequest): error when creating &quot;k8s/apps/django-k8s-web.yaml&quot;: Deployment in version &quot;v1&quot; cannot be handled as a Deployment: strict decoding error: unknown field &quot;spec.template.spec.containers[0].envFrom[0].name&quot; </code></pre> <p>who knows how to resolve the issue?</p> <p>I wanted it to apply the changes in the yaml file but it is not working.When I ran the following command</p> <pre class="lang-bash prettyprint-override"><code>kubectl get pods </code></pre> <p>the STATUS of the pod is pending</p>
saint chris
<p>This is an issue with indentation.</p> <p>Here is the proper <code>deployment</code> definition:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: django-k8-web-deployment labels: app: django-k8-web-deployment spec: replicas: 3 selector: matchLabels: app: django-k8-web-deployment template: metadata: labels: app: django-k8-web-deployment spec: containers: - name: django-k8s-web image: registry.digitalocean.com/chrisocean/django-k8s-web:latest envFrom: - secretRef: name: django-k8s-web-prod-env env: - name: PORT value: &quot;8001&quot; ports: - containerPort: 8001 imagePullSecrets: - name: oceandev </code></pre>
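<p>For clarity, the strict decoding error about <code>spec.template.spec.containers[0].envFrom[0].name</code> usually means that <code>name</code> ended up as a direct child of the <code>envFrom</code> list item instead of being nested under <code>secretRef</code>. The relevant fragment should be indented like this:</p> <pre><code>envFrom:
- secretRef:
    name: django-k8s-web-prod-env   # correct: name is nested under secretRef
# a common mistake is putting name at the same level as secretRef,
# which the API server rejects as the unknown field envFrom[0].name
</code></pre>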
Michał Lewndowski
<p>I am install kubernetes dashboard using this command:</p> <pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl create -f kubernetes-dashboard.yaml secret/kubernetes-dashboard-certs created serviceaccount/kubernetes-dashboard created role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created deployment.apps/kubernetes-dashboard created service/kubernetes-dashboard created </code></pre> <p>this is my kubernetes yaml config:</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard spec: containers: - name: kubernetes-dashboard image: registry.cn-beijing.aliyuncs.com/minminmsn/kubernetes-dashboard:v1.10.1 ports: - containerPort: 8443 protocol: TCP args: - --auto-generate-certificates # Uncomment the following line to manually specify Kubernetes API server Host # If not specified, Dashboard will attempt to auto discover the API server and connect # to it. Uncomment only if the default does not work. # - --apiserver-host=http://my-address:port volumeMounts: - name: kubernetes-dashboard-certs mountPath: /certs # Create on-disk volume to store exec logs - mountPath: /tmp name: tmp-volume livenessProbe: httpGet: scheme: HTTPS path: / port: 8443 initialDelaySeconds: 30 timeoutSeconds: 30 volumes: - name: kubernetes-dashboard-certs secret: secretName: kubernetes-dashboard-certs - name: tmp-volume emptyDir: {} serviceAccountName: kubernetes-dashboard # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule --- # ------------------- Dashboard Service ------------------- # kind: Service apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system spec: type: NodePort ports: - port: 443 targetPort: 8443 selector: k8s-app: kubernetes-dashboard </code></pre> <p>get the result:</p> <pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get svc -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.43.0.10 &lt;none&gt; 53/UDP,53/TCP 102d kubernetes-dashboard NodePort 10.254.180.117 &lt;none&gt; 443:31720/TCP 58s metrics-server ClusterIP 10.43.96.112 &lt;none&gt; 443/TCP 102d [root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get pods -n kube-system No resources found. </code></pre> <p>but when I check the port 31720:</p> <pre><code>lsof -i:31720 </code></pre> <p>the output is empty.Is the service deploy success? How to check the deploy log? Why the port not binding success?</p>
Dolphin
<p>It is deployed under its own namespace - &quot;kubernetes-dashboard&quot;. So, just use <code>kubectl get all -n kubernetes-dashboard</code> to see everything.</p>
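<p>If you are not sure which namespace the dashboard objects actually landed in (the manifest in the question uses <code>kube-system</code>, while newer upstream manifests use <code>kubernetes-dashboard</code>), a quick way to locate them is to search across all namespaces, for example:</p> <pre><code>kubectl get pods,svc --all-namespaces | grep dashboard
kubectl get all -n kubernetes-dashboard
</code></pre>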
Paul
<p>I've a namespace I'm unable to delete in my Kubernetes cluster. When I run <code>kubectl get ns traefik -o yaml</code>, I get the following:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Namespace metadata: annotations: cattle.io/status: '{&quot;Conditions&quot;:[{&quot;Type&quot;:&quot;ResourceQuotaInit&quot;,&quot;Status&quot;:&quot;True&quot;,&quot;Message&quot;:&quot;&quot;,&quot;LastUpdateTime&quot;:&quot;2021-06-11T20:28:59Z&quot;},{&quot;Type&quot;:&quot;InitialRolesPopulated&quot;,&quot;Status&quot;:&quot;True&quot;,&quot;Message&quot;:&quot;&quot;,&quot;LastUpdateTime&quot;:&quot;2021-06-11T20:29:00Z&quot;}]}' field.cattle.io/projectId: c-5g2hz:p-bl9sf lifecycle.cattle.io/create.namespace-auth: &quot;true&quot; creationTimestamp: &quot;2021-06-11T20:28:58Z&quot; deletionTimestamp: &quot;2021-07-04T07:21:20Z&quot; labels: field.cattle.io/projectId: p-bl9sf managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:field.cattle.io/projectId: {} f:labels: .: {} f:field.cattle.io/projectId: {} f:status: f:phase: {} manager: agent operation: Update time: &quot;2021-06-11T20:28:58Z&quot; - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:cattle.io/status: {} f:lifecycle.cattle.io/create.namespace-auth: {} manager: rancher operation: Update time: &quot;2021-06-11T20:28:58Z&quot; - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: .: {} k:{&quot;type&quot;:&quot;NamespaceContentRemaining&quot;}: .: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{&quot;type&quot;:&quot;NamespaceDeletionContentFailure&quot;}: .: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{&quot;type&quot;:&quot;NamespaceDeletionDiscoveryFailure&quot;}: .: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{&quot;type&quot;:&quot;NamespaceDeletionGroupVersionParsingFailure&quot;}: .: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{&quot;type&quot;:&quot;NamespaceFinalizersRemaining&quot;}: .: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} manager: kube-controller-manager operation: Update time: &quot;2021-07-04T07:21:26Z&quot; name: traefik resourceVersion: &quot;15400692&quot; uid: 4b198956-bbd5-4bdb-9dc6-9d53feda91e4 spec: finalizers: - kubernetes status: conditions: - lastTransitionTime: &quot;2021-07-04T07:21:25Z&quot; message: 'Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request' reason: DiscoveryFailed status: &quot;True&quot; type: NamespaceDeletionDiscoveryFailure - lastTransitionTime: &quot;2021-07-04T07:21:26Z&quot; message: All legacy kube types successfully parsed reason: ParsedGroupVersions status: &quot;False&quot; type: NamespaceDeletionGroupVersionParsingFailure - lastTransitionTime: &quot;2021-07-04T07:21:26Z&quot; message: All content successfully deleted, may be waiting on finalization reason: ContentDeleted status: &quot;False&quot; type: NamespaceDeletionContentFailure - lastTransitionTime: &quot;2021-07-04T07:21:26Z&quot; message: All content successfully removed reason: ContentRemoved status: &quot;False&quot; type: NamespaceContentRemaining - lastTransitionTime: &quot;2021-07-04T07:21:26Z&quot; message: All content-preserving finalizers finished reason: ContentHasNoFinalizers status: &quot;False&quot; 
type: NamespaceFinalizersRemaining phase: Terminating </code></pre> <p>And when I run <code>kubectl delete ns traefik --v=10</code>, the last output is the following:</p> <pre><code>I0708 18:38:26.538676 31537 round_trippers.go:425] curl -k -v -XGET -H &quot;Accept: application/json&quot; -H &quot;User-Agent: kubectl/v1.20.2 (linux/amd64) kubernetes/faecb19&quot; 'http://127.0.0.1:44427/6614317c-41da-462b-8be3-c6cda2f0df24/api/v1/namespaces?fieldSelector=metadata.name%3Dtraefik&amp;resourceVersion=17101173&amp;watch=true' I0708 18:38:27.013394 31537 round_trippers.go:445] GET http://127.0.0.1:44427/6614317c-41da-462b-8be3-c6cda2f0df24/api/v1/namespaces?fieldSelector=metadata.name%3Dtraefik&amp;resourceVersion=17101173&amp;watch=true 200 OK in 474 milliseconds I0708 18:38:27.013421 31537 round_trippers.go:451] Response Headers: I0708 18:38:27.013427 31537 round_trippers.go:454] Access-Control-Allow-Origin: * I0708 18:38:27.013450 31537 round_trippers.go:454] Date: Thu, 08 Jul 2021 16:38:27 GMT I0708 18:38:27.013453 31537 round_trippers.go:454] Connection: keep-alive I0708 18:38:27.013468 31537 request.go:708] Unexpected content type from the server: &quot;&quot;: mime: no media type </code></pre> <p>I already tried to remove the finalizers as described on <a href="https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=console-namespace-is-stuck-in-terminating-state" rel="nofollow noreferrer">https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=console-namespace-is-stuck-in-terminating-state</a> but after some seconds I finally get <code>EOF</code>:</p> <pre class="lang-sh prettyprint-override"><code>&gt; curl -k -H &quot;Content-Type: application/json&quot; -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/traefik/finalize EOF </code></pre> <p>Is anyone having any idea how I can delete the traefik namespace?</p>
Nrgyzer
<p>Posting this as a community wiki out of comments, feel free to edit and expand.</p> <p>After analysing the state of the problematic namespace, this part was the main cause of the issue:</p> <pre><code>message: 'Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request' </code></pre> <p>The issue was with the <code>metrics server</code> in Kubernetes. Once the <code>metrics server</code> is available again, the namespace can be deleted.</p> <p>A similar issue was resolved in <a href="https://stackoverflow.com/questions/62442679/could-not-get-apiversions-from-kubernetes-unable-to-retrieve-the-complete-list/62464015#62464015">another answer on Stack Overflow</a>.</p>
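<p>A way to confirm which aggregated API is blocking the deletion, and to clear it, could look like the commands below. The APIService name is taken from the error in the question (<code>metrics.k8s.io/v1beta1</code>); adjust it to whatever your cluster reports as unavailable:</p> <pre><code># list aggregated APIs and spot the unavailable one
kubectl get apiservice | grep False

# either fix the metrics-server deployment behind it, or remove the broken
# APIService so that namespace finalization can complete
kubectl delete apiservice v1beta1.metrics.k8s.io
</code></pre>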
moonkotte
<p>Can you help me? I want to deploy an Ingress for a NodePort service, but I can't understand if this is possible.</p> <p>I tried to find some information on Google, but I only found Ingress for load balancing or complicated examples like Ingress with Ruby on Rails, etc.</p>
noute
<p>Create deployment and service</p> <pre><code>kubectl create deploy test --image httpd kubectl expose deploy test --port 80 --target-port 80 </code></pre> <p>Check if the service is working</p> <pre><code>kubectl get svc </code></pre> <p>returns</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE test ClusterIP 11.105.18.110 &lt;none&gt; 80/TCP 51s </code></pre> <p>Then</p> <pre><code>curl 11.105.18.110:80 </code></pre> <p>returns</p> <pre><code>&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>Create bare-metal ingress controller</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml </code></pre> <p><a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal</a></p> <p>This returns</p> <pre><code>namespace/ingress-nginx unchanged serviceaccount/ingress-nginx unchanged configmap/ingress-nginx-controller configured clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged role.rbac.authorization.k8s.io/ingress-nginx unchanged rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged service/ingress-nginx-controller-admission created service/ingress-nginx-controller created deployment.apps/ingress-nginx-controller created ingressclass.networking.k8s.io/nginx unchanged validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured serviceaccount/ingress-nginx-admission unchanged clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged job.batch/ingress-nginx-admission-create created job.batch/ingress-nginx-admission-patch created </code></pre> <p>Create ingress rules for nginx controller</p> <pre><code>kubectl apply -f -&lt;&lt;EOF apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test annotations: kubernetes.io/ingress.class: nginx spec: rules: - http: paths: - backend: service: name: test port: number: 80 path: / pathType: Prefix EOF </code></pre> <p><strong>Annotation &quot;kubernetes.io/ingress.class: nginx&quot; is required to bind Ingress to the controller</strong></p> <p><a href="https://www.fairwinds.com/blog/intro-to-kubernetes-ingress-set-up-nginx-ingress-in-kubernetes-bare-metal" rel="nofollow noreferrer">https://www.fairwinds.com/blog/intro-to-kubernetes-ingress-set-up-nginx-ingress-in-kubernetes-bare-metal</a></p> <p>Get nodes ips</p> <pre><code>kubectl get no -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP master Ready control-plane,master 6d20h v1.21.0 134.156.0.81 node1 Ready &lt;none&gt; 5d v1.21.0 134.156.0.82 node2 Ready &lt;none&gt; 6d19h v1.21.0 134.156.0.83 node3 Ready &lt;none&gt; 6d19h v1.21.0 134.156.0.84 </code></pre> <p>Find ingress controller nodeport</p> <pre><code>kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller NodePort 15.234.89.16 &lt;none&gt; 80:31505/TCP,443:32191/TCP 21m </code></pre> <p>Its 31505. 
Access test service over ingress node port on one of your nodes, for example on node1 134.156.0.82</p> <pre><code>curl 134.156.0.82:31505 </code></pre> <p>returns</p> <pre><code>&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>This was tested on Google Cloud virtual machines but without GKE</p>
Grzegorz Wilanowski
<p>I am new to GitOps. I have watched some tutorials about ArgoCD, but the applications are mostly deployed by syncing from the Git repo manifest.</p> <p>I know that by running the cmd &quot;argocd sync&quot;, it will do a pull from a git repo and perform something like a kubectl apply -f $FILE.yml</p> <p>So I'm a bit lost. My question is: if my old way was to deploy the application via helm upgrade, should I now get rid of the helm upgrade cmd to install a new app? Can I still stick to helm and perform some kind of helm upgrade that only generates the manifests, and then push them to the git repo that stores the manifests so that ArgoCD can sync them?</p>
RTC EG
<p>If I understand your issue correctly, there is no need to create a <code>yaml</code> manifest from the <code>helm chart</code> and then use it with <code>ArgoCD</code>.</p> <p><code>ArgoCD</code> supports a few template engines (raw yaml, helm, jsonnet and kustomize) and you can pass your <code>helm values</code> directly.</p> <p>Here is an example manifest:</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: gitea namespace: argocd finalizers: - resources-finalizer.argocd.argoproj.io spec: destination: namespace: 'gitea' server: 'https://kubernetes.default.svc' project: apps syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true - ApplyOutOfSyncOnly=true - PruneLast=true source: path: gitea repoURL: https://github.com/test/helm-charts.git targetRevision: HEAD helm: values: |- # Here you can pass all values to overwrite default chart values ingress: enabled: true annotations: kubernetes.io/ingress.class: nginx hosts: - host: gitea.test.com paths: - path: / pathType: Prefix tls: - hosts: - gitea.test.com secretName: tls-wildcard-test-com resources: limits: cpu: 600m memory: 600Mi requests: cpu: 500m memory: 500Mi </code></pre> <p>Here is also <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/helm/" rel="nofollow noreferrer">documentation</a> describing more ways to integrate <code>ArgoCD</code> with <code>helm</code>.</p>
Michał Lewndowski
<p>I am trying to run <code>kubeadm init</code> but I get this error during preflight: <code>CGROUPS_PIDS: missing</code></p> <pre><code>kubeadm init phase preflight [preflight] Running pre-flight checks [preflight] The system verification failed. Printing the output from the verification: KERNEL_VERSION: 3.13.0-37-generic CONFIG_NAMESPACES: enabled CONFIG_NET_NS: enabled CONFIG_PID_NS: enabled CONFIG_IPC_NS: enabled CONFIG_UTS_NS: enabled CONFIG_CGROUPS: enabled CONFIG_CGROUP_CPUACCT: enabled CONFIG_CGROUP_DEVICE: enabled CONFIG_CGROUP_FREEZER: enabled CONFIG_CGROUP_SCHED: enabled CONFIG_CPUSETS: enabled CONFIG_MEMCG: enabled CONFIG_INET: enabled CONFIG_EXT4_FS: enabled CONFIG_PROC_FS: enabled CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module) CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module) CONFIG_OVERLAYFS_FS: enabled (as module) CONFIG_AUFS_FS: enabled (as module) CONFIG_BLK_DEV_DM: enabled DOCKER_VERSION: 19.03.8 DOCKER_GRAPH_DRIVER: aufs OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: missing CGROUPS_HUGETLB: enabled error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR SystemVerification]: missing required cgroups: pids [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher </code></pre> <p><code>kubeadm</code> version:</p> <pre><code>kubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.2&quot;, GitCommit:&quot;faecb196815e248d3ecfb03c680a4507229c2a56&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-01-13T13:25:59Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>Ubuntu version 20.04 running on Digital Ocean VM. I have previously had a kubernetes cluster on this same VM (also using kubeadm), but after a crashed cluster and Ubuntu upgrades this error now appears when trying to recreate the cluster from scratch (I ran <code>kubeadm reset</code> first).</p> <p>After hours of Googling I still have no idea how to fix this or even what it means. Any help is appreciated.</p> <hr /> <p><strong>Update</strong></p> <p>Output of kernel config CGROUPS:</p> <pre class="lang-none prettyprint-override"><code>$ cat /boot/config-`uname -r` | grep CGROUP CONFIG_CGROUPS=y # CONFIG_CGROUP_DEBUG is not set CONFIG_CGROUP_FREEZER=y CONFIG_CGROUP_DEVICE=y CONFIG_CGROUP_CPUACCT=y CONFIG_CGROUP_HUGETLB=y CONFIG_CGROUP_PERF=y CONFIG_CGROUP_SCHED=y CONFIG_BLK_CGROUP=y # CONFIG_DEBUG_BLK_CGROUP is not set CONFIG_NET_CLS_CGROUP=m CONFIG_NETPRIO_CGROUP=m </code></pre> <hr /> <p><strong>Update: Solved</strong></p> <p>It turned out that my cloud provider overruled the kernel to be an older one (3.x). After changing the setting, the Ubuntu GRUB-initiated kernel (5.x) is running, which has <strong>CGROUPS_PIDS</strong> enabled.</p>
Henrik
<p>This may report the fact that your kernel does not define the <em>pids</em> cgroup. The latter is provided by Linux kernel if the <strong>CONFIG_CGROUP_PIDS</strong> is defined.</p> <p>Look at your kernel configuration. Under Ubuntu, you can run something like:</p> <pre class="lang-none prettyprint-override"><code>$ cat /boot/config-`uname -r` | grep CGROUP CONFIG_CGROUPS=y CONFIG_BLK_CGROUP=y CONFIG_CGROUP_WRITEBACK=y CONFIG_CGROUP_SCHED=y CONFIG_CGROUP_PIDS=y CONFIG_CGROUP_RDMA=y CONFIG_CGROUP_FREEZER=y CONFIG_CGROUP_HUGETLB=y CONFIG_CGROUP_DEVICE=y CONFIG_CGROUP_CPUACCT=y CONFIG_CGROUP_PERF=y CONFIG_CGROUP_BPF=y # CONFIG_CGROUP_DEBUG is not set CONFIG_SOCK_CGROUP_DATA=y # CONFIG_BLK_CGROUP_IOLATENCY is not set CONFIG_BLK_CGROUP_IOCOST=y # CONFIG_BFQ_CGROUP_DEBUG is not set CONFIG_NETFILTER_XT_MATCH_CGROUP=m CONFIG_NET_CLS_CGROUP=m CONFIG_CGROUP_NET_PRIO=y CONFIG_CGROUP_NET_CLASSID=y </code></pre> <p>Then, check if your kernel is built with <strong>CONFIG_CGROUP_PIDS</strong>. In the above display, it is set.</p>
Rachid K.
<p>Is there a way to find the immutable fields in the workload's spec? I could see there are a few fields mentioned as immutable in some of the workload resource documentation, but for example for <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> it is not clear which fields are immutable. Is there a better way to find out?</p> <p>Sorry, I am not so familiar with reading the Kubernetes API spec yet, I couldn't figure it out. Is there a better way?</p> <p>Thanks in advance, Naga</p>
ennc0d3
<p>Welcome to the community.</p> <p>Unfortunately there's no such list with all <code>immutable fields</code> combined in one place.</p> <p>There are two options:</p> <ol> <li>Keep reading the documentation, as you started, and check whether immutability is specified explicitly.</li> <li>Start with the <code>kubernetes API</code> description. You can find it here: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#-strong-api-overview-strong-" rel="nofollow noreferrer">Kubernetes API</a>. This is also available in a more human-readable form <a href="https://kubernetes.io/docs/reference/kubernetes-api/" rel="nofollow noreferrer">here</a>. The same applies here - it's not always specified explicitly whether a field is immutable or not.</li> </ol> <p>For instance, all objects and fields for <code>statefulset</code> can be found <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#statefulset-v1-apps" rel="nofollow noreferrer">here</a>.</p> <p>(I will update it if I find a better way)</p>
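<p>One practical trick when the docs are not explicit is to attempt the change with a server-side dry run: the API server rejects updates to immutable fields, and its error message usually names the fields that are allowed to change (the exact wording varies by Kubernetes version). A sketch, assuming a StatefulSet named <code>web</code> already exists:</p> <pre><code>kubectl patch statefulset web --dry-run=server --type merge \
  -p '{&quot;spec&quot;:{&quot;serviceName&quot;:&quot;new-name&quot;}}'
# expect an error roughly like:
#   Forbidden: updates to statefulset spec for fields other than
#   'replicas', 'template', and 'updateStrategy' are forbidden
</code></pre> <p>Because <code>--dry-run=server</code> never persists anything, this is a safe way to probe immutability on a test object.</p>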
moonkotte
<p>I just installed bitnami/wordpress image using Argo CD with Helm option. Now my deployment is synced with helm. Can I now for example sync it with my git repository? I mean to push current Wordpress files to git and sync with it? Because then I can modify plugin files what I need. Bitnami/wordpress is non-root container so I can't create sftp account.</p> <p>How to do it?</p>
Don Don Don
<p>You could do it by adding a sidecar container to your WordPress container that performs the git synchronization.</p> <p>To do so, you have to add the following values to your WordPress application in ArgoCD:</p> <pre class="lang-yaml prettyprint-override"><code>sidecars: - name: git-sync image: bitnami/git:2.32.0-debian-10-r24 imagePullPolicy: IfNotPresent command: - /bin/bash - -ec - | [[ -f &quot;/opt/bitnami/scripts/git/entrypoint.sh&quot; ]] &amp;&amp; source &quot;/opt/bitnami/scripts/git/entrypoint.sh&quot; while true; do #Add here your commands to synchronize the git repository with /bitnami/wordpress/wp-content sleep 60 done volumeMounts: - mountPath: /bitnami/wordpress name: wordpress-data subPath: Wordpress </code></pre> <p>This will configure a secondary container in your Wordpress pod sharing the wordpress-data volume. Changes pulled into the shared volume by the sidecar will therefore be visible to the WordPress container.</p> <p>Note: You will also need to provide the values <code>mariadb.auth.password</code>, <code>mariadb.auth.rootPassword</code> and <code>wordpressPassword</code> when performing the Application Sync in ArgoCD.</p>
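<p>As an illustration only of what the synchronization commands inside that loop could look like (the repository URL below is hypothetical, and the strategy - clone once, then pull - is just one option):</p> <pre><code># hypothetical body for the while loop above
if [[ ! -d /bitnami/wordpress/wp-content/.git ]]; then
  git clone https://github.com/your-org/wp-content.git /bitnami/wordpress/wp-content
else
  git -C /bitnami/wordpress/wp-content pull --ff-only
fi
</code></pre>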
Miguel Ruiz
<p>I'm using pipelines in Jenkins that are running using JNLP container instead of my container.</p> <p>I'm using Jenkins as a code (Jenkin Helm chart)</p> <p>If I add this block to the pipeline -</p> <pre><code> container('my container') { } </code></pre> <p>It is using 'my container'.</p> <p>How do I set it as default to all my pipelines? Do I really need to add this container block all the time?</p>
EilonA
<p>As mentioned in the <a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">kubernetes plugin doc</a>, you can set the default container for a pipeline with the <code>defaultContainer</code> directive, so you no longer need a <code>container()</code> block around every step:</p> <pre><code>pipeline { agent { kubernetes { defaultContainer 'maven' yamlFile 'KubernetesPod.yaml' } } stages { stage('Run maven') { steps { sh 'mvn -version' } } } } </code></pre>
Michał Lewndowski
<p>I have a pod which takes a long time.</p> <p>The liveness and readiness probe looked like this:</p> <pre class="lang-yaml prettyprint-override"><code> livenessProbe: failureThreshold: 3 httpGet: path: /ping port: api scheme: HTTP initialDelaySeconds: 240 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 60 readinessProbe: failureThreshold: 3 httpGet: path: /ping port: api scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 </code></pre> <p>Now, the container needs more than 4 minutes to start up. I set <code>initialDelaySeconds</code> for the liveness probe to 7200 (two hours) but after applying this, it still stops after four minutes.</p> <p>What am I missing?</p>
rabejens
<p>If you are using the same endpoint for the checks and it takes too long, then the readiness probe will time out after 3 failures before it even calls the liveness probe.</p> <p>Readiness should be used if the container is temporarily unable to service traffic, it will be called periodically and then the container will start to receive traffic if it starts returning successfully.</p> <p>It is not clear why your pod is taking so long and whether that is just at startup or when running.</p>
Luke Briner
<p>I am using kubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.2&quot;, GitCommit:&quot;faecb196815e248d3ecfb03c680a4507229c2a56&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-01-13T13:25:59Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}</p> <p>I tried to allow PodPreset by adding <em>--runtime-config=settings.k8s.io/v1alpha1=true</em> to <em>kube-apiserver.yaml</em></p> <p>After added this line, the kube-apiserver produced errors</p> <p>This is the logs by <code>kubectl logs kube-apiserver-master-0 -n kube-system</code></p> <p>Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24. I0203 01:12:59.519598 1 server.go:632] external host was not specified, using 10.1.0.5 <strong>Error: unknown api groups settings.k8s.io</strong> <a href="https://i.stack.imgur.com/w9ZWz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w9ZWz.png" alt="enter image description here" /></a></p>
Tkaewkunha
<p>PodPreset was removed in v1.20. Please refer to <a href="https://kubernetes.io/docs/setup/release/notes/#deprecation" rel="noreferrer">https://kubernetes.io/docs/setup/release/notes/#deprecation</a></p>
Rushikesh
<p>I’m sure many have come across this problem because I can’t be the only one.</p> <p>In Visual Studio Code I work with Kubernetes and Azure DevOps YAML. Both have completely different formatting. To work with each I find I have to uninstall the other's extension.</p> <p>Has anyone worked out how to have both together, where VSCode can work out whether you're coding an Azure DevOps pipeline or a Kubernetes cluster?</p>
Jason
<p>I did do what @flyx recommended but late yesterday evening and did not post my answer but I am going to do that now.</p> <p>I worked out the following things.</p> <ol> <li>VSCode can't seem to resolve the schema that are in <code>http://www.schemastore.org/json/composer</code> if you try to reference them by JSON glob words</li> <li>You can get different schemas to work across different technologies. For Example Kubernetes, Azure DevOps and Ansible are what I have set up now.</li> <li>You have to be okay with having an end syntax on your YAML files for this to work. For example, my Ansible YAML files all now end with _a.yml, my Kubernetes YAML files all end with _kube.yaml, and my Azure DevOps YAML files all end with the default syntax .yml</li> </ol> <p>Having figured out those three things I was able to set up my env, to allow the inteli sense and JSON syntax checker to pull the different needed schemas. I have also found the Google Cloud Code extension for Kubernetes is actually better than the Microsoft one and it picks up a lot quicker that your building a Kubernetes YAML file.</p> <p>Also for this to work you need to save the file first, with the right extension for VSCode to know you are working on a certain technology and pull down the right Schema. Also, watch out for the .vscode file this sometimes gets a bit silly and does not put the right schema in, so just keep an eye on it. If it fails delete it close VSCode and reopen and you should be good to go.</p> <p>Here are my JSON settings for anyone that wants to do this:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>{ "redhat.telemetry.enabled": true, "yaml.schemas": { "https://raw.githubusercontent.com/microsoft/azure-pipelines-vscode/master/service-schema.json": "*.yml", // "http://www.schemastore.org/json/composer": ["/*"], // "https://raw.githubusercontent.com/apache/camel/main/dsl/camel-yaml-dsl/camel-yaml-dsl/src/generated/resources/camel-yaml-dsl.json" : "/_kube*.yaml", "kubernetes": ["/*.yaml"], "https://json.schemastore.org/ansible-playbook" : "/_a*.yaml", "https://json.schemastore.org/ansible-role-2.9" : ["^/roles/*/tasks/_a*.yaml", "^/tasks/_a*.yaml"] }, "vs-kubernetes": { "vscode-kubernetes.helm-path.windows": "C:\\Users\\Jason\\.vs-kubernetes\\tools\\helm\\windows-amd64\\helm.exe", "vscode-kubernetes.minikube-path.windows": "C:\\Users\\Jason\\.vs-kubernetes\\tools\\minikube\\windows-amd64\\minikube.exe" }, "files.associations": { "**/ci/*.yml": "azure-pipelines" }, "[azure-pipelines]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "yaml.schemaStore.url": "http://www.schemastore.org/api/json/catalog.json", "cloudcode.yaml.crdSchemaLocations": [ ], "cloudcode.yaml.schemas": { }, "cloudcode.yaml.yamlFileMatcher": "/_kube*.yaml", }</code></pre> </div> </div> </p> <p>Extensions used for this:</p> <ol> <li>redhat yaml</li> <li>googles cloud code</li> <li>Azure Pipelines</li> <li>Kubernetes Microsoft Default (Some reason this is needed to make the redhat yml extension behave)</li> <li>The Ansible Yaml Schema (This is not an extension these just come down from the json schema store but make sure you have the right url otherwise VSCode does not resolve it. See Point 1. )</li> </ol>
Jason
<p>So there's this page about <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging" rel="nofollow noreferrer">auditing-logs</a> and I'm very confused about:</p> <blockquote> <p>The k8s.io service is used for Kubernetes audit logs. These logs are generated by the Kubernetes API Server component and they contain information about actions performed using the Kubernetes API. For example, any changes you make on a Kubernetes resource by using the kubectl command are recorded by the k8s.io service. For more information, see Auditing in the Kubernetes documentation.</p> </blockquote> <blockquote> <p>The container.googleapis.com service is used for GKE control plane audit logs. These logs are generated by the GKE internal components and they contain information about actions performed using the GKE API. For example, any changes you perform on a GKE cluster configuration using a gcloud command are recorded by the container.googleapis.com service.</p> </blockquote> <p>which one shall I pick to get:</p> <ol> <li><code>/var/log/kube-apiserver.log</code> - API Server, responsible for serving the API</li> <li><code>/var/log/kube-controller-manager.log</code> - Controller that manages replication controllers</li> </ol> <p>or these are all similar to EKS where audit logs means a <a href="https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html" rel="nofollow noreferrer">separate thing</a>?</p> <blockquote> <p>Audit (audit) – Kubernetes audit logs provide a record of the individual users, administrators, or system components that have affected your cluster. For more information, see Auditing in the Kubernetes documentation.</p> </blockquote>
Ivan Petrov
<p>If the cluster still exists, you should be able to do the following on GKE</p> <pre><code>kubectl proxy curl http://localhost:8001/logs/kube-apiserver.log </code></pre> <p>AFAIK, there's no way to get server logs for clusters that have been deleted.</p>
Brian Gibbon
<pre class="lang-text prettyprint-override"><code>Error: failed to start container &quot;node-exporter&quot;: Error response from daemon: path /sys is mounted on /sys but it is not a shared or slave mount </code></pre> <p>shows that message here is the repository I took it from trying to make a node exporter to Grafana dashboard through Kubernetes pods followed this <a href="https://www.youtube.com/watch?v=1-tRiThpFrY&amp;t=791s" rel="nofollow noreferrer">video</a> and this <a href="https://github.com/marcel-dempers/docker-development-youtube-series/tree/master/monitoring/prometheus/kubernetes/1.18.4" rel="nofollow noreferrer">repo</a></p> <p>ERROR screenshot <img src="https://i.stack.imgur.com/ZtgSE.png" alt="enter image description here" /></p>
Loopero
<p>Well for me (Docker-Desktop in MacOS) this command saved my day:</p> <pre><code>kubectl patch ds monitoring-prometheus-node-exporter --type &quot;json&quot; -p '[{&quot;op&quot;: &quot;remove&quot;, &quot;path&quot; : &quot;/spec/template/spec/containers/0/volumeMounts/2/mountPropagation&quot;}]' </code></pre> <p>credit: <a href="https://github.com/prometheus-community/helm-charts/issues/467#issuecomment-793682080" rel="noreferrer">GitHub Issues</a></p>
Ali
<p>I'm trying to patch a deployment and remove its volumes using <code>patch_namespaced_deployment</code> (<a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">https://github.com/kubernetes-client/python</a>) with the following arguments, but it's not working.</p> <pre><code>patch_namespaced_deployment( name=deployment_name, namespace='default', body={&quot;spec&quot;: {&quot;template&quot;: { &quot;spec&quot;: {&quot;volumes&quot;: None, &quot;containers&quot;: [{'name': container_name, 'volumeMounts': None}] } } } }, pretty='true' ) </code></pre> <p>How to reproduce it:</p> <p>Create this file app.yaml</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/bound-by-controller: &quot;yes&quot; finalizers: - kubernetes.io/pv-protection labels: volume: pv0001 name: pv0001 resourceVersion: &quot;227035&quot; selfLink: /api/v1/persistentvolumes/pv0001 spec: accessModes: - ReadWriteOnce capacity: storage: 5Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: myclaim namespace: default resourceVersion: &quot;227033&quot; hostPath: path: /mnt/pv-data/pv0001 type: &quot;&quot; persistentVolumeReclaimPolicy: Recycle volumeMode: Filesystem status: phase: Bound --- apiVersion: apps/v1 kind: Deployment metadata: name: pv-deploy spec: replicas: 1 selector: matchLabels: app: mypv template: metadata: labels: app: mypv spec: containers: - name: shell image: centos:7 command: - &quot;bin/bash&quot; - &quot;-c&quot; - &quot;sleep 10000&quot; volumeMounts: - name: mypd mountPath: &quot;/tmp/persistent&quot; volumes: - name: mypd persistentVolumeClaim: claimName: myclaim </code></pre> <pre><code>- kubectl apply -f app.yaml - kubectl describe deployment.apps/pv-deploy (to check the volumeMounts and Volumes) - kubectl patch deployment.apps/pv-deploy --patch '{&quot;spec&quot;: {&quot;template&quot;: {&quot;spec&quot;: {&quot;volumes&quot;: null, &quot;containers&quot;: [{&quot;name&quot;: &quot;shell&quot;, &quot;volumeMounts&quot;: null}]}}}}' - kubectl describe deployment.apps/pv-deploy (to check the volumeMounts and Volumes) - Delete the application now: kubectl delete -f app.yaml - kubectl create -f app.yaml - Patch the deployment using the python library function as stated above. The *VolumeMounts* section is removed but the Volumes still exist. </code></pre> <p>** EDIT ** Running the <code>kubectl patch</code> command works as expected. But after executing the Python script and running a <code>describe deployment</code> command, the persistentVolumeClaim is replaced with an emptyDir like this</p> <pre><code> Volumes: mypd: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; </code></pre>
Mehdi Khlifi
<p>What you're trying to do is called a strategic merge patch (<a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/</a>). As you can see in the documentation, <strong>with a strategic merge patch, a list is either replaced or merged depending on its patch strategy</strong>, so this may be why you're seeing this behavior.</p> <p>I think you should go with <strong>replace</strong> (<a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_replace/" rel="nofollow noreferrer">https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_replace/</a>): instead of trying to manage a part of your deployment object, replace it with a new one.</p>
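<p>A minimal sketch of that replace approach with the Python client (the deployment and container names are the ones from the question; error handling is omitted):</p> <pre><code>from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# read the live object, drop the volume definitions, then replace it as a whole
dep = apps.read_namespaced_deployment(name='pv-deploy', namespace='default')
dep.spec.template.spec.volumes = None
for c in dep.spec.template.spec.containers:
    if c.name == 'shell':
        c.volume_mounts = None

apps.replace_namespaced_deployment(name='pv-deploy', namespace='default', body=dep)
</code></pre> <p>Alternatively, if I remember correctly, passing a list of JSON-patch operations as the body (instead of a dict) makes the Python client send an <code>application/json-patch+json</code> request, which removes list items instead of merging them - but the read/replace version above is the more explicit option.</p>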
El ktiba
<p>We're using Terraform to deploy AKS clusters to an environment behind a proxy over VPN. Deployment of the cluster works correctly when off-network without the proxy, but errors out on Helm deployment creation on-network.</p> <p>We are able to connect to the cluster after it's up while on the network using the following command after retrieving the cluster context.</p> <pre><code>kubectl config set-cluster &lt;cluster name&gt; --certificate-authority=&lt;path to organization's root certificate in PEM format&gt; </code></pre> <p>The Helm deployments are also created with Terraform after the creation of the cluster. It seems that these require the <code>certificate-authority</code> data to deploy and we haven't been able to find a way to automate this at the right step in the process. Consequently, the <code>apply</code> fails with the error:</p> <blockquote> <p>x509: certificate signed by unknown authority</p> </blockquote> <p>Any idea how we can get the <code>certificate-authority</code> data in the right place so the Helm deployments stop failing? Or is there a way to get the cluster to implicitly trust that root certificate? We've tried a few different things:</p> <ol> <li><p>Researched if you could automatically have that data in there when retrieving the cluster context (i.e. <code>az aks get-credentials --name &lt;cluster name&gt; --resource-group &lt;cluster RG&gt;</code>)?** Couldn't find an easy way to accomplish this.</p> </li> <li><p>We started to consider adding the root cert info as part of the kubeconfig that's generated during deployment (rather than the one you create when retrieving the context). The idea is that it can be passed in to the kubernetes/helm providers and also leveraged when running <code>kubectl</code> commands via local-exec blocks. We know that works but that means that we couldn't find a way to automate that via Terraform.</p> </li> <li><p>We've tried providing the root certificate to the different fields of the provider config, shown below. We've specifically tried a few different things with <code>cluster_ca_certificate</code>, namely providing the PEM-style cert of the root CA.</p> </li> </ol> <pre><code> provider &quot;kubernetes&quot; { host = module.aks.kube_config.0.host client_certificate = base64decode(module.aks.kube_config.0.client_certificate) client_key = base64decode(module.aks.kube_config.0.client_key) cluster_ca_certificate = base64decode(module.aks.kube_config.0.cluster_ca_certificate) } provider &quot;helm&quot; { version = &quot;&gt;= 1.2.4&quot; kubernetes { host = module.aks.kube_config.0.host client_certificate = base64decode(module.aks.kube_config.0.client_certificate) client_key = base64decode(module.aks.kube_config.0.client_key) cluster_ca_certificate = base64decode(module.aks.kube_config.0.cluster_ca_certificate) } } </code></pre> <p>Thanks in advance for the help! Let me know if you need any additional info. I'm still new to the project so I may not have explained everything correctly.</p>
Jon
<p>In case anyone finds this later, we ultimately ended up just breaking the project up into two parts: cluster creation and bootstrap. This let us add a <code>local-exec</code> block in the middle to run the <code>kubectl config set-cluster...</code> command. So the order of operations is now:</p> <ol> <li>Deploy AKS cluster (which copies Kube config locally as one of the Terraform outputs)</li> <li>Run the command</li> <li>Deploy microservices</li> </ol> <p>Because we're using Terragrunt, we can just use its <code>apply-all</code> function to execute both operations, setting the dependencies described <a href="https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/#dependencies-between-modules" rel="nofollow noreferrer">here</a>.</p>
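<p>For reference, a rough sketch of what such a <code>local-exec</code> step can look like in Terraform. Everything here is a placeholder - the module outputs, the kubeconfig path and the certificate path all depend on your own configuration:</p> <pre><code>resource &quot;null_resource&quot; &quot;trust_root_ca&quot; {
  # re-run whenever the cluster is recreated
  triggers = {
    cluster_id = module.aks.id          # assumed module output
  }

  provisioner &quot;local-exec&quot; {
    command = &quot;kubectl config set-cluster ${module.aks.name} --certificate-authority=/path/to/org-root-ca.pem&quot;
    environment = {
      KUBECONFIG = &quot;${path.module}/kubeconfig&quot;   # assumed location of the generated kubeconfig
    }
  }

  depends_on = [module.aks]
}
</code></pre>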
Jon
<p>I am getting below error</p> <pre><code>Running with gitlab-runner 14.0.0 (3b6f852e) on gitlab-runner-artefactory-gitlab-runner-68f94fd89-pnd6g DaN3U_2T Preparing the &quot;kubernetes&quot; executor 00:00 Using Kubernetes namespace: sai Using Kubernetes executor with image aie-docker-dev-mydockerrepo/python:3.6-strech ... Using attach strategy to execute scripts... Preparing environment 00:06 Waiting for pod sai/runner-dan3u2t-project-9879-concurrent-0zdmgf to be running, status is Pending Waiting for pod sai/runner-dan3u2t-project-9879-concurrent-0zdmgf to be running, status is Pending ContainersNotReady: &quot;containers with unready status: [build helper svc-0]&quot; ContainersNotReady: &quot;containers with unready status: [build helper svc-0]&quot; WARNING: Failed to pull image with policy &quot;&quot;: image pull failed: Back-off pulling image &quot;aie-docker-dev-mydockerrepo/python:3.6-strech&quot; ERROR: Job failed (system failure): prepare environment: waiting for pod running: pulling image &quot;aie-docker-dev-mydockerrepo/python:3.6-strech&quot;: image pull failed: Back-off pulling image &quot;aie-docker-dev-mydockerrepo/python:3.6-strech&quot;. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information </code></pre> <p>I am trying to run job using gitlab runner. What could be the possible reason for failure ? I figured out ImagePullSecret could be set but I am not sure where to use that ? Anyone having idea ?</p>
Laster
<p>According to: <a href="https://docs.gitlab.com/runner/install/kubernetes.html#using-an-image-from-a-private-registry" rel="nofollow noreferrer">https://docs.gitlab.com/runner/install/kubernetes.html#using-an-image-from-a-private-registry</a></p> <p>If your Docker image is in a private repository, you need to create an image pull secret in the Kubernetes namespace where your job is running.</p> <p>You can create an image pull secret using the following command</p> <pre><code>kubectl create secret docker-registry &lt;SECRET_NAME&gt; \ --namespace &lt;NAMESPACE&gt; \ --docker-server=&quot;https://&lt;REGISTRY_SERVER&gt;&quot; \ --docker-username=&quot;&lt;REGISTRY_USERNAME&gt;&quot; \ --docker-password=&quot;&lt;REGISTRY_PASSWORD&gt;&quot; </code></pre> <p>Then configure the <code>image_pull_secrets</code> parameter in the Kubernetes executor config.toml settings.</p> <pre><code>[[runners]] [runners.kubernetes] image_pull_secrets = [&quot;&lt;SECRET_NAME&gt;&quot;] </code></pre> <p>Or if you use helm, <code>imagePullSecrets</code></p> <pre><code>runners: imagePullSecrets: [your-image-pull-secret] </code></pre> <p>Keep in mind that in both cases you need to pass an array of secrets, even if you have only one.</p>
mrzysztof
<p>I am unable to access the Kibana server UI through ingress path url I have deployed the Kibana pod along with Elasticsearch on Kubernetes cluster. While access the UI it is stating as &quot;503 service unavailable&quot; and it is re-directing path as <a href="https://myserver.com/spaces/enter" rel="nofollow noreferrer">https://myserver.com/spaces/enter</a>. Both the elasticsearch and kibana pods are running. And I am able to curl my elasticsearch pod through my ingress path url. Can someone help with the issue.</p> <p>Kibana yaml files:</p> <p>deployment.yaml</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: &quot;apps/v1&quot; kind: &quot;Deployment&quot; metadata: name: &quot;kibana-development&quot; namespace: &quot;development&quot; spec: selector: matchLabels: app: &quot;kibana-development&quot; replicas: 1 strategy: type: &quot;RollingUpdate&quot; rollingUpdate: maxSurge: 1 maxUnavailable: 1 minReadySeconds: 5 template: metadata: labels: app: &quot;kibana-development&quot; spec: containers: - name: &quot;kibana-development&quot; image: &quot;docker.elastic.co/kibana/kibana:7.10.2&quot; imagePullPolicy: &quot;Always&quot; env: - name: &quot;ELASTICSEARCH_HOSTS&quot; value: &quot;https://my-server.com/elasticsearch&quot; ports: - containerPort: 5601 protocol: TCP imagePullSecrets: - name: &quot;kibana&quot; </code></pre> <p>service.yaml</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: &quot;v1&quot; kind: &quot;Service&quot; metadata: name: &quot;kibana-development&quot; namespace: &quot;development&quot; labels: app: &quot;kibana-development&quot; spec: ports: - port: 56976 targetPort: 5601 selector: app: &quot;kibana-development&quot; </code></pre> <p>ingress.yaml</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: &quot;networking.k8s.io/v1beta1&quot; kind: &quot;Ingress&quot; metadata: name: &quot;kibana-development-ingress&quot; namespace: &quot;development&quot; annotations: nginx.ingress.kubernetes.io/rewrite-target: &quot;/$1&quot; spec: rules: - host: &quot;my-server.com&quot; http: paths: - backend: serviceName: &quot;kibana-development&quot; servicePort: 56976 path: &quot;/kibana/(.*)&quot; </code></pre> <p>I am able to access Kibana through cliuster-ip:port, but not with ingress path url. Is there any annotations that I am missing? Or the version 7.10.2 for elasticsearch and kibana not stable. I checked my endpoint, it is showing my cluster-ip</p>
SVD
<p>The issue is resolved now; I needed to add the two environment variables below to the deployment.yaml file.</p> <pre><code>- name: &quot;SERVER_BASEPATH&quot; value: &quot;/kibana-development&quot; - name: &quot;SERVER_REWRITEBASEPATH&quot; value: &quot;false&quot; </code></pre> <p>Don't forget the leading &quot;/&quot; in the SERVER_BASEPATH value.</p>
SVD
<p>I'm trying to learn DevOps and having issues using Kubernetes with <code>redis</code> and my <code>node.js</code> app</p> <p>My <code>node.js</code> app connects to <code>redis</code> with following code:</p> <pre class="lang-js prettyprint-override"><code>const redis = require('redis'); const client = redis.createClient(process.env.REDIS_URL); module.exports = client </code></pre> <p>My <code>redis.yaml</code> file with a <code>deployment</code> and <code>service</code>:</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: apps/v1 # API version kind: Deployment metadata: name: redis-master # Unique name for the deployment labels: app: redis # Labels to be applied to this deployment spec: selector: matchLabels: # This deployment applies to the Pods matching these labels app: redis role: master tier: backend replicas: 1 # Run a single pod in the deployment template: # Template for the pods that will be created by this deployment metadata: labels: # Labels to be applied to the Pods in this deployment app: redis role: master tier: backend spec: # Spec for the container which will be run inside the Pod. containers: - name: master image: redis resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 --- apiVersion: v1 kind: Service # Type of Kubernetes resource metadata: name: redis-master # Name of the Kubernetes resource labels: # Labels that will be applied to this resource app: redis role: master tier: backend spec: ports: - port: 6379 # Map incoming connections on port 6379 to the target port 6379 of the Pod targetPort: 6379 selector: # Map any Pod with the specified labels to this service app: redis role: master tier: backend </code></pre> <p>my <code>app.yaml</code> file with a <code>deployment</code> and <code>service</code>:</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: apps/v1 kind: Deployment # Type of Kubernetes resource metadata: name: go-redis-app # Unique name of the Kubernetes resource spec: replicas: 3 # Number of pods to run at any given time selector: matchLabels: app: go-redis-app # This deployment applies to any Pods matching the specified label template: # This deployment will create a set of pods using the configurations in this template metadata: labels: # The labels that will be applied to all of the pods in this deployment app: go-redis-app spec: containers: - name: go-redis-app image: alsoares59/devops-project:latest imagePullPolicy: IfNotPresent resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 8080 # Should match the port number that the Go application listens on env: # Environment variables passed to the container - name: REDIS_URL value: redis-master - name: PORT value: &quot;3000&quot; --- apiVersion: v1 kind: Service # Type of kubernetes resource metadata: name: go-redis-app-service # Unique name of the resource spec: type: NodePort # Expose the Pods by opening a port on each Node and proxying it to the service. 
ports: # Take incoming HTTP requests on port 9090 and forward them to the targetPort of 8080 - name: http port: 9090 targetPort: 8080 selector: app: go-redis-app # Map any pod with label `app=go-redis-app` to this service </code></pre> <p>For some reason when creating those deployments and services, my <code>node</code> app will crash saying it can't connect to <code>redis</code>.</p> <pre class="lang-sh prettyprint-override"><code>[vagrant@centos-minikube k8s]$ kubectl logs go-redis-app-6b687c7bd6-6npt7 &gt; [email protected] start / &gt; node src/server.js redis server should be at redis-master Server listening on port 3000 events.js:292 throw er; // Unhandled 'error' event ^ Error: connect ENOENT redis-master at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1146:16) Emitted 'error' event on RedisClient instance at: at RedisClient.on_error (/node_modules/redis/index.js:341:14) at Socket.&lt;anonymous&gt; (/node_modules/redis/index.js:222:14) at Socket.emit (events.js:315:20) at emitErrorNT (internal/streams/destroy.js:106:8) at emitErrorCloseNT (internal/streams/destroy.js:74:3) at processTicksAndRejections (internal/process/task_queues.js:80:21) { errno: -2, code: 'ENOENT', syscall: 'connect', address: 'redis-master' } npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] start: `node src/server.js` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] start script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /root/.npm/_logs/2020-12-23T13_39_56_529Z-debug.log </code></pre> <p>What am I missing ? I know Kubernetes should translate <code>redis-master</code> into <code>redis</code>'s pod IP address but I don't know if he is doing it well.</p>
SoaAlex
<p>I found it myself. I had to change redis-client.js to pass the host and port as an options object instead of a bare hostname string:</p> <pre class="lang-js prettyprint-override"><code>const redis = require('redis'); console.log(&quot;redis server should be at &quot;+process.env.REDIS_URL) const client = redis.createClient({ host: process.env.REDIS_URL, port: process.env.REDIS_PORT }); module.exports = client </code></pre>
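<p>For completeness: the fixed client also reads <code>process.env.REDIS_PORT</code>, which the deployment in the question never sets. A minimal sketch of the app deployment's <code>env</code> block with that variable added could look like this (the Node Redis client normally falls back to port 6379 if it is unset, so this entry is optional):</p> <pre class="lang-yaml prettyprint-override"><code>env:
  # Hostname of the redis-master Service defined in redis.yaml
  - name: REDIS_URL
    value: redis-master
  # Port exposed by that Service
  - name: REDIS_PORT
    value: &quot;6379&quot;
  - name: PORT
    value: &quot;3000&quot;
</code></pre>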
SoaAlex
<p>I'm using minikube on a Fedora based machine to run a simple mongo-db deployment on my local machine but I'm constantly getting <code>ImagePullBackOff</code> error. Here is the yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mongodb-deployment labels: app: mongodb spec: replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo ports: - containerPort: 27017 env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-username - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-password apiVersion: v1 kind: Service metadata: name: mongodb-service spec: selector: app: mongodb ports: - protocol: TCP port: 27017 targetPort: 27017 </code></pre> <p>I tried to pull the image locally by using <code>docker pull mongo</code>, <code>minikube image pull mongo</code> &amp; <code>minikube image pull mongo-express</code> several times while restarting docker and minikube several times.</p> <p>Logining into dockerhub (both in broweser and through terminal didn't work)</p> <p>I also tried to login into docker using <code>docker login</code> command and then modified my <code>/etc/resolv.conf</code> and adding <code>nameserver 8.8.8.8</code> and then restartied docker using <code>sudo systemctl restart docker</code> but even that failed to work.</p> <p>On running <code>kubectl describe pod</code> command I get this output:</p> <pre><code>Name: mongodb-deployment-6bf8f4c466-85b2h Namespace: default Priority: 0 Node: minikube/192.168.49.2 Start Time: Mon, 29 Aug 2022 23:04:12 +0530 Labels: app=mongodb pod-template-hash=6bf8f4c466 Annotations: &lt;none&gt; Status: Pending IP: 172.17.0.2 IPs: IP: 172.17.0.2 Controlled By: ReplicaSet/mongodb-deployment-6bf8f4c466 Containers: mongodb: Container ID: Image: mongo Image ID: Port: 27017/TCP Host Port: 0/TCP State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: MONGO_INITDB_ROOT_USERNAME: &lt;set to the key 'mongo-root-username' in secret 'mongodb-secret'&gt; Optional: false MONGO_INITDB_ROOT_PASSWORD: &lt;set to the key 'mongo-root-password' in secret 'mongodb-secret'&gt; Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vlcxl (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-vlcxl: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message </code></pre> <hr /> <pre><code> Normal Scheduled 22m default-scheduler Successfully assigned default/mongodb-deployment-6bf8f4c466-85b2h to minikube Warning Failed 18m (x2 over 20m) kubelet Failed to pull image &quot;mongo:latest&quot;: rpc error: code = Unknown desc = context deadline exceeded Warning Failed 18m (x2 over 20m) kubelet Error: ErrImagePull Normal BackOff 17m (x2 over 20m) kubelet Back-off pulling image &quot;mongo:latest&quot; Warning Failed 17m (x2 over 20m) kubelet Error: ImagePullBackOff Normal Pulling 17m (x3 over 22m) kubelet Pulling image &quot;mongo:latest&quot; Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed 
and re-created. Normal Pulling 3m59s (x4 over 11m) kubelet Pulling image &quot;mongo:latest&quot; Warning Failed 2m (x4 over 9m16s) kubelet Failed to pull image &quot;mongo:latest&quot;: rpc error: code = Unknown desc = context deadline exceeded Warning Failed 2m (x4 over 9m16s) kubelet Error: ErrImagePull Normal BackOff 83s (x7 over 9m15s) kubelet Back-off pulling image &quot;mongo:latest&quot; Warning Failed 83s (x7 over 9m15s) kubelet Error: ImagePullBackOff </code></pre> <p>PS: Ignore any spacing errors.</p>
DevOpsnoob
<p>I think your internet connection is slow. The default timeout to pull an image is <code>120</code> seconds, and the kubelet could not pull the image within that window.</p> <p>First, pull the image via <code>Docker</code>:</p> <pre class="lang-bash prettyprint-override"><code>docker image pull mongo </code></pre> <p>Then load the downloaded image into <code>minikube</code>:</p> <pre class="lang-bash prettyprint-override"><code>minikube image load mongo </code></pre> <p>After that it should work, because the node can now use the image that is stored locally.</p>
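<p>One caveat: because the manifest uses <code>image: mongo</code> with no tag, Kubernetes treats it as <code>mongo:latest</code> and defaults <code>imagePullPolicy</code> to <code>Always</code>, so the kubelet may still try to pull from Docker Hub even after the image has been loaded. A minimal sketch of the container spec with the pull policy set explicitly (you can first confirm the load with <code>minikube image ls</code>):</p> <pre><code>spec:
  containers:
    - name: mongodb
      image: mongo
      # Use the image already loaded into minikube instead of pulling again
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 27017
</code></pre>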
Ali
<p>I'm using Minikube single-node Kubernetes cluster inside Oracle VM Virtualbox. One of the pods in the node is a Next.js based client and the rest of the pods are different microservices. Let's say my client (Pod1) needs to send a HTTP request to the auth microservice (Pod2), before rendering - see the diagram: <a href="https://i.stack.imgur.com/fTzeq.png" rel="nofollow noreferrer">Minikube Cluster</a></p> <p>Below is my ingress-service.yaml file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' spec: rules: - host: dummyweb.info http: paths: - path: /api/users/?(.*) backend: serviceName: auth-srv servicePort: 3000 - path: /?(.*) backend: serviceName: client-srv servicePort: 3000 </code></pre> <p>You can see that each service has a specific path. Thus, I would like to send the HTTP request from client (Pod1) to Ingress Service and then Ingress to reroute the request to the appropriate service, depending on the path. In other words, client living in Pod1 will send HTTP GET request to auth service living in Pod2 through Ingress Service using the following URL: <code>http://&lt;ingress-service-url&gt;/api/users/....</code> I need to figure out what is the URL of the Ingress service.</p> <p>I enabled NGINX Ingress controller:</p> <pre><code>minikube addons enable ingress </code></pre> <p>I verified that the NGINX Ingress controller is running:</p> <pre><code>kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-f9fd979d6-hfnfj 1/1 Running 5 45h etcd-minikube 1/1 Running 5 45h ingress-nginx-admission-create-dkthv 0/1 Completed 0 23h ingress-nginx-admission-patch-4gtth 0/1 Completed 0 23h ingress-nginx-controller-789d9c4dc-qdqxv 1/1 Running 3 23h kube-apiserver-minikube 1/1 Running 5 45h kube-controller-manager-minikube 1/1 Running 5 45h kube-proxy-sr6pt 1/1 Running 5 45h kube-scheduler-minikube 1/1 Running 5 45h storage-provisioner 1/1 Running 11 45h </code></pre> <p>Then, I'm checking what services are available in kube-system namespace:</p> <pre><code>kubectl get services -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller-admission ClusterIP 10.97.5.35 &lt;none&gt; 443/TCP 24h kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 45h </code></pre> <p>I'm assuming that the inner URL to the Ingress service is:</p> <pre><code>http://ingress-nginx-controller-admission.kube-system.svc.cluster.local </code></pre> <p>As we saw above, ingress-nginx-controller-admission service exposes only port 443, so on HTTP request I'm getting the following error:</p> <pre><code>Server Error Error: connect ETIMEDOUT 10.97.5.35:80 This error happened while generating the page. Any console logs will be displayed in the terminal window. Call Stack &lt;unknown&gt; (Error: connect ETIMEDOUT 10.97.5.35 (80) TCPConnectWrap.afterConnect [as oncomplete] net.js (1145:16) </code></pre> <ol> <li>Is this the right inner URL to access Ingress Service in Minikube?</li> <li>If it is, how to open port 80?</li> </ol> <p>I'm not interested in connecting directly to auth service.</p>
Dimitar Georgiev
<p>There is a very simple solution to the problem described above:</p> <ol> <li>I assume that you have already enabled the ingress addon. If not: <code>minikube addons enable ingress</code>.</li> <li>You'll need Helm to install the NGINX Ingress Controller:</li> </ol> <pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm install my-release ingress-nginx/ingress-nginx </code></pre> <p>where you can replace <code>my-release</code> with whatever you like. In my case this is <code>dimi</code>.</p> <ol start="3"> <li>Check the available services in the default namespace:</li> </ol> <pre><code>kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE auth-mongo-srv ClusterIP 10.110.44.53 &lt;none&gt; 27017/TCP 46m auth-srv ClusterIP 10.106.154.84 &lt;none&gt; 3000/TCP 46m client-srv ClusterIP 10.108.31.36 &lt;none&gt; 3000/TCP 46m dimi-ingress-nginx-controller LoadBalancer 10.102.12.127 &lt;pending&gt; 80:31599/TCP,443:32639/TCP 34m dimi-ingress-nginx-controller-admission ClusterIP 10.102.171.116 &lt;none&gt; 443/TCP 34m kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 58m nats-srv ClusterIP 10.110.239.15 &lt;none&gt; 4222/TCP,8222/TCP 46m orders-mongo-srv ClusterIP 10.101.67.81 &lt;none&gt; 27017/TCP 46m orders-srv ClusterIP 10.103.29.63 &lt;none&gt; 3000/TCP 46m tickets-mongo-srv ClusterIP 10.107.137.160 &lt;none&gt; 27017/TCP 46m tickets-srv ClusterIP 10.106.203.231 &lt;none&gt; 3000/TCP 46m </code></pre> <ol start="4"> <li>Now we have the <code>dimi-ingress-nginx-controller</code> service of type LoadBalancer. So, if you need to deal with Server Side Rendering (SSR) and send an HTTP request from one pod (where your Next.js app is running) to one of your microservices (running in a different pod) before the page is even rendered, you can send the request to <code>http://dimi-ingress-nginx-controller.default.svc.cluster.local/&lt;path_set_in_the_Ingress_Resource&gt;</code>. For example, if my Next.js app needs to send a request to my Auth microservice, the URL will be: <code>http://dimi-ingress-nginx-controller.default.svc.cluster.local/api/users</code>. This way the HTTP(S) traffic will be handled by the Ingress Resource, where we already have a list of rules matched against all incoming requests.</li> </ol>
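<p>If you prefer not to hard-code that cluster-internal hostname in the Next.js code, one possible sketch is to inject it into the client pod through an environment variable in its Deployment; the names <code>client</code>, <code>your-client-image</code> and <code>INGRESS_BASE_URL</code> below are placeholders, and the hostname must match your own Helm release name:</p> <pre><code>containers:
  - name: client                 # placeholder name
    image: your-client-image    # placeholder image
    env:
      # Cluster-internal address of the ingress-nginx controller Service
      - name: INGRESS_BASE_URL
        value: http://dimi-ingress-nginx-controller.default.svc.cluster.local
</code></pre>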
Dimitar Georgiev
<p>I need to create a k8s resource which takes some time until it becomes available. For this I use the following:</p> <p><a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#example-CreateOrUpdate" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#example-CreateOrUpdate</a></p> <pre><code>op, err := controllerutil.CreateOrUpdate(context.TODO(), c, deploy, func() error { }) func2() </code></pre> <p>Now I need to call <code>func2</code> <strong>right after</strong> the creation of the object is done (it may take 2-3 min until it finishes). How should I do it correctly?</p> <p>I found this but I'm not sure how to combine them ...</p> <p><a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg#hdr-Watching_and_EventHandling" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg#hdr-Watching_and_EventHandling</a></p> <p>I'm using kubebuilder.</p>
PJEM
<p>The approach above is more for CLI usage.</p> <p>When you are using kubebuilder or the Operator SDK, you need to deal with it in your reconcile function.</p> <p>Usually you have a custom resource that triggers your controller's reconcile function. When the custom resource is being created, you then create the deployment, and instead of returning an empty reconcile.Result (which marks it as done) you can return a reconcile.Result with the Requeue attribute.</p> <pre class="lang-golang prettyprint-override"><code>reconcile.Result{Requeue: true} </code></pre> <p>So during the next run you check if the deployment is ready. If not, you requeue again. Once it is ready, you return an empty reconcile.Result struct.</p> <p>Also keep in mind that the reconcile function always needs to be idempotent, as it will be run again for every custom resource during a restart of the controller and also every 10 hours by default.</p> <p>Alternatively, you could also use an owner reference on the created deployment and then set up the controller to reconcile the owner resource (your custom resource) whenever an update happens on the owned resource (the deployment). With the Operator SDK this can be configured in the SetupWithManager function, which by default only uses the For option function. Here you need to add the Owns option function.</p> <pre class="lang-golang prettyprint-override"><code>// SetupWithManager sets up the controller with the Manager. func (r *YourReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&amp;yourapigroup.YourCustomResource{}). Owns(&amp;appv1.Deployment{}). Complete(r) } </code></pre> <p>I never used that approach myself, though, so it might be necessary to add more code for this to work.</p>
Timo Wendt
<p>I have deployed keycloak on kubernetes cluster and I want to access it with ingress path url, but I am getting 503 service unavilable when trying to access. But with cluster-ip I am able to access keycloak. With /auth I am able to access the main page of keycloak, i.e <a href="https://my-server.com/keycloak-development/auth/" rel="nofollow noreferrer">https://my-server.com/keycloak-development/auth/</a>, but when I try to access admin console it goes to 503 error.</p> <p>deployment.yaml</p> <pre><code>--- apiVersion: &quot;apps/v1&quot; kind: &quot;Deployment&quot; metadata: name: &quot;keycloak-development&quot; namespace: &quot;development&quot; spec: selector: matchLabels: app: &quot;keycloak-development&quot; replicas: 1 strategy: type: &quot;RollingUpdate&quot; rollingUpdate: maxSurge: 1 maxUnavailable: 1 minReadySeconds: 5 template: metadata: labels: app: &quot;keycloak-development&quot; spec: containers: - name: &quot;keycloak-development&quot; image: &quot;mykeycloak-image:latest&quot; imagePullPolicy: &quot;Always&quot; env: - name: &quot;NODE_ENV&quot; value: &quot;development&quot; - name: &quot;PROXY_ADDRESS_FORWARDING&quot; value: &quot;true&quot; - name: &quot;KEYCLOAK_URL&quot; value: &quot;https://my-server.com/keycloak-development/&quot; ports: - containerPort: 53582 imagePullSecrets: - name: &quot;keycloak&quot; </code></pre> <p>service.yaml</p> <pre><code>-- apiVersion: &quot;v1&quot; kind: &quot;Service&quot; metadata: name: &quot;keycloak-development&quot; namespace: &quot;development&quot; labels: app: &quot;keycloak-development&quot; spec: ports: - port: 53582 targetPort: 8080 selector: app: &quot;keycloak-development&quot; </code></pre> <p>ingress.yaml</p> <pre><code>--- apiVersion: &quot;networking.k8s.io/v1beta1&quot; kind: &quot;Ingress&quot; metadata: name: &quot;keycloak-development-ingress&quot; namespace: &quot;development&quot; annotations: nginx.ingress.kubernetes.io/rewrite-target: &quot;/$1&quot; spec: rules: - host: &quot;my-server.com&quot; http: paths: - backend: serviceName: &quot;keycloak-development&quot; servicePort: 53582 path: &quot;/keycloak-development/(.*)&quot; </code></pre> <p>dockerfile</p> <pre><code>FROM registry.access.redhat.com/ubi8-minimal ENV KEYCLOAK_VERSION 12.0.1 ENV JDBC_POSTGRES_VERSION 42.2.5 ENV JDBC_MYSQL_VERSION 8.0.22 ENV JDBC_MARIADB_VERSION 2.5.4 ENV JDBC_MSSQL_VERSION 8.2.2.jre11 ENV LAUNCH_JBOSS_IN_BACKGROUND 1 ENV PROXY_ADDRESS_FORWARDING false ENV JBOSS_HOME /opt/jboss/keycloak ENV LANG en_US.UTF-8 ARG GIT_REPO ARG GIT_BRANCH ARG KEYCLOAK_DIST=https://github.com/keycloak/keycloak/releases/download/$KEYCLOAK_VERSION/keycloak-$KEYCLOAK_VERSION.tar.gz USER root RUN microdnf update -y &amp;&amp; microdnf install -y glibc-langpack-en gzip hostname java-11-openjdk-headless openssl tar which &amp;&amp; microdnf clean all ADD tools /opt/jboss/tools ENV KEYCLOAK_USER admin ENV KEYCLOAK_PASSWORD admin RUN /opt/jboss/tools/build-keycloak.sh USER 1000 EXPOSE 8080 EXPOSE 8443 ENTRYPOINT [ &quot;/opt/jboss/tools/docker-entrypoint.sh&quot; ] CMD [&quot;-b&quot;, &quot;0.0.0.0&quot;] </code></pre> <p>Note:- I am able to access keycloak and admin page with cluster-ip</p>
SVD
<p>After a lot of searching I found the solution: we need to add these env variables to our deployment.yaml file for it to work:</p> <ol> <li>KEYCLOAK_USER</li> <li>KEYCLOAK_PASSWORD</li> <li>PROXY_ADDRESS_FORWARDING (value: &quot;true&quot;)</li> <li>KEYCLOAK_FRONTEND_URL (In my case it was something like this: <a href="https://my-server.com/keycloak-development/auth/" rel="nofollow noreferrer">https://my-server.com/keycloak-development/auth/</a>)</li> <li>KEYCLOAK_ADMIN_URL (In my case the value was something like this: <a href="https://my-server.com/keycloak-development/auth/realms/master/admin/" rel="nofollow noreferrer">https://my-server.com/keycloak-development/auth/realms/master/admin/</a>)</li> </ol> <p>For the Docker image you can use quay.io/keycloak/keycloak:8.0.2.</p> <p>While accessing the Keycloak application, if you are using ingress-based routing you need to add /auth/ to your ingress path URL (something like this: <a href="https://my-server.com/keycloak-development/auth/" rel="nofollow noreferrer">https://my-server.com/keycloak-development/auth/</a>).</p>
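<p>For reference, a minimal sketch of how the container's <code>env</code> block could look with these variables in place (the username and password values are placeholders to replace with your own, and the URLs must match your actual ingress path):</p> <pre><code>env:
  - name: KEYCLOAK_USER
    value: admin               # placeholder
  - name: KEYCLOAK_PASSWORD
    value: change-me           # placeholder
  - name: PROXY_ADDRESS_FORWARDING
    value: &quot;true&quot;
  - name: KEYCLOAK_FRONTEND_URL
    value: https://my-server.com/keycloak-development/auth/
  - name: KEYCLOAK_ADMIN_URL
    value: https://my-server.com/keycloak-development/auth/realms/master/admin/
</code></pre>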
SVD
<p>I am running a regional GKE kubernetes cluster in is-central1-b us-central-1-c and us-central1-f. I am running 1.21.14-gke.700. I am adding a confidential node pool to the cluster with this command.</p> <pre><code>gcloud container node-pools create card-decrpyt-confidential-pool-1 \ --cluster=svcs-dev-1 \ --disk-size=100GB \ --disk-type=pd-standard \ --enable-autorepair \ --enable-autoupgrade \ --enable-gvnic \ --image-type=COS_CONTAINERD \ --machine-type=&quot;n2d-standard-2&quot; \ --max-pods-per-node=8 \ --max-surge-upgrade=1 \ --max-unavailable-upgrade=1 \ --min-nodes=4 \ --node-locations=us-central1-b,us-central1-c,us-central1-f \ --node-taints=dedicatednode=card-decrypt:NoSchedule \ --node-version=1.21.14-gke.700 \ --num-nodes=4 \ --region=us-central1 \ --sandbox=&quot;type=gvisor&quot; \ --scopes=https://www.googleapis.com/auth/cloud-platform \ --service-account=&quot;card-decrpyt-confidential@corp-dev-project.iam.gserviceaccount.com&quot; \ --shielded-integrity-monitoring \ --shielded-secure-boot \ --tags=testingdonotuse \ --workload-metadata=GKE_METADATA \ --enable-confidential-nodes </code></pre> <p>This creates a node pool but there is one problem... I can still SSH to the instances that the node pool creates. This is unacceptable for my use case as these node pools need to be as secure as possible. I went into my node pool and created a new machine template with ssh turned off using an instance template based off the one created for my node pool.</p> <pre><code>gcloud compute instance-templates create card-decrypt-instance-template \ --project=corp-dev-project --machine-type=n2d-standard-2 --network-interface=aliases=gke-svcs-dev-1-pods-10a0a3cd:/28,nic-type=GVNIC,subnet=corp-dev-project-private-subnet,no-address --metadata=block-project-ssh-keys=true,enable-oslogin=true --maintenance-policy=TERMINATE --provisioning-model=STANDARD --service-account=card-decrpyt-confidential@corp-dev-project.iam.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform --region=us-central1 --min-cpu-platform=AMD\ Milan --tags=testingdonotuse,gke-svcs-dev-1-10a0a3cd-node --create-disk=auto-delete=yes,boot=yes,device-name=card-decrpy-instance-template,image=projects/confidential-vm-images/global/images/cos-89-16108-766-5,mode=rw,size=100,type=pd-standard --shielded-secure-boot --shielded-vtpm - -shielded-integrity-monitoring --labels=component=gke,goog-gke-node=,team=platform --reservation-affinity=any </code></pre> <p>When I change the instance templates of the nodes in the node pool the new instances come online but they do not attach to the node pool. The cluster is always trying to repair itself and I can't change any settings until I delete all the nodes in the pool. I don't receive any errors.</p> <p>What do I need to do to disable ssh into the node pool nodes with the original node pool I created or with the new instance template I created. I have tried a bunch of different configurations with a new node pool and the cluster and have not had any luck. I've tried different tags network configs and images. None of these have worked.</p> <p>Other info: The cluster was not originally a confidential cluster. The confidential nodes are the first of its kind added to the cluster.</p>
TeeTee
<p>One option you have here is to enable private IP addresses for the nodes in your cluster. The <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--enable-private-nodes" rel="nofollow noreferrer"><code>--enable-private-nodes</code> flag</a> will make it so the nodes in your cluster get <em>private</em> IP addresses (rather than the default public, internet-facing IP addresses).</p> <p>Note that in this case, you would still be able to SSH into these nodes, but only from within your VPC network.</p> <p>Also note that this means you would not be able to access <code>NodePort</code> type services from outside of your VPC network. Instead, you would need to use a <code>LoadBalancer</code> type service (or provide some other way to route traffic to your service from outside of the cluster, if required).</p> <hr /> <p>If you'd like to prevent SSH access even from within your VPC network, your easiest option would likely be to configure a firewall rule to deny SSH traffic to your nodes (TCP/UDP/SCTP port 22). Use network tags (the <code>--tags</code> flag) to target your GKE nodes.</p> <p>Something along the lines of:</p> <pre><code>gcloud compute firewall-rules create fw-d-i-ssh-to-gke-nodes \ --network NETWORK_NAME \ --action deny \ --direction ingress \ --rules tcp:22,udp:22,sctp:22 \ --source-ranges 0.0.0.0/0 \ --priority 65534 \ --target-tags my-gke-node-network-tag </code></pre> <hr /> <p>Finally, one last option I'll mention for creating a hardened GKE cluster is to use Google's <a href="https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest/submodules/safer-cluster" rel="nofollow noreferrer"><code>safer-cluster</code></a> Terraform module. This is an opinionated setup of a GKE cluster that follows many of the principles laid out in Google's <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster" rel="nofollow noreferrer">cluster hardening guide</a> and the Terraform module takes care of a lot of the nitty-gritty boilerplate here.</p>
tomasgotchi
<p>I have a backend application running on Node and MongoDB. I wanted to shift my application to Docker, so I've dockerized Node, but I'm confused about where I should run MongoDB and whether I should dockerize it or not.</p> <p><strong>Note: I want to do this for a production environment</strong></p> <p>Here are some scenarios I've thought of:</p> <ol> <li>Run Node in a Docker container and MongoDB on the local machine</li> <li>Run both Node and the database in Docker containers and store the data in the same container as the database</li> <li>Run Node and the database in Docker containers and store the data in a separate volume</li> </ol> <p><strong>Which scenario is the best to implement on a production server?</strong></p> <p><em>I'm new to Docker, so <strong>if you have another solution to this problem, consider mentioning it.</strong> Also, you can involve <strong>Kubernetes</strong> in the scenarios if needed.</em></p>
Abhishek Pankar
<p>You can use one of these two solutions:</p> <ul> <li><p>Create a Deployment for your Node.js server and use a local MongoDB, with the connection established via your host IP (but this is only valid for development). To use this solution in production you would need to create and set up a public IP address to ensure the TCP connection between MongoDB and the Node.js app.</p> </li> <li><p>The second solution is to create a Deployment for your Node.js server and a StatefulSet for your MongoDB database, and use a PV, PVC and StorageClass to persist your data (see the sketch below).</p> </li> </ul> <p>I have a demo using MongoDB, Redis and Node.js (an Express app) that you can follow to resolve your issue. Follow my example <a href="https://github.com/radhouen/products-store" rel="nofollow noreferrer">here</a>.</p>
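<p>To make the second option more concrete, here is a minimal sketch of a MongoDB StatefulSet with a headless Service and a volumeClaimTemplate; the names, image tag, storage class and size are placeholders to adapt to your cluster:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None            # headless Service gives the StatefulSet stable DNS
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:5.0          # pin a version you have tested
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db   # MongoDB's data directory
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard  # adjust to a StorageClass in your cluster
        resources:
          requests:
            storage: 5Gi
</code></pre> <p>Inside the cluster, the Node.js app can then reach the database at <code>mongodb://mongo:27017</code>.</p>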
rassakra
<p>I have been banging my head on the wall trying to get the following import to work.</p> <p><code>from kubernetes import config, client</code></p> <p>I pip installed kubernetes==11.0.0.</p> <p>It works perfectly fine on <a href="https://repl.it/languages/python3" rel="nofollow noreferrer">https://repl.it/languages/python3</a> but I am getting an import error on my CentOS 7 box.<br /> <code>from kubernetes import config, watch, client</code><br /> <code>ImportError: cannot import name 'config'</code></p> <p>I have been using Python for 7-8 years; I am not sure what I am missing.</p> <p>Thanks!</p>
Paul McNuggets
<p>Found my issue: my own file was named <code>kubernetes.py</code>, so it shadowed the installed package. Renaming it solved the issue :|.</p>
Paul McNuggets
<p>Can I add multiple hosts to the Ingress controller so that they refer to the same target group in the aws load balancer? Example:</p> <pre><code> rules: - host: [&quot;foobar.com&quot;, &quot;api.foobar.com&quot;, &quot;status.foobar.com&quot;] http: paths: - backend: serviceName: foobar servicePort: 80 </code></pre>
Bell
<p>You can use hostname wildcards if you are using Kubernetes version 1.18 or later.</p> <p>For more information, check these links:</p> <p><a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/basic-configuration/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/basic-configuration/</a> <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p> <pre><code>rules: - host: &quot;foobar.com&quot; http: paths: - backend: serviceName: foobar servicePort: 80 - host: &quot;*.foobar.com&quot; http: paths: - backend: serviceName: foobar servicePort: 80 </code></pre>
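<p>Note that the <code>host</code> field itself cannot take a list, but on any Kubernetes version you can simply repeat a rule per hostname and point each rule at the same backend, for example:</p> <pre><code>rules:
  - host: foobar.com
    http:
      paths:
        - backend:
            serviceName: foobar
            servicePort: 80
  - host: api.foobar.com
    http:
      paths:
        - backend:
            serviceName: foobar
            servicePort: 80
  - host: status.foobar.com
    http:
      paths:
        - backend:
            serviceName: foobar
            servicePort: 80
</code></pre>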
Muneer
<p>I'm trying to configure EFK stack in my local minikube setup. I have followed <a href="https://www.metricfire.com/blog/logging-for-kubernetes-fluentd-and-elasticsearch/" rel="nofollow noreferrer">this tutorial</a>.</p> <p>Everything is working fine (I can see all my console logs in kibana and Elasticsearch). But I have another requirement. I have Node.js application which is logs as files to custom path <code>/var/log/services/dev</code> inside the pod.</p> <p>File Tree:</p> <pre><code>/var/log/services/dev/# ls -l total 36 -rw-r--r-- 1 root root 28196 Nov 27 18:09 carts-service-dev.log.2021-11-27T18.1 -rw-r--r-- 1 root root 4483 Nov 27 18:09 carts-service-dev.log.2021-11-27T18 </code></pre> <p>How to configure my Fluentd to read all my console logs and also read logs from the custom path configured?</p> <p>My App Deployment File:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: carts spec: replicas: 1 selector: matchLabels: app: carts template: metadata: labels: app: carts spec: containers: - name: app image: carts-service resources: limits: memory: &quot;1024Mi&quot; cpu: &quot;500m&quot; ports: - containerPort: 4000 </code></pre> <p>My Fluentd DaemonSet File:</p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: name: fluentd namespace: kube-system labels: k8s-app: fluentd-logging version: v1 spec: selector: matchLabels: k8s-app: fluentd-logging version: v1 template: metadata: labels: k8s-app: fluentd-logging version: v1 spec: terminationGracePeriodSeconds: 30 volumes: - name: varlog hostPath: path: /var/log - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule containers: - name: fluentd image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch volumeMounts: - name: varlog mountPath: /var/log - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi env: - name: FLUENT_ELASTICSEARCH_HOST value: &quot;elasticsearch.default&quot; - name: FLUENT_ELASTICSEARCH_PORT value: &quot;9200&quot; </code></pre> <p>I do know that log files written into custom path <code>/var/log/services/dev</code> will be deleted if pod crashes. So I have to use persistent volume to mount this path.</p> <p>But I lack the experience of how to create persistent volume for that path and also link Fluentd to read from it.</p> <p>Thanks in advance.</p>
Sathish
<p>If a pod crashes, all logs will still be accessible in <code>EFK</code>. There is no need to add a persistent volume to your application's pod only for storing log files.</p> <p>The main question is how to get logs out of these files. There are <strong>two main approaches</strong>, both suggested in the Kubernetes documentation:</p> <ol> <li><p><strong>Use a sidecar container</strong>.</p> <p>Containers in a pod can share volumes, so a <code>sidecar</code> container can stream the logs from the file to its own <code>stdout</code> and/or <code>stderr</code> (depending on the implementation), after which the logs are picked up by the kubelet (see the sketch after this list).</p> <p>Please see <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#streaming-sidecar-container" rel="nofollow noreferrer">streaming sidecar container</a> for an example of how it works.</p> </li> <li><p><strong>Use a sidecar container with a logging agent</strong>.</p> <p>Please see <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent" rel="nofollow noreferrer">Sidecar container with a logging agent</a> for a configuration example using <code>fluentd</code>. In this case logs are collected by <code>fluentd</code> and they won't be available via <code>kubectl logs</code> commands, since the <code>kubelet</code> is not responsible for these logs.</p> </li> </ol>
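<p>Applied to the setup in the question, a minimal sketch of the first approach could look like this: an <code>emptyDir</code> volume shared between the app container and a small busybox sidecar that tails the log files to stdout, where the existing Fluentd DaemonSet already picks them up (container and volume names here are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: carts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: carts
  template:
    metadata:
      labels:
        app: carts
    spec:
      volumes:
        # Shared scratch volume for the log files; lives as long as the pod
        - name: service-logs
          emptyDir: {}
      containers:
        - name: app
          image: carts-service
          ports:
            - containerPort: 4000
          volumeMounts:
            - name: service-logs
              mountPath: /var/log/services/dev
        - name: log-streamer
          image: busybox
          # Stream every log file in the shared directory to stdout so the
          # node-level Fluentd DaemonSet collects it like normal container logs.
          # The glob is expanded when the container starts, so adjust it to
          # your file naming and rotation scheme if files are created later.
          command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;tail -n+1 -F /var/log/services/dev/*.log&quot;]
          volumeMounts:
            - name: service-logs
              mountPath: /var/log/services/dev
              readOnly: true
</code></pre>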
moonkotte
<p>I want to expose a pod app on port 80. For that I have installed MetalLB and configured a load balancer like this:</p> <p>metallb-config.yaml</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: default protocol: layer2 addresses: - 192.168.1.100-192.168.1.150 </code></pre> <p>loadbalancer.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: loadbalancer-watchdog spec: selector: part: watchdog ports: - port: 80 targetPort: 10069 type: LoadBalancer externalTrafficPolicy: Local </code></pre> <p>But when I do <code>kubectl get svc</code> the LoadBalancer's external IP keeps showing as &lt;pending&gt;, and if I check MetalLB with <code>kubectl -n metallb-system get all</code> I can see this:</p> <p><a href="https://i.stack.imgur.com/254gc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/254gc.png" alt="enter image description here" /></a></p> <p>If I check the logs:</p> <p><a href="https://i.stack.imgur.com/MEbPX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MEbPX.png" alt="enter image description here" /></a></p> <p>Does it have something to do with my config, or did I miss some step in the configuration of MetalLB?</p> <p>EDIT:</p> <p>Output of <code>kubectl -n kube-system get pods</code> <a href="https://i.stack.imgur.com/fbLDQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fbLDQ.png" alt="enter image description here" /></a></p> <p>Apparently CoreDNS is down and also the storage provisioner.</p>
Jaume Garcia Sanchez
<p>The same thing happened to me: MetalLB did not start because it collided with a port used by Docker Swarm. Just run</p> <p><code>docker swarm leave -f </code></p> <p>and try again.</p>
Carlos Parada
<p>Trying to solve dependency between pods using postStart lifecycle.</p> <p>Use case: micro service A should start after the start of micro service B.</p> <p>For that we have added one container (curl) which will check if dependent service is up or not using curl command.</p> <p>But when we add any command in postStart lifecycle hook, pods keep restarting and goes in crashlookbackoff state</p> <p>Deployment.yaml :</p> <pre><code>kind: Deployment metadata: name: Microservice-A-deployment spec: replicas: 1 selector: matchLabels: app: Microservice-A template: metadata: labels: app: Microservice-A date: 20thJune2021 annotations: sidecar.istio.io/rewriteAppHTTPProbers: &quot;false&quot; proxy.istio.io/config: '{ &quot;holdApplicationUntilProxyStarts&quot;: true }' spec: containers: - name: curl image: ewoutp/docker-nginx-curl imagePullPolicy: IfNotPresent command: [ 'sh', '-c', 'touch /tmp/healthy; echo The Pod is running &amp;&amp; sleep 50' ] livenessProbe: exec: command: - cat - /tmp/healthy initialDelaySeconds: 15 periodSeconds: 5 lifecycle: postStart: exec: command: [ &quot;/bin/sh&quot;, &quot;-c&quot;, 'sleep 10;until [ $(eval curl -o -I -L -s -w &quot;%{http_code}&quot; http://microservice-B-api-service:9439/manage/health) -eq 200 ]; do echo &quot;Waiting for microservice-B API&quot;;sleep 10; done; exit 0' ] - name: Microservice-A image: microserviceA:latest imagePullPolicy: Always ports:[![enter image description here][1]][1] - name: port containerPort: 8080 livenessProbe: httpGet: path: /actuator/health port: 8080 initialDelaySeconds: 120 periodSeconds: 30 timeoutSeconds: 30 imagePullSecrets: - name: dockersecret </code></pre> <p>Note: Reason for not using init-container: As we have implemented Istio with strict MTLS policy. <a href="https://github.com/istio/istio/issues/32039" rel="nofollow noreferrer">https://github.com/istio/istio/issues/32039</a></p> <p>Found below while searching for this issue on internet. <a href="https://i.stack.imgur.com/bbTwh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bbTwh.png" alt="enter image description here" /></a></p>
Ankita Sawant
<p>That is because your command in postStart is sleeping for 10 seconds and your <code>LivenessProbe</code> is configured to fail after 5 seconds.</p> <p>Maybe increase <code>initialDelaySeconds</code> or add a <code>failureThreshold</code>.</p>
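<p>As an illustration of that suggestion (the values are only an example; tune them to how long the dependency check can realistically take), the probe on the curl container could be relaxed like this:</p> <pre><code>livenessProbe:
  exec:
    command:
      - cat
      - /tmp/healthy
  # Give the postStart hook time to finish polling microservice B
  initialDelaySeconds: 60
  periodSeconds: 10
  # Tolerate a few failed checks before restarting the container
  failureThreshold: 5
</code></pre>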
PSchoe
<p><strong>skaffold.yaml</strong></p> <pre><code>apiVersion: skaffold/v2alpha3 kind: Config deploy: kubectl: manifests: - ./infra/k8s/* build: local: push: false artifacts: - image: karan346/auth context: auth docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' des: . </code></pre> <p><strong>Error</strong></p> <pre><code>parsing skaffold config: error parsing skaffold configuration file: unable to parse config: yaml: unmarshal errors: line 10: field des not found in type v2alpha3.SyncRule </code></pre> <p>Not able to fix the issue. Everything is setup correctly.</p> <p>Also, is there any version that is stable and won't give errors in the future?</p>
Karan Sharma
<p>The error you're facing is:</p> <pre><code>line 10: field des not found in type v2alpha3.SyncRule </code></pre> <p>There's no field <code>des</code> in this <code>apiVersion</code> and <code>kind</code>.</p> <p>Based on the <a href="https://skaffold.dev/docs/pipeline-stages/filesync/#manual-sync-mode" rel="nofollow noreferrer">documentation for manual file sync</a>, the field should be named <code>dest</code>. See the example below:</p> <pre><code>build: artifacts: - image: gcr.io/k8s-skaffold/node-example context: node sync: manual: # sync a single file into the `/etc` folder - src: '.filebaserc' dest: /etc </code></pre> <hr /> <p>The last available <code>apiVersion</code> of <code>skaffold</code> at the moment of posting this answer is <code>skaffold/v2beta26</code>.</p> <p>It can always be checked in the <a href="https://skaffold.dev/docs/references/yaml/" rel="nofollow noreferrer">skaffold.yaml documentation</a>.</p>
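<p>Applied to the configuration from the question, the corrected sync block would read (only the misspelled key changes):</p> <pre><code>sync:
  manual:
    - src: 'src/**/*.ts'
      dest: .
</code></pre>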
moonkotte
<p>I've tried to install Jenkins on minikube according to this article: <a href="https://www.jenkins.io/doc/book/installing/kubernetes/" rel="nofollow noreferrer">https://www.jenkins.io/doc/book/installing/kubernetes/</a></p> <p>When I type <code>kubectl logs pod/jenkins-0 init -n jenkins</code> I get</p> <pre><code>disable Setup Wizard /var/jenkins_config/apply_config.sh: 4: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/jenkins.install.UpgradeWizard.state: Permission denied </code></pre> <p>I am almost sure that I have some problem with the file system on my Mac.</p> <p>I did not create the serviceAccount from the article because Helm did not see it and returned an error.</p> <p>Instead, I changed this in jenkins-values.yaml:</p> <pre><code>serviceAccount: create: true name: jenkins annotations: {} </code></pre> <p>Then I tried setting the following values to 0. It had no effect.</p> <pre><code> runAsUser: 1000 fsGroup: 1000 </code></pre> <p>Additional info: kubectl get all -n jenkins</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/jenkins-0 0/2 Init:CrashLoopBackOff 7 15m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/jenkins ClusterIP 10.104.114.29 &lt;none&gt; 8080/TCP 15m service/jenkins-agent ClusterIP 10.104.207.201 &lt;none&gt; 50000/TCP 15m NAME READY AGE statefulset.apps/jenkins 0/1 15m </code></pre> <p>I also tried using different directories for the volume, like /Volumes/data, and adding 777 permissions to them.</p>
Edgar Kovalenko
<p>There are a couple potentials in here, but there is a solution without switching to <code>runAsUser 0</code> (which breaks security assessments).</p> <p>The folder <code>/data/jenkins-volume</code> is created as root by default, with a 755 permission set so you can't create persistent data in this dir with the default jenkins build.</p> <ul> <li>To fix this, enter minikube with <code>$ minikube ssh</code> and run: <code>$ chown 1000:1000 /data/jenkins-volume</code></li> </ul> <p>The other thing that could be biting you (after fixing the folder permissions) is SELinux policies, when you are running your Kubernetes on a RHEL based OS.</p> <ul> <li>To fix this: <code>$ chcon -R -t svirt_sandbox_file_t /data/jenkins-volume</code></li> </ul>
Brock R.