prompt: string (lengths 65 to 38.7k)
response: string (lengths 41 to 29.1k)
<p>I have a custom operator that listens to changes in a CRD I've defined in a Kubernetes cluster.</p> <p>Whenever something changes in the defined custom resource, the custom operator reconciles and idempotently creates a secret (that is owned by the custom resource).</p> <hr /> <p><strong>What I expect is for the operator to Reconcile only when something changes in the custom resource or in the secret owned by it</strong>.</p> <p><strong>What I observe is that for some reason the <code>Reconcile</code> function triggers for every CR on the cluster at <em>strange intervals</em> without observable changes to related entities</strong>. I've tried focusing on a specific instance of the CR and following the times at which <code>Reconcile</code> was called for it. The intervals of these calls are very strange. It seems that the calls alternate between two series: one starts at 10 hours and diminishes by 7 minutes at a time; the other starts at 7 minutes and grows by 7 minutes at a time.</p> <p>To demonstrate, <code>Reconcile</code> triggered at these times (give or take a few seconds):</p> <pre><code>00:00 09:53 (10 hours - 1*7 minute interval) 10:00 (0 hours + 1*7 minute interval) 19:46 (10 hours - 2*7 minute interval) 20:00 (0 hours + 2*7 minute interval) 29:39 (10 hours - 3*7 minute interval) 30:00 (0 hours + 3*7 minute interval) </code></pre> <p>Whenever the diminishing intervals become less than 7 hours, the series resets back to 10-hour intervals. The same happens with the growing series: as soon as the intervals are higher than 3 hours it resets back to 7 minutes.</p> <hr /> <p><strong>My main question is: how can I investigate why Reconcile is being triggered?</strong></p> <p>I'm attaching the manifests for the CRD, the operator and a sample manifest for a CR:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: controller-gen.kubebuilder.io/version: v0.4.1 creationTimestamp: &quot;2021-10-13T11:04:42Z&quot; generation: 1 name: databaseservices.operators.talon.one resourceVersion: &quot;245688703&quot; uid: 477f8d3e-c19b-43d7-ab59-65198b3c0108 spec: conversion: strategy: None group: operators.talon.one names: kind: DatabaseService listKind: DatabaseServiceList plural: databaseservices singular: databaseservice scope: Namespaced versions: - name: v1alpha1 schema: openAPIV3Schema: description: DatabaseService is the Schema for the databaseservices API properties: apiVersion: description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' type: string kind: description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' type: string metadata: type: object spec: description: DatabaseServiceSpec defines the desired state of DatabaseService properties: cloud: type: string databaseName: description: Foo is an example field of DatabaseService. 
Edit databaseservice_types.go to remove/update type: string serviceName: type: string servicePlan: type: string required: - cloud - databaseName - serviceName - servicePlan type: object status: description: DatabaseServiceStatus defines the observed state of DatabaseService type: object type: object served: true storage: true subresources: status: {} status: acceptedNames: kind: DatabaseService listKind: DatabaseServiceList plural: databaseservices singular: databaseservice conditions: - lastTransitionTime: &quot;2021-10-13T11:04:42Z&quot; message: no conflicts found reason: NoConflicts status: &quot;True&quot; type: NamesAccepted - lastTransitionTime: &quot;2021-10-13T11:04:42Z&quot; message: the initial names have been accepted reason: InitialNamesAccepted status: &quot;True&quot; type: Established storedVersions: - v1alpha1 ---- apiVersion: operators.talon.one/v1alpha1 kind: DatabaseService metadata: creationTimestamp: &quot;2021-10-13T11:14:08Z&quot; generation: 1 labels: app: talon company: amber repo: talon-service name: db-service-secret namespace: amber resourceVersion: &quot;245692590&quot; uid: cc369297-6825-4fbf-aa0b-58c24be427b0 spec: cloud: google-australia-southeast1 databaseName: amber serviceName: pg-amber servicePlan: business-4 ---- apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: &quot;75&quot; secret.reloader.stakater.com/reload: db-credentials simpledeployer.talon.one/image: &lt;path_to_image&gt;/production:latest creationTimestamp: &quot;2020-06-22T09:20:06Z&quot; generation: 77 labels: simpledeployer.talon.one/enabled: &quot;true&quot; name: db-operator namespace: db-operator resourceVersion: &quot;245688814&quot; uid: 900424cd-b469-11ea-b661-4201ac100014 spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: name: db-operator strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: name: db-operator spec: containers: - command: - app/db-operator env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: OPERATOR_NAME value: db-operator - name: AIVEN_PASSWORD valueFrom: secretKeyRef: key: password name: db-credentials - name: AIVEN_PROJECT valueFrom: secretKeyRef: key: projectname name: db-credentials - name: AIVEN_USERNAME valueFrom: secretKeyRef: key: username name: db-credentials - name: SENTRY_URL valueFrom: secretKeyRef: key: sentry_url name: db-credentials - name: ROTATION_INTERVAL value: monthly image: &lt;path_to_image&gt;/production@sha256:&lt;some_sha&gt; imagePullPolicy: Always name: db-operator resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: db-operator serviceAccountName: db-operator terminationGracePeriodSeconds: 30 status: availableReplicas: 1 conditions: - lastTransitionTime: &quot;2020-06-22T09:20:06Z&quot; lastUpdateTime: &quot;2021-09-07T11:56:07Z&quot; message: ReplicaSet &quot;db-operator-cb6556b76&quot; has successfully progressed. reason: NewReplicaSetAvailable status: &quot;True&quot; type: Progressing - lastTransitionTime: &quot;2021-09-12T03:56:19Z&quot; lastUpdateTime: &quot;2021-09-12T03:56:19Z&quot; message: Deployment has minimum availability. 
reason: MinimumReplicasAvailable status: &quot;True&quot; type: Available observedGeneration: 77 readyReplicas: 1 replicas: 1 updatedReplicas: 1 </code></pre> <hr /> <p>Note:</p> <ul> <li>When Reconcile finishes, I return:</li> </ul> <pre><code>return ctrl.Result{Requeue: false, RequeueAfter: 0} </code></pre> <p>So that shouldn't be the reason for the repeated triggers.</p> <ul> <li>I will add that I have recently updated the Kubernetes cluster version to v1.20.8-gke.2101.</li> </ul>
<p>This would require more info on how your controller is set up; for example, what sync period have you set? This could be due to the default sync period, which reconciles all watched objects at a given interval of time. The roughly 10-hour spacing you observe is consistent with that default.</p> <blockquote> <p>SyncPeriod determines the minimum frequency at which watched resources are reconciled. A lower period will correct entropy more quickly, but reduce responsiveness to change if there are many watched resources. Change this value only if you know what you are doing. Defaults to 10 hours if unset. there will a 10 percent jitter between the SyncPeriod of all controllers so that all controllers will not send list requests simultaneously.</p> </blockquote> <p>For more information check this: <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.2/pkg/manager/manager.go#L134" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.2/pkg/manager/manager.go#L134</a></p>
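<p>For illustration, here is a minimal sketch of making the resync interval explicit, against controller-runtime v0.11 where <code>SyncPeriod</code> is a manager option (in newer releases the option moved under the cache options); the chosen duration is just an example:</p> <pre><code>package main

import (
    &quot;time&quot;

    ctrl &quot;sigs.k8s.io/controller-runtime&quot;
)

func main() {
    // Make the periodic resync explicit instead of relying on the ~10h default
    // (plus jitter). Reconcile is still called for every watched object once
    // per period; this only controls how often that happens.
    syncPeriod := 24 * time.Hour

    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        SyncPeriod: &amp;syncPeriod,
    })
    if err != nil {
        panic(err)
    }

    // Register the DatabaseService reconciler with mgr as usual, then:
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        panic(err)
    }
}
</code></pre>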
<p>I would like to write a small program similar to <code>kubectl explain</code>.</p> <p>I use the python client.</p> <p>With <code>kubectl explain pods -v=8</code> I see which APIs get called.</p> <p>The URL is <code>/openapi/v2</code>.</p> <p>I tried this:</p> <pre><code>from kubernetes import client, config # Configs can be set in Configuration class directly or using helper utility from kubernetes.client import ApiClient config.load_kube_config() print(ApiClient().call_api('/openapi/v2', method='GET')) </code></pre> <p>But the returned data is <code>None</code>:</p> <pre><code>(None, 200, HTTPHeaderDict({'Accept-Ranges': 'bytes', 'Audit-Id': '5f025f01-cab9-4816-8579-751b47604275', 'Cache-Control': 'no-cache, private', 'Content-Length': '3315308', 'Content-Type': 'text/plain; charset=utf-8', 'Etag': '&quot;194A5412D92C8239FAA388BD61A2729940609093EE00703602A983C97E2D7FD9FFA0E25F481A2659782EC80339F6A25CD9FD414B8D652409E1B521BB4F53E5DB&quot;', 'Last-Modified': 'Thu, 31 Mar 2022 17:51:05 GMT', 'Vary': 'Accept-Encoding, Accept', 'X-Kubernetes-Pf-Flowschema-Uid': 'f70aa7db-e8d7-4690-becf-40ac57d88c1f', 'X-Kubernetes-Pf-Prioritylevel-Uid': '5c900157-e070-46c3-b774-a77dfa6128bc', 'Date': 'Sat, 02 Apr 2022 21:29:56 GMT'})) </code></pre> <p>How can I get the nice docs which <code>kubectl explain</code> shows via Python?</p>
<p>You're already getting the data; it is just dropped during post-processing, because the client has no response type to deserialize it into :) To turn off post-processing, you need to pass the <code>_preload_content=False</code> argument to <code>call_api</code>.<br /> Then the code will look something like this:</p> <pre><code>import json from kubernetes import client, config # Configs can be set in Configuration class directly or using helper utility from kubernetes.client import ApiClient config.load_kube_config() apiClient = ApiClient() answer = apiClient.call_api('/openapi/v2', method='GET', _preload_content=False) data = json.loads(answer[0].data) print(data) </code></pre> <p>If you only want to get the description, you can use curl like this with Bearer auth: <a href="https://blog.ronnyvdb.net/2019/08/07/howto-curl-the-kubernetes-api-server" rel="nofollow noreferrer">https://blog.ronnyvdb.net/2019/08/07/howto-curl-the-kubernetes-api-server</a></p> <pre><code> curl -s $APISERVER/openapi/v2 --header &quot;Authorization: Bearer $TOKEN&quot; --cacert ca.crt </code></pre> <p>Or with TLS auth:</p> <pre><code> curl -s $APISERVER/openapi/v2 --cert client.crt --key client.key --cacert ca.crt </code></pre> <p>After that, you can use tools that work with the OpenAPI description: <a href="https://openapi.tools" rel="nofollow noreferrer">https://openapi.tools</a></p> <p>For example, upload the JSON to <a href="https://mrin9.github.io/OpenAPI-Viewer" rel="nofollow noreferrer">https://mrin9.github.io/OpenAPI-Viewer</a> and enjoy.</p>
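<p>As a small sketch of what you can do with the loaded document (this assumes the standard OpenAPI v2 layout served by the API server, where resource schemas live under <code>definitions</code> with keys such as <code>io.k8s.api.core.v1.Pod</code>), you can approximate <code>kubectl explain pods</code> like this:</p> <pre><code>import json

from kubernetes import config
from kubernetes.client import ApiClient

config.load_kube_config()
api_client = ApiClient()
raw = api_client.call_api('/openapi/v2', method='GET', _preload_content=False)
data = json.loads(raw[0].data)

# Roughly what `kubectl explain pods` prints: the Pod schema and its top-level fields
pod = data[&quot;definitions&quot;][&quot;io.k8s.api.core.v1.Pod&quot;]
print(pod.get(&quot;description&quot;, &quot;&quot;))
for field, schema in pod.get(&quot;properties&quot;, {}).items():
    print(f&quot;  {field}: {schema.get('description', '')[:80]}&quot;)
</code></pre>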
<p>I understand that, in Cloud scenarios, a LoadBalancer resource refers to and provisions an external layer 4 load balancer. There is some proprietary integration by which the cluster configures this load balancing. OK.</p> <p>On-prem we have no such integration, so we create our own load balancer outside of the cluster, which distributes traffic to all nodes (effectively, in our case, to the ingress).</p> <p>I see no need for the cluster to know anything about this external load balancer.</p> <p>What is a LoadBalancer* then in an on-prem scenario? Why do I have them? Why do I need one? What does it do? What happens when I create one? What role does the LoadBalancer resource play in ingress traffic? What effect does the LoadBalancer have outside the cluster? Is a LoadBalancer just a way to get a new IP address? How/why/where does the IP point, and where does the IP come from?</p> <ul> <li>All questions refer to the Service of type “LoadBalancer” inside the cluster, and not my load balancer outside the cluster, of which the cluster has no knowledge.</li> </ul>
<p>As pointed out in the comments, a kubernetes service of type LoadBalancer cannot be used by default with on-prem setups. You can use <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">metallb</a> to set up a service of that type in an on-prem environment.</p> <blockquote> <p>Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. [...] If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created. [...] MetalLB aims to redress this imbalance by offering a network load balancer implementation that integrates with standard network equipment, so that external services on bare-metal clusters also “just work” as much as possible.</p> </blockquote> <p>You can, for example, use the BGP mode to advertise the service's IP to your router; read more on that in the <a href="https://metallb.universe.tf/concepts/bgp/" rel="nofollow noreferrer">docs</a>.</p> <p>The project is still in beta but is promoted as production ready and used by several bigger companies.</p> <p><strong>Edit</strong></p> <p>Regarding your question in the comments:</p> <blockquote> <p>Can I just broadcast the MAC address of my node and manually add the IP I am broadcasting to the LoadBalancer service via kubectl edit?</p> </blockquote> <p>Yes, that would work too. That's basically what metallb does: announcing the IP and updating the service.</p> <p>Why do you need software for it then? Imagine having 500 hosts that come and go, with thousands of services of type <code>LoadBalancer</code> that come and go. You need automation here.</p> <blockquote> <p>Why does Kubernetes need to know this IP?</p> </blockquote> <p>It doesn't. If you don't use an external IP, the service is still usable via its NodePort; see for example the <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">istio docs</a> (with a few more details added by me):</p> <pre><code>$ kubectl get svc istio-ingressgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) istio-ingressgateway LoadBalancer 192.12.129.119 &lt;Pending&gt; [...],80:32123/TCP,443:30994/TCP,[...] </code></pre> <p>Here the external IP is not set and stays in <code>&lt;Pending&gt;</code>. You can still use the service by pointing your traffic to <code>&lt;Node-IP&gt;:32123</code> for plain http and to <code>&lt;Node-IP&gt;:30994</code> for https. As you can see above, those ports are mapped to 80 and 443.</p> <p>If the external IP is set, you can direct traffic directly to ports 80 and 443 on the external load balancer. Kube-proxy will create an iptables chain with the destination of your external IP that basically leads from the external IP over the service IP, with a load-balancing configuration, to a pod IP.</p> <p>To investigate that, set up a service of type LoadBalancer, make sure it has an external IP, connect to the host and run the iptables-save command (e.g. <code>iptables-save | less</code>). Search for the external IP and follow the chain until you end up at the pod.</p>
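<p>For reference, a minimal sketch of the classic (pre-0.13, ConfigMap-based) MetalLB configuration, using the simpler layer 2 mode rather than BGP; the address range is just an example, and newer MetalLB releases configure this through CRDs such as <code>IPAddressPool</code> instead:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
</code></pre> <p>With a pool like that in place, Services of type <code>LoadBalancer</code> get an external IP from the pool instead of staying in <code>&lt;Pending&gt;</code>.</p>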
<p>I am trying to keep track of memory usage for a pod in k8s. Does the metric <code>kubernetes.pod.memory.usage</code> count cached/buffer size? If yes, which metric should I use to keep track of actual memory usage?</p>
<ol> <li>Does the metric kubernetes.pod.memory.usage count cached/buffer size? The answer is <strong>Yes</strong>.</li> </ol> <p><a href="https://i.stack.imgur.com/VHTHE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VHTHE.png" alt="enter image description here" /></a></p> <ol start="2"> <li>Which metric should I use to keep track of actual memory usage? <code>container_memory_working_set_bytes</code></li> </ol> <p><a href="https://faun.pub/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d" rel="nofollow noreferrer">This article</a> is recommended:</p> <blockquote> <p>You might think that memory utilization is easily tracked with <code>container_memory_usage_bytes</code>, however, this metric also includes cached (think filesystem cache) items that can be evicted under memory pressure. The better metric is <code>container_memory_working_set_bytes</code> as this is what the OOM killer is watching for.</p> </blockquote>
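<p>If you scrape these cAdvisor metrics with Prometheus (an assumption; the namespace and pod names below are placeholders, and the label names follow current kubelet/cAdvisor conventions), a query for the OOM-relevant number per container would look something like:</p> <pre><code># Working-set memory per container of one pod, excluding the pause container
sum by (container) (
  container_memory_working_set_bytes{namespace=&quot;my-namespace&quot;, pod=&quot;my-pod&quot;, container!=&quot;&quot;, container!=&quot;POD&quot;}
)
</code></pre>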
<p>I am trying to start up a couple of containers locally using k8s, but container creation is stopped because of <strong>ImagePullBackOff</strong>, <strong>ErrImagePull</strong>. The yaml is fine; I tested it on another workstation. And I can pull images using regular docker. But it fails in the k8s/minikube environment.</p> <p>The error from the container logs is</p> <pre><code>Error from server (BadRequest): container &quot;mongo-express&quot; in pod &quot;mongoexpress-deployment-bd7cf697b-nc4h5&quot; is waiting to start: trying and failing to pull image </code></pre> <p>The error in the minikube dashboard is</p> <pre><code>Failed to pull image &quot;docker.io/mongo&quot;: rpc error: code = Unknown desc = Error response from daemon: Get &quot;https://registry-1.docker.io/v2/&quot;: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>I tried pulling the image to my local docker cache and running</p> <pre><code>eval $(minikube docker-env) </code></pre> <p>But I keep getting this error. It doesn't see the local image repository and it doesn't download the image by itself.</p> <p>I am 100% sure it has something to do with user access on Fedora. But I don't have any idea what to do, and I've been trying to fix this for a couple of days :(.</p> <p>Please help, thank you</p> <p>Don't know if this helps: I tried using <a href="https://k3s.io/" rel="nofollow noreferrer">k3s</a>. The image pull is successful, but minikube isn't compatible with it on Fedora.</p> <p>Also... If I try using docker without sudo it doesn't pull images. With sudo it pulls.</p> <p>The OS is Fedora, and I am using docker, kubernetes, minikube, and podman as the driver.</p> <pre><code>- linux version NAME=&quot;Fedora Linux&quot; VERSION=&quot;35 (Workstation Edition)&quot; - kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;23&quot;, GitVersion:&quot;v1.23.5&quot;, - docker version Version: 20.10.12 - minikube version minikube version: v1.25.2 </code></pre> <p>I am trying to start up this yaml file locally:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: mongodb-secret type: Opaque data: mongo-root-username: dXNlcm5hbWU= mongo-root-password: cGFzc3dvcmQ= --- apiVersion: v1 kind: ConfigMap metadata: name: mongodb-configmap data: database_url: mongodb-service --- apiVersion: apps/v1 kind: Deployment metadata: name: mongoexpress-deployment labels: app: mongoexpress spec: replicas: 1 selector: matchLabels: app: mongoexpress template: metadata: labels: app: mongoexpress spec: containers: - name: mongo-express image: mongo-express ports: - containerPort: 8081 env: - name: ME_CONFIG_MONGODB_ADMINUSERNAME valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-username - name: ME_CONFIG_MONGODB_ADMINPASSWORD valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-password - name: ME_CONFIG_MONGODB_SERVER valueFrom: configMapKeyRef: name: mongodb-configmap key: database_url - name: WHATEVER value: Someconfig --- apiVersion: apps/v1 kind: Deployment metadata: name: mongodb-deployment labels: app: mongodb spec: replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo ports: - containerPort: 27017 env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-username - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-password --- apiVersion: v1 kind: Service metadata: name: mongodb-service spec: selector: app: 
mongodb ports: - protocol: TCP port: 27017 targetPort: 27017 --- apiVersion: v1 kind: Service metadata: name: mongoexpress-service spec: selector: app: mongoexpress-deployment type: LoadBalancer ports: - protocol: TCP port: 8081 targetPort: 8081 nodePort: 30000 </code></pre>
<p>Based on the comments, my suggestion is to use the docker driver, since Docker has been installed in the system and is the <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">preferred stable driver</a>.</p> <pre class="lang-sh prettyprint-override"><code>minikube start --driver=docker </code></pre> <p>You can also set this as the default driver.</p> <pre class="lang-sh prettyprint-override"><code>minikube config set driver docker minikube start </code></pre> <p>That doesn't explain why it doesn't work with podman, though.</p>
<p>I have a pod that I can see on GKE. But if I try to delete it, I get the error:</p> <pre><code>kubectl delete pod my-pod --namespace=kube-system --context=cluster-1 </code></pre> <blockquote> <p>Error from server (NotFound): pods &quot;my-pod&quot; not found</p> </blockquote> <p>However, if I try to patch it, the operation completes successfully:</p> <pre><code>kubectl patch deployment my-pod --namespace kube-system -p &quot;{\&quot;spec\&quot;:{\&quot;template\&quot;:{\&quot;metadata\&quot;:{\&quot;annotations\&quot;:{\&quot;secrets-update\&quot;:\&quot;`date +'%s'`\&quot;}}}}}&quot; --context=cluster-1 </code></pre> <blockquote> <p>deployment.apps/my-pod patched</p> </blockquote> <p>Same namespace, same context, same pod. Why does kubectl fail to delete the pod?</p>
<pre><code>kubectl patch deployment my-pod --namespace kube-system -p &quot;{\&quot;spec\&quot;:{\&quot;template\&quot;:{\&quot;metadata\&quot;:{\&quot;annotations\&quot;:{\&quot;secrets-update\&quot;:\&quot;`date +'%s'`\&quot;}}}}}&quot; --context=cluster-1 </code></pre> <p>You are patching the <strong>deployment</strong> here, not the pod.</p> <p>Additionally, your <strong>pod</strong> will not be called &quot;my-pod&quot;; it will be named after your deployment plus a hash (a random set of letters and numbers), something like &quot;my-pod-ace3g&quot;.</p> <p>To see the pods in the namespace, use</p> <p><code>kubectl get pods -n {namespace}</code></p> <p>Since you've put the deployment in the &quot;kube-system&quot; namespace, you would use</p> <p><code>kubectl get pods -n kube-system</code></p> <p>Side note: Generally don't use the <code>kube-system</code> namespace unless your deployment is related to the cluster functionality. There's a namespace called <code>default</code> you can use to test things.</p>
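<p>Putting that together (the generated pod name below is made up; use whatever <code>kubectl get pods</code> actually prints for you):</p> <pre><code># Find the real pod name behind the deployment, then delete that exact name
kubectl get pods -n kube-system --context=cluster-1 | grep my-pod
kubectl delete pod my-pod-7d9c6b57f-ace3g -n kube-system --context=cluster-1
</code></pre> <p>Deleting a deployment-managed pod just makes its ReplicaSet create a fresh one, which is usually what you want when forcing a restart.</p>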
<p>I have a war file that I deployed with Tomcat.<br /> I have a Dockerfile and at the end I build a kubernetes pod.</p> <p>The problem is that if my app's property files exist in the path <code>/usr/local/tomcat/webapps/myapp/WEB-INF/classes/config/</code> and not in the path <code>/usr/local/tomcat/webapps/myapp/WEB-INF/classes/</code>, the application does not start.</p> <p>Is it possible to set a classpath in Tomcat to point to a specific folder?<br /> For example, I want to set the classpath to <code>/usr/local/tomcat/webapps/myapp/WEB-INF/classes/config/</code>.<br /> I don't want to have duplicate property files.</p>
<p>As <a href="https://stackoverflow.com/a/2161583/6309">mentioned here</a>:</p> <blockquote> <p><code>foo.properties</code> is supposed to be placed in one of the roots which are covered by the default classpath of a webapp, e.g. webapp's <code>/WEB-INF/lib</code> and /WEB-INF/classes, server's /lib, or JDK/JRE's /lib.</p> <ul> <li>If the properties file is webapp-specific, best is to place it in <code>/WEB-INF/classes</code>.</li> <li>If you're developing a standard WAR project in an IDE, drop it in <code>src</code> folder (the project's source folder).</li> <li>If you're using a Maven project, drop it in <code>/main/resources</code> folder.</li> </ul> <p>You can alternatively also put it somewhere outside the default classpath and add its path to the classpath of the appserver.<br /> <strong>In for example Tomcat you can configure it as <code>shared.loader</code> property of <code>Tomcat/conf/catalina.properties</code>.</strong></p> </blockquote> <p><a href="https://gist.github.com/ghusta/12b50687a39bd02a88680df450a840f4" rel="nofollow noreferrer">Example</a>:</p> <pre><code>FROM tomcat:8.5-jre8 # $CATALINA_HOME is defined in tomcat image ADD target/my-webapp*.war $CATALINA_HOME/webapps/my-webapp.war # Application config RUN mkdir $CATALINA_HOME/app_conf/ ADD src/main/config/test.properties $CATALINA_HOME/app_conf/ # Modify property 'shared.loader' in catalina.properties RUN sed -i -e 's/^shared.loader=$/shared.loader=&quot;${catalina.base} \/ app_conf&quot;/' $CATALINA_HOME/conf/catalina.properties </code></pre>
<p>I am trying to patch a secret using kubectl:</p> <pre><code>kubectl patch secret operator-secrets --namespace kube-system --context=cluster1 --patch &quot;'{\&quot;data\&quot;: {\&quot;FOOBAR\&quot;: \&quot;$FOOBAR\&quot;}}'&quot; </code></pre> <p>But I receive the error:</p> <blockquote> <p>Error from server (BadRequest): json: cannot unmarshal string into Go value of type map[string]interface {}</p> </blockquote> <p>If I run the command using echo, it seems to be valid JSON:</p> <pre><code>$ echo &quot;'{\&quot;data\&quot;: {\&quot;FOOBAR\&quot;: \&quot;$FOOBAR\&quot;}}'&quot; '{&quot;data&quot;: {&quot;FOOBAR&quot;: &quot;value that I want&quot;}}' </code></pre> <p>What can the problem be?</p>
<blockquote> <p>If I run the command using echo, it seems to be a valid JSON</p> </blockquote> <p>In fact, it does not. Look carefully at the first character of the output:</p> <pre><code>'{&quot;data&quot;: {&quot;FOOBAR&quot;: &quot;value that I want&quot;}}' </code></pre> <p>Your &quot;JSON&quot; string starts with a single quote, which is an invalid character. To get valid JSON, you would need to rewrite your command to look like this:</p> <pre><code>echo &quot;{\&quot;data\&quot;: {\&quot;FOOBAR\&quot;: \&quot;$FOOBAR\&quot;}}&quot; </code></pre> <p>And we can confirm that's valid JSON using something like the <code>jq</code> command:</p> <pre><code>$ echo &quot;{\&quot;data\&quot;: {\&quot;FOOBAR\&quot;: \&quot;$FOOBAR\&quot;}}&quot; | jq . { &quot;data&quot;: { &quot;FOOBAR&quot;: &quot;value that i want&quot; } } </code></pre> <p>Making your patch command look like:</p> <pre><code>kubectl patch secret operator-secrets \ --namespace kube-system \ --context=cluster1 \ --patch &quot;{\&quot;data\&quot;: {\&quot;FOOBAR\&quot;: \&quot;$FOOBAR\&quot;}}&quot; </code></pre> <p>But while that patch is now valid JSON, it's still going to fail with a new error:</p> <pre><code>The request is invalid: patch: Invalid value: &quot;map[data:map[FOOBAR:value that i want]]&quot;: error decoding from json: illegal base64 data at input byte 5 </code></pre> <p>The value of items in the <code>data</code> map must be base64 encoded values. You can either base64 encode the value yourself:</p> <pre><code>kubectl patch secret operator-secrets \ --namespace kube-system \ --context=cluster1 \ --patch &quot;{\&quot;data\&quot;: {\&quot;FOOBAR\&quot;: \&quot;$(base64 &lt;&lt;&lt;&quot;$FOOBAR&quot;)\&quot;}}&quot; </code></pre> <p>Or use <code>stringData</code> instead:</p> <pre><code>kubectl patch secret operator-secrets \ --namespace kube-system \ --context=cluster1 \ --patch &quot;{\&quot;stringData\&quot;: {\&quot;FOOBAR\&quot;: \&quot;$FOOBAR\&quot;}}&quot; </code></pre>
<p>I have a container with a dotnet core application running in Azure Kubernetes Service. No memory limits were specified in the Pod spec.</p> <p>The question is: why does <code>GC.GetTotalMemory(false)</code> show approx. 3 GB of memory used while AKS Insights shows 9.5 GB for this Pod's container?</p> <p><a href="https://i.stack.imgur.com/rRBa9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rRBa9.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/nmVD4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nmVD4.png" alt="enter image description here" /></a></p> <p>Running <code>top</code> reveals these 9.5 GB: <a href="https://i.stack.imgur.com/rATs5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rATs5.png" alt="enter image description here" /></a></p>
<p>As I understand it, <code>GC.GetTotalMemory(false)</code> returns only the size of the managed objects in bytes. The entire working set is much larger: memory is allocated in pages, the managed heap gets fragmented, and with <code>false</code> no garbage collection is forced before measuring, so the process holds considerably more memory than the managed objects alone.</p>
<p>I have some images on my local docker instance, one of which is named <code>door_controls</code>:</p> <pre><code>&gt;docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE door_controls latest d22f58cdc9c1 3 hours ago 1.12GB [...] </code></pre> <p>This image is also deployed to my minikube instance:</p> <pre><code>&gt;minikube ssh -- docker image ls door_controls latest d22f58cdc9c1 3 hours ago 1.12GB [...] </code></pre> <p>(It is also clear to me that these are different docker daemons, because the one on minikube also lists the k8s daemons). The documentation (and <a href="https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube">this canonical</a> Stackoverflow answer) suggests that</p> <pre><code>minikube image load door_controls minikube kubectl run door-controls --image=door_controls --port=7777 </code></pre> <p>is the way to deploy this image from the command line, but the event log tells me something failed along the way:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 59s default-scheduler Successfully assigned default/door-controls to minikube Normal Pulling 17s (x3 over 59s) kubelet Pulling image &quot;door_controls&quot; Warning Failed 16s (x3 over 57s) kubelet Failed to pull image &quot;door_controls&quot;: rpc error: code = Unknown desc = Error response from daemon: pull access denied for door_controls, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Warning Failed 16s (x3 over 57s) kubelet Error: ErrImagePull Normal BackOff 1s (x3 over 57s) kubelet Back-off pulling image &quot;door_controls&quot; Warning Failed 1s (x3 over 57s) kubelet Error: ImagePullBackOff </code></pre> <p>In the classic tradition of invoking commands I don't understand in the hope that it fixes the problem, I have followed the advice of another answer and tried setting the docker daemon to minikube's:</p> <pre><code>&gt;eval $(minikube -p minikube docker-env) //these aren't echoed by the shell of course export DOCKER_TLS_VERIFY=&quot;1&quot; export DOCKER_HOST=&quot;tcp://127.0.0.1:54664&quot; export DOCKER_CERT_PATH=&quot;/Users/airza/.minikube/certs&quot; export MINIKUBE_ACTIVE_DOCKERD=&quot;minikube&quot; </code></pre> <p>It seems as though the default configuration wants TLS enabled (from localhost to localhost?) but I am not sure how to turn it off or how to add valid TLS from localhost to localhost in the first place.</p> <p>It is also not clear if this is even the issue or if something else is. Is my nomenclature for the image wrong? Do I need to specify a repo? Why does this image not deploy?</p>
<p>Minikube comes with its own docker daemon and is not able to find your local images by default. The steps below work in my local environment; I noticed the first step is already done, and it looks like the image is still being pulled, so step 2 might solve the problem.</p> <ol> <li>Set the environment variables with <code>eval $(minikube docker-env)</code>; I see you have set this already.</li> <li>Set <code>imagePullPolicy</code> to <code>Never</code> in order to use local docker images with the deployment. This ensures that the image is not pulled from the docker repo.</li> </ol> <p>You can try running the below yaml on your cluster for your use case (note that <code>containers</code> is a list and the container name may not contain underscores):</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: door-controls image: door_controls imagePullPolicy: Never ports: - containerPort: 7777 </code></pre> <p>blog article: <a href="https://medium.com/bb-tutorials-and-thoughts/how-to-use-own-local-doker-images-with-minikube-2c1ed0b0968" rel="nofollow noreferrer">https://medium.com/bb-tutorials-and-thoughts/how-to-use-own-local-doker-images-with-minikube-2c1ed0b0968</a></p>
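<p>If you go the <code>docker-env</code> route, remember that the image has to be built (or re-built) while those variables are set, so that it lands in minikube's daemon rather than the host's. A minimal example, assuming the Dockerfile for <code>door_controls</code> is in the current directory:</p> <pre><code>eval $(minikube docker-env)
docker build -t door_controls .
kubectl run door-controls --image=door_controls --image-pull-policy=Never --port=7777
</code></pre>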
<p>On my PC I have multiple network interfaces:</p> <ol> <li><code>lo 127.0.0.1</code> - loopback interface</li> <li><code>enp2s0 192.168.1.244</code> - main interface</li> <li><code>lo:40 192.168.40.1</code> - a virtual loopback device</li> <li>others are irrelevant</li> </ol> <p>I am running apache on both the main interface and the first loopback on ports <code>80</code> and <code>443</code>, and I need that Apache to be undisturbed.</p> <p>So I create a virtual loopback device for kubernetes to use with IP <code>192.168.40.1</code>. But every time I try to attach it to kubernetes, it also grabs the main interface.</p> <p>So far here is my ingress-controller config file (important parts):</p> <pre><code>kind: Service apiVersion: v1 metadata: name: traefik namespace: kube-system spec: loadBalancerIP: 192.168.40.1 externalIPs: - 192.168.40.1 ports: - name: web protocol: TCP port: 1380 targetPort: web nodePort: 32211 - name: websecure protocol: TCP port: 13443 targetPort: websecure nodePort: 32506 selector: app.kubernetes.io/instance: traefik app.kubernetes.io/name: traefik clusterIP: 10.43.181.90 clusterIPs: - 10.43.181.90 type: LoadBalancer sessionAffinity: None externalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack allocateLoadBalancerNodePorts: true internalTrafficPolicy: Cluster </code></pre> <p>I tried changing both <code>externalIp</code> and <code>loadBalancerIP</code> settings but that only made kubernetes grab both <code>192.168.1.244</code> and <code>192.168.40.1</code>.</p> <p>For some reason when I am trying to edit the file it shows an extra read-only section in the dashboard editor:</p> <pre><code>status: loadBalancer: ingress: - ip: 192.168.1.244 </code></pre> <p>I guess that is what's holding the interface.</p> <p>How do I prevent Kubernetes from grabbing the main interface, and make it use only <code>192.168.40.1</code>?</p>
<p>k3s by default binds to the first public system interface.<br /> To override this behavior you need to provide the <code>--node-ip=&lt;listen-ip&gt;</code> parameter to the server process.</p> <p>Since there is no configuration (neither internal nor external) that allows setting that parameter, the only way of setting it is by modifying the systemd service file.</p> <p>You have to modify the <code>/etc/systemd/system/k3s.service</code> file and in the last lines change</p> <pre><code>ExecStart=/usr/local/bin/k3s \ server \ </code></pre> <p>to</p> <pre><code>ExecStart=/usr/local/bin/k3s \ server --node-ip=192.168.40.1 \ </code></pre> <p>and restart kubernetes by running:</p> <pre><code>sudo systemctl daemon-reload sudo systemctl restart k3s.service </code></pre> <p>This way Kubernetes will listen only on <code>192.168.40.1</code>, using the interface assigned to this IP.</p>
<p>I have Windows 11 Home (which does not allow Hyper-V, only Pro edition does). Installed WSL2 and Docker Desktop.</p> <p>Installed Minikube using Chocolatey but it refused to start. Searching on SO, I found this advice in several posts, but it failed to work.</p> <pre><code>PS C:\WINDOWS\system32&gt; docker system prune WARNING! This will remove: - all stopped containers - all networks not used by at least one container - all dangling images - all dangling build cache Are you sure you want to continue? [y/N] y error during connect: In the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect.: Post &quot;http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/prune&quot;: open //./pipe/docker_engine: The system cannot find the file specified. PS C:\WINDOWS\system32&gt; minikube delete * Removed all traces of the &quot;minikube&quot; cluster. PS C:\WINDOWS\system32&gt; minikube start --driver=docker * minikube v1.25.2 on Microsoft Windows 11 Home 10.0.22000 Build 22000 * Using the docker driver based on user configuration X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: &quot;docker version --format -&quot; exit status 1: error during connect: In the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect.: Get &quot;http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/version&quot;: open //./pipe/docker_engine: The system cannot find the file specified. * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/ </code></pre>
<p>I thought of trying to have <em>Docker Desktop</em> already running <strong>before</strong> I start minikube.</p> <p>From the Windows Start menu, I ran <em>Docker Desktop in Administrator mode</em>.</p> <p>Now I ran the command again to remove old stuff,</p> <pre><code>PS C:\WINDOWS\system32&gt; minikube delete * Removed all traces of the &quot;minikube&quot; cluster. </code></pre> <p>and now specify the docker driver</p> <pre><code>PS C:\WINDOWS\system32&gt; minikube start --driver=docker * minikube v1.25.2 on Microsoft Windows 11 Home 10.0.22000 Build 22000 * Using the docker driver based on user configuration * Starting control plane node minikube in cluster minikube * Pulling base image ... &gt; gcr.io/k8s-minikube/kicbase: 379.06 MiB / 379.06 MiB 100.00% 10.23 MiB p * Creating docker container (CPUs=2, Memory=3000MB) ... * Preparing Kubernetes v1.23.3 on Docker 20.10.12 ... - kubelet.housekeeping-interval=5m - Generating certificates and keys ... - Booting up control plane ... - Configuring RBAC rules ... * Verifying Kubernetes components... - Using image gcr.io/k8s-minikube/storage-provisioner:v5 * Enabled addons: storage-provisioner, default-storageclass * Done! kubectl is now configured to use &quot;minikube&quot; cluster and &quot;default&quot; namespace by default </code></pre> <p>I don't know kubernetes as I am learning it, but it appears to have worked. I hope this will be useful to someone so they do not have to go off and spend $99 to upgrade to Windows Pro - as I was going to do if this did not work.</p> <p><strong>Update</strong>: Here is a link with more details <a href="https://juwo.blogspot.com/" rel="nofollow noreferrer">How to run Kubernetes on Windows 11</a></p>
<p>I've been reading about the Kubernetes imagePullPolicy attribute when set to 'always', and it seems like something has changed:</p> <p>Up through <a href="https://v1-21.docs.kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">version 1.21 of the documentation</a>, it said the following:</p> <p><em>If you would like to always force a pull, you can do one of the following:</em></p> <ul> <li><em>set the imagePullPolicy of the container to Always.</em></li> <li><em>omit the imagePullPolicy and use :latest as the tag for the image to use; Kubernetes will set the policy to Always.</em></li> <li><em>omit the imagePullPolicy and the tag for the image to use.</em></li> <li><em>enable the AlwaysPullImages admission controller.</em></li> </ul> <p>But starting with <a href="https://v1-22.docs.kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">version 1.22 of the K8S documentation</a>, it says imagePullPolicy works as follows when set to always:</p> <p><em>Every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image to launch the container.</em></p> <p>These are very different explanations: &lt;= 1.21, it says that 'always' forces the image to always be pulled from the registry. But &gt;=1.22, it says 'always' forces a digest check against the registry, but will use a cached copy if nothing changed.</p> <p>I'm trying to understand if the <strong>behavior</strong> actually changed starting in 1.22, or was this simply a change to the explanation and documentation?</p>
<p>I think this change tends to save network bandwidth rather than changing the behaviour. In <code>1.21</code> or earlier versions, k8s always tries to pull the image all over again, without checking whether some image layers already exist on the node or not. In the new version, k8s will check the image layers and, if they exist, it will pull only the missing layers. Yes, the behavior has changed to some extent, but users &amp; clusters are not supposed to be affected negatively by this change.</p> <p>These lines in the same documentation indicate that its ultimate behaviour will not change:</p> <pre><code>The caching semantics of the underlying image provider make even imagePullPolicy: Always efficient, as long as the registry is reliably accessible. Your container runtime can notice that the image layers already exist on the node so that they don't need to be downloaded again. Note: You should avoid using the :latest tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly. </code></pre>
<p>I am using the opentelemetry-ruby otlp exporter for auto instrumentation: <a href="https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp</a></p> <p>The otel collector was installed as a daemonset: <a href="https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector</a></p> <p>I am trying to get the OpenTelemetry collector to collect traces from the Rails application. Both are running in the same cluster, but in different namespaces.</p> <p>We have enabled auto-instrumentation in the app, but the rails logs are currently showing these errors:</p> <p><code>E, [2022-04-05T22:37:47.838197 #6] ERROR -- : OpenTelemetry error: Unable to export 499 spans</code></p> <p>I set the following env variables within the app:</p> <pre><code>OTEL_LOG_LEVEL=debug OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4318 </code></pre> <p>I can't confirm that the application can communicate with the collector pods on this port. Curling this address from the rails/ruby app returns &quot;Connection Refused&quot;. However I am able to curl <code>http://&lt;OTEL_POD_IP&gt;:4318</code> which returns 404 page not found.</p> <p>From inside a pod:</p> <pre><code># curl http://localhost:4318/ curl: (7) Failed to connect to localhost port 4318: Connection refused # curl http://10.1.0.66:4318/ 404 page not found </code></pre> <p>This helm chart created a daemonset but there is no service running. Is there some setting I need to enable to get this to work?</p> <p>I confirmed that otel-collector is running on every node in the cluster and the daemonset has HostPort set to 4318.</p>
<p>The problem is with this setting:</p> <pre><code>OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4318 </code></pre> <p>Imagine your pod as a stripped-down host of its own: localhost or 0.0.0.0 refers to the pod itself, and you don't have a collector deployed inside your pod.</p> <p>You need to use the address of your collector. I've checked the examples available at the <a href="https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector/examples" rel="nofollow noreferrer">shared repo</a>, and for <code>agent-and-standalone</code> and <code>standalone-only</code> you also get a k8s resource of type Service.</p> <p>With that you can use the full service name (with namespace) to configure your environment variable.<br /> Also, the environment variable is now called <code>OTEL_EXPORTER_OTLP_TRACES_ENDPOINT</code>, so you will need something like this:</p> <pre><code>OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local:&lt;service-port&gt; </code></pre>
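<p>For example, if the collector chart created a Service named <code>opentelemetry-collector</code> in a <code>monitoring</code> namespace (both names are hypothetical; check <code>kubectl get svc</code> in the collector's namespace for the real ones), the Rails deployment would set:</p> <pre><code>OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://opentelemetry-collector.monitoring.svc.cluster.local:4318
</code></pre>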
<p>I am trying to deploy a keycloak server with a postgres database attached using the <a href="https://artifacthub.io/packages/helm/bitnami/keycloak" rel="nofollow noreferrer">bitnami helm chart</a> configured as follows with flux.</p> <pre><code>apiVersion: helm.toolkit.fluxcd.io/v2beta1 kind: HelmRelease metadata: name: keycloak-release namespace: keycloak spec: releaseName: keycloak targetNamespace: keycloak chart: spec: chart: keycloak version: 7.1.x sourceRef: kind: HelmRepository name: bitnami-repo namespace: flux-system interval: '10s' install: remediation: retries: 3 timeout: '10m0s' values: image: debug: true containerPorts: http: 8080 https: 8443 management: 9990 resources: limits: memory: 256Mi cpu: 250m requests: memory: 256Mi cpu: 250m service: type: NodePort ports: postgresql: 5432 postgresql: enabled: true auth: existingSecret: postgres-keycloak auth: existingSecret: keycloak-secret livenessProbe: enabled: true httpGet: path: /auth/ port: http initialDelaySeconds: 300 periodSeconds: 1 timeoutSeconds: 5 failureThreshold: 3 successThreshold: 1 readinessProbe: enabled: true httpGet: path: /auth/realms/master port: http initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 3 successThreshold: 1 extraEnvVars: - name: KEYCLOAK_ALWAYS_HTTPS value: &quot;true&quot; - name: PROXY_ADDRESS_FORWARDING value: &quot;true&quot; - name: JAVA_OPTS_APPEND value: &quot;-Djboss.as.management.blocking.timeout=7200&quot; - name : KEYCLOAK_HTTP_PORT value : &quot;8080&quot; - name: KEYCLOAK_HTTPS_PORT value: &quot;8443&quot; extraVolumes: - name: disable-theme-cache-volume configMap: name: disable-theme-cache extraVolumeMounts: - name: disable-theme-cache-volume mountPath: /opt/jboss/startup-scripts </code></pre> <p>However it seems there is a joss issue since the container never starts and the server stops without any particular reason.</p> <p><a href="https://i.stack.imgur.com/IOhxn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IOhxn.png" alt="logs" /></a></p> <p>I assume the issue must lie in some jboss configuration but I do not see what. Any idea? :)</p>
<p>My first guess is that the resource limits might be too restrictive. At least the memory limit should be higher.</p> <pre class="lang-yaml prettyprint-override"><code>resources: limits: memory: 256Mi cpu: 250m requests: memory: 256Mi cpu: 250m </code></pre>
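<p>As a starting point (a sketch only; the right numbers depend on your realm size and load, and Keycloak on JBoss/WildFly is fairly memory-hungry at startup), something like this is a more typical footprint:</p> <pre class="lang-yaml prettyprint-override"><code>resources:
  limits:
    memory: 1Gi
    cpu: 500m
  requests:
    memory: 512Mi
    cpu: 250m
</code></pre> <p>If the pod is being killed for memory, <code>kubectl describe pod</code> will typically show a last state of <code>OOMKilled</code>, which is a quick way to confirm this guess.</p>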
<p>I'm attempting to mount a filepath into my datadog agent container, which is being provisioned into a kubernetes cluster via the <a href="https://artifacthub.io/packages/helm/datadog/datadog" rel="nofollow noreferrer">Datadog Helm Chart.</a></p> <p>I'm passing it in via the <code>agents.volumes</code> value, which the docs describe as &quot;Specify additional volumes to mount in the dd-agent container&quot;.</p> <p>Based on the syntax found in the <a href="https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml#L1151-L1156" rel="nofollow noreferrer">Datadog/helm-charts repo</a>, I'm using:</p> <pre><code> agents: volumes: - hostPath: path: /var/log/cloud-init.log name: cloud-init </code></pre> <p>But when I apply that change to my cluster, I don't see any evidence that this path has been mounted anywhere on my agent container. I'm not seeing any great explanation of how to mount a volume from my host into the datadog agent container.</p>
<p>I see that value is only used to declare the volumes on the DaemonSet pod definition, not to mount them.</p> <p><code>agents.volumes</code> is for defining custom volumes on the agent but this is used on the DaemonSet definition, specifically on <code>spec.template.spec.volumes</code> <a href="https://github.com/DataDog/helm-charts/blob/main/charts/datadog/templates/daemonset.yaml#L154-L156" rel="nofollow noreferrer">look here</a>.</p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: name: {{ template &quot;datadog.fullname&quot; . }} namespace: {{ .Release.Namespace }} ... spec: ... spec: ... volumes: ... {{- if .Values.agents.volumes }} {{ toYaml .Values.agents.volumes | indent 6 }} {{- end }} </code></pre> <p>To actually use those volumes you have to define the variable <a href="https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml#L1165-L1168" rel="nofollow noreferrer"><code>agents.volumeMounts</code></a> which is used <a href="https://github.com/DataDog/helm-charts/blob/4c1b07ea55833e40b3bfce45dd774edfdbf126ce/charts/datadog/templates/_container-agent.yaml#L190-L192" rel="nofollow noreferrer">here</a>.</p> <pre><code>{{- define &quot;container-agent&quot; -}} - name: agent image: &quot;{{ include &quot;image-path&quot; (dict &quot;root&quot; .Values &quot;image&quot; .Values.agents.image) }}&quot; ... volumeMounts: ... {{- if .Values.agents.volumeMounts }} {{ toYaml .Values.agents.volumeMounts | indent 4 }} {{- end }} ... {{- end -}} </code></pre> <p>So you most likely want to define your values like this:</p> <pre><code>agents: volumes: - hostPath: path: /var/log/cloud-init.log name: cloud-init volumeMounts: - name: cloud-init mountPath: /some/path readOnly: true </code></pre>
<p>I have an OpenShift namespace (<code>SomeNamespace</code>); in that namespace I have several pods.</p> <p>I have a route associated with that namespace (<code>SomeRoute</code>).</p> <p>In one of the pods I have my Spring application. It has REST controllers.</p> <p>I want to send a message to that REST controller. How can I do it?</p> <p>I have a route URL: <code>https://some.namespace.company.name</code>. What should I find next?</p> <p>I tried to send requests to <code>https://some.namespace.company.name/rest/api/route</code> but it didn't work. I guess I must somehow specify the pod in my URL so the route will redirect requests to a concrete pod, but I don't know how to do that.</p>
<p><strong>Routes</strong> are an <strong>OpenShift-specific</strong> way of exposing a Service outside the cluster. But, if you are developing an app that will be deployed onto <strong>OpenShift and Kubernetes</strong>, then you should use <strong>Kubernetes Ingress</strong> objects.</p> <p>Using Ingress means that your app’s manifests are more portable between different Kubernetes clusters.</p> <p>From the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">official Kubernetes docs</a>:</p> <blockquote> <ul> <li>An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.</li> <li>Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.</li> <li>Traffic routing is controlled by rules defined on the Ingress resource.</li> </ul> </blockquote> <h4>So, if you want to reach your REST controllers:</h4> <ul> <li><strong>from within the k8s cluster</strong>. Create a k8s <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> to expose an application running on a set of Pods as a network service:</li> </ul> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service namespace: your-namespace spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 8080 </code></pre> <p>This specification creates a new Service object named &quot;my-service&quot;, which targets TCP port 8080 on any Pod with the <code>app=MyApp</code> label. You can reach the REST controller using this URL:</p> <pre><code>http://my-service </code></pre> <ul> <li><strong>externally</strong>. Create an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress resource</a> to configure externally-reachable URLs (a k8s Service 'my-service' should exist):</li> </ul> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-name namespace: your-namespace spec: rules: - host: &quot;foo.bar.com&quot; http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: my-service port: number: 80 </code></pre> <p>You can reach the REST controller using this URL:</p> <pre><code>http://foo.bar.com </code></pre>
<p>I am new to Kubernetes and have been stuck at one point.</p> <p>Let's say I have multiple pods and I have some tasks running. When a pod suddenly stops for whatever reason, I save the state of the task in some database (terminated but not completed) by catching the SIGTERM signal or using terminationGracePeriod. So, assuming I have 10 terminated tasks, I want to restart those tasks when the pod restarts. If multiple pods restart, they will all fetch the terminated tasks from the database, mark the status &quot;In Progress&quot; and all start the task. So instead of the task starting once, it will start multiple times because multiple pods found it terminated. I don't want to apply locks on the database as it will slow down my code. So how can I ensure that only one pod fetches a terminated task and starts it only once?</p> <p>Just FYI, I am trying to achieve the restarting of tasks in Golang.</p>
<p>Store the state of the task in a database, and when the pod terminates, update the state to 'terminated'. Then when pods start up again, query for tasks that have been 'terminated' and need to be continued. Grab a random ID for one of these tasks, and perform an UPDATE transaction to update the status to 'running' (make sure to also include WHERE status = 'terminated'). Single UPDATE operations in SQL are by default atomic, meaning no other transactions can modify the row while it is being updated. When using an ORM like GORM you will get a result containing the number of rows that were modified. If the number of rows is not equal to 1, that means another pod already updated this task, so we should grab another ID and try again until we perform an UPDATE where the number of rows updated is 1.</p> <p>This is just an idea, no guarantees that this will work for you, as I do not know the full extent of your tech stack (what DB, ORM, etc.).</p>
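<p>A minimal sketch of that claim step with GORM (the <code>Task</code> model, column names and status values are assumptions; adapt them to your schema):</p> <pre><code>package tasks

import &quot;gorm.io/gorm&quot;

// Task is a hypothetical model; only the fields used here are shown.
type Task struct {
    ID     uint
    Status string
}

// tryClaim atomically flips one terminated task to &quot;running&quot;.
// It returns true only if this pod won the row; if another pod got there
// first, RowsAffected will be 0 and the caller should pick another task ID.
func tryClaim(db *gorm.DB, taskID uint) (bool, error) {
    res := db.Model(&amp;Task{}).
        Where(&quot;id = ? AND status = ?&quot;, taskID, &quot;terminated&quot;).
        Update(&quot;status&quot;, &quot;running&quot;)
    if res.Error != nil {
        return false, res.Error
    }
    return res.RowsAffected == 1, nil
}
</code></pre>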
<p>I would like to use <code>kubectl</code> to print out all key-value pairs in my Secrets. I cannot figure out how to do this in one line with the <code>-o --jsonpath</code> flag or by piping into <code>jq</code>. I could certainly make a script to do this but I feel there must be a better way, given that the kubernetes GUI is pretty straightforward and liberal when it comes to letting you view Secrets.</p> <p>Say I create secret like so:</p> <p><code>kubectl create secret generic testsecret --from-literal=key1=val1 --from-literal=key2=val2</code></p> <p>Now I can run <code>kubectl get secret testsecret -o json</code> to get something like:</p> <pre><code>{ "apiVersion": "v1", "data": { "key1": "dmFsMQ==", "key2": "dmFsMg==" }, ... } </code></pre> <p>I can do something like</p> <p><code>kubectl get secret testsecret -o jsonpath='{.data}'</code> </p> <p>or </p> <p><code>kubectl get secret testsecret -o json | jq '.data'</code></p> <p>to get my key-value pairs in <em>non-list</em> format then I'd have to <code>base64 --decode</code> the values.</p> <p>What is the easiest way to get a clean list of all my key-value pairs? Bonus points for doing this across all Secrets (as opposed to just one specific one, as I did here).</p>
<p>I read this question as asking for how to decode <em>all secrets</em> in one go. I built on the accepted answer to produce a one-liner to do this:</p> <pre><code>kubectl get secrets -o json | jq '.items[] | {name: .metadata.name,data: .data|map_values(@base64d)}' </code></pre> <p>This has the added benefit of listing the name of the secret along with the decoded values for readability.</p>
<p>I am trying to create a Keycloak deployment that imports its configuration from a local file located at <code>./import/realm.json</code>.</p> <p>Folder structure:</p> <ul> <li><code>keycloak-deploy.yml</code></li> <li><code>import/realm.json</code></li> </ul> <p>However, when applying the deployment I get this error:</p> <pre><code> FATAL [org.keycloak.services] (ServerService Thread Pool -- 59) Error during startup: java.lang.RuntimeException: java.io.FileNotFoundException: /import/realm.json (No such file or directory) </code></pre> <p>This is the deployment (<code>keycloak-deploy.yml</code>) I'm trying to create:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: keycloak-deployment name: keycloak-deployment spec: replicas: 1 selector: matchLabels: app: keycloak-deployment strategy: {} template: metadata: creationTimestamp: null labels: app: keycloak-deployment spec: containers: - image: jboss/keycloak:latest name: keycloak env: - name: KEYCLOAK_USER value: admin - name: KEYCLOAK_PASSWORD value: superSecret - name: KEYCLOAK_IMPORT value: /import/realm.json ports: - containerPort: 8081 readinessProbe: httpGet: path: /auth/realms/master port: 8081 resources: {} status: {} </code></pre> <p>I'm a beginner with Kubernetes so any help is appreciated, thanks!</p>
<p>I followed what was said in the comments (thanks @Andrew Skorkin). It worked like this:</p> <ul> <li>deployment &amp; service:</li> </ul> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: keycloak-deployment
  name: keycloak-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak-deployment
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: keycloak-deployment
    spec:
      containers:
        - image: jboss/keycloak:latest
          name: keycloak
          env:
            - name: KEYCLOAK_USER
              value: admin
            - name: KEYCLOAK_PASSWORD
              value: superSecret
            - name: KEYCLOAK_IMPORT
              value: /import/realm.json
          ports:
            - name: http
              containerPort: 8081
          volumeMounts:
            - name: keycloak-volume
              mountPath: /import
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8081
            initialDelaySeconds: 30
            timeoutSeconds: 30
          resources: {}
      volumes:
        - name: keycloak-volume
          configMap:
            name: keycloak-configmap
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak-service
spec:
  selector:
    app: keycloak-deployment   # must match the pod labels from the Deployment template
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
</code></pre> <ul> <li>config map:</li> </ul> <pre><code>apiVersion: v1
data:
  realm.json: |
    {json_content}
kind: ConfigMap
metadata:
  name: keycloak-configmap
</code></pre> <p><code>json_content</code> contains the realm.json data. I exported the data from a working keycloak instance (made with docker-compose).</p>
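<p>For completeness, the ConfigMap can also be generated directly from the local file instead of pasting the JSON inline (a sketch, assuming the folder layout from the question):</p>
<pre><code>kubectl create configmap keycloak-configmap --from-file=realm.json=./import/realm.json
</code></pre>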
<p>I have deployed a pod in a Kubernetes cluster that runs a Python script.</p> <p>The problem is I want to force k8s to stop the container after the script completes its job and not re-create another pod.</p> <p>Be aware that I have already tried to use kind: Job, but it doesn't fulfill my need.</p> <p>I tried two kinds, Job and Deployment.</p> <p>With the Deployment, the pod first shows the status Completed and after that crashes with a CrashLoopBackOff error.</p> <p>With the Job, the pod always shows the status Completed, but I don't have a way to re-execute it in an automated way.</p> <p>Do you have any suggestions about that?</p>
<p>I have posted a community wiki answer to summarise the topic.</p> <p>User <a href="https://stackoverflow.com/users/213269/jonas" title="108,324 reputation">Jonas</a> has posted great suggestions:</p> <blockquote> <p>A kind <code>Job</code> does exactly this. Use <code>Job</code> and your problem is solved.</p> </blockquote> <blockquote> <p>If you deploy with <code>kubectl create -f job.yaml</code> and your job has a <code>generateName:</code> instead of <code>name:</code>, a new <code>Job</code> will be created each time.</p> </blockquote> <p>For more information look at the documentation about <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Jobs</a>. See also information about <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#generated-values" rel="nofollow noreferrer">Generated values</a>.</p>
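<p>A minimal sketch of such a <code>Job</code> manifest using <code>generateName:</code> (the image and command below are placeholders for your own script):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  generateName: my-script-      # each kubectl create -f produces my-script-xxxxx
spec:
  backoffLimit: 1
  template:
    spec:
      containers:
      - name: my-script
        image: python:3.9
        command: [&quot;python3&quot;, &quot;/app/script.py&quot;]
      restartPolicy: Never
</code></pre>
<p>Because of <code>generateName:</code>, this only works with <code>kubectl create -f job.yaml</code>; <code>kubectl apply</code> requires a fixed <code>name:</code>.</p>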
<p>I am trying to access a simple minikube cluster from the browser, but I keep getting the following: <code>❗ Because you are using a Docker driver on windows, the terminal needs to be open to run it.</code></p> <p>I've created an external service for the cluster with the port number of 30384, and I'm running minikube in a docker container.</p> <p>I'm following the &quot;Hello Minikube&quot; example to create my deployment.</p> <p>Step 1: I created the deployment:</p> <p><code>kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4</code></p> <p>Step 2: I created the external service: <code>kubectl expose deployment hello-node --type=LoadBalancer --port=8080</code></p> <p>Step 3: I ran the service, and that's where things went wrong: <code>minikube service hello-node</code></p> <p>The full return message:</p> <p><code>❗ Executing &quot;docker container inspect minikube --format={{.State.Status}}&quot; took an unusually long time: 2.3796077s</code> <code>💡 Restarting the docker service may improve performance.</code> <code>🏃 Starting tunnel for service hello-node.</code> <code>🎉 Opening service default/hello-node in default browser...</code> <code>❗ Because you are using a Docker driver on windows, the terminal needs to be open to run it.</code></p> <p>I tried to run the service to make it accessible from the browser, however, I wasn't able to.</p>
<p>You can get this working by using kubectl's port forwarding capability. For example, if you are running <code>hello-node</code> service:</p> <p><code>kubectl port-forward svc/hello-node 27017:27017</code></p> <p>This would expose the service on <code>localhost:27017</code></p> <p>You can also mention your pod instead of the service with the same command, you just need to specify your <code>pods/pod-name</code>, you can verify your pod name by <code>kubectl get pods</code></p>
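<p>For the <code>hello-node</code> example from the question, which listens on port 8080, that would look roughly like this (the pod name is illustrative, take it from <code>kubectl get pods</code>):</p>
<pre><code># forward the service
kubectl port-forward svc/hello-node 8080:8080

# or forward a single pod directly
kubectl get pods
kubectl port-forward pods/hello-node-66d457cb86-abcde 8080:8080

# then open http://localhost:8080 in the browser
</code></pre>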
<p>I have a Spring batch 2.1.x application deployed on Azure Kubernetes. Base image is Ubuntu 18.04. I see that the process is getting killed at times.</p> <p><strong>Process flow:</strong></p> <ol> <li>kubectl command line command to start the bash script</li> <li>bash script to start the spring batch</li> <li>spring batch application to print / generate the CSV file</li> </ol> <p><a href="https://i.stack.imgur.com/O6UJT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O6UJT.png" alt="enter image description here" /></a></p>
<p>It's perfectly normal in a Kubernetes environment that pods are shutdown sometimes and that replacement pods are scheduled on another worker node. This happens all the time due to node maintenance or to balance the workload. You can read more about this in the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">documentation about pod-lifecycle</a>.</p>
<p>I need to serve gitlab, nexus and jupyterhub based on the URL, using one open port with a k8s ingress.</p> <p>If the path is written as &quot;/&quot; when creating the ingress, it works normally, but if I write &quot;/nexus&quot; like this, a problem occurs during the redirection process.</p> <p>Have any of you solved the same problem? Please help.</p> <p><a href="https://i.stack.imgur.com/9h1HM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9h1HM.png" alt="enter image description here" /></a></p> <p>My ingress.yaml is below:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  creationTimestamp: &quot;2022-04-06T05:56:40Z&quot;
  generation: 7
  name: nexus-ing
  namespace: nexus
  resourceVersion: &quot;119075924&quot;
  selfLink: /apis/extensions/v1beta1/namespaces/nexus/ingresses/nexus-ing
  uid: 4b4f97e4-225e-4faa-aba3-6af73f69c21d
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          serviceName: nexus-service
          servicePort: 8081
        path: /nexus(/|$)(.*)
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 172.30.1.87
</code></pre>
<p>That's a problem with nexus itself. Your ingress works as intended, and you cannot do more from this side.</p> <p>The problem here is that nexus webpage, i.e. index.html, requests resources in such a way that it's looking at the wrong place. You can see this by opening the network tab and inspecting the request URL of the missing statics.</p> <p>To see what I mean, let's examine the below HTML image tags.</p> <pre class="lang-html prettyprint-override"><code>&lt;img id=&quot;1&quot; src=&quot;./statics/some-image.svg&quot; alt=&quot;some image&quot; /&gt; &lt;img id=&quot;2&quot; src=&quot;/statics/some-image.svg&quot; alt=&quot;some image&quot; /&gt; </code></pre> <p>You can see that the first one, is using relative path, and would work with your configuration since the request URL would be relative to the location in the browser and then the nexus part gets stripped by the ingress controller.</p> <p>However, the second one is using absolute path, so it will not have the nexus part in the request URL and the ingress controller will not be able to route it to the correct service.</p> <p>This is a common problem when stripping path prefixes. It only works fully when the application you are serving when stripping a prefix is correctly configured.</p> <p>In your case this means, checking the documentation of the services, if you have any way to influence this.</p> <p>It may be more straight forward to route based on hostname instead of path. I.e <code>nexus.myhost.com</code>. For that, you would need a domain and point the corresponding A records to your ingress services IP / use a wildcard record.</p>
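<p>A sketch of such a host-based rule, using the same API version and backend service as the ingress in the question (the hostname is an example and must resolve to your ingress controller; on newer clusters you would use <code>networking.k8s.io/v1</code> instead):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nexus-ing
  namespace: nexus
spec:
  ingressClassName: nginx
  rules:
  - host: nexus.myhost.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          serviceName: nexus-service
          servicePort: 8081
</code></pre>
<p>Since nothing is stripped from the path here, the rewrite-target annotation is no longer needed.</p>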
<p>I am trying to create a GitHub actions self-host runner in GKE. For the I created the Docker with Ubuntu base image and downloaded the GitHub runner code.</p> <pre class="lang-sh prettyprint-override"><code>curl -o actions-runner-linux-x64-2.288.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.288.1/actions-runner-linux-x64-2.288.1.tar.gz ./config.sh </code></pre> <p>Using the Kubernetes deployment.yaml file deployed the runner in the Kubernetes cluster, but in POD logs I am seeing the below error and the runner is unable to authenticate with the GitHub account.</p> <pre class="lang-none prettyprint-override"><code>-------------------------------------------------------------------------------- | ____ _ _ _ _ _ _ _ _ | | / ___(_) |_| | | |_ _| |__ / \ ___| |_(_) ___ _ __ ___ | | | | _| | __| |_| | | | | '_ \ / _ \ / __| __| |/ _ \| '_ \/ __| | | | |_| | | |_| _ | |_| | |_) | / ___ \ (__| |_| | (_) | | | \__ \ | | \____|_|\__|_| |_|\__,_|_.__/ /_/ \_\___|\__|_|\___/|_| |_|___/ | | | | Self-hosted runner registration | | | -------------------------------------------------------------------------------- # Authentication The SSL connection could not be established, see inner exception. An error occurred: Not configured </code></pre> <p>We are using Istio as a service mesh in our Kubernetes cluster.</p>
<p>As I mentioned earlier, I am using Istio. After whitelisting the below URLs in Istio, my issue got resolved and I am able to create a runner in the Kubernetes cluster.</p> <p>api.github.com</p> <p>raw.githubusercontent.com</p> <p>oauth2.googleapis.com</p>
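<p>For reference, whitelisting external hosts in Istio is typically done with a <code>ServiceEntry</code>. A minimal sketch (exact fields may differ depending on your Istio version and whether you route egress through a gateway):</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: github-runner-egress
spec:
  hosts:
  - api.github.com
  - raw.githubusercontent.com
  - oauth2.googleapis.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
</code></pre>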
<p>I'm trying to understand how traffic can be routed to a pod that has begun shutdown process.</p> <p>In Spring Boot docs it is mentioned that</p> <blockquote> <p>Once the pre-stop hook has completed, SIGTERM will be sent to the container and graceful shutdown will begin, allowing any remaining in-flight requests to complete.</p> </blockquote> <p><a href="https://docs.spring.io/spring-boot/docs/2.3.0.RELEASE/reference/html/deployment.html#cloud-deployment-kubernetes-container-lifecycle" rel="nofollow noreferrer">Kubernetes Container Lifecycle</a></p> <p>But in Kubernetes docs we have</p> <blockquote> <p>The Pod's termination grace period countdown begins before the PreStop hook is executed, so regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period. No parameters are passed to the handler.</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">Container hooks</a></p> <p>In Kubernetes docs it say <code>The Pod's termination grace period countdown begins before the PreStop hook is executed</code> which means SIGTERM was send before the hook is called. Isn't this in contradiction to Spring Boot which says <code>Once the pre-stop hook has completed, SIGTERM will be sent to the container</code>?</p>
<p>It happens in the following order:</p> <ol> <li>Count down for termination grace period starts</li> <li>Pre stop hook starts executing</li> <li>Pre stop hooks finished</li> <li>SIGTERM is issued to the container, Spring Boot starts shutting down (possible waiting if graceful shutdown is configured)</li> </ol> <p>If at any point in time the grace period is exceeded SIGKILL is issued and all processes are terminated.</p>
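<p>A minimal sketch of how these pieces fit together in a pod template (the image name and durations are examples; the grace period has to cover both the hook and the application's shutdown time):</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 60      # step 1: countdown starts here
  containers:
  - name: app
    image: my-spring-boot-app:latest     # placeholder image
    lifecycle:
      preStop:
        exec:
          command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 10&quot;]   # steps 2-3: runs before SIGTERM
    # step 4: after the hook finishes, SIGTERM is sent and Spring Boot shuts down gracefully
</code></pre>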
<p>Good afternoon, colleagues. Please tell me: I set up the k8s+vault integration according to these instructions: <a href="https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar" rel="nofollow noreferrer">https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar</a></p> <p>But I have a test and a production Kubernetes cluster and only one Vault. Is it possible to integrate one Vault with multiple Kubernetes clusters?</p> <p>I found the parameter authPath: &quot;auth/kubernetes&quot;; maybe for the second cluster I could use authPath: &quot;auth/kubernetes2&quot;, etc.?</p>
<p>It's possible what needs to be done:</p> <pre><code>helm install vault hashicorp/vault \ --set &quot;injector.externalVaultAddr=http://external-vault:8200&quot; --set &quot;authPath=auth/kubernetesnew&quot; vault auth enable -path kubernetesnew kubernetes .. vault write auth/kubernetesnew/role/k8s-name-role \ bound_service_account_names=k8s-vault-sa \ bound_service_account_namespaces=k8s-vault-namespace \ ttl=24h </code></pre>
<p>When logged into gitlab using the oauth2 provider keycloak and trying to log out, Gitlab redirects to the sign_in page, but doesn't end out session on Keycloak, so we are logged in again.</p> <p>These are the environment variables used in gitlab kubernetes deployment:</p> <pre><code>- name: OAUTH2_GENERIC_APP_ID value: &lt;client-name&gt; - name: OAUTH2_GENERIC_APP_SECRET value: &quot;&lt;client-secret&gt;&quot; - name: OAUTH2_GENERIC_CLIENT_AUTHORIZE_URL value: &quot;https://&lt;keycloak-url&gt;/auth/realms/&lt;realm-name&gt;/protocol/openid-connect/auth&quot; - name: OAUTH2_GENERIC_CLIENT_END_SESSION_ENDPOINT value: &quot;https://&lt;keycloak-url&gt;/auth/realms/&lt;realm-name&gt;/protocol/openid-connect/logout&quot; - name: OAUTH2_GENERIC_CLIENT_SITE value: &quot;https://&lt;keycloak-url&gt;/auth/realms/&lt;realm-name&gt;&quot; - name: OAUTH2_GENERIC_CLIENT_TOKEN_URL value: &quot;https://&lt;keycloak-url&gt;/auth/realms/&lt;realm-name&gt;/protocol/openid-connect/token&quot; - name: OAUTH2_GENERIC_CLIENT_USER_INFO_URL value: &quot;https://&lt;keycloak-url&gt;/auth/realms/&lt;realm-name&gt;/protocol/openid-connect/userinfo&quot; - name: OAUTH2_GENERIC_ID_PATH value: sub - name: OAUTH2_GENERIC_NAME value: Keycloak - name: OAUTH2_GENERIC_USER_EMAIL value: email - name: OAUTH2_GENERIC_USER_NAME value: preferred_username - name: OAUTH2_GENERIC_USER_UID value: sub - name: OAUTH_ALLOW_SSO value: Keycloak - name: OAUTH_AUTO_LINK_LDAP_USER value: &quot;false&quot; - name: OAUTH_AUTO_LINK_SAML_USER value: &quot;false&quot; - name: OAUTH_AUTO_SIGN_IN_WITH_PROVIDER value: Keycloak - name: OAUTH_BLOCK_AUTO_CREATED_USERS value: &quot;false&quot; - name: OAUTH_ENABLED value: &quot;true&quot; - name: OAUTH_EXTERNAL_PROVIDERS value: Keycloak </code></pre> <p>I have tried a workaround mentioned here: <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/31203" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab/-/issues/31203</a> , but no luck. Please help.</p> <p>Note:</p> <pre><code>Gitlab version: 14.9.2 Keycloak version: 17 Kubernetes Version: 1.21.5 </code></pre>
<p>To be perfectly clear: the expectation is that you should be signed out of GitLab, not necessarily keycloak altogether. This is happening correctly since you see the sign-in page after signing out. For example, if you sign into GitLab using Google and sign out of GitLab, you should only be signed out of GitLab, not Google.</p> <p>The behavior you are observing is due to the fact that you have auto-login <a href="https://docs.gitlab.com/ee/integration/omniauth.html#sign-in-with-a-provider-automatically" rel="nofollow noreferrer">(<code>auto_sign_in_with_provider</code>) enabled</a>, which automatically redirects users from the sign-in page to log in with keycloak again immediately after (successfully) signing out.</p> <p>To avoid this problem, in the GitLab settings (under Admin -&gt; Settings -&gt; General -&gt; Sign-in Restrictions) set the <strong>After sign-out path</strong> to be <code>/users/sign_in?auto_sign_in=false</code> or in other words <code>https://gitlab.example.com/users/sign_in?auto_sign_in=false</code><br /> Note the query string <code>?auto_sign_in=false</code> will prevent the auto-redirect to sign back into keycloak. You can also choose a different URL entirely.</p> <p>See <a href="https://docs.gitlab.com/ee/user/admin_area/settings/sign_in_restrictions.html#sign-in-information" rel="nofollow noreferrer">sign-in information</a> and <a href="https://docs.gitlab.com/ee/integration/omniauth.html#sign-in-with-a-provider-automatically" rel="nofollow noreferrer">sign in with provider automatically</a> for more information.</p>
<p>I am recording and monitoring SLOs (server-side request duration) of Kubernetes Pods via Prometheus using a <a href="https://godoc.org/github.com/prometheus/client_golang/prometheus#HistogramVec" rel="nofollow noreferrer">HistogramVec</a> within a Golang HTTP server. Every request’s duration is timed and persisted as described in the <a href="https://prometheus.io/docs/practices/histograms/" rel="nofollow noreferrer">Prometheus practices</a> and partitioned by status code, method and HTTP path. </p> <p>I am running autoscaling experiments therefore Pods are created &amp; terminated. After each experiment I fetch the metrics for all pods (including the ones already deleted) and plot a cumulative distribution, e.g.: <a href="https://i.stack.imgur.com/KoQkS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KoQkS.png" alt="CDF"></a> In order to make these plots more “accurate”, I opted for many, smaller histogram buckets and aggregate &amp; analyze the data locally and do not use the built-in <a href="https://prometheus.io/docs/practices/histograms/#quantiles" rel="nofollow noreferrer">Histogram Quantiles</a>. <strong>The ideal query would therefore return only the most recent value for all time series that have existed over a specified time range (green + red circles).</strong> <a href="https://i.stack.imgur.com/Wfs6H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wfs6H.png" alt="Timeseries"></a> Currently, I am using a <a href="https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries" rel="nofollow noreferrer">range query</a> within the script generating all the plots, e.g.:</p> <pre><code>http://localhost:9090/api/v1/query_range?query=http_request_duration_milliseconds_bucket{path="/service/login"}&amp;start=1591803898&amp;end=1591804801&amp;step=5s </code></pre> <p>However, I am aware that this is highly inefficient and costly as it retrieves a huge amount of surplus data even though I am only interested in the very last value for each individual time series. On the other hand, if I use an instant query, I only get the values for a specified moment, thus I’d need to shoot multiple queries &amp; first find out when some time series (red circles) were marked stale - which doesn’t seem great either. </p> <p>So, basically I'm looking for a way to work around the <a href="https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness" rel="nofollow noreferrer">Prometheus basics on staleness</a>, and stop stale time series from "disappearing":</p> <blockquote> <p>If no sample is found (by default) 5 minutes before a sampling timestamp, no value is returned for that time series at this point in time. This effectively means that time series "disappear" from graphs at times where their latest collected sample is older than 5 minutes or after they are marked stale.</p> </blockquote> <p>I am almost certain that there is a way to do this (e.g. an option to simply include stale time series), but I haven’t been able to put it together so far.</p>
<p>The solution is to use <a href="https://docs.victoriametrics.com/MetricsQL.html#last_over_time" rel="nofollow noreferrer">last_over_time()</a> function. For example, the following query returns the last values seen during the last hour per each histogram bucket:</p> <pre><code>last_over_time(http_request_duration_milliseconds_bucket{path=&quot;/service/login&quot;}[1h]) </code></pre> <p>This query must be sent to <a href="https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries" rel="nofollow noreferrer">/api/v1/query</a> instead of <a href="https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries" rel="nofollow noreferrer">/api/v1/query_range</a>, since <code>/api/v1/query</code> calculates the query only once at the given <code>time</code> timestamp, while <code>/api/v1/query_range</code> calculates the query <code>1+(end-start)/step</code> times at every point on the timer range <code>[start ... end]</code> with interval <code>step</code>.</p> <p>Note also that big number of histogram buckets multiplied by big number of unique <code>path</code> label values may result into too many time series, which is known as <a href="https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality" rel="nofollow noreferrer">high cardinality</a>. See <a href="https://www.robustperception.io/cardinality-is-key" rel="nofollow noreferrer">this article</a> for more details.</p> <p>See also <a href="https://valyala.medium.com/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350" rel="nofollow noreferrer">VictoriaMetrics historgrams</a>, which solve common issues in Prometheus histograms.</p>
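<p>Following the URL pattern from the question, the corresponding instant query would look roughly like this (shown unencoded for readability; the <code>query</code> parameter must be URL-encoded in practice):</p>
<pre><code>http://localhost:9090/api/v1/query?query=last_over_time(http_request_duration_milliseconds_bucket{path=&quot;/service/login&quot;}[1h])&amp;time=1591804801
</code></pre>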
<p>I want to update my pod because there is a new image uploaded to the docker registry with the <code>latest</code> tag.</p> <p>I am currently doing this:</p> <pre><code>kubectl delete -f deployment.yaml
kubectl apply -f deployment.yaml
</code></pre> <p>If I do:</p> <pre><code>kubectl apply -f deployment.yaml
</code></pre> <p>It says my deployment is unchanged.</p> <p><strong>However, I want to keep my service alive the whole time, without even a second of downtime. Isn't there a way I could do something like the following?</strong></p> <pre><code>kubectl re-apply -f deployment.yaml
</code></pre>
<p>You can just delete the pod once; when it is recreated it will pull the new image version from the docker registry.</p> <p>However, make sure <code>imagePullPolicy</code> is set to <strong>Always</strong> in your <code>deployment.yaml</code>.</p> <p>Alternatively, you can update one minor field in <code>deployment.yaml</code> while keeping <code>imagePullPolicy</code> set to <strong>Always</strong>; in that case <code>kubectl apply</code> will change the deployment.</p> <p><strong>Example</strong>:</p> <pre><code>spec:
  containers:
    - name: test
      image: image:latest
      ports:
        - containerPort: 80
      imagePullPolicy: Always
  imagePullSecrets:
    - name: docker-secret
</code></pre> <p><strong>Option 2</strong></p> <pre><code>kubectl rollout restart deployment/&lt;deployment-name&gt;
</code></pre> <p>Read more at: <a href="https://stackoverflow.com/questions/33112789/how-do-i-force-kubernetes-to-re-pull-an-image">How do I force Kubernetes to re-pull an image?</a></p>
<p>I've been trying to run few services in AWS EKS Cluster. I followed the ingress-nginx guide to get https with AWS ACM certificate</p> <blockquote> <p><a href="https://kubernetes.github.io/ingress-nginx/deploy/#aws" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#aws</a></p> </blockquote> <p>Used tls termination at ingress controller</p> <p>I used 3 routes for each services as</p> <p><strong>adminer.xxxx.com</strong> - points to an adminer service</p> <p><strong>socket.xxxx.com</strong> - points to the wss service written in nodejs</p> <p><strong>service.xxxx.com</strong> - points to a program that returns a page which connects to socket url</p> <p>Without TLS Termination, in http:// everything works fine, <strong>ws://socket.xxxx.com/socket.io</strong> gets connected and responds well.</p> <p>When I add TLS, the request goes to <strong>wss://socket.xxxx.com/socket.io</strong> and the nginx returns 400. I Can't figure out why it happens.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-body-size: 100m nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header Host $http_host; # nginx.ingress.kuberenetes.io/use-regex: &quot;true&quot; spec: rules: - host: adminer.xxxx.com http: paths: - path: / backend: serviceName: adminer-svc servicePort: 8080 - host: socket.xxxx.com http: paths: - path: / backend: serviceName: nodejs-svc servicePort: 2020 - host: service.xxxx.com http: paths: - path: / backend: serviceName: django-svc servicePort: 8000 </code></pre> <p>I Tried with and without these configurations</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header Host $http_host; </code></pre> <p>Also I've tried changing the <strong>socket.xxxx.com</strong> into <strong>service.xxxx.com</strong> and assigned to be forwarded for <em><strong>/socket.io</strong></em> path</p> <p>I've also put a url in nodejs with express to test if its working at all, and it responds properly in https://</p> <p>Only the wss:// has the issue.</p> <p>PS : This entire Service works when nginx is setup in a normal system with nginx configuration</p> <pre><code>location / { proxy_pass http://localhost:2020/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection &quot;upgrade&quot;; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } </code></pre> <p>I tried request like this as well</p> <p><a href="https://node-socket.xxxx.com/socket.io/?EIO=3&amp;transport=polling" rel="noreferrer">https://node-socket.xxxx.com/socket.io/?EIO=3&amp;transport=polling</a> this works</p> <p><a href="https://node-socket.xxxx.comsocket.io/?EIO=3&amp;transport=websocket" rel="noreferrer">https://node-socket.xxxx.comsocket.io/?EIO=3&amp;transport=websocket</a> this doesnt.</p> <p>Combinations I tried</p> <pre><code>protocol, balancer, backendproto, transport =&gt; result wss://, ELB, TCP, websocket =&gt; 400 wss://, NLB, TCP, websocket =&gt; 400 wss://, ELB, HTTP, websocket =&gt; 400 wss://, NLB, HTTP, websocket =&gt; 400 ws://, ELB, TCP, websocket =&gt; 400 ws://, ELB, HTTP, websocket =&gt; 400 ws://, NLB, TCP, websocket =&gt; 400 ws://, NLB, HTTP, websocket =&gt; 400 </code></pre> <p>polling worked in every cases</p>
<p>You seem to be missing the</p> <pre><code>nginx.org/websocket-services
</code></pre> <p>annotation.</p> <p>Its value should be the name of the Kubernetes service. See <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/</a></p>
<p>How can I check if some namespace is missing quota?</p> <p>I expected the <code>absent()</code> function to return 1 when something doesn't exist and 0 when something exists. So I tried to do the next query:</p> <pre class="lang-yaml prettyprint-override"><code>absent(kube_namespace_labels) * on(namespace) group(kube_resourcequota) by(namespace) </code></pre> <p>But Prometheus returned <code>Empty query result</code>.</p> <p>My final goal is to alert if some namespace is missing quota, how can I achieve this?</p>
<p>You can use a different query instead to have all namespaces where <code>resourcequota</code> is missing:</p> <pre><code>count by (namespace)(kube_namespace_labels) unless sum by (namespace)(kube_resourcequota) </code></pre>
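<p>To turn this into an alert, a rough sketch of a Prometheus alerting rule built on the same expression (the rule name and <code>for</code> duration are arbitrary):</p>
<pre><code>groups:
- name: quota-checks
  rules:
  - alert: NamespaceWithoutResourceQuota
    expr: count by (namespace)(kube_namespace_labels) unless sum by (namespace)(kube_resourcequota)
    for: 15m
    labels:
      severity: warning
    annotations:
      summary: &quot;Namespace {{ $labels.namespace }} has no ResourceQuota&quot;
</code></pre>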
<p>I am doing an exercise from KodeKoud which provides the CKAD certification training.</p> <p>The exercise has a <code>my-kube-config.yml</code> file located under <code>root/</code>. The file content is below:</p> <p>(I omitted some unrelated parts)</p> <pre><code>apiVersion: v1
kind: Config
clusters:
- name: production
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
- name: development
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
- name: test-cluster-1
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
contexts:
- name: test-user@production
  context:
    cluster: production
    user: test-user
- name: research
  context:
    cluster: test-cluster-1
    user: dev-user
users:
- name: test-user
  user:
    client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt
    client-key: /etc/kubernetes/pki/users/test-user/test-user.key
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt
    client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key
current-context: test-user@development
</code></pre> <p>The exercise asks me to:</p> <blockquote> <p>use the <code>dev-user</code> to access <code>test-cluster-1</code>. Set the current context to the right one so I can do that.</p> </blockquote> <p>Since I see in the config file that there is a context named <code>research</code> which meets the requirement, I ran the following command to change the current context to the required one:</p> <pre><code>kubectl config use-context research
</code></pre> <p>but the console gives me the error: <code>error: no context exists with the name: &quot;research&quot;</code>.</p> <p>Ok, I guessed maybe the <code>name</code> with value <code>research</code> is not acceptable, maybe I have to follow the convention of <code>&lt;user-name&gt;@&lt;cluster-name&gt;</code>? I am not sure, but I then tried the following:</p> <ol> <li>I modified the name from <code>research</code> to <code>dev-user@test-cluster-1</code>, so that the context part becomes:</li> </ol> <pre><code>- name: dev-user@test-cluster-1
  context:
    cluster: test-cluster-1
    user: dev-user
</code></pre> <ol start="2"> <li>After that I ran the command <code>kubectl config use-context dev-user@test-cluster-1</code>, but I get the error:</li> </ol> <pre><code>error: no context exists with the name: &quot;dev-user@test-cluster-1&quot;
</code></pre> <p>Why? Based on the course material, that is the way to change the default/current context. Is the course outdated and am I using a deprecated approach? What is the problem?</p>
<p>Your initial idea was correct. You would need to change the context to <code>research</code> which can be done using</p> <blockquote> <p>kubectl config use-context research</p> </blockquote> <p>But the command would not be applied to the correct config in this instance. You can see the difference by checking the current-context with and without a kubeconfig directed to the <code>my-kube-config</code> file.</p> <blockquote> <p>kubectl config current-context</p> <p>kubernetes-admin@kubernetes</p> </blockquote> <blockquote> <p>kubectl config --kubeconfig=/root/my-kube-config current-context</p> <p>test-user@development</p> </blockquote> <p>So run the <code>use-context</code> command with the correct kubeconfig</p> <blockquote> <p>kubectl config --kubeconfig=/root/my-kube-config use-context research</p> </blockquote>
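<p>Alternatively, you can point the <code>KUBECONFIG</code> environment variable at the file once, after which the plain commands behave as expected:</p>
<pre><code>export KUBECONFIG=/root/my-kube-config
kubectl config use-context research
kubectl config current-context
</code></pre>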
<p>When I run the command</p> <pre><code>kubectl create -f .k8s/deployment.yaml --context=cluster-1 </code></pre> <p>I get the error</p> <blockquote> <p>error: error validating &quot;.k8s/deployment.yaml&quot;: error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field &quot;volumes&quot; in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p><code>deployment.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment ... spec: containers: ... volumes: - name: auth secret: secretName: d-secrets items: - key: SECRETS path: foobar.json </code></pre> <p>What can be?</p>
<p><code>...unknown field &quot;volumes&quot; in io.k8s.api.core.v1.Container</code></p> <p>Your <code>volumes</code> section is placed wrongly. Try:</p> <pre><code>apiVersion: apps/v1 kind: Deployment spec: ... template: ... spec: containers: - name: ... ... volumes: &lt;-- should be same level as `containers` - name: auth secret: secretName: d-secrets items: - key: SECRETS path: foobar.json </code></pre>
<p>I just want to check my understanding of a microservice architecture.</p> <p>I have 5 different apps that I'm building and running in their own Dockerfile. Each docker file first builds that app before pulling the Apache httpd image and moving the built files over to its server.</p> <p>This means that all 5 apps have separate httpd servers serving that application at different urls. Each app communicates with the other, getting the necessary resources over http.</p> <p>I'm looking to deploy this in Kubernetes.</p> <p>Is it normal to have a server per service? <em>or</em> would you create a separate server container and copy all the files over to that one.</p>
<p>Yes it is normal, each microservice should have its own web server, so that they run in isolation and can be scaled individually.</p>
<p>I have successfully installed <code>external-dns</code> in my Kubernetes cluster following the official steps on <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md" rel="nofollow noreferrer">github</a>, it creates a Route53 record and I am able to access it correctly. I installed this on a specific namespace.</p> <p>My question is, do I need to deploy <code>external-dns</code> on each namespace (and then creating the service account, cluster role binding and deployment) or I can use the same deployment across namespaces?</p>
<p>The answer is no, you don't need to deploy it more than once. If you don't specify a namespace in the args section of your external-dns deployment using <code>--namespace=</code>, it works for all of the namespaces in the cluster. <a href="https://i.stack.imgur.com/imchR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/imchR.png" alt="enter image description here" /></a></p>
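<p>In other words, it is the presence or absence of that flag in the container args of the external-dns deployment that decides the scope. A sketch (other flags omitted, values are examples):</p>
<pre><code>args:
- --source=service
- --source=ingress
- --provider=aws
# - --namespace=my-namespace   # only add this if you want to restrict external-dns to one namespace
</code></pre>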
<p>I want to deploy Application Load Balancer for Traefik running in Kubernetes. So, I tried the following:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: traefik-application-elb namespace: kube-system annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: &quot;tcp&quot; service.beta.kubernetes.io/aws-load-balancer-type: &quot;elb&quot; service.beta.kubernetes.io/aws-load-balancer-name: &quot;eks-traefik-application-elb&quot; service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: &quot;App=traefik,Env=dev&quot; spec: type: LoadBalancer ports: - protocol: TCP name: web port: 80 - protocol: TCP name: websecure port: 443 selector: app: traefik </code></pre> <p>But internet-facing <code>Classic</code> load balancer was created. Also tried <code>service.beta.kubernetes.io/aws-load-balancer-type: &quot;alb&quot;</code> but nothing changed.</p> <p>What is wrong here ?</p>
<p>From the docs: <a href="https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html</a></p> <blockquote> <p>When you create a Kubernetes Service of type LoadBalancer, the AWS cloud provider load balancer controller creates AWS Classic Load Balancers by default, but can also create AWS Network Load Balancers.</p> </blockquote> <p>ALB is not supported as a k8s load balancer. You can specify an NLB if desired.</p> <p>However you can use ALB as an ingress controller with a deployment like this - <a href="https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/</a></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: SearchApp annotations: # share a single ALB with all ingress rules with search-app-ingress alb.ingress.kubernetes.io/group.name: search-app-ingress spec: defaultBackend: service: name: search-svc # route traffic to the service search-svc port: number: 80 </code></pre>
<p>I am trying to map a configMap in JSON format into my docker image in Kubernetes. I am using the config npm package to fetch the configurations.</p> <p>The idea is that I will have a file development.json in the /config directory and the config package will pick it up from there. This all works on localhost. The name of the config file is the same as the NODE_ENV variable, which I am also setting in the deployment.yaml.</p> <p>I am using the default namespace.</p> <p>This is the beginning of the configMap (I can see it is created in Google Kubernetes):</p> <p>I am running ls in the config directory to see if the development.json file has been mounted, but it is not. I want /config to be replaced and only contain the development.json file.</p> <p>I have also tried with the subPath parameter, but with the same result.</p> <p>What am I doing wrong? Should I see in the events that the configMap is mounted? There is no log of that except when I delete the configMap and try to do the mount, so I figure that the mounting is happening.</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: config-development
  namespace: default
data:
  development.json: |
    {
      &quot;production&quot;: false,
</code></pre> <p>Here is the mount:</p> <pre><code>volumes:
- name: config-volume
  configMap:
    name: config-development
containers:
- name: salesforce-compare-api
  image: XXXX
  command: [&quot;ls&quot;]
  args: [&quot;config&quot;, &quot;-la&quot;]
  imagePullPolicy: Always
  env:
  - name: NODE_ENV
    value: &quot;development&quot;
  volumeMounts:
  - name: config-volume
    mountPath: /config/development.json
</code></pre>
<p>Usually, when the configmap cannot be mounted, the pod will not even start. So the fact that it started shows that it is mounted.</p> <p>In any case, your volumeMounts section looks problematic:</p> <pre><code>volumeMounts:
- name: config-volume
  mountPath: /config/development.json
</code></pre> <p>This leads to the full configmap being mounted into a folder named development.json, while you actually only want to mount the one file.</p> <p>Use a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPath</a> for this:</p> <pre><code>volumeMounts:
- name: config-volume
  mountPath: /config/development.json
  subPath: development.json
</code></pre> <p>That said, if your config folder inside the container is otherwise empty, you can also drop the subPath and mount the configmap to the /config dir, since it will not override anything important:</p> <pre><code>volumeMounts:
- name: config-volume
  mountPath: /config
</code></pre>
<p>I'm trying to find some sort of signal from a cluster indicating that there has been some sort of change with a Kubernetes cluster. I'm looking for any change that could cause issues with software running on that cluster such as Kubernetes version change, infra/distro/layout change, etc.</p> <p>The only signal that I have been able to find is a node restart, but this can happen for any number of reasons - I'm trying to find something a bit stronger than this. I am preferably looking for something platform agnostic as well.</p>
<p>From a pure Kubernetes perspective, I think the best you can do is monitor Node events (such as drain, reboot, etc) and then check to see of the version of the node has actually changed. You may also be able to watch Node resources and check to see if the version has changed as well.</p> <p>For GKE specifically, you can actually set up <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-notifications#notification-types" rel="nofollow noreferrer">cluster notifications</a> and then subscribe to the UpgradeEvent and/or UpgradeAvailableEvent.</p> <p>I believe AKS may have recently introduced support for events as well, although I believe it currently only supports something similar to the UpgradeAvailableEvent.</p>
<p>I am using docker to run my Java WAR application, and when I run the container I get this exception: <strong>java.net.BindException: Address already in use</strong>.</p> <p>The container exposes port 8085 (8080-&gt;8085/tcp). I executed this command to run the docker container:</p> <blockquote> <p>docker run -p 8080:8085/tcp -d --name=be-app java-app-image:latest</p> </blockquote> <p>This is a screenshot of the error: <a href="https://i.stack.imgur.com/Hjoc0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hjoc0.png" alt="enter image description here" /></a></p> <p>I checked the opened ports inside the container: <a href="https://i.stack.imgur.com/6YIvP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6YIvP.png" alt="enter image description here" /></a></p> <p>I cannot restart Tomcat inside the container because it would stop. I thought about changing the 8085 port in the server.xml file, but I think I would also have to change the exposed port. Is there any solution to avoid this exception (java.net.BindException: Address already in use)?</p> <p>This is also what I am getting when I run the command ps aux: <a href="https://i.stack.imgur.com/xItSj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xItSj.png" alt="enter image description here" /></a></p>
<p>The <code>ps</code> shows <em>two</em> java processes, possibly running Tomcat.</p> <p>Since they would be running with the same parameters, including ports, it seems expected that the second process fails with</p> <pre><code>java.net.BindException: Address already in use
</code></pre> <p>Make sure to <code>docker stop</code> everything first, and check the output of <code>docker ps --all</code>.</p>
<p>I have a strange case. Context:</p> <p>At the very first, the client was using our domain for their store, the URL was something like <code>somestore.eu.mycompany.com</code></p> <p>Then, the client upgraded to a custom domain, other clients did this without any problem.</p> <p>We deleted the whole namespace with the old subdomain and created a new one with the domain.</p> <p>The root domain works flawlessly, without SSL certificate issues. However the staging subdomain works sometimes, sometimes without an SSL certificate issue, sometime with this error:</p> <pre><code>$ curl -vI https://staging.somestore.com/ * Trying 35.102.186.11:443... * TCP_NODELAY set * Connected to staging.somestore.com (35.102.186.11) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use h2 * Server certificate: * subject: CN=*.eu.mycompany.com * start date: Mar 13 09:29:46 2022 GMT * expire date: Jun 11 09:29:45 2022 GMT * subjectAltName does not match staging.somestore.com * SSL: no alternative certificate subject name matches target host name 'staging.somestore.com' </code></pre> <p>Looking at the logs I can see that <code>nginx-ingress</code> still trying to get the old certificate</p> <p><code>kubectl logs -f -n ingress-nginx nginx-ingress-controller-55f88544bf-dk7ht | grep my-namespace</code></p> <blockquote> <p>SSL certificate &quot;my-namespace/tls-cert&quot; does not contain a Common Name or Subject Alternative Name for server &quot;somestore.eu.mycompany.com&quot;: x509: certificate is valid for somestore.com, staging.somestore.com, <a href="http://www.somestore.com" rel="nofollow noreferrer">www.somestore.com</a>, not somestore.eu.mycompany.com</p> </blockquote> <p>Why Kubernetes's nginx-ingress still trying to get the old certificate?</p>
<ol> <li>make sure you have the correct configmaps, secrets or other configuration in your cluster (E.g. where SSL certs are stored). The desired config must be present, the deprecated must be dumped.</li> <li>perform a rollout restart on your deployment. ( E.g. if <code>nginx-ingress</code> is the name of the deployment in the <code>ingress</code> namespace, do this: <code>kubectl rollout restart -n ingress deploy/nginx-ingress</code> )</li> </ol>
<p>My nginx-ingress-controller is in the <code>ingress-nginx</code> namespace and I've set the large-client-header-buffers to <code>4 16k</code>, <code>4 32k</code> etc.</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: nginx-configuration namespace: ingress-nginx data: proxy-buffer-size: "16k" large-client-header-buffers: "4 16k" </code></pre> <p>When I inspect the configuration in the nginx-controller pod I see:</p> <pre><code> kubectl exec -n ingress-nginx nginx-ingress-controller-65fd579494-jptxh cat /etc/nginx/nginx.conf | grep large_client_header large_client_header_buffers 4 16k; </code></pre> <p>So everything seems to be configured correctly, still I get the error message <code>400 Bad Request Request Header Or Cookie Too Large</code></p>
<p>There is a <a href="https://github.com/kubernetes/ingress-nginx/issues/319" rel="nofollow noreferrer">dedicated topic on GitHub</a> about the problem, where you can find possible solutions. This problem should be completely resolved based on <a href="https://github.com/helm/charts/issues/20901" rel="nofollow noreferrer">this issue</a>.</p> <p>Also have a look at these tutorials on how to solve this problem from the browser side:</p> <ul> <li><a href="https://www.minitool.com/news/request-header-or-cookie-too-large.html" rel="nofollow noreferrer">How to Fix the “Request Header Or Cookie Too Large” Issue [MiniTool News]</a></li> <li><a href="https://support.mozilla.org/gl/questions/918154" rel="nofollow noreferrer">400 Bad Request Request Header Or Cookie Too Large nginx - What does this error mean?</a></li> <li><a href="https://support.mozilla.org/en-US/questions/1327416" rel="nofollow noreferrer">HTTP Error 400. The size of the request headers is too long.</a></li> </ul>
<p>aws-load-balancer-scheme: internal does not create an NLB, no error, it just never creates the NLB</p> <p>If I use the deprecated service.beta.kubernetes.io/aws-load-balancer-internal: &quot;true&quot;, it works fine.</p> <p>However, the annotation documentation says to use scheme instead.</p> <p>Here is my full code:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nlb-sample-service1 namespace: test annotations: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance service.beta.kubernetes.io/aws-load-balancer-scheme: internal service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp service.beta.kubernetes.io/aws-load-balancer-type: nlb spec: ports: - port: 80 targetPort: 80 protocol: TCP type: LoadBalancer selector: app: nginx </code></pre> <p>I've tried a number of different variations of that, nothing I tried works when scheme is used.</p> <p>What am I missing here?</p> <p>Is there any way to get an error? kubectl create -f service-file.yaml runs without any errors to stdout.</p> <p>Thanks in advance.</p>
<p>You need to use load balancer type <code>external</code> along with <code>internal</code> scheme as shown below:</p> <pre><code>service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-scheme: internal </code></pre> <p>For details, please refer to <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/nlb/" rel="nofollow noreferrer">kubernetes doc</a></p>
<p>In the Kubernetes docs, in the <a href="https://kubernetes.io/docs/concepts/configuration/overview/#using-labels" rel="nofollow noreferrer">Using Labels</a> section it says:</p> <blockquote> <p>A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a Deployment.</p> </blockquote> <p>I don't understand how this can be achieved by a Deployment? When we want to update a service shouldn't it happen separately (on its own), separate from any Deployments? Like:</p> <pre class="lang-sh prettyprint-override"><code>kubectl apply -f service.yaml </code></pre>
<p>A service points to a set of endpoints (pods) determined by its label selector.</p> <p>Let's take an example of a service that has the label selector</p> <pre class="lang-yaml prettyprint-override"><code>app: api version: v1 </code></pre> <p>It will point to all pods that have these two labels (may have more).</p> <p>If you deploy a new version, with label <code>version: v2</code> the existing service will NOT point to these pods since the label selector no longer matches the pods labels.</p> <p>On the other hand, if you omit <code>version: v1</code> from the label selector of the service and leave only <code>app: api</code>, the service will point to any pod that has the <code>app: api</code> label, meaning when you deploy a new version, even if the version label has a new value, the service will still point to these pods.</p> <p>This way you can update the service pods without updating the service itself - you can do that only by deploying your new api version.</p>
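<p>A minimal sketch of such a Service, selecting only on <code>app: api</code> so that both <code>version: v1</code> and <code>version: v2</code> pods receive traffic (names and ports are examples):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api          # no version label, so v1 and v2 pods are both endpoints
  ports:
  - port: 80
    targetPort: 8080
</code></pre>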
<p>I'm trying to find some sort of signal from a cluster indicating that there has been some sort of change with a Kubernetes cluster. I'm looking for any change that could cause issues with software running on that cluster such as Kubernetes version change, infra/distro/layout change, etc.</p> <p>The only signal that I have been able to find is a node restart, but this can happen for any number of reasons - I'm trying to find something a bit stronger than this. I am preferably looking for something platform agnostic as well.</p>
<p>In addition to watching Node events (see the complete list of events <a href="https://github.com/kubernetes/kubernetes/blob/7380fc735aca591325ae1fabf8dab194b40367de/pkg/kubelet/events/event.go#L50" rel="nofollow noreferrer">here</a>), you can use Kubernetes' <strong>Node Problem Detector</strong> for monitoring and reporting about a node's health (<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health" rel="nofollow noreferrer">link</a>).</p> <blockquote> <p>There are tons of node problems that could possibly affect the pods running on the node, such as:</p> <ul> <li>Infrastructure daemon issues: ntp service down;</li> <li>Hardware issues: Bad CPU, memory or disk;</li> <li>Kernel issues: Kernel deadlock, corrupted file system;</li> <li>Container runtime issues: Unresponsive runtime daemon;</li> </ul> </blockquote> <p>Node-problem-detector collects node problems from various daemons and make them visible to the upstream layers.</p> <p>Node-problem-detector supports several exporters:</p> <ul> <li><strong>Kubernetes exporter</strong> reports node problems to Kubernetes API server: temporary problems get reported as Events, and permanent problems get reported as Node Conditions.</li> <li>Prometheus exporter.</li> <li>Stackdriver Monitoring API.</li> </ul> <hr /> <p>Another option is the <strong>Prometheus Node Exporter</strong> (<a href="https://prometheus.io/docs/guides/node-exporter/" rel="nofollow noreferrer">link</a>). It exposes a wide variety of hardware- and kernel-related metrics (<strong>OS release info, system information as provided by the 'uname' system call</strong>, memory statistics, disk IO statistics, NFS statistics, etc.).</p> <p>Check the list of all existing collectors and the supported systems <a href="https://github.com/prometheus/node_exporter#collectors" rel="nofollow noreferrer">here</a>.</p>
<p>I want to monitor disk usage of the persistent volumes in the cluster. I am using <a href="https://github.com/coreos/kube-prometheus" rel="noreferrer">CoreOS Kube Prometheus</a>. A dashboard is trying to query with a metric called <strong>kubelet_volume_stats_capacity_bytes</strong> which is not available anymore with Kubernetes versions starting from v1.12.</p> <p>I am using Kubernetes version v1.13.4 and <a href="https://github.com/MaZderMind/hostpath-provisioner" rel="noreferrer">hostpath-provisioner</a> to provision volumes based on persistent volume claims. I want to access current disk usage metrics for each persistent volume.</p> <ul> <li><p><strong>kube_persistentvolumeclaim_resource_requests_storage_bytes</strong> is available but it only shows the persistent volume claim request in bytes</p></li> <li><p><strong>container_fs_usage_bytes</strong> does not fully cover my problem.</p></li> </ul>
<p>Per-PVC disk space usage in percentage can be determined with the following query:</p> <pre><code>100 * sum(kubelet_volume_stats_used_bytes) by (persistentvolumeclaim) / sum(kubelet_volume_stats_capacity_bytes) by (persistentvolumeclaim) </code></pre> <p>The <code>kubelet_volume_stats_used_bytes</code> metric shows per-PVC disk space usage in bytes.</p> <p>The <code>kubelet_volume_stats_capacity_bytes</code> metric shows per-PVC disk size in bytes.</p>
<p>I have roughly 20 cronjobs in Kubernetes that handle various tasks at specific time intervals. Currently there's a fair bit of overlap causing usage of resources to spike, as opposed to the usage graph being flatter.</p> <p>Below is a rough example of one of my cronjobs:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-task
spec:
  schedule: &quot;*/20 * * * *&quot;
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  suspend: false
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 1
      ttlSecondsAfterFinished: 900
      template:
        spec:
          serviceAccountName: my-task-account
          containers:
          - name: my-task
            image: 12345678910.dkr.ecr.us-east-1.amazonaws.com/my-task:latest
            command: [&quot;/bin/sh&quot;]
            args:
            - -c
            - &gt;-
              python3 my-task.py
            resources:
              requests:
                memory: &quot;3Gi&quot;
                cpu: &quot;800m&quot;
              limits:
                memory: &quot;5Gi&quot;
                cpu: &quot;1500m&quot;
          restartPolicy: Never
</code></pre> <p>Is there a way to stagger my jobs so that they aren't all running concurrently?</p> <p>ie.</p> <ul> <li>job 1 starts at 12:00 with next run at 12:20</li> <li>job 2 starts at 12:01 with next run at 12:21</li> <li>job 3 starts at 12:02 with next run at 12:22</li> <li>job 4 starts at 12:03 with next run at 12:23</li> </ul> <p>A solution where this is handled automatically would be 1st prize however a manually configured solution would also suffice.</p>
<p><strong>Posting my comment as the answer for better visibility.</strong></p> <p>As far as I understood, all your jobs are configured separately, so you can set a specific schedule for each of them. E.g. for job 1, which starts at 12:00 with the next run at 12:20, it can be set up like this:</p> <pre><code>spec:
  schedule: &quot;0,20 12 * * *&quot;
</code></pre> <p>and correspondingly for job 2:</p> <pre><code>spec:
  schedule: &quot;01,21 12 * * *&quot;
</code></pre>
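<p>For the every-20-minutes case from the question, the same idea can be expressed with offset step schedules, e.g. (standard cron range/step syntax, which Kubernetes CronJobs accept):</p>
<pre><code># job 1: runs at :00, :20, :40
schedule: &quot;*/20 * * * *&quot;
# job 2: runs at :01, :21, :41
schedule: &quot;1-59/20 * * * *&quot;
# job 3: runs at :02, :22, :42
schedule: &quot;2-59/20 * * * *&quot;
</code></pre>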
<p>I have a Helm Chart that looks like this:</p> <pre><code>apiVersion: v1 name: my-app version: 1.0-123.4b3k32 </code></pre> <p>I am trying to use helm template (or lint, it doesn't matter, they both throw the same error). With helm2 this wasn't a problem, but with helm3, it is complaining that</p> <pre><code>Error: validation: chart.metadata.version &quot;1.0-304.0770996&quot; is invalid </code></pre> <p>Now when executing helm template, I want to override this value using</p> <pre><code>helm template --set chart.metadata.version='0.0.0' </code></pre> <p>but I keep getting the same error, what am I doing wrong here?</p> <p>Changing it in the Chart itself is not an option. I tested by changing it manually to 0.0.0 to see if it works and it does. Setting it to 0.0.0 during templating would be fine for me.</p>
<p>The <a href="https://docs.helm.sh/docs/topics/charts/#charts-and-versioning" rel="nofollow noreferrer">Chart.yaml <code>version:</code> field</a> is specified to be a <a href="https://semver.org" rel="nofollow noreferrer">semantic version</a>; quoting the Helm documentation, &quot;non-SemVer names are explicitly disallowed by the system.&quot; SerVer's rule 2 is:</p> <blockquote> <p>A normal version number MUST take the form X.Y.Z where X, Y, and Z are non-negative integers....</p> </blockquote> <p>So changing the chart version to <code>1.0.0-456.a1b2c3d</code> might resolve this problem. (SerVer rule 9 allows the <code>-456...</code> suffix to indicate a pre-release version but all of its examples have three-part versions before the hyphen.)</p> <p>There is no way to override these <code>Chart.yaml</code> values with <a href="https://docs.helm.sh/docs/helm/helm_install/" rel="nofollow noreferrer"><code>helm install</code></a> or related commands. In particular <code>helm install --set</code> provides overrides to the <code>values.yaml</code> file, what template code sees as <code>.Values</code>.</p>
<p>If a Deployment uses ReplicaSets to scale Pods up and down, and StatefulSets don't have ReplicaSets...</p> <p>So, how does a StatefulSet manage to scale Pods up and down? I mean, what resource is responsible? What requests does a StatefulSet make in order to scale?</p>
<p>In short, the StatefulSet controller handles StatefulSet replicas.</p> <p>A StatefulSet is a Kubernetes API object for managing stateful application workloads. StatefulSets handle the deployment and scaling of sets of Kubernetes pods, providing guarantees about their uniqueness and ordering.</p> <p>Similar to deployments, StatefulSets manage pods with identical container specifications. They differ in terms of maintaining a persistent identity for each pod. While the pods are all created based on the same spec, they are not interchangeable, so each pod is given a persistent identifier that is maintained through rescheduling.</p> <p>Benefits of a StatefulSet deployment include:</p> <p><strong>Unique identifiers</strong>—every pod in the StatefulSet is assigned a unique, stable network identity, consisting of a hostname based on the StatefulSet name and the pod's ordinal index. For example, a StatefulSet for a web application with three instances may have pods named web-0, web-1 and web-2.</p> <p><strong>Persistent storage</strong>—every pod has its own stable, persistent volume, either by default or as defined per storage class. When the pods in a cluster are scaled down or deleted, their associated volumes are not lost, and the data persists. Unneeded resources can be purged by scaling down the StatefulSet to 0 before deleting the unused pods.</p> <p><strong>Ordered deployment and scaling</strong>—the pods in a StatefulSet are created and deployed in order, according to their ordinals. Pods are also shut down in (reverse) order, ensuring that the deployment and runtime are reliable and repeatable. The StatefulSet won't scale until every required pod is running, so if a pod fails, it will recreate that pod before it attempts to add more instances as per the scaling requirements.</p> <p><strong>Automated, ordered updates</strong>—a StatefulSet can handle rolling updates, shutting down each pod and recreating it according to the original order, until every pod has been replaced and the older versions cleaned up. The persistent volumes can be reused, so data is migrated to the new version automatically.</p>
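<p>As a small illustration of the naming and ordering described above (the names and image are placeholders, and the headless Service it references must exist), a three-replica StatefulSet produces pods <code>web-0</code>, <code>web-1</code> and <code>web-2</code>:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web        # headless Service providing stable per-pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.15.8
          ports:
            - containerPort: 80
</code></pre> <p>Scaling it (for example with <code>kubectl scale statefulset web --replicas=5</code>) adds <code>web-3</code> and <code>web-4</code> in order, and scaling back down removes the highest ordinals first.</p>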
<p>hope you are doing fine,</p> <p>i got that error :error:</p> <p><code>error converting YAML to JSON: yaml: line 33: found character that cannot start any token</code></p> <p>while trying to deploy this cronjob on my k8s cluster, can you please check and let me know if you have any clues about the reason of having this error ?</p> <p>the file is as follows:</p> <pre><code>--- apiVersion: batch/v1beta1 kind: CronJob metadata: name: resourcecleanup spec: # 10:00 UTC == 1200 CET schedule: '0 10 * * 1-5' jobTemplate: spec: template: metadata: annotations: iam.amazonaws.com/role: arn:aws:iam::%%AWS_ACCOUNT_NUMBER%%:role/k8s/pod/id_ResourceCleanup spec: containers: - name: resourcecleanup image: cloudcustodian/c7n args: - run - -v - -s - /tmp - -f - /tmp/.cache/cloud-custodian.cache - /home/custodian/delete-unused-ebs-volumes-policies.yaml volumeMounts: - name: cleanup-policies mountPath: /home/custodian/delete-unused-ebs-volumes-policies.yaml subPath: delete-unused-ebs-volumes-policies.yaml env: - name: AWS_DEFAULT_REGION value: %%AWS_REGION%% volumes: - name: cleanup-policies configMap: name: cleanup-policies restartPolicy: Never --- </code></pre>
<p>This error means the YAML parser hit a character that cannot start a token. The two usual causes are tabs used for indentation (use spaces only, two per level) and an unquoted scalar starting with a reserved character. In your manifest the likely culprit is the unquoted placeholder on the <code>value: %%AWS_REGION%%</code> line, since <code>%</code> is YAML's directive indicator and cannot begin a plain value; quoting it should fix the error. Hope this helps.</p>
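<p>A minimal sketch of the fix, applied to the snippet from the question (the <code>%%AWS_REGION%%</code> token is whatever your templating step substitutes):</p> <pre><code>env:
  - name: AWS_DEFAULT_REGION
    value: &quot;%%AWS_REGION%%&quot;   # quoted so the leading % is treated as data, not a YAML directive
</code></pre>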
<p>I need to service gitlab, nexus and jupyterhub based on URL using one open port using k8s ingress.</p> <p>If the path is written as &quot;/&quot; when create ingress, it works normally, but if you write &quot;/nexus&quot; like this, a problem occurs during the redirection process.</p> <p>Have any of you solved the same problem? Please help.</p> <p><a href="https://i.stack.imgur.com/9h1HM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9h1HM.png" alt="enter image description here" /></a></p> <p>my ingress.yaml as below</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 creationTimestamp: &quot;2022-04-06T05:56:40Z&quot; generation: 7 name: nexus-ing namespace: nexus resourceVersion: &quot;119075924&quot; selfLink: /apis/extensions/v1beta1/namespaces/nexus/ingresses/nexus-ing uid: 4b4f97e4-225e-4faa-aba3-6af73f69c21d spec: ingressClassName: nginx rules: - http: paths: - backend: serviceName: nexus-service servicePort: 8081 path: /nexus(/|$)(.*) pathType: ImplementationSpecific status: loadBalancer: ingress: - ip: 172.30.1.87 </code></pre>
<p>I solved this problem myself.</p> <ol> <li>I edited my PC's hosts file</li> </ol> <pre><code>172.30.1.87 nexus.k8s.io
172.30.1.87 gitlab.k8s.io
</code></pre> <ol start="2"> <li>I edited each Ingress in its service's namespace</li> </ol> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  namespace: nexus
spec:
  ingressClassName: nginx
  rules:
  - host: nexus.k8s.io
    http:
      paths:
      - backend:
          serviceName: nexus-service
          servicePort: 8081
        path: /
status:
  loadBalancer:
    ingress:
    - ip: 172.30.1.87
</code></pre> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: gitlab-ingress
  namespace: gitlab
spec:
  ingressClassName: nginx
  rules:
  - host: gitlab.k8s.io
    http:
      paths:
      - backend:
          serviceName: gitlab-webservice
          servicePort: 8181
        path: /
status:
  loadBalancer:
    ingress:
    - ip: 172.30.1.87
</code></pre> <ol start="3"> <li>Connection test using the Ingress hostname plus the ingress controller's NodePort</li> </ol> <p><a href="https://i.stack.imgur.com/v2nYv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v2nYv.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/tU4e9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tU4e9.png" alt="enter image description here" /></a></p>
<p>I have a few consumers which update their local cache on reading a message from the Kinesis queue. But this leads to inconsistency, as a message is processed by only one of the replicas, and any information retrieved from the other replicas can come from a stale cache.</p> <p>How can I make sure the cache is cleared on all the replicas when an invalidation message appears in the queue?</p>
<p>The library that you're using assigns consumers to shards, so any message that you put on the stream will only go to one consumer.</p> <p>The best solution would be to send your invalidation message out-of-band, using another Kinesis stream (or an alternative such as SNS). This would add complexity to your listeners, as they'd now have to listen to two sources (and you couldn't use that library for the out-of-band messages).</p> <p>If you want to send in-band cache invalidation messages, then you need to write the invalidation message multiple times, with different partition keys, so that it goes to all shards in the stream. The way that the partition key works is that it's MD5-hashed, and that hash is used to select a shard (see <a href="https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html" rel="nofollow noreferrer">PutRecord doc</a>).</p> <p>Assuming that your application can handle multiple cache invalidation messages, I think that the best way to handle this is to write multiple messages with different partition keys (no need to get fancy, just use &quot;A&quot;, &quot;B&quot;, &quot;C&quot;, and so on). Keep doing this until you've written a message to every shard (getting the shard ID for each record from the response to <code>PutRecord</code>).</p> <p>You can call <a href="https://docs.aws.amazon.com/kinesis/latest/APIReference/API_ListShards.html" rel="nofollow noreferrer">ListShards</a> with a <code>ShardFilter</code> of <code>AT_LATEST</code> to get the shard IDs of the currently active shards.</p>
<p>I'm very new to Istio and not a Kubernete's expert, though I have used the latter. I respectfully ask for your understanding and a bit more details than you might normally include.</p> <p>For simplicity, say I have two services, both Java/SpringBoot. Service A listens to requests from the outside world, Service B listens to requests from Service A. Service B is scalable, and at points might return 503. I wish to have service A retry calls to service B in a configurable non-programmatic way. Here's a blog/link that I tried to follow that I think is very similar.</p> <p><a href="https://samirbehara.com/2019/06/05/retry-design-pattern-with-istio/" rel="nofollow noreferrer">https://samirbehara.com/2019/06/05/retry-design-pattern-with-istio/</a></p> <p>Two questions:</p> <ol> <li><p>It may seem obvious, but if I wanted to define a virtual retriable service, do I add it to the existing application.yml file for the project or is there some other file that the networking.istio.io/v1alpha3 goes?</p> </li> <li><p>Would I define the retry configuration in the yaml/repo for Service A or Service B? I can think of reasons for architecting Istio either way.</p> </li> </ol> <p>Thanks, Woodsman</p>
<p>If the scalable service is returning <code>503</code>, it makes sense to add a VirtualService just like the blog example for <code>serviceB</code>, so that calls from <code>serviceA</code> to <code>serviceB</code> get the retry policy applied. Note that the VirtualService is a separate Kubernetes manifest that you apply with <code>kubectl</code> (or keep alongside Service B's other deployment manifests); it does not go into the Spring Boot <code>application.yml</code>.</p> <p>For this to work (from within the cluster):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceB
spec:
  hosts:
  - serviceB
  http:
  - route:
    - destination:
        host: serviceB
    retries:
      attempts: 3
      perTryTimeout: 2s
</code></pre> <p>These lines:</p> <pre><code>  hosts:
  - serviceB
</code></pre> <p>tell the mesh that any in-cluster traffic addressed to <code>serviceB</code> should be handled by this VirtualService's routing rules, so the retries are performed by the calling side's sidecar (Service A's proxy) before requests reach <code>serviceB</code>.</p> <p>Hope this helps</p>
<p>Suddenly, this error message started appearing when starting minikube:</p> <pre><code>* Creating docker container (CPUs=2, Memory=4000MB) .../ E0124 18:28:01.039963    8724 kic.go:267] icacls failed applying permissions - err - [%!s(&lt;nil&gt;)], output - [file elaborato: C:\Users\user\.minikube\machines\minikube\id_rsa
Elaborazione completata per 1 file.
Elaborazione non riuscita per 0 file]
</code></pre> <p>It's strange because it reports an error, but the (Italian) output also says that 1 file was processed successfully and 0 files failed. Minikube is not behaving strangely otherwise, though. Should I just ignore this?</p>
<p>Sounds like it is a known issue (and will be fixed in 1.26.0): <a href="https://github.com/kubernetes/minikube/issues/13868" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/13868</a></p>
<p>I have one service with 10 pods inside my cluster, and a client sends it multiple requests via the master node for more than an hour. I modified my ingress resource with some annotations; the first annotation I used only changes the load-balancing method to EWMA:</p> <pre><code>Annotations:  nginx.ingress.kubernetes.io/load-balance: ewma
</code></pre> <p>While the client was requesting the service through the master node, it received responses as expected, meaning it got different responses from different pods on different agent nodes inside the cluster.</p> <p>But when I changed the annotation to this one</p> <pre><code>nginx.ingress.kubernetes.io/upstream-hash-by: &quot;ewma&quot;
</code></pre> <p>the client received the same answer every time, from the same pod and the same node, even though I sent requests to the service 5 times a second for more than 30 minutes. Are they implemented correctly? And why do they behave differently?</p>
<p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p> <p><strong>Root cause:</strong></p> <p>The goal is for the pods to receive requests from clients evenly (load balancing) while making good use of machine resources.</p> <p>Given that, we need to choose between the two annotations below (based on the question):</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/load-balance: ewma
</code></pre> <p>and</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/upstream-hash-by: ewma
</code></pre> <p><strong>Solution:</strong></p> <p>Using the <code>nginx.ingress.kubernetes.io/load-balance: ewma</code> annotation is the preferable solution for this purpose.</p> <p>Based on the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#load-balance" rel="nofollow noreferrer">documentation</a>: <code>load-balance</code> uses the Peak EWMA method for routing. In contrast, <code>upstream-hash-by</code> load-balances using consistent hashing of the IP (or another variable), which deliberately pins a client to the same pod.</p> <p>For more information, see this article <a href="https://flugel.it/kubernetes/kubernetes-nginx-ingress-consistent-hash-subset-load-balancer/" rel="nofollow noreferrer">Kubernetes Nginx Ingress: Consistent hash subset load balancer</a>.</p>
<p>My application runs on a Kubernetes cluster of 3 nodes and uses Kafka to stream data. I am trying to check my system's ability to recover from node failure, so I deliberately fail one of the nodes for 1 minute.</p> <p>Around 50% of the time, I experience loss of a single data record after the node failure. If the controller Kafka broker was running on the failed node, I see that a new controller broker was elected as expected. When the data loss occurs, I see the following error in the new controller broker's log:</p> <blockquote> <p>ERROR [Controller id=2 epoch=13] Controller 2 epoch 13 failed to change state for partition __consumer_offsets-45 from OfflinePartition to OnlinePartition (state.change.logger) [controller-event-thread]</p> </blockquote> <p>I am not sure if that's the problem, but searching the web for information about this error made me suspect that I need to configure Kafka to have more than 1 replica for each topic. This is what my topics/partitions/replicas configuration looks like: <a href="https://i.stack.imgur.com/60S0P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/60S0P.png" alt="enter image description here" /></a></p> <p><strong>My questions:</strong> Is my suspicion that more replicas are required correct?</p> <p>If yes, how do I increase the number of topic replicas? I played around with a few broker parameters such as <code>default.replication.factor</code> and <code>replication.factor</code> but I did not see the number of replicas change.</p> <p>If no, what is the meaning of this error log?</p> <p>Thanks!</p>
<p>Yes, if the broker hosting the single replica goes down, then you can expect that topic's partitions to go offline. If you have unclean leader election disabled, however, you shouldn't lose data that's already been persisted to the broker.</p> <p>To modify existing topics, you must use the <code>kafka-reassign-partitions</code> tool, not any of the broker settings, as those only apply to brand-new topics. See <a href="https://stackoverflow.com/questions/52642710/kafka-increase-replication-factor-of-multiple-topics">Kafka | Increase replication factor of multiple topics</a>.</p> <p>Ideally, you should also disable auto topic creation, to force clients to use Topic CRD resources in Strimzi that include a replication factor, and you can use other k8s tools to verify that they have values greater than 1.</p>
<p>I have the following values.yaml</p> <pre><code>ingresses: - name: public class: &quot;nginx&quot; annotations: nginx.ingress.kubernetes.io/proxy-body-size: 122m nginx.ingress.kubernetes.io/proxy-connect-timeout: &quot;7&quot; nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;60&quot; nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;30&quot; labels: {} rules: - host: example.com http: paths: - path: /asd/as pathType: ImplementationSpecific backend: service: name: one port: number: 8080 - backend: service: name: log port: number: 8081 path: /path/log pathType: ImplementationSpecific - backend: service: name: got port: number: 8082 path: /api/got pathType: ImplementationSpecific tls: - hosts: - example.com secretName: cert - name: public annotations: labels: {} rules: - host: example1.com http: paths: - backend: service: name: web port: number: 8090 pathType: ImplementationSpecific tls: - hosts: - example1.com secretName: qwe </code></pre> <p>and I have the following ingress file:</p> <pre><code>{{- $top := . -}} {{- range $ingress := .Values.ingresses }} apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: {{ $ingress.name }} namespace: {{ $ingress.namespace }} {{- with $ingress.annotations }} annotations: {{- toYaml . | nindent 8 }} {{- end }} spec: {{- if and $ingress.class (semverCompare &quot;&gt;=1.18-0&quot; $.Capabilities.KubeVersion.GitVersion) }} ingressClassName: {{ $ingress.class }} {{- end }} {{- if $ingress.tls }} tls: {{- range $ingress.tls }} - hosts: {{- range .hosts }} - {{ . | quote }} {{- end }} secretName: {{ .secretName }} {{- end }} {{- end }} rules: {{- range $ingress.rules }} - host: {{ .host | quote }} http: paths: {{- range .paths }} {{- if and .path (semverCompare &quot;&gt;=1.18-0&quot; $.Capabilities.KubeVersion.GitVersion) }} - path: {{ .path }} {{ end }} {{- if and .pathType (semverCompare &quot;&gt;=1.18-0&quot; $.Capabilities.KubeVersion.GitVersion) }} pathType: {{ .pathType }} {{- end }} backend: service: name: {{ .backend.service.name }} port: number: {{ .backend.service.port.number}} {{- end }} {{- end }} {{- end }} </code></pre> <p>This only generates one ingress (whichever is the last one in values files). I tried using <code>range $ingress := .Values.ingress</code> but it keeps giving me an error whenever I try $ingress.name . What changes do I make to the ingress.yaml to be able to deploy both these ingresses.</p> <p>Edit: Made edits based on David's answer.</p>
<p>You need to break the two separate ingress configurations up in the Helm values somehow. Right now they're in a single map object under <code>ingress:</code>, so <code>.Values.ingress.name</code> for example only has one value rather than being something you can iterate over.</p> <p>A YAML list here makes sense:</p> <pre class="lang-yaml prettyprint-override"><code># values.yaml
ingresses:
  - name: example-com
    class: nginx
    rules: [...]
  - name: example1-com
    class: nginx
    rules: [...]
</code></pre> <p>Then you can iterate over this list with a <code>range</code> loop. The important thing to know about a <code>range</code> loop is that it rebinds the <code>.</code> special variable, which is the base of constructs like <code>.Values</code>; that means that you need to save the original value of <code>.</code> outside the loop (the <code>$</code> special variable may work as well). You can generate multiple Kubernetes objects in a single Helm template file so long as each begins with the YAML <code>---</code> start-of-document marker (and it's valid to generate no output at all).</p> <pre class="lang-yaml prettyprint-override"><code>{{- /* save the original value of . */ -}}
{{- $top := . -}}

{{- /* iterate over the ingress configurations */ -}}
{{- range $ingress := .Values.ingresses }}
---
{{- /* your existing conditionals can go here, simplifying */}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  {{- /* this comes from the per-ingress config */}}
  {{- with $ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- /* if you need to use &quot;standard&quot; helper functions, make
         sure to pass the saved $top value as their parameter */}}
  name: {{ include &quot;mychart.fullname&quot; $top }}-{{ $ingress.name }}
spec: { ... }
{{- end }}
</code></pre> <p>You also may want to reconsider how much of this is appropriate to include in arbitrarily-configurable values. Rather than essentially write out the entire Ingress object in Helm values, you may find it easier to write out things like the path mappings in the template files themselves, and have a few high-level controls (&quot;enabled&quot;, &quot;host name&quot;, &quot;TLS secret name&quot;) exposed. Things like the backend service name and port will correspond to other things in your chart and you may need to compute the service name; someone just installing your chart shouldn't need to configure this.</p>
<p>I'm trying to migrate from docker-maven-plugin to kubernetes-maven-plugin for a test setup used for local development and Jenkins builds. The point of the setup is to eliminate differences between local development and the Jenkins server. Since Docker built the image, the image is stored in the local repository and doesn't have to be uploaded to a central server where the base images are located. So we can basically verify our build without uploading anything to the server, and the image is discarded after the task is done (running integration tests).</p> <p>Is there a similar way to trick Kubernetes into using the image from the local repository without the roundtrip to a central repository? E.g., behave as if the image is already downloaded? Note that I still need to fetch the base image from the central repository.</p>
<p>If you don't want to use any docker repo (public or private), you can use what is called <a href="https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images" rel="nofollow noreferrer">Pre-pulled-images</a>. This is a bit annoying as you need to make sure all the kubernetes nodes have the images there and also set the <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">ImagePullPolicy</a> to <code>Never</code> in every kubernetes manifest.</p> <p>In your case, if what you call local repository is some private docker registry, you just need to store the credentials to the private registry in a kubernetes secret and either patch you default service account with <code>ImagePullSecrets</code> or your actual deployment/pod manifest. More details about that <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
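<p>A rough sketch of what the pre-pulled route looks like in a Deployment (the image tag is a placeholder for whatever your local build produces); the key line is <code>imagePullPolicy: Never</code>, which tells the kubelet to use only what is already present on the node:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:local-build   # must already exist in the node's image store
          imagePullPolicy: Never      # never contact a registry for this image
</code></pre> <p>For the private-registry variant, you would instead keep the normal pull policy and reference a docker-registry secret (created with <code>kubectl create secret docker-registry ...</code>) via <code>imagePullSecrets</code>, as described in the linked documentation.</p>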
<p>I deployed a cluster (on-premise) as a proof of concept, using this command:</p> <p><code>sudo kubeadm init --upload-certs --pod-network-cidr=x.x.x.x/16 --control-plane-endpoint=x.x.x.x.nip.io</code></p> <p>Now I need to change the endpoint from <code>x.x.x.x.nip.io</code> to <code>somename.example.com</code>. How can I do this?</p> <hr /> <p>Kubeadm version: <code>&amp;version.Info{Major:&quot;1&quot;, Minor:&quot;23&quot;, GitVersion:&quot;v1.23.4&quot;, GitCommit:&quot;e6c093d87ea4cbb530a7b2ae91e54c0842d8308a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-02-16T12:36:57Z&quot;, GoVersion:&quot;go1.17.7&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}</code></p>
<p>Posting an answer as a community wiki out of comments, feel free to edit and expand.</p> <hr /> <p>Based on the documentation and <a href="https://stackoverflow.com/questions/65505137/how-to-convert-a-kubernetes-non-ha-control-plane-into-an-ha-control-plane/65565377#65565377">very good answer</a> (which is about switching from simple to high availability cluster and it has steps about adding <code>--control-plane-endpoint</code>), there's no easy/straight-forward solution to make it.</p> <p>Considering risks and difficulty it's easier to create another cluster with a correct setup and migrate all workflows there.</p>
<p>By default, when we make a request from one Pod to another, Kubernetes tries appending <code>.namespace.svc.cluster.local</code> (and the other search domains) to the name we gave and resolving that.</p> <p>In our case we are already using a fully qualified URL for the request (<a href="http://service-name.namespace.svc.cluster.local/api/..." rel="nofollow noreferrer">http://service-name.namespace.svc.cluster.local/api/...</a>) everywhere, but Kubernetes still tries to resolve <code>service-name.namespace.svc.cluster.local.namespace.svc.cluster.local</code> first, then a bunch of other search domains, and only as a last resort does it try the domain exactly as given.</p> <p><strong>Question:</strong> Is there a way to configure Kubernetes to resolve the given domain on the first try, and only fall back to the search domains if that fails?</p> <p><strong>Environment Info:</strong></p> <p>Environment: AKS<br /> Pod OS: Debian GNU v10 (buster)</p> <p><strong>Additional Info:</strong></p> <p>Contents of <code>/etc/resolv.conf</code> inside a Pod</p> <pre><code>search namespance.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
nameserver x.x.x.x
options ndots:5
</code></pre> <p>Wireshark:</p> <p><a href="https://i.stack.imgur.com/Xoxam.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xoxam.png" alt="enter image description here" /></a> As you can see, for every single successful request, there are 4 failed requests before it.</p>
<p>Lowering the <code>ndots</code> will fix the issue</p> <blockquote> <p>ndots: sets a threshold for the number of dots which must appear in a name before an initial absolute query will be made. The default for n is 1, meaning that if there are any dots in a name, the name will be tried first as an absolute name before any search list elements are appended to it.</p> </blockquote> <p>Try this :</p> <pre><code>spec: containers: - name: ... image: ... dnsConfig: options: - name: ndots value: &quot;1&quot; </code></pre>
<p>I know we can edit the pvc and change to RWX but there is a cache in this, I'm trying to do in GKE, so for my pvc with RWO the storage class is standard, but if edit to RWX i need to change the storage class also to NFS.</p> <p>Is it possible to achieve this without losing data inside PVC ?</p>
<p>Your existing PVC uses the standard storage class, which doesn't support RWX, so this isn't possible: even if you edit the access mode in the PVC config it won't work.</p> <p>The workaround is to back up the existing PV's data, create a new PVC in RWX mode backed by an NFS PV, mount it in the application, and copy the backed-up data onto the newly mounted volume.</p>
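<p>A rough sketch of the new claim (the storage class name depends on the NFS provisioner you set up, e.g. the Filestore CSI driver or an nfs-subdir provisioner on GKE, so treat it as a placeholder):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-rwx
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # placeholder: your NFS-backed StorageClass
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>One way to do the copy is to run a temporary pod that mounts both the old RWO claim and this new claim and copies the files across before you switch the workload over to the new claim.</p>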
<p>Is there a way to determine programatically if a pod is in crashloopbackoff? I tried the following</p> <pre><code>pods,err := client.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{}) if err != nil { return err } for _, item := range pods.Items { log.Printf(&quot;found pod %v with state %v reason %v and phase %v that started at %v&quot;, item.Name, item.Status.Message, item.Status.Reason, item.Status.Phase, item.CreationTimestamp.Time) } </code></pre> <p>However this just prints blank for state and reason, tough it prints the phase.</p>
<p>To clarify I am posting a community wiki answer.</p> <blockquote> <p>It's hiding in <a href="https://pkg.go.dev/k8s.io/api/core/v1#ContainerStateWaiting" rel="nofollow noreferrer"><code>ContainerStateWaiting.Reason</code></a>:</p> </blockquote> <pre><code>kubectl get po -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}' </code></pre> <blockquote> <p>although be aware that it only <em>intermittently</em> shows up there, since it is an intermittent state of the container; perhaps a more programmatic approach is to examine the <code>restartCount</code> and the <code>Error</code> state</p> </blockquote> <p>See also <a href="https://github.com/kubernetes-client/go/blob/master/kubernetes/docs/V1PodStatus.md" rel="nofollow noreferrer">this repository</a>.</p>
<p>I plan to use Helm for deploying purposes. I have three applications/pods p1,p2,p3 and each of these has 2 enivronments dev, prod and in each environment there is a configmap.yml and deployment.yml.</p> <p>I plan on using helm, however how can I structure these. Do I need three helm charts?, one per application or is it possible to pack everything in one helm, considering the constraints.</p> <p>I thought of the following structure.</p> <pre><code>+-- charts | \-- my-chart | +-- Chart.yaml # Helm chart metadata | +-- templates | \-- p1 +-- configmap1.yml +-- dep1.yaml ............................ similiary for p2,p3 | +-- values.yaml # default values | +-- values.dev.p1.yaml # development override values | +-- values.dev.p2.yaml | +-- values.dev.p3.yaml | +-- values.prod.p1.yaml # production override values | +-- values.prod.p2.yaml | +-- values.prod.p3.yaml </code></pre> <p>Now if I want to deploy p1 in prod , then I simply</p> <pre><code>helm install -f values.prod.p1.yaml helm-app </code></pre> <p>Would this work is this the general convention?</p>
<p>You can use a single Helm chart to manage all the deployments and config maps.</p> <p>Create a <strong>tpl</strong> (template) for the Deployment and ConfigMap, so that a single template is used to generate the YAML for all three applications. You then get the three Deployment YAMLs as output while maintaining a single template file; see the sketch after this answer.</p> <p>For the ConfigMaps you can follow the same approach and keep everything in the single chart, if that works for you.</p> <p>For the different environments you can manage the different <strong>values</strong> in separate values files, such as <strong>dev-values.yaml</strong> &amp; <strong>prod-values.yaml</strong>:</p> <pre><code>helm install -f values.prod.p1.yaml helm-app
</code></pre>
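<p>A minimal sketch of that idea, with made-up keys and image names (this is just one way to lay it out, not a fixed convention). First the values list:</p> <pre><code># values.yaml
apps:
  - name: p1
    image: registry.example.com/p1:1.0.0
  - name: p2
    image: registry.example.com/p2:1.0.0
  - name: p3
    image: registry.example.com/p3:1.0.0
</code></pre> <p>and then one template that loops over it, producing one Deployment per entry:</p> <pre><code># templates/deployment.yaml
{{- range .Values.apps }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .name }}
  template:
    metadata:
      labels:
        app: {{ .name }}
    spec:
      containers:
        - name: {{ .name }}
          image: {{ .image }}
{{- end }}
</code></pre> <p>With <code>helm install -f prod-values.yaml</code> (or your naming scheme) you would then override the list per environment.</p>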
<p>I have make my deployment work with istio ingressgateway before. I am not aware of any changes made in istio or k8s side.</p> <p>When I tried to deploy, I see an error in replicaset side that's why it cannot create new pod.</p> <blockquote> <p>Error creating: Internal error occurred: failed calling webhook &quot;namespace.sidecar-injector.istio.io&quot;: Post &quot;https://istiod.istio-system.svc:443/inject?timeout=10s&quot;: dial tcp 10.104.136.116:443: connect: no route to host</p> </blockquote> <p>When I try to go inside api-server and ping 10.104.136.116 (istiod service IP) it just hangs.</p> <p>What I have tried so far:</p> <ul> <li>Deleted all coredns pods</li> <li>Deleted all istiod pods</li> <li>Deleted all weave pods</li> <li>Reinstalling istio via istioctl x uninstall --purge</li> <li>turning all of VMs firewall</li> <li>sudo iptables -P INPUT ACCEPT sudo iptables -P FORWARD ACCEPT sudo iptables -P OUTPUT ACCEPT sudo iptables -F</li> <li>restarted all of the nodes</li> <li>manual istio pod injection</li> </ul> <p>Setup</p> <ul> <li>k8s version: 1.21.2</li> <li>istio: 1.10.3</li> <li>HA setup</li> <li>CNI: weave</li> <li>CRI: containerd</li> </ul>
<p>In my case this was related to firewall. More info can be found <a href="https://istio.io/latest/docs/setup/platform-setup/gke/" rel="nofollow noreferrer">here</a>.</p> <p>The gist of it is that on GKE at least you need to open another port 15017 in addition to 10250 and 443. This is to allow communication from your master node(s) to you VPC.</p>
<p>I'm studying how to make a chat service with gRPC. I notice that most of the examples store all the subscriber's connections into a List data structure. When the chatroom got a new message, the server will loop through that List and send new messages.</p> <p>Stores subscribers in <a href="https://github.com/Raaaaaaaay86/grpc-chat-server/blob/master/src/main/java/com/example/grpcchatserver/observer/ChannelObserver.java" rel="nofollow noreferrer">source code's</a> variable <strong>subscribers</strong>.</p> <p>My question is that when the subscriber number grows up and storing all the subscribers in the HashMap seems like a bad idea because that will cost too much memory? I have tried to store those connections in Redis but it might be impossible <a href="https://stackoverflow.com/questions/57848633/how-to-make-grpc-streamobserver-object-serializable-in-java">due to the connection is not serializable.</a></p> <p>My Idea is when I deploy multiple pod instances in Kubernetes. Each chat service becomes independent. How could I get the subscribers correctly when they are distributed on different servers?</p>
<p>Here are a couple of aspects to think about:</p> <ul> <li>Any chat system has some limits in terms of chatroom size. How many subscribers do you want to support? If you want to support tens (or even hundreds) of subscribers, I believe it works well with an in-memory list of subscribers, since it's much faster than keeping the subscriber list in an external cache.</li> <li>When you need to send a message to the entire list of subscribers, instead of looping over that list, you can do it in parallel, using an IO-optimized thread pool.</li> <li>Deploying your chat service in a pod is a good idea; you need the ability to scale horizontally, but you also need a smart gateway in front of your service to route the requests from a given user to the pod that stores their information (if the subscriber list is stored in memory). Otherwise, if this info is stored externally (in a cache), your chat service can be fully stateless.</li> </ul>
<p>We are working on a Java Spring Boot application, that needs access to a database, the password is stored on the <code>application.properties</code> file.</p> <p>Our main issue is that the passwords might be viewable when uploaded to GitLab/GitHub.</p> <p>I found that we can use Jasypt to encrypt the data, but from what I read, I need to use the decryption key on the execution, which is also stored on Git, in order to be deployed using Kubernates.</p> <p>Is there some way to secure our passwords in such a case? We are using AWS if that makes any difference, and we are trying to use the EKS service, but until now we have had a VM with K8s installed.</p>
<p>Storing passwords in <code>application.properties</code>, as you mention, is insecure; you will also likely have different versions of your application (dev, staging, prod) which use different databases and different passwords.</p> <p>What you can do in this case is leave the password empty in the source files and <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config" rel="nofollow noreferrer">externalize this configuration</a>, i.e. use an environment variable in your k8s deployment file (or on the VM the application runs on); Spring Boot will load it as a property value if it has the right format. From the Spring <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>Spring Boot lets you externalize your configuration so that you can work with the same application code in different environments. You can use a variety of external configuration sources, include Java properties files, YAML files, environment variables, and command-line arguments.</p> </blockquote>
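<p>In Kubernetes, the usual pattern is to keep the password in a Secret and inject it as an environment variable; Spring Boot's relaxed binding maps <code>SPRING_DATASOURCE_PASSWORD</code> onto <code>spring.datasource.password</code>. A hedged sketch, with placeholder names for the image, secret and key:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-spring-app
  template:
    metadata:
      labels:
        app: my-spring-app
    spec:
      containers:
        - name: my-spring-app
          image: registry.example.com/my-spring-app:1.0.0
          env:
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials   # created out of band, never committed to Git
                  key: password
</code></pre> <p>On EKS you could also source the secret from AWS Secrets Manager (for example via the Secrets Store CSI driver or External Secrets), but the application side stays the same: it just reads an environment variable.</p>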
<p>The following questions are about an on-prem K3S setup.</p> <p>1] How does HTTP/S traffic reach an ingress controller in say K3S?</p> <p>When I hit any of my nodes on HTTPS port 443 I get the traefik ingress controller. This must be &quot;magic&quot; though because:</p> <ul> <li>There is no process on the host listening on 443 (according to lsof)</li> <li>The actual <code>nodePort</code> on the <code>traefik</code> service (of type LoadBalancer) is 30492</li> </ul> <p>2] Where is the traefik config located inside the ingress controller pod? When I shell into my traefik pods I cannot find the config anywhere - <code>/etc/traefik</code> does not even exist. Is everything done via API (from Ingress resource definitions) and not persisted?</p> <p>3] Is ingress possible without any service of type LoadBalancer? I.e. can I use a nodePort service instead by using an external load balancer (like F5) to balance traffic between nodes and these nodeports?</p> <p>4] Finally, how do the traefik controller pods &quot;know&quot; when a node is down and stop sending/balancing traffic to pods which no longer exist?</p>
<ol> <li>Port-forwarding is responsible for mapping traffic arriving on port 443 to the Traefik ingress controller; NodePorts themselves are only allocated from the 30000-32767 range by default.</li> </ol> <p>Refer to this <a href="https://doc.traefik.io/traefik/user-guides/crd-acme/#port-forwarding" rel="nofollow noreferrer">documentation</a> for more information on port forwarding.</p> <ol start="3"> <li>Yes. An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">Service.Type=NodePort</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Service.Type=LoadBalancer</a>, and an external load balancer can target the NodePort on each node; see the sketch below.</li> </ol> <p>Refer to this <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">documentation</a> for more information on ingress.</p> <ol start="4"> <li>Kubernetes has a health check mechanism to remove unhealthy pods from Kubernetes services (cf. readiness probes). As unhealthy pods have no Kubernetes endpoints, Traefik will not forward traffic to them. Therefore, Traefik's own health check is not available for the kubernetesCRD and kubernetesIngress providers.</li> </ol> <p>Refer to this <a href="https://doc.traefik.io/traefik/routing/services/#health-check" rel="nofollow noreferrer">documentation</a> for more information on health checks.</p>
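<p>For question 3, a rough sketch of exposing the ingress controller via a NodePort Service that an external balancer such as an F5 can target on every node. The selector and target port depend on how Traefik was installed (K3s ships its own Traefik deployment), so treat them as placeholders:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: traefik-nodeport
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: traefik   # placeholder: match your Traefik pods' labels
  ports:
    - name: websecure
      port: 443
      targetPort: websecure           # placeholder: Traefik's HTTPS entrypoint
      nodePort: 30443
</code></pre> <p>The F5 would then balance TCP 30443 across the node IPs, and Traefik handles the HTTP routing from there.</p>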
<p>I am trying to send mail using sendgrid api and cronjob in k8s , I tried my python code in cloud function and it is running as expected however when I used my code to create GCR image and deploy it in a k8s cronjob i got an <code>urllib.error.URLError: &lt;urlopen error [Errno 104] Connection reset by peer&gt; </code>error</p> <p><img src="https://i.stack.imgur.com/9OFNG.png" alt="enter image description here" /></p> <p>Well i created a pod for debugging , here is my pod definition that uses my linux image :</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: notifier spec: serviceAccountName: xxxxxxxxxxx containers: - name: test image: eu.gcr.io/xxxxxxxxxxxxxxxxxxxxxx:v1.4 command: [ &quot;/bin/bash&quot;, &quot;-c&quot;, &quot;--&quot; ] args: [ &quot;while true; do sleep 30; done;&quot; ] </code></pre> <p>I know that I need to allow traffic (egress and ingress ) so that my pod can get traffic from api however i don't know how to do it , here is my cronjob definition :</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: zzzzzzzzzzzzzzz spec: schedule: &quot;00 7 * * *&quot; concurrencyPolicy: Forbid successfulJobsHistoryLimit: 5 failedJobsHistoryLimit: 3 jobTemplate: spec: parallelism: 1 backoffLimit: 0 template: spec: restartPolicy: Never serviceAccountName: xxxxxxxxxx containers: - name: yyyyyyyyyyyyy image: eu.gcr.io/xxxxxxxxxxxxxxxxxxxxxxxx:v1.4 resources: requests: memory: &quot;512Mi&quot; cpu: 1 limits: memory: &quot;1024Mi&quot; cpu: 2 </code></pre> <p>I am using kustomize k8s , GKE , python sendgrid api</p> <p>thank you for your support</p>
<p>To communicate with SendGrid, you need working DNS configuration in the pod so that the SendGrid API hostname can be resolved, and outbound traffic (egress) has to be allowed at the pod level. In the end I abandoned SendGrid and used the company's SMTP server instead.</p>
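<p>For completeness, if the blocker is a restrictive NetworkPolicy rather than DNS, an egress rule along these lines would be needed; this is a hedged sketch that assumes the mailing pods carry an <code>app: notifier</code> label (adjust to your labels) and that the cluster has network policy enforcement enabled:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-notifier-egress
spec:
  podSelector:
    matchLabels:
      app: notifier          # placeholder label for the cronjob pods
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: UDP      # DNS lookups (e.g. for api.sendgrid.com)
          port: 53
        - protocol: TCP
          port: 53
    - ports:
        - protocol: TCP      # HTTPS to the SendGrid API endpoint
          port: 443
</code></pre>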
<p>I am currently working on a side project with Go. I'm trying to get the information about the pods running on the cluster.</p> <p>I can reach the pods according to the namespace value, but in order to reach the working pods with the metadata.labels.applicationGroup value in the service.yaml file, I need to obtain this value first.</p> <p>I added below a part of my service.yaml file.</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: metadata-name labels: service: service-name applicationGroup: beta --&gt; this field spec: replicas: 1 selector: matchLabels: service: service-name template: metadata: labels: service: service-name spec: containers: - name: nginx image: nginx:1.15.8 ports: - containerPort: 80 ... </code></pre> <p>Right now, I can access information pods with &quot;default&quot; namespaces.</p> <pre><code>func getPodsInfo() (string, error) { var kubeConfig *string if home := homedir.HomeDir(); home != &quot;&quot; { kubeConfig = flag.String(&quot;kubeConfig&quot;, filepath.Join(home, &quot;.kube&quot;, &quot;config&quot;), &quot;(optional) absolute path to the kubeConfig file&quot;) } else { kubeConfig = flag.String(&quot;kubeConfig&quot;, &quot;&quot;, &quot;absolute path to the kubeConfig file&quot;) } flag.Parse() config, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, *kubeConfig) if err != nil { err = fmt.Errorf(&quot;error occured when build config from flags: %s&quot;, err.Error()) return &quot;&quot;, err } clientSet, err := kubernetes.NewForConfig(config) if err != nil { err = fmt.Errorf(&quot;error occured when build client set from config: %s&quot;, err.Error()) return &quot;&quot;, err } /* listOptions := meta.ListOptions{ FieldSelector: &quot;metadata.labels.applicationGroup=alpha&quot;, } */ pods, err := clientSet.CoreV1().Pods(&quot;default&quot;).List(context.TODO(), metav1.ListOptions{}) if err != nil { err = fmt.Errorf(&quot;error occured when get pods info from k8s: %s&quot;, err.Error()) return &quot;&quot;, err } podsInfo, err := json.Marshal(&amp;pods) if err != nil { err = fmt.Errorf(&quot;error occured when json marshal: %s&quot;, err.Error()) return &quot;&quot;, err } return string(podsInfo), nil } </code></pre> <p>I tried to FieldSelector, but FieldSelector has supported some fields.</p> <pre><code>listOptions := meta.ListOptions{ FieldSelector: &quot;metadata.labels.applicationGroup=alpha&quot;, } </code></pre> <p>And I got the below error.</p> <p><code>field label not supported: metadata.labels.applicationGroup</code></p> <p>You can check this <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">link</a> for the supported fields.</p> <p>To my question, how can I reach the metadata.labels.applicationGroup value with cilent_go, and which function should I use to reach pods working with this value?</p>
<p>Just use LabelSelector instead of FieldSelector</p> <pre><code>listOptions := meta.ListOptions{ LabelSelector: &quot;applicationGroup=alpha&quot;, } </code></pre> <p>UPD: If you want to use it for pods you should set applicationGroup on pod template</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: metadata-name labels: service: service-name spec: replicas: 1 selector: matchLabels: service: service-name template: metadata: labels: service: service-name applicationGroup: beta spec: containers: - name: nginx image: nginx:1.15.8 ports: - containerPort: 80 </code></pre>
<p>I have a pod/deployment, initially with one replica. <strong>The code in the pod contains a variable/counter of type int.</strong> Initially it equals one. When I scale the deployment, I want the second pod replica to set that variable/counter to two only in the second replica (while the value remains one in the first), and similarly when I scale to three replicas (replica1: counter=1, replica2: counter=2, replica3: counter=3), etc.</p> <p>Can you please suggest a simple way to achieve the above, if that is possible?</p>
<p>You can use a StatefulSet if that is an option for you, since it manages the replica sequence for you.</p> <p>You can expose the pod's name as an environment variable; as the StatefulSet scales up, each pod's name ends with its ordinal, which you can parse or slice to get your <strong>int</strong> counter:</p> <pre><code>env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: env0
  value: value
</code></pre> <p>Read more at: <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</a></p>
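<p>Since StatefulSet pod names end with the ordinal (<code>my-app-0</code>, <code>my-app-1</code>, ...), one rough way to derive the counter at container start is to slice it off the injected name; this is a sketch with a placeholder start command:</p> <pre><code>env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
command: [&quot;sh&quot;, &quot;-c&quot;]
args:
  - |
    ORDINAL=&quot;${MY_POD_NAME##*-}&quot;     # &quot;my-app-2&quot; -&gt; &quot;2&quot;
    export COUNTER=$((ORDINAL + 1))  # replica N gets counter N+1
    exec /app/start                  # placeholder for your real entrypoint
</code></pre> <p>Alternatively, the application itself can read the pod name from the environment and parse the trailing number.</p>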
<p>First time Kubernetes user here.</p> <p>I deployed a service using <code>kubectl -n my_namespace apply -f new_service.yaml</code></p> <p>It failed, with the pod showing <code>Warning - Back-off restarting failed container</code>.</p> <p>I am now trying to delete the failed objects and redeploy a fixed version. However, I have tried to delete the service, pod, deployment, and replicaset, but they all keep recreating themselves.</p> <p>I looked at <a href="https://stackoverflow.com/questions/63344878/kubernetes-deployments-replicasets-are-recreated-after-deletion">this thread</a> but don't believe it applies, since my deployment lists:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
</code></pre> <p>Any input appreciated!</p>
<p>Posting this as Community wiki for better visibility. Feel free to expand it.</p> <hr /> <p><strong>In a Kubernetes cluster</strong>:</p> <ul> <li>if you delete Pods, but they are recreated again<br /> <code>there is a Kubernetes Deployment / StatefulSet / DaemonSet / job that recreates them</code><br /> <strong>delete the Deployment / StatefulSet / DaemonSet to delete those pods, and check k8s jobs</strong></li> <li>if you delete a ReplicaSet, but it is recreated again<br /> <code>there is a Kubernetes Deployment that recreates it</code><br /> <strong>delete the Deployment to delete this replicaset</strong></li> <li>if you delete Deployments / Services, etc., but they are recreated again<br /> <code>there is a deployment tool like ArgoCD / FluxCD / another tool that recreates them</code><br /> <strong>configure ArgoCD / FluxCD / the other deployment tool to delete them</strong></li> <li>also check if Helm is used; run <code>helm list --all-namespaces</code> to list installed releases.</li> </ul> <p>Thanks to @P.... for comments.</p>
<p>I have a GKE cluster running with several persistent disks for storage. To set up a staging environment, I created a second cluster inside the same project. Now I want to use the data from the persistent disks of the production cluster in the staging cluster.</p> <p>I already created persistent disks for the staging cluster. What is the best approach to move over the production data to the disks of the staging cluster.</p>
<p>You can use the open source tool <a href="https://velero.io/" rel="nofollow noreferrer">Velero</a> which is designed to migrate Kubernetes cluster resources.</p> <p>Follow these steps to migrate a persistent disk within GKE clusters:</p> <ol> <li>Create a GCS bucket:</li> </ol> <pre><code>BUCKET=&lt;your_bucket_name&gt; gsutil mb gs://$BUCKET/ </code></pre> <ol start="2"> <li>Create a <a href="https://cloud.google.com/iam/docs/service-accounts" rel="nofollow noreferrer">Google Service Account</a> and store the associated email in a variable for later use:</li> </ol> <pre><code>GSA_NAME=&lt;your_service_account_name&gt; gcloud iam service-accounts create $GSA_NAME \ --display-name &quot;Velero service account&quot; SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \ --filter=&quot;displayName:Velero service account&quot; \ --format 'value(email)') </code></pre> <ol start="3"> <li>Create a custom role for the Service Account:</li> </ol> <pre><code>PROJECT_ID=&lt;your_project_id&gt; ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list ) gcloud iam roles create velero.server \ --project $PROJECT_ID \ --title &quot;Velero Server&quot; \ --permissions &quot;$(IFS=&quot;,&quot;; echo &quot;${ROLE_PERMISSIONS[*]}&quot;)&quot; gcloud projects add-iam-policy-binding $PROJECT_ID \ --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \ --role projects/$PROJECT_ID/roles/velero.server gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET} </code></pre> <ol start="4"> <li>Grant access to Velero:</li> </ol> <pre><code>gcloud iam service-accounts keys create credentials-velero \ --iam-account $SERVICE_ACCOUNT_EMAIL </code></pre> <ol start="5"> <li>Download and install Velero on the source cluster:</li> </ol> <pre><code>wget https://github.com/vmware-tanzu/velero/releases/download/v1.8.1/velero-v1.8.1-linux-amd64.tar.gz tar -xvzf velero-v1.8.1-linux-amd64.tar.gz sudo mv velero-v1.8.1-linux-amd64/velero /usr/local/bin/velero velero install \ --provider gcp \ --plugins velero/velero-plugin-for-gcp:v1.4.0 \ --bucket $BUCKET \ --secret-file ./credentials-velero </code></pre> <p>Note: The download and installation was performed on a Linux system, which is the OS used by Cloud Shell. 
If you are managing your GCP resources via Cloud SDK, the release and installation process could vary.</p> <ol start="6"> <li>Confirm that the velero pod is running:</li> </ol> <pre><code>$ kubectl get pods -n velero NAME READY STATUS RESTARTS AGE velero-xxxxxxxxxxx-xxxx 1/1 Running 0 11s </code></pre> <ol start="7"> <li>Create a backup for the PV,PVCs:</li> </ol> <pre><code>velero backup create &lt;your_backup_name&gt; --include-resources pvc,pv --selector app.kubernetes.io/&lt;your_label_name&gt;=&lt;your_label_value&gt; </code></pre> <ol start="8"> <li>Verify that your backup was successful with no errors or warnings:</li> </ol> <pre><code>$ velero backup describe &lt;your_backup_name&gt; --details Name: your_backup_name Namespace: velero Labels: velero.io/storage-location=default Annotations: velero.io/source-cluster-k8s-gitversion=v1.21.6-gke.1503 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=21 Phase: Completed Errors: 0 Warnings: 0 </code></pre> <hr /> <p>Now that the Persistent Volumes are backed up, you can proceed with the migration to the destination cluster following these steps:</p> <ol> <li>Authenticate in the destination cluster</li> </ol> <pre><code>gcloud container clusters get-credentials &lt;your_destination_cluster&gt; --zone &lt;your_zone&gt; --project &lt;your_project&gt; </code></pre> <ol start="2"> <li>Install Velero using the same parameters as step 5 on the first part:</li> </ol> <pre><code>velero install \ --provider gcp \ --plugins velero/velero-plugin-for-gcp:v1.4.0 \ --bucket $BUCKET \ --secret-file ./credentials-velero </code></pre> <ol start="3"> <li>Confirm that the velero pod is running:</li> </ol> <pre><code>kubectl get pods -n velero NAME READY STATUS RESTARTS AGE velero-xxxxxxxxxx-xxxxx 1/1 Running 0 19s </code></pre> <ol start="4"> <li>To avoid the backup data being overwritten, change the bucket to read-only mode:</li> </ol> <pre><code>kubectl patch backupstoragelocation default -n velero --type merge --patch '{&quot;spec&quot;:{&quot;accessMode&quot;:&quot;ReadOnly&quot;}}' </code></pre> <ol start="5"> <li>Confirm Velero is able to access the backup from bucket:</li> </ol> <pre><code>velero backup describe &lt;your_backup_name&gt; --details </code></pre> <ol start="6"> <li>Restore the backed up Volumes:</li> </ol> <pre><code>velero restore create --from-backup &lt;your_backup_name&gt; </code></pre> <ol start="7"> <li>Confirm that the persistent volumes have been restored on the destination cluster:</li> </ol> <pre><code>kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE redis-data-my-release-redis-master-0 Bound pvc-ae11172a-13fa-4ac4-95c5-d0a51349d914 8Gi RWO standard 79s redis-data-my-release-redis-replicas-0 Bound pvc-f2cc7e07-b234-415d-afb0-47dd7b9993e7 8Gi RWO standard 79s redis-data-my-release-redis-replicas-1 Bound pvc-ef9d116d-2b12-4168-be7f-e30b8d5ccc69 8Gi RWO standard 79s redis-data-my-release-redis-replicas-2 Bound pvc-65d7471a-7885-46b6-a377-0703e7b01484 8Gi RWO standard 79s </code></pre> <p>Check out this <a href="https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8" rel="nofollow noreferrer">tutorial</a> as a reference.</p>
<p>I know I can use <code>kubectl wait</code> to check if a pod is <code>Ready</code> but is there an easy way to check whether the pod is gone or in <code>Terminating</code> state? I'm running some tests and I only want to continue when the pod (or the namespace for that matter) is completely gone.</p> <p>Also a timeout option would come in handy.</p>
<p>It's actually part of the wait command.</p> <pre><code>kubectl wait --for=delete pod/busybox1 --timeout=60s
</code></pre> <p>You can check with <code>kubectl wait --help</code> to see this example and some more. For example</p> <blockquote> <p>--for='': The condition to wait on: [delete|condition=condition-name|jsonpath='{JSONPath expression}'=JSONPath Condition]. The default status value of condition-name is true, you can set false with condition=condition-name=false.</p> </blockquote>
<p>Hi, I am trying to add the built-in OpenShift (v4.8) Prometheus data source to a local Grafana server. I have configured basic auth with a username and password, and for now I have also enabled &quot;skip TLS verify&quot;. Still, I'm getting this error:</p> <p><a href="https://i.stack.imgur.com/xvKu8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xvKu8.png" alt="error" /></a></p> <p>Prometheus URL = <code>https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com</code></p> <p>this is the grafana log</p> <pre><code> logger=tsdb.prometheus t=2022-04-12T17:35:23.47+0530 lvl=eror msg=&quot;Instant query failed&quot; query=1+1 err=&quot;client_error: client error: 403&quot;
logger=context t=2022-04-12T17:35:23.47+0530 lvl=info msg=&quot;Request Completed&quot; method=POST path=/api/ds/query status=400 remote_addr=10.100.95.27 time_ms=36 size=65 referer=https://grafana.xxxx.xxxx.com/datasources/edit/6TjZwT87k
</code></pre>
<p>You cannot authenticate to the OpenShift prometheus instance using basic authentication. You need to authenticate using a bearer token, e.g. one obtained from <code>oc whoami -t</code>:</p> <pre><code>curl -H &quot;Authorization: Bearer $(oc whoami -t)&quot; -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/ </code></pre> <p>Or from a <code>ServiceAccount</code> with appropriate privileges:</p> <pre><code>secret=$(oc -n openshift-monitoring get sa prometheus-k8s -o jsonpath='{.secrets[1].name}') token=$(oc -n openshift-monitoring get secret $secret -o jsonpath='{.data.token}' | base64 -d) curl -H &quot;Authorization: Bearer $token&quot; -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/ </code></pre>
<p>I'm very new to k8s and the related stuff, so this may be a stupid question: How do I change the pod name?</p> <p>I am aware the pod name seems to be set in the helm file; in my values.yaml, I have this:</p> <pre><code>...
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application
        svcPort: 80
        path: /*
...
</code></pre> <p>Since the application is running in the prod and staging environments, and the pod name is just something like <code>application-695496ec7d-94ct9</code>, I can't tell which pod is for prod or staging, and can't tell whether a request came from prod or not. So I changed it to:</p> <pre><code>hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application-staging
        svcPort: 80
        path: /*
</code></pre> <p>I deployed it to staging, and the pod was updated/recreated automatically, but the pod name still remains the same. I was confused by that, and I don't know what is missing. I'm not sure if it is related to the <code>fullnameOverride</code>, but it's empty so it should be fine.</p>
<p><code>...the pod name still remains the same</code></p> <p>The code snippet in your question is likely the helm values for the Ingress. In this case it is not related to the Deployment or the Pod.</p> <p>Look into the helm template that defines the Deployment spec for the pod, search for the <code>name</code> and see which helm value is assigned to it:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox   # &lt;-- change this &amp; you will see the pod name change along with it. The helm syntax surrounding this field will tell you how the name is constructed/assigned
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: [&quot;ash&quot;,&quot;-c&quot;,&quot;sleep 3600&quot;]
</code></pre> <p>Save the spec and apply it, then check with <code>kubectl get pods --selector app=busybox</code>. You should see 1 pod with the name prefix <code>busybox</code>. Now if you open the file, change the name to <code>custom</code>, re-apply and get again, you will see 2 pods with <strong>different</strong> name prefixes. Clean up with <code>kubectl delete deployment busybox custom</code>.</p> <p>This example shows how the name of the Deployment is used for the pod(s) underneath. You can paste the helm template surrounding the name field into your question for further examination if you like.</p>
<p>I did <code>docker system prune</code> to delete unused images, but it deleted everything. Is there a way to undo this? Is there any way to fix this?</p>
<p>Hello, sorry for my answer but it's a nope...</p> <p>When you use the prune command you are prompted to confirm that you are sure; sadly, that prompt is the last warning before the drama :D</p> <p>Hopefully you still have the Dockerfiles to rebuild your own images; if they came from the internet, just pull them again from wherever you got them :D (try your browser history if you do not remember).</p> <p>Keep the smile bro ! :) <a href="https://docs.docker.com/engine/reference/commandline/system_prune/" rel="nofollow noreferrer">https://docs.docker.com/engine/reference/commandline/system_prune/</a></p>
<p>My understanding is that the <code>AGE</code> shown for a pod when using <code>kubectl get pod</code> shows the time that the pod has been running since the last restart. So, for the pod shown below, my understanding is that it initially restarted 14 times, but hasn't restarted in the last 17 hours. Is this correct, and where is a kubernetes reference that explains this?</p> <p><a href="https://i.stack.imgur.com/uW3sY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uW3sY.png" alt="enter image description here" /></a></p>
<p>Hope you're enjoying your Kubernetes journey !</p> <p>In fact, the AGE header when using kubectl get pod shows you how long ago your <strong>pod</strong> was created and has been running. But do not confuse the POD and the container:</p> <p>The header &quot;RESTARTS&quot; is actually linked to the field '.status.containerStatuses[0].restartCount' of the pod manifest. That means that this header is linked to the number of restarts, not of the pod, but of the container inside the pod.</p> <p>Here is an example: I just deployed a new pod:</p> <pre><code>NAME READY STATUS RESTARTS AGE test-bg-7d57d546f4-f4cql 2/2 Running 0 9m38s </code></pre> <p>If I check the yaml configuration of this pod, we can see that in the &quot;status&quot; section we have the said &quot;restartCount&quot; field:</p> <pre><code>❯ k get po test-bg-7d57d546f4-f4cql -o yaml apiVersion: v1 kind: Pod metadata: ... spec: ... status: ... containerStatuses: ... - containerID: docker://3f53f140f775416644ea598d554e9b8185e7dd005d6da1940d448b547d912798 ... name: test-bg ready: true restartCount: 0 ... </code></pre> <p>So, to demonstrate what I'm saying, I'm going to connect to my pod and kill the main process my pod is running:</p> <pre><code>❯ k exec -it test-bg-7d57d546f4-f4cql -- bash I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND 1000 1 0.0 0.0 5724 3256 ? Ss 03:20 0:00 bash -c source /tmp/entrypoint.bash 1000 22 1.5 0.1 2966140 114672 ? Sl 03:20 0:05 java -jar test-java-bg.jar 1000 41 3.3 0.0 5988 3592 pts/0 Ss 03:26 0:00 bash 1000 48 0.0 0.0 8588 3260 pts/0 R+ 03:26 0:00 ps aux I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ kill 22 I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ command terminated with exit code 137 </code></pre> <p>and after this, if I re-execute the &quot;kubectl get pod&quot; command, I get this:</p> <pre><code>NAME READY STATUS RESTARTS AGE test-bg-7d57d546f4-f4cql 2/2 Running 1 11m </code></pre> <p>Then, if I go back to my yaml config, we can see that the restartCount field is actually linked to my container and not to my pod.</p> <pre><code>❯ k get po test-bg-7d57d546f4-f4cql -o yaml apiVersion: v1 kind: Pod metadata: ... spec: ... status: ... containerStatuses: ... - containerID: docker://3f53f140f775416644ea598d554e9b8185e7dd005d6da1940d448b547d912798 ... name: test-bg ready: true restartCount: 1 ... </code></pre> <p>So, to conclude, the <strong>RESTARTS</strong> header is giving you the restartCount of the container, not of the pod, but the <strong>AGE</strong> header is giving you the age of the pod.</p> <p>This time, if I delete the pod:</p> <pre><code>❯ k delete pod test-bg-7d57d546f4-f4cql pod &quot;test-bg-7d57d546f4-f4cql&quot; deleted </code></pre> <p>we can see that the restartCount is back to 0 since it's a brand new pod with a brand new age:</p> <pre><code>NAME READY STATUS RESTARTS AGE test-bg-7d57d546f4-bnvxx 2/2 Running 0 23s test-bg-7d57d546f4-f4cql 2/2 Terminating 2 25m </code></pre> <p>For your example, it means that the <strong>container</strong> restarted 14 times, but the pod was deployed 17 hours ago.</p> <p>I can't find the exact documentation of this, but (as it is explained here: <a href="https://kubernetes.io/docs/concepts/workloads/_print/#working-with-pods" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/_print/#working-with-pods</a>): &quot;Note: Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an environment for running container(s).
A Pod persists until it is deleted.&quot;</p> <p>Hope this has helped you understand better. Here is a little tip from <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a>: <code>kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'</code> (to sort your pods by their restartCount :p)</p> <p>Bye</p>
<p>I got a K8S+DinD issue:</p> <ul> <li>launch Kubernetes cluster</li> <li>start a main docker image and a DinD image inside this cluster</li> <li>when running a job requesting GPU, got error <code>could not select device driver &quot;nvidia&quot; with capabilities: [[gpu]]</code></li> </ul> <p>Full error</p> <pre><code>http://localhost:2375/v1.40/containers/long-hash-string/start: Internal Server Error (&quot;could not select device driver &quot;nvidia&quot; with capabilities: [[gpu]]&quot;) </code></pre> <p>After <code>exec</code>-ing into the DinD container inside the K8S pod, <code>nvidia-smi</code> is not available.</p> <p>After some debugging, it seems it's due to the DinD image missing the NVIDIA docker toolkit. I had the same error when I ran the same job directly in my local laptop's docker, and I fixed it by installing <strong>nvidia-docker2</strong>: <code>sudo apt-get install -y nvidia-docker2</code>.</p> <p>I'm thinking maybe I can try to install nvidia-docker2 into the DinD 19.03 image (docker:19.03-dind), but I'm not sure how to do it? By a multi-stage docker build?</p> <p>Thank you very much!</p> <hr /> <p>update:</p> <p>pod spec:</p> <pre><code>spec: containers: - name: dind-daemon image: docker:19.03-dind </code></pre>
<p>I got it working myself.</p> <p>Referring to</p> <ul> <li><a href="https://github.com/NVIDIA/nvidia-docker/issues/375" rel="nofollow noreferrer">https://github.com/NVIDIA/nvidia-docker/issues/375</a></li> <li><a href="https://github.com/Henderake/dind-nvidia-docker" rel="nofollow noreferrer">https://github.com/Henderake/dind-nvidia-docker</a></li> </ul> <blockquote> <p>First, I modified the ubuntu-dind image (<a href="https://github.com/billyteves/ubuntu-dind" rel="nofollow noreferrer">https://github.com/billyteves/ubuntu-dind</a>) to install nvidia-docker (i.e. added the instructions in the nvidia-docker site to the Dockerfile) and changed it to be based on nvidia/cuda:9.2-runtime-ubuntu16.04.</p> </blockquote> <blockquote> <p>Then I created a pod with two containers, a frontend ubuntu container and the a privileged docker daemon container as a sidecar. The sidecar's image is the modified one I mentioned above.</p> </blockquote> <p>But since that post is from 3 years ago, I did spend quite some time matching up dependency versions, handling repo migrations over those 3 years, etc.</p> <p>My modified version of the Dockerfile to build it:</p> <pre><code>ARG CUDA_IMAGE=nvidia/cuda:11.0.3-runtime-ubuntu20.04 FROM ${CUDA_IMAGE} ARG DOCKER_CE_VERSION=5:18.09.1~3-0~ubuntu-xenial RUN apt-get update -q &amp;&amp; \ apt-get install -yq \ apt-transport-https \ ca-certificates \ curl \ gnupg-agent \ software-properties-common &amp;&amp; \ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - &amp;&amp; \ add-apt-repository \ &quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) \ stable&quot; &amp;&amp; \ apt-get update -q &amp;&amp; apt-get install -yq docker-ce docker-ce-cli containerd.io # https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies RUN set -eux; \ apt-get update -q &amp;&amp; \ apt-get install -yq \ btrfs-progs \ e2fsprogs \ iptables \ xfsprogs \ xz-utils \ # pigz: https://github.com/moby/moby/pull/35697 (faster gzip implementation) pigz \ # zfs \ wget # set up subuid/subgid so that &quot;--userns-remap=default&quot; works out-of-the-box RUN set -x \ &amp;&amp; addgroup --system dockremap \ &amp;&amp; adduser --system -ingroup dockremap dockremap \ &amp;&amp; echo 'dockremap:165536:65536' &gt;&gt; /etc/subuid \ &amp;&amp; echo 'dockremap:165536:65536' &gt;&gt; /etc/subgid # https://github.com/docker/docker/tree/master/hack/dind ENV DIND_COMMIT 37498f009d8bf25fbb6199e8ccd34bed84f2874b RUN set -eux; \ wget -O /usr/local/bin/dind &quot;https://raw.githubusercontent.com/docker/docker/${DIND_COMMIT}/hack/dind&quot;; \ chmod +x /usr/local/bin/dind ##### Install nvidia docker ##### # Add the package repositories RUN curl -fsSL https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add --no-tty - RUN distribution=$(. 
/etc/os-release;echo $ID$VERSION_ID) &amp;&amp; \ echo $distribution &amp;&amp; \ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \ tee /etc/apt/sources.list.d/nvidia-docker.list RUN apt-get update -qq --fix-missing RUN apt-get install -yq nvidia-docker2 RUN sed -i '2i \ \ \ \ &quot;default-runtime&quot;: &quot;nvidia&quot;,' /etc/docker/daemon.json RUN mkdir -p /usr/local/bin/ COPY dockerd-entrypoint.sh /usr/local/bin/ RUN chmod 777 /usr/local/bin/dockerd-entrypoint.sh RUN ln -s /usr/local/bin/dockerd-entrypoint.sh / VOLUME /var/lib/docker EXPOSE 2375 ENTRYPOINT [&quot;dockerd-entrypoint.sh&quot;] #ENTRYPOINT [&quot;/bin/sh&quot;, &quot;/shared/dockerd-entrypoint.sh&quot;] CMD [] </code></pre> <p>When I use <code>exec</code> to log in to the Docker-in-Docker container, I can successfully run <code>nvidia-smi</code> (which previously returned a &quot;not found&quot; error, so no GPU-related <code>docker run</code> would work).</p> <p>Feel free to pull my image at <code>brandsight/dind:nvidia-docker</code></p>
<p>I installed aws-load-balancer-controller on a new EKS cluster (version v1.21.5-eks-bc4871b).</p> <p>I installed it by following this guide <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/" rel="noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/</a> step by step, but when I try to deploy an ingress object I get the error I mentioned in the title. I tried following the GitHub issues on this, like here <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2039" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2039</a>, but didn't find any answer.</p> <p>What else can I do to check this?</p>
<p>In case it might help others - I also had the original issue while using a Fargate profile and a worker node for CoreDNS. The solution for me, which I found elsewhere, was just adding</p> <pre><code>node_security_group_additional_rules = { ingress_allow_access_from_control_plane = { type = &quot;ingress&quot; protocol = &quot;tcp&quot; from_port = 9443 to_port = 9443 source_cluster_security_group = true description = &quot;Allow access from control plane to webhook port of AWS load balancer controller&quot; } } </code></pre>
<p>The env</p> <blockquote> <p>Ansible 2.9.6 (python3)</p> </blockquote> <p>Tried to run a simple playbook</p> <pre><code>- hosts: master gather_facts: no become: yes tasks: - name: create name space k8s: name: testing api_version: v1 kind: Namespace state: present </code></pre> <p>Getting following error</p> <pre><code>The full traceback is: Traceback (most recent call last): File "/tmp/ansible_k8s_payload_u121g92v/ansible_k8s_payload.zip/ansible/module_utils/k8s/common.py", line 33, in &lt;module&gt; import kubernetes ModuleNotFoundError: No module named 'kubernetes' fatal: [192.168.20.38]: FAILED! =&gt; { "changed": false, "error": "No module named 'kubernetes'", "invocation": { "module_args": { "api_key": null, "api_version": "v1", "append_hash": false, "apply": false, "ca_cert": null, "client_cert": null, "client_key": null, "context": null, "force": false, "host": null, "kind": "Namespace", "kubeconfig": null, "merge_type": null, "name": "testing", "namespace": null, "password": null, "proxy": null, "resource_definition": null, "src": null, "state": "present", "username": null, "validate": null, "validate_certs": null, "wait": false, "wait_condition": null, "wait_sleep": 5, "wait_timeout": 120 } }, "msg": "Failed to import the required Python library (openshift) on k8smasternode's Python /usr/bin/python3. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter" } </code></pre> <p>It confuses me that, </p> <ul> <li>the root cause is <strong>"no module named kubernetes"</strong>?</li> <li>or <strong>"Failed to import the required Python library (openshift) on Python /usr/bin/python3"</strong>?</li> </ul> <p>And how to fix that? </p> <p>Any help would be appreciated!</p> <p>btw, </p> <blockquote> <p><strong>Kubernetes master node has /usr/bin/python3</strong></p> </blockquote>
<p>I am a bit late to the party but since I faced this today and don't see an accepted answer, I am posting what worked for me.</p> <p>Since you are running the tasks on remote servers, you must have <code>openshift</code>, <code>pyyaml</code> and <code>kubernetes</code> installed on the remote machines for this to work.</p> <p>Add below tasks prior to creating namespaces:</p> <pre><code>- name: install pre-requisites pip: name: - openshift - pyyaml - kubernetes </code></pre>
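<p>If you prefer to install the prerequisites manually on the target host instead (assuming <code>pip3</code> is available there), the equivalent shell command would be something like:</p> <pre><code>pip3 install openshift pyyaml kubernetes
</code></pre>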
<p>I'm trying to setup a gitlab kubernetes agent and runner for my in-cluster CICD pipeline. My gitlab.ci is something on the line of:</p> <pre><code>stages: - deploy deploy-new-images: stage: deploy image: name: alpine/helm:3.7.1 entrypoint: [&quot;&quot;] script: - helm list --all-namespaces tags: - staging - test </code></pre> <p>Gitlab is able to start the container for this particular job but fails with the following error:</p> <pre><code>Error: list: failed to list: secrets is forbidden: User &quot;system:serviceaccount:gitlab:default&quot; cannot list resource &quot;secrets&quot; in API group &quot;&quot; at the cluster scope </code></pre> <p>My agent deployment has a service account named <code>gitlab</code>. This service account is associated to the default admin ClusterRole via a ClusterRoleBinding; it is also associated with a Role via a RoleBinding resource. The role has the following definition :</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: &quot;2022-04-14T15:27:27Z&quot; labels: app: gitlab-agent name: gitlab-agent namespace: gitlab resourceVersion: &quot;44732&quot; uid: 0b09cabe-826f-4c7c-a864-3192e0c9ea42 rules: - apiGroups: - &quot;&quot; resources: - '*' verbs: - '*' </code></pre> <p>The gitlab runner on the other hand was deployed via Helm using the following values:</p> <pre><code> image: &quot;gitlab/gitlab-runner:alpine-v14.9.0&quot; imagePullPolicy: &quot;Always&quot; replicas: 5 gitlabUrl: https://gitlab.com runnerRegistrationToken: &quot;{{ vault_gitlab_runner_registration_token }}&quot; unregisterRunners: true logLevel: &quot;debug&quot; rbac: create: true rules: - resources: [&quot;*&quot;] verbs: [&quot;*&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;*&quot;] verbs: [&quot;*&quot;] clusterWideAccess: true metrics: enabled: false runners: executor: kubernetes locked: false tags: &quot;staging,test&quot; name: &quot;test-staging-gitlab-runner&quot; runUntagged: false config: | [[runners]] [runners.kubernetes] namespace = &quot;gitlab&quot; image = &quot;ubuntu:20.04&quot; </code></pre> <p>It seems to me that both the agent and the runner have sufficient permissions and yet the helm list command is failing. Any idea?</p>
<p><strong>In short:</strong> In your gitlab-runner values file, add the setting <code>service_account = &quot;gitlab&quot;</code> under <code>[runners.kubernetes]</code>.</p> <p><strong>Explanation:</strong> The actual pod that executes the job is not the gitlab-runner pod, nor the gitlab agent pod. Gitlab runner kubernetes executor spawns a new pod for each job. The place to set the service account name for these pods is in the above setting.</p> <p>The error message above shows that the pod running the job was using the default service account for the <code>gitlab</code> namespace. this service account doesn't (and shouldn't) have the required permissions.</p> <p>The section <code>runners.config</code> in the values file, ends up in the config.toml file that configures the runner. Here is the documentation about config.toml for kubernetes executor: <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#other-configtoml-settings" rel="nofollow noreferrer">https://docs.gitlab.com/runner/executors/kubernetes.html#other-configtoml-settings</a></p> <p>And here is the <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#kubernetes-executor-interaction-diagram" rel="nofollow noreferrer">Kubernetes executor interaction diagram</a></p>
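<p>For reference, with the values file from the question this would look roughly like the following (only the <code>runners.config</code> block changes):</p> <pre><code>runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = &quot;gitlab&quot;
        image = &quot;ubuntu:20.04&quot;
        service_account = &quot;gitlab&quot;
</code></pre>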
<p>I want to deploy keycloak with the below custom configuration, before starting it:</p> <ul> <li>new realm</li> <li>role</li> <li>client</li> <li>an admin user under the new realm</li> </ul> <p>I am using the below deployment file to create the keycloak pod</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: keycloak namespace: default labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak spec: containers: - name: keycloak image: quay.io/keycloak/keycloak:10.0.1 env: - name: KEYCLOAK_USER value: &quot;admin&quot; - name: KEYCLOAK_PASSWORD value: &quot;admin&quot; - name: REALM value: &quot;ntc&quot; - name: PROXY_ADDRESS_FORWARDING value: &quot;true&quot; volumeMounts: - mountPath: /opt/jboss/keycloak/startup/elements name: elements ports: - name: http containerPort: 8080 - name: https containerPort: 443 readinessProbe: httpGet: path: /auth/realms/master port: 8080 volumes: - name: elements configMap: name: keycloak-elements </code></pre> <p>and am using the below client.json and realm.json files to generate a configmap for keycloak.</p> <p><strong>client.json</strong></p> <pre><code>{ &quot;id&quot;: &quot;7ec4ccce-d6ed-461f-8e95-ea98e4912b8c&quot;, &quot;clientId&quot;: &quot;ntc-app&quot;, &quot;enabled&quot;: true, &quot;clientAuthenticatorType&quot;: &quot;client-secret&quot;, &quot;secret&quot;: &quot;0b360a88-df24-48fa-8e96-bf6577bbee95&quot;, &quot;directAccessGrantsEnabled&quot;: true } </code></pre> <p><strong>realm.json</strong></p> <pre><code>{ &quot;realm&quot;: &quot;ntc&quot;, &quot;id&quot;: &quot;ntc&quot;, &quot;enabled&quot;: &quot;true&quot;, &quot;revokeRefreshToken&quot; : true, &quot;accessTokenLifespan&quot; : 900, &quot;passwordPolicy&quot;: &quot;length(8) and digits(1) and specialChars(1)&quot;, &quot;roles&quot; : { &quot;realm&quot; : [ { &quot;id&quot;: &quot;c9253f52-1960-4c9d-af99-5facca0c0846&quot;, &quot;name&quot;: &quot;admin&quot;, &quot;description&quot; : &quot;admin role&quot;, &quot;scopeParamRequired&quot;: false, &quot;composite&quot;: false, &quot;clientRole&quot;: false, &quot;containerId&quot;: &quot;ntc&quot; }, { &quot;id&quot; : &quot;1e7ed0c8-9585-44b0-92f8-59e472573461&quot;, &quot;name&quot; : &quot;user&quot;, &quot;description&quot; : &quot;user role&quot;, &quot;scopeParamRequired&quot; : false, &quot;composite&quot; : false, &quot;clientRole&quot; : false, &quot;containerId&quot; : &quot;ntc&quot; } ] } } </code></pre> <p>Both files are saved under the <code>elements</code> folder and used in the below command to generate the config map:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create configmap keycloak-elements --from-file=elements </code></pre> <p>Still, I don't see any new realm/role or client created in the <a href="https://i.stack.imgur.com/NXf97.png" rel="nofollow noreferrer">KeyCloak</a> console.</p>
<p>When you are setting up Keycloak on kubernetes, you only need to import the new realm (realm.json) and the corresponding clients (client.json) during the first run. So a <strong>Job</strong> needs to be created instead of adding it to the deployment.</p> <p>Once the Job is run, the json will be imported to the Keycloak database and the job can be suspended. Adding it to the deployment will cause Keycloak to try and import the json files during each restart.</p> <p>Please follow the steps in this blog post: <a href="https://blog.knoldus.com/migrate-keycloak-h2-database-to-postgres-on-kubernetes/" rel="nofollow noreferrer">https://blog.knoldus.com/migrate-keycloak-h2-database-to-postgres-on-kubernetes/</a></p>
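<p>As a rough sketch of what such a one-off import Job could look like with the image from the question (the legacy JBoss-based image ships the admin CLI at <code>/opt/jboss/keycloak/bin/kcadm.sh</code>; the Service name <code>keycloak</code>, the admin credentials and the mount path are assumptions, so adjust them to your setup):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: keycloak-import
  namespace: default
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: import
          image: quay.io/keycloak/keycloak:10.0.1
          command:
            - /bin/bash
            - -c
            - |
              set -e
              KCADM=/opt/jboss/keycloak/bin/kcadm.sh
              # log in against the already running Keycloak instance
              $KCADM config credentials --server http://keycloak:8080/auth --realm master --user admin --password admin
              # import the realm and the client from the mounted ConfigMap
              $KCADM create realms -f /elements/realm.json
              $KCADM create clients -r ntc -f /elements/client.json
          volumeMounts:
            - name: elements
              mountPath: /elements
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
</code></pre> <p>Once the Job has completed successfully, the realm, roles and client should be visible in the console, and the ConfigMap mount can be dropped from the Deployment.</p>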
<p>I'm using zerolog in golang, which outputs JSON-formatted logs; the app is running on k8s, and the logs have the cri-o format shown below. <a href="https://i.stack.imgur.com/M1qMe.png" rel="nofollow noreferrer">actual log screenshot on Grafana loki</a></p> <p>My question is: since there's some non-JSON text prepended to my JSON log, I can't seem to query the log effectively. One example: when I tried to pipe the log into logfmt, exceptions were thrown.</p> <p>What I want is to be able to query into the sub-fields of the JSON. My intuition is to maybe, for each log line, only select the part from <code>{</code> (the start of the JSON) onwards; then maybe I can do more interesting manipulation. I'm a bit stuck and not sure what's the best way to proceed.</p> <p>Any help and comments are appreciated.</p>
<p>After some head scratching, the problem is solved.</p> <p>I'm directly using the promtail setup from here: <a href="https://raw.githubusercontent.com/grafana/loki/master/tools/promtail.sh" rel="nofollow noreferrer">https://raw.githubusercontent.com/grafana/loki/master/tools/promtail.sh</a></p> <p>Within this setup, the default parser is docker, but we need to change it to <code>cri</code>. Afterwards, the logs are properly parsed as JSON in my Grafana dashboard.</p>
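<p>For completeness, this is roughly what the relevant promtail pipeline looks like once switched over; the <code>json</code> stage is optional and the expression names depend on the field names zerolog emits (e.g. <code>level</code>, <code>message</code>), so treat those as assumptions:</p> <pre><code>pipeline_stages:
  - cri: {}            # strips the &quot;&lt;timestamp&gt; stdout F &quot; prefix of the CRI log format
  - json:
      expressions:
        level: level
        message: message
</code></pre>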
<p>I am having the following issue. I am new to GCP/Cloud. I have created a cluster in GKE, deployed our application there, and installed nginx as a POD in the cluster. Our company has an authorized SSL certificate which I have uploaded in Certificates in GCP.</p> <p>In the DNS Service, I have created an A record which matches the IP of the Ingress. When I call the URL in the browser, it still shows that the website is unsecure, with the message &quot;Kubernetes Ingress controller fake certificate&quot;.</p> <p>I used the following guide <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#console_1" rel="noreferrer">https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#console_1</a></p> <p>however I am not able to execute step 3 &quot;Associate an SSL certificate with a target proxy&quot;, because it asks for &quot;URL Maps&quot; and I am not able to find them in the GCP Console.</p> <p>Has anybody gone through the same issue? Any help would be great.</p> <p>Thanks and regards,</p>
<p>I was able to fix this problem by adding an extra argument to the ingress-nginx-controller deployment.</p> <p>For context: my TLS secret was at the default namespace and was named <code>letsencrypt-secret-prod</code>, so I wanted to add this as the default SSL certificate for the Nginx controller.</p> <p>My first solution was to edit the <code>deployment.yaml</code> of the Nginx controller and add at the end of the <code>containers[0].args</code> list the following line:</p> <pre class="lang-yaml prettyprint-override"><code>- '--default-ssl-certificate=default/letsencrypt-secret-prod' </code></pre> <p>Which made that section of the yaml look like this:</p> <pre class="lang-yaml prettyprint-override"><code> containers: - name: controller image: &gt;- k8s.gcr.io/ingress-nginx/controller:v1.2.0-beta.0@sha256:92115f5062568ebbcd450cd2cf9bffdef8df9fc61e7d5868ba8a7c9d773e0961 args: - /nginx-ingress-controller - '--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller' - '--election-id=ingress-controller-leader' - '--controller-class=k8s.io/ingress-nginx' - '--ingress-class=nginx' - '--configmap=$(POD_NAMESPACE)/ingress-nginx-controller' - '--validating-webhook=:8443' - '--validating-webhook-certificate=/usr/local/certificates/cert' - '--validating-webhook-key=/usr/local/certificates/key' - '--default-ssl-certificate=default/letsencrypt-secret-prod' </code></pre> <p>But I was using the helm chart: <code>ingress-nginx/ingress-nginx</code>, so I wanted this config to be in the <code>values.yaml</code> file of that chart so that I could upgrade it later if necessary.</p> <p>So reading the values file I replaced the attribute: <code>controller.extraArgs</code>, which looked like this:</p> <pre class="lang-yaml prettyprint-override"><code> extraArgs: {} </code></pre> <p>For this:</p> <pre class="lang-yaml prettyprint-override"><code> extraArgs: default-ssl-certificate: default/letsencrypt-secret-prod </code></pre> <p>This restarted the deployment with the argument in the correct place.</p> <p>Now I can use ingresses without specifying the <code>tls.secretName</code> for each of them, which is awesome.</p> <p>Here's an example ingress that is working for me with HTTPS:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: some-ingress-name annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTP&quot; spec: rules: - http: paths: - path: /some-prefix pathType: Prefix backend: service: name: some-service-name port: number: 80 </code></pre>
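<p>If you installed the controller with that Helm chart, the same change can also be applied without editing values.yaml by hand; the release name and namespace below are assumptions, so adjust them to your install:</p> <pre><code>helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.extraArgs.default-ssl-certificate=default/letsencrypt-secret-prod
</code></pre>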
<p>I am playing around with the Horizontal Pod Autoscaler in Kubernetes. I've set the HPA to start up new instances once the average CPU Utilization passes 35%. However this does not seem to work as expected. The HPA triggers a rescale even though the CPU Utilization is far below the defined target utilization. As seen below the &quot;current&quot; utilization is 10% which is far away from 35%. But still, it rescaled the number of pods from 5 to 6. <a href="https://i.stack.imgur.com/JnAA0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JnAA0.png" alt="enter image description here" /></a></p> <p>I've also checked the metrics in my Google Cloud Platform dashboard (the place at which we host the application). This also shows me that the requested CPU utilization hasn't surpassed the threshold of 35%. But still, several rescales occurred. <a href="https://i.stack.imgur.com/wWYMK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wWYMK.png" alt="enter image description here" /></a></p> <p>The content of my HPA</p> <pre><code>apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: django spec: {{ if eq .Values.env &quot;prod&quot; }} minReplicas: 5 maxReplicas: 35 {{ else if eq .Values.env &quot;staging&quot; }} minReplicas: 1 maxReplicas: 3 {{ end }} scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: django-app targetCPUUtilizationPercentage: 35 </code></pre> <p>Does anyone know what the cause of this might be?</p>
<p>Scaling is based on % of <code>requests</code>, not <code>limits</code>. I think this is worth spelling out, since the examples in the accepted answer show:</p> <pre><code> limits: cpu: 1000m </code></pre> <p>But the <code>targetCPUUtilizationPercentage</code> is based on <code>requests</code> like:</p> <pre><code>requests: cpu: 1000m </code></pre> <blockquote> <p>For per-pod resource metrics (like CPU), the controller fetches the metrics from the resource metrics API for each Pod targeted by the HorizontalPodAutoscaler. Then, if a target utilization value is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each Pod. If a target raw value is set, the raw metric values are used directly. The controller then takes the mean of the utilization or the raw value (depending on the type of target specified) across all targeted Pods, and produces a ratio used to scale the number of desired replicas.</p> </blockquote> <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-a-horizontalpodautoscaler-work" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-a-horizontalpodautoscaler-work</a></p>
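<p>To make the arithmetic concrete: with the numbers from the screenshots, a pod requesting 1000m CPU and using about 100m reports 10% utilization, regardless of its limit. A minimal sketch of the only part of the pod spec the utilization target looks at (values are illustrative):</p> <pre><code>resources:
  requests:
    cpu: 1000m   # targetCPUUtilizationPercentage is computed against this value:
                 # 100m used / 1000m requested = 10%
  limits:
    cpu: 2000m   # not used for the CPU utilization target
</code></pre>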
<p>I have a Deployment like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment spec: template: volumeMounts: - mountPath: /home name: john-webos-vol subPath: home - mountPath: /pkg name: john-vol readOnly: true subPath: school </code></pre> <p>I want to change the Deloyment with the <code>kubectl patch</code> command, so it has the following <code>volumeMounts</code> in the <code>PodTemplate</code> instead:</p> <p><strong>target.yaml:</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment spec: template: volumeMounts: - mountPath: /home name: john-webos-vol subPath: home </code></pre> <p>I used the below command, but it didn't work.</p> <pre class="lang-sh prettyprint-override"><code>kubectl patch deployment sample --patch &quot;$(cat target.yaml)&quot; </code></pre> <p>Can anyone give me some advice?</p>
<p>You can use JSON patch <a href="http://jsonpatch.com/" rel="nofollow noreferrer">http://jsonpatch.com/</a></p> <h4>Remove specific volume mount</h4> <pre><code>kubectl patch deployment &lt;NAME&gt; --type json -p='[{&quot;op&quot;: &quot;remove&quot;, &quot;path&quot;: &quot;/spec/template/spec/containers/0/volumeMounts/0&quot;}]' </code></pre> <h4>Replace volume mounts with what you need</h4> <pre><code>kubectl patch deployment &lt;NAME&gt; --type json -p='[{&quot;op&quot;: &quot;replace&quot;, &quot;path&quot;: &quot;/spec/template/spec/containers/0/volumeMounts&quot;, &quot;value&quot;: [{&quot;mountPath&quot;: &quot;/home&quot;, &quot;name&quot;: &quot;john-webos-vol&quot;, &quot;subPath&quot;: &quot;home&quot;}]}]' </code></pre> <p>Kubectl cheat sheet for more info: <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#patching-resources" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#patching-resources</a></p>
<p>I have deployed an Event Hub triggered Azure Function written in Java on AKS. The function should scale out using KEDA. The function is correctly triggerd and working but it's not scaling out when the load increases. I have added sleep calls to the function implementation to make sure it's not burning through the events too fast and should be forced to scale out but this did not show any change as well.</p> <p>kubectl get hpa shows the following output</p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE keda-hpa-eventlogger Deployment/eventlogger 64/64 (avg) 1 20 1 3m41s </code></pre> <p>This seems to be a first indicator that something is not correct as i assume the first number in the targets column is the number of unprocessed events in event hub. This stays the same no matter how many events i pump into the hub.</p> <p>The Function was deployed using the following Kubernetes Deployment Manifest</p> <pre><code>data: AzureWebJobsStorage: &lt;removed&gt; FUNCTIONS_WORKER_RUNTIME: amF2YQ== EventHubConnectionString: &lt;removed&gt; apiVersion: v1 kind: Secret metadata: name: eventlogger --- apiVersion: apps/v1 kind: Deployment metadata: name: eventlogger labels: app: eventlogger spec: selector: matchLabels: app: eventlogger template: metadata: labels: app: eventlogger spec: containers: - name: eventlogger image: &lt;removed&gt; env: - name: AzureFunctionsJobHost__functions__0 value: eventloggerHandler envFrom: - secretRef: name: eventlogger readinessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 240 httpGet: path: / port: 80 scheme: HTTP startupProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 240 httpGet: path: / port: 80 scheme: HTTP --- apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: eventlogger labels: app: eventlogger spec: scaleTargetRef: name: eventlogger pollingInterval: 5 cooldownPeriod: 5 minReplicaCount: 0 maxReplicaCount: 20 triggers: - type: azure-eventhub metadata: storageConnectionFromEnv: AzureWebJobsStorage connectionFromEnv: EventHubConnectionString --- </code></pre> <p>The Connection String of Event Hub contains the &quot;EntityPath=&quot; Section as described in the <a href="https://keda.sh/docs/2.6/scalers/azure-event-hub/" rel="nofollow noreferrer">KEDA Event Hub Scaler Documentation</a> and has Manage-Permissions on the Event Hub Namespace.</p> <p>The output of <code>kubectl describe ScaledObject</code> is</p> <pre><code>Name: eventlogger Namespace: default Labels: app=eventlogger scaledobject.keda.sh/name=eventlogger Annotations: &lt;none&gt; API Version: keda.sh/v1alpha1 Kind: ScaledObject Metadata: Creation Timestamp: 2022-04-17T10:30:36Z Finalizers: finalizer.keda.sh Generation: 1 Managed Fields: API Version: keda.sh/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: .: f:kubectl.kubernetes.io/last-applied-configuration: f:labels: .: f:app: f:spec: .: f:cooldownPeriod: f:maxReplicaCount: f:minReplicaCount: f:pollingInterval: f:scaleTargetRef: .: f:name: f:triggers: Manager: kubectl-client-side-apply Operation: Update Time: 2022-04-17T10:30:36Z API Version: keda.sh/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: v:&quot;finalizer.keda.sh&quot;: f:labels: f:scaledobject.keda.sh/name: f:status: .: f:conditions: f:externalMetricNames: f:lastActiveTime: f:originalReplicaCount: f:scaleTargetGVKR: .: f:group: f:kind: f:resource: f:version: f:scaleTargetKind: Manager: keda Operation: Update Time: 2022-04-17T10:30:37Z Resource Version: 
1775052 UID: 3b6a68c1-c3b9-4cdf-b5d5-41a9721ac661 Spec: Cooldown Period: 5 Max Replica Count: 20 Min Replica Count: 0 Polling Interval: 5 Scale Target Ref: Name: eventlogger Triggers: Metadata: Connection From Env: EventHubConnectionString Storage Connection From Env: AzureWebJobsStorage Type: azure-eventhub Status: Conditions: Message: ScaledObject is defined correctly and is ready for scaling Reason: ScaledObjectReady Status: False Type: Ready Message: Scaling is performed because triggers are active Reason: ScalerActive Status: True Type: Active Status: Unknown Type: Fallback External Metric Names: s0-azure-eventhub-$Default Last Active Time: 2022-04-17T10:30:47Z Original Replica Count: 1 Scale Target GVKR: Group: apps Kind: Deployment Resource: deployments Version: v1 Scale Target Kind: apps/v1.Deployment Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal KEDAScalersStarted 10s keda-operator Started scalers watch Normal ScaledObjectReady 10s keda-operator ScaledObject is ready for scaling </code></pre> <p>So i'm a bit stucked as i don't see any errors but it's still not behaving as expected.</p> <p><strong>Versions:</strong></p> <ul> <li>Kubernetes version: 1.21.9</li> <li>KEDA Version: 2.6.1 installed using kubectl apply -f <a href="https://github.com/kedacore/keda/releases/download/v2.6.1/keda-2.6.1.yaml" rel="nofollow noreferrer">https://github.com/kedacore/keda/releases/download/v2.6.1/keda-2.6.1.yaml</a></li> <li>Azure Functions using Java 11 and extensionBundle in host.json is configured using version [2.8.4, 3.0.0)</li> </ul>
<p>I was able to find a solution to the problem.</p> <p>Event Hub triggered Azure Functions deployed on AKS show the same scaling characteristics as Azure Functions on App Service:</p> <p>You only get one consumer per partition, to allow for ordering per partition. This characteristic overrules maxReplicaCount in the Kubernetes deployment manifest, so you never get more active replicas than there are partitions.</p> <p>So to solve my own issue: by increasing the partitions of the Event Hub I get a pod per partition, and KEDA scales the workload as expected.</p>
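<p>For reference, on the standard tier the partition count generally has to be chosen when the Event Hub is created, so increasing it may mean recreating the hub. A hedged sketch with the Azure CLI (resource group, namespace and hub names are placeholders):</p> <pre><code>az eventhubs eventhub create \
  --resource-group my-rg \
  --namespace-name my-eventhub-namespace \
  --name my-hub \
  --partition-count 20
</code></pre>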
<p>On an EKS cluster, I want to transition from the existing NodeGroup (say NG1) to a fresh NG2 (with spot instances). NG1 will remain as fall-back.</p> <p>Do I really need to play with Node Affinity in my deployments and make them &quot;prefer&quot; NG2, and then rollout-restart?</p> <p>Or is it enough to set the desired size for NG1 to a very low value, say just one node per AZ, thereby &quot;nudging&quot; the workload to migrate to NG2?</p>
<p>Node affinity sounds like the better choice to me. Simply shrinking NG1 forces pods off those nodes without telling the scheduler where to put them, whereas a preferred affinity for NG2 steers new pods onto the spot nodes while still allowing them to fall back to NG1.</p>
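<p>A minimal sketch of what that could look like in the pod template, assuming managed node groups (which carry the <code>eks.amazonaws.com/nodegroup</code> label) and a node group actually named <code>NG2</code> (adjust the label and value to whatever your groups use):</p> <pre><code>spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: eks.amazonaws.com/nodegroup
                operator: In
                values:
                  - NG2
</code></pre> <p>After a rollout restart, new pods prefer NG2 but can still be scheduled on NG1 when no spot capacity is available.</p>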
<p>I have 2 kubernetes clusters</p> <p>cluster 2 - exposed nginx load balancer with regional IP <code>IP_PRIVATE</code></p> <pre><code>service 1 - website `/website` service 2 - blog `blog` service 3 - api `api` </code></pre> <p>cluster 1 - exposed gce load balancer with global IP <code>IP_PUBLIC</code></p> <p>I want to implement this behaviour</p> <pre><code>IP_PUBLIC/api -&gt; service 2 response IP_PUBLIC/* -&gt; cdn -&gt; service 1 response </code></pre> <p>To implement this I have created a CDN service layer in cluster 1 as defined here <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#expandable-1" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#expandable-1</a></p> <p>Now my domain will be mapped to IP_PUBLIC, hence any request will hit cluster 1 first; it should then pass the request to cluster 2 and get the response.</p> <p><strong>Note</strong>: I have to create 2 clusters because</p> <ol> <li>I don't want to change anything in the existing cluster</li> <li>I can't have a global IP with nginx ingress</li> <li>I can't have 2 load balancers (both gce and nginx) in one cluster</li> </ol> <p>I want to pass the exact request (domain, headers etc) from cluster 1 to cluster 2 as if the request were directly hitting cluster 2.</p> <p>What is the correct way to do this? If there is any alternative solution that achieves the above with minimal changes, please suggest it.</p>
<p>I am not sure which ingress controller you are running in the background; however, you can change its configuration as follows.</p> <p>You can enable passing the headers on <strong>Nginx</strong> using this setting, so it will forward all the custom headers and domain details:</p> <pre><code>enable-underscores-in-headers: true </code></pre> <p>together with</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header My-Custom-Header $http_my_custom_header; </code></pre> <p>But I am not sure about one part, when you say:</p> <blockquote> <p>Now my domain will be mapped to IP_PUBLIC, hence any request will hit cluster 1 first; it should then pass the request to cluster 2 and get the response.</p> </blockquote> <p>Once your request hits the domain, the traffic will flow roughly like</p> <pre><code>Request &gt; domain &gt; cluster 1 &gt; cluster 1's service &gt; IP or Domain of cluster 2 &gt; cluster 2's ingress controller &gt; cluster 2's service </code></pre> <p>In this case you need to pass the headers with the request and enable the ingress config on cluster2's controller so it forwards the details to the backend service running on <strong>cluster2</strong>.</p> <p>Also, I am not sure how your <strong>egress</strong> traffic moves out of the cluster: via a <strong>NAT gateway</strong> or directly from the node. Considering you have a public cluster, your service will be directly calling cluster2's domain; in that case you have to add the headers to the request.</p> <p>Just set the proper configuration on the ingress and your backend services will get the headers with the domain details.</p>
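<p>For reference, <code>enable-underscores-in-headers</code> is an option of the NGINX ingress controller's ConfigMap rather than an Ingress annotation, so it would typically be set like this (name and namespace assume a standard ingress-nginx install):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-underscores-in-headers: &quot;true&quot;
</code></pre>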
<p>Today I found that on the host (kubernetes v1.21.3) the folder <code>io.containerd.snapshotter.v1.overlayfs</code> takes up too much space:</p> <pre><code>[root@k8smasterone kubernetes.io~nfs]# pwd /var/lib/kubelet/pods/8aafe99f-53c1-4bec-8cb8-abd09af1448f/volumes/kubernetes.io~nfs [root@k8smasterone kubernetes.io~nfs]# duc ls -Fg /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/ 13.5G snapshots/ [++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++] 2.2M metadata.db [ </code></pre> <p>It takes 13.5GB of disk space. Is it possible to shrink this folder?</p>
<p>The directory <code>/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs</code> is where the various container and image layers are persisted by containerd. These layers are downloaded based on the containers running on the node. If you start running out of space, the kubelet has the ability to garbage collect unused images, which will reduce the size of this directory. You can also configure the size of the boot disk for the node pools if needed.</p> <p>It is expected that this directory grows from the time a node is created. However, when the node disk usage is above 85%, garbage collection will attempt to identify images that can be removed. It may not be able to remove images, though, if they are currently in use by an existing container running on the node or they have been recently pulled.</p> <p>If you want to remove unused container images with just containerd, you can use the below command:</p> <p><strong>$ <code>crictl rmi --prune</code></strong></p> <p>You can also use the <strong><code>$ docker image prune</code></strong> command, which allows you to clean up unused images. By default, docker image prune only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by any container.</p> <p>To remove all images which are not used by existing containers, use the -a flag:</p> <p><strong><code>$ docker image prune -a</code></strong></p>
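<p>If you want garbage collection to kick in earlier than the default 85%, the kubelet thresholds can be tuned. A sketch of the relevant KubeletConfiguration fields (how you pass this file depends on how your nodes are provisioned, so treat the values as illustrative):</p> <pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 75   # start image GC once disk usage exceeds this
imageGCLowThresholdPercent: 60    # keep deleting images until usage drops below this
</code></pre>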
<p>I have a local k3d cluster installed on a Mac (Big Sur 12) on which I am attempting to install Calico (their default manifest <a href="https://k3d.io/v5.3.0/usage/advanced/calico/" rel="nofollow noreferrer">https://k3d.io/v5.3.0/usage/advanced/calico/</a>). In the logs for the calico-kube-controller pod, I get this:</p> <p>Warning FailedMount 44m kubelet MountVolume.SetUp failed for volume &quot;kube-api-access-vfdd9&quot; : write /var/lib/kubelet/pods/faa7d654-6424-4774-bc40-71de88c1d337/volumes/kubernetes.io~projected/kube-api-access-vfdd9/..2022_03_09_20_46_13.100604692/token: no space left on device</p> <p>There is clearly plenty of space:</p> <p>/var/lib/kubelet/pods/faa7d654-6424-4774-bc40-71de88c1d337/volumes/kubernetes.io~projected # df -h .</p> <p>Filesystem Size Used Avail Use% Mounted on</p> <p>/dev/vda1 79G 7.7G 67G 11% /var/lib/kubelet</p> <p><strong>Google searching has yielded nothing effective.</strong></p> <p>Kubernetes version: K8s Rev: v1.22.4+k3s1</p> <p>Docker version:</p> <p>Server: Docker Desktop 4.5.0 (74594)</p> <p>Engine: Version: 20.10.12</p> <p>API version: 1.41 (minimum version 1.12)</p> <p>Go version: go1.16.12</p> <p>Git commit: 459d0df</p> <p>Built: Mon Dec 13 11:43:56 2021</p> <p>OS/Arch: linux/amd64</p> <p>Experimental: false</p> <p>containerd: Version: 1.4.12</p> <p>GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d</p> <p>runc: Version: 1.0.2</p> <p>GitCommit: v1.0.2-0-g52b36a2</p> <p>docker-init: Version: 0.19.0</p> <p>GitCommit: de40ad0</p> <p>Any tips/docs/analysis would be much appreciated!</p>
<p>Double-check your container resources configuration for errors and try increasing the values. Note that memory quantities should normally use the <code>Mi</code>/<code>Gi</code> suffixes (<code>500m</code> would mean 500 milli-bytes, which is almost certainly not what you want):</p> <pre><code> resources: limits: cpu: '1' memory: 500Mi requests: cpu: 50m memory: 100Mi </code></pre> <p>Mine started working when I removed the resources section completely. (Docker Desktop 4.3.2)</p>
<p>I am wondering if it is possible to configure the “public access source allowlist” from CDK. I can see and manage this in the console under the networking tab, but can’t find anything in <a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks.Cluster.html" rel="nofollow noreferrer">the CDK docs</a> about setting the allowlist during deploy. I tried creating and assigning a security group (code sample below), but this didn't work. Also the security group was created as an &quot;additional&quot; security group, rather than the &quot;cluster&quot; security group.</p> <pre class="lang-ts prettyprint-override"><code>declare const vpc: ec2.Vpc; declare const adminRole: iam.Role; const securityGroup = new ec2.SecurityGroup(this, 'my-security-group', { vpc, allowAllOutbound: true, description: 'Created in CDK', securityGroupName: 'cluster-security-group' }); securityGroup.addIngressRule( ec2.Peer.ipv4('&lt;vpn CIDR block&gt;'), ec2.Port.tcp(8888), 'allow frontend access from the VPN' ); const cluster = new eks.Cluster(this, 'my-cluster', { vpc, clusterName: 'cluster-cdk', version: eks.KubernetesVersion.V1_21, mastersRole: adminRole, defaultCapacity: 0, securityGroup }); </code></pre> <p><strong>Update:</strong> I attempted the following, and it updated the <em>cluster</em> security group, but I'm still able to access the frontend when I'm not on the VPN:</p> <pre class="lang-ts prettyprint-override"><code>cluster.connections.allowFrom( ec2.Peer.ipv4('&lt;vpn CIDER block&gt;'), ec2.Port.tcp(8888) ); </code></pre> <p><strong>Update 2:</strong> I tried this as well, and I can still access my application's frontend even when I'm not on the VPN. However I can now only use <code>kubectl</code> when I'm on the VPN, which is good! It's a step forward that I've at least improved the cluster's security in a useful manner.</p> <pre class="lang-ts prettyprint-override"><code>const cluster = new eks.Cluster(this, 'my-cluster', { vpc, clusterName: 'cluster-cdk', version: eks.KubernetesVersion.V1_21, mastersRole: adminRole, defaultCapacity: 0, endpointAccess: eks.EndpointAccess.PUBLIC_AND_PRIVATE.onlyFrom('&lt;vpn CIDER block&gt;') }); </code></pre>
<p>In general EKS has two relevant security groups:</p> <ol> <li><p>The one used by nodes, which AWS calls &quot;cluster security group&quot;. It's set up automatically by EKS. You shouldn't need to mess with it unless you want (a) more restrictive rules than the defaults or (b) to open your nodes to maintenance tasks (e.g. ssh access). This is what you are accessing via <code>cluster.connections</code>.</p> </li> <li><p>The Ingress Load Balancer security group. This is an Application Load Balancer created and managed by EKS. In CDK, it can be created like so:</p> </li> </ol> <pre class="lang-js prettyprint-override"><code>const cluster = new eks.Cluster(this, 'HelloEKS', { version: eks.KubernetesVersion.V1_22, albController: { version: eks.AlbControllerVersion.V2_4_1, }, }); </code></pre> <p>This will serve as a gateway for all internal services that need an Ingress. You can access it via the <code>cluster.albController</code> property and add rules to it like a regular Application Load Balancer. I have no idea how EKS deals with task communication when an Ingress ALB is not present.</p> <p>Relevant docs:</p> <ul> <li><a href="https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html" rel="nofollow noreferrer">Amazon EKS security group considerations</a></li> <li><a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks-readme.html#alb-controller" rel="nofollow noreferrer">Alb Controller on CDK docs</a></li> <li><a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks.Cluster.html#albcontroller-1" rel="nofollow noreferrer">The ALB property for EKS Cluster objects</a></li> </ul>
<p>I run this Job:</p> <pre><code>kubectl apply -f - &lt;&lt;EOF apiVersion: batch/v1 kind: Job metadata: name: sample spec: template: spec: containers: - command: - /bin/bash - -c - | env echo &quot;MY_VAR : ${MY_VAR}&quot; sleep 800000 env: - name: MY_VAR value: MY_VALUE image: mcr.microsoft.com/azure-cli:2.0.80 imagePullPolicy: IfNotPresent name: sample restartPolicy: Never backoffLimit: 4 EOF </code></pre> <p>But when I look at the log the value <code>MY_VALUE</code> its empty even though <code>env</code> prints it:</p> <pre><code>$ kubectl logs -f sample-7p6bp ... MY_VAR=MY_VALUE ... MY_VAR : </code></pre> <p>Why does this line contain an empty value for <code>${MY_VAR}</code>:</p> <pre><code>echo &quot;MY_VAR : ${MY_VAR}&quot; </code></pre> <p>?</p> <p><strong>UPDATE:</strong> Tried the same with a simple pod:</p> <pre><code>kubectl -f - &lt;&lt;EOF apiVersion: v1 kind: Pod metadata: name: sample spec: containers: - name: sample imagePullPolicy: Always command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;echo BEGIN ${MY_VAR} END&quot;] image: radial/busyboxplus:curl env: - name: MY_VAR value: MY_VALUE EOF </code></pre> <p>Same/empty result:</p> <pre><code>$ kubectl logs -f sample BEGIN END </code></pre>
<p>The reason this happens is because your shell expands the variable <code>${MY_VAR}</code> before it's ever sent to the kubernetes. You can disable parameter expansion inside of a heredoc by quoting the terminator:</p> <pre><code>kubectl apply -f - &lt;&lt;'EOF' </code></pre> <p>Adding these quotes should resolve your issue.</p>
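<p>A quick way to see the difference locally, without a cluster:</p> <pre><code>MY_VAR=outer
cat &lt;&lt;EOF       # unquoted terminator: the local shell expands ${MY_VAR} first
${MY_VAR}
EOF
cat &lt;&lt;'EOF'     # quoted terminator: ${MY_VAR} is passed through literally
${MY_VAR}
EOF
</code></pre> <p>The first heredoc prints <code>outer</code>, the second prints the literal <code>${MY_VAR}</code>, which is exactly what needs to reach the API server so that the shell inside the container can expand it at runtime using the container's own environment.</p>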
<p>How do I use crictl to get the dangling images? The server does not have docker installed right now. Is it possible to use crictl to get the dangling images? I have tried using this command:</p> <pre><code>crictl images </code></pre> <p>but could not work out which images can be removed.</p>
<p>It is not possible to get the dangling images using crictl. The safest and easiest way to clean up dangling images is by using <a href="https://stackoverflow.com/questions/45142528/what-is-a-dangling-image-and-what-is-an-unused-image">docker</a>.</p> <p>You can use the <code>$ docker image prune</code> command, which allows you to clean up unused images. By default, docker image prune only cleans up dangling images.</p> <p>Try listing your images with <code>crictl images</code> and if you want to remove all unused images run the below command:</p> <pre><code>crictl rmi --prune </code></pre> <p>You need a rather current crictl for that. From the help:</p> <pre><code>$ crictl rmi --help NAME: crictl rmi - Remove one or more images USAGE: crictl rmi [command options] IMAGE-ID [IMAGE-ID...] OPTIONS: --all, -a Remove all images (default: false) --prune, -q Remove all unused images (default: false) --help, -h show help (default: false) </code></pre> <p>Refer to this <a href="https://stackoverflow.com/questions/69981852/how-to-use-local-docker-images-in-kubernetes-deployments-not-minikube">Stack Overflow post</a> for more information.</p>
<p><strong>Is there a way to extend the kustomize image transformer to recognise more keys as image specifiers?</strong> Like the <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#name-reference-transformer" rel="nofollow noreferrer"><code>nameReference</code> transformer</a> does for the <code>namePrefix</code> and <code>nameSuffix</code> transformers.</p> <p><a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md" rel="nofollow noreferrer">The Kustomize <code>images:</code> transformer</a> is very useful for image replacement and registry renaming in k8s manifests.</p> <p>But it <a href="https://github.com/kubernetes-sigs/kustomize/issues/686" rel="nofollow noreferrer">only supports types that embed <code>PodTemplate</code></a> and maybe some hardcoded types. CRDs that don't use <code>PodTemplate</code> are not handled despite them being <em>very</em> common. Examples include the <code>kube-prometheus</code> <code>Prometheus</code> and <code>AlertManager</code> resources and the <code>opentelemetry-operator</code> <code>OpenTelemetryCollector</code> resource.</p> <p>As a result you land up having to maintain a bunch of messy strategic merge or json patches to prefix such images with a trusted registry or the like.</p> <hr /> <p>Here's an example of the problem as things stand. Say I have to deploy everything prefixed with <code>mytrusted.registry</code> with an <code>images:</code> transformer list. For the sake of brevity here I'll use a dummy one that replaces all matched images with <code>MATCHED</code>, so I don't have to list them all:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - &quot;https://github.com/prometheus-operator/kube-prometheus&quot; images: - name: &quot;(.*)&quot; newName: &quot;MATCHED&quot; newTag: &quot;fake&quot; </code></pre> <p>You'd expect the only images in the result to be &quot;MATCHED:fake&quot;, but in reality:</p> <pre><code>$ kustomize build | grep 'image: .*' | sort | uniq -c 12 image: MATCHED:fake 1 image: quay.io/prometheus/alertmanager:v0.24.0 1 image: quay.io/prometheus/prometheus:v2.34.0 </code></pre> <p>the images in the <code>kind: Prometheus</code> and <code>kind: AlertManager</code> resources don't get matched because they are not a <code>PodTemplate</code>.</p> <p>You have to write a custom patch for these, which creates mess like this <code>kustomization.yaml</code> content:</p> <pre><code>patches: - path: prometheus_image.yaml target: kind: Prometheus - path: alertmanager_image.yaml target: kind: Alertmanager </code></pre> <p>with <code>prometheus_image.yaml</code>:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: Prometheus metadata: name: ignored spec: image: &quot;MATCHED:fake&quot; </code></pre> <p>and <code>alertmanager_image.yaml</code>:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: Alertmanager metadata: name: ignored spec: image: &quot;MATCHED:fake&quot; </code></pre> <p>which is IMO ghastly.</p> <p>What I <em>want</em> to be able to do is tell <code>Kustomize</code>'s image transformer about it, like it can be extended with custom configmap generators, etc, like the following <em>unsupported and imaginary pseudocode</em> modeled on the existing <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#name-reference-transformer" rel="nofollow noreferrer"><code>nameReference</code> transformer</a></p> 
<pre><code>imageReference: - kind: Prometheus fieldSpecs: - spec/image </code></pre>
<p>Just after writing this up I finally stumbled on the answer: Kustomize does support <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/images/README.md" rel="noreferrer">image transformer configs</a>.</p> <p>The correct way to express the above would be a <code>image_transformer_config.yaml</code> file containing:</p> <pre><code>images: - path: spec/image kind: Prometheus - path: spec/image kind: Alertmanager </code></pre> <p>and a <code>kustomization.yaml</code> entry referencing it, like</p> <pre><code>configurations: - image_transformer_config.yaml </code></pre> <p>This appears to work fine when imported as a <code>Component</code> too.</p> <p>It's even <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#images-transformer" rel="noreferrer">pointed out by the transformer docs</a> so I'm going to blame this one on being blind.</p>
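<p>With that configuration in place, re-running the check from the question should report only the replaced image for all 14 containers:</p> <pre><code>kustomize build | grep 'image: .*' | sort | uniq -c
</code></pre>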
<p>I have setup MinIO with Kubernetes (K3s) in one node.</p> <p>Command <code>kubectl logs mypod-0 -n minio</code> returns the following:</p> <ul> <li><p>API: <a href="http://10.42.0.14:9000" rel="nofollow noreferrer">http://10.42.0.14:9000</a> <a href="http://127.0.0.1:9000" rel="nofollow noreferrer">http://127.0.0.1:9000</a></p> </li> <li><p>Console: <a href="http://10.42.0.14:41989" rel="nofollow noreferrer">http://10.42.0.14:41989</a> <a href="http://127.0.0.1:41989" rel="nofollow noreferrer">http://127.0.0.1:41989</a></p> </li> </ul> <p>I have access to the console from the second link and by using python I can list the buckets inside minio with:</p> <pre><code>import logging from minio import Minio from minio.error import S3Error # execute from IDE terminal minio = Minio( '10.42.0.14:9000', access_key='chesAccesskeyMinio', secret_key='chesSecretkey', secure=False, ) def list_all_buckets(): bucket_list = minio.list_buckets() for bucket in bucket_list: objects = minio.list_objects(bucket.name, recursive=True) print (bucket.name) if __name__ == '__main__': try: list_all_buckets() except S3Error as exc: print(&quot;error occurred.&quot;, exc) logging.critical(&quot;Object storage not reachable&quot;) </code></pre> <p>My question is how to expose this IP to be accesible from outside my network (both he console and the API). Do I have to use ingress?</p> <p><strong>UPDATED based on answer and comments</strong></p> <p>I have two services</p> <pre><code>apiVersion: v1 kind: Service metadata: name: ches namespace: minio labels: app: ches spec: clusterIP: None selector: app: ches ports: - port: 9011 name: ches --- apiVersion: v1 kind: Service metadata: name: ches-service namespace: minio labels: app: ches spec: type: LoadBalancer selector: app: ches ports: - port: 9012 targetPort: 9011 protocol: TCP </code></pre> <p>and then I created an Ingress using</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minio namespace: minio spec: rules: - host: s3.example.com http: paths: - backend: service: name: ches-service port: number: 9000 path: / pathType: Prefix </code></pre> <p>Command <code>kubectl describe Ingress minio -n minio</code> results in:</p> <pre><code>Name: minio Namespace: minio Address: 192.168.1.14 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- s3.example.com / ches-service:9000 (10.42.0.14:9011) Annotations: &lt;none&gt; Events: &lt;none&gt; </code></pre> <p>However I cannot access <code>s3.example.com</code>.</p> <p>Am I missing something here?</p>
<p>Yes, an Ingress would allow you to expose MinIO to clients outside your cluster's SDN.</p>
<p>You probably have a Service object already (if not, you need one: an Ingress points to a Service, which resolves to Pods).</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio
spec:
  rules:
  - host: s3.example.com
    http:
      paths:
      - backend:
          service:
            name: minio-api
            port:
              number: 9000
        path: /
        pathType: Prefix
</code></pre>
<p>If you don't have, or don't want to use, an ingress controller, you could use a NodePort Service instead:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: minio-nodeport
spec:
  ports:
  - name: http
    port: 9000
    targetPort: 9000
  selector:
    name: my-minio-pod
  type: NodePort
</code></pre>
<p>A NodePort Service is allocated a unique port (within a range that can vary depending on your cluster configuration). Connecting to that port on any node of your cluster redirects you to a Pod matching the Service's selector.</p>
<pre><code>$&gt; kubectl get svc my-nodeport-svc   NodePort   10.233.7.160   &lt;none&gt;   9000:32133/TCP
</code></pre>
<p>Looking at my Service after its creation, I can see that port 9000 (inside the SDN) is reachable by connecting to port 32133 (from outside the SDN). You can run <code>kubectl get nodes -o wide</code> to get a list of your nodes' IP addresses, any of which will forward connections on that port to your Pods.</p>
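<p>For a quick smoke test from outside the cluster, something along these lines should work. The node IP and NodePort below are taken from the example output above and will differ in your cluster, and <code>/minio/health/live</code> is MinIO's unauthenticated liveness endpoint:</p>
<pre><code># Replace 192.168.1.14 / 32133 with one of your node IPs and the port
# allocated to your NodePort Service (kubectl get nodes -o wide; kubectl get svc).
curl -i http://192.168.1.14:32133/minio/health/live

# Through the Ingress instead, assuming s3.example.com resolves to your ingress controller:
curl -i http://s3.example.com/minio/health/live
</code></pre>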
<p>I have a ConfigMap that looks like this:</p>
<pre><code>apiVersion: v1
data:
  nginx.conf: &quot;events {worker_connections 1024;
    .
    .
    .
    }&quot;
kind: ConfigMap
metadata:
  name: nginx-cfg
  namespace: nginx
</code></pre>
<p>I want to add a new line of text to the beginning of nginx.conf:</p>
<pre><code>apiVersion: v1
data:
  nginx.conf: &quot;&lt;some line of new text here&gt;
    events {worker_connections 1024;
    .
    .
    .
    }&quot;
kind: ConfigMap
metadata:
  name: nginx-cfg
  namespace: nginx
</code></pre>
<p>I use this patch command to make the change:</p>
<pre><code>kubectl patch configmap/nginx-cfg \
  -n nginx \
  --type merge \
  -p '{&quot;data&quot;:{&quot;nginx.conf&quot;:{&quot;load_module /usr/lib/nginx/modules/ngx_http_vhost_traffic_status_module.so&quot;}}}'
</code></pre>
<p>but got an error: <strong>Error from server: Invalid JSON Patch</strong>. What should I do to fix the error?</p>
<p>Thank you for your help!</p>
<p>The error occurs because the extra braces turn the value of <code>nginx.conf</code> into a nested object instead of a string, which is not valid patch JSON. If you remove the braces, changing:</p>
<p><code>-p '{&quot;data&quot;:{&quot;nginx.conf&quot;:{&quot;load_module /usr/lib/nginx/modules/ngx_http_vhost_traffic_status_module.so&quot;}}}'</code></p>
<p>to:</p>
<p><code>-p '{&quot;data&quot;:{&quot;nginx.conf&quot;:&quot;load_module /usr/lib/nginx/modules/ngx_http_vhost_traffic_status_module.so&quot;}}'</code></p>
<p>then the patch will succeed, but it would overwrite the config file contents:</p>
<pre><code>$ kubectl patch configmap/nginx-cfg --type merge -p '{&quot;data&quot;:{&quot;nginx.conf&quot;:&quot;load_module /usr/lib/nginx/modules/ngx_http_vhost_traffic_status_module.so&quot;}}' -o yaml
apiVersion: v1
data:
  nginx.conf: load_module /usr/lib/nginx/modules/ngx_http_vhost_traffic_status_module.so
kind: ConfigMap
metadata:
  name: nginx-cfg
</code></pre>
<p>If you want to patch the contents of that field, then you have a few options:</p>
<p>a) Dump the ConfigMap, modify it locally using <code>sed</code> or similar tools, then <code>kubectl replace</code> it back. Note that the <code>&quot;\ \ \ \ &quot;</code> is intentional, to pad the inserted line to the right indentation within the YAML:</p>
<pre><code>kubectl get configmap nginx-cfg -o yaml &gt;nginx-cfg.yaml
sed -i '/nginx\.conf.*/a \ \ \ \ load_module /usr/lib/nginx/modules/ngx_http_vhost_traffic_status_module.so' nginx-cfg.yaml   #Or whatever edits you need to make
kubectl replace -f nginx-cfg.yaml
</code></pre>
<p>b) Extract the config block itself using <code>kubectl get configmap nginx-cfg -o jsonpath=&quot;{.data.nginx\.conf}&quot;</code>, dump it to disk, modify the block, then <code>kubectl create configmap</code> from that file and replace:</p>
<pre><code>kubectl get configmap nginx-cfg -o jsonpath=&quot;{.data.nginx\.conf}&quot; &gt;nginx.conf
sed -i '1 i\load_module /usr/lib/nginx/modules/ngx_http_vhost_traffic_status_module.so' nginx.conf   #Or whatever edits you need to make
kubectl create configmap --dry-run=client --from-file=nginx.conf nginx-cfg -o yaml | kubectl replace -f -
</code></pre>
<p>c) Use some pipes to do the edit in-line (only useful for a single-line change; otherwise it's worth doing one of the above instead). This gets the YAML, edits it in-line, then feeds it back into kubectl to replace the existing ConfigMap:</p>
<pre><code>kubectl get configmap nginx-cfg -o yaml | \
  sed '/nginx\.conf.*/a \ \ \ \ load_module /usr/lib/nginx/modules/ngx_http_vhost_traffic_status_module.so' | \
  kubectl replace -f -
</code></pre>
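<p>Whichever option you pick, a quick verification step is worth adding. A sketch, assuming the ConfigMap lives in the <code>nginx</code> namespace as in the question (in that case also add <code>-n nginx</code> to the commands above):</p>
<pre><code># Print the first few lines of the stored nginx.conf to confirm the
# load_module directive landed at the top of the file.
kubectl get configmap nginx-cfg -n nginx -o jsonpath=&quot;{.data.nginx\.conf}&quot; | head -n 3
</code></pre>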
<p>I have a few external CRDs with an old <code>apiVersion</code> applied in the cluster, and operators based on those CRDs deployed.</p>
<p>As stated in the <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/" rel="nofollow noreferrer">official docs</a> about Kubernetes API and feature removals in 1.22:</p>
<blockquote>
<p>You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version. If you defined any custom resources in your cluster, those are still served after you upgrade.</p>
</blockquote>
<p>Based on the quote, does this mean I can leave those <code>apiextensions.k8s.io/v1beta1</code> CRDs in the cluster? Will controllers/operators continue to work normally?</p>
<p>The custom resources will still be served after you upgrade.</p>
<p>Suppose we define a resource called <code>mykind</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mykinds.grp.example.com
spec:
  group: grp.example.com
  names:
    kind: Mykind
    plural: mykinds
    singular: mykind
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true
    storage: true
</code></pre>
<p>Then, on any cluster where this has been applied, I can always define a <code>mykind</code> resource:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: grp.example.com/v1beta1
kind: Mykind
metadata:
  name: mykind-instance
</code></pre>
<p>And this resource will still be served normally after the upgrade, even though the CRD for <code>mykind</code> was created under <code>apiextensions.k8s.io/v1beta1</code>.</p>
<p>However, anything in the controller / operator code that references the <code>v1beta1</code> CRD API won't work. This could be applying the CRD itself (if your controller has permissions to do that), for example. That's something to watch out for if your operator is managed by the <a href="https://github.com/operator-framework/operator-lifecycle-manager" rel="nofollow noreferrer">Operator Lifecycle Manager</a>. Watching for changes in the CRs themselves is unaffected by the upgrade.</p>
<p>So if your controller / operator isn't touching <code>CustomResourceDefinitions</code>, then technically you can leave these CRDs on the cluster and your operator will work as normal. But you won't be able to uninstall and reinstall it should you need to.</p>
<p>Another thing to explore is whether, and how, this might affect your ability to bump API versions later.</p>
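<p>To see where a given CRD stands before and after the upgrade, you can inspect which versions it serves and which version objects are stored under. A rough sketch using the hypothetical <code>mykinds</code> CRD from above; substitute your own CRD names:</p>
<pre><code># Served versions and the stored version(s) of the CRD
kubectl get crd mykinds.grp.example.com -o jsonpath='{.spec.versions[*].name}{&quot;\n&quot;}{.status.storedVersions}{&quot;\n&quot;}'

# On a 1.22 cluster the definition is read back through the v1 API,
# regardless of the apiVersion it was originally applied with
kubectl get crd mykinds.grp.example.com -o yaml | head -n 1
</code></pre>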
<p>I want Ingress to redirect a specific subdomain to one backend and all others to another backend. Basically, I want to define a rule something like the following:</p>
<blockquote>
<p>If the subdomain is <code>foo.bar.com</code> then go to <code>s1</code>; for all other subdomains go to <code>s2</code></p>
</blockquote>
<p>When I define the rules as shown below in the Ingress spec, I get this exception at deployment:</p>
<pre><code>Error: UPGRADE FAILED: cannot re-use a name that is still in use
</code></pre>
<p>When I change <code>*.bar.com</code> to <code>demo.bar.com</code> it works, however.</p>
<p>Here's my Ingress resource spec:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: *.bar.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
</code></pre>
<p>Does anyone have an idea if this is possible or not?</p>
<p>This is now possible in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards" rel="noreferrer">Kubernetes</a> with hostname wildcards. Note that the wildcard host must be quoted to stay valid YAML, and it only matches a single DNS label (<code>a.bar.com</code>, but not <code>a.b.bar.com</code>):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-redirect: &quot;false&quot;
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/server-alias: www.foo.bar
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
  name: foo-bar-ingress
  namespace: test
spec:
  rules:
  - host: 'foo.bar.com'
    http:
      paths:
      - backend:
          serviceName: specific-service
          servicePort: 8080
        path: /(.*)
        pathType: ImplementationSpecific
  - host: '*.bar.com'
    http:
      paths:
      - backend:
          serviceName: general-service
          servicePort: 80
        path: /(.*)
        pathType: ImplementationSpecific
</code></pre>
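<p>Since the <code>extensions/v1beta1</code> Ingress API was removed in Kubernetes 1.22, on newer clusters the same idea would be expressed with <code>networking.k8s.io/v1</code>. A sketch using the same hypothetical service names as above; the ingress class and any nginx annotations would need adjusting to your controller:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-bar-ingress
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - host: 'foo.bar.com'
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: specific-service
            port:
              number: 8080
  - host: '*.bar.com'
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: general-service
            port:
              number: 80
</code></pre>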
<p>We need to disable the automount of service account tokens for our existing deployments in an AKS cluster. There are two ways to do this: add the property <code>automountServiceAccountToken: false</code> either in the service account manifest or in the pod template.</p>
<p>We are using separate service accounts specified in our application deployments; however, when we looked in the namespace, a default service account has also been created.</p>
<p>So, in order to secure our cluster, do we need to disable the automount property for both the default and the application-specific service accounts?</p>
<p>Since our app is already live, will there be any impact from adding this to the service accounts?</p>
<p>How can we find out which service accounts a pod and its dependencies use?</p>
<p>Disable auto-mount of default service account:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
</code></pre>
<p><a href="https://gist.github.com/pjbgf/0a8c8a1459e5a2eb20e9d0852ba8c4be" rel="nofollow noreferrer">https://gist.github.com/pjbgf/0a8c8a1459e5a2eb20e9d0852ba8c4be</a></p>
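<p>The question also asks about application-specific accounts and about finding out what a pod actually uses. A sketch, with placeholder names (<code>my-app</code>, <code>app-service-account</code>, the image, and the <code>&lt;pod-name&gt;</code>/<code>&lt;namespace&gt;</code> values are all assumptions): setting <code>automountServiceAccountToken: false</code> on the pod template takes precedence over whatever the referenced ServiceAccount specifies, so live workloads can opt out individually.</p>
<pre><code># Hypothetical Deployment -- names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: app-service-account   # the application-specific account
      automountServiceAccountToken: false       # pod-level setting overrides the ServiceAccount
      containers:
      - name: app
        image: example.org/app:latest
</code></pre>
<p>To check which service account a running pod uses and whether a token volume is mounted:</p>
<pre><code># Which service account does the pod reference?
kubectl get pod &lt;pod-name&gt; -n &lt;namespace&gt; -o jsonpath='{.spec.serviceAccountName}{&quot;\n&quot;}'

# List the pod's volumes; a mounted token typically appears as a
# kube-api-access-* (newer clusters) or *-token-* (older clusters) volume
kubectl get pod &lt;pod-name&gt; -n &lt;namespace&gt; -o jsonpath='{range .spec.volumes[*]}{.name}{&quot;\n&quot;}{end}'
</code></pre>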