<p>How can I have a login page inside my ingress (nginx)? I know I can use basic authentication or OAuth, but I want a login page with just one user, and I don't want it to behave like basic authentication. I want it to have a dedicated page.</p>
Ali Rezvani
<p>As per the official <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/#custom-nginx-template" rel="nofollow noreferrer">NGINX Ingress Controller</a> documentation, you can serve a custom nginx page for OAuth or basic authentication with the nginx ingress controller. For this you have to mount the custom template as a <strong>volume</strong>; if you switch to a new template, the backing <strong>configmap</strong> also needs to be updated.</p> <p>Using a volume, you can add your custom template to the nginx deployment like this:</p> <pre><code>volumeMounts:
  - mountPath: /etc/nginx/template
    name: nginx-template-volume
    readOnly: true
volumes:
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: custom-nginx.tmpl
          path: custom-nginx.tmpl
</code></pre> <p>For more detailed information on how to use custom templates, refer to these documents: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/#custom-nginx-template" rel="nofollow noreferrer">DOC1</a>, <a href="https://kubernetes.github.io/ingress-nginx/examples/auth/basic/" rel="nofollow noreferrer">DOC2</a>.</p> <p>Try this <a href="https://www.youtube.com/watch?v=AXZr2OC8Unc" rel="nofollow noreferrer">tutorial</a> for more details (refer to the custom templates section).</p>
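<p>Note that the <code>nginx-template</code> ConfigMap referenced above has to exist in the controller's namespace. A minimal way to create it from a local template file (the <code>ingress-nginx</code> namespace is an assumption, adjust it to wherever your controller runs):</p> <pre><code># create the ConfigMap that holds the custom template
kubectl create configmap nginx-template \
  --from-file=custom-nginx.tmpl \
  -n ingress-nginx
</code></pre>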
Dharani Dhar Golladasari
<p>I have one node pool named &quot;<strong>application pool</strong>&quot; with node VM size <strong>Standard_D2a_v4</strong>. This node pool is set to &quot;<strong>Autoscaling</strong>&quot;. Is there a solution where I can taint the whole node pool in Azure, to restrict which pods get scheduled on that node pool?</p>
Kaivalya Dambalkar
<p>You can use the command below to add a taint to an existing node pool in an <strong>AKS</strong> cluster:</p> <pre><code>az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster --name taintnp --node-taints sku=gpu:NoSchedule
</code></pre>
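<p>Once the pool is tainted, only pods that tolerate the taint can be scheduled onto it. A minimal sketch of such a pod spec (the <code>sku=gpu</code> taint and pool name <code>taintnp</code> match the command above; the image and the <code>agentpool</code> selector are illustrative assumptions):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tainted-pool-workload
spec:
  tolerations:
    - key: sku
      operator: Equal
      value: gpu
      effect: NoSchedule
  # optional: also pin the pod to that pool, not just allow it there
  nodeSelector:
    agentpool: taintnp
  containers:
    - name: app
      image: nginx
</code></pre>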
vinto007
<p>I am trying to attach a clusterrole and a role to a service account in kubernetes so I can limit certain actions to a namespace. When I do this it seems to apply only the cluster role permissions.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: meta.helm.sh/release-name: par meta.helm.sh/release-namespace: par creationTimestamp: &quot;2023-04-10T01:41:20Z&quot; labels: app.kubernetes.io/instance: par app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: chart app.kubernetes.io/version: 0.1.0 helm.sh/chart: chart-0.1.0 name: par-chart-manager-role resourceVersion: &quot;180751&quot; uid: 201f10e1-81c1-4c15-bd0c-4819caf37b64 rules: - apiGroups: - &quot;&quot; resources: - pods - services verbs: - list - watch - apiGroups: - apps resources: - deployments verbs: - create - get - list - patch - update - watch - apiGroups: - dns.par.dev resources: - arecords verbs: - create - delete - get - list - patch - update - watch - apiGroups: - dns.par.dev resources: - arecords/finalizers verbs: - update - apiGroups: - dns.par.dev resources: - arecords/status verbs: - get - patch - update kind: ClusterRoleBinding metadata: annotations: meta.helm.sh/release-name: par meta.helm.sh/release-namespace: par creationTimestamp: &quot;2023-04-10T01:41:20Z&quot; labels: app.kubernetes.io/component: rbac app.kubernetes.io/created-by: par app.kubernetes.io/instance: par app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: chart app.kubernetes.io/part-of: par app.kubernetes.io/version: 0.1.0 helm.sh/chart: chart-0.1.0 name: par-chart-manager-rolebinding resourceVersion: &quot;180754&quot; uid: 99d47f21-a2c9-445b-b3f3-0bb7c8bca4fe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: par-chart-manager-role subjects: - kind: ServiceAccount name: par-chart-controller-manager namespace: par apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: meta.helm.sh/release-name: par meta.helm.sh/release-namespace: par creationTimestamp: &quot;2023-04-10T01:41:20Z&quot; labels: app.kubernetes.io/instance: par app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: chart app.kubernetes.io/version: 0.1.0 helm.sh/chart: chart-0.1.0 name: par-chart-manager-role namespace: par resourceVersion: &quot;180757&quot; uid: b51cbbae-20f7-4927-b7bb-1e654f2f4d53 rules: - apiGroups: - &quot;&quot; resources: - configmaps - secrets verbs: - get - list - patch - update - watch apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: meta.helm.sh/release-name: par meta.helm.sh/release-namespace: par creationTimestamp: &quot;2023-04-10T01:41:20Z&quot; labels: app.kubernetes.io/component: rbac app.kubernetes.io/created-by: par app.kubernetes.io/instance: par app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: chart app.kubernetes.io/part-of: par app.kubernetes.io/version: 0.1.0 helm.sh/chart: chart-0.1.0 name: par-chart-manager-rolebinding namespace: par resourceVersion: &quot;180758&quot; uid: 151148e3-7191-4ef2-bccc-97bd26fd724b roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: par-chart-manager-role subjects: - kind: ServiceAccount name: par-chart-controller-manager namespace: par </code></pre> <p>Permissions of Service account.</p> <pre><code>$ kubectl auth can-i --list --as=system:serviceaccount:par:par-chart-controller-manager Resources Non-Resource URLs Resource Names Verbs arecords.dns.par.dev [] [] [create delete get list patch update watch] deployments.apps [] 
[] [create get list patch update watch] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.k8s.io [] [] [create] arecords.dns.par.dev/status [] [] [get patch update] [/.well-known/openid-configuration] [] [get] [/api/*] [] [get] [/api] [] [get] [/apis/*] [] [get] [/apis] [] [get] [/healthz] [] [get] [/healthz] [] [get] [/livez] [] [get] [/livez] [] [get] [/openapi/*] [] [get] [/openapi] [] [get] [/openid/v1/jwks] [] [get] [/readyz] [] [get] [/readyz] [] [get] [/version/] [] [get] [/version/] [] [get] [/version] [] [get] [/version] [] [get] pods [] [] [list watch] services [] [] [list watch] arecords.dns.par.dev/finalizers [] [] [update] </code></pre>
jmcgrath207
<p>By default, Kubernetes evaluates the clusterrolebindings when the <code>-n</code> flag is omitted from the <code>kubectl auth can-i</code> command. So pass the namespace flag <code>-n</code> to also check the <strong>rolebindings</strong> for the <strong>serviceaccount</strong> in that particular namespace.</p> <p>In your case the command looks like this:</p> <pre><code>$ kubectl auth can-i --list --as=system:serviceaccount:par:par-chart-controller-manager -n par
</code></pre> <p>You can check which rolebindings and clusterrolebindings are assigned to the service account with:</p> <pre><code>$ kubectl get rolebindings,clusterrolebindings -o wide -n &lt;namespace&gt; | grep &lt;serviceaccount name&gt;
</code></pre>
Dharani Dhar Golladasari
<p>I'm creating a LoadBalancer service in an EKS cluster using Terraform. The service gets created and the NLB is created too, but the targets in the target groups are empty except for one target group. I have a total of 6 instances in the cluster.</p> <p>I'm using the code below to create the LoadBalancer service from Terraform:</p> <pre><code>resource &quot;kubernetes_service&quot; &quot;ml&quot; {
  count = (var.enabled_environments[var.namespace] == true &amp;&amp; var.namespace != &quot;prod&quot; &amp;&amp; var.namespace != &quot;demo&quot; ? 1 : 0)

  metadata {
    namespace = var.namespace
    name      = &quot;${var.namespace}-xyz-ml-service&quot;
    labels = {
      &quot;app.kubernetes.io/component&quot; = &quot;${var.namespace}-xyz-ml&quot;
    }
    annotations = {
      &quot;service.beta.kubernetes.io/aws-load-balancer-type&quot; = &quot;nlb&quot;
      &quot;service.beta.kubernetes.io/aws-load-balancer-nlb-target-type&quot; = &quot;instance&quot;
      &quot;service.beta.kubernetes.io/aws-load-balancer-internal&quot; = &quot;true&quot;
    }
  }

  spec {
    type = &quot;LoadBalancer&quot;
    port {
      name        = &quot;abc-0&quot;
      port        = 8110
      target_port = 8110
    }
    port {
      name        = &quot;abc-1&quot;
      port        = 8111
      target_port = 8111
    }
    port {
      name        = &quot;abc-2&quot;
      port        = 8112
      target_port = 8112
    }
    port {
      name        = &quot;abc-3&quot;
      port        = 8113
      target_port = 8113
    }
    selector = {
      app = &quot;xyz-ml&quot;
    }
  }
}
</code></pre> <p>Can you let me know what I am missing here?</p> <p>I tried following these steps: <a href="https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html</a></p>
vishal mehra
<p>Here are a few things to check:</p> <ol> <li><p>Check your pod's selector. Based on TF code, your service will forward requests to pods with the label &quot;app&quot;=&quot;xyz-ml&quot;. Double-check to ensure that your pods have this label.</p> </li> <li><p>Are your pods running? To be targeted by a service, pods must be running and healthy. Check this using the <code>kubectl get pods -n &lt;namespace&gt;</code></p> </li> <li><p>Ensure that your pods are listening on these ports: 8110, 8111, 8112, and 8113.</p> </li> <li><p>If your nodes are not in the correct subnet for the load balancer, they will not be registered as targets.</p> </li> <li><p>Ensure your pods and services are in the same namespace.</p> </li> <li><p>Verify that your Network ACLs and Security Groups are not blocking in/out traffic.</p> </li> </ol>
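<p>A quick way to verify the first, second, and fifth points is to check whether the Service has actually resolved to any pod IPs; an empty <code>ENDPOINTS</code> column means the selector/labels or the namespace don't match. The service and namespace names below are placeholders for the ones in your Terraform code:</p> <pre><code># does the service resolve to any pod IPs?
kubectl get endpoints &lt;namespace&gt;-xyz-ml-service -n &lt;namespace&gt;

# compare the service selector with the pod labels
kubectl describe svc &lt;namespace&gt;-xyz-ml-service -n &lt;namespace&gt;
kubectl get pods -n &lt;namespace&gt; --show-labels
</code></pre>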
nickdoesstuff
<p>When I create a dashboard in Grafana and export it in JSON, the role, user and group permissions I define are not saved with it.</p> <p>I am looking for a way to assign permissions for each dashboard in a Grafana deployment with Helm, in which I already include the dashboards to use.</p> <p>Does anyone know if this is possible? I can't find a way to do it, can it only be done from web or from API?</p> <p>Thanks.</p>
agallende
<p>Yes, you can assign permissions to dashboards in Grafana using Helm, as well as through the Grafana web UI or API.</p> <p>To assign permissions using Helm, you can define a custom Grafana dashboard provisioning configuration file in your Helm chart's values.yaml or in a separate YAML file, and specify the appropriate permissions for each dashboard using the datasources, dashboards, and users sections. Here's an example:</p> <h1>values.yaml or custom configuration file</h1> <pre><code>grafana: provisioning: datasources: - name: &lt;datasource_name&gt; type: &lt;datasource_type&gt; access: proxy &lt;datasource-specific_configurations&gt; # e.g., url, basicAuth, etc. dashboards: - name: &lt;dashboard_name&gt; uid: &lt;dashboard_uid&gt; # unique identifier for the dashboard url: &lt;dashboard_url&gt; # URL of the JSON file for the dashboard permissions: role: &lt;role_name&gt; # role to assign the dashboard to user: &lt;user_name&gt; # user to assign the dashboard to team: &lt;team_name&gt; # team to assign the dashboard to users: - username: &lt;user_name&gt; role: &lt;role_name&gt; </code></pre> <p>In this example, you can specify the datasource configuration, dashboard configuration (including permissions), and user configuration using Helm values. Once you apply the Helm chart, Grafana will provision the dashboards with the specified permissions.</p> <p>Note: Make sure to use the appropriate values for &lt;datasource_name&gt;, &lt;datasource_type&gt;, &lt;dashboard_name&gt;, &lt;dashboard_uid&gt;, &lt;dashboard_url&gt;, &lt;role_name&gt;, &lt;user_name&gt;, and &lt;team_name&gt; in your configuration.</p> <p>Alternatively, you can also assign permissions to dashboards using the Grafana web UI or API. In the web UI, you can go to the dashboard settings, navigate to the &quot;Permissions&quot; tab, and specify the roles, users, or teams that should have access to the dashboard. You can also use the Grafana API to create, update, or delete dashboards with specific permissions using the appropriate API endpoints and payload.</p> <p>Please note that in order to assign permissions to dashboards, you need to have appropriate permissions and roles configured in Grafana. Also, make sure to follow Grafana's documentation and best practices for securing your deployment and managing permissions effectively.</p>
Harshika Govind
<p>I have set up Kube Prometheus Stack with Thanos on my Kubernetes cluster, and I'm using the Thanos Receiver instead of the sidecar approach. I have also configured the Thanos Compactor and Minio for offloading data. Here are some key details of my setup:</p> <ul> <li>Prometheus retention is set to 2 hours.</li> <li>Thanos Receiver retention is set to 6 hours.</li> <li>Using Longhorn PV as the persistence volume for the Thanos Receiver.</li> <li>Data is successfully offloaded to Minio storage.</li> </ul> <p>My issue is that, even though I have configured the retention settings for both Prometheus and the Thanos Receiver, the <strong>Receiver doesn't seem to automatically delete old data from the Longhorn PV</strong> when the retention period is exceeded. As a result, my PV is filling up with old data, and I need a way to ensure that old data is deleted as per the configured retention.</p> <p><a href="https://i.stack.imgur.com/7VVJI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7VVJI.png" alt="enter image description here" /></a></p> <p>I'm wondering if there's a specific configuration or step that I might be missing to enable automatic data deletion from the PV when the retention period is exceeded.</p>
dasunNimantha
<p>I think this is an issue with Thanos itself.</p> <p>There was a <a href="https://github.com/thanos-io/thanos/issues/4420" rel="nofollow noreferrer">GitHub issue</a> for the same problem, but it was closed by the stale bot.</p> <p>Try to reach out to the Thanos team by creating a new issue in their GitHub repo.</p>
Saifeddine Rajhi
<p>Proponents say the performance cost is negligible and that the trade-off comes with benefits such as scalability, uptime, etc. What I'm trying to get at is: in which real-world scenarios would container orchestration be justifiable, and what would be the best way to go about it? What are the most common issues with such high traffic (with an example tech stack, preferably)? I read recently that Prime Video had cut costs by 90% by changing part of the architecture to a monolith style. It's hard to imagine, because I've containerized parts of apps that don't receive anywhere near that number of requests between services - so this is more of an in-practice question. Thanks.</p> <p>This is not really a problem; I would like to ask people with practical experience for advice.</p>
M33ps
<p>Everything is a trade-off in the application world. That said, the choice of system design - monolith or microservices - is a very project-specific decision.</p> <p>You are correct about the Prime Video part, where they saved tons of $ by switching back to a good old monolith architecture. That said, is it that easy? Not really. Imagine the level of effort that went into designing a monolith that fulfills the majority of the requirements regarding scale and availability needed by the team.</p> <p>Answering your question, I think the use of container orchestration is justifiable in two major scenarios: first, your application fits well with what container orchestration has to offer; second, and most important, you or your team are capable of understanding and managing complex builds and orchestration-based tools such as Kubernetes. I call this out explicitly because in both cases I have seen major blunders happen due to a lack of proper understanding.</p> <p>Lastly, here is a list of overheads/challenges that come with containerization:</p> <ol> <li>The abstraction between host and container, which brings additional overhead in managing them.</li> <li>As pointed out by Burak, containers have their own networking interface; managing that is a bit more overhead than running the app on the host itself.</li> <li>Docker-app isolation, again through the abstracted way of operating processes discussed in point 1.</li> <li>Distributed logging: assuming container pods/docker instances come on and off as per load requirements, logging/log archiving is a challenging task.</li> </ol> <p>Hope this helps.</p>
lib
<p>I am preparing a <code>dev</code> environment and want to create a single host that is both the master and a worker node for Kubernetes.</p> <p>How can I achieve my goal?</p>
Govinda Chaulagain
<blockquote> <p>The <em><strong>master node</strong></em> is responsible for running several Kubernetes processes that are absolutely necessary to run and manage the cluster properly. <a href="https://www.educative.io/edpresso/what-is-kubernetes-cluster-what-are-worker-and-master-nodes" rel="nofollow noreferrer">[1]</a></p> <p>The <em><strong>worker nodes</strong></em> are the part of the Kubernetes clusters which actually execute the containers and applications on them. <a href="https://www.educative.io/edpresso/what-is-kubernetes-cluster-what-are-worker-and-master-nodes" rel="nofollow noreferrer">[1]</a></p> </blockquote> <hr /> <blockquote> <p><em><strong>Worker nodes</strong></em> are generally more powerful than <em><strong>master nodes</strong></em> because they have to run hundreds of clusters on them. However, <em><strong>master nodes</strong></em> hold more significance because they manage the distribution of workload and the state of the cluster. <a href="https://www.educative.io/edpresso/what-is-kubernetes-cluster-what-are-worker-and-master-nodes" rel="nofollow noreferrer">[1]</a></p> </blockquote> <hr /> <p>By removing the taint you will be able to schedule pods on that node.</p> <p>First, check the present taint by running:</p> <pre class="lang-yaml prettyprint-override"><code>kubectl describe node &lt;nodename&gt; | grep Taints </code></pre> <p>If the node in question carries the master taint, remove it by running:</p> <pre class="lang-yaml prettyprint-override"><code>kubectl taint node &lt;mastername&gt; node-role.kubernetes.io/master:NoSchedule- </code></pre> <hr /> <p>References: <a href="https://www.educative.io/edpresso/what-is-kubernetes-cluster-what-are-worker-and-master-nodes" rel="nofollow noreferrer">[1] - What is Kubernetes cluster? What are worker and master nodes?</a></p> <p>See also:</p> <ul> <li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">Creating a cluster with kubeadm</a>,</li> <li>These four similar questions: <ol> <li><a href="https://stackoverflow.com/questions/56162944/master-tainted-no-pods-can-be-deployed">Master tainted - no pods can be deployed</a></li> <li><a href="https://stackoverflow.com/questions/55191980/remove-node-role-kubernetes-io-masternoschedule-taint">Remove node-role.kubernetes.io/master:NoSchedule taint</a>,</li> <li><a href="https://stackoverflow.com/questions/43147941/allow-scheduling-of-pods-on-kubernetes-master">Allow scheduling of pods on Kubernetes master?</a></li> <li><a href="https://stackoverflow.com/questions/63967089/are-the-master-and-worker-nodes-the-same-node-in-case-of-a-single-node-cluster">Are the master and worker nodes the same node in case of a single node cluster?</a></li> </ol> </li> <li><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and Tolerations</a>.</li> </ul>
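<p>Note: on clusters created with recent kubeadm versions (roughly v1.24 and later) the control-plane taint is named <code>node-role.kubernetes.io/control-plane</code> instead of <code>.../master</code>, so if the <code>grep Taints</code> output shows that name, the equivalent command is:</p> <pre class="lang-yaml prettyprint-override"><code>kubectl taint node &lt;mastername&gt; node-role.kubernetes.io/control-plane:NoSchedule-
</code></pre>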
kkopczak
<p><a href="https://i.stack.imgur.com/6b3fx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6b3fx.png" alt="enter image description here" /></a></p> <p>Is this diagram correct? Because, activeDeadlineSeconds takes precedence over backOffLimit, if activeDeadlineSeconds exceeded it should directly mark the job as incomplete right? Why is this checking backOffLimit if activeDeadlineSeconds exceeded?</p>
Vasu Youth
<p>The two parameters serve different purposes. In most cases <strong>activeDeadlineSeconds</strong> takes <strong>precedence</strong> over <strong>backoffLimit</strong>, because <strong>activeDeadlineSeconds</strong> is a hard limit on the <strong>amount of time</strong> the Job is allowed to take: once <strong>activeDeadlineSeconds</strong> is exceeded, all the pods are <strong>terminated</strong> and the Job is marked as <strong>failed</strong> with <strong>reason: DeadlineExceeded</strong>. The <strong>backoffLimit</strong>, on the other hand, is a limit on the <strong>number of retries</strong> that are attempted. This is how the Kubernetes Job controller behaves; other systems with similar parameters may <strong>prioritize</strong> them differently.</p> <p>The flow for a scheduled <strong>cronjob</strong> is as follows:</p> <ol> <li>The <strong>cron job</strong> fires on its schedule.</li> <li>It creates a normal <strong>Job</strong>.</li> <li>The Job creates a Pod.</li> <li>If the Pod has an <strong>error</strong>, it checks whether the <strong>backoffLimit</strong> is exceeded.</li> <li>If yes, the Job is marked as <strong>incomplete</strong>.</li> <li>If no, it checks whether <strong>activeDeadlineSeconds is exceeded</strong>.</li> <li>If <strong>activeDeadlineSeconds is not exceeded</strong>, the Job is marked as complete.</li> <li>If it <strong>is exceeded</strong>, the Job is <strong>marked as failed with reason DeadlineExceeded</strong> (see the minimal example below).</li> </ol> <p>For more information refer to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup" rel="nofollow noreferrer">official k8s documentation</a>.</p>
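<p>For reference, a minimal Job manifest setting both fields could look like the sketch below (the name, image, command and chosen numbers are illustrative); whichever limit is hit first decides how the Job finishes:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  backoffLimit: 3            # max retries before the Job is marked Failed (reason: BackoffLimitExceeded)
  activeDeadlineSeconds: 120 # hard wall-clock limit; on expiry the Job is Failed (reason: DeadlineExceeded)
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          command: [&quot;sh&quot;, &quot;-c&quot;, &quot;exit 1&quot;]   # always fails, to exercise the retry/deadline logic
</code></pre>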
Dharani Dhar Golladasari
<p>I am a k8s beginner.</p> <p>I have a k8s cluster (<strong>not</strong> minikube) in my test computer. <strong>The cluster was created by Kind.</strong> <em>(...no idea what Kind is. It was recommended by my colleague)</em>.</p> <p>Then I created:</p> <ul> <li>a k8s deployment with a small test http server;</li> <li>a k8s service (type:NodePort) associated with the deployment.</li> </ul> <p>I expected to visit the http server using a web browser on a <strong>different</strong> computer connected with the k8s computer via ethernet.</p> <p>However, it's not working. The client computer can visit the http server if it is running without k8s (if I manually start it in command line), but if it is running in a k8s pod, the http server is inaccessible.</p> <p><strong>The <code>kubectl describe nodes</code> command showed that the k8s computer has an &quot;Internal IP&quot; that is different from the &quot;real&quot; IP of that computer:</strong></p> <pre><code>$ kubectl describe nodes Name: kind-control-plane Roles: control-plane ... Addresses: InternalIP: 172.17.0.2 Hostname: kind-control-plane Capacity: ... $ ip addr 1: ... 2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 00:50:56:a5:14:ab brd ff:ff:ff:ff:ff:ff inet 10.7.71.173/20 brd 10.7.79.255 scope global eth0 ... </code></pre> <p>In the above output, the 10.7.71.173 is the &quot;real&quot; IP and the 172.17.0.2 is shown as the &quot;Internal IP&quot;.</p> <p>I can access (curl) the http server by the &quot;Internal IP&quot; but it only works on the k8s computer.</p> <p>I Googled it for quite a while for an explanation of k8s internal IP but I only got one semi-comprehensible answer: the k8s internal ip is an NAT IP (I understand what NAT is). But I am still not quite sure what k8s internal IP is and why it has to use an internal IP.</p> <p>More importantly, many internet posts say that the NodePort service allows the app in a pod to be accessed from outside the cluster. It is different from what I experienced.</p> <p>So my question is:</p> <ol> <li>I know that an ingress or a load balancer can expose the app-in-pod to the real external world, but why can't the NodePort service do the same work?</li> <li>What exactly is the &quot;internal IP&quot;?</li> </ol> <p>Any explanations or links to articles/posts are welcomed. Thank you very much.</p>
Zhou
<p>When a kubernetes cluster is created, it establishes an <strong>internal network</strong> within the cluster which <strong>enables communication</strong> between different types of kubernetes resources like <strong>nodes</strong>, <strong>pods</strong> and <strong>services</strong>. In a kubernetes cluster each node gets assigned an <strong>IP address</strong> called the ‘<strong>InternalIP</strong>’ address. This address is used by kubernetes components to communicate with each other.</p> <p>For example, if you create a service, kubernetes assigns a <strong>unique IP address</strong> to the <strong>service</strong>, which is used to access the service from within the cluster. The <strong>InternalIP</strong> address of a node where the service is <strong>running</strong> is used to <strong>route the traffic</strong> to the service.</p> <p>You mentioned you can only <code>curl</code> the <code>InternalIP</code> of the node from the k8s machine itself - but did you try requesting it together with the port number that was exposed by the <strong>NodePort</strong> service?</p> <p>If not, try <code>curl http://&lt;Internal IP&gt;:&lt;NodePort&gt;</code></p> <p>For more detailed information about <strong>InternalIP</strong> and the <strong>NodePort</strong> service, refer to these official documents: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">DOC1</a>, <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport" rel="nofollow noreferrer">DOC2</a>.</p>
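<p>To find which port was allocated, look at the <code>PORT(S)</code> column of the service; the 3xxxx number after the colon is the NodePort. The names in angle brackets below are placeholders for your own service, namespace and node IP:</p> <pre><code>kubectl get svc &lt;service-name&gt; -n &lt;namespace&gt;
# e.g. PORT(S) shows 80:31234/TCP -&gt; 31234 is the NodePort

curl http://&lt;node-ip&gt;:&lt;nodeport&gt;
</code></pre>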
Dharani Dhar Golladasari
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: fifapes123/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
</code></pre> <p>I am getting an error on line number 5. I am using skaffold/v2alpha3, in which manifests under kubectl is allowed, so why am I getting &quot;property manifests is not allowed&quot;?</p>
Gaurav Sharma
<pre><code>apiVersion: skaffold/v3
kind: Config
build:
  artifacts:
    - image: fifapes123/auth
      context: auth
      sync:
        manual:
          - src: src/**/*.ts
            dest: .
      docker:
        dockerfile: Dockerfile
  local:
    push: false
manifests:
  rawYaml:
    - ./infra/k8s/*
deploy:
  kubectl: {}
</code></pre> <p>Try the above YAML config. You should <a href="https://skaffold.dev/docs/upgrading/" rel="noreferrer">update</a> your <code>skaffold.yaml</code> to the latest apiVersion (apiVersion: v3alpha1). This can easily be done with the <code>skaffold fix</code> command.</p>
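<p>A possible way to apply that upgrade automatically is sketched below (flags shown as I recall them; check <code>skaffold fix --help</code> for your version):</p> <pre><code># print the upgraded config to stdout
skaffold fix

# or rewrite skaffold.yaml in place
skaffold fix --overwrite
</code></pre>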
Prathibha Ratnayake
<p>I'm currently using the Python APIs for Kubernetes and I have to:</p> <ul> <li><p>Retrieve the instance of a custom resource name <code>FADepl</code>.</p> </li> <li><p>Edit the value of that instance.</p> </li> </ul> <p>In the terminal, I would simply list all <code>FADepls</code> with <code>kubectl get fadepl</code> and then edit the right one using <code>kubectl edit fadepl &lt;fadepl_name&gt;</code>. I checked the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md" rel="nofollow noreferrer">K8s APIs for Python</a> but I can't find what I need. Is it something I can do with the APIs?</p> <p>Thank you in advance!</p>
texdade
<p>You're right. Using <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#get_namespaced_custom_object" rel="nofollow noreferrer"><strong><code>get_namespaced_custom_object</code></strong></a> you can retrieve the instance. This method returns a namespace scoped custom object. By default it uses a synchronous HTTP request.</p> <p>Since the output of that method returns an object, you can simply replace it using <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#replace_cluster_custom_object" rel="nofollow noreferrer"><strong><code>replace_cluster_custom_object</code></strong></a>.</p> <p><a href="https://www.programcreek.com/python/?CodeExample=replace+namespaced+custom+object" rel="nofollow noreferrer">Here</a> you can find implementation examples.</p> <p>See also whole list of <a href="https://aiokubernetes.readthedocs.io/en/latest/api_reference.html" rel="nofollow noreferrer">API Reference for Python</a>.</p>
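<p>A rough sketch of how that could look with the Python client is below. The <code>group</code>, <code>version</code>, <code>plural</code> and the edited field are assumptions about the <code>FADepl</code> CRD, so adjust them to its actual definition; since the object is fetched per namespace, the namespaced replace variant is used here:</p> <pre><code>from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

# assumed coordinates of the FADepl CRD - replace with the real ones
group, version, plural = &quot;example.com&quot;, &quot;v1&quot;, &quot;fadepls&quot;
namespace, name = &quot;default&quot;, &quot;my-fadepl&quot;

# retrieve the instance (returned as a plain dict)
fadepl = api.get_namespaced_custom_object(group, version, namespace, plural, name)

# edit the value you need (illustrative field)
fadepl[&quot;spec&quot;][&quot;replicas&quot;] = 3

# push the modified object back
api.replace_namespaced_custom_object(group, version, namespace, plural, name, fadepl)
</code></pre>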
kkopczak
<p>How can I change the default ephemeral storage in Rancher?</p> <p>Suppose the current ephemeral storage is using /dev/sda1 and I want to change it to /dev/sda2. Any document or steps would be really helpful.</p>
prakasun
<p>There are a few things to consider in achieving your use case. To start off, you need to change the kubelet ‘--root-dir’ setting. In your case, make sure that /dev/sda2 is formatted and mounted at the new directory. Go to Rancher, modify the cluster configuration and adjust the parameters: in the Extra Args or Extra Binds field, adjust the ‘--root-dir’ parameter to point to your new mount point. Once completed, ensure that /dev/sda2 is properly formatted and mounted, and restart the services so the changes take effect. The linked documentation is a good read. [1][2]</p> <p>[1] <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/</a></p> <p>[2] <a href="https://ranchermanager.docs.rancher.com/v2.5/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration" rel="nofollow noreferrer">https://ranchermanager.docs.rancher.com/v2.5/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration</a></p>
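<p>For an RKE1-provisioned cluster, that edit usually ends up looking roughly like the fragment below in the cluster YAML (Edit Cluster → Edit as YAML). The mount path is an assumption for illustration, and the kubelet data should point at a directory on the new disk rather than at the raw device:</p> <pre><code>services:
  kubelet:
    extra_args:
      root-dir: /mnt/sda2/kubelet        # directory on the /dev/sda2 filesystem
    extra_binds:
      - &quot;/mnt/sda2/kubelet:/mnt/sda2/kubelet&quot;
</code></pre>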
Ray John Navarro
<p>I just installed the prometheus operator as indicated here: <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus</a>:</p> <pre><code>kubectl apply --server-side -f manifests/setup
kubectl wait \
  --for condition=Established \
  --all CustomResourceDefinition \
  --namespace=monitoring
kubectl apply -f manifests/
</code></pre> <p>After that I just tried to set up my own service monitor for grafana as follows:</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: in1-grafana-service-monitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: grafana
  endpoints:
    - port: http
      interval: 10s
</code></pre> <p>This monitor works just fine and I can see it in the Prometheus /targets and /service-discovery.</p> <p>The fact is that when I want to create this same service monitor outside the &quot;monitoring&quot; namespace, it just does not appear in either /targets or /service-discovery. My setup for this service monitor is as follows:</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: out1-grafana-service-monitor
  namespace: other-namespace
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: grafana
  namespaceSelector:
    any: true
  endpoints:
    - port: http
      interval: 10s
</code></pre> <p>How can I make the Prometheus operator scrape service monitors (and services) outside the monitoring namespace?</p> <p>I checked the output of <code>kubectl get prom -Ao yaml</code> and it just displays empty selectors:</p> <pre><code>[...]
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
[...]
</code></pre> <p>Any help will be appreciated.</p> <p>Thank you.</p> <p>I expect the service monitor outside the monitoring namespace to work, as I need it for another service (not for Grafana).</p>
Joan
<p>After looking at the yaml files I realized that Prometheus doesn't have the permissions to read all namespaces. And after looking at the repository customization examples I found the solution: <a href="https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md</a></p> <p>Hope this helps someone else in the future.</p>
Joan
<p>I have a Raspberry Pi Cluster consisting of 1-Master 20-Nodes:</p> <ul> <li>192.168.0.92 (Master)</li> <li>192.168.0.112 (Node w/ USB Drive)</li> </ul> <p>I mounted a USB drive to <code>/media/hdd</code> &amp; set a label <code>- purpose=volume</code> to it.</p> <p>Using the following I was able to setup a NFS server:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: storage labels: app: storage --- apiVersion: v1 kind: PersistentVolume metadata: name: local-pv namespace: storage spec: capacity: storage: 3.5Ti accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage local: path: /media/hdd nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: purpose operator: In values: - volume --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-claim namespace: storage spec: accessModes: - ReadWriteOnce storageClassName: local-storage resources: requests: storage: 3Ti --- apiVersion: apps/v1 kind: Deployment metadata: name: nfs-server namespace: storage labels: app: nfs-server spec: replicas: 1 selector: matchLabels: app: nfs-server template: metadata: labels: app: nfs-server name: nfs-server spec: containers: - name: nfs-server image: itsthenetwork/nfs-server-alpine:11-arm env: - name: SHARED_DIRECTORY value: /exports ports: - name: nfs containerPort: 2049 - name: mountd containerPort: 20048 - name: rpcbind containerPort: 111 securityContext: privileged: true volumeMounts: - mountPath: /exports name: mypvc volumes: - name: mypvc persistentVolumeClaim: claimName: local-claim nodeSelector: purpose: volume --- kind: Service apiVersion: v1 metadata: name: nfs-server namespace: storage spec: ports: - name: nfs port: 2049 - name: mountd port: 20048 - name: rpcbind port: 111 clusterIP: 10.96.0.11 selector: app: nfs-server </code></pre> <p>And I was even able to make a persistent volume with this:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mysql-nfs-volume labels: directory: mysql spec: capacity: storage: 200Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: slow nfs: path: /mysql server: 10.244.19.5 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-nfs-claim spec: storageClassName: slow accessModes: - ReadWriteOnce resources: requests: storage: 100Gi selector: matchLabels: directory: mysql </code></pre> <p>But when I try to use the volume like so:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: wordpress-mysql labels: app: wordpress spec: ports: - port: 3306 selector: app: wordpress tier: mysql clusterIP: None --- apiVersion: apps/v1 kind: Deployment metadata: name: wordpress-mysql labels: app: wordpress spec: selector: matchLabels: app: wordpress tier: mysql strategy: type: Recreate template: metadata: labels: app: wordpress tier: mysql spec: containers: - image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-pass key: password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-nfs-claim </code></pre> <p>I get NFS version transport protocol not supported error.</p>
Proximo
<p>When you see the <strong><code>mount.nfs: requested NFS version or transport protocol is not supported</code></strong> error, there are three main reasons:</p> <blockquote> <ol> <li>NFS services are not running on NFS server</li> <li>NFS utils not installed on the client</li> <li>NFS service hung on NFS server</li> </ol> </blockquote> <p>According to <a href="https://kerneltalks.com/troubleshooting/mount-nfs-requested-nfs-version-or-transport-protocol-is-not-supported/" rel="nofollow noreferrer">this article</a>, there are three solutions to resolve the problem with your error.</p> <p><strong>First one:</strong></p> <p>Log in to the NFS server and check the NFS service status. If the command <code>service nfs status</code> reports that NFS services are stopped on the server, just start them using <code>service nfs start</code>, then retry mounting the NFS share on the client.</p> <p><strong>Second one:</strong></p> <p>If your problem isn't resolved after trying the first solution,</p> <blockquote> <p>try <a href="https://kerneltalks.com/tools/package-installation-linux-yum-apt/" rel="nofollow noreferrer">installing package</a> nfs-utils on your server.</p> </blockquote> <p><strong>Third one:</strong></p> <blockquote> <p>Open file <code>/etc/sysconfig/nfs</code> and try to check below parameters</p> </blockquote> <pre><code># Turn off v4 protocol support
#RPCNFSDARGS=&quot;-N 4&quot;
# Turn off v2 and v3 protocol support
#RPCNFSDARGS=&quot;-N 2 -N 3&quot;
</code></pre> <blockquote> <p>Removing hash from <code>RPCNFSDARGS</code> lines will turn off specific version support. This way clients with mentioned NFS versions won’t be able to connect to the NFS server for mounting share. If you have any of it enabled, try disabling it and mounting at the client after the NFS server service restarts.</p> </blockquote>
kkopczak
<p>I'm debugging CPU utilization in our kubernetes cluster.</p> <p>When I open the page of a specific kubernetes node in a node pool, GCP reports an average of 14%. But when I open the details of the same node from either Monitoring or Compute Engine -&gt; VM Instances -&gt; Observability I see twice the percentage.</p> <p>Why such difference?</p>
Federico Fissore
<p>The discrepancy could be caused by several factors, such as the type of CPU utilization metric being shown (user-level, system-level, or total CPU utilization). The data source is another point to check, as the two views may not be backed by the same metric. Also check the update frequency or refresh interval, since it can influence what is reported. The linked documentation can help you with your use case. [1][2]</p> <p>[1] <a href="https://cloud.google.com/monitoring/api/metrics" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/metrics</a></p> <p>[2] <a href="https://cloud.google.com/monitoring/api/metrics_gcp#gcp-compute" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/metrics_gcp#gcp-compute</a></p>
Ray John Navarro
<p>I have a scenario where a <strong>topic_source</strong> receives all messages generated by another application. These JSON messages might be duplicated, so I need to deduplicate them based on a &quot;window&quot; size, say every 10 seconds: if there are any duplicates in <strong>topic_source</strong>, I will send deduplicated (based on message_id) messages to <strong>topic_target</strong>.</p> <p>For this I am using KStream: reading from <strong>topic_source</strong>, grouping by message_id using the &quot;count&quot; aggregation, and for each entry sending one message to <strong>topic_target</strong>.</p> <p><strong>Something like below</strong></p> <blockquote> <p>final KStream&lt;String, Output&gt; msgs = builder.stream(&quot;topic_source&quot;,Serdes.String());</p> <p>final KTable&lt;Windowed, Long&gt; counts = clickEvents .groupByKey() .windowedBy(TimeWindows.of(Duration.ofSeconds(10))) .count();</p> <p>counts.toStream() .map((key, value) -&gt; KeyValue.pair( key.key(), new Output(key.key(), value, key.window().start()))) .to(&quot;topic_target&quot;, Produced.with(Serdes.Integer(), new JsonSerde&lt;&gt;(Output.class)));</p> </blockquote> <p>This is working fine on my local machine (Windows, standalone Eclipse IDE) when tested.</p> <p>But when I <strong>deploy the service/application on Kubernetes pods</strong> and test, I find that <strong>topic_target</strong> receives as many messages as <strong>topic_source</strong> (no deduplication is happening).</p> <p>I think the <strong>topic_source</strong> messages are being processed on different pods, and the aggregation across pods does not result in a single group-by (message_id) set; i.e. each pod (grouping by the same message_id) sends its own deduplicated messages to <strong>topic_target</strong>, and the accumulated result contains duplicates.</p> <p>Is there any way to solve this issue on a Kubernetes cluster? I.e. is there any way for all pods together to group by on one set, and send one distinct/deduplicated message set to <strong>topic_target</strong>?</p> <p>To achieve this, what features of Kubernetes/Docker should I use? Is there any design mechanism/pattern I should follow?</p> <p>Any advice is highly appreciated.</p>
Shasu
<p>Who processes which messages depends on your partition assignment. Even if you have multiple pods KafkaStreams will allocate the same partitions to the same pods. So pod 1 will have partition 1 of input_topic, and partition 1 of whatever other topic your application is consuming.</p> <p>Granted the specificity of your needs - which is possible to implement using standard operators - I'd probably implement this with processor API. It requires an extra changelog topic versus the repartition you'll need for grouping by key.</p> <p>The processor code would look like something below:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>public class DeduplicationTimedProcessor&lt;Key, Value&gt; implements Processor&lt;Key, Value, Key, Value&gt; { private final String storeName; private final long deduplicationOffset; private ProcessorContext&lt;Key, Value&gt; context; private KeyValueStore&lt;Key, TimestampedValue&lt;Value&gt;&gt; deduplicationStore; @Data @NoArgsConstructor @AllArgsConstructor public static class TimestampedValue&lt;Value&gt; { private long timestamp; private Value value; } // Store needed for deduplication - means one changelog topic public DeduplicationTimedProcessor(String storeName, long deduplicationOffset) { this.storeName = storeName; this.deduplicationOffset = deduplicationOffset; } @Override public void init(ProcessorContext&lt;Key, Value&gt; context) { Processor.super.init(context); this.context = context; this.deduplicationStore = context.getStateStore(storeName); } @Override public void process(Record&lt;Key, Value&gt; record) { var key = record.key(); var value = record.value(); var timestamp = context.currentSystemTimeMs(); // Uses System.currentTimeMillis() by default but easier for testing var previousValue = deduplicationStore.get(key); // New value - no deduplication - store + forward if(previousValue == null) { deduplicationStore.put(key, new TimestampedValue&lt;&gt;(timestamp, value)); context.forward(new Record&lt;&gt;(key, value, timestamp)); return; } // previous value exists - check if duplicate &amp;&amp; in window if(previousValue.equals(value) &amp;&amp; timestamp - previousValue.timestamp &lt; deduplicationOffset) { // skip this message as duplicate within window return; } deduplicationStore.put(key, new TimestampedValue&lt;&gt;(timestamp, value)); context.forward(new Record&lt;&gt;(key, value, timestamp)); } }</code></pre> </div> </div> </p> <p>Added a few comments for clarity in there.</p> <p>Please be mindful that cleanup of the store rests with you, otherwise at some point you'll run out of disk space. Since you mentioned that your operation is for analytics I'd probably utilize a punctuator to routinely cleanup everything that is appropriately &quot;old&quot;.</p> <p>To use the processor use the process method (in older versions of KafkaStreams transform)</p>
Kacper Roszczyna
<p>I have a local website. The website was created by a docker-compose and it is listening on a localhost port 3000.</p> <p>When I try:</p> <pre><code>curl 127.0.0.1:3000 </code></pre> <p>I can see the response.</p> <p>What I did:</p> <p>From my domain provider I edited the DNS to point to my server, then I changed nginx-ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: virtual-host-ingress namespace: ingress-basic annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; cert-manager.io/cluster-issuer: &quot;letsencrypt-pp&quot; spec: tls: - hosts: - nextformulainvesting.com secretName: *** rules: - host: &quot;nextformulainvesting.com&quot; http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: e-frontend-saleor port: number: 80 </code></pre> <p>and I created the service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: e-frontend-saleor spec: ports: - protocol: TCP port: 80 targetPort: 3000 </code></pre> <p>But with the service or without the service I receive the error <code>503 Service Temporarily Unavailable</code>.</p> <p>How can I use nginx-ingress to point to my local TCP service?</p>
inyourmind
<p>To clarify the issue I am posting a community wiki answer.</p> <p>The answer that helped to resolve this issue is available at <a href="https://stackoverflow.com/questions/57764237/kubernetes-ingress-to-external-service">this link</a>. Based on that - the clue of the case is to create manually a Service and an Endpoint objects for external server.</p> <p>After that one can create an Ingress object that will point to Service <code>external-ip</code> with adequate port .</p> <p>Here are the examples of objects provided in <a href="https://stackoverflow.com/questions/57764237/kubernetes-ingress-to-external-service">similar question</a>.</p> <ul> <li>Service and an Endpoint objects:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: external-ip spec: ports: - name: app port: 80 protocol: TCP targetPort: 5678 clusterIP: None type: ClusterIP --- apiVersion: v1 kind: Endpoints metadata: name: external-ip subsets: - addresses: - ip: 10.0.40.1 ports: - name: app port: 5678 protocol: TCP </code></pre> <ul> <li>Ingress object:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: external-service spec: rules: - host: service.example.com http: paths: - backend: serviceName: external-ip servicePort: 80 path: / </code></pre> <p>See also <a href="https://github.com/kubernetes/kubernetes/issues/8631#issuecomment-104404768" rel="nofollow noreferrer">this reference</a>.</p>
kkopczak
<p>I am trying to use helm to define some variables (including secrets which have been defined in GCP) and use them in an appsettings.json file. None are pulling through. My helm chart defines the vars as below. Some 'secrets' are pulling from the helm chart and some will be from GCP secrets):</p> <pre><code> - name: ASPNETCORE_ENVIRONMENT value: qa-k8 - name: auth__apiScopeSecret value: foo - name: discovery__uri value: bar envFrom: - secretRef: name: blah-secrets gcpSecrets: enabled: true secretsFrom: - blah-secrets </code></pre> <p>And my appsettings.json file is configured as per the below example. When I check the container, none of the variables within the helm chart have translated and the values are blank. What am I missing? I understand that the double underscores are required to locate the variables in the correct locations and probably aren't required in the appsettings file.</p> <pre><code> &quot;auth&quot;: { &quot;apiScopeSecret&quot;: &quot;${auth__apiScopeSecret}&quot; }, &quot;discovery&quot;: { &quot;uri&quot;: &quot;${discovery__uri}&quot; </code></pre>
Shep85
<p>That is the expected behaviour. Usually you create the appsettings file as a ConfigMap or Secret, with the helm values substituted into it during the deployment, and then you mount it into your container. In your case I don't see that you mount anything into the container; you just provide the values as environment variables.</p> <ol> <li>You should specify a Secret or ConfigMap built from your helm values that provides the appsettings file.</li> </ol> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: appsettings
data:
  appsettings.dev.json: |-
    {
      &quot;Logging&quot;: {
        &quot;LogLevel&quot;: {
          &quot;Default&quot;: {{my__helmvalue}},
        }
      }
    }
</code></pre> <ol start="2"> <li>In your pod you should specify the volumes, and in your container the volumeMounts, to specify which location the appsettings file should get mounted into.</li> </ol> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: examplepod
spec:
  containers:
    - name: test-container
      image: myimage
      volumeMounts:
        - name: config-volume
          mountPath: /app   ## specify your path to overwrite the appsettings file!
  volumes:
    - name: config-volume
      configMap:
        name: appsettings
  restartPolicy: Never
</code></pre>
pwoltschk
<p>I'm trying to generate kubernetes manifests from local ArgoCD manifests that use helm. I'm using a script to parse values and then <code>helm template</code> to generate the resulting template. But I'm having trouble converting the different values into a single template based on precedence.</p> <p>ArgoCD's documentation describes <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/helm/" rel="nofollow noreferrer">3 different ways</a> to inject values into a helm chart. Namely <code>valueFiles</code>, in-line <code>values</code> and helm <code>parameters</code>.</p> <p>For example if i have this argo application manifest containing all three:</p> <pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
...
  source:
    helm:
      parameters:
        - name: &quot;param1&quot;
          value: value1
      values: |
        param1: value2
      valuesFile:
        - values-file-1.yaml
        - values-file-2.yaml
</code></pre> <p>and let's say this is the contents of: values-file-1.yaml</p> <pre><code>param1: value4
</code></pre> <p>values-file-2.yaml</p> <pre><code>param1: value5
</code></pre> <p>what will <code>param1</code> ultimately be when this manifest is deployed on argocd? and what will the precedence be amongst all the values?</p> <p>I tried looking at documentation but wasn't able to find it</p>
Erich Shan
<p>I spent a few hours deploying ArgoCD with helm and producing manifests.</p> <p>Value injections have the following order of precedence:</p> <ol> <li>valueFiles</li> <li>values</li> <li>parameters</li> </ol> <p>so values trumps valueFiles, and parameters trump both.</p> <p>And then out of curiosity I played within each of those injection types, and essentially they resolve conflicts by choosing the last value from top to bottom.</p> <p>e.g. if we only have <code>values-file-1.yaml</code> and it contains</p> <pre><code>param1: value1
param1: value3000
</code></pre> <p>we get param1=value3000</p> <p>if we have</p> <pre><code>valuesFile:
  - values-file-2.yaml
  - values-file-1.yaml
</code></pre> <p>the last values file, i.e. <code>values-file-1.yaml</code>, will trump the first</p> <p>if we have</p> <pre><code>parameters:
  - name: &quot;param1&quot;
    value: value2
  - name: &quot;param1&quot;
    value: value1
</code></pre> <p>the result will be param1=value1</p> <p>and finally if we have</p> <pre><code>values: |
  param1: value2
  param1: value5
</code></pre> <p>the result will be param1=value5</p> <p>This may be obvious for some people but it wasn't for me.</p>
Erich Shan
<p>I have a kubernetes cluster in Amazon EKS and autoscaling is set up. So when load increases, a new node spins up in the cluster, and nodes spin down again as the load drops. We are monitoring it with Prometheus and send the desired alerts with Alertmanager.</p> <p>Please help me with a query that will send alerts whenever autoscaling is performed in my cluster.</p>
iemkamran
<p>The logic is not so great, but this works for me in a non-EKS Self Hosted Kubernetes Cluster on AWS EC2s.</p> <pre><code>(group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes ) * 0 </code></pre> <p>The above query fetches the currently up nodes and multiplies them by 0,</p> <pre><code>or group by (kubernetes_io_hostname, kubernetes_io_role) (delta ( container_memory_working_set_bytes[1m]))) == 1 </code></pre> <p>Here, it adds all nodes that existed in the last 1 minute through the <code>delta()</code> function. The default value of the nodes in the <code>delta()</code> function output will be 1, but the existing nodes will be overridden by the value 0, because of the <code>OR</code> precedence. So finally, only the newly provisioned node(s) will have the value 1, and they will get filtered by the equality condition. You can also extract whether the new node is master/worker by the <code>kubernetes_io_role</code> label</p> <p>Full Query:</p> <pre><code>(group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes ) * 0 or group by (kubernetes_io_hostname, kubernetes_io_role) (delta ( container_memory_working_set_bytes[1m]))) == 1 </code></pre> <p>You can reverse this query for downscaling of nodes, although that will collide with the cases in which your Kubernetes node Shuts Down Abruptly due to reasons other than AutoScaling</p>
Ayush Rathore
<p>I ran into this error while trying to run pod install. Pls what can I do to fix this???</p> <p>Kindly note that I have tried deleting podfile.lock, flutter clean, pod update, pod repo update, delete ios/pods file and .symlinks folder. None of this fix my issue. Pls help</p> <pre><code>Analyzing dependencies firebase_analytics: Using Firebase SDK version '9.5.0' defined in 'firebase_core' firebase_auth: Using Firebase SDK version '9.5.0' defined in 'firebase_core' firebase_core: Using Firebase SDK version '9.5.0' defined in 'firebase_core' firebase_database: Using Firebase SDK version '9.5.0' defined in 'firebase_core' firebase_dynamic_links: Using Firebase SDK version '9.5.0' defined in 'firebase_core' firebase_messaging: Using Firebase SDK version '9.5.0' defined in 'firebase_core' [!] CocoaPods could not find compatible versions for pod &quot;GoogleAppMeasurement&quot;: In snapshot (Podfile.lock): GoogleAppMeasurement (= 9.5.0) In Podfile: google_mobile_ads (from `.symlinks/plugins/google_mobile_ads/ios`) was resolved to 2.1.0, which depends on Google-Mobile-Ads-SDK (= 8.13.0) was resolved to 8.13.0, which depends on GoogleAppMeasurement (&lt; 9.0, &gt;= 7.0) You have either: * out-of-date source repos which you can update with `pod repo update` or with `pod install --repo-update`. * changed the constraints of dependency `GoogleAppMeasurement` inside your development pod `google_mobile_ads`. You should run `pod update GoogleAppMeasurement` to apply changes you've made. </code></pre>
Ola
<p>The solution that works for me is:</p> <ol> <li>Change google_mobile_ads: 1.1.0 in pubspec.yaml to google_mobile_ads: ^2.1.0</li> <li>Run flutter pub get</li> <li>Delete podfile.lock and the .symlinks folder inside the ios folder.</li> <li>Run pod install, flutter clean and then flutter run.</li> <li>It didn't run at first, so I tried pod install --repo-update and flutter run again. Boom, my build is successful.</li> </ol>
Ola
<p>As said in title, I'm trying to add an AKS cluster to my Azure Machine Learning workspace as <code>Attached computes</code>.</p> <p>In the wizard that ML studio shows while adding it</p> <p><a href="https://i.stack.imgur.com/vlVVj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vlVVj.png" alt="enter image description here" /></a></p> <p>there's a link to a guide to <a href="https://learn.microsoft.com/en-gb/azure/machine-learning/how-to-attach-kubernetes-anywhere" rel="nofollow noreferrer">install AzureML extension</a>.</p> <p>Just 4 steps:</p> <ol> <li>Prepare an Azure Kubernetes Service cluster or Arc Kubernetes cluster.</li> <li>Deploy the AzureML extension.</li> <li>Attach Kubernetes cluster to your Azure ML workspace.</li> <li>Use the Kubernetes compute target from CLI v2, SDK v2, and the Studio UI.</li> </ol> <p>My issue comes ad 2nd step.</p> <p>As suggested I'm trying to <a href="https://learn.microsoft.com/en-gb/azure/machine-learning/how-to-deploy-kubernetes-extension?tabs=deploy-extension-with-cli#azureml-extension-deployment---cli-examples-and-azure-portal" rel="nofollow noreferrer">create a POC</a> trough az cli</p> <p><code>az k8s-extension create --name &lt;extension-name&gt; --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name &lt;your-AKS-cluster-name&gt; --resource-group &lt;your-RG-name&gt; --scope cluster</code></p> <p>I'm already logged on right subscription (where I'm owner), ad using right cluster name and resource group. as extension-name I've used <code>test-ml-extension</code>, but I keep to get this error</p> <p><code>(ExtensionOperationFailed) The extension operation failed with the following error: Request failed to https://management.azure.com/subscriptions/&lt;subscription-id&gt;/resourceGroups/&lt;rg-name&gt;/providers/Microsoft.ContainerService/managedclusters/&lt;cluster-name&gt;/extensionaddons/test-ml-extension?api-version=2021-03-01. Error code: Unauthorized. Reason: Unauthorized.{&quot;error&quot;:{&quot;code&quot;:&quot;InvalidAuthenticationToken&quot;,&quot;message&quot;:&quot;The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant.&quot;}}. Code: ExtensionOperationFailed Message: The extension operation failed with the following error: Request failed to https://management.azure.com/subscriptions/&lt;subscription-id&gt;/resourceGroups/&lt;rg-name&gt;/providers/Microsoft.ContainerService/managedclusters/&lt;cluster-name&gt;/extensionaddons/test-ml-extension?api-version=2021-03-01. Error code: Unauthorized. Reason: Unauthorized.{&quot;error&quot;:{&quot;code&quot;:&quot;InvalidAuthenticationToken&quot;,&quot;message&quot;:&quot;The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant.&quot;}}.</code></p> <p>Am I missing something?</p>
Michele Ietri
<p><em><strong>I tried to reproduce the same issue in my environment and got the below results</strong></em></p> <p><em>I have created the Kubernetes cluster and launched the AML studio</em></p> <p><em>In AML I have created the workspace and created the compute with the AKS cluster</em></p> <p><img src="https://i.stack.imgur.com/X4frz.png" alt="enter image description here" /></p> <p><em>Deployed the AzureML extension using the below command</em></p> <pre><code>az k8s-extension create --name Aml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name my-aks-cluster --resource-group Alldemorg --scope cluster </code></pre> <p><img src="https://i.stack.imgur.com/PgKIl.png" alt="enter image description here" /></p> <p><em>I am able to see all the deployed clusters using the below command</em></p> <pre><code>az k8s-extension show --name &lt;extension_name&gt; --cluster-type connectedClusters --cluster-name &lt;connected_cluster_name&gt; --resource-group &lt;rg_name&gt; </code></pre> <p><em>After deploying the AzureML extension I am able to attach the Kubernetes cluster to the AzureML workspace.</em></p> <p><img src="https://i.stack.imgur.com/iOb1d.png" alt="enter image description here" /></p> <p><em><strong>NOTE:</strong></em></p> <p><em>The ExtensionOperationFailed error may occur for the reasons below:</em></p> <p><em>1). Region blocking: for some AML clusters a few regions are not allowed and will be blocked</em></p> <p><em>2). Please check the version and upgrade to the latest version</em></p> <p><em>3). While creating the extension, please make sure the cluster name is the one present in the AML workspace</em></p> <p><em>4). The service principal for the cluster does not exist in the tenant we are trying to access</em></p> <p><em>5). Each tenant we want to access from must consent to the cluster; this ensures the service principal exists in that tenant and has the required access</em></p>
Komali Annem
<p>Is there a way to determine programmatically if a pod is in CrashLoopBackOff? I tried the following</p> <pre><code>pods,err := client.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{}) if err != nil { return err } for _, item := range pods.Items { log.Printf(&quot;found pod %v with state %v reason %v and phase %v that started at %v&quot;, item.Name, item.Status.Message, item.Status.Reason, item.Status.Phase, item.CreationTimestamp.Time) } </code></pre> <p>However, this just prints blanks for state and reason, though it prints the phase.</p>
NonoPa Naka
<p>To clarify I am posting a community wiki answer.</p> <blockquote> <p>It's hiding in <a href="https://pkg.go.dev/k8s.io/api/core/v1#ContainerStateWaiting" rel="nofollow noreferrer"><code>ContainerStateWaiting.Reason</code></a>:</p> </blockquote> <pre><code>kubectl get po -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}' </code></pre> <blockquote> <p>although be aware that it only <em>intermittently</em> shows up there, since it is an intermittent state of the container; perhaps a more programmatic approach is to examine the <code>restartCount</code> and the <code>Error</code> state</p> </blockquote> <p>See also <a href="https://github.com/kubernetes-client/go/blob/master/kubernetes/docs/V1PodStatus.md" rel="nofollow noreferrer">this repository</a>.</p>
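<p>A minimal sketch in Go of the programmatic check, extending the loop from the question (it assumes the same client-go pod list and only inspects the container statuses):</p> <pre><code>for _, item := range pods.Items {
	for _, cs := range item.Status.ContainerStatuses {
		// CrashLoopBackOff surfaces as the waiting reason of a container
		if cs.State.Waiting != nil &amp;&amp; cs.State.Waiting.Reason == &quot;CrashLoopBackOff&quot; {
			log.Printf(&quot;pod %v container %v is in CrashLoopBackOff (restarts: %d)&quot;,
				item.Name, cs.Name, cs.RestartCount)
		}
	}
}
</code></pre> <p>Because the waiting state is intermittent, checking <code>cs.RestartCount</code> together with <code>cs.LastTerminationState.Terminated</code> gives a more reliable signal, as noted above.</p>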
kkopczak
<p>In the Google documentation there is the following example:</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#autoscaling_limits" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#autoscaling_limits</a></p> <p>Autoscaling limits</p> <p>You can set the minimum and maximum number of nodes for the cluster autoscaler to use when scaling a node pool. Use the <code>--min-nodes</code> and <code>--max-nodes</code> flags to set the minimum and maximum number of nodes per zone</p> <p>Starting in GKE version 1.24, you can use the <code>--total-min-nodes</code> and <code>--total-max-nodes</code> flags for new clusters. These flags set the minimum and maximum number of the total number of nodes in the node pool across all zones.</p> <p><strong>Min and max nodes example</strong></p> <p>The following command creates an autoscaling multi-zonal cluster with six nodes across three zones initially, with a minimum of one node per zone and a maximum of four nodes per zone:</p> <pre><code>gcloud container clusters create example-cluster \ --num-nodes=2 \ --zone=us-central1-a \ --node-locations=us-central1-a,us-central1-b,us-central1-f \ --enable-autoscaling --min-nodes=1 --max-nodes=4 </code></pre> <p>In this example, the total size of the cluster can be between three and twelve nodes, spread across the three zones. If one of the zones fails, the total size of the cluster can be between two and eight nodes.</p> <p>So my question is:</p> <p>What is the effect of setting <code>--num-nodes=2</code> in this example? Does it matter if I set <code>--num-nodes=1</code> or <code>--num-nodes=3</code>?</p>
Johan1us
<p><code>--num-nodes=2</code> represents the initial number of virtual machines (nodes) created per zone in your Kubernetes cluster. Setting it to 1 or 3 matters depending on how much your application will use the cluster from the start. Also, from the example above, <code>--enable-autoscaling --min-nodes=1 --max-nodes=4</code> lets you change the number of nodes depending on the behavior of your application, if it needs more resources.</p>
生きがい
<p>I have a problem with the start order of the istio-sidecar and the main application in Kubernetes. When the pod starts, the main application gets a &quot;connection refused&quot; error for external services. When the istio-envoy proxy is ready, the main application starts correctly on the next attempt.</p> <p>While Istio is not ready, the main application has time to crash and restart 2-3 times.</p> <p>How do I make the main application wait for the istio-sidecar to start and only then start running itself?</p>
vladimir vyatkin
<p>I believe setting holdApplicationUntilProxyStarts to true will solve your issue. You can get more information about it here: <a href="https://istio.io/latest/docs/ops/common-problems/injection/" rel="nofollow noreferrer">https://istio.io/latest/docs/ops/common-problems/injection/</a> Hope it helps.</p>
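<p>For reference, a minimal sketch of the two usual ways to enable it (based on the linked Istio docs; adjust to your install method). Mesh-wide, via the mesh config in an IstioOperator:</p> <pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true
</code></pre> <p>Or per workload, with a pod annotation on the deployment's pod template:</p> <pre><code>template:
  metadata:
    annotations:
      proxy.istio.io/config: |
        holdApplicationUntilProxyStarts: true
</code></pre>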
Debby
<p><strong>After installing Helm and the cert-manager required CRDs, as per the official docs:</strong></p> <pre><code>kubectl apply -f https://github.com/cert-manager/cert manager/releases/download/v1.12.0/cert-manager.crds.yaml </code></pre> <p><strong>When installing cert-manager with helm install, I get:</strong></p> <pre><code>helm install \ &gt; cert-manager jetstack/cert-manager \ &gt; --namespace cert-manager \ &gt; --create-namespace \ &gt; --version v1.12.0 \ &gt; Error: INSTALLATION FAILED: failed post-install: 1 error occurred: * timed out waiting for the condition </code></pre>
jackazjimmy
<p>From the CRD link you mentioned above, it looks like there is a typo in the url (hyphen-missing).</p> <p>It should be</p> <pre><code>kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.crds.yaml </code></pre> <p>As CRDs are not installed in your case, cert-manager helm installation is timing out. Clean up the failed helm release using <code>helm delete cert-manager -n cert-manager</code> and retry your installation steps.</p> <p>Or you can install both CRD and cert-manager in one go using command:</p> <pre><code>helm install \ cert-manager jetstack/cert-manager \ --namespace cert-manager \ --create-namespace \ --version v1.12.0 \ --set installCRDs=true </code></pre> <p>Both these steps are mentioned in cert-manager official <a href="https://cert-manager.io/docs/installation/helm/" rel="nofollow noreferrer">doc</a></p>
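<p>After a clean reinstall, you can check that the cert-manager pods came up before creating any Issuers or Certificates:</p> <pre><code>kubectl get pods -n cert-manager
</code></pre>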
Abhisek Dwivedi
<p>I've defined an alert for my kubernetes pods as described below to notify through slack. I used the example described in the official documentation for <a href="https://www.prometheus.io/docs/alerting/latest/notification_examples/#ranging-over-all-received-alerts" rel="nofollow noreferrer">ranging over all received alerts</a> to loop over multiple alerts and render them on my slack channel I do get notifications but the new lines do not get rendered correctly somehow. I'm new to prometheus any help is greatly appreciated. Thanks.</p> <pre><code>detection: # Alert If: # 1. Pod is not in a running state. # 2. Container is killed because it's out of memory. # 3. Container is evicted. rules: groups: - name: not-running rules: - alert: PodNotRunning expr: kube_pod_status_phase{phase!=&quot;Running&quot;} &gt; 0 for: 0m labels: severity: warning annotations: summary: &quot;Pod {{ $labels.pod }} is not running.&quot; description: 'Kubernetes pod {{ $labels.pod }} is not running.' - alert: KubernetesContainerOOMKilledOrEvicted expr: kube_pod_container_status_last_terminated_reason{reason=~&quot;OOMKilled|Evicted&quot;} &gt; 0 for: 0m labels: severity: warning annotations: summary: &quot;kubernetes container killed/evicted (instance {{ $labels.instance }})&quot; description: &quot;Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has been OOMKilled/Evicted.&quot; route: group_by: ['alertname'] group_wait: 30s group_interval: 3m repeat_interval: 4h receiver: slack-channel routes: - match: alertname: PodNotRunning - match: alertname: KubernetesContainerOOMKilledOrEvicted notifications: receivers: - name: slack-channel slack_configs: - channel: kube-alerts title: &quot;{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}&quot; text: &quot;{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}&quot; </code></pre> <p>How it gets rendered on the actual slack channel:</p> <pre><code>Title: inst-1 down.\ninst-2 down.\ninst-3 down.\ninst-4 down. Text: inst-1 down.\ninst-2 down.\ninst-3 down.\ninst-4 down </code></pre> <p>How I though it would render:</p> <pre><code>Title: inst-1 down. Text: inst-1 down. Title: inst-2 down. Text: inst-2 down. Title: inst-3 down. Text: inst-3 down. Title: inst-4 down. Text: inst-4 down. </code></pre>
Pannu
<p>Use <code>{{ &quot;\n&quot; }}</code> instead of plain <code>\n</code></p> <p>example:</p> <pre><code>... slack_configs: - channel: kube-alerts title: &quot;{{ range .Alerts }}{{ .Annotations.summary }}{{ &quot;\n&quot; }}{{ end }}&quot; text: &quot;{{ range .Alerts }}{{ .Annotations.description }}{{ &quot;\n&quot; }}{{ end }}&quot; </code></pre>
majek
<p>I am using k3d to run local Kubernetes.</p> <p>I have created a cluster using k3d.</p> <p>Now I want to mount a local directory as a persistent volume.</p> <p>How can I do this while using k3d?</p> <p>I know in minikube</p> <pre><code>$ minikube start --mount-string=&quot;$HOME/go/src/github.com/nginx:/data&quot; --mount </code></pre> <p>Then, if you mount /data into your Pod using <code>hostPath</code>, you will get your local directory data into the Pod.</p> <p>Is there any similar technique while using k3d?</p>
Santhosh
<p>According to the answers to <a href="https://github.com/k3d-io/k3d/issues/566" rel="nofollow noreferrer">this GitHub issue</a> the feature you're looking for is not available yet.</p> <p>Here is an idea from that link:</p> <blockquote> <p>The simplest I guess would be to have a pretty generic mount containing all the code, e.g. in my case, I could do <code>k3d cluster create -v &quot;$HOME/git:/git@agent:*&quot;</code> to get all the repositories on my host present in all agent nodes to be used for hot-reloading.</p> </blockquote> <p>According to <a href="https://k3d.io/v5.2.0/usage/commands/k3d_cluster_create/" rel="nofollow noreferrer">this</a> documentation one can use the following command with the appropriate flag:</p> <pre class="lang-yaml prettyprint-override"><code>k3d cluster create NAME -v [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]] </code></pre> <p>This command mounts volumes into the nodes:</p> <pre><code>(Format: [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]) </code></pre> <p>Example:</p> <pre><code>`k3d cluster create --agents 2 -v /my/path@agent:0,1 -v /tmp/test:/tmp/other@server:0` </code></pre> <p><a href="https://dev.to/bbende/k3s-on-raspberry-pi-volumes-and-storage-1om5" rel="nofollow noreferrer">Here</a> is also an interesting article on how volumes and storage work in a K3s cluster (with examples).</p>
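<p>Once a host directory is mapped into the node(s) this way, a Pod can consume it with a plain <code>hostPath</code> volume, just like in the minikube example from the question. A minimal sketch, assuming the cluster was created with something like <code>k3d cluster create mycluster -v &quot;$HOME/go/src/github.com/nginx:/data@agent:*&quot;</code> (the names and paths are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      hostPath:
        path: /data        # path as seen inside the k3d node
        type: Directory
</code></pre>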
kkopczak
<p><strong>strong text</strong>Till the cert-manager every pods working good as followed by aerospike docs. But while installing the operator the operator pods get crash loop backoff.</p> <p>Installing operator using:</p> <pre><code>git clone https://github.com/aerospike/aerospike-kubernetes-operator.git git checkout 2.5.0 cd aerospike-kubernetes-operator/helm-charts helm install aerospike-kubernetes-operator ./aerospike-kubernetes-operator --set replicas=3 </code></pre> <p>Pods running:</p> <pre><code>PS C:\Users\B.Jimmy\aerospike-kubernetes-operator-1.0.0&gt; kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE cert-manager cert-manager-576c79cb45-xkr88 1/1 Running 0 4h41m cert-manager cert-manager-cainjector-664f76bc59-4b5kz 1/1 Running 0 4h41m cert-manager cert-manager-webhook-5d4fd5cb7f-f96qx 1/1 Running 0 4h41m default aerospike-kubernetes-operator-7bbb8745c8-86884 1/2 CrashLoopBackOff 36 (59s ago) 159m default aerospike-kubernetes-operator-7bbb8745c8-jzkww 1/2 Error 36 (5m14s ago) 159m kube-system aws-node-7b4nb 1/1 Running 0 21h kube-system aws-node-llnzh 1/1 Running 0 21h kube-system coredns-6c97f4f789-fhnq6 1/1 Running 0 21h kube-system coredns-6c97f4f789-wmcdm 1/1 Running 0 21h kube-system kube-proxy-5gwld 1/1 Running 0 21h kube-system kube-proxy-z2nwk 1/1 Running 0 21h olm catalog-operator-56db4cd676-hln6h 1/1 Running 0 21h olm olm-operator-5b8f867598-7h9z6 1/1 Running 0 21h olm operatorhubio-catalog-bd8rq 1/1 Running 0 178m olm packageserver-7cbbc9c85f-jms5f 1/1 Running 0 21h olm packageserver-7cbbc9c85f-z45jg 1/1 Running 0 21h </code></pre> <p>Crashing Pod Log:</p> <pre><code>PS C:\Users\B.Jimmy\aerospike-kubernetes-operator-1.0.0&gt; kubectl logs -f aerospike-kubernetes-operator-7bbb8745c8-86884 Defaulted container &quot;manager&quot; out of: manager, kube-rbac-proxy flag provided but not defined: -config Usage of /manager: -health-probe-bind-address string The address the probe endpoint binds to. (default &quot;:8081&quot;) -kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster. -leader-elect Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager. -metrics-bind-address string The address the metric endpoint binds to. (default &quot;:8080&quot;) -zap-devel Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true) -zap-encoder value Zap log encoding (one of 'json' or 'console') -zap-log-level value Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value &gt; 0 which corresponds to custom debug levels of increasing verbosity -zap-stacktrace-level value Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic'). </code></pre> <blockquote> <p>Do I need to configure nginx ingress after installing cert-manager.</p> </blockquote>
jackazjimmy
<p>I can recreate a similar behavior by following the steps you provided. I think there may be an accidental typo in those steps regarding the branch checkout, so it's attempting to use the <code>master</code> branch instead of <code>2.5.0</code>.</p> <p>The steps should be:</p> <pre><code>git clone https://github.com/aerospike/aerospike-kubernetes-operator.git cd aerospike-kubernetes-operator/helm-charts git checkout 2.5.0 helm install aerospike-kubernetes-operator ./aerospike-kubernetes-operator --set replicas=3 </code></pre> <p>Notice the <code>cd</code> and <code>git checkout</code> commands are flipped.</p> <p><strong>NOTE: You may need to uninstall the current helm chart first before reinstalling.</strong></p> <p>Example:</p> <pre><code>helm uninstall aerospike-kubernetes-operator </code></pre> <p>As a side note: I see you also have OLM namespaces already and may benefit from using the OLM installation for AKO found here: <a href="https://docs.aerospike.com/cloud/kubernetes/operator/install-operator-operatorhub" rel="noreferrer">https://docs.aerospike.com/cloud/kubernetes/operator/install-operator-operatorhub</a></p> <p>Hopefully this helps!</p>
Colton
<p>Occasionally, Pgadmin gives me the 500 error in a browser. After reloading the page, the issue disappears for a while and then comes back again. Here's the log I see while getting the error:</p> <pre><code> [2022-01-21 14:35:21 +0000] [93] [ERROR] Error handling request /authenticate/login Traceback (most recent call last): File &quot;/venv/lib/python3.8/site-packages/gunicorn/workers/gthread.py&quot;, line 271, in handle keepalive = self.handle_request(req, conn) File &quot;/venv/lib/python3.8/site-packages/gunicorn/workers/gthread.py&quot;, line 323, in handle_request respiter = self.wsgi(environ, resp.start_response) File &quot;/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 2464, in __call__ return self.wsgi_app(environ, start_response) File &quot;/pgadmin4/pgAdmin4.py&quot;, line 77, in __call__ return self.app(environ, start_response) File &quot;/venv/lib/python3.8/site-packages/werkzeug/middleware/proxy_fix.py&quot;, line 169, in __call__ return self.app(environ, start_response) File &quot;/venv/lib/python3.8/site-packages/flask_socketio/__init__.py&quot;, line 43, in __call__ return super(_SocketIOMiddleware, self).__call__(environ, File &quot;/venv/lib/python3.8/site-packages/engineio/middleware.py&quot;, line 74, in __call__ return self.wsgi_app(environ, start_response) File &quot;/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 2450, in wsgi_app response = self.handle_exception(e) File &quot;/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1867, in handle_exception reraise(exc_type, exc_value, tb) File &quot;/venv/lib/python3.8/site-packages/flask/_compat.py&quot;, line 39, in reraise raise value File &quot;/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 2447, in wsgi_app response = self.full_dispatch_request() File &quot;/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1953, in full_dispatch_request return self.finalize_request(rv) File &quot;/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1970, in finalize_request response = self.process_response(response) File &quot;/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 2269, in process_response self.session_interface.save_session(self, ctx.session, response) File &quot;/pgadmin4/pgadmin/utils/session.py&quot;, line 307, in save_session self.manager.put(session) File &quot;/pgadmin4/pgadmin/utils/session.py&quot;, line 166, in put self.parent.put(session) File &quot;/pgadmin4/pgadmin/utils/session.py&quot;, line 270, in put dump( _pickle.PicklingError: Can't pickle &lt;class 'wtforms.form.Meta'&gt;: attribute lookup Meta on wtforms.form failed </code></pre> <p>The issue appeared after enabling Oauth2 authentication. I've tried using different version but no luck.</p> <p>Pgadmin is running in Kubernetes.</p>
Vladimir Tarasov
<p>Please try setting PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION = False in the configuration file for your operating system, as mentioned <a href="https://www.pgadmin.org/docs/pgadmin4/6.13/config_py.html" rel="nofollow noreferrer">here</a>.</p>
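<p>Since pgAdmin is running in Kubernetes here, one way to apply that setting is through an environment variable on the pgAdmin container. A minimal sketch, assuming the official <code>dpage/pgadmin4</code> image (which maps <code>PGADMIN_CONFIG_*</code> environment variables onto config overrides):</p> <pre><code>containers:
  - name: pgadmin
    image: dpage/pgadmin4
    env:
      - name: PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION
        value: &quot;False&quot;
</code></pre>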
Yogesh Mahajan
<p>I'm trying to install a keycloak instance locally with minikube, OLM and keycloak-operator. Here is my config:</p> <pre><code> 1 apiVersion: k8s.keycloak.org/v2alpha1 2 kind: Keycloak 3 metadata: 4 name: example-keycloak 5 namespace: my-keycloak-operator 6 labels: 7 app: sso 8 spec: 9 instances: 1 10 image: bsctzz/dockerhub:groupaccess 11 hostname: 12 hostname: keycloak.local 13 ingress: 14 enabled: false 15 http: 16 httpEnabled: false 17 tlsSecret: root-secret </code></pre> <p>When I launch my config I have my instance that doesn't launch completely it blocks on the admin page is loads without limit.</p> <blockquote> <p>2023-08-04 14:44:17,894 INFO [org.keycloak.services] (main) KC-SERVICES0009: Added user 'admin' to realm 'master'</p> </blockquote> <p><a href="https://i.stack.imgur.com/R9wdJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R9wdJ.png" alt="enter image description here" /></a></p> <p>In the k8s container logs, I have these logs that are stuck at this stage, I don't know why.</p> <pre><code>2023-08-07 09:07:21,064 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, logging-gelf, micrometer, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, vertx] 2023-08-07 09:07:21,200 INFO [org.keycloak.services] (main) KC-SERVICES0009: Added user 'admin' to realm 'master' </code></pre> <p>Finally this the description of my pod.</p> <pre><code>Name: example-keycloak-0 Namespace: my-keycloak-operator Priority: 0 Node: minikube/192.168.49.2 Start Time: Mon, 07 Aug 2023 11:31:02 +0200 Labels: app=keycloak app.kubernetes.io/instance=example-keycloak app.kubernetes.io/managed-by=keycloak-operator controller-revision-hash=example-keycloak-dc5544cf9 statefulset.kubernetes.io/pod-name=example-keycloak-0 Annotations: &lt;none&gt; Status: Running IP: 10.244.1.232 IPs: IP: 10.244.1.232 Controlled By: StatefulSet/example-keycloak Containers: keycloak: Container ID: docker://6bf8d1dcc7df0db016904905d8a073430924f881caae50b0ce58b78c1b66f2a2 Image: bsctzz/dockerhub:groupaccess Image ID: docker-pullable://bsctzz/dockerhub@sha256:e3c3d4c99a26ed1b8fb54432194f939e0d86a87561bd949b14df22f745fe281c Ports: 8443/TCP, 8080/TCP Host Ports: 0/TCP, 0/TCP Args: start --optimized State: Running Started: Mon, 07 Aug 2023 11:31:05 +0200 Ready: False Restart Count: 0 Liveness: http-get https://:8443/health/live delay=20s timeout=1s period=2s #success=1 #failure=150 Readiness: http-get https://:8443/health/ready delay=20s timeout=1s period=2s #success=1 #failure=250 Environment: KC_HOSTNAME: localhost KC_HTTP_ENABLED: false KC_HTTP_PORT: 8080 KC_HTTPS_PORT: 8443 KC_HTTPS_CERTIFICATE_FILE: /mnt/certificates/tls.crt KC_HTTPS_CERTIFICATE_KEY_FILE: /mnt/certificates/tls.key KC_HEALTH_ENABLED: true KC_CACHE: ispn KC_CACHE_STACK: kubernetes KC_PROXY: passthrough KEYCLOAK_ADMIN: &lt;set to the key 'username' in secret 'example-keycloak-initial-admin'&gt; Optional: false KEYCLOAK_ADMIN_PASSWORD: &lt;set to the key 'password' in secret 'example-keycloak-initial-admin'&gt; Optional: false jgroups.dns.query: example-keycloak-discovery.my-keycloak-operator Mounts: /mnt/certificates from keycloak-tls-certificates (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwxnx (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: keycloak-tls-certificates: Type: Secret (a volume populated 
by a Secret) SecretName: root-secret Optional: false kube-api-access-fwxnx: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 43s default-scheduler Successfully assigned my-keycloak-operator/example-keycloak-0 to minikube Normal Pulling 43s kubelet Pulling image &quot;bsctzz/dockerhub:groupaccess&quot; Normal Pulled 42s kubelet Successfully pulled image &quot;bsctzz/dockerhub:groupaccess&quot; in 1.301906831s (1.301917466s including waiting) Normal Created 42s kubelet Created container keycloak Normal Started 41s kubelet Started container keycloak Warning Unhealthy 2s (x10 over 19s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404 Warning Unhealthy 2s (x10 over 19s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 404 </code></pre> <p>If anyone has any ideas, thank you in advance.</p>
Dyn amo
<p>What you have used is a fundamental example. This file provides an idea of how to assemble your yaml file. Your deployment has no database or any env in which keycloak needs to work. If you want to run the keycloak you need to add a lot more details via the value.yaml file or add env directly. You are missing Database info which should look like this,</p> <pre><code> spec: containers: - name: keycloak image: quay.io/keycloak/keycloak ports: - containerPort: 8080 name: http - containerPort: 8443 name: https env: - name: &quot;KC_DB&quot; value: &quot;POSTGRES&quot; - name: &quot;KC_DB_URL_HOST&quot; value: &quot;yourDBConnection&quot; - name: &quot;KC_DB_URL_PORT&quot; value: &quot;YourPort&quot; - name: &quot;KC_DB_URL_DATABASE&quot; value: &quot;yourDbName&quot; - name: KC_DB_USER value: yourUserName - name: KC_DB_PASSWORD value: yourPassword - name: KC_TRANSACTION_XA_ENABLED value: 'true' - name: KC_HEALTH_ENABLED value: 'true' - name: KC_METRICS_ENABLED value: 'true' </code></pre> <p>Also you need to add more env for your admin console and username and password. such as</p> <pre><code> - name: KC_USER value: user - name: KC_PASSWORD value: password - name: KC_TRANSACTION_XA_ENABLED value: 'true' - name: KC_PROXY value: edge - name: KC_HOSTNAME_URL value: anyhost.io - name: KC_HOSTNAME_ADMIN_URL value: https://anyhost.io/auth - name: KC_HOSTNAME_PORT value: '8443' - name: KEYCLOAK_FORCE_HTTPS value: 'true' - name: KC_HOSTNAME_STRICT value: 'true' - name: KC_LOG_LEVEL value: INFO </code></pre> <p>These settings depend upon your requirements. You can follow the <a href="https://www.keycloak.org/operator/installation" rel="nofollow noreferrer">office docs</a> for more info or this <a href="https://github.com/keycloak/keycloak-operator/tree/main/deploy" rel="nofollow noreferrer">link</a>. I hope this helps you</p>
tauqeerahmad24
<p>I'm trying to deploy an AWS Ingress Controller with the following manifest:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-srv-eks annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip spec: ingressClassName: alb rules: - http: paths: - path: /path1 pathType: Prefix backend: service: name: path1 port: number: 4001 - path: /path2 pathType: Prefix backend: service: name: path2 port: number: 4002 </code></pre> <p>And the following command to apply it:</p> <pre><code>kubectl apply -f infra/k8s/ingress-srv-eks.yaml </code></pre> <p>It says the resource is created. But when I describe the ingress, it says 'Failed build model due to ingress: NoCredentialProviders: no valid providers in chain':</p> <pre><code>#kubectl describe ingress ingress-srv-eks Name: ingress-srv-eks Labels: &lt;none&gt; Namespace: default Address: Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- * /path1 path1:4001 () Annotations: alb.ingress.kubernetes.io/healthcheck-path: /healhtz alb.ingress.kubernetes.io/listen-ports: [{&quot;HTTP&quot;: 80},{&quot;HTTPS&quot;: 443}] alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedBuildModel 7m30s (x2 over 7m53s) ingress Failed build model due to ingress: default/ingress-srv-eks: NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors </code></pre> <p>Below is the helm command via I installed the LB:</p> <pre><code>helm install -n kube-system aws-load-balancer-controller eks/aws-load-balancer-controller --set clusterName=sample-cluster --set serviceAccount.create=true --set serviceAccount.name=aws-load-balancer-controller --set region=ap-south-1 --set vpcId=vpc-06e0620658310dbfb </code></pre> <p>Tried adding AWS creds as env variables, but that too isn't working.</p> <p>Because of this, the 'Address' field for the Ingress is blank(when doing <code>kubectl get ingress</code>) and I'm unable to get a URL for testing my cluster.</p> <p>Can anyone please help me on fixing this?</p>
Mahesh
<p>Maybe you could check the correct namespace, here it illustrates 'default' which seems not correct.</p> <p>If you are using the script that refers to <a href="https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/examples/2048/2048_full.yaml" rel="nofollow noreferrer">2048_full.yaml</a> which is coming from the AWS EKS guideline official website.</p> <p>First, ensure that the namespaces of the following three types are all set to 'kube-system'.</p> <pre><code>Deployment Service Ingress </code></pre> <p>And see what happens.</p> <p>If all relevant namespaces set to &quot;Kube-system&quot;, but still not working. Try to check your IAM role instead of service account, it should be detected automatically. If not try to use this &quot;--auto-discover-default-role&quot;.</p> <pre><code>eksctl create iamserviceaccount \ --region region-code \ --name alb-ingress-controller \ --namespace kube-system \ --cluster prod \ --attach-policy-arn arn:aws:iam::XXXXXX:policy/ALBIngressControllerIAMPolicy \ --override-existing-serviceaccounts \ --approve </code></pre> <p>For installing the aws-lb-controller, here are some additional details.</p> <pre><code>helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=&lt;cluster-name&gt; --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller </code></pre> <p>Install the AWS Load Balancer Controller. If you're deploying the controller to Amazon EC2 nodes that have restricted access to the Amazon EC2 instance metadata service (IMDS), or if you're deploying to Fargate, then add the following flags to the helm command that follows:</p> <pre><code>--set region=region-code --set vpcId=vpc-xxxxxxxx </code></pre>
xPetersue
<p>Is there a way to set quotas for directories inside each bucket in a MinIO server and monitor the size and quota with the API of each directory in the bucket?</p>
parham
<p>I have found <a href="https://docs.min.io/minio/baremetal/reference/minio-mc-admin/mc-admin-bucket-quota.html#mc-admin-bucket-quota" rel="nofollow noreferrer">this documentation</a> about bucket quota, but unfortunately it is just for buckets.</p> <blockquote> <p>The <a href="https://docs.min.io/minio/baremetal/reference/minio-mc-admin/mc-admin-bucket-quota.html#command-mc.admin.bucket.quota" rel="nofollow noreferrer" title="mc.admin.bucket.quota"><code>mc admin bucket quota</code></a> command manages per-bucket storage quotas.</p> </blockquote> <p><strong>NOTE</strong>:</p> <blockquote> <p>MinIO does not support using <a href="https://docs.min.io/minio/baremetal/reference/minio-mc-admin.html#command-mc.admin" rel="nofollow noreferrer" title="mc.admin"><code>mc admin</code></a> commands with other S3-compatible services, regardless of their claimed compatibility with MinIO deployments.</p> </blockquote> <hr /> <p>Using following command you can get usage info:</p> <pre class="lang-sh prettyprint-override"><code>mc du </code></pre> <hr /> <p>See also <a href="https://docs.min.io/docs/minio-admin-complete-guide.html" rel="nofollow noreferrer">this doc</a>.</p>
kkopczak
<p>Multi-trust deployment model from <a href="https://istio.io/latest/docs/ops/deployment/deployment-models/#trust-between-meshes" rel="nofollow noreferrer">istio documentation</a></p> <p><a href="https://i.stack.imgur.com/0BX41.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0BX41.jpg" alt="multi trust diagram" /></a></p> <p>I want to connect multiple meshes together. I currently manage 3 different AKS clusters</p> <ul> <li>Operations (aks-ops-euwest-1)</li> <li>Staging (aks-stg-euwest-1)</li> <li>Production (aks-prod-euwest-1)</li> </ul> <p>I have Hashicorp Vault running on Operations, I’d like to be able to reach eg. Postgres that’s running in Staging and Production using istio mTLS (for automatic secret rotation).</p> <p>Each of the clusters are running istio (Multi-Primary) in different networks. Each cluster has a different ClusterName, MeshID, TrustDomain and NetworkID. Though, the <code>cacerts</code> secrets are configured with a <strong>common Root CA</strong></p> <p><a href="https://i.stack.imgur.com/GKBAL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GKBAL.png" alt="diagram" /></a></p> <p>From the <a href="https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/" rel="nofollow noreferrer">istio documentation</a>, to enable cross-cluster communication, a special <code>eastwestgateway</code> has to be deployed. The tlsMode is <code>AUTO_PASSTHROUGH</code>.</p> <p>These are the environment variables for the eastwestgateway</p> <pre><code># sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode - name: ISTIO_META_ROUTER_MODE value: &quot;sni-dnat&quot; # traffic through this gateway should be routed inside the network - name: ISTIO_META_REQUESTED_NETWORK_VIEW value: aks-ops-euwest-1 </code></pre> <p><strong>I don’t want to enable automatic service discovery by sharing secrets between clusters.</strong> Why? Because I want a fine-grained control over which services are going to be exposed between the meshes. I want to be able to specify <code>AuthorizationPolicies</code> that point to serviceaccounts from the remote clusters (because different trust domains)</p> <p>Eg:</p> <pre><code># production cluster kind: AuthorizationPolicy spec: selector: matchLabels: app: postgres rules: - from: source: - principal: spiffe://operations-cluster/ns/vault/sa/vault </code></pre> <p>This is from the <a href="https://istio.io/latest/docs/ops/deployment/deployment-models/?_ga=2.106416323.1124150883.1618461826-1888041227.1611307002#endpoint-discovery-with-multiple-control-planes" rel="nofollow noreferrer">istio documentation</a></p> <blockquote> <p>In some advanced scenarios, load balancing across clusters may not be desired. For example, in a blue/green deployment, you may deploy different versions of the system to different clusters. In this case, each cluster is effectively operating as an independent mesh. This behavior can be achieved in a couple of ways:</p> <ul> <li><strong>Do not exchange remote secrets between the clusters. This offers the strongest isolation between the clusters.</strong></li> <li>Use VirtualService and DestinationRule to disallow routing between two versions of the services.</li> </ul> </blockquote> <p>What the istio documentation doesn't specify, is how to enable cross-cluster communication in the case where secrets are not shared. When sharing secrets, istiod will create additional envoy configuration that will allow pods to transparently communicate through the eastwestgateway. 
What it doesn't specify, is how to create those configurations manually when not sharing secrets.</p> <p>The tlsMode is <code>AUTO_PASSTHROUGH</code>. Looking at the istio repository</p> <pre><code> // Similar to the passthrough mode, except servers with this TLS // mode do not require an associated VirtualService to map from // the SNI value to service in the registry. The destination // details such as the service/subset/port are encoded in the // SNI value. The proxy will forward to the upstream (Envoy) // cluster (a group of endpoints) specified by the SNI // value. This server is typically used to provide connectivity // between services in disparate L3 networks that otherwise do // not have direct connectivity between their respective // endpoints. Use of this mode assumes that both the source and // the destination are using Istio mTLS to secure traffic. // In order for this mode to be enabled, the gateway deployment // must be configured with the `ISTIO_META_ROUTER_MODE=sni-dnat` // environment variable. </code></pre> <p>The interesting part is <code>The destination details such as the service/subset/port are encoded in the SNI value</code>.</p> <p>It seems that when sharing secrets between clusters, istio will add envoy configurations that will effectively encode these service/subset/port into the SNI value of an envoy cluster. Though, when secrets are not shared, how can we acheive the same result?</p> <p>I've looked at <a href="https://github.com/istio-ecosystem/multi-mesh-examples" rel="nofollow noreferrer">this repository</a>, but it is outdated and does not make use of the <code>eastwestgateway</code>.</p> <p>I've also posted questions on the istio forum <a href="https://discuss.istio.io/t/mesh-federation-without-automatic-service-discovery/10276" rel="nofollow noreferrer">here</a> and <a href="https://discuss.istio.io/t/demistify-mesh-federation-multi-cluster-communication-without-automatic-endpoint-discovery/10179" rel="nofollow noreferrer">here</a>, but it's difficult to get help from there.</p>
Ludovic C
<p>In my company we're working on a concept of a service mesh federation, based on the next points:</p> <ul> <li>Use a private CA to issue certificates for the &quot;client&quot; k8s cluster and a &quot;provider&quot; k8s cluster. Those certificates does not have to be used during the istio install. So it's possible to use any suitable certificates to install istio, and then generate the new certificates from a common private CA to be used specifically with the mesh federation (to be installed on ingress and egress gateways).</li> <li>Setup mtls ingress for the &quot;provider&quot; k8s cluster, as described here: <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/#configure-a-mutual-tls-ingress-gateway" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/#configure-a-mutual-tls-ingress-gateway</a>, use a certificate from the common private CA to setup the gateway.</li> <li>Setup mtls egress origination for the &quot;client&quot; k8s cluster, as described here: <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#deploy-a-mutual-tls-server" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#deploy-a-mutual-tls-server</a>, use a certificate from the common private CA to setup the gateway.</li> <li>Use an external database/registry to store the service-to-service &quot;allowed&quot; connection entries.</li> <li>Use an external UI and API to let &quot;clients&quot; to request a new service-to-service connection and let &quot;providers&quot; to approve such a request. After the approval the service-to-service record in the database marked as &quot;approved&quot;. (Assuming &quot;client&quot; and &quot;provider&quot; clusters owned and operated by different teams.)</li> <li>Use some external automation at the &quot;client&quot; K8s cluster to retrieve the &quot;approved&quot; record(s) from the database and setup respectively the istio objects in that cluster (Service entry, egress gateway, virtual services, destination rules).</li> <li>Use some external automation at the &quot;provider&quot; K8s cluster to retrieve the &quot;approved&quot; record(s) from the database and setup respectively the istio objects in that cluster (ingress gateway, virtual service).</li> <li>Use Authorization Policy to explicitly allow only the specific clients/certificates to connect to ingress at &quot;provider&quot; k8s cluster, and disallow any other certificates (particularly, the other certificates issued by the same private CA) - described here <a href="https://my.f5.com/manage/s/article/K21084547" rel="nofollow noreferrer">https://my.f5.com/manage/s/article/K21084547</a></li> </ul> <p>The configuration might need to be adjusted when we add more &quot;providers&quot; and &quot;clients&quot; to the federation.</p> <p>As a result, we get totally separate clusters and separate meshes, that can be operated by different teams, and we let the services (deployed to the clusters) to connect to services in other clusters using the certificates from the common private CA, and we call it &quot;mesh federation&quot;. Each cluster and each component can be operated separately, updated independently. We don't need to cross-create any secrets in the clusters, as suggested in the istio multi-cluster tutorial. 
Only a limited number of services exposed or consumed in each cluster, so there is almost no impact on performance because of the mesh federation setup. &quot;Client&quot; services in the clusters can connect to the &quot;provider&quot; services using http, then the request is intercepted by istio and sent to the respective httpS ingress host (i.e. the &quot;provider&quot; cluster), and mtls certificated is included to the request automatically. No need to distribute certificates to client services as they are added on egress gateways. Considering we can keep the &quot;permissive&quot; istio authentication configuration, the &quot;client&quot; services can continue communicating with the rest of the services in the cluster just as before, so adding the new services to the mesh federation should be easy to do, as we would not need to change any interconnection configuration on client or provider side.</p>
Oleg Gurov
<p>I submitted a Spark Job through Airflow with <code>KubernetesPodOperator</code> as the code below; the driver pod is created, but the executor pod keeps being created and deleted over and over.</p> <pre class="lang-py prettyprint-override"><code>spark_submit = KubernetesPodOperator( task_id='test_spark_k8s_submit', name='test_spark_k8s_submit', namespace='dev-spark', image='docker.io/vinhlq9/bitnami-spark-3.3', cmds=['/opt/spark/bin/spark-submit'], arguments=[ '--master', k8s_url, '--deploy-mode', 'cluster', '--name', 'spark-job', '--conf', 'spark.kubernetes.namespace=dev-spark', '--conf', 'spark.kubernetes.container.image=docker.io/vinhlq9/bitnami-spark-3.3', '--conf', 'spark.kubernetes.authenticate.driver.serviceAccountName=spark-user', '--conf', 'spark.kubernetes.authenticate.executor.serviceAccountName=spark-user', '--conf', 'spark.kubernetes.driverEnv.SPARK_CONF_DIR=/opt/bitnami/spark/conf', '--conf', 'spark.kubernetes.driverEnv.SPARK_CONFIG_MAP=spark-config', '--conf', 'spark.kubernetes.file.upload.path=/opt/spark', '--conf', 'spark.kubernetes.driver.annotation.sidecar.istio.io/inject=false', '--conf', 'spark.kubernetes.executor.annotation.sidecar.istio.io/inject=false', '--conf', 'spark.eventLog.enabled=true ', '--conf', 'spark.eventLog.dir=oss://spark/spark-log/', '--conf', 'spark.hadoop.fs.oss.accessKeyId=' + spark_user_access_key , '--conf', 'spark.hadoop.fs.oss.accessKeySecret=' + spark_user_secret_key, '--conf', 'spark.hadoop.fs.oss.endpoint=' + spark_user_endpoint, '--conf', 'spark.hadoop.fs.oss.impl=org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem', '--conf', 'spark.executor.instances=1', '--conf', 'spark.executor.memory=4g', '--conf', 'spark.executor.cores=2', '--conf', 'spark.driver.memory=2g', 'oss://spark/job/test_spark_k8s_job_simple.py' ], is_delete_operator_pod=True, config_file='/opt/airflow/plugins/k8sconfig-spark-user.json', get_logs=True, dag=dag ) </code></pre> <p>And the logs in the driver pod:</p> <pre class="lang-bash prettyprint-override"><code>spark 08:40:12.26 spark 08:40:12.26 Welcome to the Bitnami spark container spark 08:40:12.27 Subscribe to project updates by watching https://github.com/bitnami/containers spark 08:40:12.27 Submit issues and feature requests at https://github.com/bitnami/containers/issues spark 08:40:12.27 23/05/16 08:40:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 23/05/16 08:40:16 INFO SparkContext: Running Spark version 3.3.2 23/05/16 08:40:16 INFO ResourceUtils: ============================================================== 23/05/16 08:40:16 INFO ResourceUtils: No custom resources configured for spark.driver. 
23/05/16 08:40:16 INFO ResourceUtils: ============================================================== 23/05/16 08:40:16 INFO SparkContext: Submitted application: spark-read-csv 23/05/16 08:40:16 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -&gt; name: cores, amount: 2, script: , vendor: , memory -&gt; name: memory, amount: 4096, script: , vendor: , offHeap -&gt; name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -&gt; name: cpus, amount: 1.0) 23/05/16 08:40:16 INFO ResourceProfile: Limiting resource is cpus at 2 tasks per executor 23/05/16 08:40:16 INFO ResourceProfileManager: Added ResourceProfile id: 0 23/05/16 08:40:16 INFO SecurityManager: Changing view acls to: spark,root 23/05/16 08:40:16 INFO SecurityManager: Changing modify acls to: spark,root 23/05/16 08:40:16 INFO SecurityManager: Changing view acls groups to: 23/05/16 08:40:16 INFO SecurityManager: Changing modify acls groups to: 23/05/16 08:40:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark, root); groups with view permissions: Set(); users with modify permissions: Set(spark, root); groups with modify permissions: Set() 23/05/16 08:40:16 INFO Utils: Successfully started service 'sparkDriver' on port 7078. 23/05/16 08:40:16 INFO SparkEnv: Registering MapOutputTracker 23/05/16 08:40:16 INFO SparkEnv: Registering BlockManagerMaster 23/05/16 08:40:16 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 23/05/16 08:40:16 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 23/05/16 08:40:16 INFO SparkEnv: Registering BlockManagerMasterHeartbeat 23/05/16 08:40:16 INFO DiskBlockManager: Created local directory at /var/data/spark-77a2ee41-2c8e-45c6-9df6-bb1f549d4566/blockmgr-5350fab4-8dd7-432e-80b3-fbc1924f0dea 23/05/16 08:40:16 INFO MemoryStore: MemoryStore started with capacity 912.3 MiB 23/05/16 08:40:16 INFO SparkEnv: Registering OutputCommitCoordinator 23/05/16 08:40:16 INFO Utils: Successfully started service 'SparkUI' on port 4040. 23/05/16 08:40:16 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file 23/05/16 08:40:18 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 2147483647. 23/05/16 08:40:18 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/bitnami/spark/conf) : spark-env.sh 23/05/16 08:40:18 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/bitnami/spark/conf) : spark-env.sh 23/05/16 08:40:18 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script 23/05/16 08:40:18 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079. 
23/05/16 08:40:18 INFO NettyBlockTransferService: Server created on spark-job-84e1f08823b7833d-driver-svc.dev-spark.svc:7079 23/05/16 08:40:18 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 23/05/16 08:40:18 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, spark-job-84e1f08823b7833d-driver-svc.dev-spark.svc, 7079, None) 23/05/16 08:40:18 INFO BlockManagerMasterEndpoint: Registering block manager spark-job-84e1f08823b7833d-driver-svc.dev-spark.svc:7079 with 912.3 MiB RAM, BlockManagerId(driver, spark-job-84e1f08823b7833d-driver-svc.dev-spark.svc, 7079, None) 23/05/16 08:40:18 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, spark-job-84e1f08823b7833d-driver-svc.dev-spark.svc, 7079, None) 23/05/16 08:40:18 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, spark-job-84e1f08823b7833d-driver-svc.baseline-dev-spark.svc, 7079, None) 23/05/16 08:40:18 INFO SingleEventLogFileWriter: Logging events to oss://spark/spark-log/spark-f6f3a41be773442dbc9a30781dffbc11.inprogress 23/05/16 08:40:21 INFO BlockManagerMaster: Removal of executor 1 requested 23/05/16 08:40:21 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 1 23/05/16 08:40:21 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster. 23/05/16 08:40:21 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 2147483647. 23/05/16 08:40:21 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/bitnami/spark/conf) : spark-env.sh 23/05/16 08:40:21 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script 23/05/16 08:40:24 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 2147483647. </code></pre> <p>The loop in the executor pod:</p> <pre class="lang-bash prettyprint-override"><code>23/05/16 08:40:25 INFO BlockManagerMaster: Removal of executor 2 requested 23/05/16 08:40:25 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster. 23/05/16 08:40:25 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asked to remove non-existent executor 2 23/05/16 08:40:27 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, sharedSlotFromPendingPods: 2147483647. 23/05/16 08:40:27 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/bitnami/spark/conf) : spark-env.sh </code></pre> <p>Has anyone encountered this before? Would be great to get an idea about this.</p>
Vinh Lai
<p>I already fixed this issue; it was caused by the Java version in the Spark image.</p>
Vinh Lai
<p>I'm trying to add preexisting volume to one of my deployment to use persistent data to jenkins. I'm using hetzner cloud as cloud provider and also using sci drivers to point the preexisting volume. But I'm getting below error,</p> <p><a href="https://i.stack.imgur.com/whS0b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/whS0b.png" alt="enter image description here" /></a></p> <p>this is my volume.yml file</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-pv spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: jenkins-pvc namespace: development csi: driver: csi.hetzner.cloud fsType: ext4 volumeHandle: &quot;111111&quot; readOnly: false nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: csi.hetzner.cloud/location operator: In values: - hel1 persistentVolumeReclaimPolicy: Retain storageClassName: hcloud-volumes volumeMode: Filesystem </code></pre> <p>this is my deployment file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: jenkins namespace: development spec: replicas: 1 selector: matchLabels: app: jenkins-server template: metadata: labels: app: jenkins-server spec: imagePullSecrets: - name: regcred securityContext: allowPrivilegeEscalation: true privileged: true readOnlyRootFilesystem: false runAsUser: 0 serviceAccountName: jenkins containers: - name: jenkins image: jenkins/jenkins:lts-jdk11 resources: limits: memory: &quot;2Gi&quot; cpu: &quot;1000m&quot; requests: memory: &quot;500Mi&quot; cpu: &quot;500m&quot; ports: - name: httpport containerPort: 8080 - name: jnlpport containerPort: 50000 livenessProbe: httpGet: path: &quot;/login&quot; port: 8080 initialDelaySeconds: 90 periodSeconds: 10 timeoutSeconds: 5 failureThreshold: 5 readinessProbe: httpGet: path: &quot;/login&quot; port: 8080 initialDelaySeconds: 60 periodSeconds: 10 timeoutSeconds: 5 failureThreshold: 3 volumeMounts: - name: jenkins-pv mountPath: /var/jenkins_home volumes: - name: jenkins-pv persistentVolumeClaim: claimName: jenkins-pv </code></pre> <p>is there any way to fix this?</p>
Hasitha Chandula
<p>I have found <a href="https://stackoverflow.com/questions/53238832/persistentvolumeclaim-jenkins-volume-claim-not-found">this similar question</a>.</p> <p>Your error:</p> <pre class="lang-yaml prettyprint-override"><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------ Warning FailedScheduling 18s (x8 over 5m26s) default-scheduler 0/4 nodes are available: 4 persistentvolumeclaim &quot;jenkins-pv&quot; not found persistentvolumeclaim &quot;jenkins-volume-claim&quot; not found. </code></pre> <p>says that you're missing <code>PersistentVolumeClaim</code> named <code>jenkins-pv</code>.</p> <p>Here is an example how to create one:</p> <pre class="lang-yaml prettyprint-override"><code>kubectl -n &lt;namespace&gt; create -f - &lt;&lt;EOF apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-pv spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi EOF </code></pre> <p>In case you have more that one PV available, you should use selector(s). In <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector" rel="nofollow noreferrer">this documentation</a> one can find how to do so. Using this the claim will bind to the desired pre-created PV with proper capacity use selector.</p> <p>See also <a href="https://stackoverflow.com/questions/51060027/persistentvolumeclaim-not-found-in-kubernetes">this</a> and <a href="https://stackoverflow.com/questions/53874569/kubernetes-persistent-volume-mount-not-found">this</a> questions.</p>
kkopczak
<p>What is the common way to deploy a chart(which contains sealed secrets) to different clusters?</p> <p>Because the clusters contains different sealed-secret controller (with different secret key) it seems unfeasible.</p> <p>Or is there any way to install a sealed-secret controller with the same secret key as the other clusters?</p>
beatrice
<p><strong>Sealed Secrets</strong> can be created by anyone but can only be decrypted by the controller of the cluster they were sealed for; follow this <a href="https://blog.knoldus.com/introduction-to-sealed-secrets-in-kubernetes" rel="nofollow noreferrer"><strong>blog</strong></a> authored by Vidushi Bansal for more information. So in order to use <strong>sealed secrets</strong> for multiple Kubernetes clusters you need to maintain separate Helm directories or repositories and set the target clusters as described in this <a href="https://stackoverflow.com/questions/63485239/connect-from-helm-to-kubernetes-cluster"><strong>solution</strong></a> provided by <a href="https://stackoverflow.com/users/13968097/seshadri-c">seshadri_c</a>. Since these sealed secrets each have their respective target cluster, it will be easy for you to manage multiple clusters.</p>
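<p>For illustration, a common pattern (a sketch; the file and context names are placeholders): keep one plain Secret manifest as the source and seal it once per target cluster with that cluster's own public certificate, committing the resulting per-cluster SealedSecrets alongside the chart:</p> <pre><code># with kubectl pointed at each cluster in turn, fetch its public sealing certificate
kubeseal --fetch-cert &gt; prod-cert.pem        # while the prod context is active
kubeseal --fetch-cert &gt; staging-cert.pem     # while the staging context is active

# seal the same Secret separately for each cluster
kubeseal --format yaml --cert prod-cert.pem &lt; secret.yaml &gt; sealed-secret-prod.yaml
kubeseal --format yaml --cert staging-cert.pem &lt; secret.yaml &gt; sealed-secret-staging.yaml
</code></pre>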
Kranthiveer Dontineni
<p>I made a service account that bound to clusterRole.</p> <p>Here is the clusterRole</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: devops-tools-role namespace: devops-tools rules: - apiGroups: - &quot;&quot; - apps - autoscaling - batch - extensions - policy - rbac.authorization.k8s.io - networking.k8s.io resources: - pods - componentstatuses - configmaps - daemonsets - deployments - events - endpoints - horizontalpodautoscalers - ingress - ingresses - jobs - limitranges - namespaces - nodes - pods - persistentvolumes - persistentvolumeclaims - resourcequotas - replicasets - replicationcontrollers - serviceaccounts - services verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;delete&quot;] </code></pre> <p>I try to read logs from a pod</p> <pre><code>kubectl -n dfg02 logs postgres-69c7bb5cf7-dstzt </code></pre> <p>, and got :</p> <pre><code>Error from server (Forbidden): pods &quot;postgres-69c7bb5cf7-dstzt&quot; is forbidden: User &quot;system:serviceaccount:devops-tools:bino&quot; cannot get resource &quot;pods/log&quot; in API group &quot;&quot; in the namespace &quot;dfg02&quot; </code></pre> <p>So I switch to 'admin' account anda try to find which resource to add to the cluster role</p> <pre><code> ✘ bino@corobalap  ~/gitjece  kubectl config use-context k0s-cluster Switched to context &quot;k0s-cluster&quot;. bino@corobalap  ~/gitjece  kubectl api-resources |grep log </code></pre> <p>and got nothing.</p> <p>My question is how to add 'logs read rights' to a ClusterRole.</p> <p>Sincerely<br /> -bino-</p>
Bino Oetomo
<p>Logs are a sub-resource of Pods, so just specifying <code>pods</code> in the resources section isn't enough.</p>
<p>Simply add the following to your yaml and it should work:</p>
<pre><code>resources:
  - pods
  - pods/log
</code></pre>
<p>PS: You've specified <code>pods</code> twice in your <code>resources</code> section. It doesn't hurt anything, but I just wanted to point it out.</p>
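<p>To verify the change without switching contexts back and forth, you can ask the API server directly (this just re-uses the names from your question):</p>
<pre><code># should print &quot;yes&quot; once pods/log is in the ClusterRole
kubectl auth can-i get pods/log \
  --as=system:serviceaccount:devops-tools:bino -n dfg02
</code></pre>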
Mike
<p>Within my AWS EKS cluster provisioning an AWS application load balancer using annotations on the Ingress object. Additionally an unnecessary classic load balancer is being provisioned. Any ideas or best practice on how to prevent this?</p> <pre><code>resource &quot;kubernetes_service&quot; &quot;api&quot; { metadata { name = &quot;${var.project_prefix}-api-service&quot; } spec { selector = { app = &quot;${var.project_prefix}-api&quot; } port { name = &quot;http&quot; port = 80 target_port = 1337 } port { name = &quot;https&quot; port = 443 target_port = 1337 } type = &quot;LoadBalancer&quot; } } resource &quot;kubernetes_ingress&quot; &quot;api&quot; { wait_for_load_balancer = true metadata { name = &quot;${var.project_prefix}-api&quot; annotations = { &quot;kubernetes.io/ingress.class&quot; = &quot;alb&quot; &quot;alb.ingress.kubernetes.io/scheme&quot; = &quot;internet-facing&quot; &quot;alb.ingress.kubernetes.io/target-type&quot; = &quot;instance&quot; &quot;alb.ingress.kubernetes.io/certificate-arn&quot; = local.api-certificate_arn &quot;alb.ingress.kubernetes.io/load-balancer-name&quot; = &quot;${var.project_prefix}-api&quot; &quot;alb.ingress.kubernetes.io/listen-ports&quot; = &quot;[{\&quot;HTTP\&quot;: 80}, {\&quot;HTTPS\&quot;:443}]&quot; &quot;alb.ingress.kubernetes.io/actions.ssl-redirect&quot; = &quot;{\&quot;Type\&quot;: \&quot;redirect\&quot;, \&quot;RedirectConfig\&quot;: { \&quot;Protocol\&quot;: \&quot;HTTPS\&quot;, \&quot;Port\&quot;: \&quot;443\&quot;, \&quot;StatusCode\&quot;: \&quot;HTTP_301\&quot;}}&quot; } } spec { backend { service_name = kubernetes_service.api.metadata.0.name service_port = 80 } rule { http { path { path = &quot;/*&quot; backend { service_name = &quot;ssl-redirect&quot; service_port = &quot;use-annotation&quot; } } } } } } </code></pre>
florianmaxim
<p>Your <code>LoadBalancer</code> service is what provisions the classic load balancer; if you only need the application load balancer, it is unnecessary.</p>
<pre><code>resource &quot;kubernetes_service&quot; &quot;api&quot; {
  metadata {
    name = &quot;${var.project_prefix}-api-service&quot;
  }
  spec {
    selector = {
      app = &quot;${var.project_prefix}-api&quot;
    }
    port {
      name        = &quot;http&quot;
      port        = 80
      target_port = 1337
    }
    port {
      name        = &quot;https&quot;
      port        = 443
      target_port = 1337
    }
    type = &quot;ClusterIP&quot; # See comments below
  }
}

resource &quot;kubernetes_ingress&quot; &quot;api&quot; {
  wait_for_load_balancer = true
  metadata {
    name = &quot;${var.project_prefix}-api&quot;
    annotations = {
      &quot;kubernetes.io/ingress.class&quot;                    = &quot;alb&quot;
      &quot;alb.ingress.kubernetes.io/target-type&quot;          = &quot;ip&quot; # or &quot;instance&quot;, see comments below
      &quot;alb.ingress.kubernetes.io/scheme&quot;               = &quot;internet-facing&quot;
      &quot;alb.ingress.kubernetes.io/certificate-arn&quot;      = local.api-certificate_arn
      &quot;alb.ingress.kubernetes.io/load-balancer-name&quot;   = &quot;${var.project_prefix}-api&quot;
      &quot;alb.ingress.kubernetes.io/listen-ports&quot;         = &quot;[{\&quot;HTTP\&quot;: 80}, {\&quot;HTTPS\&quot;:443}]&quot;
      &quot;alb.ingress.kubernetes.io/actions.ssl-redirect&quot; = &quot;{\&quot;Type\&quot;: \&quot;redirect\&quot;, \&quot;RedirectConfig\&quot;: { \&quot;Protocol\&quot;: \&quot;HTTPS\&quot;, \&quot;Port\&quot;: \&quot;443\&quot;, \&quot;StatusCode\&quot;: \&quot;HTTP_301\&quot;}}&quot;
    }
  }
  spec {
    backend {
      service_name = kubernetes_service.api.metadata.0.name
      service_port = 80
    }
    rule {
      http {
        path {
          path = &quot;/*&quot;
          backend {
            service_name = &quot;ssl-redirect&quot;
            service_port = &quot;use-annotation&quot;
          }
        }
      }
    }
  }
}
</code></pre>
<h2>Traffic Modes</h2>
<p>Depending on your cluster and networking setup, you might be able to use the <code>ip</code> target type, where the load balancer communicates directly with Kubernetes pods via their IPs (so <code>ClusterIP</code> service types are fine) if you have a CNI configuration, or use <code>instance</code> in conjunction with <code>NodePort</code> service types when the load balancer cannot directly reach the pod IPs. Some relevant links below:</p>
<p><a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#target-type" rel="nofollow noreferrer">ALB Target Types</a></p>
<p><a href="https://github.com/aws/amazon-vpc-cni-k8s" rel="nofollow noreferrer">VPC CNI EKS Plugin</a></p>
<h2>Load Balancer Types</h2>
<p>Some relevant links regarding Kubernetes load balancing and EKS load balancers. Note that Ingress resources are layer 7 and load-balanced Service resources are layer 4, hence ALBs are deployed for EKS Ingress resources and NLBs for load-balanced Service resources:</p>
<p><a href="https://rancher.com/docs/rancher/v2.5/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/" rel="nofollow noreferrer">Rancher Kubernetes Load Balancers</a></p>
<p><a href="https://aws.amazon.com/elasticloadbalancing/features/" rel="nofollow noreferrer">AWS Load Balancer Comparison</a></p>
clarj
<p>I am using clickhouse database and data are stored at <code>/media/user/data/clickhouse</code> and <code>/media/user/data/clickhouse-server</code>. When I run a docker container</p> <pre><code>$ docker run \ --name local-clickhouse \ --ulimit nofile=262144:262144 \ -u 1000:1000 \ -p 8123:8123 \ -p 9000:9000 \ -p 9009:9009 \ -v /media/user/data/clickhouse:/var/lib/clickhouse \ -v /media/user/data/clickhouse-server:/var/log/clickhouse-server \ -dit clickhouse/clickhouse-server </code></pre> <p>I see the data and everything is fine. I am trying to run this in a pod using minikube with following persistent volume configs:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: host-pv-clickhouse spec: capacity: storage: 4000Gi volumeMode: Filesystem storageClassName: standard accessModes: - ReadWriteOnce hostPath: path: /media/user/data/clickhouse type: DirectoryOrCreate </code></pre> <p>and</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: host-pv-clickhouse-server spec: capacity: storage: 4000Gi volumeMode: Filesystem storageClassName: standard accessModes: - ReadWriteOnce hostPath: path: /media/user/data/clickhouse-server type: DirectoryOrCreate </code></pre> <p>Additionally, I also have persistent volume claims:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: host-pvc-clickhouse-server spec: volumeName: host-pv-clickhouse-server storageClassName: standard accessModes: - ReadWriteOnce resources: requests: storage: 2000Gi </code></pre> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: host-pvc-clickhouse spec: volumeName: host-pv-clickhouse storageClassName: standard accessModes: - ReadWriteOnce resources: requests: storage: 2000Gi </code></pre> <p>and finally service and deployment:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: clickhouse spec: type: NodePort ports: - protocol: TCP name: tcp port: 9000 targetPort: 9000 nodePort: 30003 - protocol: TCP name: http port: 8123 targetPort: 8123 nodePort: 30004 - protocol: TCP name: interncomm port: 9009 targetPort: 9009 nodePort: 30005 selector: app: clickhouse --- apiVersion: apps/v1 kind: Deployment metadata: name: clickhouse labels: app: clickhouse spec: replicas: 1 selector: matchLabels: app: clickhouse template: metadata: labels: app: clickhouse spec: containers: - name: clickhouse image: clickhouse/clickhouse-server:latest ports: - containerPort: 8123 name: http - containerPort: 9000 name: tcp - containerPort: 9009 name: interncomm volumeMounts: - name: clickhouse-volume mountPath: /var/lib/clickhouse - name: clickhouse-server-volume mountPath: /var/log/clickhouse-server volumes: - name: clickhouse-volume persistentVolumeClaim: claimName: host-pvc-clickhouse - name: clickhouse-server-volume persistentVolumeClaim: claimName: host-pvc-clickhouse-server </code></pre> <p>When I run <code>kubectl apply -f chdb_node.yaml</code> it works and I can access the database via clickhouse's web gui. However, the data aren't there.</p> <p>Any suggestions to how to fix this?</p>
MoneyBall
<p>Have you had a chance to go through this official kubernetes <strong><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">tutorial</a></strong>? Also note that you are using <code>type: DirectoryOrCreate</code>, which silently creates an empty directory if the original directory is not available — which would explain why the database comes up without your data.</p>
<p>Hence it is suggested to check whether the directories you are trying to mount into the pod already exist on the node, using the commands below:</p>
<pre><code>ls -lah /media/user/data/clickhouse
ls -lah /media/user/data/clickhouse-server
</code></pre>
<p>These commands also show whether there is any data in those directories. If data is not available there, copy it from your docker container or from the source directory into these directories and follow the official tutorial for mounting your persistent volumes.</p>
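<p>One extra caveat when using minikube: a <code>hostPath</code> volume points at the filesystem of the minikube node (the VM or container created by your driver), not at your workstation, so the paths may genuinely be empty there. A quick way to check, and to expose your local data to the node if needed — this is only a sketch, since whether the path is already visible depends on the minikube driver you use:</p>
<pre><code># Check what the minikube node actually sees
minikube ssh -- ls -lah /media/user/data/clickhouse

# If it's empty, mount the workstation directory into the node
# (keep this command running in a separate terminal)
minikube mount /media/user/data/clickhouse:/media/user/data/clickhouse
</code></pre>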
Kranthiveer Dontineni
<p>I have a k8s cluster that runs the main workload and has a lot of nodes. I also have a node (I call it the special node) on which some special containers are running and which is NOT part of the cluster. The node has access to some resources that are required by those special containers. I want to be able to manage the containers on the special node along with the cluster, and make it possible to access them from inside the cluster, so the idea is to add the node to the cluster as a worker node and <code>taint</code> it to prevent normal workloads from being scheduled on it, and add <code>tolerations</code> to the pods running the special containers.</p>
<p>The idea looks fine, but there may be a problem. There will be some other containers and non-container daemons and services running on the special node that are not managed by the cluster (they belong to other activities that have to be kept separate from the cluster). I'm not sure whether that will be a problem, but I have not seen non-cluster containers running alongside pod containers on a worker node before, and I could not find a similar question on the web about it.</p>
<p>So please enlighten me: is it ok to have non-cluster containers and other daemon services on a worker node? Does it require some caution, or am I just worrying too much?</p>
Ahmad
<p>Ahmad, from the above description I understand that you are deploying a kubernetes cluster using kubeadm, minikube or another similar solution, and that one of your servers has some special functionality (GPU, access to local resources, etc.). For scheduling your special pods onto that node you can use taints/tolerations together with a node selector, as you already plan to — see the sketch below.</p> <p>Coming to running a separate container runtime on one of these nodes, you mainly need to consider two points:</p> <ol> <li>It can be done. If you didn't integrate the extra container runtime with kubernetes, it is simply one more piece of software running on your server — say you used kubeadm on all the nodes and you also want to run standalone docker containers, these will stay separate — provided you have drafted a proper architecture and configured a separate, isolated virtual network accordingly.</li> <li>Now comes the storage part: create separate storage volumes for kubernetes and for the other container runtime, so that if either one fails or gets corrupted it does not affect the other, and to provide clean isolation.</li> </ol> <p>If you maintain proper isolation from storage to network, then you can run kubernetes and a separate container runtime side by side; however, it is not a suggested way of implementation for production environments.</p>
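<p>Since the question already plans to use taints and tolerations for this, here is a minimal sketch of that part (the node name and the <code>dedicated=special</code> key/value are hypothetical placeholders):</p>
<pre><code>kubectl taint nodes special-node dedicated=special:NoSchedule
kubectl label nodes special-node dedicated=special
</code></pre>
<p>and on the special pods:</p>
<pre><code>spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: special
      effect: NoSchedule
  nodeSelector:
    dedicated: special
</code></pre>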
Kranthiveer Dontineni
<p>Is there a way to disable impersonation in Kubernetes for all admin/non-admin users?</p>
<pre><code>kubectl get pod --as user1
</code></pre>
<p>The above command should not return an answer, due to security concerns. Thank you in advance.</p>
farhad kazemipour
<p>Unless all your users are already admins they should not be able to impersonate users. As <code>cluster-admin</code> you can do &quot;anything&quot;, and pre-installed roles/rolebindings should not be edited under normal circumstances.</p>
<p>The necessary ClusterRole to <strong>enable</strong> impersonation is:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonator
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;users&quot;, &quot;groups&quot;, &quot;serviceaccounts&quot;]
  verbs: [&quot;impersonate&quot;]
</code></pre>
<p>As long as normal users don't have those permissions, they should not be allowed to perform <code>--as</code>.</p>
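<p>If you want to double-check what a given user can do, you can query the API server for the <code>impersonate</code> verb (taking <code>user1</code> from the question; expect &quot;no&quot; for non-admins):</p>
<pre><code>kubectl auth can-i impersonate users --as=user1
kubectl auth can-i impersonate serviceaccounts --as=user1
</code></pre>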
Mike
<p>I am using terraform to set up a simple application that has a postgres db via Cloud SQL in google cloud platform (GCP). I set up a GCP Cloud SQL Auth proxy for my postgresql db using <a href="https://github.com/GoogleCloudPlatform/cloud-sql-proxy" rel="nofollow noreferrer">this guide</a>. I set up the proxy as a sidecar to my main kubernetes application. I also set up a GCP service account to be used for authentication in the cloud proxy. In other words, I set the <code>service_account_name</code> in the <code>kubernetes_deployment</code> resource in my terraform file to be a gcp service account with the necessary roles to connect to the database.</p> <p>Now, I'd like to use python and sql alchemy to connect to this postgresql db through the Cloud SQL proxy. Everything I found online (like <a href="https://cloud.google.com/sql/docs/mysql/connect-admin-proxy" rel="nofollow noreferrer">this documentation</a>) suggest that I need to add a username and password like this to connect to the cloud proxy: <code>mysql+pymysql://&lt;db_user&gt;:&lt;db_pass&gt;@&lt;db_host&gt;:&lt;db_port&gt;/&lt;db_name&gt;</code>. However, my google service account doesn't have a username and password.</p> <p>My question: is there a way to connect to the google cloud auth proxy without a password using my gcp service account?</p>
kamykam
<p>The Cloud SQL Python Connector is a Python package that makes connecting to Cloud SQL both easy and secure for all three supported database engines (Postgres, MySQL, and SQL Server), from anywhere (local machine, Cloud Run, App Engine, Cloud Functions, etc.). <a href="https://cloud.google.com/blog/topics/developers-practitioners/how-connect-cloud-sql-using-python-easy-way" rel="nofollow noreferrer">(source: gcp blogs)</a></p>
<p>This connector uses IAM permissions and TLS certificates to connect to Cloud SQL instances, and it supports automatic IAM database authentication so that a service account can log in without a database password. The <a href="https://github.com/GoogleCloudPlatform/cloud-sql-python-connector" rel="nofollow noreferrer">source code</a> is available on GitHub, and connectors are also available for Java and Go.</p>
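<p>For illustration, here is a minimal sketch with SQLAlchemy and automatic IAM database authentication. The instance connection name, database name and IAM database user below are placeholders, and it assumes you have created an IAM database user for your service account and granted it the required roles; the pod's GCP service account credentials are picked up automatically:</p>
<pre class="lang-py prettyprint-override"><code>import sqlalchemy
from google.cloud.sql.connector import Connector

connector = Connector()

def getconn():
    # No password: the connector uses the service account's IAM credentials
    return connector.connect(
        &quot;my-project:europe-west1:my-instance&quot;,     # placeholder instance connection name
        &quot;pg8000&quot;,
        user=&quot;my-service-account@my-project.iam&quot;,  # placeholder IAM database user
        db=&quot;my-database&quot;,
        enable_iam_auth=True,
    )

engine = sqlalchemy.create_engine(&quot;postgresql+pg8000://&quot;, creator=getconn)

with engine.connect() as conn:
    print(conn.execute(sqlalchemy.text(&quot;SELECT 1&quot;)).scalar())
</code></pre>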
Kranthiveer Dontineni
<p>I am a beginner with k8s and I followed the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">k8s official docs</a> to create a hello-world ingress, but I can't make it work. First I create a service and just like the tutorial I get:</p> <pre><code>$ kubectl get service web NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE web NodePort 10.100.208.38 &lt;none&gt; 8080:31921/TCP 19m </code></pre> <p>so, I can access my service via browser:</p> <pre><code>$ minikube service web Hello, world! Version: 1.0.0 Hostname: web-79d88c97d6-xrshs </code></pre> <p>So far so good. However, I get stuck in the ingress part. So I create this ingress like the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">tutorial</a>:</p> <pre><code>$ kubectl describe ingress example-ingress Name: example-ingress Namespace: default Address: Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- hello-world.info / web:8080 (172.17.0.4:8080) Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 ... </code></pre> <p>and even after configuring /etc/hosts with my minikube ip: 192.168.99.102 hello-world.info, when I curl it or access by the browser I get nginx 404. It's strange that my ingress does not get an address, even after a while. Can anyone point me where the error is ?</p> <p>PS. I did my research before asking <a href="https://stackoverflow.com/questions/51511547/empty-address-kubernetes-ingress">here</a>. I check that my minikube ingress addon is enabled and my ingress-nginx-controller pod is running.</p> <p>PS2. My minikube version is 1.23 and my kubectl client and server versions are 1.22.1.</p>
Digao
<p>Seems there is a bug with the Ingress Addon with Minikube 1.23.0, as documented <a href="https://github.com/kubernetes/minikube/issues/12445" rel="nofollow noreferrer">here</a>, which matches the issue you are seeing. ConfigMap issues prevent IngressClass from being generated (usually &quot;nginx&quot; by default) and ingress services won't work.</p> <p>This issue was <a href="https://github.com/kubernetes/minikube/releases/tag/v1.23.1" rel="nofollow noreferrer">fixed in 1.23.1</a>, so updating Minikube should fix your issue.</p>
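<p>After upgrading, a quick way to confirm the fix is to check that the addon actually created an IngressClass and that the controller pod is running (recent minikube versions place the addon in the <code>ingress-nginx</code> namespace):</p>
<pre><code>minikube version                 # should be v1.23.1 or newer
minikube addons enable ingress
kubectl get ingressclass         # an &quot;nginx&quot; IngressClass should exist
kubectl -n ingress-nginx get pods
</code></pre>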
clarj
<p>I'm seeing a lot of restarts on all the pods of every service that I have deployed on Kubernetes.</p>
<p>But when I watch the logs in real time:</p>
<pre><code>kubectl -n my-namespace logs -c my-pod -f my-pod-some-hash --tail=50
</code></pre>
<p>I see nothing: there are no restarts, there's no sign of failure, and readiness keeps working. So what do all those restarts mean? Where or how can I get more info about them?</p>
<p><a href="https://i.stack.imgur.com/HhAm1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HhAm1.jpg" alt="enter image description here" /></a></p>
<p>Edit:</p>
<p>By viewing the details of the pod that shows 158 restarts in the picture above, I can see this, but I don't know what it means or whether it's related to the restarts:</p>
<p><a href="https://i.stack.imgur.com/O2vTk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2vTk.png" alt="enter image description here" /></a></p>
pmiranda
<p><strong>Replication via a sample pod with CLI commands</strong></p>
<p>If any pod restarts, you can check the logs of the previous run by using &quot;<strong>--previous</strong>&quot;.</p>
<p>Step 1: Connect to the cluster using the command below</p>
<pre><code>az aks get-credentials --resource-group &lt;resourcegroupname&gt; --name &lt;Clustername&gt;
</code></pre>
<p>Step 2: List the pods and note the ones that have restarted</p>
<pre><code>kubectl get pods
</code></pre>
<p><img src="https://i.stack.imgur.com/dwDYZ.png" alt="enter image description here" /> Step 3: Check the restarted pod's logs using the command</p>
<pre><code>kubectl logs &lt;PodName&gt; --previous
</code></pre>
<p><img src="https://i.stack.imgur.com/FLse0.png" alt="enter image description here" /></p>
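<p>To see <em>why</em> a container restarted (for example <code>OOMKilled</code> or a non-zero exit code), you can also look at its last terminated state — reusing the pod name from the question:</p>
<pre><code># Events plus the &quot;Last State&quot; section show the reason and exit code
kubectl -n my-namespace describe pod my-pod-some-hash

# Or query the last terminated state directly
kubectl -n my-namespace get pod my-pod-some-hash \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
</code></pre>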
Swarna Anipindi
<p>Suppose I want to implement this architecture deployed on Kubernetes cluster:</p> <p><a href="https://i.stack.imgur.com/LGTLv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LGTLv.png" alt="enter image description here" /></a></p> <p><strong>Gateway</strong> Simple RESTful HTTP microservice accepting scraping tasks (URLs to scrape along with postback urls)</p> <p><strong>Request Queues</strong> - Redis (or other message broker) queues created dynamically per unique domain (when new domain is encountered, gateway should programmatically create new queue. If queue for domain already exists - just place message in it.</p> <p><strong>Response Queue</strong> - Redis (or other message broker) queue used to post Worker results as scraped HTML pages along with postback URLs.</p> <p><strong>Workers</strong> - worker processes which should spin-up at runtime when new queue is created and scale-down to zero when queue is emptied.</p> <p><strong>Response Workers</strong> - worker processes consuming response queue and sending postback results to scraping client. (should be available to scale down to zero).</p> <p>I would like to deploy the whole solution as dockerized containers on Kubernetes cluster.</p> <p>So my main concerns/questions would be:</p> <ol> <li><p>Creating Redis or other message broker queues dynamically at run-time via code. Is it viable? Which broker is best for that purpose? I would prefer Redis if possible since I heard it's the easiest to set up and also it supports massive throughput, ideally my scraping tasks should be short-lived so I think Redis would be okay if possible.</p> </li> <li><p>Creating Worker consumers at runtime via code - I need some kind of Kubernetes-compatible technology which would be able to react on newly created queue and spin up Worker consumer container which would listen to that queue and later on would be able to scale up/down based on the load of that queue. Any suggestions for such technology? I've read a bit about KNative, and it's Eventing mechanism, so would it be suited for this use-case? Don't know if I should continue investing my time in reading it's documentation.</p> </li> <li><p>Best tools for Redis queue management/Worker management: I would prefer C# and Node.JS tooling. Something like Bull for Node.JS would be sufficient? But ideally I would want to produce queues and messages in Gateway by using C# and consume them in Node.JS (Workers).</p> </li> </ol>
Kasparas Taminskas
<p>If you mean vertical scaling, it definitely won't be a viable solution, since it requires pod restarts. Horizontal scaling is more viable in comparison, but you need to consider the fact that spinning up nodes or pods takes some time, and it is always suggested to have proper resources in place for serving your upcoming traffic — otherwise this delay will affect some features of your application and there might be a business impact. Just having autoscalers isn't enough; you should also have proper metrics in place for monitoring your application.</p> <p>This <a href="https://itnext.io/autoscaling-redis-applications-on-kubernetes-25c1867e95d7" rel="nofollow noreferrer">documentation</a> details how to scale your redis and worker pods respectively using the KEDA mechanism. KEDA stands for Kubernetes Event-driven Autoscaling; it is an add-on that sits on top of existing kubernetes primitives (such as the Horizontal Pod Autoscaler) to scale any number of kubernetes containers based on the number of events that need to be processed.</p>
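<p>As a concrete illustration of the KEDA approach for the per-domain worker queues, a ScaledObject with a Redis list trigger could look roughly like this (all names, the Redis address and the thresholds are placeholders for your setup):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: scraper-worker-scaler          # hypothetical name
spec:
  scaleTargetRef:
    name: scraper-worker               # hypothetical worker Deployment
  minReplicaCount: 0                   # scale to zero when the queue is empty
  maxReplicaCount: 50
  triggers:
    - type: redis
      metadata:
        address: redis.default.svc.cluster.local:6379
        listName: &quot;requests:example.com&quot;   # one queue per domain
        listLength: &quot;10&quot;                   # target items per replica
</code></pre>
<p>Since your queues are created per domain at runtime, your gateway (or a small operator) would have to create one such ScaledObject per queue programmatically, which fits the dynamic-queue design described in the question.</p>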
Kranthiveer Dontineni
<p>I am a total newbe with Helm charts, but I have managed to get a pod with with ApacheDS (LDAP server) running on it. I can exec shell into it and I can login and get responses from the LDAP server.</p> <p>However, from outside the cluster, I get a connection refused. Looking this up, I &quot;think&quot; I need a NodePort: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Kube documentation</a> However, I cannot see where to put that spec. I have tried many things but just cant get it. According to the documentation I need something like this:</p> <pre><code>spec: type: NodePort selector: app: MyApp ports: - port: 10389 targetPort: 10389 nodePort: 30007 </code></pre> <p>Here is my deployment.yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ include &quot;buildchart.fullname&quot; . }} labels: {{- include &quot;buildchart.labels&quot; . | nindent 4 }} spec: {{- if not .Values.autoscaling.enabled }} replicas: {{ .Values.replicaCount }} {{- end }} selector: matchLabels: {{- include &quot;buildchart.selectorLabels&quot; . | nindent 6 }} template: metadata: {{- with .Values.podAnnotations }} annotations: {{- toYaml . | nindent 8 }} {{- end }} labels: {{- include &quot;buildchart.selectorLabels&quot; . | nindent 8 }} spec: {{- if .Values.imagePullSecrets }} imagePullSecrets: - name: {{ .Values.imagePullSecrets }} {{- end }} serviceAccountName: {{ include &quot;buildchart.serviceAccountName&quot; . }} securityContext: {{- toYaml .Values.podSecurityContext | nindent 8 }} containers: - name: {{ .Chart.Name }} securityContext: {{- toYaml .Values.securityContext | nindent 12 }} image: &quot;{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}&quot; imagePullPolicy: {{ .Values.image.pullPolicy }} ports: - name: admin-port containerPort: 8080 hostPort: 8080 protocol: TCP - name: ldap-port containerPort: 10389 hostPort: 10389 protocol: UDP livenessProbe: exec: command: - curl ldap://localhost:10389/ initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} periodSeconds: {{ .Values.livenessProbe.periodSeconds }} readinessProbe: exec: command: - sh - -c - curl ldap://localhost:10389/ initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} periodSeconds: {{ .Values.readinessProbe.periodSeconds }} resources: {{- toYaml .Values.resources | nindent 12 }} {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{- toYaml . | nindent 8 }} {{- end }} </code></pre> <p>How do I open this port to the rest of the world? Or at least the box the container is on.</p>
mmaceachran
<p>Yes, you need to create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">Service</a> for your deployment.</p>
<p>Also, I suggest you avoid hardcoding, because it is easier to change a value in the <code>values.yaml</code> file than to edit the template files whenever you need to add a new hardcoded value.</p>
<p>In the <code>deployment.yaml</code> set:</p>
<pre><code>...
          {{ if .Values.ports }}
          ports:
          {{ range .Values.ports }}
            - name: {{ .name }}
              containerPort: {{ .containerPort }}
              protocol: {{ .protocol }}
          {{ end }}
          {{ end }}
...
</code></pre>
<p>In the <code>values.yaml</code> set (note that <code>service.type</code> has to be defined here, because the Service template below references it):</p>
<pre><code>service:
  type: NodePort

ports:
  - name: admin-port
    port: 8080
    containerPort: 8080
    protocol: TCP
  - name: ldap-port
    port: 10389
    containerPort: 10389
    protocol: UDP
</code></pre>
<p>Create a <code>service.yaml</code> file and set:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: {{ include &quot;buildchart.fullname&quot; . }}
  labels:
    {{- include &quot;buildchart.labels&quot; . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
  {{ range .Values.ports }}
    - port: {{ .port }}
      targetPort: {{ .containerPort }}
      protocol: {{ .protocol }}
      name: {{ .name }}
  {{ end }}
  selector:
    {{- include &quot;buildchart.selectorLabels&quot; . | nindent 4 }}
</code></pre>
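<p>If you also want to pin the node port to a fixed value like the <code>30007</code> from your original spec (it has to stay inside the default 30000–32767 NodePort range), you could optionally extend the sketch above with an extra <code>nodePort</code> key per entry — this is an addition on top of the answer, adjust as needed:</p>
<pre><code># values.yaml
ports:
  - name: ldap-port
    port: 10389
    containerPort: 10389
    nodePort: 30007
    protocol: UDP

# service.yaml, inside the range over .Values.ports
    - name: {{ .name }}
      port: {{ .port }}
      targetPort: {{ .containerPort }}
      {{ if .nodePort }}nodePort: {{ .nodePort }}{{ end }}
      protocol: {{ .protocol }}
</code></pre>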
OuFinx
<p>We have a GKE cluster with one node, one load balancer, and one ingress that configures 45 rules for our hosts. Our deployments are microservices and microfrontends, so we need more than 50 Global external proxy LB backend services.</p>
<p><a href="https://i.stack.imgur.com/PQPPJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PQPPJ.png" alt="enter image description here" /></a></p>
<p>Quota increase requests were denied several times.</p>
<p><a href="https://i.stack.imgur.com/B2NxY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B2NxY.png" alt="enter image description here" /></a></p>
<p>Some people have told us to create a new project, but I think that is not a good solution; my cluster still has the capacity (RAM and vCPU) to run more than 50 services.</p>
<p>Creating a new cluster or adding more nodes does not increase the quota.</p>
<p>Perhaps we should have more than one load balancer to increase the quota of Global external proxy LB backend services? (The word 'global' tells me no.)</p>
<p>We hope to increase the quota of Global external proxy LB backend services without creating a new project or a new GCP account.</p>
Camilo Andres Elgueta Basso
<p>Google follows a strict, automated process for quota increases; only a few exceptional or unique cases are reviewed by humans, and even those follow strict rules, as mentioned <a href="https://cloud.google.com/docs/quota#about_increase_requests" rel="nofollow noreferrer">here</a>. If your request does not meet the required criteria it will be denied, so go through the reason why your request was denied (as asked by <code>Bijendra</code>) and raise a support ticket if you are still facing this issue. Follow this <a href="https://cloud.google.com/support/docs/manage-cases" rel="nofollow noreferrer">document</a> for creating and managing a support case.</p>
Kranthiveer Dontineni
<p>We're trying to install the <code>ingress-nginx</code> controller onto an Azure Kubernetes Service (AKS) cluster, following the steps from the <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller" rel="nofollow noreferrer">Azure documentation</a>.</p> <p>Kubernetes version: 1.21.1 Chart version: 3.36.</p> <p>The command we're using:</p> <pre><code>SET REGISTRY_NAME= SET ACR_URL=%REGISTRY_NAME%.azurecr.io SET CONTROLLER_REGISTRY=k8s.gcr.io SET CONTROLLER_IMAGE=ingress-nginx/controller SET CONTROLLER_TAG=v0.48.1 SET PATCH_REGISTRY=docker.io SET PATCH_IMAGE=jettech/kube-webhook-certgen SET PATCH_TAG=v1.5.1 SET DEFAULTBACKEND_REGISTRY=k8s.gcr.io SET DEFAULTBACKEND_IMAGE=defaultbackend-amd64 SET DEFAULTBACKEND_TAG=1.5 SET NAMESPACE=ingress-basic kubectl create namespace %NAMESPACE% kubectl apply -n %NAMESPACE% -f .\limitRanges.yaml helm install nginx-ingress ingress-nginx/ingress-nginx ^ --namespace %NAMESPACE% ^ --version 3.36.0 ^ --set controller.replicaCount=2 ^ --set controller.nodeSelector.&quot;kubernetes\.io/os&quot;=linux ^ --set controller.image.registry=%ACR_URL% ^ --set controller.image.image=%CONTROLLER_IMAGE% ^ --set controller.image.tag=%CONTROLLER_TAG% ^ --set controller.image.digest=&quot;&quot; ^ --set controller.admissionWebhooks.patch.nodeSelector.&quot;kubernetes\.io/os&quot;=linux ^ --set controller.admissionWebhooks.patch.image.registry=%ACR_URL% ^ --set controller.admissionWebhooks.patch.image.image=%PATCH_IMAGE% ^ --set controller.admissionWebhooks.patch.image.tag=%PATCH_TAG% ^ --set controller.admissionWebhooks.patch.image.digest=&quot;&quot; ^ --set defaultBackend.nodeSelector.&quot;kubernetes\.io/os&quot;=linux ^ --set defaultBackend.image.registry=%ACR_URL% ^ --set defaultBackend.image.image=%DEFAULTBACKEND_IMAGE% ^ --set defaultBackend.image.tag=%DEFAULTBACKEND_TAG% ^ --set defaultBackend.image.digest=&quot;&quot; ^ -f internal-load-balancer.yaml ^ --debug </code></pre> <p>When running, the output is:</p> <pre><code>install.go:173: [debug] Original chart version: &quot;3.36.0&quot; install.go:190: [debug] CHART PATH: C:\Users\......\AppData\Local\Temp\helm\repository\ingress-nginx-3.36.0.tgz client.go:290: [debug] Starting delete for &quot;nginx-ingress-ingress-nginx-admission&quot; ServiceAccount client.go:319: [debug] serviceaccounts &quot;nginx-ingress-ingress-nginx-admission&quot; not found client.go:128: [debug] creating 1 resource(s) client.go:290: [debug] Starting delete for &quot;nginx-ingress-ingress-nginx-admission&quot; ClusterRole client.go:128: [debug] creating 1 resource(s) client.go:290: [debug] Starting delete for &quot;nginx-ingress-ingress-nginx-admission&quot; ClusterRoleBinding client.go:128: [debug] creating 1 resource(s) client.go:290: [debug] Starting delete for &quot;nginx-ingress-ingress-nginx-admission&quot; Role client.go:319: [debug] roles.rbac.authorization.k8s.io &quot;nginx-ingress-ingress-nginx-admission&quot; not found client.go:128: [debug] creating 1 resource(s) client.go:290: [debug] Starting delete for &quot;nginx-ingress-ingress-nginx-admission&quot; RoleBinding client.go:319: [debug] rolebindings.rbac.authorization.k8s.io &quot;nginx-ingress-ingress-nginx-admission&quot; not found client.go:128: [debug] creating 1 resource(s) client.go:290: [debug] Starting delete for &quot;nginx-ingress-ingress-nginx-admission-create&quot; Job client.go:319: [debug] jobs.batch &quot;nginx-ingress-ingress-nginx-admission-create&quot; not found client.go:128: [debug] creating 1 resource(s) 
client.go:519: [debug] Watching for changes to Job nginx-ingress-ingress-nginx-admission-create with timeout of 5m0s client.go:547: [debug] Add/Modify event for nginx-ingress-ingress-nginx-admission-create: ADDED client.go:586: [debug] nginx-ingress-ingress-nginx-admission-create: Jobs active: 0, jobs failed: 0, jobs succeeded: 0 client.go:547: [debug] Add/Modify event for nginx-ingress-ingress-nginx-admission-create: MODIFIED client.go:586: [debug] nginx-ingress-ingress-nginx-admission-create: Jobs active: 1, jobs failed: 0, jobs succeeded: 0 </code></pre> <p>If I look at the pod logs for the job <code>nginx-ingress-ingress-nginx-admission-create</code>, I see the following log:</p> <pre><code>W0909 06:34:24.393154 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. {&quot;err&quot;:&quot;an error on the server (\&quot;\&quot;) has prevented the request from succeeding (get secrets nginx-ingress-ingress-nginx-admission)&quot;,&quot;level&quot;:&quot;fatal&quot;,&quot;msg&quot;:&quot;error getting secret&quot;,&quot;source&quot;:&quot;k8s/k8s.go:109&quot;,&quot;time&quot;:&quot;2021-09-09T06:34:34Z&quot;} </code></pre> <p>I'm a little lost on where to look for additional information. I can see the error seems to be relating to getting a secret, and I can't see that secret under a <code>kubectl get secrets -A</code> command. I'm assuming the <code>\&quot;\&quot;</code> portion is supposed to be the error message, but it's not helping.</p> <p>I have been able to install this chart successfully on a brand new, throwaway cluster. My guess is that it's an RBAC or permissions type problem, but without anything further about where to look, I'm out of ideas.</p>
Daniel Becroft
<p>You need to quote the values. I would also suggest simplifying the command, because all of those values are already set by default inside the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml" rel="nofollow noreferrer">Helm chart of ingress-nginx</a>:</p>
<pre><code>SET NAMESPACE=ingress-basic

kubectl create namespace %NAMESPACE%
kubectl apply -n %NAMESPACE% -f .\limitRanges.yaml

helm install nginx-ingress ingress-nginx/ingress-nginx ^
    --namespace %NAMESPACE% ^
    --version &quot;4.0.1&quot; ^
    --set controller.replicaCount=&quot;2&quot; ^
    -f internal-load-balancer.yaml ^
    --debug
</code></pre>
Philip Welz
<p>I have a <code>.gitlab-ci.yml</code> file in which I want to do the following:</p> <ol> <li>Build a Docker image and push it to AWS ECR</li> <li>Restart a specific deployment in my EKS cluster that uses this Docker image</li> </ol> <p>Building and pushing the Docker image works fine, however I'm failing to connect to my EKS cluster.</p> <p>My idea is to use <code>aws eks</code> to update my kubeconfig file, and <code>kubectl</code> to restart my deployment, but I don't know how to use the AWS CLI and Kubectl in my <code>.gitlab-ci.yml</code> file.</p> <p>I have <code>AWS_ACCESS_KEY_ID</code>, <code>AWS_ACCOUNT_ID</code>, and <code>AWS_DEFAULT_REGION</code> defined in my CI/CD variables. I've got the following <code>.gitlab-ci.yml</code> file:</p> <pre><code>stages: - build - deploy staging &lt;build stage omitted for brevity&gt; staging: stage: deploy staging image: bitnami/kubectl:latest only: - staging script: | # install AWS CLI apk add --no-cache python3 py3-pip \ &amp;&amp; pip3 install --upgrade pip \ &amp;&amp; pip3 install awscli \ &amp;&amp; rm -rf /var/cache/apk/* aws eks update-kubeconfig --region eu-west-1 --name my-cluster-name kubectl rollout restart deployment my-deployment </code></pre> <p>This pipeline fails with the error:</p> <pre><code>error: unknown command &quot;sh&quot; for &quot;kubectl&quot; Did you mean this? set cp </code></pre> <p>I've found <a href="https://gitlab.com/gitlab-org/gitlab-foss/-/issues/65110" rel="nofollow noreferrer">this issue and solution</a>, but changing the <code>.gitlab-ci.yml</code> file accordingly prevents me from using <code>apk</code> and installing the AWS cli:</p> <pre><code>stages: - build - deploy staging &lt;build stage omitted for brevity&gt; staging: stage: deploy staging image: name: bitnami/kubectl:latest entrypoint: [&quot;&quot;] only: - staging script: | # install AWS CLI apk add --no-cache python3 py3-pip \ &amp;&amp; pip3 install --upgrade pip \ &amp;&amp; pip3 install awscli \ &amp;&amp; rm -rf /var/cache/apk/* aws eks update-kubeconfig --region eu-west-1 --name my-cluster-name kubectl rollout restart deployment my-deployment </code></pre> <p>Results in the error:</p> <pre><code>$ # install AWS CLI # collapsed multi-line command /bin/bash: line 140: apk: command not found /bin/bash: line 144: aws: command not found </code></pre> <p>So that leads me to the following question: how do I use both the AWS CLI and Kubectl in my <code>.gitlab-ci.yml</code> file? Or is there another easier way that allows me to restart a deployment in my EKS cluster?</p>
Kasper Kooijman
<p>I solved it myself. For future readers: using the <a href="https://hub.docker.com/r/alpine/k8s" rel="nofollow noreferrer">alpine/k8s</a> image solves my problem. It has both Kubectl and AWScli installed.</p>
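<p>A minimal sketch of the deploy job using that image could look like the block below. The image tag is an assumption — pick one that matches your cluster's kubectl version; the region, cluster and deployment names are the ones from the question:</p>
<pre class="lang-yaml prettyprint-override"><code>staging:
  stage: deploy staging
  image:
    name: alpine/k8s:1.22.13   # assumed tag — adjust to your kubectl version
    entrypoint: [&quot;&quot;]
  only:
    - staging
  script:
    - aws eks update-kubeconfig --region eu-west-1 --name my-cluster-name
    - kubectl rollout restart deployment my-deployment
</code></pre>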
Kasper Kooijman
<p>I am a beginner in Kubernetes and have been using the kubectl command to create pods for several months. However, I recently encountered a problem where Kubernetes did not create a pod after I executed the <code>kubectl create -f mypod.yaml</code> command. When I run kubectl get pods, the mypod does not appear in the list of pods and I am unable to access it by name as if it does not exist. However, if I try to create it again, I receive a message saying that the pod has already been created.</p> <p>To illustrate my point, let me give you an example. I frequently generate pods using a YAML file called tpcds-25-query.yaml. The contents of this file are as follows:</p> <pre><code>apiVersion: &quot;sparkoperator.k8s.io/v1beta2&quot; kind: SparkApplication metadata: name: tpcds-25-query namespace: default spec: type: Scala mode: cluster image: registry.cn-beijing.aliyuncs.com/kube-ai/ack-spark-benchmark:1.0.1 imagePullPolicy: Always sparkVersion: 2.4.5 mainClass: com.aliyun.spark.benchmark.tpcds.BenchmarkSQL mainApplicationFile: &quot;local:///opt/spark/jars/ack-spark-benchmark-assembly-0.1.jar&quot; arguments: # TPC-DS data localtion - &quot;oss://spark/data/tpc-ds-data/150g&quot; # results location - &quot;oss://spark/result/tpcds-25-query&quot; # Path to kit in the docker image - &quot;/tmp/tpcds-kit/tools&quot; # Data Format - &quot;parquet&quot; # Scale factor (in GB) - &quot;150&quot; # Number of iterations - &quot;1&quot; # Optimize queries - &quot;false&quot; # Filter queries, will run all if empty - &quot;q70-v2.4,q82-v2.4,q64-v2.4&quot; - &quot;q1-v2.4,q11-v2.4,q14a-v2.4,q14b-v2.4,q16-v2.4,q17-v2.4,q22-v2.4,q23a-v2.4,q23b-v2.4,q24a-v2.4,q24b-v2.4,q25-v2.4,q28-v2.4,q29-v2.4,q4-v2.4,q49-v2.4,q5-v2.4,q51-v2.4,q64-v2.4,q74-v2.4,q75-v2.4,q77-v2.4,q78-v2.4,q80-v2.4,q9-v2.4&quot; # Logging set to WARN - &quot;true&quot; hostNetwork: true dnsPolicy: ClusterFirstWithHostNet restartPolicy: type: Never timeToLiveSeconds: 86400 hadoopConf: # OSS &quot;fs.oss.impl&quot;: &quot;OSSFileSystem&quot; &quot;fs.oss.endpoint&quot;: &quot;oss.com&quot; &quot;fs.oss.accessKeyId&quot;: &quot;DFDSMGDNDFMSNGDFMNGCU&quot; &quot;fs.oss.accessKeySecret&quot;: &quot;secret&quot; sparkConf: &quot;spark.kubernetes.allocation.batch.size&quot;: &quot;200&quot; &quot;spark.sql.adaptive.join.enabled&quot;: &quot;true&quot; &quot;spark.eventLog.enabled&quot;: &quot;true&quot; &quot;spark.eventLog.dir&quot;: &quot;oss://spark/spark-events&quot; driver: cores: 4 memory: &quot;8192m&quot; labels: version: 2.4.5 spark-app: spark-tpcds role: driver serviceAccount: spark nodeSelector: beta.kubernetes.io/instance-type: ecs.g6.13xlarge executor: cores: 48 instances: 1 memory: &quot;160g&quot; memoryOverhead: &quot;16g&quot; labels: version: 2.4.5 role: executor nodeSelector: beta.kubernetes.io/instance-type: ecs.g6.13xlarge </code></pre> <p>After I executed <code>kubectl create --validate=false -f tpcds-25-query.yaml</code> command, k8s returns this:</p> <pre><code>sparkapplication.sparkoperator.k8s.io/tpcds-25-query created </code></pre> <p>which means the pod has been created. However, when I executed <code>kubectl get pods</code>, it gave me this:</p> <pre><code>No resources found in default namespace. 
</code></pre> <p>When I created the pod again, it gave me this:</p> <pre><code>Error from server (AlreadyExists): error when creating &quot;tpcds-25-query.yaml&quot;: sparkapplications.sparkoperator.k8s.io &quot;tpcds-25-query&quot; already exists </code></pre> <p>I know the option <code>-v=8</code> can print more detailed logs. So I execute the command <code>kubectl create --validate=false -f tpcds-25-query.yaml -v=8</code>, its output is:</p> <pre><code>I0219 05:50:17.121661 2148722 loader.go:372] Config loaded from file: /root/.kube/config I0219 05:50:17.124735 2148722 round_trippers.go:432] GET https://172.16.0.212:6443/apis/metrics.k8s.io/v1beta1?timeout=32s I0219 05:50:17.124747 2148722 round_trippers.go:438] Request Headers: I0219 05:50:17.124753 2148722 round_trippers.go:442] Accept: application/json, */* I0219 05:50:17.124759 2148722 round_trippers.go:442] User-Agent: kubectl/v1.22.3 (linux/amd64) kubernetes/9377577 I0219 05:50:17.132864 2148722 round_trippers.go:457] Response Status: 503 Service Unavailable in 8 milliseconds I0219 05:50:17.132876 2148722 round_trippers.go:460] Response Headers: I0219 05:50:17.132881 2148722 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e75a0286-dd47-4533-a65c-79d95dac5bb1 I0219 05:50:17.132890 2148722 round_trippers.go:463] Content-Length: 20 I0219 05:50:17.132894 2148722 round_trippers.go:463] Date: Sun, 19 Feb 2023 05:50:17 GMT I0219 05:50:17.132898 2148722 round_trippers.go:463] Audit-Id: 3ab06f73-0c88-469a-834d-54ec06e910f1 I0219 05:50:17.132902 2148722 round_trippers.go:463] Cache-Control: no-cache, private I0219 05:50:17.132906 2148722 round_trippers.go:463] Content-Type: text/plain; charset=utf-8 I0219 05:50:17.132909 2148722 round_trippers.go:463] X-Content-Type-Options: nosniff I0219 05:50:17.132913 2148722 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 7f136704-82ad-4f6c-8c86-b470a972fede I0219 05:50:17.134365 2148722 request.go:1181] Response Body: service unavailable I0219 05:50:17.135255 2148722 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string &quot;json:\&quot;apiVersion,omitempty\&quot;&quot;; Kind string &quot;json:\&quot;kind,omitempty\&quot;&quot; } I0219 05:50:17.135265 2148722 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0219 05:50:17.136050 2148722 request.go:1181] Request Body: 
{&quot;apiVersion&quot;:&quot;sparkoperator.k8s.io/v1beta2&quot;,&quot;kind&quot;:&quot;SparkApplication&quot;,&quot;metadata&quot;:{&quot;name&quot;:&quot;tpcds-25-query&quot;,&quot;namespace&quot;:&quot;default&quot;},&quot;spec&quot;:{&quot;arguments&quot;:[&quot;oss://lfpapertest/spark/data/tpc-ds-data/150g&quot;,&quot;oss://lfpapertest/spark/result/tpcds-runc-150g-48core-160g-1pod-25-query&quot;,&quot;/tmp/tpcds-kit/tools&quot;,&quot;parquet&quot;,&quot;150&quot;,&quot;1&quot;,&quot;false&quot;,&quot;q1-v2.4,q11-v2.4,q14a-v2.4,q14b-v2.4,q16-v2.4,q17-v2.4,q22-v2.4,q23a-v2.4,q23b-v2.4,q24a-v2.4,q24b-v2.4,q25-v2.4,q28-v2.4,q29-v2.4,q4-v2.4,q49-v2.4,q5-v2.4,q51-v2.4,q64-v2.4,q74-v2.4,q75-v2.4,q77-v2.4,q78-v2.4,q80-v2.4,q9-v2.4&quot;,&quot;true&quot;],&quot;dnsPolicy&quot;:&quot;ClusterFirstWithHostNet&quot;,&quot;driver&quot;:{&quot;cores&quot;:4,&quot;labels&quot;:{&quot;role&quot;:&quot;driver&quot;,&quot;spark-app&quot;:&quot;spark-tpcds&quot;,&quot;version&quot;:&quot;2.4.5&quot;},&quot;memory&quot;:&quot;8192m&quot;,&quot;nodeSelector&quot;:{&quot;beta.kubernetes.io/instance-type&quot;:&quot;ecs.g6.13xlarge&quot;},&quot;serviceAccount&quot;:&quot;spark&quot;},&quot;executor&quot;:{&quot;cores&quot;:48,&quot;instances&quot;:1,&quot;labels&quot;:{&quot;role&quot;:&quot;executor&quot;,&quot;version&quot;:&quot;2.4.5&quot;},&quot;memory&quot;:&quot;160g&quot;,&quot;memoryOverhead&quot;:&quot;16g&quot;,&quot;nodeSelector&quot;:{&quot;beta.kubernetes.io/instance-type&quot;:&quot;ecs.g6.13xlarge&quot;}},&quot;hadoopConf&quot;:{&quot;fs.oss.acce [truncated 802 chars] I0219 05:50:17.136091 2148722 round_trippers.go:432] POST https://172.16.0.212:6443/apis/sparkoperator.k8s.io/v1beta2/namespaces/default/sparkapplications?fieldManager=kubectl-create I0219 05:50:17.136098 2148722 round_trippers.go:438] Request Headers: I0219 05:50:17.136104 2148722 round_trippers.go:442] Accept: application/json I0219 05:50:17.136108 2148722 round_trippers.go:442] Content-Type: application/json I0219 05:50:17.136113 2148722 round_trippers.go:442] User-Agent: kubectl/v1.22.3 (linux/amd64) kubernetes/9377577 I0219 05:50:17.144313 2148722 round_trippers.go:457] Response Status: 201 Created in 8 milliseconds I0219 05:50:17.144327 2148722 round_trippers.go:460] Response Headers: I0219 05:50:17.144332 2148722 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e75a0286-dd47-4533-a65c-79d95dac5bb1 I0219 05:50:17.144337 2148722 round_trippers.go:463] Content-Length: 2989 I0219 05:50:17.144341 2148722 round_trippers.go:463] Date: Sun, 19 Feb 2023 05:50:17 GMT I0219 05:50:17.144345 2148722 round_trippers.go:463] Audit-Id: 8eef9d08-04c0-44f7-87bf-e820853cd9c6 I0219 05:50:17.144349 2148722 round_trippers.go:463] Cache-Control: no-cache, private I0219 05:50:17.144352 2148722 round_trippers.go:463] Content-Type: application/json I0219 05:50:17.144356 2148722 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 7f136704-82ad-4f6c-8c86-b470a972fede I0219 05:50:17.144396 2148722 request.go:1181] Response Body: 
{&quot;apiVersion&quot;:&quot;sparkoperator.k8s.io/v1beta2&quot;,&quot;kind&quot;:&quot;SparkApplication&quot;,&quot;metadata&quot;:{&quot;creationTimestamp&quot;:&quot;2023-02-19T05:50:17Z&quot;,&quot;generation&quot;:1,&quot;managedFields&quot;:[{&quot;apiVersion&quot;:&quot;sparkoperator.k8s.io/v1beta2&quot;,&quot;fieldsType&quot;:&quot;FieldsV1&quot;,&quot;fieldsV1&quot;:{&quot;f:spec&quot;:{&quot;.&quot;:{},&quot;f:arguments&quot;:{},&quot;f:driver&quot;:{&quot;.&quot;:{},&quot;f:cores&quot;:{},&quot;f:labels&quot;:{&quot;.&quot;:{},&quot;f:role&quot;:{},&quot;f:spark-app&quot;:{},&quot;f:version&quot;:{}},&quot;f:memory&quot;:{},&quot;f:nodeSelector&quot;:{&quot;.&quot;:{},&quot;f:beta.kubernetes.io/instance-type&quot;:{}},&quot;f:serviceAccount&quot;:{}},&quot;f:executor&quot;:{&quot;.&quot;:{},&quot;f:cores&quot;:{},&quot;f:instances&quot;:{},&quot;f:labels&quot;:{&quot;.&quot;:{},&quot;f:role&quot;:{},&quot;f:version&quot;:{}},&quot;f:memory&quot;:{},&quot;f:memoryOverhead&quot;:{},&quot;f:nodeSelector&quot;:{&quot;.&quot;:{},&quot;f:beta.kubernetes.io/instance-type&quot;:{}}},&quot;f:hadoopConf&quot;:{&quot;.&quot;:{},&quot;f:fs.oss.accessKeyId&quot;:{},&quot;f:fs.oss.accessKeySecret&quot;:{},&quot;f:fs.oss.endpoint&quot;:{},&quot;f:fs.oss.impl&quot;:{}},&quot;f:image&quot;:{},&quot;f:imagePullPolicy&quot;:{},&quot;f:mainApplicationFile&quot;:{},&quot;f:mainClass&quot;:{},&quot;f:mode&quot;:{},&quot;f:restartPolicy&quot;:{&quot;.&quot;:{},&quot;f:type&quot;:{}},&quot;f:sparkConf&quot;:{&quot;.&quot;:{},&quot;f:spark.eventLog.dir&quot;:{},&quot;f:spark.eventLog.enabled&quot;:{},&quot;f:spark.kubernetes. [truncated 1965 chars] sparkapplication.sparkoperator.k8s.io/tpcds-25-query created </code></pre> <p>From the logs, we can see the only error &quot;Response Status: 503 Service Unavailable in 8 milliseconds&quot;, I don't know what it means.</p> <p>So I want to ask what may cause this, and how would I diagnose the problem? Any help is appreciated!</p>
csbo
<p>There might be multiple reasons for this; first, let's check whether the pod is really created or not. As <code>ehmad11</code> suggested, use <code>kubectl get pods --all-namespaces</code> to list pods in all namespaces. However, in your case it might not reveal much, because your application is deployed directly in the default namespace. Regarding the error &quot;Response Status: 503 Service Unavailable in 8 milliseconds&quot;: once you are able to locate the pod, use <code>kubectl describe &lt;pod&gt;</code> to find the events and logs specific to your pod, and follow the troubleshooting steps provided in this <a href="https://komodor.com/learn/how-to-fix-kubernetes-service-503-service-unavailable-error/" rel="nofollow noreferrer">document</a> to rectify it.</p> <p><strong>Note:</strong> The reference document is provided from the <code>komodor</code> site; they have articulated each troubleshooting step in a highly detailed and understandable manner.</p>
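<p>Since a <code>SparkApplication</code> is a custom resource — the driver and executor pods are only created later by the spark-operator — it can also help to inspect the custom resource itself and the operator (a sketch; the operator's namespace and labels depend on how it was installed):</p>
<pre><code>kubectl get sparkapplications -n default
kubectl describe sparkapplication tpcds-25-query -n default

# Events often show why no driver pod was created
kubectl get events -n default --sort-by=.lastTimestamp

# Locate the spark-operator pod and check its logs
kubectl get pods -A | grep -i spark-operator
</code></pre>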
Kranthiveer Dontineni
<p>I am deploying in Azure AKS a regular deployment and i want to use keyvault to store my secrets to get access to a database.</p> <p>This is my deployment file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: sonarqube name: sonarqube spec: selector: matchLabels: app: sonarqube replicas: 1 template: metadata: labels: app: sonarqube spec: containers: - name: sonarqube image: sonarqube:8.9-developer resources: requests: cpu: 500m memory: 1024Mi limits: cpu: 2000m memory: 4096Mi volumeMounts: - mountPath: &quot;/mnt/secrets/&quot; name: secrets-store-inline - mountPath: &quot;/opt/sonarqube/data/&quot; name: sonar-data-new - mountPath: &quot;/opt/sonarqube/extensions/plugins/&quot; name: sonar-extensions-new2 env: - name: &quot;SONARQUBE_JDBC_USERNAME&quot; valueFrom: secretKeyRef: name: test-secret key: username - name: &quot;SONARQUBE_JDBC_PASSWORD&quot; valueFrom: secretKeyRef: name: test-secret key: password - name: &quot;SONARQUBE_JDBC_URL&quot; valueFrom: configMapKeyRef: name: sonar-config key: url ports: - containerPort: 9000 protocol: TCP volumes: - name: sonar-data-new persistentVolumeClaim: claimName: sonar-data-new - name: sonar-extensions-new2 persistentVolumeClaim: claimName: sonar-extensions-new2 - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: &quot;azure-kv-provider&quot; </code></pre> <p>and this is my secret storage class:</p> <pre><code>kind: SecretProviderClass metadata: name: azure-kv-provider spec: provider: azure secretObjects: - data: - key: username objectName: username - key: password objectName: password secretName: test-secret type: Opaque parameters: usePodIdentity: &quot;false&quot; useAssignedIdentity: &quot;true&quot; userAssignedIdentityID: &quot;zzzz-zzzz-zzzz-zzzz-zzzz&quot; keyvaultName: &quot;dbkvtz&quot; cloudName: &quot;&quot; objects: | array: - | objectName: test objectType: secret objectAlias: username objectVersion: &quot;&quot; - | objectName: test objectType: secret objectAlias: password objectVersion: &quot;&quot; resourceGroup: &quot;myresourcegroup&quot; subscriptionId: &quot;yyyy-yyyy-yyyy-yyy-yyyy&quot; tenantId: &quot;xxxx-xxxx-xxxx-xxx-xxxx&quot; </code></pre> <p>Where &quot;zzzz-zzzz-zzzz-zzzz-zzzz&quot; is the Client ID of the created Managed Identity.</p> <p>In the Key Vault that i created &quot;dbkvtz&quot; i added through &quot;Access Policy&quot; the Managed Identity that i created. On the other hand in &quot;Manage Identity&quot; i am not able to add any role in &quot;Azure Role Assignement&quot; -- No role assignments found for the selected subscription. I don't know if it is necessary to add any role there.</p> <p>The AKS cluster is setup for system assigned managed identity. 
I want to use Managed Identities to get access to the key vaults so i created a managed identity with client id &quot;zzzz-zzzz-zzzz-zzzz-zzzz&quot; (where is &quot;z&quot; a value from 0-9a-z).</p> <p>I am not too familiar with keyvault integration in AKS so i am not sure if the config is ok.</p> <p>I am getting this error:</p> <p><strong>kubectl describe pods:</strong></p> <pre><code> Normal Scheduled 19m default-scheduler Successfully assigned default/sonarqube-6bdb9cfc85-npbfw to aks-agentpool-16966606-vmss000000 Warning FailedMount 5m43s (x5 over 16m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[secrets-store-inline sonar-data-new sonar-extensions-new2 default-token-t45tw]: timed out waiting for the condition Warning FailedMount 3m27s kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[default-token-t45tw secrets-store-inline sonar-data-new sonar-extensions-new2]: timed out waiting for the condition Warning FailedMount 71s (x2 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[sonar-data-new sonar-extensions-new2 default-token-t45tw secrets-store-inline]: timed out waiting for the condition Warning FailedMount 37s (x17 over 19m) kubelet MountVolume.SetUp failed for volume &quot;secrets-store-inline&quot; : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/sonarqube-6bdb9cfc85-npbfw, err: rpc error: code = Unknown desc = failed to mount objects, error: failed to create auth config, error: failed to get credentials, nodePublishSecretRef secret is not set </code></pre> <p>logs az aks show -g RG -n SonarQubeCluster</p> <pre><code>{ &quot;aadProfile&quot;: null, &quot;addonProfiles&quot;: { &quot;azurepolicy&quot;: { &quot;config&quot;: null, &quot;enabled&quot;: true, &quot;identity&quot;: { &quot;clientId&quot;: &quot;yy&quot;, &quot;objectId&quot;: &quot;zz&quot;, &quot;resourceId&quot;: &quot;/subscriptions/xx/resourcegroups/MC_xx_SonarQubeCluster_southcentralus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/azurepolicy-sonarqubecluster&quot; } }, &quot;httpApplicationRouting&quot;: { &quot;config&quot;: null, &quot;enabled&quot;: false, &quot;identity&quot;: null }, &quot;omsagent&quot;: { &quot;config&quot;: { &quot;logAnalyticsWorkspaceResourceID&quot;: &quot;/subscriptions/xx/resourceGroups/DefaultResourceGroup-SCUS/providers/Microsoft.OperationalInsights/workspaces/DefaultWorkspace-44e26024-4977-4419-8d23-0e1e22e8804e-SCUS&quot; }, &quot;enabled&quot;: true, &quot;identity&quot;: { &quot;clientId&quot;: &quot;yy&quot;, &quot;objectId&quot;: &quot;zz&quot;, &quot;resourceId&quot;: &quot;/subscriptions/xx/resourcegroups/MC_xx_SonarQubeCluster_southcentralus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/omsagent-sonarqubecluster&quot; } } }, &quot;agentPoolProfiles&quot;: [ { &quot;availabilityZones&quot;: [ &quot;1&quot; ], &quot;count&quot;: 2, &quot;enableAutoScaling&quot;: false, &quot;enableEncryptionAtHost&quot;: null, &quot;enableFips&quot;: false, &quot;enableNodePublicIp&quot;: null, &quot;enableUltraSsd&quot;: null, &quot;gpuInstanceProfile&quot;: null, &quot;kubeletConfig&quot;: null, &quot;kubeletDiskType&quot;: &quot;OS&quot;, &quot;linuxOsConfig&quot;: null, &quot;maxCount&quot;: null, &quot;maxPods&quot;: 110, &quot;minCount&quot;: null, &quot;mode&quot;: &quot;System&quot;, &quot;name&quot;: &quot;agentpool&quot;, 
&quot;nodeImageVersion&quot;: &quot;AKSUbuntu-1804gen2containerd-2021.07.25&quot;, &quot;nodeLabels&quot;: {}, &quot;nodePublicIpPrefixId&quot;: null, &quot;nodeTaints&quot;: null, &quot;orchestratorVersion&quot;: &quot;1.20.7&quot;, &quot;osDiskSizeGb&quot;: 128, &quot;osDiskType&quot;: &quot;Managed&quot;, &quot;osSku&quot;: &quot;Ubuntu&quot;, &quot;osType&quot;: &quot;Linux&quot;, &quot;podSubnetId&quot;: null, &quot;powerState&quot;: { &quot;code&quot;: &quot;Running&quot; }, &quot;provisioningState&quot;: &quot;Succeeded&quot;, &quot;proximityPlacementGroupId&quot;: null, &quot;scaleDownMode&quot;: null, &quot;scaleSetEvictionPolicy&quot;: null, &quot;scaleSetPriority&quot;: null, &quot;spotMaxPrice&quot;: null, &quot;tags&quot;: null, &quot;type&quot;: &quot;VirtualMachineScaleSets&quot;, &quot;upgradeSettings&quot;: null, &quot;vmSize&quot;: &quot;Standard_DS2_v2&quot; } ], &quot;apiServerAccessProfile&quot;: { &quot;authorizedIpRanges&quot;: null, &quot;enablePrivateCluster&quot;: false, &quot;enablePrivateClusterPublicFqdn&quot;: null, &quot;privateDnsZone&quot;: null }, &quot;autoScalerProfile&quot;: null, &quot;autoUpgradeProfile&quot;: null, &quot;azurePortalFqdn&quot;: &quot;sonarqubecluster-dns-4b5e95d4.portal.hcp.southcentralus.azmk8s.io&quot;, &quot;disableLocalAccounts&quot;: null, &quot;diskEncryptionSetId&quot;: null, &quot;dnsPrefix&quot;: &quot;SonarQubeCluster-dns&quot;, &quot;enablePodSecurityPolicy&quot;: null, &quot;enableRbac&quot;: true, &quot;extendedLocation&quot;: null, &quot;fqdn&quot;: &quot;sonarqubecluster-dns-4b5e95d4.hcp.southcentralus.azmk8s.io&quot;, &quot;fqdnSubdomain&quot;: null, &quot;httpProxyConfig&quot;: null, &quot;id&quot;: &quot;/subscriptions/xx/resourcegroups/RG/providers/Microsoft.ContainerService/managedClusters/SonarQubeCluster&quot;, &quot;identity&quot;: { &quot;principalId&quot;: &quot;yy&quot;, &quot;tenantId&quot;: &quot;rr&quot;, &quot;type&quot;: &quot;SystemAssigned&quot;, &quot;userAssignedIdentities&quot;: null }, &quot;identityProfile&quot;: { &quot;kubeletidentity&quot;: { &quot;clientId&quot;: &quot;yy&quot;, &quot;objectId&quot;: &quot;zz&quot;, &quot;resourceId&quot;: &quot;/subscriptions/xx/resourcegroups/MC_xx_SonarQubeCluster_southcentralus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/SonarQubeCluster-agentpool&quot; } }, &quot;kubernetesVersion&quot;: &quot;1.20.7&quot;, &quot;linuxProfile&quot;: null, &quot;location&quot;: &quot;southcentralus&quot;, &quot;maxAgentPools&quot;: 100, &quot;name&quot;: &quot;SonarQubeCluster&quot;, &quot;networkProfile&quot;: { &quot;dnsServiceIp&quot;: &quot;10.0.0.10&quot;, &quot;dockerBridgeCidr&quot;: &quot;172.17.0.1/16&quot;, &quot;loadBalancerProfile&quot;: { &quot;allocatedOutboundPorts&quot;: null, &quot;effectiveOutboundIPs&quot;: [ { &quot;id&quot;: &quot;/subscriptions/xx/resourceGroups/MC_xx_SonarQubeCluster_southcentralus/providers/Microsoft.Network/publicIPAddresses/nn&quot;, &quot;resourceGroup&quot;: &quot;MC_xx_SonarQubeCluster_southcentralus&quot; } ], &quot;idleTimeoutInMinutes&quot;: null, &quot;managedOutboundIPs&quot;: { &quot;count&quot;: 1 }, &quot;outboundIPs&quot;: null, &quot;outboundIpPrefixes&quot;: null }, &quot;loadBalancerSku&quot;: &quot;Standard&quot;, &quot;natGatewayProfile&quot;: null, &quot;networkMode&quot;: null, &quot;networkPlugin&quot;: &quot;kubenet&quot;, &quot;networkPolicy&quot;: null, &quot;outboundType&quot;: &quot;loadBalancer&quot;, &quot;podCidr&quot;: &quot;10.244.0.0/16&quot;, &quot;serviceCidr&quot;: 
&quot;10.0.0.0/16&quot; }, &quot;nodeResourceGroup&quot;: &quot;MC_xx_SonarQubeCluster_southcentralus&quot;, &quot;podIdentityProfile&quot;: null, &quot;powerState&quot;: { &quot;code&quot;: &quot;Running&quot; }, &quot;privateFqdn&quot;: null, &quot;privateLinkResources&quot;: null, &quot;provisioningState&quot;: &quot;Succeeded&quot;, &quot;resourceGroup&quot;: &quot;RG&quot;, &quot;securityProfile&quot;: null, &quot;servicePrincipalProfile&quot;: { &quot;clientId&quot;: &quot;msi&quot; }, &quot;sku&quot;: { &quot;name&quot;: &quot;Basic&quot;, &quot;tier&quot;: &quot;Free&quot; }, &quot;type&quot;: &quot;Microsoft.ContainerService/ManagedClusters&quot;, &quot;windowsProfile&quot;: null } </code></pre> <p>Any idea of what is wrong?</p> <p>Thank you in advance.</p>
X T
<p>The <code>userAssignedIdentityID</code> in your <code>SecretProviderClass</code> must be the User-assigned Kubelet managed identity ID (the Managed Identity of the node pool) and not the Managed Identity created for your AKS, because the volumes will be accessed via the kubelet on the nodes.</p> <pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1 kind: SecretProviderClass metadata: name: azure-kvname-user-msi spec: provider: azure parameters: usePodIdentity: &quot;false&quot; useVMManagedIdentity: &quot;true&quot; userAssignedIdentityID: &quot;&lt;Kubelet identity ID&gt;&quot; keyvaultName: &quot;kvname&quot; </code></pre> <p>You also need to assign a role to this kubelet identity:</p> <pre><code>resource &quot;azurerm_role_assignment&quot; &quot;akv_kubelet&quot; { scope = azurerm_key_vault.akv.id role_definition_name = &quot;Key Vault Secrets Officer&quot; principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id } </code></pre> <p>or</p> <pre><code>export KUBE_ID=$(az aks show -g &lt;resource group&gt; -n &lt;aks cluster name&gt; --query identityProfile.kubeletidentity.objectId -o tsv) export AKV_ID=$(az keyvault show -g &lt;resource group&gt; -n &lt;akv name&gt; --query id -o tsv) az role assignment create --assignee $KUBE_ID --role &quot;Key Vault Secrets Officer&quot; --scope $AKV_ID </code></pre> <p>Documentation can be found <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/user-assigned-msi-mode/#configure-user-assigned-managed-identity-to-access-keyvault" rel="nofollow noreferrer">here for user-assigned identity</a> and <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/system-assigned-msi-mode/" rel="nofollow noreferrer">here for system-assigned identity.</a></p>
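<p>For completeness, the deployment's pod spec then mounts the secrets through the Secrets Store CSI driver by referencing that <code>SecretProviderClass</code>. This is only a fragment to show the wiring; the class name comes from the example above, while the container name and mount path are illustrative:</p> <pre><code>    spec:
      containers:
        - name: sonarqube
          volumeMounts:
            - name: secrets-store-inline
              mountPath: &quot;/mnt/secrets-store&quot;
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: &quot;azure-kvname-user-msi&quot;
</code></pre> <p>With the managed-identity mode no <code>nodePublishSecretRef</code> is needed, which is exactly the field the mount error complains about.</p>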
Philip Welz
<p>I am trying to use VPA for autoscaling my deployed services. Due to limited resources in my cluster I set the min_replica option to 1. The VPA workflow I have seen so far is that it first deletes the existing pod and then re-creates it. This approach causes downtime for my services. What I want is for the VPA to first create the new pod and then delete the old pod, similar to rolling updates for deployments. Is there an option or hack to reverse the flow to the desired order in my case?</p>
Saeid Ghafouri
<p>This can be achieved with a Python script or an IaC pipeline: read the metrics of the Kubernetes cluster and, whenever they exceed a certain threshold, trigger code that applies the new resource sizes so that a new pod is created before the old one is shut down (a sketch is shown below). Follow this GitHub link for more info on the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Python client</a> for Kubernetes.</p> <p>Ansible can also be used for this operation, by triggering your Ansible playbook whenever the threshold is breached and specifying the new sizes of the pods that need to be created. Follow this official <a href="https://docs.ansible.com/ansible/latest/collections/kubernetes/core/k8s_scale_module.html" rel="nofollow noreferrer">Ansible document</a> for more information. However, both procedures involve manual analysis to select the desired pod size for scaling. So if you don’t want to use vertical scaling you can go for horizontal scaling.</p> <p><strong>Note:</strong> The information is gathered from official Ansible and GitHub pages and the URLs are referred to in the post.</p>
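<p>As a rough illustration of the Python route: the sketch below patches a Deployment's container resources, so the Deployment's own RollingUpdate strategy (with <code>maxUnavailable: 0</code>) brings up the new pod before the old one is terminated. The deployment name, namespace and resource values are placeholders, not taken from the question:</p> <pre><code>from kubernetes import client, config

# load credentials from ~/.kube/config (use load_incluster_config() inside a pod)
config.load_kube_config()
apps = client.AppsV1Api()

new_resources = {&quot;requests&quot;: {&quot;cpu&quot;: &quot;500m&quot;, &quot;memory&quot;: &quot;512Mi&quot;}}

# strategic-merge patch: only the fields listed here are changed
patch = {
    &quot;spec&quot;: {
        &quot;template&quot;: {
            &quot;spec&quot;: {
                &quot;containers&quot;: [
                    {&quot;name&quot;: &quot;my-app&quot;, &quot;resources&quot;: new_resources}
                ]
            }
        }
    }
}

# changing the pod template triggers a rolling update: new pod first, old pod after
apps.patch_namespaced_deployment(name=&quot;my-app&quot;, namespace=&quot;default&quot;, body=patch)
</code></pre>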
Kranthiveer Dontineni
<p>currently I'm trying the following setup:</p> <p>I have:</p> <ul> <li>one cluster</li> <li>one Ingress Controller</li> <li>one url (myapp.onazure.com)</li> <li>two namespaces for two applications default and default-test</li> <li>two deployments, ingress objects, services for the namespaces</li> </ul> <p>I can easily reach my app from the default namespace with path based routing '/' as a prefix rule Now i have tried to configure the second namespace and following rule: /testing to hit another service</p> <p>Unfortunately i get an HTTP404 when i try to hit the following URL myapp.onazure.com/testing/openapi.json</p> <p>What did I miss?</p> <p><strong>Working Ingress 1</strong></p> <pre><code>kind: Ingress apiVersion: networking.k8s.io/v1 metadata: name: liveapi-ingress-object namespace: default annotations: kubernetes.io/ingress.class: public-nginx spec: tls: - hosts: - myapp-region1.onazure.com - myapp-region2.onazure.com secretName: ingress-tls-csi rules: - host: - myapp-region1.onazure.com http: paths: - path: / pathType: Prefix backend: service: name: liveapi-svc port: number: 8080 - host: myapp-region2.onazure.com http: paths: - path: / pathType: Prefix backend: service: name: liveapi-svc port: number: 8080 </code></pre> <p><strong>Not working Ingress 2</strong></p> <pre><code>kind: Ingress apiVersion: networking.k8s.io/v1 metadata: name: liveapi-ingress-object-testing namespace: default-testing annotations: kubernetes.io/ingress.class: public-nginx #nginx.ingress.kubernetes.io/rewrite-target: /testing spec: tls: - hosts: - myapp-region1.onazure.com - myapp-region2.onazure.com secretName: ingress-tls-csi-testing rules: - host: myapp-region1.onazure.com http: paths: - path: /testing #pathType: Prefix backend: service: name: liveapi-svc-testing port: number: 8080 - host: myapp-region2.onazure.com http: paths: - path: /testing #pathType: Prefix backend: service: name: liveapi-svc-testing port: number: 8080 </code></pre> <p>Maybe I am missing a rewrite target to simply '/' in the testing namespace ingress?</p>
Artur123
<p>Finally I figured out the missing part. I had to add the following statement to the not working ingress object:</p> <pre><code> annotations: kubernetes.io/ingress.class: public-nginx nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; nginx.ingress.kubernetes.io/rewrite-target: /$1 </code></pre> <p>Please see the complete ingress object:</p> <pre><code>kind: Ingress apiVersion: networking.k8s.io/v1 metadata: name: liveapi-ingress-object namespace: default-testing annotations: kubernetes.io/ingress.class: public-nginx nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: tls: - hosts: - myapp.onazure.com secretName: ingress-tls-csi-testing rules: - host: myapp.onazure.com http: paths: - path: /testing/(.*) pathType: Prefix backend: service: name: liveapi-svc-testing port: number: 8000 </code></pre>
Artur123
<p>I have an error trying to deploy the official <code>phpmyadmin</code> image locally in Kubernetes cluster. Please look at my <code>yaml</code> configs. I haven't any idea what I did wrong. I tried <code>phpmyadmin/phpmyadmin</code> image but the 404 error stays. I also viewed configs from other people but it doesn't differ from mine. This is my first experience in Kubernetes so maybe I don't know some development approaches.</p> <p><a href="https://i.stack.imgur.com/Cl7mI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cl7mI.png" alt="enter image description here" /></a></p> <p>ingress-service.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: rules: - http: paths: - path: /phpmyadmin/?(.*) pathType: Prefix backend: service: name: phpmyadmin-cluster-ip-service port: number: 80 </code></pre> <p>phpmyadmin-cluster-ip-service.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: phpmyadmin-cluster-ip-service spec: type: ClusterIP selector: app: phpmyadmin ports: - port: 80 targetPort: 80 protocol: TCP </code></pre> <p>phpmyadmin-deployment.yaml Ip 192.168.64.7 is given by minikube.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: phpmyadmin-deployment labels: tier: backend spec: replicas: 1 selector: matchLabels: app: phpmyadmin tier: backend template: metadata: labels: app: phpmyadmin tier: backend spec: restartPolicy: Always containers: - name: phpmyadmin image: phpmyadmin:latest ports: - name: phpmyadmin containerPort: 80 protocol: TCP imagePullPolicy: Always env: - name: PMA_ABSOLUTE_URI value: &quot;http://192.168.64.7/phpmyadmin/&quot; - name: PMA_VERBOSE value: &quot;PhpMyAdmin&quot; - name: PMA_HOST value: mysql-service - name: PMA_PORT value: &quot;3306&quot; - name: UPLOAD_LIMIT value: &quot;268435456&quot; - name: PMA_ARBITRARY value: &quot;0&quot; - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-secret key: MYSQL_ROOT_PASSWORD </code></pre> <p>I omitted MySQL <code>yaml</code> configs thinking it doesn't related to <code>phpmyadmin</code> issue but if they can help I will pushlish it too.</p>
Sergey Filipovich
<p>In this case, adding the annotation <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer"><code>nginx.ingress.kubernetes.io/rewrite-target: /$1</code></a> will fix it. How? (<em>NOTE: I will change service port to <code>8080</code> to better distinguish the ports of the <code>container</code> and <code>Service</code></em>).</p> <ol> <li>You visit <code>http://&lt;MINIKUBE IP&gt;/phpmyadmin/</code> on your browser.</li> <li>The NGINX Ingress controller receives your request, and rewrites the path <code>/phpmyadmin/</code> to <code>/</code>. The NGINX Ingress controller creates the request to the <code>Service</code> in <code>phpmyadmin-cluster-ip-service</code> at port <code>8080</code> (service port) which has the <code>targetPort</code> at <code>80</code> (container port) for the pods containing the label <code>app: phpmyadmin</code>. One of the matching pods happens to be at <code>172.17.0.4</code>:</li> </ol> <pre><code>&quot;GET /phpmyadmin/favicon.ico HTTP/1.1&quot; 200 22486 &quot;-&quot; ... 492 0.001 [default-phpmyadmin-cluster-ip-service-8080] [] 172.17.0.4:80 22486 0.000 200 ... </code></pre> <ol start="3"> <li>Because the request is now using the correct path for the <code>phpmyadmin</code> server, it responds with <code>200</code> and the requested resource. We can also see the corresponding logs in phpmyadmin:</li> </ol> <pre><code>172.17.0.3 - - [14/Nov/2022:21:58:43 +0000] &quot;GET /favicon.ico HTTP/1.1&quot; 200 22733 &quot;-&quot; ... </code></pre> <p>The IP <code>172.17.0.3</code> is of the NGINX Ingress Controller.</p> <p>There is also a similar <a href="https://stackoverflow.com/questions/61541812/ingress-nginx-how-to-serve-assets-to-application">question</a> with an even more detailed answer.</p>
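<p>Put together, the Ingress from the question would then look roughly like this - only the rewrite annotation is new, the capture group in <code>path</code> was already there, and I keep the original service port 80 (the walkthrough above used 8080 purely to tell the two ports apart):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /phpmyadmin/?(.*)
            pathType: Prefix
            backend:
              service:
                name: phpmyadmin-cluster-ip-service
                port:
                  number: 80
</code></pre>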
Ryan
<p>I have created a <code>Service</code> with a <strong>static</strong> <code>Endpoint</code> that points to two public IP addresses (<code>A.B.C.D</code> and <code>X.Y.V.Z</code>).</p> <pre><code>apiVersion: v1 kind: Service metadata: name: web-app spec: ports: - protocol: TCP port: 80 targetPort: 80 --- apiVersion: v1 kind: Endpoints metadata: name: web-app subsets: - addresses: - ip: A.B.C.D - ip: X.Y.V.Z ports: - port: 80 </code></pre> <p>Is there a health check for these IP addresses by default, what if one of the IP addresses goes down?<br /> If there is no pre-built health check, how do I configure it?</p>
Xavier123
<p>As per <a href="https://github.com/kubernetes/kubernetes/issues/77738" rel="nofollow noreferrer">this feature request</a> in the official Kubernetes repo and the <a href="https://github.com/kubernetes/kubernetes/issues/77738#issuecomment-491560980" rel="nofollow noreferrer">comment</a> written by <em><strong>Vllry</strong></em>,</p> <blockquote> <p>Only ready pods have an endpoint added to the Endpoints object (there is no concept of health checks at the service/endpoint level). This allows consumers of endpoints to avoid the need to health check. So I'm afraid this would be an out-of-scope feature request.</p> </blockquote> <p>Kubernetes doesn’t have built-in functionality to monitor manually defined endpoints such as these. If you need to monitor them you have to rely on other tooling, for example a <strong>service mesh</strong> or <strong>external monitoring tools</strong>.</p>
Kranthiveer Dontineni
<p>I am working with an AKS cluster and AKS comes with an pre-deployed instance of Gatekeeper for validating webhooks.</p> <p>However by design, AKS would only allow predefined or custom policies to be deployed through Azure policy portal. This is against the developer experience that I am trying to build, where developers would be free to deploy their own gatekeeper policies using kubectl.</p> <p>Hence this got me thinking if I can deploy a separate instance of gatekeeper on the same cluster and create a new validating webhook configuration ? Would that even work ?</p> <p>If yes, what all changes would need to be made.. Any thoughts ?</p>
Utopia
<p>You don't need another Gatekeeper instance. You can apply your policies to the pre-deployed instance of Gatekeeper. For this you have 2 options (a minimal example of option 1 follows below):</p> <ol> <li><a href="https://open-policy-agent.github.io/gatekeeper/website/docs/howto/" rel="nofollow noreferrer">OPA ConstraintTemplates</a></li> <li><a href="https://learn.microsoft.com/en-us/azure/aks/use-azure-policy#create-and-assign-a-custom-policy-definition-preview" rel="nofollow noreferrer">Azure Custom policy definitions</a></li> </ol>
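<p>For reference, a ConstraintTemplate/Constraint pair of the kind referenced in option 1 looks roughly like this (taken in spirit from the upstream Gatekeeper how-to; the names and the required label are illustrative only, and whether the Azure Policy add-on lets you apply such templates via <code>kubectl</code> still depends on how the add-on is configured):</p> <pre><code>apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{&quot;msg&quot;: msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) &gt; 0
          msg := sprintf(&quot;you must provide labels: %v&quot;, [missing])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [&quot;&quot;]
        kinds: [&quot;Namespace&quot;]
  parameters:
    labels: [&quot;owner&quot;]
</code></pre>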
Philip Welz
<p>Original problem. I would like to have a Kubernetes cluster with at least 2 nodes with zero GPU consumption. If a job is coming and takes one node, then autoscaler should create another spare node.</p> <p>I found out that I can rely on <code>DCGM_FI_DEV_GPU_UTIL</code> metrics. If <code>DCGM_FI_DEV_GPU_UTIL == 0</code> then the node is in &quot;idle&quot; mode. In PromQL I can just write <code>count(DCGM_FI_DEV_GPU_UTIL == 0)</code> and get the number of &quot;idle&quot; nodes.</p> <p>However, I do not understand how to write metricsQuery in Prometheus Adapter config. All examples that I found are about</p> <pre><code>(sum(rate(&lt;&lt;.Series&gt;&gt;{&lt;&lt;.LabelMatchers&gt;&gt;}[1m])) by (&lt;&lt;.GroupBy&gt;&gt;) </code></pre> <p>However, I need something like <code>count(&lt;&lt;.Series&gt;&gt; == 0)</code>, but this does not work. Any idea how I can get this metrics for HPA which indicates the number of nodes with no GPU consumption?</p>
Trarbish
<p>Your jobs are probably running as Kubernetes Pods, and you may have a configuration where only one such job Pod can run on a single Node. The first step is to expose your metric through the Prometheus adapter, which is described quite nicely <a href="https://blog.wyrihaximus.net/2021/01/scaling-php-fpm-based-on-utilization-demand-on-kubernetes/" rel="nofollow noreferrer">here</a>; this step ensures the HPA can add Pods (a sketch of an adapter rule is shown below).</p> <p>In the second step you need to configure a cluster autoscaler that will add another Node when needed. The cluster autoscaler depends on your Kubernetes solution provider (AWS, Azure, GCP...) and should be covered in their documentation. I personally use <a href="https://www.kubecost.com/kubernetes-autoscaling/kubernetes-cluster-autoscaler/" rel="nofollow noreferrer">Cluster autoscaler</a> and <a href="https://karpenter.sh/" rel="nofollow noreferrer">Karpenter</a>.</p>
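<p>To address the concrete question about <code>metricsQuery</code>: the templated query does not have to follow the <code>sum(rate(...))</code> pattern - any PromQL works as long as the placeholders are kept. A sketch of an adapter rule that counts idle GPUs, assuming <code>DCGM_FI_DEV_GPU_UTIL</code> carries the usual exporter labels (adjust <code>resources.overrides</code> to the labels you actually have), could look like this:</p> <pre><code>rules:
  - seriesQuery: 'DCGM_FI_DEV_GPU_UTIL'
    resources:
      overrides:
        namespace: {resource: &quot;namespace&quot;}
        pod: {resource: &quot;pod&quot;}
    name:
      as: &quot;gpu_idle_count&quot;
    metricsQuery: count(&lt;&lt;.Series&gt;&gt;{&lt;&lt;.LabelMatchers&gt;&gt;} == 0) by (&lt;&lt;.GroupBy&gt;&gt;)
</code></pre> <p>Two caveats: <code>count()</code> over an empty result returns no data rather than 0, and the adapter only exposes the number to the HPA - adding the spare node itself is still the cluster autoscaler's job.</p>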
Vitezslav Skacel
<p>I am experimenting in a small lab created with AutomatedLab that contains Windows Server 2022 machines running ActiveDirectory and SQLServer along with CentOS 8.5 machines running a Kubernetes cluster. My test application is a .Net 6 console application that simply connect to a SQLServer database running in the the lab over a trusted connection. It is containerized based on the official aspnet:6.0 image. The Kubernetes POD contains an InitContainer that executes kinit to generate a Kerberos token placed in a shared volume. I have made two versions of the test application: one that uses an OdbcConnection to connect to the database and the second one uses a SqlConnection. The version with the OdbcConnection successfully connects to the database but the one using the SqlConnection crashes when opening the connection to the database.</p> <p>Here is the code of the application using the OdbcConnection:</p> <pre><code>using (var connection = new OdbcConnection( &quot;Driver={ODBC Driver 17 for SQL Server};Server=sql1.contoso.com,1433;Database=KubeDemo;Trusted_Connection=Yes;&quot;)) { Log.Information(&quot;connection created&quot;); var command = new OdbcCommand (&quot;select * from KubeDemo.dbo.Test&quot;, connection); connection.Open(); Log.Information(&quot;Connection opened&quot;); using (var reader = command.ExecuteReader()) { Log.Information(&quot;Read&quot;); while (reader.Read()) { Console.WriteLine($&quot;{reader[0]}&quot;); } } } </code></pre> <p>The logs of the container show that it can connect to the database and read its content</p> <pre><code>[16:24:35 INF] Starting the application [16:24:35 INF] connection created [16:24:35 INF] Connection opened [16:24:35 INF] Read 1 </code></pre> <p>Here is the code of the application using the SqlConnection:</p> <pre><code>using (var connection = new SqlConnection( &quot;Server=sql1.contoso.com,1433;Initial Catalog=KubeDemo;Integrated Security=True;&quot;)) { Log.Information(&quot;connection created&quot;); var command = new SqlCommand (&quot;select * from KubeDemo.dbo.Test&quot;, connection); connection.Open(); Log.Information(&quot;Connection opened&quot;); using (var reader = command.ExecuteReader()) { Log.Information(&quot;Read&quot;); while (reader.Read()) { Console.WriteLine($&quot;{reader[0]}&quot;); } } } </code></pre> <p>The container crashes, based on the log when the connection is being opened:</p> <pre><code>[16:29:58 INF] Starting the application [16:29:58 INF] connection created </code></pre> <p>I have deployed the Kubernetes pod with a command &quot;tail -f /dev/null&quot; so that I could execute the application manually and I get an extra line:</p> <pre><code>[16:29:58 INF] Starting the application [16:29:58 INF] connection created Segmentation fault (core dumped) </code></pre> <p>According to Google, this is C++ error message that indicates an attempt to access an unauthorized memory section. Unfortunately I have no idea how to work around that. 
Does anyone has an idea how to get it to work?</p> <p>To be complete, here is the Dockerfile for the containerized application</p> <pre><code>FROM mcr.microsoft.com/dotnet/aspnet:6.0 ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update RUN apt-get install curl gnupg2 -y RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - RUN curl https://packages.microsoft.com/config/debian/11/prod.list &gt; /etc/apt/sources.list.d/mssql-release.list RUN apt-get update RUN ACCEPT_EULA=Y apt-get install --assume-yes --no-install-recommends --allow-unauthenticated unixodbc msodbcsql17 mssql-tools RUN apt-get remove curl gnupg2 -y RUN echo 'export PATH=&quot;$PATH:/opt/mssql-tools/bin&quot;' &gt;&gt; ~/.bash_profile RUN echo 'export PATH=&quot;$PATH:/opt/mssql-tools/bin&quot;' &gt;&gt; ~/.bashrc WORKDIR /app EXPOSE 80 COPY ./ . ENTRYPOINT [&quot;dotnet&quot;, &quot;DbTest.dll&quot;] </code></pre> <p>And the POD Helm template:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: dbtest labels: app: test spec: restartPolicy: Never volumes: - name: kbr5-cache emptyDir: medium: Memory - name: keytab-dir secret: secretName: back01-keytab defaultMode: 0444 - name: krb5-conf configMap: name: krb5-conf defaultMode: 0444 initContainers: - name: kerberos-init image: gambyseb/private:kerberos-init-0.2.0 imagePullPolicy: {{ .Values.image.pullPolicy }} securityContext: allowPrivilegeEscalation: false privileged: false readOnlyRootFilesystem: true env: - name: KRB5_CONFIG value: /krb5 volumeMounts: - name: kbr5-cache mountPath: /dev/shm - name: keytab-dir mountPath: /keytab - name: krb5-conf mountPath: /krb5 containers: - name: dbtest image: {{ .Values.image.repository }}:DbTest-{{ .Chart.AppVersion }} imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: ASPNETCORE_ENVIRONMENT value: &quot;{{ .Values.environment.ASPNETCORE }}&quot; - name: KRB5_CONFIG value: /krb5 {{/* command:*/}} {{/* - &quot;tail&quot;*/}} {{/* - &quot;-f&quot;*/}} {{/* - &quot;/dev/null&quot;*/}} securityContext: allowPrivilegeEscalation: true privileged: true ports: - containerPort: 80 volumeMounts: - name: kbr5-cache mountPath: /dev/shm - name: krb5-conf mountPath: /krb5 - name: keytab-dir mountPath: /keytab {{/* - name: kerberos-refresh*/}} {{/* image: gambyseb/private:kerberos-refresh-0.1.0*/}} {{/* imagePullPolicy: {{ .Values.image.pullPolicy }}*/}} {{/* env:*/}} {{/* - name: KRB5_CONFIG*/}} {{/* value: /krb5*/}} {{/* volumeMounts:*/}} {{/* - name: kbr5-cache*/}} {{/* mountPath: /dev/shm*/}} {{/* - name: keytab-dir*/}} {{/* mountPath: /keytab*/}} {{/* - name: krb5-conf*/}} {{/* mountPath: /krb5*/}} imagePullSecrets: - name: {{ .Values.image.pullSecret }} </code></pre>
sebgamby
<p>This may not be Auth related.</p> <p>If you are deploying to a Linux container you need to make sure you don't deploy System.Data.SqlClient, as this is a Windows-only library. It will just blow up your container (as you are experiencing) when it first loads the library.</p> <p>I found that although I added Microsoft.Data.SqlClient it didn't get picked up - I think I was leaving Dapper or EF to add the dependency and it went into the release as System.Data.SqlClient. As the container blew up in AWS I had very little feedback as to the cause!</p> <p>See <a href="https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/" rel="nofollow noreferrer">https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/</a></p>
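<p>If you want to switch explicitly (rather than relying on Dapper/EF to pull in a client), the package swap is a one-liner, run from the project directory:</p> <pre><code># remove the Windows-oriented client and reference the cross-platform one
dotnet remove package System.Data.SqlClient
dotnet add package Microsoft.Data.SqlClient
</code></pre> <p>and then change the using directive from <code>System.Data.SqlClient</code> to <code>Microsoft.Data.SqlClient</code>; the <code>SqlConnection</code>/<code>SqlCommand</code> API stays the same.</p>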
Alex Sheppard-Godwin
<p>In Azure K8s service, you can scale up the node pool but only we define the min and max nodes. When i check the node pool scale set scale settings, i found it set to manual. So i assume that the Node Pool auto scale does't rely on the belonging scale set, but i wonder, can we just rely on the scale set auto scale with the several metric roles instead of the very limited Node Pool scale settings ?</p>
Sameh Selem
<p>The AKS autoscaling works slightly differently from the VMSS autoscaling.</p> <p>From the <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler" rel="nofollow noreferrer">official docs</a>:</p> <blockquote> <p>The cluster autoscaler watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes.</p> </blockquote> <p>The AKS autoscaler is tightly coupled with the control plane and the kube-scheduler, so it takes resource requests and limits into account, which is a far better scaling method for Kubernetes workloads than the metric-based VMSS autoscaler - which is in any case not supported for AKS node pools:</p> <blockquote> <p>The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI.</p> </blockquote>
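<p>So the knob to turn is the cluster autoscaler on the node pool itself, e.g. (resource group, cluster and pool names are placeholders):</p> <pre><code>az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name applicationpool \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
</code></pre> <p>If you need scaling on other signals than pending pods, do it at the workload level (HPA on custom metrics, KEDA), which in turn creates pending pods that make the cluster autoscaler add nodes.</p>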
Philip Welz
<p>is it possible to retrieve token to azure aks server (same which is in .kube/config), using Azure Credentials, programatically c#/.net 6.</p> <p>I know that in powershell to retrieve it i can run &quot;az aks get-credentials&quot; (yaml).</p>
Kotletmeknow
<blockquote> <p>Yes, it is possible to retrieve a token from an Azure server using Azure credentials programmatically in C#.</p> </blockquote> <p>Below is the code snippet, which I have followed and it worked for me (the scope decides which API the token is issued for; replace the placeholders with your own values):</p> <pre><code>// NuGet: Microsoft.Identity.Client and Microsoft.Graph (4.x, which still provides DelegateAuthenticationProvider)
using Microsoft.Identity.Client;
using Microsoft.Graph;

IConfidentialClientApplication client = ConfidentialClientApplicationBuilder
    .Create(&quot;&lt;Client-Id&gt;&quot;)
    .WithClientSecret(&quot;&lt;Client-Secret&gt;&quot;)
    .WithAuthority(&quot;https://login.microsoftonline.com/&lt;Tenant-ID&gt;&quot;)
    .Build();

// client-credentials flow: request an app-only token for the target API
List&lt;string&gt; scopes = new List&lt;string&gt; { &quot;https://graph.microsoft.com/.default&quot; };

AuthenticationResult result = await client
    .AcquireTokenForClient(scopes)
    .ExecuteAsync();

// optional: attach the token to a Graph client
GraphServiceClient graphClient = new GraphServiceClient(
    new DelegateAuthenticationProvider(
        async (requestMessage) =&gt;
        {
            requestMessage.Headers.Authorization =
                new System.Net.Http.Headers.AuthenticationHeaderValue(&quot;Bearer&quot;, result.AccessToken);
        }));

// the raw bearer token
string token = result.AccessToken;
</code></pre> <p>Retrieving the token</p> <p><img src="https://i.stack.imgur.com/S5UbT.png" alt="enter image description here" /></p>
Rajesh Mopati
<p>We're using <code>sasl_tls</code> mechanism with bitnami/kafka helm chart. We're using Let's Encrypt and cert-manager for issuing the certificate. Created a secret out of the Let's Encrypt generated certificate and passed the secret to the <code>existingSecrets</code> parameter in the helm chart. Now when I'm using KafkaJS library to connect to the Kafka broker, with <code>ssl: true</code> it is throwing an error:</p> <pre><code>KafkaJSConnectionError: Connection error: unable to verify the first certificate </code></pre> <p><strong>Detailed Steps/How to generate:</strong></p> <ul> <li>Enabled external access to kafka chart so that it gives us an IP at port 9094</li> </ul> <pre><code>externalAccess.enabled: true externalAccess.autoDiscovery.enabled: true externalAccess.service.type: LoadBalancer externalAccess.service.ports.external: 9094 externalAccess.service.domain: &quot;&quot; </code></pre> <ul> <li>Bound this IP to a domain <code>xyz.com</code></li> <li>Bound this domain name to Let's Encrypt certificate issuer to issue certificate for this domain</li> <li><code>tls.crt</code> and <code>tls.key</code> are generated</li> <li>Renamed these files and used these to create a secret</li> </ul> <pre><code>kubectl create secret generic kafka-tls-0 --from-file=tls.crt=kafka-0.tls.crt --from-file=tls.key=kafka-0.tls.key </code></pre> <ul> <li>Modified chart value to configure tls part</li> </ul> <pre><code>tls.type: pem tls.pemChainIncluded: true tls.existingSecrets: [&quot;kafka-tls-0&quot;] </code></pre> <ul> <li>Applied the values of the chart (started broker)</li> <li>Now in KafkaJS client setup, tried to pass value to the <code>brokers</code> parameter in either format <code>ip:9094</code> or <code>xyz.com:9094</code>, also passed <code>ssl:true</code></li> </ul> <p><strong>My Questions:</strong></p> <ul> <li><p>Is the flow correct? Or are we going to the wrong direction?</p> </li> <li><p>What is the reason behind the problem? Is this the certificate chain that is being being wrong? (seems like it is!)</p> </li> <li><p>Is there any other chart that I can use to achieve my goal?</p> </li> </ul> <p><strong>Followup Question:</strong></p> <ol> <li>If we can make it work, what will be the next steps for ensuring auto-renewal of the certificates? Is it managed automatically? Or should we have to maintain a script for Lets' Encrypt certificate auto-renewal?</li> </ol>
Azman Amin
<p>There could be multiple causes. I'll try to list what needs to be true for this to work:</p> <ol> <li>Your node.js KafkaJS client should have a certificate store that is able to verify the CA that signed the Let's Encrypt certificate. node.js has a built-in list of certificates, and you can add to it. I have not checked but I expect the Let's Encrypt root CAs to be there.</li> </ol> <p>A couple years ago Let's Encrypt switched root CAs, so if you have an old version of node.js that could be it.</p> <ol start="2"> <li>The Kafka broker must present a certificate chain (not just your signed certificate) that includes a certificate that the node.js client can verify. Depending on which CAs your client can verify, this could mean the chain needs to go as far as the root CA (as opposed to an intermediate CA).</li> </ol> <p>You should check which certificates are in your chain. You can do this with the OpenSSL CLI: <code>openssl x509 -in cert.pem -text -noout</code></p> <p>Specific advice for the Bitnami Kafka chart: we've had trouble with how the scripts included with the chart deal with PEM keys and cert chains, where it would sometimes not extract the entire chain as it processes the PEM, and then Kafka would only see a partial chain.</p> <p>I would try to use the JKS format (Java keystore &amp; truststore) instead and see if that helps. You would create a JKS keystore with your key and a truststore with all the certificates in the chain.</p> <p>Regarding auto-renewal of certificates - you should be able to achieve that with cert-manager, however that might be challenging with the Bitnami Kafka chart as it's not suited to renewing certificates periodically, and is less suited for short-lived certificates from a CA like Let's Encrypt. Normally, you'd use Let's Encrypt with a load balancer like NGINX, you'd usually have a Kubernetes ingress controller that handles noticing the new certificates and reloading the load balancer.</p> <p>In your case, since you are trying to generate TLS certificates for use by your backend services to communicate with Kafka, you might have an easier time doing this with something that was intended for inter-service communication (which Let's Encrypt is not), like SPIRE and a matching Kubernetes operator.</p> <p><a href="https://github.com/spiffe/spire" rel="nofollow noreferrer">SPIRE</a>, which is a CNCF project that deals with attesting workload identities and representing them cryptographically - in your case as a TLS keypair for the Kafka server. It takes care of things like renewing the certificates.</p> <p>To make SPIRE easy to use in Kubernetes, deploy it together with <a href="https://github.com/otterize/spire-integration-operator" rel="nofollow noreferrer">Otterize SPIRE integration operator</a>, which uses SPIRE to generate TLS credentials, saves them in Kubernetes secrets, and takes care of refreshing the secrets as the certificates require renewal by SPIRE. You deploy it in your cluster than annotate pods with what you'd like the secret to be called that holds the certificates, and you can use other annotations to configure things like whether the format is PEM or JKS or what the TTL is. That set of configuration should make it easy to get it working with Bitnami. 
We use it with the Bitnami chart successfully, and <a href="https://docs.otterize.com/guides/ibac-for-k8s-kafka/" rel="nofollow noreferrer">even have a tutorial for getting it working with Bitnami</a> - stop at the section that configures ACLs if all you want is TLS.</p> <p>Since you also mentioned you use SASL, you might want to just replace the username/password completely with certificates and switch to mTLS. If you also want to add Kafka ACLs into the mix and allow access to certain topics/operations only for certain workloads, you can also deploy the <a href="https://github.com/otterize/intents-operator" rel="nofollow noreferrer">Otterize intents operator</a>. It lets you declare which topics a workload needs access to, and works together with SPIRE and built-in Kafka ACLs so that workloads can only access what they've declared.</p>
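<p>For the JKS route suggested earlier in this answer, the conversion from the cert-manager PEM files can be done with standard openssl/keytool commands - file names, aliases and passwords below are placeholders, and <code>chain.pem</code>/<code>root-ca.pem</code> stand for the CA chain you extracted:</p> <pre><code># bundle key + leaf cert (+ chain) into PKCS12, then convert to a JKS keystore
openssl pkcs12 -export -in tls.crt -inkey tls.key -certfile chain.pem \
  -name kafka -out kafka.p12 -passout pass:changeit

keytool -importkeystore -srckeystore kafka.p12 -srcstoretype PKCS12 \
  -srcstorepass changeit -destkeystore kafka.keystore.jks -deststorepass changeit

# truststore: import each CA certificate (keytool imports one cert per call)
keytool -importcert -noprompt -alias rootca -file root-ca.pem \
  -keystore kafka.truststore.jks -storepass changeit
</code></pre> <p>Then point the chart's JKS settings at these files and verify with <code>openssl s_client -connect xyz.com:9094 -showcerts</code> that the broker actually presents the full chain.</p>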
orisho
<p>I am trying to write a network policy on Kubernetes that works under AWS EKS. What I want to achieve is to allow traffic to pod/pods from the same Namespace and allow external traffic that is forwarded from AWS ALB Ingress.</p> <p>AWS ALB Ingress is created under the same NameSpace so I was thinking that only using <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/04-deny-traffic-from-other-namespaces.md" rel="nofollow noreferrer">DENY all traffic from other namespaces</a> would suffice but when I use that traffic from ALB Ingress Load Balancer (whose internal IP addresses are at at the same nameSpace with the pod/pods) are not allowed. Then if I add <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/08-allow-external-traffic.md" rel="nofollow noreferrer">ALLOW traffic from external clients</a> it allows to Ingress but ALSO allows other namespaces too.</p> <p>So my example is like: (this does not work as expected)</p> <pre><code>--- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-from-other-namespaces namespace: os spec: podSelector: matchLabels: ingress: - from: - podSelector: {} --- kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-external namespace: os spec: podSelector: matchLabels: app: nginx tier: prod customer: os ingress: - ports: - port: 80 from: [] </code></pre> <p>When using first policy ALB Ingress is blocked, with adding second one other namespaces are also allowed too which i dont want. I can allow only internal IP address of AWS ALB Ingress but it can change over time and it is created dynamically.</p>
Omer Sen
<p>The semantics of the built-in Kubernetes NetworkPolicies are kind of fiddly. There are no deny rules, only allow rules.</p> <p>The way they work is if no network policies apply to a pod, then all traffic is allowed. Once there is a network policy that applies to a pod, then all traffic <em>not allowed</em> by that policy is blocked.</p> <p>In other words, you can't say something like &quot;deny this traffic, allow all the rest&quot;. You have to effectively say, &quot;allow all the rest&quot;.</p> <p><a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html#:%7E:text=The%20AWS%20Load%20Balancer%20Controller%20supports%20the%20following%20traffic%20modes%3A" rel="nofollow noreferrer">The documentation for the AWS ALB Ingress controller states that traffic can either be sent to a NodePort for your service, or directly to pods</a>. This means that the traffic originates from an AWS IP address outside the cluster.</p> <p>For traffic that has a source that isn't well-defined, such as traffic from AWS ALB, this can be difficult - you don't know what the source IP address will be.</p> <p>If you are trying to allow traffic from the Internet using the ALB, then it means anyone that can reach the ALB will be able to reach your pods. In that case, there's effectively no meaning to blocking traffic within the cluster, as the pods will be able to connect to the ALB, even if they can't connect directly.</p> <p>My suggestion then is to just create a network policy that allows all traffic to the pods the Ingress covers, but have that policy as specific as possible - for example, if the Ingress accesses a specific port, then have the network policy only allow that port. This way you can minimize the attack surface within the cluster only to that which is Internet-accessible.</p> <p>Any other traffic to these pods will need to be explicitly allowed.</p> <p>For example:</p> <pre><code>--- kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-external spec: podSelector: matchLabels: app: &lt;your-app&gt; # app-label ingress: - from: [] ports: - port: 1234 # the port which should be Internet-accessible </code></pre> <p>This is actually a problem we faced when implementing the Network Policy plugin for the Otterize Intents operator - the operator lets you declare which pods you want to connect to within the cluster and block all the rest by automatically creating network policies and labeling pods, but we had to do that without inadvertently blocking external traffic once the first network policy had been created.</p> <p>We settled on automatically detecting whether a <code>Service</code> resource of type <code>LoadBalancer</code> or <code>NodePort</code> exists, or an <code>Ingress</code> resource, and creating a network policy that allows all traffic to those ports, as in the example above. 
A potential improvement for that is to support specific Ingress controllers that have in-cluster pods (so, not AWS ALB, but it could be the nginx ingress controller, for example), and only allow traffic from the specific ingress pods.</p> <p>Have a look here: <a href="https://github.com/otterize/intents-operator" rel="nofollow noreferrer">https://github.com/otterize/intents-operator</a> And the documentation page explaining this: <a href="https://docs.otterize.com/components/intents-operator/#network-policies" rel="nofollow noreferrer">https://docs.otterize.com/components/intents-operator/#network-policies</a></p> <p>If you want to use this and add support for a specific Ingress controller you're using, hop onto the Slack or open an issue and we can work on it together.</p>
orisho
<p>I'm trying to fetch the logs from a pod running in GKE, but I get this error:</p> <pre><code>I0117 11:42:54.468501 96671 round_trippers.go:466] curl -v -XGET -H &quot;Accept: application/json, */*&quot; -H &quot;User-Agent: kubectl/v1.26.0 (darwin/arm64) kubernetes/b46a3f8&quot; 'https://x.x.x.x/api/v1/namespaces/pleiades/pods/pleiades-0/log?container=server' I0117 11:42:54.569122 96671 round_trippers.go:553] GET https://x.x.x.x/api/v1/namespaces/pleiades/pods/pleiades-0/log?container=server 500 Internal Server Error in 100 milliseconds I0117 11:42:54.569170 96671 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 100 ms Duration 100 ms I0117 11:42:54.569186 96671 round_trippers.go:577] Response Headers: I0117 11:42:54.569202 96671 round_trippers.go:580] Content-Type: application/json I0117 11:42:54.569215 96671 round_trippers.go:580] Content-Length: 226 I0117 11:42:54.569229 96671 round_trippers.go:580] Date: Tue, 17 Jan 2023 19:42:54 GMT I0117 11:42:54.569243 96671 round_trippers.go:580] Audit-Id: a25a554f-c3f5-4f91-9711-3f2970376770 I0117 11:42:54.569332 96671 round_trippers.go:580] Cache-Control: no-cache, private I0117 11:42:54.571392 96671 request.go:1154] Response Body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;Get \&quot;https://10.6.128.40:10250/containerLogs/pleiades/pleiades-0/server\&quot;: x509: certificate is valid for 127.0.0.1, not 10.6.128.40&quot;,&quot;code&quot;:500} I0117 11:42:54.572267 96671 helpers.go:246] server response object: [{ &quot;metadata&quot;: {}, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;Get \&quot;https://10.6.128.40:10250/containerLogs/pleiades/pleiades-0/server\&quot;: x509: certificate is valid for 127.0.0.1, not 10.6.128.40&quot;, &quot;code&quot;: 500 }] </code></pre> <p>How do I prevent this from happening?</p>
Sienna
<p>One possible reason for this error is that both metrics-server and the kubelet listen on port 10250. This is usually not a problem because metrics-server runs in its own namespace, but if it runs in the host network the conflict can prevent metrics-server from starting correctly.</p> <p>You can confirm this behavior by running the following command:</p> <pre><code>$ kubectl -n kube-system get pods -l k8s-app=metrics-server -o yaml | grep 10250 - --secure-port=10250 - containerPort: 10250 </code></pre> <p>If you can see a <code>hostPort: 10250</code> in the YAML of the metrics-server pod, run the following command to delete the metrics-server deployment on that cluster:</p> <pre><code>$ kubectl -n kube-system delete deployment -l k8s-app=metrics-server </code></pre> <p>Metrics server will be recreated correctly by the GKE infrastructure. It should be recreated in ~15 seconds on clusters with a new addon manager, but could take up to 15 minutes on very old clusters.</p>
Manish Bavireddy
<p>I have a Kubernetes service running in Azure (AKS - Azure Kubernetes Service). It runs with the external IP. I am trying to access another service that is running in my local machine from Azure AKS. for ex: http://:9089 (Not able to access from Azure AKS)</p> <p>Is there any way where I can access my local URL from Azure AKS? Kindly help.</p>
romanreigns
<p>You could use DynDNS or just deploy your service to another namespace within the existing AKS.</p>
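<p>One way to wire that up, assuming you expose your local service through a DynDNS (or any public DNS) name and forward the port on your side, is an <code>ExternalName</code> service inside the cluster, so pods can address it like any in-cluster service - the hostname below is a placeholder:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: local-api
spec:
  type: ExternalName
  externalName: myhome.dyndns.example.org
</code></pre> <p>Pods can then call <code>http://local-api:9089</code>, which resolves to your DynDNS name. Keep in mind the AKS nodes still need network reachability to that address (public IP with port forwarding, or a VPN into your network).</p>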
Philip Welz
<p>So I am running a k3s cluster on 3 RHEL 8 Servers and I want to uninstall Longhorn from the cluster using <code>helm uninstall longhorn -n longhorn-system</code></p> <p>Now all Longhorn pods, pvcs, etc. got deleted but one volume remained that is stuck in state deleting! Here some additional infos about the volume:</p> <pre><code>Name: pvc-f1df1bf8-96f4-4b28-a14d-2b20809610df Namespace: longhorn-system Labels: longhornvolume=pvc-f1df1bf8-96f4-4b28-a14d-2b20809610df recurring-job-group.longhorn.io/default=enabled setting.longhorn.io/remove-snapshots-during-filesystem-trim=ignored setting.longhorn.io/replica-auto-balance=ignored setting.longhorn.io/snapshot-data-integrity=ignored Annotations: &lt;none&gt; API Version: longhorn.io/v1beta2 Kind: Volume Metadata: Creation Timestamp: 2023-08-21T07:31:56Z Deletion Grace Period Seconds: 0 Deletion Timestamp: 2023-08-24T09:32:05Z Finalizers: longhorn.io Generation: 214 Resource Version: 7787140 UID: 6ffb214d-8ed7-4b7b-910e-a2936b764223 Spec: Standby: false Access Mode: rwo Backing Image: Base Image: Data Locality: disabled Data Source: Disable Frontend: false Disk Selector: Encrypted: false Engine Image: longhornio/longhorn-engine:v1.4.1 From Backup: Frontend: blockdev Last Attached By: Migratable: false Migration Node ID: Node ID: Node Selector: Number Of Replicas: 3 Recurring Jobs: Replica Auto Balance: ignored Restore Volume Recurring Job: ignored Revision Counter Disabled: false Size: 4294967296 Snapshot Data Integrity: ignored Stale Replica Timeout: 30 Unmap Mark Snap Chain Removed: ignored Status: Actual Size: 0 Clone Status: Snapshot: Source Volume: State: Conditions: Last Probe Time: Last Transition Time: 2023-08-21T07:31:57Z Message: Reason: Status: False Type: toomanysnapshots Last Probe Time: Last Transition Time: 2023-08-21T07:31:57Z Message: Reason: Status: True Type: scheduled Last Probe Time: Last Transition Time: 2023-08-21T07:31:57Z Message: Reason: Status: False Type: restore Current Image: longhornio/longhorn-engine:v1.4.1 Current Node ID: Expansion Required: false Frontend Disabled: false Is Standby: false Kubernetes Status: Last PVC Ref At: 2023-08-24T09:32:04Z Last Pod Ref At: 2023-08-24T09:24:48Z Namespace: backend Pv Name: Pv Status: Pvc Name: pvc-longhorn-db Workloads Status: Pod Name: wb-database-deployment-8685cbdcfc-2dfs2 Pod Status: Failed Workload Name: wb-database-deployment-8685cbdcfc Workload Type: ReplicaSet Last Backup: Last Backup At: Last Degraded At: Owner ID: node3 Pending Node ID: Remount Requested At: 2023-08-24T09:23:55Z Restore Initiated: false Restore Required: false Robustness: unknown Share Endpoint: Share State: State: deleting Events: &lt;none&gt; </code></pre> <p>I tried to remove the finalizers but that didn't help for me. Does anyone have an idea why that volume can't be uninstalled?</p>
Oberwalder Sven
<p>If you deleted the PVC and already tried the finalizer patch, it usually means some resources associated with this volume are still running - possibly in a different namespace. First, run these commands again to make sure the finalizers are actually removed:</p> <pre><code># clear the finalizers, then force-delete the PV
kubectl patch pv &lt;pv_name&gt; -p '{&quot;metadata&quot;:{&quot;finalizers&quot;:null}}'
kubectl delete pv &lt;pv_name&gt; --grace-period=0 --force
</code></pre> <p>If the PV is still not deleted, check which workloads reference the claim with this command:</p> <pre><code>PVC_NAME=&quot;&lt;pvc-name&gt;&quot;; kubectl get pods,deployments,statefulsets,daemonsets,replicasets,jobs,cronjobs --all-namespaces -o json | jq --arg PVC &quot;$PVC_NAME&quot; '.items[] | select(.spec.template.spec.volumes[]?.persistentVolumeClaim.claimName == $PVC) | .metadata.namespace + &quot;/&quot; + .metadata.name + &quot; (&quot; + .kind + &quot;)&quot;' </code></pre> <p>This returns the workloads where this volume is still in use; delete those resources manually. Once they are gone the PV will be deleted. I hope this helps.</p>
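<p>Since the object that is stuck in your output is the Longhorn <code>Volume</code> custom resource itself (not a plain Kubernetes PV), the same finalizer-clearing idea can be tried on that resource as a last resort - the volume name is taken from your description, and be aware this bypasses Longhorn's own cleanup:</p> <pre><code>kubectl -n longhorn-system patch volumes.longhorn.io pvc-f1df1bf8-96f4-4b28-a14d-2b20809610df \
  --type merge -p '{&quot;metadata&quot;:{&quot;finalizers&quot;:[]}}'
</code></pre>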
tauqeerahmad24
<p>Our current Production Elasticsearch cluster for logs collection is manually managed and runs on AWS. I'm creating the same cluster using ECK deployed with Helm under Terraform. I was able to get all the features replicated (S3 repo for snapshots, ingest pipelines, index templates, etc) and deployed, so, first deployment is perfectly working. But when I tried to update the cluster (changing the ES version from 8.3.2 to 8.5.2) I get this error:</p> <pre><code>│ Error: Provider produced inconsistent result after apply │ │ When applying changes to kubernetes_manifest.elasticsearch_deploy, provider &quot;provider\[&quot;registry.terraform.io/hashicorp/kubernetes&quot;\]&quot; produced an unexpected new │ value: .object: wrong final value type: attribute &quot;spec&quot;: attribute &quot;nodeSets&quot;: tuple required. │ │ This is a bug in the provider, which should be reported in the provider's own issue tracker. </code></pre> <p>I stripped down my elasticsearch and kibana manifests to try to isolate the problem. Again, I previously deployed the eck operator with its helm chart: it works, because the first deployment of the cluster is flawless.</p> <p>I have in my main.tf:</p> <pre><code>resource &quot;kubernetes_manifest&quot; &quot;elasticsearch_deploy&quot; { field_manager { force_conflicts = true } computed_fields = \[&quot;metadata.labels&quot;, &quot;metadata.annotations&quot;, &quot;spec.finalizers&quot;, &quot;spec.nodeSets&quot;, &quot;status&quot;\] manifest = yamldecode(templatefile(&quot;config/elasticsearch.yaml&quot;, { version = var.elastic_stack_version nodes = var.logging_elasticsearch_nodes_count cluster_name = local.cluster_name })) } </code></pre> <pre><code>resource &quot;kubernetes_manifest&quot; &quot;kibana_deploy&quot; { field_manager { force_conflicts = true } depends_on = \[kubernetes_manifest.elasticsearch_deploy\] computed_fields = \[&quot;metadata.labels&quot;, &quot;metadata.annotations&quot;, &quot;spec.finalizers&quot;, &quot;spec.nodeSets&quot;, &quot;status&quot;\] manifest = yamldecode(templatefile(&quot;config/kibana.yaml&quot;, { version = var.elastic_stack_version cluster_name = local.cluster_name namespace = local.stack_namespace })) } </code></pre> <p>and my manifests are:</p> <pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: annotations: eck.k8s.elastic.co/downward-node-labels: &quot;topology.kubernetes.io/zone&quot; name: ${cluster_name} namespace: ${namespace} spec: version: ${version} volumeClaimDeletePolicy: DeleteOnScaledownAndClusterDeletion monitoring: metrics: elasticsearchRefs: - name: ${cluster_name} logs: elasticsearchRefs: - name: ${cluster_name} nodeSets: - name: logging-nodes count: ${nodes} config: node.store.allow_mmap: false]] </code></pre> <pre><code>apiVersion: kibana.k8s.elastic.co/v1 kind: Kibana metadata: name: ${cluster_name} namespace: ${namespace} spec: version: ${version} count: 1 elasticsearchRef: name: ${cluster_name} monitoring: metrics: elasticsearchRefs: - name: ${cluster_name} logs: elasticsearchRefs: - name: ${cluster_name} podTemplate: metadata: labels: stack_name: ${stack_name} stack_repository: ${stack_repository} spec: serviceAccountName: ${service_account} containers: - name: kibana resources: limits: memory: 1Gi cpu: &quot;1&quot; </code></pre> <p>When I change the version, testing a cluster upgrade (e.g. going from 8.3.2 to 8.5.2), I get the error mentioned at the beginning of this post. Is it a eck operator bug or I'm doing something wrong? 
Do I need to add some other entity in the 'computed_fields' and remove 'force_conflicts'?</p>
Roberto D'Arco
<p>In the end, a colleague of mine found that indeed you have to add the whole &quot;spec&quot; to the computed_fields, like this:</p> <pre><code>resource &quot;kubernetes_manifest&quot; &quot;elasticsearch_deploy&quot; { field_manager { force_conflicts = true } computed_fields = [&quot;metadata.labels&quot;, &quot;metadata.annotations&quot;, &quot;spec&quot;, &quot;status&quot;] manifest = yamldecode(templatefile(&quot;config/elasticsearch.yaml&quot;, { version = var.elastic_stack_version nodes = var.logging_elasticsearch_nodes_count cluster_name = local.cluster_name })) } </code></pre> <p>This way I got a proper cluster upgrade, without full cluster restart.</p> <p>Underlying reason: the eck operator makes changes to the spec section. Even if you just do a terraform apply without any changes (and &quot;spec&quot; is not added to the computed_fields), terraform will find that something has changed and will perform an update.</p>
Roberto D'Arco
<p>I am a little confused about API server address host that is provided by AKS why am i not able to access the cluster application via the api server address or is there any way i can do that?? I know we can always use LoadBalancer or a NodePort service to access any application inside the cluster externally, but can we do that with the API server address as well.</p>
ashu8912
<p>No, with the API server address you cannot access your application.</p> <p>You can only access the Pod or the Service (just use type <code>ClusterIP</code> to avoid creating an Azure LoadBalancer) from inside your cluster. For external access you would need an <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="nofollow noreferrer">ingress-controller</a> combined with a Service of type <code>LoadBalancer</code>.</p> <p>You could use the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/" rel="nofollow noreferrer">API server</a> to create, read, update &amp; delete Kubernetes resources like services, pods, deployments, secrets etc.</p>
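<p>To illustrate what the API server address is for, here is a quick sketch of talking to it directly - it lists pods rather than serving your application, and assumes you have a service account with permission to list pods:</p> <pre><code># create a short-lived token for a service account (kubectl 1.24+)
TOKEN=$(kubectl -n default create token default)

# this talks to the Kubernetes API, not to your app
curl -k -H &quot;Authorization: Bearer $TOKEN&quot; \
  https://&lt;api-server-address&gt;/api/v1/namespaces/default/pods
</code></pre>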
Philip Welz
<p>I am running k3s version 1.25.5 and I would like to define traefik as an ingress for one of the services defined through <a href="https://helm.camunda.io/" rel="nofollow noreferrer">an external helm chart</a>. I am struggling to find the right ingress definition. I tried with the below yaml file but that gives an error stating</p> <pre><code>error: resource mapping not found for name: &quot;c8-ingress&quot; namespace: &quot;&quot; from &quot;zeebe-traefik.yaml&quot;: no matches for kind &quot;Ingress&quot; in version &quot;extensions/v1beta1&quot; ensure CRDs are installed first </code></pre> <p>This seems to be because of the an old apiVersion used in the yaml file. How to do it the right way?</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: c8-ingress annotations: kubernetes.io/ingress.class: &quot;traefik&quot; spec: rules: - http: paths: - path: &quot;/&quot; backend: serviceName: dev-zeebe-gateway servicePort: 26500 </code></pre> <p>Thanks.</p>
Andy Dufresne
<p>Your example is using an outdated Ingress definition. In v1.25.x you need to use the stable <code>networking.k8s.io/v1</code> API, as described <a href="https://v1-25.docs.kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">here</a>.</p> <p>It is also recommended to provide the fitting namespace. This is useful for documentation, but also required for <a href="https://v1-25.docs.kubernetes.io/docs/concepts/services-networking/ingress/#resource-backend" rel="nofollow noreferrer">resource backends</a>. It will also avoid adding <code>-n YOURNAMESPACE</code> to every <code>kubectl apply</code>.</p> <p>In your case, this may look something like:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: c8-ingress namespace: YOURNAMESPACE spec: rules: - http: paths: - pathType: Prefix path: / backend: service: name: dev-zeebe-gateway port: number: 26500 </code></pre> <p>I hope this helps to solve your issue.</p> <p>In many cases, you can run <code>kubectl explain RESOURCE</code> to get useful links and resources for a given api-resource.</p>
dschier
<p>I have a resource group on Azure which contains several resources that were originally created with a Terraform script. Somehow I deleted the Kubernetes cluster resource and also reset the TF state. My intention was recreating this AKS module, but now when I run the terraform script pipeline I get the error that the resource already exists for the following resources:</p> <p><strong>module.keyvault.azurerm_key_vault_access_policy.service_principle_policy: Creating... module.keyvault.azurerm_key_vault_access_policy.users_policy: Creating... module.keyvault.azurerm_key_vault_access_policy.readers_policy: Creating... module.rg.azurerm_resource_group.rg: Creating... module.keyvault.azurerm_key_vault_access_policy.readers_policy: Creating...</strong></p> <p>My question is, how could I recreate the AKS cluster while keeping the current resources?</p> <p>Thanks in advance.</p>
Felipe
<p>As you have deleted the resources from state file, one possible way is to import the same resource via terraform import command like below:-</p> <pre><code>terraform import module.keyvault.azurerm_key_vault_access_policy.service_principle_policy &lt;existing_key_vault_id&gt;/accesspolicies/&lt;policy_id&gt; terraform import module.keyvault.azurerm_key_vault_access_policy.users_policy &lt;existing_key_vault_id&gt;/accesspolicies/&lt;policy_id&gt; terraform import module.keyvault.azurerm_key_vault_access_policy.readers_policy &lt;existing_key_vault_id&gt;/accesspolicies/&lt;policy_id&gt; terraform import module.rg.azurerm_resource_group.rg /subscriptions/&lt;subscription_id&gt;/resourceGroups/&lt;resource_group_name&gt; </code></pre> <p>Another way is to get the terraform configuration of your existing state file and then add terraform configuration code blocks to match the existing state.</p> <p>Check existing state like below:-</p> <pre><code>terraform state list </code></pre> <p><img src="https://i.imgur.com/P9NzYzl.png" alt="enter image description here" /></p> <p>Run terraform show command to check the existing configuration and add the code block of AKS that matches this configuration state.</p> <pre><code>terraform show </code></pre> <p><img src="https://i.imgur.com/2uRuHiV.png" alt="enter image description here" /></p> <p>After creating a configuration code for missing or already existing resources like I created one configuration block for NetWorkWatcherRG in my code and imported it in my tfstate :-</p> <p><img src="https://i.imgur.com/vVX0joZ.png" alt="enter image description here" /></p> <p><strong>Added the configuration block:-</strong></p> <pre><code>resource &quot;azurerm_resource_group&quot; &quot;NetworkWatcherRG&quot; { name = &quot;NetworkWatcherRG&quot; location = var.resource_group_location } </code></pre> <p><img src="https://i.imgur.com/afgKgHy.png" alt="enter image description here" /></p> <p><strong>Reference:-</strong></p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-terraform?tabs=azure-cli" rel="nofollow noreferrer">Quickstart: Create an Azure Kubernetes Service (AKS) cluster by using Terraform - Azure Kubernetes Service | Microsoft Learn</a></p>
SiddheshDesai
<p>I'm trying to access a simple Asp.net core application deployed on Azure AKS but I'm doing something wrong.</p> <p>This is the deployment .yml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: aspnetapp spec: replicas: 1 selector: matchLabels: app: aspnet template: metadata: labels: app: aspnet spec: containers: - name: aspnetapp image: &lt;my_image&gt; resources: limits: cpu: &quot;0.5&quot; memory: 64Mi ports: - containerPort: 8080 </code></pre> <p>and this is the service .yml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: aspnet-loadbalancer spec: type: LoadBalancer ports: - protocol: TCP port: 80 targetPort: 8080 selector: name: aspnetapp </code></pre> <p>Everything seems deployed correctly</p> <p><a href="https://i.stack.imgur.com/fxeIr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fxeIr.png" alt="enter image description here" /></a></p> <p>Another check I did was to enter the pod and run <code>curl http://localhost:80</code>, and the application is running correctly, but if I try to access the application from the browser using <a href="http://20.103.147.69" rel="nofollow noreferrer">http://20.103.147.69</a> a timeout is returned.</p> <p>What else could be wrong?</p>
Salvatore Calla'
<p>It seems that you do not have an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a> deployed on your AKS as you have your application exposed directly. You will need that in order to get <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress</a> to work.</p> <p>To verify that your application is working you can use port-forward and then access http://localhost:8080:</p> <pre><code>kubectl port-forward deployment/aspnetapp 8080:8080 </code></pre> <p>But you should definitely install an ingress-controller: here is a workflow from MS to install <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="nofollow noreferrer">ingress-nginx</a> as the ingress controller on your cluster.</p> <p>You will then only expose the ingress-controller to the internet and could also specify the loadBalancerIP statically if you created the PublicIP in advance:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG name: ingress-nginx-controller spec: loadBalancerIP: &lt;YOUR_STATIC_IP&gt; type: LoadBalancer </code></pre> <p>The Ingress Controller will then route incoming traffic to your application with an Ingress resource:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minimal-ingress spec: ingressClassName: nginx # ingress-nginx specific rules: - http: paths: - path: / pathType: Prefix backend: service: name: test port: number: 80 </code></pre> <p>PS: Never expose your application directly to the internet, always use the ingress controller.</p>
Philip Welz
<p>i want to ingested containers json log data using filebeat deployed on kubernetes, i am able to ingest the logs to but i am unable to format the json logs in to fields</p> <p>following is the logs visible in kibana</p> <p><a href="https://i.stack.imgur.com/xd22X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xd22X.png" alt="enter image description here" /></a></p> <p>I want to take out the fields from messages above e.g. field for log.level, message, service.name and so on</p> <p>Following are the filebeat configuration we are using</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: filebeat-config namespace: kube-system labels: k8s-app: filebeat data: filebeat.yml: |- filebeat.inputs: - type: container paths: - /var/log/containers/*.log - /var/log/containers/*.json processors: - add_kubernetes_metadata: host: ${NODE_NAME} matchers: - logs_path: logs_path: &quot;/var/log/containers/&quot; # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this: filebeat.autodiscover: providers: - type: kubernetes node: ${NODE_NAME} templates: - condition: contains: kubernetes.container.name: &quot;no-json-logging&quot; config: - type: container paths: - &quot;/var/log/containers/*-${data.kubernetes.container.id}.log&quot; - condition: contains: kubernetes.container.name: &quot;json-logging&quot; config: - type: container paths: - &quot;/var/log/containers/*-${data.kubernetes.container.id}.log&quot; json.keys_under_root: true json.add_error_key: true json.message_key: message processors: - add_cloud_metadata: - add_host_metadata: cloud.id: ${ELASTIC_CLOUD_ID} cloud.auth: ${ELASTIC_CLOUD_AUTH} output.elasticsearch: hosts: ['${ELASTICSEARCH_HOST:XX.XX.XX.XX}:${ELASTICSEARCH_PORT:9201}'] username: ${ELASTICSEARCH_USERNAME} password: ${ELASTICSEARCH_PASSWORD} --- apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat spec: selector: matchLabels: k8s-app: filebeat template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat terminationGracePeriodSeconds: 30 hostNetwork: true dnsPolicy: ClusterFirstWithHostNet containers: - name: filebeat image: docker.elastic.co/beats/filebeat:8.5.3 args: [ &quot;-c&quot;, &quot;/etc/filebeat.yml&quot;, &quot;-e&quot;, ] env: - name: ELASTICSEARCH_HOST value: XX.XX.XX.XX - name: ELASTICSEARCH_PORT value: &quot;9201&quot; - name: ELASTICSEARCH_USERNAME value: elastic - name: ELASTICSEARCH_PASSWORD value: elastic - name: ELASTIC_CLOUD_ID value: - name: ELASTIC_CLOUD_AUTH value: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true resources: limits: memory: 200Mi requests: cpu: 100m memory: 100Mi volumeMounts: - name: config mountPath: /etc/filebeat.yml readOnly: true subPath: filebeat.yml - name: data mountPath: /usr/share/filebeat/data - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true - name: varlog mountPath: /var/log readOnly: true volumes: - name: config configMap: defaultMode: 0640 name: filebeat-config - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers - name: varlog hostPath: path: /var/log # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart - name: data hostPath: # When filebeat runs as non-root user, this directory needs to be writable by group (g+w). 
path: /var/lib/filebeat-data type: DirectoryOrCreate --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: filebeat subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: ClusterRole name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: filebeat namespace: kube-system subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: Role name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: filebeat-kubeadm-config namespace: kube-system subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: Role name: filebeat-kubeadm-config apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: filebeat labels: k8s-app: filebeat rules: - apiGroups: [&quot;&quot;] # &quot;&quot; indicates the core API group resources: - namespaces - pods - nodes verbs: - get - watch - list - apiGroups: [&quot;apps&quot;] resources: - replicasets verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] - apiGroups: [&quot;batch&quot;] resources: - jobs verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: filebeat # should be the namespace where filebeat is running namespace: kube-system labels: k8s-app: filebeat rules: - apiGroups: - coordination.k8s.io resources: - leases verbs: [&quot;get&quot;, &quot;create&quot;, &quot;update&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: filebeat-kubeadm-config namespace: kube-system labels: k8s-app: filebeat rules: - apiGroups: [&quot;&quot;] resources: - configmaps resourceNames: - kubeadm-config verbs: [&quot;get&quot;] --- apiVersion: v1 kind: ServiceAccount metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat --- </code></pre> <p>How can i take out the fields from json message?</p>
pratiksha tiwari
<p>The issue is from configuration. One possible work around is reinstalling the filebeat and sending the logs to elastic search.</p> <p>Follow the content in the blog by <a href="https://medium.com/@semih.sezer/how-to-send-airflow-logs-to-elasticsearch-using-filebeat-and-logstash-250c074e7575" rel="nofollow noreferrer">Semih Sezer</a> which has the process of sending Airflow logs to elastic search using filebeat.</p>
Murali Sankarbanda
<p>I'm running a deployment with three nginx pods and a service but when I try to connect to the service cluster-IP it doesn't connect. Below are my YAML files and kubectl commands.</p> <p>nginx-deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment spec: selector: matchLabels: app: nginx replicas: 3 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <p>nginx-service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx spec: selector: app: nginx ports: - port: 80 protocol: TCP targetPort: 80 ubuntu@k8s-t2-cp:~/nginx$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-deployment-55f598f8d-6d25c 1/1 Running 0 23m 192.168.26.152 ip-172-30-1-183 &lt;none&gt; &lt;none&gt; nginx-deployment-55f598f8d-6j92r 1/1 Running 0 23m 192.168.26.151 ip-172-30-1-183 &lt;none&gt; &lt;none&gt; nginx-deployment-55f598f8d-rhpwv 1/1 Running 0 23m 192.168.26.150 ip-172-30-1-183 &lt;none&gt; &lt;none&gt; ubuntu@k8s-t2-cp:~/nginx$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 125m nginx ClusterIP 10.106.154.85 &lt;none&gt; 80/TCP 23m </code></pre> <p><strong>The curl command just hangs:</strong></p> <p>ubuntu@k8s-t2-cp:~/nginx$ curl 10.106.154.85</p>
Philoxopher
<p>ClusterIP is not for external access as I can see you are using the curl command from your Ubuntu machine. ClusterIP is only accessible inside the namespace on the pod level. If you want to check if the service is working or not without shifting service type you can do as follow.</p> <p>Exec into any nginx pod by using this command <code>kubectl exec -it my-pod -- /bin/bash</code> In your case it will be like that <code>kubectl exec -it nginx-deployment-55f598f8d-6d25c -- /bin/bash</code> Now you will be inside of pod and curl command should work. You may have to install curl manually depending upon which Linux image is used by your pods.</p> <p>Another way to test your services is to convert your ClusterIP into nodePort or Loadbalancer.</p>
tauqeerahmad24
<p>I want to define a CRD struct and generate Custom resource that has a field that can take up any value.</p> <p>If the struct was to be something like:</p> <pre><code>type MyStruct struct{ MyField interface{} `json:&quot;myfield&quot;` } </code></pre> <p>I would like <code>MyField</code> to store a number or a string in the CRD:</p> <pre><code>myfield:2 </code></pre> <p>or</p> <pre><code>myfield:&quot;somestring&quot; </code></pre> <p>However I get an error:</p> <pre><code>DeepCopy of &quot;interface{}&quot; is unsupported. Instead, use named interfaces with DeepCopy&lt;named-interface&gt; as one of the methods. </code></pre> <p>How do I work around this problem?</p>
Aishwarya Nagaraj
<p>The error you're encountering, &quot;DeepCopy of 'interface{}' is unsupported,&quot; is due to the fact that Kubernetes's client library (client-go) uses code generation to create DeepCopy functions for your custom resources, and it cannot generate a DeepCopy function for an interface{} field because it doesn't know what specific types may be stored in that field at runtime.</p> <p>To work around this problem and achieve your goal of having a custom resource field that can store either a number or a string, you can use a union type approach with named interfaces. Here's how you can do it:</p> <p>Define named interfaces for the types you want to support:</p> <pre><code>type MyStructIntValue interface { IsMyStructIntValue() bool } type MyStructStringValue interface { IsMyStructStringValue() bool } </code></pre> <p>Implement these interfaces for the types you want to use (int and string in your case):</p> <pre><code>type MyIntValue int func (i MyIntValue) IsMyStructIntValue() bool { return true } type MyStringValue string func (s MyStringValue) IsMyStructStringValue() bool { return true } </code></pre> <p>Modify your MyStruct definition to use the named interfaces:</p> <pre><code>type MyStruct struct { MyField MyStructIntValue `json:&quot;myfield&quot;` MyStringField MyStructStringValue `json:&quot;mystringfield&quot;` } </code></pre> <p>When creating a custom resource, you can use either MyIntValue or MyStringValue to set the field:</p> <pre><code>cr := MyCustomResource{ MyField: MyIntValue(2), MyStringField: MyStringValue(&quot;somestring&quot;), } </code></pre> <p>By using named interfaces with specific types, you provide enough information for Kubernetes's client-go to generate DeepCopy functions for your custom resource. This approach allows you to have a custom resource field that can store either a number or a string while working within the constraints of Kubernetes's code generation.</p>
Corboss
<p>On Azure AKS I have deployed three node cluster, then I used <code>.yaml</code> file in order to deploy application. I have prepared Kubernetes object <code>DeamonSet</code>. It created three PODs, on each POD it deployed container with security application installed. On AKS node I have CRI container runtime instead of Docker runtime. My goal is to prepare app container image. My question is how to prepare container image using CRI runtime? I check Kubernetes documentation --&gt; <a href="https://kubernetes.io/docs/reference/tools/map-crictl-dockercli/" rel="nofollow noreferrer">Docker - CRI commands mapping</a>, and there is no commands for image creation, in case of Docker we have <code>docker commit</code> command, we can use it for image preparation.</p>
tester81
<p>The CRI is the <a href="https://kubernetes.io/docs/concepts/architecture/cri/" rel="nofollow noreferrer">Container Runtime Interface</a>. The purpose is the have a standard protocol for the communication between the kubelet and Container Runtime.</p> <p>The <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">container runtime</a> can be docker, CRI-O or containerd. For <a href="https://learn.microsoft.com/en-gb/azure/aks/cluster-configuration" rel="nofollow noreferrer">AKS the default runtime since 1.19 is containerd</a>. All these runtimes can run <a href="https://github.com/opencontainers/image-spec" rel="nofollow noreferrer">OCI images</a> or also falsely (and widely) known as docker image or container image.</p> <p>So if you create an OCI conform container image with docker your can run it on all container runtimes and also with kubernetes.</p>
Philip Welz
<p>I'm deploying <strong>wazuh-manager on my kubernetes cluster</strong> and I need to disabled some security check features from the <strong>ossec.conf</strong> and I'm trying to copy the <strong>config-map ossec.conf(my setup) with the one from the wazuh-manager image but if I'm creating the &quot;volume mount&quot; on /var/ossec/etc/ossec.conf&quot; it will delete everything from the /var/ossec/etc/(when wazuh-manager pods is deployed it will copy all files that this manager needs).</strong> So, I'm thinking to create a new volume mount <strong>&quot;/wazuh/ossec.conf&quot;</strong> with <strong>&quot;lifecycle poststart sleep &gt; exec command &quot;cp /wazuh/ossec.conf &gt; /var/ossec/etc/ &quot;</strong> but I'm getting an error that <em><strong>&quot;cannot find /var/ossec/etc/&quot;.</strong></em></p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: wazuh-manager labels: node-type: master spec: replicas: 1 selector: matchLabels: appComponent: wazuh-manager node-type: master serviceName: wazuh template: metadata: labels: appComponent: wazuh-manager node-type: master name: wazuh-manager spec: volumes: - name: ossec-conf configMap: name: ossec-config containers: - name: wazuh-manager image: wazuh-manager4.8 lifecycle: postStart: exec: command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;cp /wazuh/ossec.conf &gt;/var/ossec/etc/ossec.conf&quot;] resources: securityContext: capabilities: add: [&quot;SYS_CHROOT&quot;] volumeMounts: - name: ossec-conf mountPath: /wazuh/ossec.conf subPath: master.conf readOnly: true ports: - containerPort: 8855 name: registration volumeClaimTemplates: - metadata: name: wazuh-disk spec: accessModes: ReadWriteOnce storageClassName: wazuh-csi-disk resources: requests: storage: 50 </code></pre> <p><strong>error:</strong></p> <pre><code>$ kubectl get pods -n wazuh wazuh-1670333556-0 0/1 PostStartHookError: command '/bin/sh -c cp /wazuh/ossec.conf &gt; /var/ossec/etc/ossec.conf' exited with 1: /bin/sh: /var/ossec/etc/ossec.conf: No such file or directory... </code></pre>
Ciocoiu Petrisor
<p>Within the wazuh-kubernetes repository you have a file for each of the Wazuh manager cluster nodes:</p> <p><strong>wazuh/wazuh_managers/wazuh_conf/master.conf</strong> for the Wazuh Manager master node.</p> <p><strong>wazuh/wazuh_managers/wazuh_conf/worker.conf</strong> for the Wazuh Manager worker node.</p> <p>With these files, in the <strong>Kustomization.yml</strong> script, configmaps are created:</p> <pre><code>configMapGenerator: -name: indexer-conf files: - indexer_stack/wazuh-indexer/indexer_conf/opensearch.yml - indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml -name: wazuh-conf files: -wazuh_managers/wazuh_conf/master.conf -wazuh_managers/wazuh_conf/worker.conf -name: dashboard-conf files: - indexer_stack/wazuh-dashboard/dashboard_conf/opensearch_dashboards.yml </code></pre> <p>Then, in the deployment manifest, they are mounted to persist the configurations in the ossec.conf file of each cluster node:</p> <p><strong>wazuh/wazuh_managers/wazuh-master-sts.yaml</strong>:</p> <pre><code>... specification: volumes: -name:config configMap: name: wazuh-conf ... volumeMounts: -name:config mountPath: /wazuh-config-mount/etc/ossec.conf subPath: master.conf ... </code></pre> <p>It should be noted that the configuration files that you need to copy into the <strong>/var/ossec/</strong> directory must be mounted on the <strong>/wazuh-config-mount/</strong> directory and then the Wazuh Manager image entrypoint takes care of copying it to its location at the start of the container. As an example, the configmap is mounted to <strong>/wazuh-config-mount/etc/ossec.conf</strong> and then copied to <strong>/var/ossec/etc/ossec.conf</strong> at startup.</p>
Victor Carlos Erenu
<p>I'm currently having some issues to sign in to a private AKS Cluster with the following commands:</p> <pre><code>az account set --subscription [subscription_id] </code></pre> <pre><code>az aks get-credentials --resource-group [resource-group] --name [AKS_cluster_name] </code></pre> <p>After I typed those two commands it ask me to authenticate through the web with a code that is generated by AZ CLI, and after that, I have the following issue on the terminal:</p> <pre><code>To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code RTEEREDTE to authenticate. Unable to connect to the server: dial tcp: lookup aksdusw2aks01-0581cf8f.hcp.westus2.azmk8s.io: i/o timeout </code></pre> <p>What could be the potential issue? How can I successfully login to a private AKS Cluster?</p> <p>Notes:</p> <p>I have some other clusters and I'm able to login to them through the terminal without having any type or kind of errors.</p>
Hvaandres
<p>You cant use kubectl to access the API Server of a private AKS cluster, thats the design by making it private (no public access). You will need to use <a href="https://learn.microsoft.com/en-us/azure/aks/command-invoke" rel="noreferrer">az aks command invoke</a> to invoke commands through the Azure API:</p> <pre><code>az aks command invoke -n &lt;CLUSTER_NAME&gt; -g &lt;CLUSTER_RG&gt; -c &quot;kubectl get pods -A&quot; </code></pre>
Philip Welz
<p>I need to extract the ca-bundle.crt from the configmap &quot;kubelet-serving-ca&quot; in the namespace &quot;openshift-kube-apiserver&quot; to use it on another configmap and on a pod using the path to the file.</p> <p>how I can do this?</p> <p>Thanks!</p>
Syirtblplmj
<p>In order to extract <code>ca-bundle.crt</code> from the config map <code>kubelet-serving-ca</code> in the namespace <code>openshift-kube-apiserver</code> and use it in another configmap, please use the following commands which was provided by “P….” in the comments section.</p> <pre><code>kubectl get configmap -n openshift-kube-apiserver kubelet-serving-ca -o jsonpath='{.data.ca-bundle\.crt}' &gt; ca-bundle.crt </code></pre> <p>If the fields by the jsonpath expression needs to be printed into another file then please use the below command along with the kubectl command.</p> <pre><code>-o jsonpath-file=&lt;filename&gt; </code></pre> <p>The syntax <code>kubectl get configmap</code> retrieves the value of a key from the specified file and the syntax <code>-o jsonpath=&lt;template&gt;</code> prints the fields defined in a jsonpath expression.</p> <p>Please refer to the official <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">documentation</a> for more information.</p>
Kiran Kotturi
<p>I’m having Kubernetes version: v1.25.6+k3s1</p> <p>cert-manager: 1.11.0</p> <p>Host: Ubuntu 22.04</p> <p>I'm creating some certificates with cert-manager and everything looks good, but it turned out it isn't. The problem is that the certificates are not renewed so secrets will have old certificates on them resulting in application requests failing.</p> <p>I investigated that. In the beginning I thought it is a problem with cert-manager [they had this problem before] but after continuing to investigate I think the problem is actually something else and that beeing the time difference between my local time [from where I'm using kubectl to deploy things] and kubernetes hosts machine time.</p> <p>I think the certificates are not renewed because actually they should not based on the hosts machine time.</p> <p>e.g.: My local time it is 3PM so I'm creating some certificates that should renewed after 1H. I'll check the certificates and yes, they should be renewed at 4PM. But of course, they aren't. I checked the kubernetes host machine local time and it was 2AM [so until 4PM to renew the cert [-5m because the notbefore is -5m] it is a lot, but my certs already expired for hours]</p> <p>The question is: What is the best approach to deploy things on Kubernetes, using kubectl from another machine, but in this specific example, when creating certificates to not use my local time but kubernetes machine time?</p> <p>Regards,</p> <p>L.E.: so I changed the timezone into Kubernetes hosts machine to be same as my local machine, but for some reasons it seems the notBefore is with 2h behind so now doesn't make any sense anymore :-(</p>
Astin Gengo
<p>Cert-manager in Kubernetes will not be affected basically by timezone differences as it uses Coordinated Universal Time (UTC) as the standard timezone for all the processes.</p> <p>Cert-manager will automatically renew Certificates. It will calculate when to renew a Certificate based on the issued certificate's <code>duration</code> and a <code>renewBefore</code> value.</p> <p><code>spec.duration</code> and <code>spec.renewBefore</code> fields on a Certificate can be used to specify a certificate's <code>duration</code> and a <code>renewBefore</code> value. Default value for <code>spec.duration</code> is 90 days.The actual duration may be different depending upon the issuers configurations. Minimum value for <code>spec.duration</code> is 1 hour and minimum value for <code>spec.renewBefore</code> is 5 minutes. Also, please note that <code>spec.duration &gt; spec.renewBefore</code>.</p> <p>Once a certificate has been issued, cert-manager will calculate the renewal time for the Certificate. By default this will be 2/3rd of the issued certificate's duration. If <code>spec.renewBefore</code> has been set, it will be <code>spec.renewBefore</code> amount of time before expiry. Cert-manager will set <code>Certificate status.RenewalTime</code> to the time when the renewal will be attempted.</p> <p>The above information is derived from the official <a href="https://cert-manager.io/docs/usage/certificate/#renewal" rel="nofollow noreferrer">documentation</a>.</p>
Kiran Kotturi
<p>I am doing testing which includes the <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster" rel="nofollow noreferrer">Redis Cluster Bitnami Helm Chart</a>. However, some recent changes to the chart means that I can no longer set the <code>persistence</code> option to <code>false</code>. This is highly irritating, as now the cluster is stuck in <code>pending</code> status with the failure message &quot;0/5 nodes are available: 5 node(s) didn't find available persistent volumes to bind&quot;. I assume because it is attempting to fulfill some outstanding PVCs but cannot find a volume. Since this is just for testing and do not need to persist the data to disk, is there a way of disabling this or making a dummy volume? If not, what is the easiest way around this?</p>
Hegemon
<p>As Franxi mentioned in the comments above and provided the PR, there is no way doing a dummy volume. Closest solution for you is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a></p> <p>Note this:</p> <blockquote> <p>Depending on your environment, emptyDir volumes are stored on whatever medium that backs the node such as disk or SSD, or network storage. However, if you set the emptyDir.medium field to &quot;Memory&quot;, Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write count against your container's memory limit.</p> </blockquote> <p>Examples:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pd spec: containers: - image: k8s.gcr.io/test-webserver name: test-container volumeMounts: - mountPath: /cache name: cache-volume volumes: - name: cache-volume emptyDir: {} </code></pre> <p>Example with <code>emptyDir.medium</code> field:</p> <pre><code>... volumes: - name: ram-disk emptyDir: medium: &quot;Memory&quot; </code></pre> <p>You can also to <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">determine the size limit</a>:</p> <blockquote> <p>Enable kubelets to determine the size limit for memory-backed volumes (mainly emptyDir volumes).</p> </blockquote>
Bazhikov
<p>I am using Apache Nifi on Kubernetes. I have deployed it and pods and service are working well. It works well when I port forward my apache nifi service with :</p> <pre><code>kubectl port-forward service/nifi-svc 8443:8443 -n mynamespace </code></pre> <p>But when I try to create an ingress with Traefik I have the error &quot;Internal server error&quot;. Here is my yaml for ingress:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: nifi-ingress namespace: mynamespace spec: entryPoints: - websecure routes: - kind: Rule match: Host(`XXX`) services: - name: nifi-svc port: 8443 tls: {} </code></pre> <p><strong>I don't know where I am wrong in my yaml file for ingress.</strong></p> <p><em><strong>UPDATE BELOW WITH YAML files I did</strong></em></p> <p>To deploy the pods I did this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ingress-tests-nifi namespace: mynamespace labels: name : ingress-tests-nifi app : ingress-tests-nifi spec: strategy: type: Recreate selector: matchLabels: app: ingress-tests-nifi template: metadata: labels: app: ingress-tests-nifi spec: restartPolicy: Always containers: - name: nifi2 image: XXX imagePullPolicy: IfNotPresent ports: - containerPort: 8443 name: nifi2 env: - name: &quot;NIFI_SENSITIVE_PROPS_KEY&quot; value: &quot;XXX&quot; - name: ALLOW_ANONYMOUS_LOGIN value: &quot;no&quot; - name: SINGLE_USER_CREDENTIALS_USERNAME value: XXX - name: SINGLE_USER_CREDENTIALS_PASSWORD value: XXX - name: NIFI_WEB_HTTPS_HOST value: &quot;0.0.0.0&quot; - name: NIFI_WEB_HTTPS_PORT value: &quot;8443&quot; - name: NIFI_WEB_PROXY_HOST value: 0.0.0.0:8443 - name: HOSTNAME value: &quot;nifi1&quot; - name: NIFI_ANALYTICS_PREDICT_ENABLED value: &quot;true&quot; - name: NIFI_ELECTION_MAX_CANDIDATES value: &quot;1&quot; - name: NIFI_ELECTION_MAX_WAIT value: &quot;20 sec&quot; - name: NIFI_JVM_HEAP_INIT value: &quot;1g&quot; - name: NIFI_JVM_HEAP_MAX value: &quot;1g&quot; volumeMounts: - name: pv-XXX mountPath: /opt/nifi/nifi-current/data subPath: data livenessProbe: exec: command: - pgrep - java initialDelaySeconds: 60 periodSeconds: 30 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 1 readinessProbe: tcpSocket: port: 8443 initialDelaySeconds: 240 periodSeconds: 30 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 1 resources: requests: cpu: 400m ephemeral-storage: 1Gi memory: 1Gi limits: cpu: 500m ephemeral-storage: 1Gi memory: 1Gi imagePullSecrets: - name: depot-secret volumes: - name: pv-XXX persistentVolumeClaim: claimName: pv-XXX </code></pre> <p>And for the service yaml I did this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: ingress-tests-nifi-svc namespace: mynamespace spec: selector: app: ingress-tests-nifi ports: - port: 8443 protocol: TCP targetPort: 8443 </code></pre>
lbened
<p>Check if the ingress host is present in the nifi.web.proxy.host property in the nifi.properties file. If your nifi is secured, appropriate certificates must be set up (the ingress host and nifi must trust each other).</p> <p>Checking the nifi logs for the exception might help. Check any of app-log.log, nifi-bootstrap.log and nifi-user.log . They are usually in ${NIFI_HOME}/logs/ in your container.</p>
mmml
<p>If I want to add a field in a CRD(without change any exist field), Should I do it by create a new version?</p> <p>If I should create a new version, then what's the disadvantages of directly modifying the original version?</p>
dayeguilaiye
<p>As explained in the <strong><a href="https://faun.dev/c/stories/dineshparvathaneni/kubernetes-crd-versioning-for-operator-developers/" rel="nofollow noreferrer">blog</a></strong> written by <strong>Dinesh Parvathaneni</strong>, you can validate the points as mentioned below:</p> <p>CRDs are similar to K8s built-in types and the expectation for operator developers is to follow the same guidelines when it comes to their versioning.</p> <ul> <li> <blockquote> <p>Adding a <strong>required new field</strong> or removing a field is a <strong>backward incompatible change</strong> to the API. This makes all the old versions immediately unusable. So don’t make backward incompatible CRD API changes.</p> </blockquote> </li> <li> <blockquote> <p>If you want to still continue using the old versions they need to be updated with the <strong>new field</strong> as an <strong>optional parameter</strong> as K8s doesn’t add a required field, instead adds an optional field with a<br /> default value.</p> </blockquote> <blockquote> <p><strong>Example</strong>: when a new field is added, the new version of the operator would still behave as the old version when the new field is not provided in user input. This is more like a <strong>feature flag</strong> for the new features in the operator.</p> </blockquote> </li> <li> <blockquote> <p>CRD can define multiple versions of the custom resource. A version can be marked as served or not served and only one version can be used as a storage version in <strong>etcd</strong>.</p> </blockquote> </li> <li> <blockquote> <p>If there is <strong>schema difference</strong> across versions, conversion <strong>webhooks</strong> are needed to convert between versions when necessary.</p> </blockquote> </li> </ul> <p>Hope the above information is useful to you.</p>
Kiran Kotturi
<p>I have an old nodepool with machine type X, and I am migrating them workload to a new nodepool of machine type Y. Both nodepools are up, and I've disabled autoscaling on the old nodepool. I added a cordon to the old nodepool, then drained the nodes using the command provided by <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool" rel="nofollow noreferrer">the GKE doc</a>:</p> <pre><code>for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool -o=name); do kubectl drain --force --ignore-daemonsets --delete-emptydir-data --grace-period=300 &quot;$node&quot;; done </code></pre> <p>Those have a pod disruption budget (PDB) of Min 1 available, and the new nodepool has autoscaling setup with min 1 and max around 10.</p> <p>My problem is that the drain does not complete, it gets stuck saying <code>Cannot evict pod as it would violate the pod's disruption budget.</code>. It makes sense that it doesn't evict the pod, but at the same time, should it trigger the creation of the same pod on the new nodepool?</p> <p>As new pods are not getting created on the new nodepool to let the old node drain, I cannot migrate my workload (at least not without disruption, which I cannot have).</p> <p>What am I missing here?</p>
David Gourde
<p>The log <code>Cannot evict pod as it would violate the pod's disruption budget</code> confirms that the root cause is due to the pod's disruption budget(PDB) configuration.</p> <p>The main components of the PDB are <strong>minavailable</strong> and <strong>maxunavailable</strong>.</p> <p><code>spec.minAvailable</code> : this is the total number of pods that must be available after the eviction, in the absence of the evicted pod.</p> <p><code>spec.maxUnavailable</code> : this is the maximum number of pods that can go unavailable during an eviction.</p> <p>According to the official <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#scheduling-and-disruption" rel="nofollow noreferrer">documentation</a>, an application's PodDisruptionBudget can also prevent autoscaling; if deleting nodes would cause the budget to be exceeded, the cluster does not scale down.</p> <p>When scaling down, the cluster autoscaler respects scheduling and eviction rules set on Pods. These restrictions can prevent a node from being deleted by the autoscaler. A node's deletion could be prevented if it contains a Pod with any of these conditions:</p> <blockquote> <ul> <li>The Pod's <strong>affinity</strong> or <strong>anti-affinity</strong> rules prevent rescheduling.</li> <li>The Pod is not managed by a <strong>Controller</strong> such as a Deployment, StatefulSet, Job or ReplicaSet.</li> <li>The Pod has local storage and the GKE control plane version is lower than 1.22. In GKE clusters with control plane version 1.22 or later, Pods with local storage no longer block scaling down.</li> <li>The Pod has the <strong>&quot;cluster-autoscaler.kubernetes.io/safe-to-evict&quot;: &quot;false&quot;</strong> annotation.</li> <li>The node's deletion would exceed the configured <strong>PodDisruptionBudget</strong> and the operation is unable to complete because of a deployment which is having PDB .From the behavior identified in the PDB you can <strong>change</strong> the <strong>minAvailable</strong> or <strong>maxUnavailable</strong> values of the <strong>PDB</strong>.</li> </ul> </blockquote> <p>Can you check the above conditions which are preventing the node’s deletion in your deployment along with PDB.</p> <p>Hope the mentioned information is useful to you.</p>
Kiran Kotturi
<p>I have defined a <code>validatingWebhook</code> configuration with a custom controller that is deployed as a deployment, snippet below for <code>validatingWebhook</code>:</p> <pre><code>apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: name: validate-webhook namespace: admission-test webhooks: - name: admission.validate.com namespaceSelector: matchExpressions: - key: app operator: NotIn values: [&quot;admission-test&quot;] rules: - apiGroups: [&quot;*&quot;] apiVersions: [&quot;v1&quot;,&quot;v1beta1&quot;,&quot;v1alpha1&quot;] operations: [&quot;CREATE&quot;,&quot;UPDATE&quot;] resources: [&quot;deployments&quot;,&quot;daemonsets&quot;,&quot;statefulsets&quot;,&quot;cronjobs&quot;, &quot;rollouts&quot;, &quot;jobs&quot;] scope: &quot;Namespaced&quot; clientConfig: service: namespace: admission-test name: admission-test #service port port: 8090 path: /verify admissionReviewVersions: [&quot;v1&quot;] sideEffects: None </code></pre> <p>and on my application I have defined a <code>http Handler</code>, snippet is below:</p> <pre><code> http.HandleFunc(&quot;/verify&quot;, servePod) http.HandleFunc(&quot;/healthz&quot;, func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(200) klog.Infoln(&quot;hittinh healthz&quot;) w.Write([]byte(&quot;ok&quot;)) }) server := &amp;http.Server{ Addr: fmt.Sprintf(&quot;:%d&quot;, port), TLSConfig: admission.ConfigTLS(config), } </code></pre> <p>I am trying to create another simple nginx deployment, which can be found <a href="https://k8s.io/examples/controllers/nginx-deployment.yaml" rel="nofollow noreferrer">here</a> but when I try to print the the body of <code>/verify</code> in customer controller that I wrote, I don't get anything. In fact it's like the other deployments are not passing through the admission controller.</p> <p>Any pointers on why this is happening? Much appreciated</p> <p>running kubernetes version</p> <pre><code>kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.4&quot;, GitCommit:&quot;3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-08-11T18:16:05Z&quot;, GoVersion:&quot;go1.16.7&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.4&quot;, GitCommit:&quot;3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-08-11T18:10:22Z&quot;, GoVersion:&quot;go1.16.7&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>k8s cluster is running via docker desktop</p>
sai
<p>It's passing through the validation controller due to it's set as <code>scope: &quot;Namespaced&quot;</code> and I can't see any <code>namespace</code> specified in your nginx deployment file. You can add any working <code>namespace</code> or change your <code>scope</code> to <code>&quot;*&quot;</code></p> <p>You can find more information about the rules in <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-rules" rel="nofollow noreferrer">the official documentation</a></p>
Bazhikov
<p>I am reviewing my rke installation:</p> <p><a href="https://docs.rke2.io/security/cis_self_assessment123#1219" rel="nofollow noreferrer">https://docs.rke2.io/security/cis_self_assessment123#1219</a></p> <p>The instruction works, makes sense, but shouldn't I be able to check this by running a <code>kubectl describe po -n kube-system kube-apiserver-{my-ip}</code>. I did a <code>describe po</code> on the resource, expecting to see the <code>audit-log-path</code>, but it was not there. How can I discover this setting if it isn't in the pod description. Is <code>ps</code> the best way? The only way?</p>
smuggledPancakes
<p>Audit backends stores audit logs to an external persistent storage. There are two backends available for kube-apiserver: Log backend, stores logs to a director in the filesystem. Webhook backend, which pushes logs to an external storage using HTTP API. Since you are trying to store data locally we will be using the log backend. As mentioned in the doc provided by you --audit-log-path is used for setting up the path for your audit log files and if haven’t provided any path it will go to the standard output <code>/var/log/kubernetes/audit/audit.log</code> and persistent volumes should be used for storing these logs, so you can get the path details by using below command</p> <p><code>Kubectl get pv</code> (In most cases <em>audit</em> will be the keyword so you can find the path using this)</p> <p>References:</p> <ol> <li><a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/</a></li> <li><a href="https://www.ibm.com/docs/en/mvi/1.1.1?topic=environment-checking-kubernetes-storage-status" rel="nofollow noreferrer">https://www.ibm.com/docs/en/mvi/1.1.1?topic=environment-checking-kubernetes-storage-status</a></li> </ol>
Kranthiveer Dontineni
<p>I want to use vars without ConfigMaps or Secrets. Declaring a value would be sufficient for me. But I couldn't see any documentation regarding vars attributes or how I can use. Do you know any docs about this? Thanks!</p> <pre><code>vars: - name: ROUTE_HOST objref: kind: ConfigMap name: template-vars apiVersion: v1 fieldref: fieldpath: data.ROUTE_HOST </code></pre>
cosmos-1905-14
<p>Summarizing Jonas's comments:</p> <blockquote> <p>WARNING: There are plans to deprecate vars. For existing users of vars, we recommend migration to <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/" rel="nofollow noreferrer">replacements</a> as early as possible. There is a guide for convering vars to replacements at the bottom of this page under “convert vars to replacements”. For new users, we recommend never using vars, and starting with replacements to avoid migration in the future.</p> </blockquote> <p>Please find more information in <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/vars/" rel="nofollow noreferrer">the official documentation</a>.</p> <p>Try to use replacements as it's suggested above.</p>
Bazhikov
<p>Is there any way to restrict the access to the keycloak admin console by IP / IP Range? I have deployed the Keycloak in Azure Kubernetes that uses Nginx Ingress controller. So, I tried to restrict as highlighted below</p> <p><a href="https://i.stack.imgur.com/4tFY5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4tFY5.png" alt="enter image description here" /></a></p> <p>but it blocks everything. I would assume that Ingress receives the incoming request from the Azure Kubernetes Load balancer so it does not consider the client IP to allow access.</p> <p>How do I restrict the access to the keycloak admin console by IP / IP Range?</p> <p><strong>Update#1:</strong> I believe that the above configuration to restrict the path by the IP / IP Range is effective expect that it redirects the coming request to a non-existing location</p> <pre><code>xxx.xxx.xxx.xxx - - [30/Aug/2023:15:42:25 +0000] &quot;GET **/admin/** HTTP/2.0&quot; 404 548 &quot;-&quot; &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36&quot; 507 0.000 [-] [] - - - - bfbe1faa35dcc40e82e5e22bd557cf96 2023/08/30 15:42:25 [error] 1171#1171: *6809656 **&quot;/usr/local/nginx/html/admin/index.html&quot;** is not found (2: No such file or directory), client: 173.32.206.145, server: account.qa.oly.nova-x.co, request: &quot;GET /admin/ HTTP/2.0&quot;, host: &quot;xxxx&quot; </code></pre> <p>I was expecting this to apply just the IP based filter but not change the existing behaviour.</p>
One Developer
<p>You can use <code>loadBalancerSourceRanges</code> on the service as <a href="https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard#restrict-inbound-traffic-to-specific-ip-ranges" rel="nofollow noreferrer">mentioned here</a>.</p> <p>To restrict traffic for a certain path use <code>location-snippet</code> instead of <code>server-snippet</code> as detailed in <a href="https://stackoverflow.com/questions/73061277/nginx-ingress-controller-ip-restriction-for-certain-path">this answer</a>.</p>
akathimi
<p>I want to read file paths from a persistent volume and store these file paths into a persistent queue of sorts. This would probably be done with an application contained within a pod. This persistent volume will be updated constantly with new files. This means that I will need to constantly update the queue with new file paths. What if this application that is adding items to the queue crashes? Kubernetes would be able to reboot the application, but I do not want to add in file paths that are already in the queue. The app would need to know what exists in the queue before adding in files, at least I would think. I was leaning on RabbitMQ, but apparently you cannot search a queue for specific items with this tool. What can I do to account for this issue? I am running this cluster on Google Kubernetes Engine, so this would be on the Google Cloud Platform.</p>
Adriano Matos
<p>Have you ever heard about <a href="https://kubemq.io/" rel="nofollow noreferrer">KubeMQ</a>? There is <a href="https://github.com/kubemq-io/kubemq-community" rel="nofollow noreferrer">a KubeMQ community</a> where you can refer to with the guides and help.</p> <p>As an alternative solution you can find useful <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">guide on official Kubernetes documentation</a> on creating working queue with Redis</p>
Bazhikov
<p>so I have a basic minikube cluster configuration for K8s cluster with only 2 pods for Postgres DB and my Spring app. However, I can't get my app to connect to my DB. I know that in <code>Docker</code> such issue could be solved with networking but after a lot of research I can't seem to find the problem and the solution to my issue.</p> <p>Currently, given my configuration I get a Connection refused error by postgres whenever my Spring App tries to start:</p> <p><code>Caused by: org.postgresql.util.PSQLException: Connection to postgres-service:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.</code></p> <p>So my spring-app is a basic REST API with some open endpoints where I query for some data. The app works completely fine and here is my <code>application.properties</code>:</p> <pre><code>spring.datasource.driverClassName=org.postgresql.Driver spring.datasource.url=jdbc:postgresql://${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB} spring.datasource.username=${POSTGRES_USER} spring.datasource.password=${POSTGRES_PASSWORD} spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect spring.jpa.hibernate.ddl-auto=update </code></pre> <p>The way I create my Postgres component is by creating a <code>ConfigMap</code>, a <code>Secret</code> and finally a <code>Deployment</code> with it's <code>Service</code> inside. They look like so:</p> <p><code>postgres-config.yaml</code></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: postgres-config data: postgres-url: postgres-service postgres-port: &quot;5432&quot; postgres-db: &quot;test&quot; </code></pre> <p><code>postgres-secret.yaml</code></p> <pre><code>apiVersion: v1 kind: Secret metadata: name: postgres-secret type: Opaque data: postgres_user: cm9vdA== #already encoded in base64 postgres_password: cm9vdA== #already encoded in base64 </code></pre> <p><code>postgres.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgres-deployment labels: app: postgres spec: replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: containers: - name: postgresdb image: postgres ports: - containerPort: 5432 env: - name: POSTGRES_USER valueFrom: secretKeyRef: name: postgres-secret key: postgres_user - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres-secret key: postgres_password - name: POSTGRES_DB valueFrom: configMapKeyRef: name: postgres-config key: postgres-db --- apiVersion: v1 kind: Service metadata: name: postgres-service spec: selector: app.kubernetes.io/name: postgres ports: - protocol: TCP port: 5432 targetPort: 5432 </code></pre> <p>and finally here's my <code>Deployment</code> with it's <code>Service</code> for my spring app</p> <p><code>spring-app.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: spring-app-deployment labels: app: spring-app spec: replicas: 1 selector: matchLabels: app: spring-app template: metadata: labels: app: spring-app spec: containers: - name: spring-app image: app #image is pulled from my docker hub ports: - containerPort: 8080 env: - name: POSTGRES_USER valueFrom: secretKeyRef: name: postgres-secret key: postgres_user - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres-secret key: postgres_password - name: POSTGRES_HOST valueFrom: configMapKeyRef: name: postgres-config key: postgres-url - name: POSTGRES_PORT valueFrom: configMapKeyRef: name: postgres-config key: postgres-port - name: POSTGRES_DB valueFrom: 
configMapKeyRef: name: postgres-config key: postgres-db --- apiVersion: v1 kind: Service metadata: name: spring-app-service spec: type: NodePort selector: app.kubernetes.io/name: spring-app ports: - protocol: TCP port: 8080 targetPort: 8080 nodePort: 30001 </code></pre>
Richard
<p>A connection refused means that the host you are connecting to, does not have the port you mentioned opened.</p> <p>This leads me to think that the postgres pod isnt running correctly, or the service is not pointing to those pods correctly.</p> <p>By checking the Yamls I can see that the service's pod selector isnt configured correctly:</p> <p>The service is selecting pods with label: <code>app.kubernetes.io/name: postgres</code></p> <p>The deployment is configured with pods with label: <code>app: postgres</code></p> <p>The correct service manifest should look like:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: postgres-service spec: selector: app: postgres ports: - protocol: TCP port: 5432 targetPort: 5432 </code></pre> <p>You can double check that by describing the service using kubectl describe service postgres-service.</p> <p>The output should contain the postgres pods IPs for Endpoints.</p>
akathimi
<p>We have a number of pods communicating with each other and storing state via either etc or shared PVs. Sometimes we get in a messed up state and we want to have a command that will blow away all our saved state and restart things fresh as a debug tool.</p> <p>To do this we need to stop a number of containers, across many nodes. Once stopped we need to delete data in ETC and volumes, then once that's done restart the containers.</p> <p>This seems pretty easy to do with a bash script, but my management doesn't want a bash script since we only deploy kubernetes configuration and he doesn't want a separate script to exist but not be deployed automatically with the rest of our codebase.</p> <p>I'm wondering if I can use something similar to kubernetes jobs, or any other tool, to set up the same logic within kubernetes so it's deployed with the rest of our kubernetes code. Something where I can run a simple command and have my prewritten scripted logic run to completion? I had hoped a kubernetes job would work, but the job wouldn't have access to kubectl or any way of starting and stopping other pods or containers.</p> <p>Is there a kubernetes functionality I missed to make it easier to pause multiple pods while our cleanup logic runs?</p>
dsollen
<p>Ansible can be used for running commands in your nodes, you can use ansible playbooks instead of bash script or you can use this playbook to run your bash script. Similarly there are tools like rundeck and n8n if you prefer a webhook method of implementation. If you are planning to implement a CICD pipeline you can use tools like Jenkins.</p>
Kranthiveer Dontineni
<p>I noticed that during the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a> pod creation/termination, there is a huge number of events created. Further &quot;investigation&quot; showed that each nginx pod creates</p> <pre><code>42s Normal Sync ingress/name Scheduled for sync </code></pre> <p>event for each ingress objects.</p> <p>For perspective, with some approximate imaginary numbers:</p> <ul> <li>The moment you <code>kubectl rollout restart ingress-nginx</code> all ingress-nginx pods will terminate (not simultaneously as there is a proper PDB setup).</li> <li>During restart, each pod will create <code>sync</code> event object for each ingress object in the cluster.</li> <li>So if there are 100 ingress-nginx pods with 500 ingress objects, that will span 50k sync events.</li> </ul> <blockquote> <p>I could not find any mentions about it in the docs/ingress-nginx issues</p> </blockquote> <p><strong>The question</strong>: is it expected behavior?</p>
Evedel
<p>This is expected behavior.</p> <p>As we can see <a href="https://github.com/kubernetes/ingress-nginx/blob/6499393772ee786179b006b423e11950912e8295/internal/ingress/controller/store/store.go#L378" rel="nofollow noreferrer">here</a>, this is an informer, which creates the sync event for each valid ingress. In turn, this informer is added to the store on each ingress-controller pod, see more <a href="https://docs.nginx.com/nginx-ingress-controller/intro/how-nginx-ingress-controller-works/" rel="nofollow noreferrer">here</a>.</p>
Bazhikov
<p>I have used the prometheus helm chart from <a href="https://prometheus-community.github.io/helm-charts" rel="nofollow noreferrer">https://prometheus-community.github.io/helm-charts</a> to setup a prometheus server on EKS for me. That involves prometheus-node-exporter too. Now what I'm trying to do is modify the prometheus-node-exporter service port to another one from 9100.</p> <p>I updated the value under prometheus-node-exportes/values.yaml to this :</p> <pre><code>service: type: ClusterIP port: 9400 targetPort: 9400 nodePort: portName: metrics listenOnAllInterfaces: true annotations: prometheus.io/scrape: &quot;true&quot; </code></pre> <p>but when I do :</p> <p><code>helm upgrade prometheus prometheus-community/prometheus --namespace monitoring</code> the changes do not take effect at all.</p> <p>How can I update values of other subcharts like prometheus-node-exporter in my helm chart?</p>
egib
<p>The way that your command is running, is pulling the chart from the internet, you are not passing any extra values file.</p> <p>If the port is all you are changing, use this command to pass the port value: <code>helm upgrade prometheus prometheus-community/prometheus --namespace monitoring --set prometheus-node-exporter.service.port=&lt;your-port&gt;</code></p> <p>Otherwise, you can download the promethues values file, then add the mods to it, e.g.:</p> <pre><code>prometheus-node-exporter: ## If false, node-exporter will not be installed ## enabled: true rbac: pspEnabled: false containerSecurityContext: allowPrivilegeEscalation: false service: port: &lt;your-port&gt; </code></pre> <p>Then, pass the values file to the upgrade command using <code>-f values.yaml</code></p> <p>Also, check <a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">this doc</a>.</p>
akathimi
<p>I would like to generate dump from a remote PostgreSQL database (PGAAS) with commands or Python code.</p> <p>Firstly I tried locally to do the work but I have an error :</p> <pre><code>pg_dump: error: server version: 13.9; pg_dump version: 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1) </code></pre> <p>I tried this code :</p> <pre><code>import subprocess dump_file = &quot;database_dump.sql&quot; with open(dump_file, &quot;w&quot;) as f: print(f) subprocess.call([&quot;pg_dump&quot;, &quot;-Fp&quot;, &quot;-d&quot;, &quot;dbdev&quot;, &quot;-U&quot;, &quot;pgsqladmin&quot;, &quot;-h&quot;, &quot;hostname&quot;-p&quot;, &quot;32000&quot;], stdout=f) </code></pre> <p>How can I do to have a pod (container) doing this work and where version is the same that server version, without entering pgaas password manually ?</p>
lbened
<p><code>pg_dump: error: server version: 13.9; pg_dump version: 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)</code></p> <p>As you can see this error is caused because of a version mismatch checking the version of your PGaaS database and the database version you are using on your local machine. If your local version is lower than that of the server version you can upgrade the local version. Follow this <a href="https://www.postgresql.org/docs/current/pgupgrade.html" rel="nofollow noreferrer">document</a> for upgrading your pg version.</p> <p>If you want to take dumps at regular intervals in an easy way you can have a cron job scheduled on your vm for running your code. Since you want to use kubernetes, build a docker image with your code in it and create a kubernetes job and run it with kube-scheduler and you can use environment variables for encrypting your password.</p>
Kranthiveer Dontineni
<p>I have a cluster of 4 raspberry pi 4 model b, on which Docker and Kubernetes are installed. The versions of these programs are the same and are as follows:</p> <p>Docker:</p> <pre><code>Client: Version: 18.09.1 API version: 1.39 Go version: go1.11.6 Git commit: 4c52b90 Built: Fri, 13 Sep 2019 10:45:43 +0100 OS/Arch: linux/arm Experimental: false Server: Engine: Version: 18.09.1 API version: 1.39 (minimum version 1.12) Go version: go1.11.6 Git commit: 4c52b90 Built: Fri Sep 13 09:45:43 2019 OS/Arch: linux/arm Experimental: false </code></pre> <p>Kubernetes:</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.3&quot;, GitCommit:&quot;c92036820499fedefec0f847e2054d824aea6cd1&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-10-27T18:41:28Z&quot;, GoVersion:&quot;go1.16.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/arm&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.3&quot;, GitCommit:&quot;c92036820499fedefec0f847e2054d824aea6cd1&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-10-27T18:35:25Z&quot;, GoVersion:&quot;go1.16.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/arm&quot;} </code></pre> <p>My problem occurs when a kubernetes pod is deployed on machine &quot;02&quot;. Only on that machine the pod never goes into a running state and the logs say:</p> <pre><code>standard_init_linux.go:207: exec user process caused &quot;exec format error&quot; </code></pre> <p>On the other hand, when the same pod is deployed on any of the other 3 raspberry pi, it goes correctly in a running state and does what it has to do. I have tried to see similar topics to mine, but there seems to be no match with my problem. I put below my Dockerfile and my .yaml file.</p> <p>Dockerfile</p> <pre><code>FROM ubuntu@sha256:f3113ef2fa3d3c9ee5510737083d6c39f74520a2da6eab72081d896d8592c078 CMD [&quot;bash&quot;] </code></pre> <p>yaml file</p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: name: mongodb name: mongodb spec: nodeName: diamond02.xxx.xx containers: - name : mongodb image: ohserk/mongodb:latest imagePullPolicy: &quot;IfNotPresent&quot; name: mongodb ports: - containerPort: 27017 protocol: TCP command: - &quot;sleep&quot; - &quot;infinity&quot; </code></pre> <p>In closing, this is what happens when I run <code>kubectl apply -f file.yaml</code> specifying to go to machine 02, while on any other machine the output is this:</p> <p><a href="https://i.stack.imgur.com/ciDrZ.png" rel="nofollow noreferrer">kubectl get pod -w -o wide</a></p> <p>I could solve this problem by specifying precisely on which raspberry to deploy the pod, but it doesn't seem like a decent solution to me. Would you know what to do in this case?</p> <p><strong>EDIT 1</strong></p> <p>Here the <code>journelctl</code> output just after the deploy on machine 02</p> <pre><code>Nov 05 08:33:39 diamond02.xxx.xx kubelet[1563]: I1105 08:33:39.744957 1563 topology_manager.go:200] &quot;Topology Admit Handler&quot; Nov 05 08:33:39 diamond02.xxx.xx systemd[1]: Created slice libcontainer container kubepods-besteffort-pod6a0d621a_55ab_449a_91cb_a88ac10df0cf.slice. 
Nov 05 08:33:39 diamond02.xxx.xx kubelet[1563]: I1105 08:33:39.906608 1563 reconciler.go:224] &quot;operationExecutor.VerifyControllerAttachedVolume started for volume \&quot;kube-api-access-trqs4\&quot; (UniqueName: \&quot;kubernetes.io/projected/6a0d621a-55ab-449a-91cb-a88ac10df0cf-kube-api-access-trqs4\&quot;) pod \&quot;mongodb\&quot; (UID: \&quot;6a0d621a-55ab-449a-91cb-a88ac10df0cf\&quot;) &quot; Nov 05 08:33:40 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-03b99c20a2e9dd9b6f06a99625272c899d6e7a36e2071e268b326dfee54476c8\x2dinit-merged.mount: Succeeded. Nov 05 08:33:40 diamond02.xxx.xx dockerd[578]: time=&quot;2021-11-05T08:33:40.702427163Z&quot; level=info msg=&quot;shim docker-containerd-shim started&quot; address=/containerd-shim/moby/a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4/shim.sock debug=false pid=15599 Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: libcontainer-15607-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: libcontainer-15607-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15607_systemd_test_default.slice. Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15607_systemd_test_default.slice. Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Started libcontainer container a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15648-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15648-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15648_systemd_test_default.slice. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15648_systemd_test_default.slice. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15654-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15654-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15654_systemd_test_default.slice. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15654_systemd_test_default.slice. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15661-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15661-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15661_systemd_test_default.slice. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15661_systemd_test_default.slice. 
Nov 05 08:33:41 diamond02.xxx.xx kubelet[1563]: I1105 08:33:41.673178 1563 pod_container_deletor.go:79] &quot;Container not found in pod's containers&quot; containerID=&quot;a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4&quot; Nov 05 08:33:41 diamond02.xxx.xx kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth27f79edb: link becomes ready Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered blocking state Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered disabled state Nov 05 08:33:41 diamond02.xxx.xx kernel: device veth27f79edb entered promiscuous mode Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered blocking state Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered forwarding state Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: IAID 58:9b:78:38 Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: adding address fe80::5979:f76a:862:765a Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: Joining mDNS multicast group on interface veth27f79edb.IPv6 with address fe80::5979:f76a:862:765a. Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: New relevant interface veth27f79edb.IPv6 for mDNS. Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: Registering new address record for fe80::5979:f76a:862:765a on veth27f79edb.*. Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: IAID 58:9b:78:38 Nov 05 08:33:41 diamond02.xxx.xx kubelet[1563]: map[string]interface {}{&quot;cniVersion&quot;:&quot;0.3.1&quot;, &quot;hairpinMode&quot;:true, &quot;ipMasq&quot;:false, &quot;ipam&quot;:map[string]interface {}{&quot;ranges&quot;:[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{&quot;subnet&quot;:&quot;10.244.3.0/24&quot;}}}, &quot;routes&quot;:[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, &quot;type&quot;:&quot;host-local&quot;}, &quot;isDefaultGateway&quot;:true, &quot;isGateway&quot;:true, &quot;mtu&quot;:(*uint)(0xcaa76c), &quot;name&quot;:&quot;cbr0&quot;, &quot;type&quot;:&quot;bridge&quot;} Nov 05 08:33:41 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b\x2dinit-merged.mount: Succeeded. Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b\x2dinit-merged.mount: Succeeded. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded. Nov 05 08:33:42 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded. Nov 05 08:33:42 diamond02.xxx.xx dockerd[578]: time=&quot;2021-11-05T08:33:42.283254485Z&quot; level=info msg=&quot;shim docker-containerd-shim started&quot; address=/containerd-shim/moby/1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234/shim.sock debug=false pid=15718 Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15725-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15725-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15725_systemd_test_default.slice. 
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15725_systemd_test_default.slice. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Started libcontainer container 1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15749-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15749-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15749_systemd_test_default.slice. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15749_systemd_test_default.slice. Nov 05 08:33:42 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: soliciting an IPv6 router Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15755-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15755-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15755_systemd_test_default.slice. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15755_systemd_test_default.slice. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: docker-1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234.scope: Succeeded. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: docker-1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234.scope: Consumed 39ms CPU time. Nov 05 08:33:42 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: soliciting a DHCP lease Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15766-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15766-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15766_systemd_test_default.slice. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15766_systemd_test_default.slice. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15778-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15778-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15778_systemd_test_default.slice. Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15778_systemd_test_default.slice. Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: libcontainer-15784-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: libcontainer-15784-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing. Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15784_systemd_test_default.slice. Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15784_systemd_test_default.slice. 
Nov 05 08:33:43 diamond02.xxx.xx dockerd[578]: time=&quot;2021-11-05T08:33:43.097966208Z&quot; level=info msg=&quot;shim reaped&quot; id=1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234 Nov 05 08:33:43 diamond02.xxx.xx dockerd[578]: time=&quot;2021-11-05T08:33:43.107322948Z&quot; level=info msg=&quot;ignoring event&quot; module=libcontainerd namespace=moby topic=/tasks/delete type=&quot;*events.TaskDelete&quot; Nov 05 08:33:43 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded. Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded. Nov 05 08:33:43 diamond02.xxx.xx avahi-daemon[389]: Registering new address record for fe80::cc12:58ff:fe9b:7838 on veth27f79edb.*. Nov 05 08:33:44 diamond02.xxx.xx kubelet[1563]: {&quot;cniVersion&quot;:&quot;0.3.1&quot;,&quot;hairpinMode&quot;:true,&quot;ipMasq&quot;:false,&quot;ipam&quot;:{&quot;ranges&quot;:[[{&quot;subnet&quot;:&quot;10.244.3.0/24&quot;}]],&quot;routes&quot;:[{&quot;dst&quot;:&quot;10.244.0.0/16&quot;}],&quot;type&quot;:&quot;host-local&quot;},&quot;isDefaultGateway&quot;:true,&quot;isGateway&quot;:true,&quot;mtu&quot;:1450,&quot;name&quot;:&quot;cbr0&quot;,&quot;type&quot;:&quot;bridge&quot;}I1105 08:33:44.040009 1563 scope.go:110] &quot;RemoveContainer&quot; containerID=&quot;1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234&quot; </code></pre>
OhSerk
<p>Posting the comment as a community wiki answer for better visibility:</p> <p>Reinstalling both Kubernetes and Docker on the affected node solves the issue.</p>
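<p>Before reinstalling, a quick sanity check can confirm whether the node and the image agree on CPU architecture, since <code>exec format error</code> usually points at an architecture mismatch or a corrupted binary. This is only a suggested diagnostic, not part of the original fix:</p> <pre><code># Run on the affected node (diamond02): compare the node architecture with the
# architecture recorded in the pulled image.
uname -m                                                                  # e.g. armv7l
docker image inspect ohserk/mongodb:latest --format '{{.Architecture}}'   # e.g. arm
</code></pre>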
Bazhikov
<p>I am trying to run an AWS CLI command in my pod. As the command may take some time to complete, I am trying to run it in the background. Here is my command:</p> <pre><code>kubectl -it exec &lt;podname&gt; -- bash -c &quot;aws s3api list-objects --bucket bucketname-1 --query 'Contents[?StorageClass==\&quot;ONEZONE_IA\&quot;].[Key,StorageClass]' --output text &gt; /storage/ONEZONE_keys1.txt &amp;&quot; </code></pre> <p>When I run this command it becomes a defunct process: <a href="https://i.stack.imgur.com/Key3f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Key3f.png" alt="process" /></a></p> <p>When I run the command without the <code>&amp;</code> at the end it works fine, but the process gets terminated once the terminal is closed.</p> <p>Ultimately I just want to run this command as a cron job every day.</p> <p>I am not sure what is wrong, or whether this whole process can be done in a better way.</p> <p>Any help is much appreciated, thank you.</p>
Tony Frank
<p>I think <code>nohup</code> can help in this scenario. Try executing the command with <code>nohup</code>.</p> <p>Example:</p> <blockquote> <p>nohup &quot;your command&quot; &gt; &quot;redirect file-name&quot; &amp;</p> </blockquote>
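<p>Applied to the command from the question, a rough sketch could look like the following. The pod name and output path are taken from the question; whether the process keeps running after the exec session ends still depends on how the container handles it, so treat this as something to try rather than a guaranteed fix:</p> <pre><code># nohup makes the aws process ignore the hangup signal and the trailing &amp; backgrounds it;
# stdout/stderr are redirected into the file inside the pod.
# -it is dropped here since no interactive terminal is needed for a background run.
kubectl exec &lt;podname&gt; -- bash -c &quot;nohup aws s3api list-objects --bucket bucketname-1 --query 'Contents[?StorageClass==\&quot;ONEZONE_IA\&quot;].[Key,StorageClass]' --output text &gt; /storage/ONEZONE_keys1.txt 2&gt;&amp;1 &amp;&quot;
</code></pre> <p>If the goal is a daily run, the same command can then be placed in a cron entry on a machine that has <code>kubectl</code> access to the cluster.</p>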
Prasanta Kumar Behera
<p>I have a DAG that uses <code>KubernetesPodOperator</code> and the task <code>get_train_test_model_task_count</code> in the DAG pushes an xcom variable and I want to use it in the following tasks.</p> <pre><code>run_this = BashOperator( task_id=&quot;also_run_this&quot;, bash_command='echo &quot;ti_key={{ ti.xcom_pull(task_ids=\&quot;get_train_test_model_task_count\&quot;, key=\&quot;return_value\&quot;)[\&quot;models_count\&quot;] }}&quot;', ) </code></pre> <p>The above DAG task works and it prints the value as <code>ti_key=24</code>.</p> <p>I want the same value to be used as a variable,</p> <pre><code>with TaskGroup(&quot;train_test_model_config&quot;) as train_test_model_config: models_count = &quot;{{ ti.xcom_pull(task_ids=\&quot;get_train_test_model_task_count\&quot;, key=\&quot;return_value\&quot;)[\&quot;models_count\&quot;] }}&quot; print(models_count) for task_num in range(0, int(models_count)): generate_train_test_model_config_task(task_num) </code></pre> <p><code>int(models_count)</code> doesnot work, by throwing the error -</p> <blockquote> <p>ValueError: invalid literal for int() with base 10: '{{ ti.xcom_pull(task_ids=&quot;get_train_test_model_task_count&quot;, key=&quot;return_value&quot;)[&quot;models_count&quot;] }}'</p> </blockquote> <p>And the <code>generate_train_test_model_config_task</code> looks as below:</p> <pre><code>def generate_train_test_model_config_task(task_num): task = KubernetesPodOperator( name=f&quot;train_test_model_config_{task_num}&quot;, image=build_model_image, labels=labels, cmds=[ &quot;python3&quot;, &quot;-m&quot;, &quot;src.models.train_test_model_config&quot;, &quot;--tenant=neu&quot;, f&quot;--model_tag_id={task_num}&quot;, &quot;--line_plan={{ ti.xcom_pull(key=\&quot;file_name\&quot;, task_ids=\&quot;extract_file_name\&quot;) }}&quot;, &quot;--staging_bucket=cs-us-ds&quot; ], task_id=f&quot;train_test_model_config_{task_num}&quot;, do_xcom_push=False, namespace=&quot;airflow&quot;, service_account_name=&quot;airflow-worker&quot;, get_logs=True, startup_timeout_seconds=300, container_resources={&quot;request_memory&quot;: &quot;29G&quot;, &quot;request_cpu&quot;: &quot;7000m&quot;}, node_selector={&quot;cloud.google.com/gke-nodepool&quot;: NODE_POOL}, tolerations=[ { &quot;key&quot;: NODE_POOL, &quot;operator&quot;: &quot;Equal&quot;, &quot;value&quot;: &quot;true&quot;, &quot;effect&quot;: &quot;NoSchedule&quot;, } ], ) return task </code></pre>
Tom J Muthirenthi
<p>The Jinja template pulls from the Airflow context which you only can do within a task, not in top level code.</p> <p>Also as a commenter said you will need to use dynamic task mapping to change the DAG structure dynamically, even if you hardcode the model_num or use another way to template it in, those code changes are only picked up every 30s by the scheduler on default and you have no backwards visibility into previous tasks, for example if one day there are only 2 models you can't see model 3 through 8 in the logs from the day before so it gets a bit messy when using a loop like that even if you can get it to work.</p> <p>The code below shows the structure that I think will achieve what you want, one model config generated for each <code>task_num</code>. This should work in Airflow 2.3+</p> <pre class="lang-py prettyprint-override"><code>@task def generate_list_of_model_nums(**context): model_count = context[&quot;ti&quot;].xcom_pull(task_ids=&quot;get_train_test_model_task_count&quot;, key=&quot;return_value&quot;)[&quot;models_count&quot;] return list(range(model_count + 1)) @task def generate_train_test_model_config_task(task_num): # code that generates the model config return model_config model_nums=generate_list_of_model_nums() generate_train_test_model_config_task.expand(task_num=model_nums) </code></pre> <p>Notes: I did not test the code above so there might be typos, but this is the general idea, create a list of all the task nums, then use dynamic task mapping to expand over the list.</p> <p>If you pull the XCom from the <code>generate_train_test_model_config_task</code> you should get a list of all the model configs :)</p> <p>Some resources that might help to adapt this to traditional operators:</p> <ul> <li><a href="https://docs.astronomer.io/learn/dynamic-tasks" rel="nofollow noreferrer">Dynamic task mapping guide</a></li> <li><a href="https://docs.astronomer.io/learn/airflow-context" rel="nofollow noreferrer">Airflow context guide</a></li> </ul> <p>Disclaimer: I work at Astronomer the org who created the guides above :)</p> <p>EDIT: thanks for sharing the KPO code! I see you are using the task_num in two parameters, this means you can try to use <code>.expand_kwargs</code> over a list of sets of inputs in form of a dictionaries and then map the KPO directly. 
Note that this is an Airflow 2.4+ feature.</p> <p>Note on the code: I tested the dict generation function but don't have a K8s cluster running rn so I did not test the latter part, I think <code>name</code> and <code>cmd</code> should be expandable 🤞</p> <pre class="lang-py prettyprint-override"><code>@task def generate_list_of_param_dicts(**context): model_count = context[&quot;ti&quot;].xcom_pull( task_ids=&quot;get_train_test_model_task_count&quot;, key=&quot;return_value&quot; )[&quot;models_count&quot;] param_dicts = [] for i in range(model_count): param_dict = { &quot;name&quot;: f&quot;train_test_model_config_{i}&quot;, &quot;cmds&quot;: [ &quot;python3&quot;, &quot;-m&quot;, &quot;src.models.train_test_model_config&quot;, &quot;--tenant=neu&quot;, f&quot;--model_tag_id={i}&quot;, '--line_plan={{ ti.xcom_pull(key=&quot;file_name&quot;, task_ids=&quot;extract_file_name&quot;) }}', &quot;--staging_bucket=cs-us-ds&quot;, ], } param_dicts.append(param_dict) return param_dicts task = KubernetesPodOperator.partial( image=build_model_image, labels=labels, task_id=f&quot;train_test_model_config&quot;, do_xcom_push=False, namespace=&quot;airflow&quot;, service_account_name=&quot;airflow-worker&quot;, get_logs=True, startup_timeout_seconds=300, container_resources={&quot;request_memory&quot;: &quot;29G&quot;, &quot;request_cpu&quot;: &quot;7000m&quot;}, node_selector={&quot;cloud.google.com/gke-nodepool&quot;: NODE_POOL}, tolerations=[ { &quot;key&quot;: NODE_POOL, &quot;operator&quot;: &quot;Equal&quot;, &quot;value&quot;: &quot;true&quot;, &quot;effect&quot;: &quot;NoSchedule&quot;, } ], ).expand_kwargs(generate_list_of_param_dicts()) </code></pre>
TJaniF
<p>I am mounting an <code>emptyDir</code> volume so it can be used for sharing files between containers running in the same pod. Let's say the mount point is called <code>/var/log/mylogs</code>. When I mount the <code>emptyDir</code>, all of the pre-existing files that were in <code>mylogs</code> get deleted. I know this is part of how Kubernetes works, but I was wondering if there is a way to get around it? I <a href="https://medium.com/hackernoon/mount-file-to-kubernetes-pod-without-deleting-the-existing-file-in-the-docker-container-in-the-88b5d11661a6" rel="nofollow noreferrer">tried using subPath</a>, but it looks like that only works for single files.</p>
FestiveHydra235
<p>Consider using <code>PersistentVolumes</code> instead, since they serve as long-term storage in your Kubernetes cluster. They exist beyond containers, pods, and nodes. A pod uses a PersistentVolumeClaim to get read and write access to the persistent volume. A <code>PersistentVolume</code> decouples the storage from the Pod, its lifecycle is independent, and it enables safe pod restarts and sharing data between pods.</p> <blockquote> <p>But will the pvc delete all of the existing contents? How would multiple files work with the subPath?</p> </blockquote> <p>Rather than looking for a workaround with emptyDir or subPath, you can simply use <code>PersistentVolumes</code>. What you need is data persistence, that is, a mechanism that keeps data even after the Pod is deleted.</p> <p>You can find more useful information about <code>PersistentVolumes</code> in the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">official documentation</a> or in <a href="https://loft.sh/blog/kubernetes-persistent-volumes-examples-and-best-practices/" rel="nofollow noreferrer">this article</a>.</p>
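<p>As a rough sketch of what that looks like in practice (the claim name, size, access mode and image below are placeholders for illustration), a PersistentVolumeClaim plus a pod that mounts it could be created like this:</p> <pre><code># Minimal sketch: a claim and a pod mounting it at /var/log/mylogs.
# Assumes the cluster has a default StorageClass that can provision the volume.
kubectl apply -f - &lt;&lt;'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mylogs-pvc
spec:
  accessModes: [&quot;ReadWriteOnce&quot;]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: log-writer
spec:
  containers:
  - name: app
    image: busybox
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 3600&quot;]
    volumeMounts:
    - name: mylogs
      mountPath: /var/log/mylogs
  volumes:
  - name: mylogs
    persistentVolumeClaim:
      claimName: mylogs-pvc
EOF
</code></pre> <p>Anything written under <code>/var/log/mylogs</code> then lives in the claim, so it survives container and pod restarts independently of the pod's lifecycle.</p>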
Bazhikov
<p>I'm using podman 4.5-dev I have two pods deployed using: <em>podman kube play foo.yaml</em> <em>podman kube play bar.yaml</em></p> <p>I specified the pods' hostnames in the files, but they won't get resolved inside the containers. I verified that the pods are in the same network.</p> <p>Is there some DNS configuration missing? Should I use a Services? The official docs lack of a precise indication about this topic</p> <p>Here's one of the two pods's YAML (the other one has the same keys with different values):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: labels: app: postgres name: postgres spec: hostname: postgres containers: - name: pgadmin-container image: docker.io/dpage/pgadmin4:latest ports: - containerPort: 80 hostPort: 9876 </code></pre> <p>Here's some terminal output that could be useful:</p> <pre class="lang-bash prettyprint-override"><code>[daniele@localhost]$ podman pod inspect xptssrv |grep -ni network 25: &quot;HostNetwork&quot;: false, 34: &quot;Networks&quot;: [ 35: &quot;podman-default-kube-network&quot; 37: &quot;NetworkOptions&quot;: null, [daniele@localhost]$ podman pod inspect postgres |grep -ni network 25: &quot;HostNetwork&quot;: false, 34: &quot;Networks&quot;: [ 35: &quot;podman-default-kube-network&quot; 37: &quot;NetworkOptions&quot;: null, [daniele@localhost]$ podman network inspect podman-default-kube-network |grep &quot;&quot; -n 1:[ 2: { 3: &quot;name&quot;: &quot;podman-default-kube-network&quot;, 4: &quot;id&quot;: &quot;a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc&quot;, 5: &quot;driver&quot;: &quot;bridge&quot;, 6: &quot;network_interface&quot;: &quot;cni-podman1&quot;, 7: &quot;created&quot;: &quot;2023-02-21T23:03:25.800256942+01:00&quot;, 8: &quot;subnets&quot;: [ 9: { 10: &quot;subnet&quot;: &quot;10.89.0.0/24&quot;, 11: &quot;gateway&quot;: &quot;10.89.0.1&quot; 12: } 13: ], 14: &quot;ipv6_enabled&quot;: false, 15: &quot;internal&quot;: false, 16: &quot;dns_enabled&quot;: false, 17: &quot;ipam_options&quot;: { 18: &quot;driver&quot;: &quot;host-local&quot; 19: } 20: } 21:] [daniele@localhost pods]$ podman exec xptssrv-xptssrv-container cat /etc/resolv.conf nameserver 192.168.1.6 nameserver 8.8.8.8 </code></pre>
Daniele Navarra
<p>Edit your network settings in <code>/etc/containers/net.d/podman-default-kube-network.conflist</code> and change this line</p> <pre><code>&quot;dns_enabled&quot;: false, </code></pre> <p>to this:</p> <pre><code>&quot;dns_enabled&quot;: true, </code></pre> <p>Then reboot, bring your YAML up again, and try to resolve this:</p> <pre><code>postgres_pgadmin-container_1 </code></pre> <p>Or maybe this:</p> <pre><code>postgres_postgres_1 </code></pre> <h2>Edit 1</h2> <p>Just copy the config file:</p> <pre><code>sudo cp /usr/share/containers/containers.conf /etc/containers/containers.conf </code></pre> <p>Then, in that file, change the network backend to netavark using the following command:</p> <pre><code>sed -i &quot;/^\s*\#*\s*network_backend\s*=.*$/ s/^.*$/network_backend = \&quot;netavark\&quot;/&quot; /etc/containers/containers.conf </code></pre> <p><strong>Notice:</strong> I think it's better to restart your system in order to apply the changes.</p>
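<p>After the reboot you can sanity-check the setting and try a lookup from inside one of the containers. The use of <code>getent</code> below is an assumption and depends on what is installed in your image; the target name is one of the names suggested above:</p> <pre><code># Confirm DNS is now enabled for the network the pods share.
podman network inspect podman-default-kube-network | grep dns_enabled
# Try resolving the other pod from inside a running container (requires a
# resolver tool such as getent, nslookup or ping inside the image).
podman exec xptssrv-xptssrv-container getent hosts postgres_pgadmin-container_1
</code></pre>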
Alijvhr
<p>I have a service that uses Spring Cloud Kubernetes Config to reload its configuration when a value in a ConfigMap changes. That all works great.</p> <p>Is it possible to use Spring Cloud Kubernetes (or one of its dependencies) to <strong>write</strong> a ConfigMap value? I didn't see any examples of this in the documentation (<a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/" rel="nofollow noreferrer">here</a>). Can I do this programmatically, or do I need to call the underlying Kubernetes APIs to do this?</p>
Mark
<p>Based on Eugene's reply:</p> <p>No, this is not possible at the moment. You can go to <a href="https://github.com/spring-cloud/spring-cloud-kubernetes#1-why-do-you-need-spring-cloud-kubernetes" rel="nofollow noreferrer">GitHub</a> and create an issue explaining your use case, and this feature <strong>may be added</strong> in a future release.</p>
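<p>Until such a feature exists, writing a value means going through the Kubernetes API itself, either with a client library or with a raw call. As a rough, hedged sketch of the raw API path (the ConfigMap name <code>my-config</code>, key <code>my-key</code> and namespace <code>default</code> are placeholders, and the pod's service account needs RBAC permission to patch configmaps):</p> <pre><code># Patch a single key of a ConfigMap from inside a pod using the mounted
# service-account token and the in-cluster API endpoint.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert &quot;$CACERT&quot; \
  -H &quot;Authorization: Bearer $TOKEN&quot; \
  -H &quot;Content-Type: application/strategic-merge-patch+json&quot; \
  -X PATCH \
  -d '{&quot;data&quot;:{&quot;my-key&quot;:&quot;new-value&quot;}}' \
  https://kubernetes.default.svc/api/v1/namespaces/default/configmaps/my-config
</code></pre> <p>The same operation is available through the official Kubernetes Java client or the fabric8 client if you prefer to do it from application code.</p>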
Bazhikov