<p>I am trying to set up a Prometheus CloudWatch agent to scrape metrics from Fluent Bit that is running on Amazon EKS. I am following this guide to set up a CloudWatch agent with Prometheus: <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-Setup-configure.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-Setup-configure.html</a>. In this guide prometheus agent is created as a Deployment.</p> <p>My Fluent Bit is deployed as DaemonSet on EKS. I've read this answer that explains difference of Deployment vs DaemonSet: <a href="https://stackoverflow.com/questions/53888389/difference-between-daemonsets-and-deployments">Difference between daemonsets and deployments</a></p> <p>But I still have hard time understanding of what is the best to use for prometheus agent. The way I see it, since Fluent Bit is a DaemonSet, I would want Prometheus agent to be a DaemonSet too to make sure it will grab metrics from those PODs that might spawn in Fluent Bit. Am I right?</p>
Bryuk
<p><code>...since Fluent Bit is a DaemonSet, I would want Prometheus agent to be a DaemonSet too to make sure it will grab metrics from those PODs that might spawn in Fluent Bit. Am I right?</code></p> <p>This is correct: running the agent as a DaemonSet lets it scrape every Fluent Bit pod in your cluster and automatically adapts when nodes scale out or in.</p>
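<p>A minimal sketch of the change, assuming the image, config and ServiceAccount from the AWS guide you linked (names below are loosely taken from that guide and should be adapted to your manifest); essentially the guide's Deployment is turned into a DaemonSet so one agent runs on every node:</p> <pre><code># Sketch only -- reuse the container spec, ConfigMaps and ServiceAccount
# from the manifest in the AWS guide; only the kind and selector change.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cwagent-prometheus
  namespace: amazon-cloudwatch
spec:
  selector:
    matchLabels:
      app: cwagent-prometheus
  template:
    metadata:
      labels:
        app: cwagent-prometheus
    spec:
      serviceAccountName: cwagent-prometheus
      containers:
        - name: cloudwatch-agent
          image: amazon/cloudwatch-agent:latest
          # plus the env, volumeMounts and volumes from the guide's manifest
          # (the Prometheus scrape config and CloudWatch agent config maps)
</code></pre> <p>With a DaemonSet, an agent is scheduled onto every node, including nodes added later, so Fluent Bit pods created on new nodes are covered without extra configuration.</p>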
gohm'c
<p>We deploy service with helm. The ingress template looks like that:</p> <pre><code>--- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ui-app-ingress {{- with .Values.ingress.annotations}} annotations: {{- toYaml . | nindent 4}} {{- end}} spec: rules: - host: {{ .Values.ingress.hostname }} http: paths: - path: / pathType: Prefix backend: service: name: {{ include &quot;ui-app-chart.fullname&quot; . }} port: number: 80 tls: - hosts: - {{ .Values.ingress.hostname }} secretName: {{ .Values.ingress.certname }} </code></pre> <p>as you can see, we already use <code>networking.k8s.io/v1</code> but if i watch the treafik logs, i find this error:</p> <p><code> 1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)</code></p> <p>what results in tls cert error:</p> <pre><code>time=&quot;2022-06-07T15:40:35Z&quot; level=debug msg=&quot;Serving default certificate for request: \&quot;example.de\&quot;&quot; time=&quot;2022-06-07T15:40:35Z&quot; level=debug msg=&quot;http: TLS handshake error from 10.1.0.4:57484: remote error: tls: unknown certificate&quot; time=&quot;2022-06-07T15:40:35Z&quot; level=debug msg=&quot;Serving default certificate for request: \&quot;example.de\&quot;&quot; time=&quot;2022-06-07T15:53:06Z&quot; level=debug msg=&quot;Serving default certificate for request: \&quot;\&quot;&quot; time=&quot;2022-06-07T16:03:31Z&quot; level=debug msg=&quot;Serving default certificate for request: \&quot;&lt;ip-adress&gt;\&quot;&quot; time=&quot;2022-06-07T16:03:32Z&quot; level=debug msg=&quot;Serving default certificate for request: \&quot;&lt;ip-adress&gt;\&quot;&quot; PS C:\WINDOWS\system32&gt; </code></pre> <p>i already found out that <code>networking.k8s.io/v1beta1</code> is not longer served, but <code>networking.k8s.io/v1</code> was defined in the template all the time as ApiVersion.</p> <p>Why does it still try to get from <code>v1beta1</code>? 
And how can i fix this?</p> <p>We use this TLSOptions:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: TLSOption metadata: name: default namespace: default spec: minVersion: VersionTLS12 maxVersion: VersionTLS13 cipherSuites: - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 </code></pre> <p>we use helm-treafik rolled out with terraform:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: &quot;2&quot; meta.helm.sh/release-name: traefik meta.helm.sh/release-namespace: traefik creationTimestamp: &quot;2021-06-12T10:06:11Z&quot; generation: 2 labels: app.kubernetes.io/instance: traefik app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: traefik helm.sh/chart: traefik-9.19.1 name: traefik namespace: traefik resourceVersion: &quot;86094434&quot; uid: 903a6f54-7698-4290-bc59-d234a191965c spec: progressDeadlineSeconds: 600 replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/instance: traefik app.kubernetes.io/name: traefik strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: app.kubernetes.io/instance: traefik app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: traefik helm.sh/chart: traefik-9.19.1 spec: containers: - args: - --global.checknewversion - --global.sendanonymoususage - --entryPoints.traefik.address=:9000/tcp - --entryPoints.web.address=:8000/tcp - --entryPoints.websecure.address=:8443/tcp - --api.dashboard=true - --ping=true - --providers.kubernetescrd - --providers.kubernetesingress - --providers.file.filename=/etc/traefik/traefik.yml - --accesslog=true - --accesslog.format=json - --log.level=DEBUG - --entrypoints.websecure.http.tls - --entrypoints.web.http.redirections.entrypoint.to=websecure - --entrypoints.web.http.redirections.entrypoint.scheme=https - --entrypoints.web.http.redirections.entrypoint.permanent=true - --entrypoints.web.http.redirections.entrypoint.to=:443 image: traefik:2.4.8 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /ping port: 9000 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 2 name: traefik ports: - containerPort: 9000 name: traefik protocol: TCP - containerPort: 8000 name: web protocol: TCP - containerPort: 8443 name: websecure protocol: TCP readinessProbe: failureThreshold: 1 httpGet: path: /ping port: 9000 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 2 resources: {} securityContext: capabilities: add: - NET_BIND_SERVICE drop: - ALL readOnlyRootFilesystem: true runAsGroup: 0 runAsNonRoot: false runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /data name: data - mountPath: /tmp name: tmp - mountPath: /etc/traefik name: traefik-cm readOnly: true dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65532 serviceAccount: traefik serviceAccountName: traefik terminationGracePeriodSeconds: 60 tolerations: - effect: NoSchedule key: env operator: Equal value: conhub volumes: - emptyDir: {} name: data - emptyDir: {} name: tmp - configMap: defaultMode: 420 name: traefik-cm name: traefik-cm status: availableReplicas: 3 conditions: - lastTransitionTime: &quot;2022-06-07T09:19:58Z&quot; lastUpdateTime: &quot;2022-06-07T09:19:58Z&quot; message: Deployment has minimum availability. 
reason: MinimumReplicasAvailable status: &quot;True&quot; type: Available - lastTransitionTime: &quot;2021-06-12T10:06:11Z&quot; lastUpdateTime: &quot;2022-06-07T16:39:01Z&quot; message: ReplicaSet &quot;traefik-84c6f5f98b&quot; has successfully progressed. reason: NewReplicaSetAvailable status: &quot;True&quot; type: Progressing observedGeneration: 2 readyReplicas: 3 replicas: 3 updatedReplicas: 3 </code></pre> <pre><code>resource &quot;helm_release&quot; &quot;traefik&quot; { name = &quot;traefik&quot; namespace = &quot;traefik&quot; create_namespace = true repository = &quot;https://helm.traefik.io/traefik&quot; chart = &quot;traefik&quot; set { name = &quot;service.spec.loadBalancerIP&quot; value = azurerm_public_ip.pub_ip.ip_address } set { name = &quot;service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-resource-group&quot; value = var.resource_group_aks } set { name = &quot;additionalArguments&quot; value = &quot;{--accesslog=true,--accesslog.format=json,--log.level=DEBUG,--entrypoints.websecure.http.tls,--entrypoints.web.http.redirections.entrypoint.to=websecure,--entrypoints.web.http.redirections.entrypoint.scheme=https,--entrypoints.web.http.redirections.entrypoint.permanent=true,--entrypoints.web.http.redirections.entrypoint.to=:443}&quot; } set { name = &quot;deployment.replicas&quot; value = 3 } timeout = 600 depends_on = [ azurerm_kubernetes_cluster.aks ] } </code></pre>
DaveVentura
<p>I found out that the problem was the version of the Traefik image.</p> <p>I quick-fixed it by setting a newer image:</p> <p><code>kubectl set image deployment/traefik traefik=traefik:2.7.0 -n traefik</code></p>
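<p>Since the chart is rolled out with Terraform, a more durable fix is to pin the image version in the release values so the next <code>terraform apply</code> does not roll it back. A sketch, assuming the Traefik chart's usual <code>image.name</code>/<code>image.tag</code> values (check your chart version):</p> <pre><code># values sketch -- key names assumed from the official traefik chart
image:
  name: traefik
  tag: 2.7.0
</code></pre> <p>With the Terraform setup from the question, the same value can be passed as an extra <code>set</code> block with <code>name = &quot;image.tag&quot;</code>.</p>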
DaveVentura
<p>I try to <code>patch</code> a <code>service</code> (add port declaration):</p> <pre><code>kind: Service apiVersion: v1 metadata: name: istio-ingressgateway namespace: istio-system labels: app: istio-ingressgateway istio: ingressgateway release: istio spec: ports: - name: status-port protocol: TCP port: 15021 targetPort: 15021 nodePort: 30805 - name: http2 protocol: TCP port: 80 targetPort: 8080 nodePort: 32130 - name: https protocol: TCP port: 443 targetPort: 8443 nodePort: 30720 - name: tls protocol: TCP port: 15443 targetPort: 15443 nodePort: 31202 selector: app: istio-ingressgateway istio: ingressgateway clusterIP: 172.30.62.239 type: LoadBalancer sessionAffinity: None externalTrafficPolicy: Cluster status: loadBalancer: {} </code></pre> <p>using <code>kubectl</code> or <code>oc</code> <code>patch</code>-command</p> <pre><code>kubectl patch service istio-ingressgateway -n istio-system --patch - &lt;&lt;EOF spec: ports: - name: gw protocol: TCP port: 3080 targetPort: 3080 nodePort: 31230 EOF </code></pre> <p>, but get an error</p> <pre><code>Error from server (BadRequest): json: cannot unmarshal array into Go value of type map[string]interface {} </code></pre> <p>👉🏻 under the hood, <code>k8s/openshift</code> use <code>GoLang</code> to parse <code>yaml</code> 👉 I tried to find same solutions in <code>go</code> - failed...</p> <p>What's wrong?</p>
kozmo
<p>Try using a <code>JSON</code> patch instead:</p> <pre><code>oc patch service/simple-server -p \ '{ &quot;spec&quot;: { &quot;ports&quot;: [ { &quot;name&quot;: &quot;gw&quot;, &quot;protocol&quot;: &quot;TCP&quot;, &quot;port&quot;: 1234, &quot;targetPort&quot;: 1234 } ] } }' </code></pre>
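<p>If you prefer to keep the patch in YAML, another option is to put it in a file and pass the file to the command, e.g. <code>kubectl patch service istio-ingressgateway -n istio-system --patch-file gw-port.yaml</code> (a sketch; <code>--patch-file</code> needs a reasonably recent <code>kubectl</code>/<code>oc</code>, and the file name is arbitrary):</p> <pre><code># gw-port.yaml -- strategic merge patch adding one port to the Service
spec:
  ports:
    - name: gw
      protocol: TCP
      port: 3080
      targetPort: 3080
      nodePort: 31230
</code></pre> <p>With a strategic merge patch the <code>ports</code> list is merged rather than replaced, so the existing ports should be preserved.</p>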
dante
<p>My GKE deployment consists of N pods (possibly on different nodes) and a shared volume, which is <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">dynamically provisioned</a> by <code>pd.csi.storage.gke.io</code> and is a Persistent Disk in GCP. I need to initialize this disk with data before the pods go live.</p> <p>My problem is I need to set <code>accessModes</code> to <code>ReadOnlyMany</code> and be able to mount it to all pods across different nodes in read-only mode, which I assume effectively would make it impossible to mount it in write mode to the <code>initContainer</code>.</p> <p>Is there a solution to this issue? <a href="https://stackoverflow.com/questions/57754103/how-to-pre-populate-a-readonlymany-persistent-volume">Answer to this question</a> suggests a good solution for a case when each pod has their own disk mounted, but I need to have one disk shared among all pods since my data is quite large.</p>
Leonid Bor
<p><code>...I need to have one disk shared among all pods</code></p> <p>You can try Filestore. First, create a Filestore <a href="https://cloud.google.com/filestore/docs/creating-instances" rel="nofollow noreferrer">instance</a> and save your data on a Filestore volume. Then <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver#enabling_the_on_a_new_cluster" rel="nofollow noreferrer">install</a> the Filestore CSI driver on your cluster. Finally, share the data with the pods that need to read it using a PersistentVolume <a href="https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/blob/master/examples/kubernetes/pre-provision/preprov-pv.yaml" rel="nofollow noreferrer">referring</a> to the Filestore instance and volume above.</p>
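<p>A rough sketch of that last step, assuming the Filestore CSI driver is enabled and using placeholders for the instance details (the pre-provisioned example linked above shows the exact fields):</p> <pre><code># Sketch: pre-provisioned PersistentVolume backed by an existing Filestore
# share, claimed ReadOnlyMany so every pod can mount it read-only.
# &lt;location&gt;, &lt;instance-name&gt;, &lt;share-name&gt; and &lt;filestore-ip&gt; are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data-pv
spec:
  storageClassName: &quot;&quot;
  capacity:
    storage: 1Ti
  accessModes:
    - ReadOnlyMany
  csi:
    driver: filestore.csi.storage.gke.io
    volumeHandle: modeInstance/&lt;location&gt;/&lt;instance-name&gt;/&lt;share-name&gt;
    volumeAttributes:
      ip: &lt;filestore-ip&gt;
      volume: &lt;share-name&gt;
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  storageClassName: &quot;&quot;
  volumeName: shared-data-pv
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Ti
</code></pre> <p>The pods then mount <code>shared-data-pvc</code> read-only, and the data itself can be written to the Filestore share beforehand from any client (for example a one-off pod or a VM mounting the NFS share read-write).</p>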
gohm'c
<p>I'm trying to install a kubernetes cluster on my server (Debian 10). On my server I used ufw as firewall. Before creating the cluster I allowed these ports on ufw:</p> <blockquote> <p>179/tcp, 4789/udp, 5473/tcp, 443 /tcp, 6443/tcp, 2379/tcp, 4149/tcp, 10250/tcp, 10255/tcp, 10256/tcp, 9099/tcp, 6443/tcp </p> </blockquote> <p>As calico doc suggests (<a href="https://docs.projectcalico.org/getting-started/kubernetes/requirements" rel="noreferrer">https://docs.projectcalico.org/getting-started/kubernetes/requirements</a>) and this git repo on kubernetes security too (<a href="https://github.com/freach/kubernetes-security-best-practice" rel="noreferrer">https://github.com/freach/kubernetes-security-best-practice</a>).</p> <p>But when I want to create the cluster, the calico/node pod can't start because Felix is not live (I allowed 9099/tcp on ufw):</p> <blockquote> <p>Liveness probe failed: calico/node is not ready: Felix is not live: Get <a href="http://localhost:9099/liveness" rel="noreferrer">http://localhost:9099/liveness</a>: dial tcp [::1]:9099: connect: connection refused</p> </blockquote> <p>If I disable ufw, the cluster is created and there is no error.</p> <p>So I would like to know how I should configure ufw in order for kubernetes to work. If anyone could help me, it would be very great, thanks !</p> <p><strong>Edit: My ufw status</strong></p> <pre><code>To Action From 6443/tcp ALLOW Anywhere 9099 ALLOW Anywhere 179/tcp ALLOW Anywhere 4789/udp ALLOW Anywhere 5473/tcp ALLOW Anywhere 2379/tcp ALLOW Anywhere 8181 ALLOW Anywhere 8080 ALLOW Anywhere ###### (v6) LIMIT Anywhere (v6) # allow ssh connections in Postfix (v6) ALLOW Anywhere (v6) KUBE (v6) ALLOW Anywhere (v6) 6443 (v6) ALLOW Anywhere (v6) 6783/udp (v6) ALLOW Anywhere (v6) 6784/udp (v6) ALLOW Anywhere (v6) 6783/tcp (v6) ALLOW Anywhere (v6) 443/tcp (v6) ALLOW Anywhere (v6) 80/tcp (v6) ALLOW Anywhere (v6) 4149/tcp (v6) ALLOW Anywhere (v6) 10250/tcp (v6) ALLOW Anywhere (v6) 10255/tcp (v6) ALLOW Anywhere (v6) 10256/tcp (v6) ALLOW Anywhere (v6) 9099/tcp (v6) ALLOW Anywhere (v6) 6443/tcp (v6) ALLOW Anywhere (v6) 9099 (v6) ALLOW Anywhere (v6) 179/tcp (v6) ALLOW Anywhere (v6) 4789/udp (v6) ALLOW Anywhere (v6) 5473/tcp (v6) ALLOW Anywhere (v6) 2379/tcp (v6) ALLOW Anywhere (v6) 8181 (v6) ALLOW Anywhere (v6) 8080 (v6) ALLOW Anywhere (v6) 53 ALLOW OUT Anywhere # allow DNS calls out 123 ALLOW OUT Anywhere # allow NTP out 80/tcp ALLOW OUT Anywhere # allow HTTP traffic out 443/tcp ALLOW OUT Anywhere # allow HTTPS traffic out 21/tcp ALLOW OUT Anywhere # allow FTP traffic out 43/tcp ALLOW OUT Anywhere # allow whois SMTPTLS ALLOW OUT Anywhere # open TLS port 465 for use with SMPT to send e-mails 10.32.0.0/12 ALLOW OUT Anywhere on weave 53 (v6) ALLOW OUT Anywhere (v6) # allow DNS calls out 123 (v6) ALLOW OUT Anywhere (v6) # allow NTP out 80/tcp (v6) ALLOW OUT Anywhere (v6) # allow HTTP traffic out 443/tcp (v6) ALLOW OUT Anywhere (v6) # allow HTTPS traffic out 21/tcp (v6) ALLOW OUT Anywhere (v6) # allow FTP traffic out 43/tcp (v6) ALLOW OUT Anywhere (v6) # allow whois SMTPTLS (v6) ALLOW OUT Anywhere (v6) # open TLS port 465 for use with SMPT to send e-mails </code></pre> <p>Sorry my ufw rules are a bit messy, I tried too many things to get kubernetes working.</p>
pchmn
<blockquote> <p>I'm trying to install a kubernetes cluster on my server (Debian 10). On my server I used ufw as firewall. Before creating the cluster I allowed these ports on ufw: 179/tcp, 4789/udp, 5473/tcp, 443 /tcp, 6443/tcp, 2379/tcp, 4149/tcp, 10250/tcp, 10255/tcp, 10256/tcp, 9099/tcp, 6443/tcp</p> </blockquote> <p><strong><em>NOTE:</strong> all executable commands begin with <code>$</code></em></p> <ul> <li>Following this initial instruction, I installed ufw on a Debian 10 and enabled the same ports you mention:</li> </ul> <pre><code>$ sudo apt update &amp;&amp; sudo apt-upgrade -y $ sudo apt install ufw -y $ sudo ufw allow ssh Rule added Rule added (v6) $ sudo ufw enable Command may disrupt existing ssh connections. Proceed with operation (y|n)? y Firewall is active and enabled on system startup $ sudo ufw allow 179/tcp $ sudo ufw allow 4789/tcp $ sudo ufw allow 5473/tcp $ sudo ufw allow 443/tcp $ sudo ufw allow 6443/tcp $ sudo ufw allow 2379/tcp $ sudo ufw allow 4149/tcp $ sudo ufw allow 10250/tcp $ sudo ufw allow 10255/tcp $ sudo ufw allow 10256/tcp $ sudo ufw allow 9099/tcp $ sudo ufw status Status: active To Action From -- ------ ---- 22/tcp ALLOW Anywhere 179/tcp ALLOW Anywhere 4789/tcp ALLOW Anywhere 5473/tcp ALLOW Anywhere 443/tcp ALLOW Anywhere 6443/tcp ALLOW Anywhere 2379/tcp ALLOW Anywhere 4149/tcp ALLOW Anywhere 10250/tcp ALLOW Anywhere 10255/tcp ALLOW Anywhere 10256/tcp ALLOW Anywhere 22/tcp (v6) ALLOW Anywhere (v6) 179/tcp (v6) ALLOW Anywhere (v6) 4789/tcp (v6) ALLOW Anywhere (v6) 5473/tcp (v6) ALLOW Anywhere (v6) 443/tcp (v6) ALLOW Anywhere (v6) 6443/tcp (v6) ALLOW Anywhere (v6) 2379/tcp (v6) ALLOW Anywhere (v6) 4149/tcp (v6) ALLOW Anywhere (v6) 10250/tcp (v6) ALLOW Anywhere (v6) 10255/tcp (v6) ALLOW Anywhere (v6) 10256/tcp (v6) ALLOW Anywhere (v6) </code></pre> <hr> <ul> <li>Now I'll install <a href="https://docs.docker.com/install/linux/docker-ce/debian/" rel="noreferrer">Docker</a>:</li> </ul> <pre><code>$ sudo apt-get update $ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common= </code></pre> <ul> <li>Adding Docker repository:</li> </ul> <pre><code>$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - $ sudo apt-key fingerprint 0EBFCD88 $ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable" </code></pre> <ul> <li>Update source list and install Docker-ce:</li> </ul> <pre><code>$ sudo apt-get update $ sudo apt-get -y install docker-ce </code></pre> <p><strong><em>NOTE:</strong> On production system recomend install a fixed version of docker:</em></p> <pre><code>$ apt-cache madison docker-ce $ sudo apt-get install docker-ce=&lt;VERSION&gt; </code></pre> <hr> <ul> <li>Installing <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl" rel="noreferrer">Kube Tools</a> - kubeadm, kubectl, kubelet:</li> </ul> <pre><code>$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - </code></pre> <ul> <li>Configure Kubernetes repository (copy the 3 lines and paste at once):</li> </ul> <pre><code>$ cat &lt;&lt;EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list deb https://apt.kubernetes.io/ kubernetes-xenial main EOF </code></pre> <ul> <li>Installing packages:</li> </ul> <pre><code>$ sudo apt-get update $ sudo apt-get install -y kubelet kubeadm kubectl </code></pre> <ul> <li>After installing mark theses packages to don’t update automatically:</li> </ul> 
<pre><code>$ sudo apt-mark hold kubelet kubeadm kubectl </code></pre> <hr> <ul> <li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node" rel="noreferrer">Initialize the Cluster</a>:</li> </ul> <pre><code>$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16 </code></pre> <ul> <li>Make kubectl enabled to non-root user:</li> </ul> <pre><code>$ mkdir -p $HOME/.kube $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config $ sudo chown $(id -u):$(id -g) $HOME/.kube/config </code></pre> <ul> <li><a href="https://docs.projectcalico.org/getting-started/kubernetes/quickstart" rel="noreferrer">Installing Calico</a>:</li> </ul> <pre><code>$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml configmap/calico-config created customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created clusterrole.rbac.authorization.k8s.io/calico-node created clusterrolebinding.rbac.authorization.k8s.io/calico-node created daemonset.apps/calico-node created serviceaccount/calico-node created deployment.apps/calico-kube-controllers created serviceaccount/calico-kube-controllers created </code></pre> <ul> <li>Check the status:</li> </ul> <pre><code>$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-555fc8cc5c-wnnvq 1/1 Running 0 26m calico-node-sngt8 1/1 Running 0 26m coredns-66bff467f8-2qqlv 1/1 Running 0 55m coredns-66bff467f8-vptpr 1/1 Running 0 55m etcd-kubeadm-ufw-debian10 1/1 Running 0 55m kube-apiserver-kubeadm-ufw-debian10 1/1 Running 0 55m kube-controller-manager-kubeadm-ufw-debian10 1/1 Running 0 55m kube-proxy-nx8cz 1/1 Running 0 55m kube-scheduler-kubeadm-ufw-debian10 1/1 Running 0 55m </code></pre> <hr> <p><strong>Considerations:</strong></p> <blockquote> <p>Sorry my ufw rules are a bit messy, I tried too many things to get kubernetes working.</p> </blockquote> <ul> <li>It's normal to try many things to make something work, but it sometimes end up becoming the issue itself.</li> <li>I'm posting you the step by step I did to deploy it on the same environment as you so you can follow it once again to achieve the same results.</li> 
<li>My Felix probe didn't get any errors; the only time it failed was when I tried (on purpose) to deploy Kubernetes without creating the rules on ufw.</li> </ul> <p><strong>If this does not solve it, next steps:</strong></p> <ul> <li>If, after following this tutorial, you still get a similar problem, please update the question with the following information: <ul> <li><code>kubectl describe pod &lt;pod_name&gt; -n kube-system</code></li> <li><code>kubectl get pod &lt;pod_name&gt; -n kube-system</code></li> <li><code>kubectl logs &lt;pod_name&gt; -n kube-system</code></li> <li>It's always recommended to start with a clean installation of Linux; if you are running a VM, delete the VM and create a new one.</li> <li>If you are running on bare metal, consider what else is running on the server; maybe there's other software interfering with network communication.</li> </ul></li> </ul> <p>Let me know in the comments if you find any problem following these troubleshooting steps.</p>
Will R.O.F.
<p>I'm trying to understand the Arch of Airflow on Kubernetes.</p> <p>Using the helm and Kubernetes executor, the installation mounts 3 pods called: Trigger, WebServer, and Scheduler...</p> <p>When I run a dag using the Kubernetes pod operator, it also mounts 2 pods more: one with the dag name and another one with the task name...</p> <p>I want to understand the communication between pods... So far I know the only the expressed in the image:</p> <p><a href="https://i.stack.imgur.com/TWQjx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TWQjx.png" alt="enter image description here" /></a></p> <p>Note: I'm using the git sync option</p> <p>Thanks in advance for the help that you can give me!!</p>
Reco Jhonatan
<p>An Airflow deployment has several components it needs to operate normally: Webserver, Database, Scheduler, Triggerer, Worker(s), Executor. You can read about them <a href="https://airflow.apache.org/docs/apache-airflow/stable/concepts/overview.html#architecture-overview" rel="nofollow noreferrer">here</a>.</p> <p>Let's go over the options:</p> <ol> <li><a href="https://airflow.apache.org/docs/apache-airflow/stable/executor/kubernetes.html#kubernetes-executor" rel="nofollow noreferrer">Kubernetes Executor</a> (what you chose): Since you are deploying on Kubernetes with the Kubernetes Executor, each task being executed is a pod. Airflow wraps every task with a pod, no matter what the task is. This gives you the isolation that Kubernetes offers, but also the overhead of creating a pod for each task. The Kubernetes Executor usually fits cases where many or most of your tasks take a long time to execute - if your tasks take 5 seconds to complete, it might not be worth paying the overhead of creating a pod per task. As for the DAG -&gt; Task1 relation in your diagram: the Scheduler launches the Airflow workers, the workers start the tasks in new pods, and the worker then needs to monitor the execution of the task.</li> <li><a href="https://airflow.apache.org/docs/apache-airflow/stable/executor/celery.html" rel="nofollow noreferrer">Celery Executor</a> - Sets up worker pods in which tasks run. This gives you speed, as there is no need to create a pod for each task, but there is no per-task isolation. Note that using this executor doesn't mean you can't run tasks in their own pod: you can still use the <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#kubernetespodoperator" rel="nofollow noreferrer">KubernetesPodOperator</a>, which creates a pod for the task.</li> <li><a href="https://airflow.apache.org/docs/apache-airflow/stable/executor/celery_kubernetes.html#celerykubernetes-executor" rel="nofollow noreferrer">CeleryKubernetes Executor</a> - The best of both worlds. You decide which tasks are executed by Celery and which by Kubernetes; for example, you can send small, short tasks to Celery and longer tasks to Kubernetes.</li> </ol> <p>How does it look pod-wise?</p> <ol> <li>Kubernetes Executor - Every task creates a pod. PythonOperator, BashOperator - all of them are wrapped in pods (the user doesn't need to change anything in the DAG code).</li> <li>Celery Executor - Every task is executed in a Celery worker (pod), so the pod is always Running, waiting to pick up tasks. You can still create a dedicated pod for a task by explicitly using the KubernetesPodOperator.</li> <li>CeleryKubernetes - A combination of the two above.</li> </ol> <p>Note again that you can use any of these executors in a Kubernetes environment. Keep in mind that these are just executors; Airflow has the other components mentioned earlier, so it's perfectly fine to deploy Airflow on Kubernetes (Scheduler, Webserver) but use the CeleryExecutor, in which case the user code (tasks) does not create new pods automatically.</p> <p>As for Triggers, since you asked about them specifically - it's a feature added in Airflow 2.2: <a href="https://airflow.apache.org/docs/apache-airflow/stable/concepts/deferring.html" rel="nofollow noreferrer">Deferrable Operators &amp; Triggers</a> allow tasks to defer and release their worker slot.</p>
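<p>For reference, with the official Airflow Helm chart the executor is just a value, so switching between these modes is a configuration change rather than an architectural one (a sketch, assuming the <code>apache-airflow/airflow</code> chart; key names may differ in other charts):</p> <pre><code># values sketch -- executor choice plus the git-sync you already use
executor: KubernetesExecutor   # or CeleryExecutor / CeleryKubernetesExecutor
dags:
  gitSync:
    enabled: true
</code></pre>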
Elad Kalif
<p>I use Helm and I have a problem: when a pod (StatefulSet) enters <code>CrashLoopBackOff</code>, it never exits this state.</p> <p>Even when there is a new rolling update, the StatefulSet stays in the same <code>CrashLoopBackOff</code> state from the old rolling update.</p> <h2>Question</h2> <p>What can I do to force the StatefulSet to start the new rolling update (or, even better, do it gracefully)?</p> <ul> <li>An answer for a k8s Deployment would also be great!</li> </ul>
Stav Alfi
<p>Assuming the installation succeeded before: you need to fix the <code>CrashLoopBackOff</code> first by running <code>helm rollback &lt;release&gt; --namespace &lt;if not default&gt; --timeout 5m0s &lt;revision #&gt;</code>; only then do a <code>helm upgrade</code> with the new image. You can get the list of revision numbers with <code>helm history &lt;release&gt; --namespace &lt;if not default&gt;</code>.</p>
gohm'c
<p>I'm trying to setup my Jenkins instance in <strong>Google Kubernetes Engine</strong>, also I am using <strong>Google login plugin</strong> so that I could login with my GCP user to Jenkins, I have installed Ingress controller which is <strong>NGINX</strong> and exposed Jenkins service using ingress.</p> <p>Domain under which I want to access Jenkins is : <strong>util.my-app.com/jenkins</strong></p> <p>In Jenkins config under parameter <em>Jenkins URL</em> I also set this domain name <strong>util.my-app.com/jenkins</strong></p> <p>And here's my Ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: jenkins-ing annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - host: util.my-app.com http: paths: - path: /jenkins/* backend: serviceName: jenkins-svc servicePort: 80 </code></pre> <p>In GCP Credentials page under <em>Authorized JavaScript origins</em> I set <strong><a href="http://util.my-app.com" rel="nofollow noreferrer">http://util.my-app.com</a></strong> and under <em>Authorized redirect URIs</em> I set <strong><a href="http://util.my-app.com/jenkins/securityRealm/finishLogin" rel="nofollow noreferrer">http://util.my-app.com/jenkins/securityRealm/finishLogin</a></strong></p> <p>It's either returning me 404 status or doing infinite redirects, what I noticed that when Jenkins Google login plugin does redirect it is like this <strong><a href="http://util.my-app.com/securityRealm/finishLogin" rel="nofollow noreferrer">http://util.my-app.com/securityRealm/finishLogin</a></strong> without "jenkins" part, what is wrong with my setup ?</p>
Laimis
<p>Welcome to Stack Laimis!</p> <p>I tested your ingress object, and there one issue.</p> <p><strong>Your Ingress is missing the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="nofollow noreferrer">rewrite-target</a>:</strong></p> <blockquote> <p>In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.</p> </blockquote> <p>This example from documentation <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite" rel="nofollow noreferrer">shows the structure required</a>:</p> <p>Here is your ingress with the edit:</p> <ul> <li>added the line <code>nginx.ingress.kubernetes.io/rewrite-target: /$1</code></li> </ul> <pre><code>apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1 kind: Ingress metadata: name: jenkins-ing annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: util.my-app.com http: paths: - path: /jenkins/* backend: serviceName: jenkins-svc servicePort: 80 </code></pre> <hr> <p><strong>Reproduction:</strong></p> <ul> <li>First I created a deployment. For that I'm using <code>echo-app</code> for it's elucidative output.</li> <li>Add a service to expose it inside the cluster on <code>port 8080</code> and outside as <code>NodePort</code>.</li> </ul> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: echo1-deploy spec: selector: matchLabels: app: echo1-app template: metadata: labels: app: echo1-app spec: containers: - name: echo1-app image: mendhak/http-https-echo ports: - name: http containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: echo1-svc spec: type: NodePort selector: app: echo1-app ports: - protocol: TCP port: 8080 targetPort: 80 </code></pre> <ul> <li>I'll create one more deployment and service, this way we can try a little further with Ingress and to demonstrate what to do when you have more than one service that you need to expose on ingress</li> </ul> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: echo2-deploy spec: selector: matchLabels: app: echo2-app template: metadata: labels: app: echo2-app spec: containers: - name: echo2-app image: mendhak/http-https-echo ports: - name: http containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: echo2-svc spec: type: NodePort selector: app: echo2-app ports: - protocol: TCP port: 8080 targetPort: 80 </code></pre> <ul> <li>I'll use an ingress, just like yours, the only diferences are: <ul> <li>Changed the service to <code>echo1-svc</code> emulating your jenkins-svc</li> <li>Added another service to <code>echo2-svc</code> to redirect all http requests except the ones that match the first rule.</li> </ul></li> </ul> <pre><code>apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1 kind: Ingress metadata: name: jenkins-ing annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: util.my-app.com http: paths: - path: /jenkins/* backend: serviceName: echo1-svc servicePort: 80 - path: /(.*) backend: serviceName: echo2-svc servicePort: 80 </code></pre> <ul> <li>Now I'll deploy this apps and the ingress:</li> </ul> <pre><code>$ kubectl apply -f echo1-deploy.yaml deployment.apps/echo1-deploy created service/echo1-svc created $ kubectl apply -f echo2-deploy.yaml deployment.apps/echo2-deploy 
created service/echo2-svc created $ kubectl apply -f jenkins-ing.yaml ingress.networking.k8s.io/jenkins-ing created </code></pre> <ul> <li>Now let's check if everything is running:</li> </ul> <pre><code>$ kubectl get all NAME READY STATUS RESTARTS AGE pod/echo1-deploy-989766d57-8pmhj 1/1 Running 0 27m pod/echo2-deploy-65b6ffbcf-lfgzk 1/1 Running 0 27m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/echo1-svc NodePort 10.101.127.78 &lt;none&gt; 8080:30443/TCP 27m service/echo2-svc NodePort 10.106.34.91 &lt;none&gt; 8080:32628/TCP 27m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/echo1-deploy 1/1 1 1 27m deployment.apps/echo2-deploy 1/1 1 1 27m NAME DESIRED CURRENT READY AGE replicaset.apps/echo1-deploy-989766d57 1 1 1 27m replicaset.apps/echo2-deploy-65b6ffbcf 1 1 1 27m $ kubectl get ingress NAME HOSTS ADDRESS PORTS AGE jenkins-ing util.my-app.com 80 4s </code></pre> <ul> <li>As you can see the <code>echo1-svc</code> is exposed outside kubernetes on <code>port 30443</code> and <code>echo2-svc</code> on <code>port 32628</code></li> <li>With the ingress rule we can Curl on port 80 and it will redirect to the specified service.</li> <li>Since I don't have this domain, I'll add a record on my <code>/etc/hosts</code> file to emulate the DNS resolution directing it to my kubernetes IP.</li> </ul> <pre><code>$ cat /etc/hosts 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback ff02::1 ip6-allnodes ff02::2 ip6-allrouters 192.168.39.240 util.my-app.com $ curl util.my-app.com/jenkins { "headers": { "host": "util.my-app.com", "x-real-ip": "192.168.39.1", "x-forwarded-host": "util.my-app.com", "x-forwarded-port": "80", "x-forwarded-proto": "http", "user-agent": "curl/7.52.1", }, "method": "GET", "hostname": "util.my-app.com", "ip": "::ffff:172.17.0.6", "protocol": "http", "subdomains": [ "util" ], "os": { "hostname": "echo1-deploy-989766d57-8pmhj" } </code></pre> <p>You can see the HTTP GET was redirected to the pod on the backend of <code>echo1-svc</code> - Now let's check what happens when we curl the domain without the <code>/jenkins/</code></p> <pre><code>$ curl util.my-app.com { "headers": { "host": "util.my-app.com", "x-real-ip": "192.168.39.1", "x-forwarded-host": "util.my-app.com", "x-forwarded-port": "80", "x-forwarded-proto": "http", "user-agent": "curl/7.52.1", }, "method": "GET", "hostname": "util.my-app.com", "ip": "::ffff:172.17.0.6", "protocol": "http", "subdomains": [ "util" ], "os": { "hostname": "echo2-deploy-65b6ffbcf-lfgzk" </code></pre> <p>You can see the HTTP GET was redirected to the pod on the backend of <code>echo2-svc</code>.</p> <hr> <ul> <li>I know your question also addresses issues about your google authentication, but I'd suggest you to first correct your ingress, then you can paste here the status it will return if you still have problems with the ingress redirecting correctly.</li> </ul> <p>If you have any doubt let me know in the comments.</p>
Will R.O.F.
<p>I can list pods in my prod namespace:</p> <pre><code>kubectl get pods -n prod NAME READY STATUS RESTARTS AGE curl-pod 1/1 Running 1 (32m ago) 38m web 1/1 Running 1 (33m ago) 38m </code></pre> <p>But I get an error:</p> <pre><code>kubectl describe pods curl-pod Error from server (NotFound): pods &quot;curl-pod&quot; not found </code></pre> <p>Getting the events shows:</p> <pre><code> Normal Scheduled pod/curl-pod Successfully assigned prod/curl-pod to minikube </code></pre> <p>Why?</p>
Richard Rublev
<p>Kubernetes manages resources by namespace, so you must specify the namespace; otherwise kubectl uses the <code>default</code> namespace.<br /> So you must type:</p> <pre><code>kubectl describe pod/curl-pod -n prod </code></pre>
quoc9x
<p>I am trying to backup postgres database from RDS using K8s cronjob. I have created cronjob for it my EKS cluster and credentials are in Secrets. When Its try to copy backup fail into AWS S3 bucket pod fails with error: <strong>aws: error: argument command: Invalid choice, valid choices are:</strong> I tried different options but its not working.</p> <p>Anybody please help in resolving this issue. Here is brief info: K8s cluster is on AWS EKS Db is on RDS I am using following config for my cronjob:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: postgres-backup spec: schedule: &quot;*/3 * * * *&quot; jobTemplate: spec: backoffLimit: 0 template: spec: initContainers: - name: dump image: postgres:12.1-alpine volumeMounts: - name: data mountPath: /backup args: - pg_dump - &quot;-Fc&quot; - &quot;-f&quot; - &quot;/backup/redash-postgres.pgdump&quot; - &quot;-Z&quot; - &quot;9&quot; - &quot;-v&quot; - &quot;-h&quot; - &quot;postgress.123456789.us-east-2.rds.amazonaws.com&quot; - &quot;-U&quot; - &quot;postgress&quot; - &quot;-d&quot; - &quot;postgress&quot; env: - name: PGPASSWORD valueFrom: secretKeyRef: # Retrieve postgres password from a secret name: postgres key: POSTGRES_PASSWORD containers: - name: save image: amazon/aws-cli volumeMounts: - name: data mountPath: /backup args: - aws - &quot;--version&quot; envFrom: - secretRef: # Must contain AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION name: s3-backup-credentials restartPolicy: Never volumes: - name: data emptyDir: {} </code></pre>
Bilal Abbasi
<p>Try this:</p> <pre><code>... containers: - name: save image: amazon/aws-cli ... args: - &quot;--version&quot; # &lt;-- the image entrypoint already calls &quot;aws&quot;, you only need to specify the arguments here. ... </code></pre>
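<p>In other words, because the image's entrypoint is already <code>aws</code>, passing <code>aws</code> again as the first argument makes the CLI treat it as a subcommand, which is what produces the <em>Invalid choice</em> error. Once that is verified, the <code>save</code> container that actually uploads the dump would look something like this (a sketch; the bucket name and object key are placeholders):</p> <pre><code># Sketch of the save container only -- the rest of the CronJob stays as is.
containers:
  - name: save
    image: amazon/aws-cli
    volumeMounts:
      - name: data
        mountPath: /backup
    args:                # entrypoint is already aws, so start with the subcommand
      - s3
      - cp
      - /backup/redash-postgres.pgdump
      - s3://&lt;your-backup-bucket&gt;/redash-postgres.pgdump
    envFrom:
      - secretRef:
          name: s3-backup-credentials
</code></pre>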
gohm'c
<p>I am using Airflow2.0 with <code>KubernetesPodOperator</code> want to run a command that use as a parameter a file from inside the image ran by the Operator. This is what I used:</p> <pre><code>KubernetesPodOperator( namespace=commons.kubernetes_namespace, labels=commons.labels, image=f&quot;myregistry.io/myimage:{config['IMAGE_TAG']}&quot;, arguments=[ &quot;python&quot;, &quot;run_module.py &quot;, &quot;-i&quot;, f'args/{config[&quot;INPUT_DIR&quot;]}/{task_id}.json' ], name=dag_name + task_id, task_id=task_id, secrets=[secret_volume] ) </code></pre> <p>But this gives me the error:</p> <pre><code>raise TemplateNotFound(template) jinja2.exceptions.TemplateNotFound: args/airflow2test/processing-pipeline.json </code></pre> <p>The image does not use any macros.</p> <p>Anyone has any clue? What do I do wrong?</p>
salvob
<p>This was a bug that started with a <a href="https://github.com/apache/airflow/pull/15942" rel="nofollow noreferrer">PR</a> released in version <code>2.0.0</code> of <a href="https://pypi.org/project/apache-airflow-providers-cncf-kubernetes/" rel="nofollow noreferrer"><code>apache-airflow-providers-cncf-kubernetes</code></a>. The goal of the change was to allow templating of <code>.json</code> files. There was a <a href="https://github.com/apache/airflow/issues/16922" rel="nofollow noreferrer">GitHub issue</a> about the problems it created. The bug was eventually resolved by a <a href="https://github.com/apache/airflow/pull/17760" rel="nofollow noreferrer">PR</a> which was released in version 2.0.2 of the provider.</p> <p>Solution:</p> <ol> <li>Upgrade to the latest <code>apache-airflow-providers-cncf-kubernetes</code> (currently 2.0.2).</li> <li>If upgrading is not an option, use a custom <code>KubernetesPodOperator</code>.</li> </ol> <p>There are two ways to work around the problem: one is to change <code>template_fields</code>, the other is to change <code>template_ext</code>:</p> <p>1st option, as posted on the <a href="https://github.com/apache/airflow/issues/16922#issuecomment-892153152" rel="nofollow noreferrer">issue</a> by raphaelauv: do not allow rendering of the <code>arguments</code> field:</p> <pre><code>class MyKubernetesPodOperator(KubernetesPodOperator): template_fields = tuple(x for x in KubernetesPodOperator.template_fields if x != &quot;arguments&quot;) </code></pre> <p>2nd option: if you prefer not to render <code>.json</code> files at all:</p> <pre><code>class MyKubernetesPodOperator(KubernetesPodOperator): template_ext = ('.yaml', '.yml',) </code></pre>
Elad Kalif
<p>Can I run a Job and a Deployment in a single config file/action, where the Deployment waits for the Job to finish and checks that it was successful before continuing with the deployment? </p>
Igor Igeto Mitkovski
<p>Based on the information you provided I believe you can achieve your goal using a Kubernetes feature called <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">InitContainer</a>:</p> <blockquote> <p>Init containers are exactly like regular containers, except:</p> <ul> <li>Init containers always run to completion.</li> <li>Each init container must complete successfully before the next one starts.</li> </ul> <p>If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. However, if the Pod has a <code>restartPolicy</code> of Never, Kubernetes does not restart the Pod.</p> </blockquote> <ul> <li>I'll create a <code>initContainer</code> with a <code>busybox</code> to run a command linux to wait for the service <code>mydb</code> to be running before proceeding with the deployment.</li> </ul> <p><strong>Steps to Reproduce:</strong> - Create a Deployment with an <code>initContainer</code> which will run the job that needs to be completed before doing the deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: run: my-app name: my-app spec: replicas: 2 selector: matchLabels: run: my-app template: metadata: labels: run: my-app spec: restartPolicy: Always containers: - name: myapp-container image: busybox:1.28 command: ['sh', '-c', 'echo The app is running! &amp;&amp; sleep 3600'] initContainers: - name: init-mydb image: busybox:1.28 command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"] </code></pre> <p>Many kinds of commands can be used in this field, you just have to select a docker image that contains the binary you need (including your <code>sequelize</code> job)</p> <ul> <li>Now let's apply it see the status of the deployment:</li> </ul> <pre><code>$ kubectl apply -f my-app.yaml deployment.apps/my-app created $ kubectl get pods NAME READY STATUS RESTARTS AGE my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 4s my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 4s </code></pre> <p>The pods are hold on <code>Init:0/1</code> status waiting for the completion of the init container. - Now let's create the service which the initcontainer is waiting to be running before completing his task:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377 </code></pre> <ul> <li>We will apply it and monitor the changes in the pods:</li> </ul> <pre><code>$ kubectl apply -f mydb-svc.yaml service/mydb created $ kubectl get pods -w NAME READY STATUS RESTARTS AGE my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 91s my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 91s my-app-6b4fb4958f-s7wmr 0/1 PodInitializing 0 93s my-app-6b4fb4958f-44ds7 0/1 PodInitializing 0 94s my-app-6b4fb4958f-s7wmr 1/1 Running 0 94s my-app-6b4fb4958f-44ds7 1/1 Running 0 95s ^C $ kubectl get all NAME READY STATUS RESTARTS AGE pod/my-app-6b4fb4958f-44ds7 1/1 Running 0 99s pod/my-app-6b4fb4958f-s7wmr 1/1 Running 0 99s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mydb ClusterIP 10.100.106.67 &lt;none&gt; 80/TCP 14s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/my-app 2/2 2 2 99s NAME DESIRED CURRENT READY AGE replicaset.apps/my-app-6b4fb4958f 2 2 2 99s </code></pre> <p>If you need help to apply this to your environment let me know.</p>
Will R.O.F.
<p>We are considering using HPA to scale the number of pods in our cluster. This is what a typical HPA object looks like:</p> <pre><code>apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hpa-demo namespace: default spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hpa-deployment minReplicas: 1 maxReplicas: 10 targetCPUUtilizationPercentage: 20 </code></pre> <p>My question is - can we have multiple targets (scaleTargetRef) for HPA? Or does each deployment/RS/SS/etc. have to have its own HPA?</p> <p>I tried to look into the K8s docs, but could not find any info on this. Any help appreciated, thanks.</p> <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis</a></p> <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p>
Yogesh Gupta
<p><code>Can we have multiple targets (scaleTargetRef) for HPA ?</code></p> <p>No. One <code>HorizontalPodAutoscaler</code> has exactly one <code>scaleTargetRef</code>, which refers to a single resource only, so each Deployment/ReplicaSet/StatefulSet you want to autoscale needs its own HPA.</p>
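<p>For example, to autoscale a second workload you would simply create another HPA alongside the one from the question (the target name here is a placeholder):</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo-2
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: another-deployment   # each HPA points at exactly one workload
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20
</code></pre>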
gohm'c
<p>I have created a very simple spring boot application with only one REST service. This app is converted into a docker image (&quot;springdockerimage:1&quot;) and deployed in the Kubernetes cluster with 3 replicas. Contents of my &quot;Deployment&quot; definition is as follows:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: springapp labels: app: distributiondemo spec: selector: matchLabels: app: distributiondemo replicas: 3 template: metadata: labels: app: distributiondemo spec: containers: - name: spring-container image: springdockerimage:1 </code></pre> <p>I have created service for my above deployment as follows:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: springservice labels: app: distributiondemo spec: selector: app: distributiondemo ports: - port: 8080 protocol: TCP targetPort: 8080 name: spring-port nodePort: 32000 type: NodePort </code></pre> <p>After deploying both the above YAML(deployment and service) files, I noticed that everything has been deployed as expected i.e., 3 replicas are created and my service is having 3 endpoints as well. Below screenshot is the proof of the same: <a href="https://i.stack.imgur.com/p7dgY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p7dgY.png" alt="enter image description here" /></a></p> <p>Since I am using minikube for my local testing, I am port forwarding and accessing the application as <strong>kubectl port-forward deployment.apps/springapp 40002:8080</strong> .</p> <p>But one thing I noticed is that all my HTTP requests are getting redirected to only one pod.</p> <pre><code>while true ; do curl http://localhost:40002/docker-java-app/test ;done </code></pre> <p>I am not getting where exactly I am doing it wrong. Any help would be appreciated. Thank you.</p>
Hemanth H L
<p>Load balancing does not work with port-forwarded ports, because <code>kubectl port-forward</code> sends traffic directly to a single pod (read more <a href="https://stackoverflow.com/questions/59940833/does-kubectl-port-forward-ignore-loadbalance-services">here</a>). The K8s Service is the feature that gives you that load-balancing capability.</p> <p>So you can try any of the options below instead:</p> <ol> <li>Use <code>http://your_service_dns_name:8080/docker-java-app/test</code></li> <li>Use <code>http://service_cluster_ip:8080/docker-java-app/test</code></li> <li>Use <code>http://any_host_ip_from_k8s_cluster:32000/docker-java-app/test</code></li> </ol> <p>Options 1 and 2 work only if you are accessing those URLs from a host that is part of the K8s cluster. Option 3 just needs connectivity to the target host and port from wherever you access the URL.</p>
Syam Sankar
<p>I am trying to set up Google Kubernetes Engine, and its pods have to communicate with a Cloud SQL database. The Cloud SQL database credentials are stored in Google Cloud Secret Manager. How will the pods fetch credentials from Secret Manager, and if the Secret Manager credentials are updated, how will the pods get the new secret?</p> <p>How can I set up the above requirement? Can someone please help with this?</p> <p>Thanks, Anand</p>
Anandharaj R
<p>You can find information regarding that particular solution in this <a href="https://cloud.google.com/secret-manager/docs/using-other-products#google-kubernetes-engine" rel="nofollow noreferrer">doc</a>.<br /> There are also good examples on medium <a href="https://medium.com/google-cloud/consuming-google-secret-manager-secrets-in-gke-911523207a79" rel="nofollow noreferrer">here</a> and <a href="https://alessio-trivisonno.medium.com/injecting-secrets-in-gke-with-secret-manager-fd961bbeea73" rel="nofollow noreferrer">here</a>.</p> <p>To answer your question regarding updating the secrets:<br /> Usually secrets are pulled when the container is being created, but if you expect the credentials to change often (or for the pods to stick around for very long) you can adjust the code to update the secrets on every execution.</p>
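<p>As a concrete illustration of the approach in the first doc (a sketch only, assuming the Secrets Store CSI driver with the Google Secret Manager provider is installed and Workload Identity gives the pod's service account access to the secret; all names and the project ID are placeholders):</p> <pre><code># Sketch: expose a Secret Manager secret to the pod as a mounted file.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: cloudsql-credentials
spec:
  provider: gcp
  parameters:
    secrets: |
      - resourceName: projects/&lt;project-id&gt;/secrets/&lt;db-password-secret&gt;/versions/latest
        path: db-password
</code></pre> <p>The pod then mounts a CSI volume with <code>driver: secrets-store.csi.k8s.io</code> and <code>secretProviderClass: cloudsql-credentials</code>, and the application reads the credential from the mounted file. Whether the mounted value is refreshed after the secret is updated depends on the driver's rotation settings, so as noted above you may still need to re-read the file or restart the pod on change.</p>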
Sergiusz
<p>I have a pod in which I'm running a image. The pod is not mine but belongs to the company I work for. Each time I mount a new image in the pod it has access to some predefined &quot;Permanent&quot; folders.</p> <p>When I use the edit deployment command I see this:</p> <pre><code> volumeMounts: - mountPath: /Data/logs name: ba-server-api-dh-pvc subPath: logs - mountPath: /Data/ErrorAndAbortedBlobs name: ba-server-api-dh-pvc subPath: ErrorAndAbortedBlobs - mountPath: /Data/SuccessfullyTransferredBlobs name: ba-server-api-dh-pvc subPath: SuccessfullyTransferredBlobs - mountPath: /Data/BlobsToBeTransferred name: ba-server-api-dh-pvc subPath: BlobsToBeTransferred </code></pre> <p>Now I want to manually add another such mountPath so I get another folder in the pod. But when I add it to the deployment config (the one above) and try saving it I get the following error.</p> <pre><code>&quot;error: deployments.extensions &quot;ba-server-api-dh-deployment&quot; is invalid&quot; </code></pre> <p>What can I do to add another permanent folder to the POD?</p> <p>kind regards</p>
Mr.Gomer
<p>It looks like you haven't specified the volume in the pod spec.</p> <p>Something like this:</p> <pre><code>...
    volumeMounts:
      - mountPath: /Data/BlobsToBeTransferred
        name: ba-server-api-dh-pvc
        subPath: BlobsToBeTransferred
...
  volumes:
    - name: ba-server-api-dh-pvc
      persistentVolumeClaim:
        claimName: ba-server-api-dh-pvc
</code></pre> <p>Note that this assumes you already have a PersistentVolumeClaim named <code>ba-server-api-dh-pvc</code>; otherwise you will have to create it.</p>
quoc9x
<p>I'm using the following command to check if the namespace is active</p> <pre><code>kubectl wait --for=condition=items.status.phase=Active namespace/mynamespace --timeout=2s </code></pre> <p>This always returns &quot;error: timed out waiting for the condition on namespaces/mynamespace&quot; although the namespace is active. Is there a correct way to wait for the namespace to be active? This script is part of a job to check the namespace is active after a AKS cluster restart.</p>
Rajesh Kazhankodath
<p>To date, <code>status.phase</code> is not a recognized <code>condition</code>: <code>kubectl wait --for=condition=...</code> only matches entries in <code>status.conditions</code>, and a namespace has no &quot;Active&quot; condition. Try polling the phase instead:</p> <p><code>while ! [ &quot;$(kubectl get ns &lt;change to your namespace&gt; -o jsonpath='{.status.phase}')&quot; == &quot;Active&quot; ]; do echo 'Waiting for namespace to come online. CTRL-C to exit.'; sleep 1; done</code></p>
gohm'c
<p>I'm converting volume gp2 to volume gp3 for EKS but getting this error.<br /> <em><strong>Failed to provision volume with StorageClass &quot;gp3&quot;: invalid AWS VolumeType &quot;gp3&quot;</strong></em><br /> This is my config.</p> <p>StorageClass</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: &quot;true&quot; name: gp3 parameters: fsType: ext4 type: gp3 provisioner: kubernetes.io/aws-ebs reclaimPolicy: Retain allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer </code></pre> <p>PVC</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: labels: app: test-pvc name: test-pvc namespace: default spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3 </code></pre> <p>When I type <code>kubectl describe pvc/test</code>. This is response:</p> <pre><code>Name: test-pvc Namespace: default StorageClass: gp3 Status: Pending Volume: Labels: app=test-pvc Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Used By: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 58s (x9 over 4m35s) persistentvolume-controller Failed to provision volume with StorageClass &quot;gp3&quot;: invalid AWS VolumeType &quot;gp3&quot; </code></pre> <p>I'm using Kubernetes version 1.18.<br /> Can someone help me. Thanks!</p>
quoc9x
<p>I found the solution to use volume <code>gp3</code> in storage class on EKS.</p> <ol> <li>First, you need to install <code>Amazon EBS CSI driver</code> with offical instruction <a href="https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html" rel="nofollow noreferrer">here</a>.</li> <li>The next, you need to create the storage class <code>ebs-sc</code> after <code>Amazon EBS CSI driver</code> is installed, example:</li> </ol> <hr /> <pre><code>cat &lt;&lt; EOF | kubectl apply -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ebs-sc provisioner: ebs.csi.aws.com parameters: type: gp3 reclaimPolicy: Retain volumeBindingMode: WaitForFirstConsumer EOF </code></pre> <p>So, you can use volume <code>gp3</code> in storage class on EKS.<br /> You can check by deploying resources:</p> <pre><code>cat &lt;&lt; EOF | kubectl apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ebs-gp3-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ebs-sc --- apiVersion: v1 kind: Pod metadata: name: app-gp3-in-tree spec: containers: - name: app image: nginx volumeMounts: - name: persistent-storage mountPath: /usr/share/nginx/html volumes: - name: persistent-storage persistentVolumeClaim: claimName: ebs-gp3-claim EOF </code></pre> <p>Detailed documentation on Migrating Amazon EKS clusters from gp2 to gp3 EBS volumes: <a href="https://aws.amazon.com/vi/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/" rel="nofollow noreferrer">https://aws.amazon.com/vi/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/</a></p> <p>References: <a href="https://stackoverflow.com/questions/69290796/persistent-storage-in-eks-failing-to-provision-volume">Persistent Storage in EKS failing to provision volume</a></p>
quoc9x
<p>I've installed, Docker, Kubectl and kubeAdm. I want to create my device model and device CRDs (I'm following this <a href="https://docs.kubeedge.io/en/latest/setup/setup.html" rel="noreferrer">guide</a>. So, when I run the command :</p> <pre><code>kubectl create -f devices_v1alpha1_devicemodel.yaml </code></pre> <p>as a user I get the following out:</p> <pre><code>The connection to the server 10.0.0.68:6443 was refused - did you specify the right host or port? </code></pre> <p>(I have added the permission for the user to access the .kube folder)</p> <p>With netstat, I get :</p> <pre><code>&gt; ubuntu@kubernetesmaster:~/src/github.com/kubeedge/kubeedge/build/crds/devices$ &gt; sudo netstat -atunp Active Internet connections (servers and &gt; established) Proto &gt; Recv-Q Send-Q Local Address Foreign Address State &gt; PID/Program name tcp 0 0 0.0.0.0:22 &gt; 0.0.0.0:* LISTEN 1298/sshd tcp 0 224 10.0.0.68:22 160.98.31.160:52503 ESTABLISHED &gt; 2061/sshd: ubuntu [ tcp6 0 0 :::22 :::* &gt; LISTEN 1298/sshd udp 0 0 0.0.0.0:68 &gt; 0.0.0.0:* 910/dhclient udp 0 0 10.0.0.68:123 0.0.0.0:* &gt; 1241/ntpd udp 0 0 127.0.0.1:123 &gt; 0.0.0.0:* 1241/ntpd udp 0 0 0.0.0.0:123 0.0.0.0:* &gt; 1241/ntpd udp6 0 0 fe80::f816:3eff:fe0:123 :::* &gt; 1241/ntpd udp6 0 0 2001:620:5ca1:2f0:f:123 :::* &gt; 1241/ntpd udp6 0 0 ::1:123 :::* &gt; 1241/ntpd udp6 0 0 :::123 :::* &gt; 1241/ntpd </code></pre> <p>With lsof -i :</p> <pre><code>ubuntu@kubernetesmaster:~/src/github.com/kubeedge/kubeedge/build/crds/devices$ sudo lsof -i COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME dhclient 910 root 6u IPv4 12765 0t0 UDP *:bootpc ntpd 1241 ntp 16u IPv6 15340 0t0 UDP *:ntp ntpd 1241 ntp 17u IPv4 15343 0t0 UDP *:ntp ntpd 1241 ntp 18u IPv4 15347 0t0 UDP localhost:ntp ntpd 1241 ntp 19u IPv4 15349 0t0 UDP 10.0.0.68:ntp ntpd 1241 ntp 20u IPv6 15351 0t0 UDP ip6-localhost:ntp ntpd 1241 ntp 21u IPv6 15353 0t0 UDP [2001:620:5ca1:2f0:f816:3eff:fe0a:874a]:ntp ntpd 1241 ntp 22u IPv6 15355 0t0 UDP [fe80::f816:3eff:fe0a:874a]:ntp sshd 1298 root 3u IPv4 18821 0t0 TCP *:ssh (LISTEN) sshd 1298 root 4u IPv6 18830 0t0 TCP *:ssh (LISTEN) sshd 2061 root 3u IPv4 18936 0t0 TCP 10.0.0.68:ssh-&gt;160.98.31.160:52503 (ESTABLISHED) sshd 2124 ubuntu 3u IPv4 18936 0t0 TCP 10.0.0.68:ssh-&gt;160.98.31.160:52503 (ESTABLISHED) </code></pre> <p>I've already tried <a href="https://stackoverflow.com/questions/51121136/the-connection-to-the-server-localhost8080-was-refused-did-you-specify-the-ri">this</a> and:<code>sudo swapoff -a</code></p>
Warok
<p>I faced a similar problem, with the following error while deploying the pod network into a cluster using flannel:</p> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml The connection to the server 192.168.1.101:6443 was refused - did you specify the right host or port? </code></pre> <p>I performed the steps below to solve the issue:</p> <pre><code>$ sudo systemctl stop kubelet $ sudo systemctl start kubelet $ strace -eopenat kubectl version </code></pre> <p>and then applied the yml file again:</p> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml podsecuritypolicy.policy/psp.flannel.unprivileged created clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.apps/kube-flannel-ds created </code></pre>
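<p>If restarting the kubelet alone does not help, it is worth checking whether the kubelet and the API server are actually healthy before re-applying the manifest. A few generic diagnostics, assuming a kubeadm-style control plane with Docker as the container runtime:</p> <pre><code># is the kubelet running, and what is it complaining about?
sudo systemctl status kubelet
sudo journalctl -u kubelet -n 100 --no-pager

# is the kube-apiserver container up on the control-plane node?
sudo docker ps | grep kube-apiserver

# swap must stay off, otherwise the kubelet refuses to start
sudo swapoff -a
</code></pre>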
Prasetyo
<p>Running Spring Boot 2.6.6 and Spring Cloud 2021.0.1</p> <p>I'm attempting to migrate an existing service to Kubernetes so I added a dependency on <code>spring-cloud-starter-kubernetes-client-all</code>. By default, I have <code>spring.cloud.kubernetes.enable=false</code> and use the <code>kubernetes</code> profile to enable it. This is intended to allow this service to operate in both Kubernetes and the legacy environment.</p> <p>My unit-tests complete successfully when building locally but fail in my Bitbucket pipeline with the following error:</p> <pre><code>java.lang.IllegalStateException: Failed to load ApplicationContext Caused by: org.springframework.cloud.kubernetes.commons.config.NamespaceResolutionFailedException: unresolved namespace </code></pre> <p>I suspect this occurs because Bitbucket Pipelines are deployed in Kubernetes and Spring somehow detects this. I have tried the following to no avail</p> <ul> <li>Pass <code>--define SPRING_CLOUD_KUBERNETES_ENABLED=false</code> to Maven on the command line</li> <li>Set this as an environment variable e.g., <code>export SPRING_CLOUD_KUBERNETES_ENABLED=false</code></li> <li>Pass <code>--define spring.cloud.kubernetes.enabled=false</code> to Maven on the command line</li> </ul> <p>I have also checked StackOverflow for similar issues and investigated the code also without avail. The class that is actually raising the issue is <code>KubernetesClientConfigUtils</code>, which should be disabled.</p> <p>I would appreciate any guidance you can provide.</p>
Faron
<p>Spring Cloud checks whether the application is running in a K8s environment before loading the active spring profile configuration and adds <code>kubernetes</code> to the active profiles. Previously, in Hoxton SR10, the profile was identified and <code>bootstrap-&lt;profile&gt;.yml</code> loaded before checking for Kubernetes. <code>spring.cloud.kubernetes.enabled</code> was picked up from there if set in the profile configuration or the maven pom properties.</p> <p>As maven allows setting system properties on the command line, kubernetes detection can be disabled by setting it there:</p> <pre><code>mvn test -Dspring.cloud.kubernetes.enabled=false </code></pre> <p>The surefire maven plugin allows setting system properties for all tests, so it's possible to set <code>spring.cloud.kubernetes.enabled</code> to be <code>false</code> in the surefire plugin configuration.</p> <pre><code>&lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-surefire-plugin&lt;/artifactId&gt; &lt;configuration&gt; &lt;systemPropertyVariables&gt; &lt;spring.cloud.kubernetes.enabled&gt;false&lt;/spring.cloud.kubernetes.enabled&gt; &lt;/systemPropertyVariables&gt; &lt;/configuration&gt; &lt;/plugin&gt; </code></pre> <p>It is also possible to set the configuration on individual test classes using @Faron's approach to explicitly set the property in any <code>WebMvcTest</code> annotated unit test, e.g.:</p> <pre><code>@WebMvcTest(properties = { &quot;spring.cloud.kubernetes.enabled=false&quot; }) </code></pre> <p>It should also work on other unit test annotation that loads a Spring application context, such as <code>WebFluxTest</code>.</p>
Simon Hogg
<p>I'm trying to deploy and run a simple PHP application that will only show a <code>Hello World</code> message through my Kubernetes cluster which is only a master node cluster, unfortunately, I can't do that.</p> <p>I'm describing my project structure - I have a root project directory called <code>kubernetes-test</code> and under that directory, I have 3 <code>yaml</code> files and one directory called <code>code</code> under that directory I have a PHP file called <code>index.php</code></p> <p><strong>hello-world-service.yaml:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: tier: backend spec: selector: app: nginx tier: backend type: NodePort ports: - nodePort: 30500 port: 80 targetPort: 80 </code></pre> <p><strong>nginx-deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx labels: tier: backend spec: replicas: 1 selector: matchLabels: app: nginx tier: backend template: metadata: labels: app: nginx tier: backend spec: volumes: - name: code hostPath: path: /code - name: config configMap: name: nginx-config items: - key: config path: site.conf containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 volumeMounts: - name: code mountPath: /var/www/html - name: config mountPath: /etc/nginx/conf.d </code></pre> <p><strong>php-deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: php labels: tier: backend spec: replicas: 1 selector: matchLabels: app: php tier: backend template: metadata: labels: app: php tier: backend spec: volumes: - name: code hostPath: path: /code containers: - name: php image: php:7-fpm volumeMounts: - name: code mountPath: /var/www/html </code></pre> <p><strong>code/index.php</strong></p> <pre><code>&lt;?php echo 'Hello World'; </code></pre> <p>Above all those things I've found through the internet.</p> <p>When I ran this command <code>kubectl get pods</code> then the status is showing <code>ContainerCreating</code> forever for the Nginx deployment like this</p> <pre><code>NAME READY STATUS RESTARTS AGE nginx-64c9df788f-jxwzx 0/1 ContainerCreating 0 12h php-55f974bb4-qvv9x 1/1 Running 0 25s </code></pre> <p><strong>Command:</strong> <code>kubectl describe pod nginx-64c9df788f-jxwzx</code></p> <p><strong>Output:</strong></p> <pre><code>Name: nginx-64c9df788f-jxwzx Namespace: default Priority: 0 Node: bablu-node/192.168.43.123 Start Time: Mon, 11 May 2020 03:20:58 +0600 Labels: app=nginx pod-template-hash=64c9df788f tier=backend Annotations: &lt;none&gt; Status: Pending IP: IPs: &lt;none&gt; Controlled By: ReplicaSet/nginx-64c9df788f Containers: nginx: Container ID: Image: nginx Image ID: Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /etc/nginx/conf.d from config (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-l2zp2 (ro) /var/www/html from code (rw) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: code: Type: HostPath (bare host directory volume) Path: /code HostPathType: config: Type: ConfigMap (a volume populated by a ConfigMap) Name: nginx-config Optional: false default-token-l2zp2: Type: Secret (a volume populated by a Secret) SecretName: default-token-l2zp2 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- 
------ ---- ---- ------- Warning FailedMount 31m (x14 over 147m) kubelet, bablu-node Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[default-token-l2zp2 code config]: timed out waiting for the condition Warning FailedMount 16m (x82 over 167m) kubelet, bablu-node MountVolume.SetUp failed for volume "config" : configmap "nginx-config" not found Warning FailedMount 6m53s (x44 over 165m) kubelet, bablu-node Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[code config default-token-l2zp2]: timed out waiting for the condition Warning FailedMount 2m23s (x10 over 163m) kubelet, bablu-node Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[config default-token-l2zp2 code]: timed out waiting for the condition </code></pre> <p>Command: <code>kubectl get events -n default</code></p> <p><strong>Output:</strong></p> <pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE 18m Warning FailedMount pod/nginx-64c9df788f-jxwzx MountVolume.SetUp failed for volume "config" : configmap "nginx-config" not found 8m45s Warning FailedMount pod/nginx-64c9df788f-jxwzx Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[code config default-token-l2zp2]: timed out waiting for the condition 4m15s Warning FailedMount pod/nginx-64c9df788f-jxwzx Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[config default-token-l2zp2 code]: timed out waiting for the condition 33m Warning FailedMount pod/nginx-64c9df788f-jxwzx Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[default-token-l2zp2 code config]: timed out waiting for the condition 18m Normal Scheduled pod/php-55f974bb4-qvv9x Successfully assigned default/php-55f974bb4-qvv9x to bablu-node 18m Normal Pulled pod/php-55f974bb4-qvv9x Container image "php:7-fpm" already present on machine 18m Normal Created pod/php-55f974bb4-qvv9x Created container php 18m Normal Started pod/php-55f974bb4-qvv9x Started container php 18m Normal SuccessfulCreate replicaset/php-55f974bb4 Created pod: php-55f974bb4-qvv9x 18m Normal ScalingReplicaSet deployment/php Scaled up replica set php-55f974bb4 to 1 </code></pre> <p>Can anyone please help me? Thanks in advance!!</p>
Bablu Ahmed
<p>I ran your environment and here are the main issues I found:</p> <ul> <li>First of all, you don't have the nginx-config deployed, but this would be only your first issue and easily addressable (more on the example below).</li> <li>The second (and on my opinion the main) issue, is the usage of <code>HostPath</code>: <ul> <li>As I explained <a href="https://stackoverflow.com/questions/61853851/the-mountpath-should-be-absolute-in-kubernetes-is-it/61872581#61872581">here</a> <code>HostPath</code> requires that the container process run as root.</li> <li>php-fpm runs as <code>www-data</code> therefore he cannot use the mount files at <code>/code</code> if this folder is mounted through <code>hostPath</code>.</li> </ul></li> </ul> <p>From here our options now are:</p> <ul> <li>Bake the php file inside the image (or as a configmap) and run both nginx and php in the same pod (sharing an emptydir folder), more about this process in this guide: <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/php-fpm-nginx-kubernetes.html" rel="nofollow noreferrer">PHP-FPM, Nginx, Kubernetes, and Docker</a> - while in one hand it involves creating a new Docker image, on the other hand it spare you of configuring a storage provisioner if you don't already have one.</li> <li>Use an external storage to mount the file in a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volume</a> downloading the php file from an external repository. This approach requires nginx and php to run on the same node - because storage is RWO, meaning that can be mount as read-write on only one node. Since your setup is on a single node, I'll use this approach on this example.</li> </ul> <hr> <p>I tried to reproduce as close as your example, but I had to do some changes. Here are the files:</p> <ul> <li><code>cm-nginx.yaml</code>:</li> </ul> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nginx-config labels: tier: backend data: config : | server { index index.php index.html; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; root /code; location / { try_files $uri $uri/ /index.php?$query_string; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass php:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } } </code></pre> <ul> <li><code>root /code</code> is pointing the directory where to look for <code>index.php</code></li> <li><code>fastcgi_pass php:9000</code> pointing to the service called <code>php</code> service listening on <code>port 9000</code>.</li> </ul> <hr> <ul> <li>Storage:</li> </ul> <p>This is mutable depending on the storage type you are using. Minikube comes with storage provider and storageclass configured out of the box. And although minikube storage provider is called <code>minikube-hostpath</code> it's a CSI that will not require root access on container level to run.</p> <ul> <li>That being said, here is the pvc.yaml:</li> </ul> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: code spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: standard </code></pre> <p>Note that <code>standard</code> is the name of the dynamic storage provider built in minikube. 
What we are doing here is to create a PVC called <code>code</code> for our app to run.</p> <ul> <li><code>php.yaml</code>:</li> </ul> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: php labels: tier: backend spec: replicas: 1 selector: matchLabels: app: php tier: backend template: metadata: labels: app: php tier: backend spec: volumes: - name: code persistentVolumeClaim: claimName: code containers: - name: php image: php:7-fpm volumeMounts: - name: code mountPath: /code initContainers: - name: install image: busybox volumeMounts: - name: code mountPath: /code command: - wget - "-O" - "/code/index.php" - https://raw.githubusercontent.com/videofalls/demo/master/index.php </code></pre> <ul> <li><p>here we are using a <code>busybox</code> initContainer to wget this php file (which is identical to the one you are using) and save it inside the mounted volume <code>/code</code>.</p></li> <li><p>PHP service <code>svc-php.yaml</code>:</p></li> </ul> <pre><code>apiVersion: v1 kind: Service metadata: name: php labels: tier: backend spec: selector: app: php tier: backend ports: - protocol: TCP port: 9000 </code></pre> <ul> <li>The Nginx deployment <code>nginx.yaml</code>:</li> </ul> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx labels: tier: backend spec: replicas: 1 selector: matchLabels: app: nginx tier: backend template: metadata: labels: app: nginx tier: backend spec: volumes: - name: code persistentVolumeClaim: claimName: code - name: config configMap: name: nginx-config items: - key: config path: site.conf containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 volumeMounts: - name: code mountPath: /code - name: config mountPath: /etc/nginx/conf.d </code></pre> <p>The key points here, is the mount of the PVC called <code>code</code> on <code>mountPath</code> <code>/code</code> and the configmap we created being monted as a file called <code>site.conf</code> inside the folder <code>/etc/nginx/conf.d</code></p> <ul> <li>The Nginx service <code>svc-nginx.yaml</code>:</li> </ul> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: tier: backend spec: type: NodePort selector: app: nginx tier: backend ports: - protocol: TCP port: 80 </code></pre> <p>I'm using NodePort to ease the output test.</p> <hr> <p><strong>Reproduction:</strong></p> <ul> <li>Let's create the files: first the <code>configmap</code> and <code>pvc</code>, since they are required for the pods to start correctly, then the services and deployments:</li> </ul> <pre><code>$ ls cm-nginx.yaml nginx.yaml php.yaml pvc.yaml svc-nginx.yaml svc-php.yaml $ kubectl apply -f cm-nginx.yaml configmap/nginx-config created $ kubectl apply -f pvc.yaml persistentvolumeclaim/code created $ kubectl get cm NAME DATA AGE nginx-config 1 52s $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE code Bound pvc-b63559a0-a306-46f2-942b-0a063bc4ab6b 1Gi RWO standard 17s $ kubectl apply -f svc-php.yaml service/php created $ kubectl apply -f svc-nginx.yaml service/nginx created $ kubectl apply -f php.yaml deployment.apps/php created $ kubectl get pods NAME READY STATUS RESTARTS AGE php-69d5c956ff-8tjfn 1/1 Running 0 5s $ kubectl apply -f nginx.yaml deployment.apps/nginx created $ kubectl get pods NAME READY STATUS RESTARTS AGE nginx-6854dcb7db-75zxt 1/1 Running 0 4s php-69d5c956ff-8tjfn 1/1 Running 0 22s $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx NodePort 10.107.16.212 &lt;none&gt; 80:31017/TCP 41s php ClusterIP 10.97.237.214 &lt;none&gt; 
9000/TCP 44s $ minikube service nginx --url http://172.17.0.2:31017 $ curl -i http://172.17.0.2:31017 HTTP/1.1 200 OK Server: nginx/1.7.9 Date: Thu, 28 May 2020 19:04:48 GMT Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: keep-alive X-Powered-By: PHP/7.4.6 Demo Test </code></pre> <p>Here we can see the <code>curl</code> returned <code>200 OK</code> from nginx server, powered by PHP 7 and the content of the <code>index.php</code> file.</p> <p>I hope it helps you have a clearer understanding of this scenario.</p> <p>If you have any question, let me know in the comments.</p>
Will R.O.F.
<p>When we run the below command from root user in kubernetes master node:</p> <ul> <li>kubectl create deployment nginx --image=nginxD</li> </ul> <p>on which path the yaml file gets stored ?</p> <ul> <li>kubectl get deployment nginx -o yaml</li> </ul> <p>from which path it provides us the yaml body ?</p>
user3839347
<p>Raw k8s stores everything within etcd. When you run commands like <code>kubectl get deployment nginx -o yaml</code>, kubectl talks to the kube-apiserver, which reads the object from etcd and renders it as YAML for you. In other words, there is no YAML file stored on a filesystem path for resources created imperatively.</p> <p>etcd is a key-value store, so any <code>kubectl get XYZ</code> is reading a specific key, and any <code>kubectl create XYZ</code> is creating a new key/value within etcd.</p> <p>Because of the importance of etcd within k8s, it is heavily recommended you back it up in production environments.</p> <p>The components and how they talk to each other are described here: <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/components/</a></p>
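<p>If you are curious, you can look at the raw key directly with <code>etcdctl</code>. A minimal sketch, assuming a kubeadm-provisioned control plane with the default certificate paths; note the stored value is protobuf-encoded, not YAML:</p> <pre><code>ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/deployments/default/nginx --prefix --keys-only
</code></pre>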
Richard Kostyn
<p>I have 2 microservices running in a Kubernetes cluster on Azure. The &quot;Project&quot;-microservice sends out an event to Kafka when a project has been updated. The &quot;Analytics&quot;-microservice consumes the event, does some expensive calculations, and finally sends out an e-mail containing the results.</p> <p>To avoid spamming the receiver, I want to implement some kind of debouncing pattern so that calculations are only done when an hour has passed since receiving the last update-event. Since the &quot;Analytics&quot;-microservice would be idle for most of the time and cold starts are not a problem, it would be advantageous if resources are unreserved while it is inactive.</p> <p>How would one implement a debouncing scenario when using Kafka? I have thought of introducing a less resource-intensive microservice with the sole purpose of triggering the &quot;Analytics&quot;-microservice once an internal clock has expired. Is this a sensible solution? I would appreciate the input of someone who has dealt with a similar problem before.</p>
andreas_rwth
<p>Did you consider Temporal’s <em>durable timers</em>? A workflow can sleep for an hour (or much longer) without holding on to compute resources, which matches your debounce requirement. See here: <a href="https://docs.temporal.io/docs/workflows/" rel="nofollow noreferrer">https://docs.temporal.io/docs/workflows/</a></p> <p>You can also consider combining Temporal with <a href="https://spiderwiz.org/project/" rel="nofollow noreferrer">Spiderwiz</a> to streamline your entire workflow.</p>
zvil
<p>For two Statefulsets <code>sts1</code> and <code>sts2</code>, would it be possible to schedule:</p> <ul> <li>sts1-pod-0 and sts2-pod-0 on the same node,</li> <li>sts1-pod-1 and sts2-pod-1 on the same node,</li> <li>...</li> <li>sts1-pod-n and sts2-pod-n on the same node,</li> </ul> <p>An in addition do not collocate two pods of a given Statefulset on the same node?</p>
Fabrice Jammes
<pre><code>sts1-pod-0 and sts2-pod-0 on the same node, sts1-pod-1 and sts2-pod-1 on the same node, ... sts1-pod-n and sts2-pod-n on the same node, </code></pre> <p>One possible way is to run the paired containers in the same StatefulSet: pod <em>N</em> then carries both containers, which has the same effect as co-locating sts1-pod-N and sts2-pod-N on one node. In that case your affinity rule only needs to ensure that no two pods of the StatefulSet run on the same node, as sketched below.</p>
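<p>A minimal sketch of that idea (names and images are placeholders, not taken from your setup): a single StatefulSet carries both containers, and a required <code>podAntiAffinity</code> rule keyed on <code>kubernetes.io/hostname</code> spreads its pods across nodes:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-combined
spec:
  serviceName: sts-combined
  replicas: 3
  selector:
    matchLabels:
      app: sts-combined
  template:
    metadata:
      labels:
        app: sts-combined
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: sts-combined
              topologyKey: kubernetes.io/hostname
      containers:
        - name: sts1-app          # what used to run in sts1
          image: nginx            # placeholder image
        - name: sts2-app          # what used to run in sts2
          image: nginx            # placeholder image
</code></pre>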
gohm'c
<p>I am creating a micro services infrastructure with Kubernetes. I have two services, an authentication module and client service. The client service make some requests to authentication module in order to get information about the user, the authentication cookie and etc. etc. etc.</p> <p>This is the infraestructure:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: auth-depl spec: replicas: 1 selector: matchLabels: app: auth template: metadata: labels: app: auth spec: containers: - name: auth image: theimage env: - name: JWT_KEY valueFrom: secretKeyRef: name: jwt-secret key: JWT_KEY resources: limits: cpu: &quot;1&quot; memory: &quot;512Mi&quot; requests: cpu: &quot;1&quot; memory: &quot;512Mi&quot; --- apiVersion: v1 kind: Service metadata: name: auth-srv spec: selector: app: auth ports: - name: auth protocol: TCP port: 3000 targetPort: 3000 </code></pre> <p>Apparently working properly. Then, the client module:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: client-depl spec: replicas: 1 selector: matchLabels: app: client template: metadata: labels: app: client spec: containers: - name: client image: theimage --- apiVersion: v1 kind: Service metadata: name: client-srv spec: selector: app: client ports: - name: client protocol: TCP port: 3000 targetPort: 3000 </code></pre> <p>In order to maintain API consistency, I am using nginx-ingress. It seems to work perfectly and here is the configuration:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: rules: - host: kome.dev http: paths: - path: /api/users/?(.*) pathType: Prefix backend: service: name: auth-srv port: number: 3000 - path: /?(.*) pathType: Prefix backend: service: name: client-srv port: number: 3000 </code></pre> <p>The problem is that my client, written in nextjs, has to make a request to a specific route of the authentication module:</p> <pre><code>Index.getInitialProps = async () =&gt; { if (typeof window === 'undefined') { // server side const response = await axios.get('http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser') return response } else { // client side const response = await axios.get('/api/users/currentuser') return response } } </code></pre> <p>is getting a 404 error, but the endpoint exists and works fine outside of ingress-nginx.</p>
Diesan Romero
<p>You can keep the <code>- host: kome.dev</code> rule in the Ingress; the 404 happens because the server-side request to the ingress controller service carries no matching Host header, so no rule is selected. Your axios call can include the header explicitly, like:</p> <pre><code>await axios.get('...', { headers: { 'Host': 'kome.dev' } }) </code></pre>
gohm'c
<p>My question is pretty straightforward: since you can <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">force a job to run on a specific node</a>, is it possible to disable preemption on a given node ?</p> <p>Like, if I force a pod to execute on that node I know for sure that it won't get preempted ?</p>
Gaëtan
<p><code>...is it possible to disable preemption on a given node </code></p> <p>No, pod preemption doesn't apply at worker node level. Instead, define a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#user-exposed-information" rel="nofollow noreferrer">priority class</a> with <code>preemptionPolicy: Never</code>, then use this class in your pod spec <code>priorityClassName: &lt;the top priority class that you defined&gt;</code>.</p>
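<p>A minimal sketch of what that could look like (names and values here are illustrative, not required ones). The high <code>value</code> is what protects the pod from being preempted by lower-priority pods, while <code>preemptionPolicy: Never</code> keeps the pod itself from preempting others while it waits to be scheduled:</p> <pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never
globalDefault: false
description: &quot;High priority, but never preempts other pods.&quot;
---
apiVersion: v1
kind: Pod
metadata:
  name: my-job
spec:
  priorityClassName: high-priority-nonpreempting
  nodeSelector:
    kubernetes.io/hostname: my-node      # the node you force the job onto (placeholder)
  containers:
    - name: job
      image: busybox                     # placeholder image
      command: [&quot;sh&quot;, &quot;-c&quot;, &quot;echo working; sleep 3600&quot;]   # placeholder command
</code></pre>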
gohm'c
<p>I have an app which will start a server at <code>127.0.0.1:8080</code> and using a <code>Dockerfile</code> to create an image for hosting it on GKE. I deployed this app on <code>port 8080</code> on the kubernetes cluster. Then I EXPOSE the service as <code>LoadBalancer</code> for the same port 8080 but it is not allowing it to be accessed from outside. So I created an ingress for external access but still doesn't work. When I click at the IP provided by ingress, I get this Error:</p> <pre><code>Error: Server Error The server encountered a temporary error and could not complete your request. Please try again in 30 seconds. </code></pre> <p>I would like to ask is there something I have missed or done wrong in the implementation.</p> <p>My YAML file:</p> <pre><code>--- apiVersion: &quot;apps/v1&quot; kind: &quot;Deployment&quot; metadata: name: &quot;app&quot; namespace: &quot;default&quot; labels: app: &quot;app&quot; spec: replicas: 3 selector: matchLabels: app: &quot;app&quot; template: metadata: labels: app: &quot;app&quot; spec: containers: - name: &quot;app-sha256-1&quot; image: &quot;gcr.io/project-1234/github.com/user/app@sha256:b17b8159668d44fec3d&quot; --- apiVersion: &quot;autoscaling/v2beta1&quot; kind: &quot;HorizontalPodAutoscaler&quot; metadata: name: &quot;app-hpa-y3ay&quot; namespace: &quot;default&quot; labels: app: &quot;app&quot; spec: scaleTargetRef: kind: &quot;Deployment&quot; name: &quot;app&quot; apiVersion: &quot;apps/v1&quot; minReplicas: 1 maxReplicas: 5 metrics: - type: &quot;Resource&quot; resource: name: &quot;cpu&quot; targetAverageUtilization: 80 --- apiVersion: &quot;v1&quot; kind: &quot;Service&quot; metadata: name: &quot;app-service&quot; namespace: &quot;default&quot; labels: app: &quot;app&quot; spec: ports: - protocol: &quot;TCP&quot; port: 8080 selector: app: &quot;app&quot; type: &quot;LoadBalancer&quot; loadBalancerIP: &quot;&quot; --- apiVersion: &quot;extensions/v1beta1&quot; kind: &quot;Ingress&quot; metadata: name: &quot;ingress&quot; namespace: &quot;default&quot; spec: backend: serviceName: &quot;app-service&quot; servicePort: 8080 </code></pre> <p>Thanks! Look forward to the suggestions.</p>
Mohammad Saad
<p><code>start a server at 127.0.0.1:8080</code> - this makes your app accept connections only from inside the pod itself, so the load balancer and its health checks can never reach it. Bind to <code>0.0.0.0</code> instead.</p> <p>Also, declare the container port in your Deployment:</p> <pre><code>... containers: - name: &quot;app-sha256-1&quot; image: &quot;gcr.io/project-1234/github.com/user/app@sha256:b17b8159668d44fec3d&quot; ports: - containerPort: &lt;the port# that your container serves&gt; </code></pre> <p>And set <code>targetPort</code> in your Service:</p> <pre><code>... ports: - protocol: &quot;TCP&quot; port: 8080 targetPort: &lt;the port that your container serves&gt; </code></pre>
gohm'c
<p>I am using a deployment of Spring Boot (typical micro-service web server deployment, with Gateway, separate authentication server, etc, fronted with a reverse proxy/load balancing nginx deployment). We orchestrate Docker containers with Kubernetes. We are preparing for production deployment and have recently started load testing, revealing some issues in the handling of these loads.</p> <p>My issue is that when subjecting the server to high loads (here, performance testing with Gatling), the liveness probes return 503 errors, because of heavy load; this triggers a restart by Kubernetes.</p> <p>Naturally, the liveness probe is important, but when the system starts dropping requests, the last thing we should do is to kill pods, which causes cascading failures by shifting load to the remaining pods.</p> <p>This specific problem with the Spring Actuator health check is described <a href="https://stackoverflow.com/questions/50005849/kubernetes-liveness-reserve-threads-memory-for-a-specific-endpoint-with-spring">in this SO question</a>, and offers some hints, but the answers are not thorough. Specifically, the idea of using a liveness command (e.g. to check if the java process is running) seems to me inadequate, since it would miss actual down-time if the java process is running but there is some exception, or some missing resource (database, Kafka...)</p> <ol> <li>Is there a good guide for configuring production Spring on Kubernetes/Cloud deployments?</li> <li>How do I deal with the specific issue of the liveness probe failing when subjected to high loads, does anyone have experience with this?</li> </ol>
Alexandre Cassagne
<p><strong>Note:</strong> This is the answer provided by @AndyWilkinson and @ChinHuang on comments which @AlexandreCassagne stated that solved the issue:</p> <blockquote> <p>If a liveness probe indicates that the current level of traffic is overwhelming your app such that it cannot handle requests, trying to find a way to suppress that seems counter-productive to me. Do you have a readiness probe configured? When your app becomes overwhelmed, you probably want it to indicate that it is unable to handle traffic for a while. Once the load has dropped and it's recovered, it can then start handling traffic again without the need for a restart.</p> <p>Also, a liveness probe should only care about a missing resource (database, Kafka, etc) if that resource is only used by a single instance. If multiple instances all access the resource and it goes down, all of the liveness probes will fail. This will cause cascading failures and restarts across your deployment. There's some guidance on this in the <a href="https://docs.spring.io/spring-boot/docs/2.3.0.RELEASE/reference/htmlsingle/#boot-features-application-availability" rel="nofollow noreferrer">Spring Boot 2.3 reference documentation.</a></p> <p>Spring Boot 2.3 introduces <a href="https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot" rel="nofollow noreferrer">separate liveness and readiness probes</a>.</p> </blockquote>
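<p>As a concrete sketch of the separate probes (the paths are the Spring Boot 2.3 actuator defaults; the port and timings are assumptions you should tune for your app):</p> <pre><code># On Kubernetes, Spring Boot 2.3+ enables the liveness/readiness health groups
# automatically; outside Kubernetes you can force them with:
#   management.endpoint.health.probes.enabled=true
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3
</code></pre> <p>The idea is that under load the readiness probe fails first and the pod is simply removed from the Service endpoints until it recovers, while the liveness state (which by default does not depend on external resources) is what decides whether a restart is actually needed.</p>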
Will R.O.F.
<p>As I know if we need adjust "open files" <code>nofile</code> (soft and hard) in linux system, we need run command <code>ulimit</code> or set in related configuraiton file to get the setting permanently. But I am little bit confused about the setting for containers running in a host</p> <p>For example, If a Linux OS has <code>ulimit</code> nofile set to 1024 (soft) and Hard (4096) , and I run docker with <code>--ulimit nofile=10240:40960</code>, could the container use more nofiles than its host?</p> <h1>Update</h1> <p>In my environment, current setting with dockers running, </p> <ul> <li>On host (Debian)- 65535 (soft) 65535 (hard) </li> <li>Docker Daemon setting Max - 1048576 (soft) 1048576 (hard) </li> <li>default docker run - 1024 (soft) 4096 (hard) </li> <li>customized docker run - 10240 (soft) 40960 (hard)</li> </ul> <p>I found the application can run with about 100K open files, then crash. How to understand this?</p> <p>What's the real limits?</p>
Bill
<blockquote> <p>For example, If a Linux OS has <code>ulimit</code> nofile set to 1024 (soft) and Hard (4096) , and I run docker with <code>----ulimit nofile=10240:40960</code>, could the container use more nofiles than its host?</p> </blockquote> <ul> <li>Docker has the <code>CAP_SYS_RESOURCE</code> capability set on it's permissions. This means that Docker is able to set an <code>ulimit</code> different from the host. according to <code>man 2 prlimit</code>:</li> </ul> <blockquote> <p>A privileged process (under Linux: one with the CAP_SYS_RESOURCE capability in the initial user namespace) may make arbitrary changes to either limit value.</p> </blockquote> <ul> <li>So, for containers, the limits to be considered are the ones set by the docker daemon. You can check the docker daemon limits with this command:</li> </ul> <pre><code>$ cat /proc/$(ps -A | grep dockerd | awk '{print $1}')/limits | grep &quot;files&quot; Max open files 1048576 1048576 files </code></pre> <ul> <li><p>As you can see, the docker 19 has a pretty high limit of <code>1048576</code> so your 40960 <strong>will work</strong> like a charm.</p> </li> <li><p>And if you run a docker container with <code>--ulimit</code> set to be higher than the node but lower than the daemon itself, you won't find any problem, and won't need to give additional permissions like in the example below:</p> </li> </ul> <pre><code>$ cat /proc/$(ps -A | grep dockerd | awk '{print $1}')/limits | grep &quot;files&quot; Max open files 1048576 1048576 files $ docker run -d -it --rm --ulimit nofile=99999:99999 python python; 354de39a75533c7c6e31a1773a85a76e393ba328bfb623069d57c38b42937d03 $ cat /proc/$(ps -A | grep python | awk '{print $1}')/limits | grep &quot;files&quot; Max open files 99999 99999 files </code></pre> <ul> <li>You can set a new limit for dockerd on the file <code>/etc/init.d/docker</code>:</li> </ul> <pre><code>$ cat /etc/init.d/docker | grep ulimit ulimit -n 1048576 </code></pre> <ul> <li>As for the container itself having a <code>ulimit</code> higher than the docker daemon, it's a bit more tricky, but doable, refer <a href="https://github.com/m3db/m3/issues/1671#issuecomment-502243705" rel="nofollow noreferrer">here</a>.</li> <li>I saw you have tagged the Kubernetes tag, but didn't mention it in your question, but in order to make it work on Kubernetes, the container will need <code>securityContext.priviledged: true</code>, this way you can run the command <code>ulimit</code> as root inside the container, here an example:</li> </ul> <pre><code>image: image-name command: [&quot;sh&quot;, &quot;-c&quot;, &quot;ulimit -n 65536&quot;] securityContext: privileged: true </code></pre>
Will R.O.F.
<p>I am currently working on integrating <strong>Sumo Logic</strong> in a <strong>AWS EKS</strong> cluster. After going through <strong>Sumo Logic</strong>'s documentation on their integration with k8s I have arrived at the following section <a href="https://github.com/SumoLogic/sumologic-kubernetes-collection/blob/master/deploy/docs/Installation_with_Helm.md#installation-steps" rel="nofollow noreferrer">Installation Steps</a>. This section of the documentation is a fork in the road where one must figure out if you want to continue with the installation :</p> <ul> <li>side by side with your existing Prometheus Operator</li> <li>and update your existing Prometheus Operator</li> <li>with your standalone Prometheus (not using Prometheus Operator)</li> <li>with no pre-existing Prometheus installation</li> </ul> <p>With that said I am trying to figure out which scenario I am in as I am unsure. Let me explain, previous to working on this Sumo Logic integration I have completed the <a href="https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-integration-install-configure#configure-the-integration" rel="nofollow noreferrer">New Relic integration</a> which makes me wonder if it uses Prometheus in any ways that could interfere with the Sumo Logic integration ?</p> <p>So in order to figure that out I started by executing:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE aws-alb-ingress-controller-1600289507-7c7dc6f57d-sgpd8 1/1 Running 1 7d19h f5-admin-ui-5cbcc464df-lh8nl 1/1 Running 0 7d19h f5-ambassador-5b5db5ff88-k5clw 1/1 Running 0 7d19h f5-api-gateway-7bdfc9cb-q57lt 1/1 Running 0 7d19h f5-argo-ui-7b98dd67-2cwrz 1/1 Running 0 7d19h f5-auth-ui-58794664d9-rbccn 1/1 Running 0 7d19h f5-classic-rest-service-0 1/1 Running 0 7d19h f5-connector-plugin-service-box-7f8b48b88-8jxxq 1/1 Running 0 7d19h f5-connector-plugin-service-ldap-5d79fd4b8b-8kpcj 1/1 Running 0 7d19h f5-connector-plugin-service-sharepoint-77b5bdbf9b-vqx4t 1/1 Running 0 7d19h f5-devops-ui-859c97fb97-ftdxh 1/1 Running 0 7d19h f5-fusion-admin-64fb9df99f-svznw 1/1 Running 0 7d19h f5-fusion-indexing-6bbc7d4bcd-jh7cf 1/1 Running 0 7d19h f5-fusion-log-forwarder-78686cb8-shd6p 1/1 Running 0 7d19h f5-insights-6d9795f57-62qbg 1/1 Running 0 7d19h f5-job-launcher-9b659d984-n7h65 1/1 Running 3 7d19h f5-job-rest-server-55586d8db-xrzcn 1/1 Running 2 7d19h f5-ml-model-service-6c5bfd5b68-wwdkq 2/2 Running 0 7d19h f5-pm-ui-cc64c9498-gdmvp 1/1 Running 0 7d19h f5-pulsar-bookkeeper-0 1/1 Running 0 7d19h f5-pulsar-bookkeeper-1 1/1 Running 0 7d19h f5-pulsar-bookkeeper-2 1/1 Running 0 7d19h f5-pulsar-broker-0 1/1 Running 0 7d19h f5-pulsar-broker-1 1/1 Running 0 7d19h f5-query-pipeline-84749b6b65-9hzcx 1/1 Running 0 7d19h f5-rest-service-7855fdb676-6s6n8 1/1 Running 0 7d19h f5-rpc-service-676bfbf7f-nmbgp 1/1 Running 0 7d19h f5-rules-ui-6677475b8b-vbhcj 1/1 Running 0 7d19h f5-solr-0 1/1 Running 0 20h f5-templating-b6b964cdb-l4vjq 1/1 Running 0 7d19h f5-webapps-798b4d6864-b92wt 1/1 Running 0 7d19h f5-workflow-controller-7447466c89-pzpqk 1/1 Running 0 7d19h f5-zookeeper-0 1/1 Running 0 7d19h f5-zookeeper-1 1/1 Running 0 7d19h f5-zookeeper-2 1/1 Running 0 7d19h nri-bundle-kube-state-metrics-cdc9ffd85-2s688 1/1 Running 0 2d21h nri-bundle-newrelic-infrastructure-fj9g9 1/1 Running 0 2d21h nri-bundle-newrelic-infrastructure-jgckv 1/1 Running 0 2d21h nri-bundle-newrelic-infrastructure-pv27n 1/1 Running 0 2d21h nri-bundle-newrelic-logging-694hl 1/1 Running 0 2d21h nri-bundle-newrelic-logging-7w8cj 1/1 Running 0 2d21h 
nri-bundle-newrelic-logging-8gjw8 1/1 Running 0 2d21h nri-bundle-nri-kube-events-865664658d-ztq89 2/2 Running 0 2d21h nri-bundle-nri-metadata-injection-557855f78d-rzjxd 1/1 Running 0 2d21h nri-bundle-nri-metadata-injection-job-cxmqg 0/1 Completed 0 2d21h nri-bundle-nri-prometheus-ccd7b7fbd-2npvn 1/1 Running 0 2d21h seldon-controller-manager-5b5f89545-6vxgf 1/1 Running 1 7d19h </code></pre> <p>As you can see New Relic is running <code>nri-bundle-nri-prometheus-ccd7b7fbd-2npvn</code> which seems to correspond to the New Relic OpenMetric integration for Kubernetes or Docker. Browsing through New Relic's <a href="https://docs.newrelic.com/docs/integrations/prometheus-integrations/get-started/send-prometheus-metric-data-new-relic" rel="nofollow noreferrer">documentation</a> I found:</p> <blockquote> <p>We currently offer two integration options:</p> <ul> <li>Prometheus remote write integration. Use this if you currently have Prometheus servers and want an easy access to your combined metrics from New Relic.</li> <li>Prometheus OpenMetrics integration for Kubernetes or Docker. Use this if you’re looking for an alternative or replacement to a Prometheus server and store all your metrics directly in New Relic.</li> </ul> </blockquote> <p>So from what I can gather I am not running <strong>Prometheus</strong> <em>server</em> or <em>operator</em> and I can continue with the Sumo Logic integration setup by following the section dedicated to <em>installation with no pre-existing Prometheus installation</em> ? This is what I am trying to clarify, wondering if someone can help as I am new to <strong>Kubernetes</strong> and <strong>Prometheus</strong>.</p>
nabello
<p>I think you most likely will have to go with the installation option below:</p> <ul> <li>with your standalone Prometheus (not using Prometheus Operator)</li> </ul> <p>Can you check and paste the output of <code>kubectl get prometheus</code>? If it returns any Prometheus resource, you can run <code>kubectl describe prometheus $prometheus_resource_name</code> and check the labels to verify whether it is deployed by the operator or is a standalone Prometheus.</p> <p>In case it is deployed by the Prometheus Operator, you can use either of these approaches:</p> <ul> <li>side by side with your existing Prometheus Operator</li> <li>update your existing Prometheus Operator</li> </ul>
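<p>A few generic checks that can help figure out which situation you are in (these only assume kubectl access and the standard Prometheus Operator CRD group):</p> <pre><code># Prometheus instances managed by the operator (CRD-based)
kubectl get prometheus --all-namespaces

# are the Prometheus Operator CRDs installed at all?
kubectl get crd | grep monitoring.coreos.com

# any standalone Prometheus running as a plain Deployment/StatefulSet?
kubectl get deploy,statefulset --all-namespaces | grep -i prometheus
</code></pre> <p>If all of these come back empty, there is no Prometheus server in the cluster (the New Relic <code>nri-prometheus</code> pod is an OpenMetrics scraper, not a Prometheus server).</p>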
Vijit Singhal
<p>I have a need to define a standalone patch as YAML.</p> <p>More specifically, I want to do the following:</p> <pre><code>kubectl patch serviceaccount default -p '{&quot;imagePullSecrets&quot;: [{&quot;name&quot;: &quot;registry-my-registry&quot;}]}' </code></pre> <p>The catch is I can't use <code>kubectl patch</code>. I'm using a GitOps workflow with flux, and that resource I want to patch is a default resource created outside of flux.</p> <p>In other terms, I need to do the same thing as the command above but with <code>kubectl apply</code> only:</p> <pre><code>kubectl apply patch.yaml </code></pre> <p>I wasn't able to figure out if you can define such a patch.</p> <p><strong>The key bit is that I can't predict the name of the default secret token on a new cluster (as the name is random, i.e. <code>default-token-uudge</code>)</strong></p>
Juicy
<blockquote> <ul> <li>Fields set and deleted from Resource Config are merged into Resources by <code>Kubectl apply</code>:</li> <li>If a Resource already exists, Apply updates the Resources by merging the local Resource Config into the remote Resources</li> <li>Fields removed from the Resource Config will be deleted from the remote Resource</li> </ul> </blockquote> <p>You can learn more about <a href="https://kubectl.docs.kubernetes.io/pages/app_management/field_merge_semantics.html" rel="nofollow noreferrer">Kubernetes Field Merge Semantics</a>.</p> <ul> <li><p>If your limitation is not knowing the secret <code>default-token-xxxxx</code> name, no problem, just keep that field out of your yaml.</p> </li> <li><p>As long as the yaml has enough fields to identify the target resource (name, kind, namespace) it will add/edit the fields you set.</p> </li> <li><p>I created a cluster (minikube in this example, but it could be any) and retrieved the current default serviceAccount:</p> </li> </ul> <pre><code>$ kubectl get serviceaccount default -o yaml apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: &quot;2020-07-01T14:51:38Z&quot; name: default namespace: default resourceVersion: &quot;330&quot; selfLink: /api/v1/namespaces/default/serviceaccounts/default uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11 secrets: - name: default-token-j6zx2 </code></pre> <ul> <li>Then, we create a yaml file with the contents that we want to add:</li> </ul> <pre><code>$ cat add-image-pull-secrets.yaml apiVersion: v1 kind: ServiceAccount metadata: name: default namespace: default imagePullSecrets: - name: registry-my-registry </code></pre> <ul> <li>Now we apply and verify:</li> </ul> <pre><code>$ kubectl apply -f add-image-pull-secrets.yaml serviceaccount/default configured $ kubectl get serviceaccount default -o yaml apiVersion: v1 imagePullSecrets: - name: registry-my-registry kind: ServiceAccount metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;imagePullSecrets&quot;:[{&quot;name&quot;:&quot;registry-my-registry2&quot;}],&quot;kind&quot;:&quot;ServiceAccount&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;default&quot;,&quot;namespace&quot;:&quot;default&quot;}} creationTimestamp: &quot;2020-07-01T14:51:38Z&quot; name: default namespace: default resourceVersion: &quot;2382&quot; selfLink: /api/v1/namespaces/default/serviceaccounts/default uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11 secrets: - name: default-token-j6zx2 </code></pre> <p>As you can see, the <code>imagePullSecrets</code> entry was added to the resource.</p> <p>I hope it fits your needs. If you have any further questions let me know in the comments.</p>
Will R.O.F.
<p>I have a nodeJS api services and .env file with some configuration.</p> <p>Currently, in my local environment I can run my service. I read the .env file on startup and access the value via process.env.[name] command</p> <pre><code>const myEnv = dotenv.config({ path: path.resolve(path.join(__dirname, '/configuration/.env')) }); dotenvExpand(myEnv); </code></pre> <p>This is my setting in my deployment yaml file</p> <pre><code> envFrom: - configMapRef: name: jobs-nodeapi-qa--configmap </code></pre> <p>I create a configMap in GCP and deploy. How do I change my code so that it read from the config map</p> <p>Thanks</p>
user2570135
<p>No need to change your code that refers to the .env file. Since you have created the ConfigMap, you can simply mount it at the path your code expects.</p> <p>Create the ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: env data: .env: | key=value </code></pre> <p>Use it in the pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: busybox labels: app: busybox spec: volumes: - name: env configMap: name: env containers: - name: busybox image: busybox command: - sh - -c - while :; do cat /configuration/.env; sleep 1; done volumeMounts: - name: env mountPath: /configuration # update this path to the path your app expects </code></pre> <p>The example above simply prints the .env content every second, so you can verify that the mount works.</p>
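<p>A possible shortcut is to create (and later update) the ConfigMap straight from your existing .env file instead of writing the YAML by hand; the ConfigMap name <code>env</code> below matches the example above:</p> <pre><code># create it from the local file
kubectl create configmap env --from-file=.env=./configuration/.env

# update it later without deleting and recreating it
kubectl create configmap env --from-file=.env=./configuration/.env \
  --dry-run=client -o yaml | kubectl apply -f -
</code></pre>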
gohm'c
<p>How can I persist parameter values to the values.yaml file when setting them on the command line, e.g. <code>helm install . --name test --set image.tag=2020 --set image.version=20</code>? How do I get these image.tag and image.version values written back into values.yaml? A dry run shows the rendered result but won't update values.yaml.</p>
veer1987
<p><strong>Helm</strong> is a package manager, and it's all about automating deployment of kubernetes apps. It's designed to be somewhat static, and to be changed only by the creator of the chart.</p> <ul> <li><p><a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">Values Files</a> provide access to values passed into the chart. Their contents come from multiple sources:</p> <blockquote> <ul> <li>The <code>values.yaml</code> file <strong>in the chart</strong></li> <li>If this is a subchart, the <code>values.yaml</code> file <strong>of a parent chart</strong></li> <li><strong>A values file if passed into helm install</strong> or helm upgrade with the <code>-f</code> flag (<code>helm install -f myvals.yaml ./mychart</code>)</li> <li><strong>Individual parameters</strong> passed with <code>--set</code> (such as <code>helm install --set foo=bar ./mychart</code>)</li> </ul> </blockquote></li> <li><p>This is the base hierarchy of the values files, but there is more to it:</p></li> </ul> <p><a href="https://i.stack.imgur.com/BsGIk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BsGIk.png" alt="enter image description here"></a> <em>Kudos to the creator of this image, unfortunately I wasn't able to find the author to credit him.</em></p> <ul> <li>You can't change the chart <code>values.yaml</code> file exactly as you are thinking, because the original <code>values.yaml</code> keeps the state desired by the creator of the chart.</li> <li>The flowchart above is all about changes made during <code>helm install</code> or <code>helm upgrade</code>.</li> </ul> <hr> <p>I'll try to exemplify your scenario:</p> <ul> <li>Chart 1 has the default values:</li> </ul> <pre><code>image: original-image version: original-version </code></pre> <ul> <li>You decided to deploy this chart changing some values using <code>--set</code> as in your example <code>helm install --name abc --set image=abc --set version=123</code>. Resulting in:</li> </ul> <pre><code>image: abc version: 123 </code></pre> <ul> <li>Then you want to upgrade the chart and modify the <code>version</code> value while keeping the other values as set, so you run <code>helm upgrade --set version=124 --reuse-values</code>; here are the resulting values in effect:</li> </ul> <pre><code>image: abc version: 124 </code></pre> <p><strong>NOTE:</strong> As seen in the flowchart, <strong>if you don't specify --reuse-values</strong> the upgrade will reset the values that were not <code>--set</code> back to the chart's originals. In this case <code>image</code> would again be <code>original-image</code>.</p> <hr> <p>So, to wrap up your main question:</p> <blockquote> <p>how to persist --set key values to values.yaml in helm install/upgrade?</p> </blockquote> <p>You can persist the <code>--set</code> values during the <code>upgrade</code> by always using <code>--reuse-values</code>; however, the changes will never be committed to the chart's original <code>values.yaml</code> file.</p> <p>If you are the owner of the chart, the recommended practice is to create release versions of your chart, so you can keep track of what the defaults were in each version.</p>
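<p>A practical tip that may help here (helm 3 syntax assumed; the release name <code>test</code> is taken from your example): instead of trying to write the <code>--set</code> values back into the chart's values.yaml, keep your overrides in their own file and let Helm show you what is currently in effect:</p> <pre><code># values you supplied for the release (add --all to include chart defaults)
helm get values test

# save them to a file you version-control, then upgrade from that file
helm get values test -o yaml &gt; my-values.yaml
helm upgrade test . -f my-values.yaml
</code></pre>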
Will R.O.F.
<p>Below is a <code>k8s</code> <code>configmap</code> configuration, I need to use the <code>kubectl patch</code> command to update it, but don't know how to do it</p> <pre><code># kubectl get configmap myconfig -o yaml apiVersion: v1 kind: ConfigMap metadata: name: debug-config data: config.json: |- { &quot;portServiceDMS&quot;: 500, &quot;Buggdse&quot;: { &quot;Enable&quot;: false }, &quot;GHInterval&quot;: { &quot;Start&quot;: 5062, &quot;End&quot;: 6000 }, &quot;LOPFdFhd&quot;: false, &quot;CHF&quot;: { &quot;DriverName&quot;: &quot;mysql&quot; }, &quot;Paralbac&quot;: { &quot;LoginURL&quot;: &quot;https://127.0.0.1:7788&quot;, &quot;Sources&quot;: [ { &quot;ServiceName&quot;: &quot;Hopyyu&quot;, &quot;Status&quot;: false, &quot;ServiceURL&quot;: &quot;https://127.0.0.1:9090/ft/test&quot; }, { &quot;SourceName&quot;: &quot;Bgudreg&quot;, &quot;Status&quot;: false, # need to patch here to true &quot;ServiceURL&quot;: &quot;https://127.0.0.1:9090&quot; # need to patch here to &quot;https://192.168.123.177:45663&quot; } ] } } </code></pre> <p>I searched on <code>google</code> site to find a similar way to deal with <a href="https://stackoverflow.com/questions/62578789/kubectl-patch-is-it-possible-to-add-multiple-values-to-an-array-within-a-sinlge">it</a>, but it doesn't work</p> <p>I tried this command and it doesn't work:</p> <pre><code>kubectl get cm myconfig -o json | jq -r '.data.&quot;config.json&quot;.Paralbac.Sources[1]={&quot;SourceName&quot;: &quot;Bgudreg&quot;, &quot;Status&quot;: true, &quot;ServiceURL&quot;: &quot;https://192.168.123.177:45663&quot;}' | kubectl apply -f - </code></pre> <p>I reduced the command to here:</p> <pre><code>kubectl get cm myconfig -o json | jq -r '.data.&quot;config.json&quot; # it works (The double quotes are for escaping the dot) kubectl get cm myconfig -o json | jq -r '.data.&quot;config.json&quot;.Paralbac # it can't work: jq: error (at &lt;stdin&gt;:18): Cannot index string with string &quot;Paralbac&quot; </code></pre> <p>So, I think my current problem is in how to keep working after <code>escaped</code> symbols in <code>jq</code></p>
cydia
<p>Here's how you can update the ConfigMap in the question:</p> <pre><code>myconfig=$(mktemp) \ &amp;&amp; kubectl get configmap debug-config -o jsonpath='{.data.config\.json}' \ | jq '.Paralbac.Sources[1].Status = true' \ | jq '.Paralbac.Sources[1].ServiceURL = &quot;https://192.168.123.177:45663&quot;' &gt; &quot;$myconfig&quot; \ &amp;&amp; kubectl create configmap debug-config --from-file=config.json=&quot;$myconfig&quot; --dry-run=client -o yaml | kubectl replace -f - \ &amp;&amp; rm &quot;$myconfig&quot; </code></pre> <p>Now running <code>kubectl get configmap debug-config -o jsonpath='{.data.config\.json}' | jq</code> will show you the updated <strong>config.json</strong> in the ConfigMap.</p>
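<p>If you prefer to stay closer to <code>kubectl patch</code> and skip the temp file, here is an alternative sketch. Since <code>config.json</code> is a single string value under <code>data</code>, the whole string has to be replaced in one merge patch; <code>jq -n --arg</code> re-escapes the edited JSON back into that string:</p> <pre><code>new_json=$(kubectl get configmap debug-config -o jsonpath='{.data.config\.json}' \
  | jq '.Paralbac.Sources[1].Status = true
        | .Paralbac.Sources[1].ServiceURL = &quot;https://192.168.123.177:45663&quot;')

kubectl patch configmap debug-config --type merge \
  -p &quot;$(jq -n --arg cfg &quot;$new_json&quot; '{data: {&quot;config.json&quot;: $cfg}}')&quot;
</code></pre>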
gohm'c
<p>I have Keycloak (10.0.3) server configured inside a Kubernetes Cluster.</p> <p>The keycloak server has to handle authentification for external user (using an external url) and also handle oauth2 token for Spring microservices communications.</p> <p>Then web application spring services uses oidc providers :</p> <pre><code>spring: security: oauth2: client: provider: oidc: issuer-uri: http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm authorization-uri: http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm/protocol/openid-connect/auth jwk-set-uri: http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm/protocol/openid-connect/certs token-uri: http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm/protocol/openid-connect/token user-name-attribute: preferred_username </code></pre> <p>The external URL of keycloak is <a href="https://keycloak.localhost" rel="nofollow noreferrer">https://keycloak.localhost</a>, managed by ingress redirection handled by Traefik v2</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: keycloak-https namespace: keycloak-cluster annotations: traefik.frontend.passHostHeader: &quot;true&quot; spec: entryPoints: - websecure routes: - match: Host(`keycloak.localhost`) kind: Rule services: - name: keycloak-cluster-http port: 80 tls: options: name: mytlsoption namespace: traefik store: name: default </code></pre> <p>I can access Keycloak using <a href="https://keycloak.localhost" rel="nofollow noreferrer">https://keycloak.localhost</a>, no problem, it works.</p> <p>The problem is that when I try to access my web application, it will always redirect to 'http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm', which is not resolved outside k8s.</p> <p>If I change issuer-uri to <a href="http://keycloak.localhost" rel="nofollow noreferrer">http://keycloak.localhost</a> then it doesn't work as keycloak.locahost is not resolved inside k8s.</p> <p>I tried to set the KEYCLOAK_FRONTEND_URL to <a href="https://keycloak.localhost/auth" rel="nofollow noreferrer">https://keycloak.localhost/auth</a>, but no change.</p> <p>Please, does someone has the same kind of settings and managed to make it working ?</p> <p>Best regards</p>
JHapy
<p>Managed to fix it using coredns and adding a rewrite rule... :</p> <p><strong>rewrite name keycloak.localhost keycloak-cluster-http.keycloak-cluster.svc.cluster.local</strong></p> <pre><code>apiVersion: v1 data: Corefile: | .:53 { errors health ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } rewrite name keycloak.localhost keycloak-cluster-http.keycloak-cluster.svc.cluster.local prometheus :9153 forward . /etc/resolv.conf cache 30 loop reload loadbalance } kind: ConfigMap metadata: name: coredns namespace: kube-system </code></pre>
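<p>One extra note in case it helps someone with the same setup: the Corefile above already includes the <code>reload</code> plugin, so CoreDNS should pick up the edited ConfigMap on its own after a short delay. You can also force it and then verify the rewrite from inside the cluster (the test image here is just an example):</p> <pre><code># force CoreDNS to reload the new Corefile immediately
kubectl -n kube-system rollout restart deployment coredns

# verify that the external hostname now resolves to the in-cluster service
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup keycloak.localhost
</code></pre>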
JHapy
<p>I have implemented a gRPC service, build it into a container, and deployed it using k8s, in particular AWS EKS, as a DaemonSet.</p> <p>The Pod starts and turns to be in Running status very soon, but it takes very long, typically 300s, for the actual service to be accessible.</p> <p>In fact, when I run <code>kubectl logs</code> to print the log of the Pod, it is empty for a long time.</p> <p>I have logged something at the very starting of the service. In fact, my code looks like</p> <pre class="lang-golang prettyprint-override"><code>package main func init() { log.Println(&quot;init&quot;) } func main() { // ... } </code></pre> <p>So I am pretty sure when there are no logs, the service is not started yet.</p> <p>I understand that there may be a time gap between the Pod is running and the actual process inside it is running. However, 300s looks too long for me.</p> <p>Furthermore, this happens randomly, sometimes the service is ready almost immediately. By the way, my runtime image is based on <a href="https://hub.docker.com/r/chromedp/headless-shell/" rel="nofollow noreferrer">chromedp headless-shell</a>, not sure if it is relevant.</p> <p>Could anyone provide some advice for how to debug and locate the problem? Many thanks!</p> <hr /> <p>Update</p> <p>I did not set any readiness probes.</p> <p>Running <code>kubectl get -o yaml</code> of my DaemonSet gives</p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: annotations: deprecated.daemonset.template.generation: &quot;1&quot; creationTimestamp: &quot;2021-10-13T06:30:16Z&quot; generation: 1 labels: app: worker uuid: worker name: worker namespace: collection-14f45957-e268-4719-88c3-50b533b0ae66 resourceVersion: &quot;47265945&quot; uid: 88e4671f-9e33-43ef-9c49-b491dcb578e4 spec: revisionHistoryLimit: 10 selector: matchLabels: app: worker uuid: worker template: metadata: annotations: prometheus.io/path: /metrics prometheus.io/port: &quot;2112&quot; prometheus.io/scrape: &quot;true&quot; creationTimestamp: null labels: app: worker uuid: worker spec: containers: - env: - name: GRPC_PORT value: &quot;22345&quot; - name: DEBUG value: &quot;false&quot; - name: TARGET value: localhost:12345 - name: TRACKER value: 10.100.255.31:12345 - name: MONITOR value: 10.100.125.35:12345 - name: COLLECTABLE_METHODS value: shopping.ShoppingService.GetShop - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: DISTRIBUTABLE_METHODS value: collection.CollectionService.EnumerateShops - name: PERFORM_TASK_INTERVAL value: 0.000000s image: xxx imagePullPolicy: Always name: worker ports: - containerPort: 22345 protocol: TCP resources: requests: cpu: 1800m memory: 1Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File - env: - name: CAPTCHA_PARALLEL value: &quot;32&quot; - name: HTTP_PROXY value: http://10.100.215.25:8080 - name: HTTPS_PROXY value: http://10.100.215.25:8080 - name: API value: 10.100.111.11:12345 - name: NO_PROXY value: 10.100.111.11:12345 - name: POD_IP image: xxx imagePullPolicy: Always name: source ports: - containerPort: 12345 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/ssl/certs/api.crt name: ca readOnly: true subPath: tls.crt dnsPolicy: ClusterFirst nodeSelector: api/nodegroup-app: worker restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - name: ca secret: defaultMode: 420 secretName: ca updateStrategy: rollingUpdate: maxSurge: 
0 maxUnavailable: 1 type: RollingUpdate status: currentNumberScheduled: 2 desiredNumberScheduled: 2 numberAvailable: 2 numberMisscheduled: 0 numberReady: 2 observedGeneration: 1 updatedNumberScheduled: 2 </code></pre> <p>Furthermore, there are two containers in the Pod. Only one of them is exceptionally slow to start, and the other one is always fine.</p>
HanXu
<p>When you use HTTP_PROXY in your solution, watch out for how it may route traffic differently from your underlying cluster network - this often results in unexpected timeouts.</p>
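<p>As a hedged example (the CIDR ranges and suffixes below are assumptions, not values from the original post), you can exclude in-cluster destinations from the proxy with a <code>NO_PROXY</code> entry so pod-to-pod and service calls bypass the corporate proxy:</p> <pre><code># Hypothetical values - replace with your own cluster/service CIDRs and domains
- name: HTTP_PROXY
  value: http://10.100.215.25:8080
- name: HTTPS_PROXY
  value: http://10.100.215.25:8080
- name: NO_PROXY
  value: localhost,127.0.0.1,10.0.0.0/8,.svc,.cluster.local
</code></pre> <p>Whether CIDR notation in <code>NO_PROXY</code> is honoured depends on the HTTP client library in use, so verify against your own stack.</p>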
gohm'c
<p>I have created 2 tenants(tenant1,tenant2) in 2 namespaces tenant1-namespace,tenant2-namespace</p> <p>Each tenant has db pod and its services</p> <p>How to isolate db pods/service i.e. how to restrict pod/service from his namespace to access other tenants db pods ?</p> <p>I have used service account for each tenant and applied network policies so that namespaces are isolated. </p> <pre><code>kubectl get svc --all-namespaces tenant1-namespace grafana-app LoadBalancer 10.64.7.233 104.x.x.x 3000:31271/TCP 92m tenant1-namespace postgres-app NodePort 10.64.2.80 &lt;none&gt; 5432:31679/TCP 92m tenant2-namespace grafana-app LoadBalancer 10.64.14.38 35.x.x.x 3000:32226/TCP 92m tenant2-namespace postgres-app NodePort 10.64.2.143 &lt;none&gt; 5432:31912/TCP 92m </code></pre> <p>So</p> <p>I want to restrict grafana-app to use only his postgres db in his namespace only, not in other namespace.</p> <p>But problem is that using DNS qualified service name (<code>app-name.namespace-name.svc.cluster.local</code>) its allowing to access each other db pods (grafana-app in namespace tenant1-namespace can have access to postgres db in other tenant2-namespace via <code>postgres-app.tenant2-namespace.svc.cluster.local</code></p> <p>Updates : network policies</p> <p>1) </p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-from-other-namespaces spec: podSelector: matchLabels: ingress: - from: - podSelector: {} </code></pre> <p>2) </p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external spec: podSelector: matchLabels: app: grafana-app ingress: - from: [] </code></pre>
Developer Desk
<ul> <li><p><strong>Your <code>NetworkPolicy</code> objects are correct</strong>, I created an example with them and will demonstrate bellow.</p></li> <li><p><strong>If you still have access</strong> to the service <strong>on the other namespace</strong> using FQDN, <strong>your <code>NetworkPolicy</code> may not be fully enabled</strong> on your cluster.</p></li> </ul> <p>Run <code>gcloud container clusters describe "CLUSTER_NAME" --zone "ZONE"</code> and look for these two snippets:</p> <ul> <li>At the beggining of the description it shows if the NetworkPolicy Plugin is enabled <strong>at Master level</strong>, it should be like this:</li> </ul> <pre><code>addonsConfig: networkPolicyConfig: {} </code></pre> <ul> <li>At the middle of the description, you can find if the NetworkPolicy is <strong>enabled on the nodes</strong>. It should look like this:</li> </ul> <pre><code>name: cluster-1 network: default networkConfig: network: projects/myproject/global/networks/default subnetwork: projects/myproject/regions/us-central1/subnetworks/default networkPolicy: enabled: true provider: CALICO </code></pre> <ul> <li>If any of the above is different, check here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy#gcloud" rel="nofollow noreferrer">How to Enable Network Policy in GKE</a></li> </ul> <hr> <p><strong>Reproduction:</strong></p> <ul> <li>I'll create a simple example, I'll use <code>gcr.io/google-samples/hello-app:1.0</code> image for tenant1 and <code>gcr.io/google-samples/hello-app:2.0</code> for tenant2, so it's simplier to see where it's connecting but i'll use the names of your environment:</li> </ul> <pre><code>$ kubectl create namespace tenant1 namespace/tenant1 created $ kubectl create namespace tenant2 namespace/tenant2 created $ kubectl run -n tenant1 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0 pod/grafana-app created $ kubectl run -n tenant1 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0 pod/postgres-app created $ kubectl run -n tenant2 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0 pod/grafana-app created $ kubectl run -n tenant2 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0 pod/postgres-app created $ kubectl expose pod -n tenant1 grafana-app --port=8080 --type=LoadBalancer service/grafana-app exposed $ kubectl expose pod -n tenant1 postgres-app --port=8080 --type=NodePort service/postgres-app exposed $ kubectl expose pod -n tenant2 grafana-app --port=8080 --type=LoadBalancer service/grafana-app exposed $ kubectl expose pod -n tenant2 postgres-app --port=8080 --type=NodePort service/postgres-app exposed $ kubectl get all -o wide -n tenant1 NAME READY STATUS RESTARTS AGE IP NODE pod/grafana-app 1/1 Running 0 100m 10.48.2.4 gke-cluster-114-default-pool-e5df7e35-ez7s pod/postgres-app 1/1 Running 0 100m 10.48.0.6 gke-cluster-114-default-pool-e5df7e35-c68o NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/grafana-app LoadBalancer 10.1.23.39 34.72.118.149 8080:31604/TCP 77m run=grafana-app service/postgres-app NodePort 10.1.20.92 &lt;none&gt; 8080:31033/TCP 77m run=postgres-app $ kubectl get all -o wide -n tenant2 NAME READY STATUS RESTARTS AGE IP NODE pod/grafana-app 1/1 Running 0 76m 10.48.4.8 gke-cluster-114-default-pool-e5df7e35-ol8n pod/postgres-app 1/1 Running 0 100m 10.48.4.5 gke-cluster-114-default-pool-e5df7e35-ol8n NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/grafana-app LoadBalancer 
10.1.17.50 104.154.135.69 8080:30534/TCP 76m run=grafana-app service/postgres-app NodePort 10.1.29.215 &lt;none&gt; 8080:31667/TCP 77m run=postgres-app </code></pre> <ul> <li>Now, let's deploy your two rules: The first blocking all traffic from outside the namespace, the second allowing ingress the <code>grafana-app</code> from outside of the namespace:</li> </ul> <pre><code>$ cat default-deny-other-ns.yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-from-other-namespaces spec: podSelector: matchLabels: ingress: - from: - podSelector: {} $ cat allow-grafana-ingress.yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external spec: podSelector: matchLabels: run: grafana-app ingress: - from: [] </code></pre> <ul> <li>Let's review the rules for <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods" rel="nofollow noreferrer">Network Policy Isolation</a>:</li> </ul> <blockquote> <p>By default, pods are non-isolated; they accept traffic from any source.</p> <p>Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)</p> <p><strong>Network policies do not conflict; they are additive</strong>. If any policy or policies select a pod, the pod is restricted to what is allowed <strong>by the union of those policies</strong>' ingress/egress rules. Thus, order of evaluation does not affect the policy result.</p> </blockquote> <ul> <li>Then we will apply the rules on both namespaces because the scope of the rule is the namespace it's assigned to:</li> </ul> <pre><code>$ kubectl apply -n tenant1 -f default-deny-other-ns.yaml networkpolicy.networking.k8s.io/deny-from-other-namespaces created $ kubectl apply -n tenant2 -f default-deny-other-ns.yaml networkpolicy.networking.k8s.io/deny-from-other-namespaces created $ kubectl apply -n tenant1 -f allow-grafana-ingress.yaml networkpolicy.networking.k8s.io/web-allow-external created $ kubectl apply -n tenant2 -f allow-grafana-ingress.yaml networkpolicy.networking.k8s.io/web-allow-external created </code></pre> <ul> <li>Now for final testing, I'll log inside <code>grafana-app</code> in <code>tenant1</code> and try to reach the <code>postgres-app</code> in both namespaces and check the output:</li> </ul> <pre><code>$ kubectl exec -n tenant1 -it grafana-app -- /bin/sh / ### POSTGRES SAME NAMESPACE ### / # wget -O- postgres-app:8080 Connecting to postgres-app:8080 (10.1.20.92:8080) Hello, world! Version: 1.0.0 Hostname: postgres-app / ### GRAFANA OTHER NAMESPACE ### / # wget -O- --timeout=1 http://grafana-app.tenant2.svc.cluster.local:8080 Connecting to grafana-app.tenant2.svc.cluster.local:8080 (10.1.17.50:8080) Hello, world! Version: 2.0.0 Hostname: grafana-app / ### POSTGRES OTHER NAMESPACE ### / # wget -O- --timeout=1 http://postgres-app.tenant2.svc.cluster.local:8080 Connecting to postgres-app.tenant2.svc.cluster.local:8080 (10.1.29.215:8080) wget: download timed out </code></pre> <ul> <li>You can see that the DNS is resolved, but the networkpolicy blocks the access to the backend pods.</li> </ul> <p>If after double checking NetworkPolicy is enabled on Master and Nodes you still face the same issue let me know in the comments and we can dig further.</p>
Will R.O.F.
<p>I have a 3-node ubuntu microk8s installation and it seems to be working ok. All 3 nodes are management nodes.</p> <p>On only one of the nodes, I get an error message and associated delay whenever I use a <strong>kubectl</strong> command. It looks like this:</p> <pre class="lang-sh prettyprint-override"><code>$ time kubectl get pods I0324 03:49:44.270996 514696 request.go:665] Waited for 1.156689289s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/authentication.k8s.io/v1?timeout=32s NAME READY STATUS RESTARTS AGE sbnweb-5f9d9b977f-lw7t9 1/1 Running 1 (10h ago) 3d3h shell-6cfccdbd47-zd2tn 1/1 Running 0 6h39m real 0m6.558s user 0m0.414s sys 0m0.170s </code></pre> <p>The error message always shows a different URL each time. I tried looking up the error code (I0324) and haven't found anything useful.</p> <p>The other two nodes don't show this behavior. No error message and completes the request in less than a second.</p> <p>I'm new to k8s so I am not sure how to diagnose this kind of problem. Any hints on what to look for would be greatly appreciated.</p>
AlanObject
<p>Here's a good <a href="https://jonnylangefeld.com/blog/the-kubernetes-discovery-cache-blessing-and-curse" rel="noreferrer">write-up</a> about the issue. In some cases <code>rm -rf ~/.kube/cache</code> will resolve it.</p>
gohm'c
<p>I have been trying to get prestop to run a script before the pod terminates (to prolong the termination until the current job has finished), but command doesn't seem to be executing the commands. I've temporarily added an echo command, which i would expect to see in kubectl logs for the pod, i can't see this either.</p> <p>This is part of the (otherwise working) deployment spec:</p> <pre><code> containers: - name: file-blast-app image: my_image:stuff imagePullPolicy: Always lifecycle: preStop: exec: command: [&quot;echo&quot;,&quot;PRE STOP!&quot;] </code></pre> <p>Does anyone know why this would not be working and if i'm correct to expect the logs from hook in kubectl logs for the pod?</p>
Happy Machine
<p>You forgot to mention the shell through which you want this command to be executed.</p> <p>Try using the following in your YAML.</p> <pre><code> containers: - name: file-blast-app image: my_image:stuff imagePullPolicy: Always lifecycle: preStop: exec: command: [&quot;/bin/sh&quot;,&quot;-c&quot;,&quot;echo PRE STOP!&quot;] </code></pre> <p>Also, one thing to note is that a preStop hook only gets executed when a pod is terminated, and not when it is completed. You can read more on this <a href="https://github.com/kubernetes/kubernetes/issues/55807" rel="nofollow noreferrer">here</a>.</p> <p>You can also refer to the K8S official documentation for lifecycle hooks <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">here</a>.</p>
Abhinav Thakur
<p>How can I duplicate a namespace with all content with a new name in the same kubernetes cluster?</p> <p>e.g. Duplicate default to my-namespace which will have the same content.</p> <p>I'm interested just by services and deployments, so when I try with method with kubectl get all and with api-resources i have error with services IP like :</p> <pre><code>Error from server (Invalid): Service "my-service" is invalid: spec.clusterIP: Invalid value: "10.108.14.29": provided IP is already allocated </code></pre>
Inforedaster
<p>You can back up your namespace using <strong>Velero</strong> and then restore it to another namespace or cluster!</p>
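<p>As a rough sketch (the backup name and namespaces are placeholders, and this assumes Velero is already installed with a working backup location):</p> <pre><code># Back up only the source namespace
velero backup create default-backup --include-namespaces default

# Restore it into a different namespace in the same cluster
velero restore create --from-backup default-backup \
  --namespace-mappings default:my-namespace
</code></pre> <p>On restore, Velero should not reuse per-cluster fields such as a Service's <code>clusterIP</code>, which avoids the &quot;provided IP is already allocated&quot; error from the question.</p>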
786Logix
<p>I have a k8s cluster with one master node (node one) and three worker nodes(node two, three, and four). Is there any way to change node two to the master node and change node one to the worker node? In other words, switch the role of node one and node two.</p> <p>Thanks</p>
Erika
<p><code>Is there any way to change node two to the master node and change node one to the worker node?</code></p> <p>The K8s control plane (aka master) is made up of the components you see with <code>kubectl get componentstatuses</code>. This is not like <code>docker node promote/demote</code>. In your case, you need to delete node 2 from the cluster and re-join it as a <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#steps-for-the-rest-of-the-control-plane-nodes" rel="nofollow noreferrer">control plane</a> node. Then delete node 1 and re-join it as a worker node, as sketched below.</p>
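<p>A rough sketch of that sequence with kubeadm (node names, the API endpoint and the token/hash/key values are placeholders printed by the commands themselves):</p> <pre><code># On the current control plane (node one): remove node two
kubectl drain node2 --ignore-daemonsets
kubectl delete node node2

# On node two: wipe the old state
sudo kubeadm reset

# On node one: print a fresh join command and upload the control-plane certs
kubeadm token create --print-join-command
sudo kubeadm init phase upload-certs --upload-certs

# On node two: re-join as a control-plane node using the values printed above
sudo kubeadm join &lt;endpoint&gt;:6443 --token &lt;token&gt; \
  --discovery-token-ca-cert-hash sha256:&lt;hash&gt; \
  --control-plane --certificate-key &lt;key&gt;
</code></pre> <p>Afterwards, drain/reset/re-join node 1 the same way but without the <code>--control-plane</code> flags so it comes back as a worker. Keep in mind that with a single control-plane node the API server goes down during the swap, so it is safer to add node 2 as a second control plane first and only then remove node 1.</p>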
gohm'c
<p>I have a problem with grpc communication on kubernetes. I have a java client calling a simple helloworld grpc service (in java too). Everything works fine in Docker but not in kubernetes :'(.</p> <p>You can get the whole code here: <a href="https://github.com/hagakure/testgrpc" rel="nofollow noreferrer">https://github.com/hagakure/testgrpc</a></p> <p>How to reproduce?</p> <ul> <li>just run <code>./start.bat</code></li> <li>then I create my deployments and I expose them on kubectl.</li> <li>You can access the service with this address: <code>http://localhost:8080/hello?firstName=cedric</code> but it doesn't work :'(</li> <li>I added a second method (http://localhost:8080/check?address=host.docker.internal&amp;port=8081) to try different address/port combination to connect to server but yet it never works.</li> <li>I also tried a simple http connection between two exposed services and that works fine so it's not just the ip/port that doesn't match :(</li> </ul> <p>If you have any clue on how to make grpc works in kubernetes, thanks for your help.</p>
Fieux cédric
<ul> <li>Your deployment yamls are lacking the service part to route the traffic to the pods appropriately.</li> <li>Also, the deployments are not passing the Environment Variables as specified in the <code>docker-compose.yml</code>. Here are the fixed yamls.</li> <li><code>deployment_server.yml</code>:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/name: grpc-server name: grpc-server spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: grpc-server template: metadata: labels: app.kubernetes.io/name: grpc-server spec: containers: - image: poc_grpc-server:latest imagePullPolicy: Never name: grpc-server ports: - containerPort: 8081 env: - name: GRPC_SERVER_PORT value: &quot;8081&quot; --- apiVersion: v1 kind: Service metadata: name: grpc-server-svc spec: selector: app.kubernetes.io/name: grpc-server ports: - protocol: TCP port: 8081 </code></pre> <ul> <li><code>deployment_client.yml</code>:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/name: grpc-client name: grpc-client spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: grpc-client template: metadata: labels: app.kubernetes.io/name: grpc-client spec: containers: - image: poc_grpc-client imagePullPolicy: Never name: grpc-client ports: - containerPort: 8080 env: - name: GRPC_SERVER_ADDRESS value: &quot;grpc-server-svc&quot; - name: GRPC_SERVER_PORT value: &quot;8081&quot; --- apiVersion: v1 kind: Service metadata: name: grpc-client-svc spec: selector: app.kubernetes.io/name: grpc-client ports: - protocol: TCP port: 8080 type: NodePort </code></pre> <p><strong>Highlights:</strong></p> <ul> <li>created <code>grpc-server-svc</code> service as <code>ClusterIP</code> to be available only inside the cluster serving the GRPC Server.</li> <li>created <code>grpc-client-svc</code> service to <code>NodePort</code> to demonstrate receiving curl requests from outside the cluster.</li> <li>Added the <code>env</code> section: Note that I set <code>GRPC_SERVER_ADDRESS</code> to <code>grpc-server-svc</code> since we are not in docker environment anymore <code>host.docker.internal</code> is no longer an option we are now targetting the service mentioned above.</li> <li><em>I'm using <code>imagePullPolicy</code> set to <code>Never</code> only for this example, since I'm using my local docker registry.</em></li> </ul> <hr /> <p><strong>Reproduction:</strong></p> <ul> <li>After building the images:</li> </ul> <pre class="lang-sh prettyprint-override"><code>$ docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE poc_grpc-client latest 7f6d886a1612 24 minutes ago 660MB poc_grpc-server latest d46bf9481d1c 24 minutes ago 658MB </code></pre> <ul> <li>Deployed the fixed yamls as above:</li> </ul> <pre><code>$ kubectl apply -f deployment_server.yml deployment.apps/grpc-server created service/grpc-server-svc created $ kubectl apply -f deployment_client.yml deployment.apps/grpc-client created service/grpc-client-svc created $ kubectl get all NAME READY STATUS RESTARTS AGE pod/grpc-client-6ffcf6b6c8-846s5 1/1 Running 0 3s pod/grpc-server-5d7fd9cb89-dkqlb 1/1 Running 0 7s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/grpc-client-svc NodePort 10.99.58.76 &lt;none&gt; 8080:32224/TCP 3s service/grpc-server-svc ClusterIP 10.96.67.139 &lt;none&gt; 8081/TCP 7s service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3h36m NAME READY UP-TO-DATE AVAILABLE AGE 
deployment.apps/grpc-client 1/1 1 1 3s deployment.apps/grpc-server 1/1 1 1 7s NAME DESIRED CURRENT READY AGE replicaset.apps/grpc-client-6ffcf6b6c8 1 1 1 3s replicaset.apps/grpc-server-5d7fd9cb89 1 1 1 7s </code></pre> <ul> <li>Now I'll curl the client, since I'm running on minikube (1 node cluster) the IP of the node is the IP of the cluster, I'll pair the IP with the NodePort assigned.</li> </ul> <pre><code>$ kubectl cluster-info Kubernetes master is running at https://172.17.0.4 $ curl http://172.17.0.4:32224 Client is running!!! $ curl http://172.17.0.4:32224/hello?firstName=Cedric Hello Cedric </code></pre> <p>If you have any questions, let me know in the comments!</p>
Will R.O.F.
<p>I'm expecting that <code>kubectl get nodes &lt;node&gt; -o yaml</code> to show the <code>spec.providerID</code> (see reference below) once the kubelet has been provided the additional flag <code>--provider-id=provider://nodeID</code>. I've used <code>/etc/default/kubelet</code> file to add more flags to the command line when kubelet is start/restarted. (On a k8s 1.16 cluster) I see the additional flags via a <code>systemctl status kubelet --no-pager</code> call, so the file is respected.</p> <p>However, I've not seen the value get returned by <code>kubectl get node &lt;node&gt; -o yaml</code> call. I was thinking it had to be that the node was already registered, but I think kubectl re-registers when it starts up. I've seen the log line via <code>journalctl -u kubelet</code> suggest that it has gone through registration.</p> <p>How can I add a provider ID to a node manually?</p> <p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#nodespec-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#nodespec-v1-core</a></p>
lucidquiet
<p>How a <code>kubelet</code> is configured on the node itself is separate (AFAIK) from its definition in the <code>master</code> control plane, which is responsible for updating state in the central <code>etcd</code> store; so it's possible for these to fall out of sync. i.e., you need to communicate to the control place to update its records.</p> <p>In addition to Subramanian's suggestion, <code>kubectl patch node</code> would also work, and has the added benefit of being easily reproducible/scriptable compared to manually editing the YAML manifest; it also leaves a "paper trail" in your shell history should you need to refer back. Take your pick :) For example,</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl patch node my-node -p '{"spec":{"providerID":"foo"}}' node/my-node patched $ kubectl describe node my-node | grep ProviderID ProviderID: foo </code></pre> <p>Hope this helps!</p>
Jesse Stuart
<p>I am new to <code>Kubernetes</code> and using EKS cluster end-point provided by third party. I trying to create a simple ngnix deployment using following command:</p> <pre><code>kubectl create deployment nginx-depl --image=nginx </code></pre> <p>It gives me following error:</p> <pre><code>error: failed to create deployment: admission webhook &quot;validate.kyverno.svc&quot; denied the request: resource Deployment/comp-dev/nginx-depl was blocked due to the following policies edison-platform-policy-disallow-pod-without-resources: validate-resources: 'validation error: Error : Unable to install - container spec does not specify resource request. Rule validate-resources[0] failed at path /spec/template/spec/containers/0/resources/requests/. Rule validate-resources[1] failed at path /metadata/labels/AllowContainerWithoutResourcesRequests/.' edison-platform-policy-disallow-privileged-container: autogen-validate-allowPrivilegeEscalation: 'validation error: Privileged mode is not allowed. Set allowPrivilegeEscalation to false. Rule autogen-validate-allowPrivilegeEscalation[0] failed at path /spec/template/spec/containers/0/securityContext/. Rule autogen-validate-allowPrivilegeEscalation[1] failed at path /spec/template/metadata/labels/AllowPrivilegedEscalation/.' edison-platform-policy-disallow-root-user: autogen-validate-runAsNonRoot: 'validation error: Running as root user is not allowed. Set runAsNonRoot to true. Rule autogen-validate-runAsNonRoot[0] failed at path /spec/template/spec/securityContext/runAsNonRoot/. Rule autogen-validate-runAsNonRoot[1] failed at path /spec/template/spec/securityContext/runAsUser/. Rule autogen-validate-runAsNonRoot[2] failed at path /spec/template/spec/containers/0/securityContext/. Rule autogen-validate-runAsNonRoot[3] failed at path /spec/template/spec/containers/0/securityContext/. Rule autogen-validate-runAsNonRoot[4] failed at path /spec/template/metadata/labels/AllowRootUserAccess/.' edison-platform-policy-disallow-unknown-registries: autogen-validate-registries: 'validation error: Unknown image registry. Rule autogen-validate-registries failed at path /spec/template/spec/containers/0/image/' </code></pre> <p>Is public image registry is blocked in ECS? Or do the third party EKS provider has not enabled the public docker repository?</p>
sandeep.ganage
<p>The cluster is installed with <a href="https://thenewstack.io/kyverno-a-new-cncf-sandbox-project-offers-kubernetes-native-policy-management/" rel="nofollow noreferrer">Kyverno</a>. Your <code>create</code> request was rejected by this policy engine based on a policy set up by the provider. Try the following spec:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: securityContext: runAsUser: 1000 containers: - name: busybox image: docker.io/busybox:latest command: [&quot;sh&quot;,&quot;-c&quot;] args: [&quot;sleep 3600&quot;] resources: requests: cpu: 100m memory: 100Mi securityContext: allowPrivilegeEscalation: false runAsNonRoot: true </code></pre> <p>Note that running Nginx as a non-root user is not covered here.</p>
gohm'c
<p>I have deployed an application on Azure kubernetes without authentication and I have the Azure API management in front of the API.</p> <p>How do I use the Azure API management to authenticate kubernetes APIs?</p> <pre><code>&lt;validate-jwt header-name=&quot;Authorization&quot; failed-validation-httpcode=&quot;401&quot; failed-validation-error-message=&quot;Unauthorized. Access token is missing or invalid.&quot;&gt; &lt;openid-config url=&quot;https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration&quot; /&gt; &lt;audiences&gt; &lt;audience&gt;25eef6e4-c905-4a07-8eb4-0d08d5df8b3f&lt;/audience&gt; &lt;/audiences&gt; &lt;required-claims&gt; &lt;claim name=&quot;id&quot; match=&quot;all&quot;&gt; &lt;value&gt;insert claim here&lt;/value&gt; &lt;/claim&gt; &lt;/required-claims&gt; &lt;/validate-jwt&gt; </code></pre>
One Developer
<p>How are you authenticating your APIM URL?</p> <p>Here is a raw way of achieving authentication:</p> <ol> <li>Generate a JWT from Azure AD (this could be your Web UI)</li> <li>Enable OAuth2 for your APIM</li> <li>While calling APIM from your UI, the JWT will be passed</li> <li>Upon receiving the token at the APIM, create an inbound policy to validate the JWT <a href="https://learn.microsoft.com/en-us/azure/api-management/api-management-access-restriction-policies" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/api-management/api-management-access-restriction-policies</a></li> <li>Once the JWT is validated, call the backend endpoints deployed on Kubernetes.</li> <li>You may want to restrict your ingress controller to only accept traffic from the APIM (see the sketch after this list)</li> <li>Your HTTP context will contain the user information from the JWT at the API endpoint</li> <li>If you want, you can further use this info from #7 in your middleware to write your custom auth logic.</li> </ol>
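<p>For step 6, one hedged option (assuming an NGINX ingress controller and a known outbound IP for your APIM instance - both are assumptions, not details from the question) is an allow-list annotation on the Ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api-ingress            # hypothetical name
  annotations:
    # placeholder APIM public IP
    nginx.ingress.kubernetes.io/whitelist-source-range: &quot;20.50.60.70/32&quot;
spec:
  rules:
  - host: api.internal.example.com   # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api-svc          # hypothetical backend service
            port:
              number: 80
</code></pre> <p>Alternatively, an NSG rule or a private ingress reachable only from an APIM deployed in the same VNet achieves the same restriction at the network layer.</p>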
subhankars
<p>I am deploying a stateful set with Helm and the pods are complaining about volumes.</p> <p>What is the proper way of doing this with AWS EBS? Considering the Helm templates.</p> <pre><code>Warning FailedScheduling 30s (x112 over 116m) default-scheduler 0/9 nodes are available: 9 pod has unbound immediate PersistentVolumeClaims. </code></pre> <p>deployment.yaml</p> <pre><code>volumeClaimTemplates: - metadata: name: {{ .Values.storage.name }} labels: app: {{ template &quot;etcd.name&quot; . }} chart: {{ .Chart.Name }}-{{ .Chart.Version }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: storageClassName: {{ .Values.storage.class | default .Values.global.storage.class }} accessModes: - {{ .Values.storage.accessMode }} resources: requests: storage: {{ .Values.storage.size }} </code></pre> <p>values.yaml</p> <pre><code>storage: name: etcd-data mountPath: /somepath/etcd class: &quot;default&quot; size: 1Gi accessMode: ReadWriteOnce </code></pre>
Morariu
<p>Try changing the class name to the default storage class name on EKS (<code>gp2</code>):</p> <pre><code>... spec: storageClassName: {{ .Values.storage.class | default &quot;gp2&quot; | quote }} accessModes: - ... storage: ... class: &quot;gp2&quot; ... </code></pre>
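<p>To double-check which storage classes actually exist in the cluster (and which one is marked default) before templating the value, you can run something like this; the output shown is only illustrative:</p> <pre><code>$ kubectl get storageclass
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   30d
</code></pre> <p>The unbound-PVC warning goes away once <code>storageClassName</code> in the <code>volumeClaimTemplates</code> matches one of the listed classes.</p>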
gohm'c
<p>I have been trying to port over some infrastructure to K8S from a VM docker setup.</p> <p>In a traditional VM docker setup I run 2 docker containers: 1 being a proxy node service, and another utilizing the proxy container through an <code>.env</code> file via: <code>docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' proxy-container</code></p> <blockquote> <p>172.17.0.2</p> </blockquote> <p>Then within the <code>.env</code> file: <code>URL=ws://172.17.0.2:4000/</code></p> <p>This is what I am trying to setup within a cluster in K8S but failing to reference the proxy-service correctly. I have tried using the proxy-service pod name and/or the service name with no luck.</p> <p>My <code>env-configmap.yaml</code> is:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: env-config data: URL: &quot;ws://$(proxy-service):4000/&quot; </code></pre>
Matt - Block-Farms.io
<p>Containers that run in the same pod can connect to each other via <code>localhost</code>. Try <code>URL: &quot;ws://localhost:4000/&quot;</code> in your ConfigMap. Otherwise, you need to specify the service name like <code>URL: &quot;ws://proxy-service.&lt;namespace&gt;:4000&quot;</code>.</p>
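<p>If you are unsure which form resolves from inside the cluster, a quick check (the namespace below is a placeholder) is to run a throwaway pod and query DNS:</p> <pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup proxy-service.default.svc.cluster.local
</code></pre> <p>If that resolves, <code>ws://proxy-service.default:4000/</code> in the ConfigMap should work without hard-coding a pod IP the way <code>docker inspect</code> did.</p>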
gohm'c
<p>I've been reading through docs and can't find a clear answer on how to do this.</p> <p>I have an AWS EFS claim and a storage class which I have applied. I have a PV and PVC that are in a namespace and can see the storage class, which I assume is cluster wide.</p> <p>If I try to apply the same PV and PVC manifests to a different namespace I get an error that the storage class is not found.</p> <p>If I then delete these I get the following warning:</p> <pre><code>warning: deleting cluster-scoped resources, not scoped to the provided namespace persistentvolume &quot;efs-pv&quot; deleted </code></pre> <p>This is really confusing.</p> <p>How would I share this EFS storage between different namespaces? Do I need to make changes to the PV and PVC (at the moment I already have persistentVolumeReclaimPolicy: Retain), and what do I apply to which namespace? What is cluster wide and what is namespace scoped?</p>
Happy Machine
<p>This presumes you can run <code>aws efs describe-file-systems</code> to confirm your EFS is correctly configured and that you have the <code>FileSystemId</code> ready for mounting.</p> <p>You should also have installed the AWS EFS CSI driver, with all driver pods running without issues.</p> <p>You create the StorageClass only <strong>once</strong> for each type.</p> <p>You create the PersistentVolume only <strong>once</strong> for each mount point (static provisioning).</p> <p>In case of simultaneous access from <strong>different</strong> namespaces, you create one PersistentVolumeClaim in each namespace that refers to the same StorageClass.</p> <pre><code>... kind: StorageClass metadata: name: my-efs-sc &lt;--- This name MUST be referred across PVC/PV provisioner: efs.csi.aws.com &lt;--- MUST have EFS CSI driver installed ... kind: PersistentVolume spec: storageClassName: my-efs-sc &lt;--- Have you verified the name? ... accessModes: - ReadWriteMany &lt;--- MUST use this mode ... kind: PersistentVolumeClaim metadata: ... namespace: my-namespace-A &lt;--- namespace scope spec: storageClassName: my-efs-sc &lt;--- Have you verified the name? accessModes: - ReadWriteMany &lt;--- MUST use this mode ... kind: PersistentVolumeClaim metadata: ... namespace: my-namespace-B &lt;--- namespace scope spec: storageClassName: my-efs-sc &lt;--- Have you verified the name? accessModes: - ReadWriteMany &lt;--- MUST use this mode ... </code></pre> <p>Check the PVC/PV status with kubectl to ensure all are bound correctly.</p>
gohm'c
<p>Setup: Linux VM where Pod (containing 3 containers) is started. Only 1 of the containers needs the NFS mount to the remote NFS server. This &quot;app&quot; container is based Alpine linux.</p> <p>Remote NFS server is up &amp; running. If I create a separate yaml file for persistent volume with that server info - it's up &amp; available.</p> <p>In my pod yaml file I define Persistent Volume (with that remote NFS server info), Persistent Volume Claim and associate my &quot;app&quot; container's volume with that claim. Everything works as a charm if on the hosting linux VM I install the NFS library, like: <code>sudo apt install nfs-common</code>. (That's why I don't share my kubernetes yaml file. Looks like problem is not there.)</p> <p>But that's a development environment. I'm not sure how/where those containers would be used in production. For example they would be used in AWS EKS. I hoped to install something like <code>apk add --no-cache nfs-utils</code> in the &quot;app&quot; container's Dockerfile. I.e. on container level, not on a pod level - could it work?</p> <p>So far getting the pod initialization error:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 35s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. Warning FailedScheduling 22s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. Normal Scheduled 20s default-scheduler Successfully assigned default/delphix-masking-0 to masking-kubernetes Warning FailedMount 4s (x6 over 20s) kubelet MountVolume.SetUp failed for volume &quot;nfs-pv&quot; : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs -o hard,nfsvers=4.1 maxTestNfs1.dlpxdc.co:/var/tmp/masking-mount /var/snap/microk8s/common/var/lib/kubelet/pods/2e6b7aeb-5d0d-4002-abba-88de032c12dc/volumes/kubernetes.io~nfs/nfs-pv Output: mount: /var/snap/microk8s/common/var/lib/kubelet/pods/2e6b7aeb-5d0d-4002-abba-88de032c12dc/volumes/kubernetes.io~nfs/nfs-pv: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.&lt;type&gt; helper program. </code></pre> <p>And the process is stuck in that step forever. Looks like it happens before even trying to initialize containers. So I wonder if approach of enabling NFS-client on the container's level is valid. Thanks in ahead for any insights!</p>
Max
<p><code>I hoped to install something like apk add --no-cache nfs-utils in the &quot;app&quot; container's Dockerfile. I.e. on container level, not on a pod level - could it work?</code></p> <p>Yes, this could work. This is normally what you would do if you have no control over the node (e.g. you can't be sure the host is ready for NFS calls). You need to ensure your pod can reach the NFS server and that all required ports in between are open. You also need to ensure the required NFS programs (e.g. rpcbind) are <a href="https://github.com/walkerk1980/docker-nfs-client/blob/master/entry.sh" rel="nofollow noreferrer">started</a> before your own program in the container; a sketch follows below.</p> <p><code>...For example they would be used in AWS EKS.</code></p> <p>The EKS optimized AMI comes with NFS support, so you can leverage K8s PV/PVC support using this image for your worker nodes; there's no need to initialize NFS client support in your container.</p>
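<p>A minimal sketch of the in-container approach on Alpine (the server address, export path and mount options are placeholders; mounting from inside a container generally needs extra privileges, which may not be acceptable in every environment):</p> <pre><code># Dockerfile fragment
RUN apk add --no-cache nfs-utils

# entrypoint.sh - start the NFS helpers, mount, then exec the real app
#!/bin/sh
set -e
rpcbind                 # needed for NFSv3; NFSv4 can usually skip it
mount -t nfs -o nfsvers=4.1 my-nfs-server:/export /mnt/nfs   # placeholder server/export
exec &quot;$@&quot;
</code></pre> <p>The pod then needs something like <code>securityContext: {privileged: true}</code> (or at least the <code>SYS_ADMIN</code> capability) for the mount syscall, which is why the PV/PVC route is usually preferred when the node image already supports NFS.</p>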
gohm'c
<p>We are facing strange issue with EKS Fargate Pods. We want to push logs to cloudwatch with sidecar fluent-bit container and for that we are mounting the separately created <code>/logs/boot</code> and <code>/logs/access</code> folders on both the containers with <code>emptyDir: {}</code> type. But somehow the <code>access</code> folder is getting deleted. When we tested this setup in local docker it produced desired results and things were working fine but not when deployed in the EKS fargate. Below is our manifest files</p> <p><strong>Dockerfile</strong></p> <pre><code>FROM anapsix/alpine-java:8u201b09_server-jre_nashorn ARG LOG_DIR=/logs # Install base packages RUN apk update RUN apk upgrade # RUN apk add ca-certificates &amp;&amp; update-ca-certificates # Dynamically set the JAVA_HOME path RUN export JAVA_HOME=&quot;$(dirname $(dirname $(readlink -f $(which java))))&quot; &amp;&amp; echo $JAVA_HOME # Add Curl RUN apk --no-cache add curl RUN mkdir -p $LOG_DIR/boot $LOG_DIR/access RUN chmod -R 0777 $LOG_DIR/* # Add metadata to the image to describe which port the container is listening on at runtime. # Change TimeZone RUN apk add --update tzdata ENV TZ=&quot;Asia/Kolkata&quot; # Clean APK cache RUN rm -rf /var/cache/apk/* # Setting JAVA HOME ENV JAVA_HOME=/opt/jdk # Copy all files and folders COPY . . RUN rm -rf /opt/jdk/jre/lib/security/cacerts COPY cacerts /opt/jdk/jre/lib/security/cacerts COPY standalone.xml /jboss-eap-6.4-integration/standalone/configuration/ # Set the working directory. WORKDIR /jboss-eap-6.4-integration/bin EXPOSE 8177 CMD [&quot;./erctl&quot;] </code></pre> <p><strong>Deployment</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: vinintegrator namespace: eretail labels: app: vinintegrator pod: fargate spec: selector: matchLabels: app: vinintegrator pod: fargate replicas: 2 template: metadata: labels: app: vinintegrator pod: fargate spec: securityContext: fsGroup: 0 serviceAccount: eretail containers: - name: vinintegrator imagePullPolicy: IfNotPresent image: 653580443710.dkr.ecr.ap-southeast-1.amazonaws.com/vinintegrator-service:latest resources: limits: memory: &quot;7629Mi&quot; cpu: &quot;1.5&quot; requests: memory: &quot;5435Mi&quot; cpu: &quot;750m&quot; ports: - containerPort: 8177 protocol: TCP # securityContext: # runAsUser: 506 # runAsGroup: 506 volumeMounts: - mountPath: /jboss-eap-6.4-integration/bin name: bin - mountPath: /logs name: logs - name: fluent-bit image: 657281243710.dkr.ecr.ap-southeast-1.amazonaws.com/fluent-bit:latest imagePullPolicy: IfNotPresent env: - name: HOST_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: limits: memory: 200Mi requests: cpu: 200m memory: 100Mi volumeMounts: - name: fluent-bit-config mountPath: /fluent-bit/etc/ - name: logs mountPath: /logs readOnly: true volumes: - name: fluent-bit-config configMap: name: fluent-bit-config - name: logs emptyDir: {} - name: bin persistentVolumeClaim: claimName: vinintegrator-pvc </code></pre> <p>Below is the /logs folder ownership and permission. <strong>Please notice the 's' in <code>drwxrwsrwx</code></strong></p> <pre><code>drwxrwsrwx 3 root root 4096 Oct 1 11:50 logs </code></pre> <p>Below is the content inside logs folder. 
<strong>Please notice the access folder is not created or deleted.</strong></p> <pre><code>/logs # ls -lrt total 4 drwxr-sr-x 2 root root 4096 Oct 1 11:50 boot /logs # </code></pre> <p>Below is the configmap of Fluent-Bit</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: fluent-bit-config namespace: eretail labels: k8s-app: fluent-bit data: fluent-bit.conf: | [SERVICE] Flush 5 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE application-log.conf application-log.conf: | [INPUT] Name tail Path /logs/boot/*.log Tag boot [INPUT] Name tail Path /logs/access/*.log Tag access [OUTPUT] Name cloudwatch_logs Match *boot* region ap-southeast-1 log_group_name eks-fluent-bit log_stream_prefix boot-log- auto_create_group On [OUTPUT] Name cloudwatch_logs Match *access* region ap-southeast-1 log_group_name eks-fluent-bit log_stream_prefix access-log- auto_create_group On parsers.conf: | [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%LZ </code></pre> <p>Below is error log of Fluent-bit container</p> <pre><code>AWS for Fluent Bit Container Image Version 2.14.0 Fluent Bit v1.7.4 * Copyright (C) 2019-2021 The Fluent Bit Authors * Copyright (C) 2015-2018 Treasure Data * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd * https://fluentbit.io [2021/10/01 06:20:33] [ info] [engine] started (pid=1) [2021/10/01 06:20:33] [ info] [storage] version=1.1.1, initializing... [2021/10/01 06:20:33] [ info] [storage] in-memory [2021/10/01 06:20:33] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128 [2021/10/01 06:20:33] [error] [input:tail:tail.1] read error, check permissions: /logs/access/*.log [2021/10/01 06:20:33] [ warn] [input:tail:tail.1] error scanning path: /logs/access/*.log [2021/10/01 06:20:38] [error] [net] connection #33 timeout after 5 seconds to: 169.254.169.254:80 [2021/10/01 06:20:38] [error] [net] socket #33 could not connect to 169.254.169.254:80 </code></pre>
Nitin G
<p>I suggest removing the following from your Dockerfile:</p> <pre><code>RUN mkdir -p $LOG_DIR/boot $LOG_DIR/access RUN chmod -R 0777 $LOG_DIR/* </code></pre> <p>Use the following method to set up the log directories and permissions:</p> <pre><code>apiVersion: v1 kind: Pod # Deployment metadata: name: busy labels: app: busy spec: volumes: - name: logs # Shared folder with ephemeral storage emptyDir: {} initContainers: # Setup your log directory here - name: setup image: busybox command: [&quot;bin/ash&quot;, &quot;-c&quot;] args: - &gt; mkdir -p /logs/boot /logs/access; chmod -R 777 /logs volumeMounts: - name: logs mountPath: /logs containers: - name: app # Run your application and log to the directories image: busybox command: [&quot;bin/ash&quot;,&quot;-c&quot;] args: - &gt; while :; do echo &quot;$(date): $(uname -r)&quot; | tee -a /logs/boot/boot.log /logs/access/access.log; sleep 1; done volumeMounts: - name: logs mountPath: /logs - name: logger # Any logger that you like image: busybox command: [&quot;bin/ash&quot;,&quot;-c&quot;] args: # tail the app logs, forward to CW etc... - &gt; sleep 5; tail -f /logs/boot/boot.log /logs/access/access.log volumeMounts: - name: logs mountPath: /logs </code></pre> <p>The snippet runs on Fargate as well; run <code>kubectl logs -f busy -c logger</code> to see the tailing. In the real world, the &quot;app&quot; is your Java app and &quot;logger&quot; is whatever log agent you prefer. Note that Fargate has <a href="https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/" rel="nofollow noreferrer">native logging capability</a> using AWS Fluent Bit, so you do not need to run Fluent Bit as a sidecar.</p>
gohm'c
<p>I have deployed a Node.js application on K8s version 1.16. I notice that post deployment the backend pods are not registering the endpoints. The backend pods thus keep restarting and go into CrashLoopBackOff.</p> <pre><code> kubectl describe svc Name: backend-xx-backend-svc Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: chart=backend-xx-backend,tier=backend Type: ClusterIP IP: 192.168.246.12 Port: &lt;unset&gt; 80/TCP TargetPort: 8800/TCP Endpoints: Session Affinity: None Events: &lt;none&gt; </code></pre> <p>Any suggestions as to why the backend pod endpoints are blank?</p>
ptilloo
<p>As per the result of the describe command mentioned in the comment, it looks like the readiness probe is failing. Unless a pod is in the Ready state, k8s won't forward traffic to that pod; that may be why the endpoints field in the Service object is blank, as none of the pods is Ready. Check why the readiness probe, i.e. GET 10.39.67.76:8800/api/health, is failing. If your app takes time to start, increase initialDelaySeconds or failureThreshold in the readiness probe configuration.</p>
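<p>For reference, a hedged example of that tuning in the pod spec (the path and port come from the probe shown in the describe output; the timing numbers are placeholders to adjust):</p> <pre><code>readinessProbe:
  httpGet:
    path: /api/health
    port: 8800
  initialDelaySeconds: 30   # give the Node.js app time to boot
  periodSeconds: 10
  failureThreshold: 6       # tolerate ~60s of failures before marking NotReady
</code></pre> <p>Also confirm the app really listens on 8800 inside the container, since the Service's <code>targetPort</code> is 8800; a port mismatch produces the same empty-endpoints symptom once the probe fails.</p>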
Rushikesh
<p>I have setup a kubernetes cluster with kubeadm with a 3 node vagrant setup. I have installed ArgoCD and when I use vagrant ssh into the kubemaster vm, I can run:</p> <pre><code>kubectl port-forward svc/argocd-server -n argocd 8080:443 </code></pre> <p>And I can curl it in the ssh session successfully with:</p> <pre><code>curl -k https://localhost:8080 </code></pre> <p>I have a static ip for the nodes with the master being 192.168.56.2, and a port forward set for that vm</p> <pre><code>config.vm.define &quot;kubemaster&quot; do |node| ... node.vm.network :private_network, ip: 192.168.56.2 node.vm.network &quot;forwarded_port&quot;, guest: 8080, host: 8080 ... end </code></pre> <p>On the host I try to access ArgoCD UI in browser with:</p> <pre><code>https://localhost:8080 https://192.168.56.2:8080 </code></pre> <p>And I get connection refused</p> <p>What am I missing?</p> <p><strong>Edit:</strong></p> <p>The nodes are running ubuntu 22 and ufw is not enabled. Im running on a Mac</p>
Dreamystify
<p>It turns out I needed to add the address flag to the port forwarding command</p> <pre><code>// from kubectl port-forward svc/argocd-server -n argocd 8080:443 // to kubectl port-forward --address 0.0.0.0 svc/argocd-server -n argocd 8080:443 </code></pre>
Dreamystify
<p>I am deploying kubernete cluster on AWS EKS and using EBS as persist volume. Below is the spec for a StatefulSet pods who are using the volume. It works fine after deployment. But when I delete the pods by running <code>kubectl delete -f spec.yml</code>, the <code>pvc</code> are not deleted. Their status is still <code>Bound</code>. I think it makes sense because deleting the volume will cause loosing data.</p> <p>When I redeploy the pods <code>kubectl apply -f spec.yml</code>, the first pod is running successfully but the second one failed. <code>kubectl describe pod</code> command gives me this error: <code>0/1 nodes are available: 1 node(s) had volume node affinity conflict.</code>.</p> <p>It works fine if I delete all <code>pvc</code>. What is the correct way to redeploy all the pods without deleting <code>pvc</code>?</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: es namespace: default spec: serviceName: es-entrypoint replicas: 3 selector: matchLabels: name: es volumeClaimTemplates: - metadata: name: ebs-claim spec: accessModes: - ReadWriteOnce storageClassName: ebs-sc resources: requests: storage: 1024Gi template: ... </code></pre>
Joey Yi Zhao
<p>This is because the pod got scheduled on a worker node that resides in a different availability zone than the previously created PV. It can't really be solved here since you didn't post the StorageClass (ebs-sc) spec or a description of the PV, but you can see <a href="https://stackoverflow.com/a/55514852/14704799">here</a> an answer that explains the same issue. A couple of things to check are sketched below.</p>
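<p>Two things usually worth checking here (the StorageClass name <code>ebs-sc</code> comes from the question; the PV name is a placeholder and the provisioner assumes the EBS CSI driver):</p> <pre><code># See which availability zone the existing PV is pinned to
kubectl describe pv &lt;pv-name&gt;   # look at the &quot;Node Affinity&quot; section

# A StorageClass that delays binding until the pod is scheduled,
# so new volumes are created in the same AZ as the chosen node
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
</code></pre> <p>For PVCs that are already bound, the zone is fixed, so the pod must land on a node in that zone (make sure the node group spans it); <code>WaitForFirstConsumer</code> only helps volumes created after the change.</p>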
gohm'c
<p>I am trying to create a local cluster on a centos7 machine using eks anywhere. However I am getting below error. Please let me know if I am missing anything? Here is the link I am following to create the cluster. I have also attached the cluster create yaml file</p> <p>Link: <a href="https://aws.amazon.com/blogs/aws/amazon-eks-anywhere-now-generally-available-to-create-and-manage-kubernetes-clusters-on-premises/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/aws/amazon-eks-anywhere-now-generally-available-to-create-and-manage-kubernetes-clusters-on-premises/</a></p> <p>Error: Error: failed to create cluster: error waiting for external etcd for workload cluster to be ready: error executing wait: error: timed out waiting for the condition on clusters/dev-cluster</p> <p><a href="https://i.stack.imgur.com/gUbVt.png" rel="nofollow noreferrer">clustercreate yaml file</a></p> <hr />
bomsabado
<p>The default spec will look for external etcd. To test it locally remove the <code>externalEtcdConfiguration</code>:</p> <pre><code>apiVersion: anywhere.eks.amazonaws.com/v1alpha1 kind: Cluster metadata: name: dev-cluster spec: clusterNetwork: cni: cilium pods: cidrBlocks: - 192.168.0.0/16 services: cidrBlocks: - 10.96.0.0/12 controlPlaneConfiguration: count: 1 datacenterRef: kind: DockerDatacenterConfig name: dev-cluster kubernetesVersion: &quot;1.21&quot; workerNodeGroupConfigurations: - count: 1 --- apiVersion: anywhere.eks.amazonaws.com/v1alpha1 kind: DockerDatacenterConfig metadata: name: dev-cluster spec: {} --- </code></pre>
gohm'c
<p>I have a k8s cluster with master (controll plane) @ 192.168.1.66 and only one worker node @ 192.18.1.67 All node have no public IP address.</p> <p>I'm trying to deploy ingress nginx controller per <a href="https://devopscube.com/setup-ingress-kubernetes-nginx-controller/" rel="nofollow noreferrer">https://devopscube.com/setup-ingress-kubernetes-nginx-controller/</a></p> <p>I just arrieved at step : 'Create Ingress Controller &amp; Admission Controller Services'</p> <p>But the 'ingress-nginx-controller' LoadBalancer got pending External IP.</p> <pre><code>bino@corobalap  ~/k8nan/ingresnginx/nginx-ingress-controller/manifests   main  kubectl --namespace ingress-nginx get services -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR ingress-nginx-controller LoadBalancer 10.100.42.100 &lt;pending&gt; 80:30482/TCP,443:31697/TCP 6m32s app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx ingress-nginx-controller-admission ClusterIP 10.106.242.13 &lt;none&gt; 443/TCP 6m32s app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx </code></pre> <p>Kindly please what I need to read or do.</p> <p>Sincerely,</p> <p>-bino-</p> <pre><code>bino@corobalap  ~/k8nan  kubectl describe service ingress-nginx-controller --namespace ingress-nginx Name: ingress-nginx-controller Namespace: ingress-nginx Labels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/name=ingress-nginx Annotations: &lt;none&gt; Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.100.42.100 IPs: 10.100.42.100 Port: http 80/TCP TargetPort: http/TCP NodePort: http 30482/TCP Endpoints: 10.244.1.11:80 Port: https 443/TCP TargetPort: https/TCP NodePort: https 31697/TCP Endpoints: 10.244.1.11:443 Session Affinity: None External Traffic Policy: Local Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Type 54m service-controller LoadBalancer -&gt; NodePort </code></pre>
Bino Oetomo
<p><code>LoadBalancer</code> refers to a cloud load balancer, e.g. ELB on AWS or Cloud Load Balancing on GCP. Since your nodes have no public IP and there is no cloud controller to provision a load balancer, the external IP stays pending. If you are running your own cluster on your machine, you can change the <code>type: LoadBalancer</code> to <code>type: NodePort</code> and access your ingress controller via &lt;node ip&gt;:&lt;node port&gt;.</p>
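<p>As a quick sketch, you can either switch the Service type or install a bare-metal load-balancer implementation such as MetalLB so that <code>type: LoadBalancer</code> gets an address from a local pool (the address range is a placeholder for your LAN):</p> <pre><code># Option 1: switch the service to NodePort and use the node IP directly
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;}}'
# then browse to http://&lt;node-ip&gt;:&lt;nodePort&gt;

# Option 2: install MetalLB and configure an address pool such as
# 192.168.1.240-192.168.1.250; the EXTERNAL-IP is then assigned automatically
</code></pre> <p>Note your describe output already shows the Service was switched to NodePort, so with the ports listed there (30482/31697) the controller should be reachable on any node IP, e.g. http://192.168.1.66:30482.</p>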
gohm'c
<p>I deployed K8S cluster on EKS nodegroup and deployed auto scalar based on this doc <a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html</a></p> <p>The node size is <code>t3.large</code> which is 2 cpu and 8G memory and the size is:</p> <pre><code>desired_size = 1 max_size = 3 min_size = 1 </code></pre> <p>when I deploy a Elasticsearch pod on this cluster:</p> <pre><code>containers: - name: es image: elasticsearch:7.10.1 resources: requests: cpu: 2 memory: 8Gi </code></pre> <p>got this error:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 57s (x11 over 11m) default-scheduler 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory. Normal NotTriggerScaleUp 49s (x54 over 10m) cluster-autoscaler pod didn't trigger scale-up: 1 Insufficient cpu, 1 Insufficient memory </code></pre> <p>I wonder why the scaler is not triggered.</p> <p>One thing I can think of is the pod requested resource meet the node's maximum capacity. Is this the reason it can't scale up? Does the scale work to combine multiple small nodes resources to big one? like I spin up 3 small nodes which can be consumed by one pod?</p>
Joey Yi Zhao
<p>The instance type's nominal size is not the actual allocatable capacity - some CPU and memory are reserved for the kubelet and system daemons, so a 2 CPU / 8Gi request cannot fit on a t3.large. Check with:</p> <p><code>kubectl describe node &lt;name&gt; | grep Allocatable -A 7</code></p> <p>Update: You can add an additional node group with an ASG that uses a larger instance type so the autoscaler can select the right size. Ensure that your ASGs are tagged so that the autoscaler can automatically discover them:</p> <pre><code>k8s.io/cluster-autoscaler/enabled k8s.io/cluster-autoscaler/&lt;cluster-name&gt; </code></pre>
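<p>A hedged sketch of such an additional node group with eksctl (the cluster name, region and instance type are placeholders; the important parts are the larger instance size and the autoscaler discovery tags shown above):</p> <pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
nodeGroups:
  - name: big-workers
    instanceType: t3.xlarge   # leaves headroom above the pod's 2 CPU / 8Gi request
    minSize: 1
    maxSize: 3
    desiredCapacity: 1
    tags:
      k8s.io/cluster-autoscaler/enabled: &quot;true&quot;
      k8s.io/cluster-autoscaler/my-cluster: &quot;owned&quot;
</code></pre> <p>Keep in mind the autoscaler only scales up a group whose single node can fit the pending pod - it does not combine the resources of several small nodes to satisfy one pod's request, which answers the last part of the question.</p>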
gohm'c
<p>I have all sorts of problems with Kubernetes/helm, but I'm really new to it and so I'm not sure what I'm doing at all, despite spending a day trying to work it out.</p> <p>I have a pod that's in a CrashLoopBackOff situation as I entered an incorrect port number in the Dockerfile. When I do a <code>kubectl -n dev get pods</code>, I can see it in the crash loop. I tried to kill it with <code>helm delete --purge emails</code> but I get the error <code>Error: unknown flag: --purge</code>. I tried to edit the chart with <code>kubectl edit pod emails -n dev</code>, but I get an error saying that the field cannot be changed.</p> <p>But I can't delete the pod, so I'm not quite sure where to go from here. I've tried without the --purge flag and I get the error <code>Error: uninstall: Release not loaded: emails: release: not found</code>. I get the same if I try <code>helm uninstall emails</code> or pretty much anything.</p> <p>To get to the crux of the matter, I believe it's because there's been an upgrade to the helm client to version v3.1.0 but the pods were created with v2.11.0. But I don't know how to roll back the client to this version. I've downloaded it via <code>curl -L https://git.io/get_helm.sh | bash -s -- --version v2.11.0</code> but I can't run <code>helm init</code> and so I'm still on v3.1.0</p> <p>If I run <code>helm list</code>, I get an empty list. I have 16 running pods I can see through <code>kubectl -n dev get pods</code> but I don't seem to be able to do anything to any of them.</p> <p>Is this likely to be because my helm client is the wrong version and, if so, how do I roll it back?</p> <p>Thanks for any suggestions.</p>
EricP
<p>EricZ's answer hit the main points, but just to provide some context and recommend a couple of resources — first off, <strong>you can find the binaries for the current v2 release <a href="https://github.com/helm/helm/releases/tag/v2.16.1" rel="nofollow noreferrer">here</a></strong>. Just drop that in your path and everything should be as you left it with all your releases.</p> <p>As mentioned, the problem is you're trying to query Helm v2 releases with a v3 client<sup>1</sup>. However, Helm v3 was <em>designed</em> to have a separate release "store", allowing a rolling migration of your workloads; this is why the v3 client doesn't "see" v2 releases, and vice versa (i.e. you could, for example, "convert" a v2 release to v3 in a test environment while the deployed one keeps running, verify that everything looks good, and then rollover traffic to the new release and remove the old one). While you're migrating, you'll probably want both versions in your <code>PATH</code> — I just <code>alias</code>ed the v2 client to <code>helm2</code>, so e.g., <code>helm2 list</code> would show me all my v2 releases, and <code>helm list</code> the new migrated releases.</p> <p>That being said — there's nothing preventing you from continuing to use Helm v2 as you have been. If you've already got a working cluster running with v2, it may be worth just sticking to that while you get more familiar with the Kubernetes fundamentals. I was at a meetup with one of the core maintainers of Helm last week, and it sounds like Helm v2 will still be supported for the next year or so; so you've got time. (In fact, they <em>recommend</em> taking the time to test your migrations on a development cluster for production-critical applications.)</p> <hr> <p>When you're ready to migrate... <strong>I'd highly recommend checking out the <a href="https://github.com/helm/helm-2to3" rel="nofollow noreferrer">Helm <code>2to3</code> plugin</a></strong> (an official plugin maintained by the core Helm team), which was designed to automate migrating v2 releases to v3 in an easy CLI interface. YMMV, but it's worked great for me. In short:</p> <pre class="lang-sh prettyprint-override"><code>$ helm plugin install https://github.com/helm/helm-2to3 # Note: the following commands can be also be run with the `--dry-run` # flag to preview their effects. $ helm 2to3 move config [...] [Move Config/confirm] Are you sure you want to move the v2 configuration? [y/N]: y 2020/02/18 23:02:08 Helm v2 configuration will be moved to Helm v3 configuration. [...] $ helm 2to3 convert some-helm2-release 2020/02/19 00:30:35 Release "some-helm2-release" will be converted from Helm v2 to Helm v3. 2020/02/19 00:30:35 [Helm 3] Release "some-helm2-release" will be created. [...] </code></pre> <p>Hope this helps!</p> <hr> <p><sup>1</sup> Check out the documentation <a href="https://helm.sh/docs/faq/#changes-since-helm-2" rel="nofollow noreferrer">here</a> on "changes since Helm 2" for the details. Worth noting in your case:</p> <ul> <li>Releases are now scoped to the namespace they're deployed in; to see all releases, use <code>helm list -A</code>.</li> <li>No need to <code>helm init</code> in Helm v3 — since the v3 client doesn't rely on <code>tiller</code> anymore.</li> <li>As you noticed — <code>helm delete --purge</code> is now <code>helm uninstall</code> in v3; if you want the v2 functionality of persisting a "deleted" release, use the <code>--keep-history</code> flag.</li> </ul>
Jesse Stuart
<p>I am using Istio 1.8.0 with on-prem k8s v1.19..We have several microservices running where I am using <code>STRICT</code> mode for peerauthentication. And I can verify that if I use <code>PERMISSIVE</code> mode I did not receive any 503 errors.</p> <p><a href="https://i.stack.imgur.com/KbaqG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KbaqG.png" alt="Network console" /></a></p> <p>I really get stuck to find any solution cause I do not want to use <code>PERMISSIVE</code> mode as recommended.</p> <p>Here is the log for istio ingressgateway.</p> <pre><code>$kubectl logs -f istio-ingressgateway-75496c97df-44g6l -n istio-system [2021-01-16T07:28:51.852Z] &quot;GET /config HTTP/1.1&quot; 503 URX &quot;-&quot; 0 95 57 57 &quot;95.0.145.40,10.6.0.21&quot; &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36&quot; &quot;foo-bar.example.com&quot; &quot;10.6.25.34:3000&quot; outbound|80||oneapihub-ui-dev.hub-dev.svc.cluster.local 10.6.5.216:46364 10.6.5.216:8080 10.6.0.21:14387 - - </code></pre> <p>Here is the log for my ui microservice.. I did not catch any 503 errors for rest of services</p> <pre><code>$ kubectl get ns --show-labels NAME STATUS AGE LABELS hub-dev Active 2d19h istio-injection=enabled istio-system Active 3d5h istio-injection=disabled $ kubectl get pods -n hub-dev NAME READY STATUS RESTARTS AGE oneapihub-api-dev-79dff67cdb-hx754 3/3 Running 0 15h oneapihub-auth-dev-76cfcb6cb4-74ljq 3/3 Running 0 15h oneapihub-backend-dev-d76799bcd-bmwjn 2/2 Running 0 15h oneapihub-cronjob-dev-6879dbf9b8-wvpnp 3/3 Running 0 15h oneapihub-mp-dev-864794d446-cfqj7 3/3 Running 0 15h oneapihub-ui-dev-67d7bb6779-8z4xt 2/2 Running 0 14h redis-hub-master-0 2/2 Running 0 15h $ kubectl logs -f oneapihub-ui-dev-67d7bb6779-8z4xt -n hub-dev -c istio-proxy [2021-01-15T14:17:24.698Z] &quot;GET /config HTTP/1.1&quot; 503 URX &quot;-&quot; 0 95 65 64 &quot;95.0.145.40,10.6.0.19&quot; &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36&quot; &quot;foo-bar.example.com&quot; &quot;10.6.25.241:3000&quot; outbound|80||oneapihub-ui-dev.hub-dev.svc.cluster.local 10.6.5.216:37584 10.6.5.216:8080 10.6.0.19:31138 - - [2021-01-15T14:17:24.817Z] </code></pre> <p>Here is peerauthentication</p> <pre><code>$ kubectl get peerauthentication -n istio-system -o yaml apiVersion: v1 items: - apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;security.istio.io/v1beta1&quot;,&quot;kind&quot;:&quot;PeerAuthentication&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;default&quot;,&quot;namespace&quot;:&quot;istio-system&quot;},&quot;spec&quot;:{&quot;mtls&quot;:{&quot;mode&quot;:&quot;STRICT&quot;}}} name: default namespace: istio-system spec: mtls: mode: STRICT </code></pre> <p>Here is the service.yaml of my frontend app.</p> <pre><code>capel0068340585:~ semural$ kubectl get svc -n hub-dev NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE oneapihub-ui-dev ClusterIP 10.254.47.95 &lt;none&gt; 80/TCP 48m # Source: oneapihub-ui/templates/service.yaml apiVersion: v1 kind: Service metadata: name: RELEASE-NAME-oneapihub-ui labels: app.kubernetes.io/name: oneapihub-ui app.kubernetes.io/instance: RELEASE-NAME app.kubernetes.io/managed-by: Helm helm.sh/chart: oneapihub-ui-0.1.0 spec: type: ClusterIP ports: - port: 80 targetPort: 3000 
protocol: TCP name: http </code></pre> <p>Here is the Gateway and VS that I created in same namepace where microservices are running.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: hub-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - foo-bar.example.com apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: hub spec: hosts: - foo-bar.example.com gateways: - hub-gateway http: - route: - destination: host: oneapihub-ui-dev.hub-dev.svc.cluster.local port: number: 80 </code></pre> <p>Here is the proxy config for ingressgateway</p> <pre><code>$ istioctl proxy-config route istio-ingressgateway-75496c97df-44g6l -n istio-system -o json [ { &quot;name&quot;: &quot;http.80&quot;, &quot;virtualHosts&quot;: [ { &quot;name&quot;: &quot;foo-baexample.com:80&quot;, &quot;domains&quot;: [ &quot;foo-bar.example.com&quot;, &quot;foo-bar.example.com:*&quot; ], &quot;routes&quot;: [ { &quot;match&quot;: { &quot;prefix&quot;: &quot;/&quot; }, &quot;route&quot;: { &quot;cluster&quot;: &quot;outbound|80||oneapihub-ui-dev.hub-dev.svc.cluster.local&quot;, &quot;timeout&quot;: &quot;0s&quot;, &quot;retryPolicy&quot;: { &quot;retryOn&quot;: &quot;connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes&quot;, &quot;numRetries&quot;: 2, &quot;retryHostPredicate&quot;: [ { &quot;name&quot;: &quot;envoy.retry_host_predicates.previous_hosts&quot; } ], </code></pre> <p>Here are the outputs of istioctl describe pods.</p> <pre><code> $ istioctl x describe pod oneapihub-api-dev-78fbccf48c-5hb4c -n hub-dev Pod: oneapihub-api-dev-78fbccf48c-5hb4c Pod Ports: 50004 (oneapihub-api), 15090 (istio-proxy) -------------------- Service: oneapihub-api-dev Port: http 80/HTTP targets pod port 50004 $ istioctl x describe pod oneapihub-auth-dev-7f8998cd69-gzmnm -n hub-dev Pod: oneapihub-auth-dev-7f8998cd69-gzmnm Pod Ports: 50002 (oneapihub-auth), 15090 (istio-proxy) -------------------- Service: oneapihub-auth-dev Port: http 80/HTTP targets pod port 5000 $ istioctl x describe pod oneapihub-backend-dev-849b4bcd5d-fcm4l -n hub-dev Pod: oneapihub-backend-dev-849b4bcd5d-fcm4l Pod Ports: 50001 (oneapihub-backend), 15090 (istio-proxy) -------------------- Service: oneapihub-backend-dev Port: http 80/HTTP targets pod port 50001 $ istioctl x describe pod oneapihub-cronjob-dev-58b64d9c68-lv5bk -n hub-dev Pod: oneapihub-cronjob-dev-58b64d9c68-lv5bk Pod Ports: 50005 (oneapihub-cronjob), 15090 (istio-proxy) -------------------- Service: oneapihub-cronjob-dev Port: http 80/HTTP targets pod port 50005 $ istioctl x describe pod oneapihub-mp-dev-74fd6ffc9f-65gh5 -n hub-dev Pod: oneapihub-mp-dev-74fd6ffc9f-65gh5 Pod Ports: 50003 (oneapihub-mp), 15090 (istio-proxy) -------------------- Service: oneapihub-mp-dev Port: http 80/HTTP targets pod port 50003 $ istioctl x describe pod oneapihub-ui-dev-7fd56f747c-nr5fk -n hub-dev Pod: oneapihub-ui-dev-7fd56f747c-nr5fk Pod Ports: 3000 (oneapihub-ui), 15090 (istio-proxy) -------------------- Service: oneapihub-ui-dev Port: http 80/HTTP targets pod port 3000 Exposed on Ingress Gateway http://53.6.48.168 VirtualService: hub 1 HTTP route(s) $ istioctl x describe pod redis-hub-master-0 -n hub-dev Pod: redis-hub-master-0 Pod Ports: 6379 (redis-hub), 15090 (istio-proxy) Suggestion: add 'version' label to pod for Istio telemetry. 
-------------------- Service: redis-hub-headless Port: redis 6379/Redis targets pod port 6379 -------------------- Service: redis-hub-master Port: redis 6379/Redis targets pod port 6379 </code></pre> <p>Here is my destination rule</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: backend-destination-rule namespace: hub-dev spec: host: oneapihub-backend-dev.hub-dev.svc.cluster.local trafficPolicy: tls: mode: ISTIO_MUTUAL sni: oneapihub-backend-dev.hub-dev.svc.cluster.local </code></pre>
semural
<p>The solution that worked for me was to add a <code>rewrite: authority</code> to the virtual service:</p> <pre><code> apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: hub spec: hosts: - foo-example.com gateways: - hub-gateway http: - route: - destination: host: oneapihub-ui-dev.hub-dev.svc.cluster.local port: number: 80 rewrite: authority: oneapihub-backend-dev.hub-dev.svc.cluster.local </code></pre>
semural
<p>Hi there I am trying to do a lab of kubernetes but I am stuck in a step where I need to deploy a yaml file.</p> <p>&quot;5. Create a job that creates a pod, and issues the etcdctl snapshot save command to back up the cluster:&quot;</p> <p>I think my yaml file has some errors with the spaces (I am new with yaml files) I have checked documentation but I can not find the mistake.</p> <p>This is the content of the file:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: backup namespace: management spec: template: spec: containers: # Use etcdctl snapshot save to create a snapshot in the /snapshot directory - command: - /bin/sh args: - -ec - etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key snapshot save /snapshots/backup.db # The same image used by the etcd pod image: k8s.gcr.io/etcd-amd64:3.1.12 name: etcdctl env: # Set the etcdctl API version to 3 (to match the version of etcd installed by kubeadm) - name: ETCDCTL_API value: '3' volumeMounts: - mountPath: /etc/kubernetes/pki/etcd name: etcd-certs readOnly: true - mountPath: /snapshots name: snapshots # Use the host network where the etcd port is accessible (etcd pod uses hostnetwork) # This allows the etcdctl to connect to etcd that is listening on the host network hostNetwork: true affinity: # Use node affinity to schedule the pod on the master (where the etcd pod is) nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/master operator: Exists restartPolicy: OnFailure tolerations: # tolerate the master's NoSchedule taint to allow scheduling on the master - effect: NoSchedule operator: Exists volumes: # Volume storing the etcd PKI keys and certificates - hostPath: path: /etc/kubernetes/pki/etcd type: DirectoryOrCreate name: etcd-certs # A volume to store the backup snapshot - hostPath: path: /snapshots type: DirectoryOrCreate name: snapshots </code></pre> <p>this is the error I am getting:</p> <pre><code>johsttin@umasternode:~$ kubectl create -f snapshot.yaml error: error validating &quot;snapshot.yaml&quot;: error validating data: [ValidationError(Job.spec.template.spec.volumes[0]): unknown field &quot;path&quot; in io.k8s.api.core.v1.Volume, ValidationError(Job.spec.template.spec.volumes[0]): unknown field &quot;type&quot; in io.k8s.api.core.v1.Volume, ValidationError(Job.spec.template.spec.volumes[1]): unknown field &quot;path&quot; in io.k8s.api.core.v1.Volume, ValidationError(Job.spec.template.spec.volumes[1]): unknown field &quot;type&quot; in io.k8s.api.core.v1.Volume]; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>Can someone help me with this? Thanks in advance</p>
Johsttin Curahua
<p>The <code>hostPath</code> indentation is incorrect in the volumes section; <code>hostPath</code> must be nested under each volume entry:</p> <pre><code>... volumes: # Volume storing the etcd PKI keys and certificates - name: etcd-certs hostPath: path: /etc/kubernetes/pki/etcd type: DirectoryOrCreate # A volume to store the backup snapshot - name: snapshots hostPath: path: /snapshots type: DirectoryOrCreate </code></pre>
gohm'c
<p>I want to collect metrics from my opensearch</p> <p>I've found this plugin <a href="https://github.com/aiven/prometheus-exporter-plugin-for-opensearch" rel="nofollow noreferrer">https://github.com/aiven/prometheus-exporter-plugin-for-opensearch</a></p> <p>but I have no idea how to connect it to my opensearch:</p> <p>My current definition looks like this:</p> <p><strong>chart.yaml:</strong></p> <pre><code> - name: opensearch version: 1.8.0 repository: https://opensearch-project.github.io/helm-charts/ </code></pre> <p><strong>values.yaml:</strong></p> <pre><code>opensearch: plugins: enabled: true installList: - what should I write here ? addIndexedAt: true clusterName: ... masterService:... resources: requests: cpu: 500m memory: 1000Mi limits: cpu: 3000m memory: 2000Mi config: opensearch.yml: ... </code></pre> <p>Could you please help me how to connetc plugin to opensearch ?</p>
gstackoverflow
<p><code>- what should I write here ?</code></p> <p>Try specifying the plugin download URL:</p> <pre><code>... installList: - &quot;https://github.com/aiven/prometheus-exporter-plugin-for-opensearch/releases/download/2.1.0.0/prometheus-exporter-2.1.0.0.zip&quot; ... </code></pre> <p>The URL you pass in <code>installList</code> is picked up <a href="https://github.com/opensearch-project/helm-charts/blob/eeb0ae026694fbedba686b5f8e5ea0dea31d93b2/charts/opensearch/templates/statefulset.yaml#L297" rel="nofollow noreferrer">here</a> for installation.</p>
gohm'c
<p>I have followed the steps from the following guide to set up an agic in azure: <a href="https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/setup/install-existing.md" rel="nofollow noreferrer">https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/setup/install-existing.md</a></p> <p>I have a vnet with an aks cluster(rbac enabled) in one subnet and an app gateway in another. I have followed the steps for authorizing ARM using service principal as well as aad pod identity.</p> <p>However, in both cases, once the ingress controller has been installed using the helm-config.yaml file, the pod's logs show that it is running but not ready.</p> <p><strong>The following are when using the aad pod identity to authenticate</strong></p> <p>The events shown by <code>kubectl describe pod</code> are: <a href="https://i.stack.imgur.com/B5Wvb.png" rel="nofollow noreferrer">events</a></p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 20m default-scheduler Successfully assigned default/ingress-azure-57bcc69687-bqbdn to aks-agentpool-29530272-vmss000002 Normal Pulling 20m kubelet Pulling image &quot;mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:1.2.1&quot; Normal Pulled 20m kubelet Successfully pulled image &quot;mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:1.2.1&quot; Normal Created 20m kubelet Created container ingress-azure Normal Started 20m kubelet Started container ingress-azure Warning Unhealthy 41s (x117 over 20m) kubelet Readiness probe failed: Get http://10.2.0.83:8123/health/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers) </code></pre> <p>The logs shown by <code>kubectl logs -f</code> contain the following errors: <a href="https://i.stack.imgur.com/3D7JS.png" rel="nofollow noreferrer">logs error</a></p> <pre><code>ERROR: logging before flag.Parse: I1015 07:29:04.152565 1 utils.go:115] Using verbosity level 3 from environment variable APPGW_VERBOSITY_LEVEL ERROR: logging before flag.Parse: I1015 07:29:04.152726 1 main.go:78] Unable to load cloud provider config '/etc/appgw/azure.json'. Error: Reading Az Context file &quot;/etc/appgw/azure.json&quot; failed: open /etc/appgw/azure.json: permission denied E1015 07:29:04.172959 1 context.go:198] Error fetching AGIC Pod (This may happen if AGIC is running in a test environment). Error: pods &quot;ingress-azure-57bcc69687-bqbdn&quot; is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot get resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;default&quot; I1015 07:29:04.172990 1 environment.go:240] KUBERNETES_WATCHNAMESPACE is not set. Watching all available namespaces. I1015 07:29:04.173096 1 main.go:128] Appication Gateway Details: Subscription=&quot;e14827fd-ae03-4832-9388-ef0aa3f28693&quot; Resource Group=&quot;rg-test&quot; Name=&quot;appGateway&quot; I1015 07:29:04.173107 1 auth.go:46] Creating authorizer from Azure Managed Service Identity I1015 07:29:04.173365 1 httpserver.go:57] Starting API Server on :8123 I1015 07:33:07.865519 1 main.go:175] Ingress Controller will observe all namespaces. 
I1015 07:33:07.894383 1 context.go:132] k8s context run started I1015 07:33:07.894419 1 context.go:176] Waiting for initial cache sync E1015 07:33:07.913698 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;ingresses&quot; in API group &quot;extensions&quot; at the cluster scope E1015 07:33:07.914239 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: services is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;services&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:07.914307 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Secret: secrets is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;secrets&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:07.914613 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Pod: pods is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:07.915265 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;ingresses&quot; in API group &quot;extensions&quot; at the cluster scope E1015 07:33:07.914752 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints:endpoints is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;endpoints&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:07.917430 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: services is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;services&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:07.919146 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Secret: secrets is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;secrets&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:07.919932 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Pod: pods is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:07.922582 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints:endpoints is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;endpoints&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:09.877700 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints:endpoints is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;endpoints&quot; in API group &quot;&quot; at the cluster scope E1015 
07:33:09.977016 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: services is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;services&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:09.994355 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Secret: secrets is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;secrets&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:10.030444 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;ingresses&quot; in API group &quot;extensions&quot; at the cluster scope E1015 07:33:10.612903 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Pod: pods is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:13.730098 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints:endpoints is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;endpoints&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:14.333551 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: services is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;services&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:14.752686 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Pod: pods is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:15.022569 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Secret: secrets is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;secrets&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:15.992773 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;ingresses&quot; in API group &quot;extensions&quot; at the cluster scope E1015 07:33:22.033914 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints:endpoints is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;endpoints&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:22.477987 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Pod: pods is forbidden: User &quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope E1015 07:33:25.552073 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: services is forbidden: User 
&quot;system:serviceaccount:default:ingress-azure&quot; cannot list resource &quot;services&quot; in API group &quot;&quot; at the cluster scope </code></pre> <p>I have created the three role assignments as stated in the guide:</p> <ul> <li>AGIC's identity Contributor access to the App Gateway</li> <li>AGIC's identity Reader access to the App Gateway resource group</li> <li>Managed Identity Operator role to AGIC's identity for the cluster</li> </ul> <p>Kindly help me in understanding the error.</p>
Laiba Abid
<p>So I followed this <a href="https://jessicadeen.com/how-to-configure-azure-application-gateway-v2-on-an-existing-aks-cluster/" rel="nofollow noreferrer">blogpost</a> and was able to solve this. There were two things I changed from the guide I was following before:</p> <ul> <li>changed rbac enabled in helm-config.yaml to true</li> <li>used the following command to install ingress:</li> </ul> <pre><code>helm upgrade --install appgw-ingress-azure -f helm-config.yaml application-gateway-kubernetes-ingress/ingress-azure </code></pre> <p>While the pod was ready and running after this, the events did show that it was unhealthy. so there is that. However, it solved the earlier issue</p>
Laiba Abid
<p>I have a Spring boot application and I deploy the application to Kubernetes using a single k8s.yml manifest file via Github actions. This k8s.yml manifest contains Secrets, Service, Ingress, Deployment configurations. I was able to deploy the application successfully as well. Now I plan to separate the Secrets, Service, Ingress, Deployment configurations into a separate file as secrets.yml, service.yml, ingress.yml and deployment.yml.</p> <p>Previously I use the below command for deployment</p> <pre><code>kubectl: 1.5.4 command: | sed -e 's/$SEC/${{ secrets.SEC }}/g' -e 's/$APP/${{ env.APP_NAME }}/g' -e 's/$ES/${{ env.ES }}/g' deployment.yml | kubectl apply -f - </code></pre> <p>Now after the separation I use the below commands</p> <pre><code>kubectl: 1.5.4 command: | kubectl apply -f secrets.yml kubectl apply -f service.yml sed -e 's/$ES/${{ env.ES }}/g' ingress.yml | kubectl apply -f - sed -e 's/$SEC/${{ secrets.SEC }}/g' -e 's/$APP/${{ env.APP_NAME }}/g' deployment.yml | kubectl apply -f - </code></pre> <p>But some how the application is not deploying correctly, I would like to know if the command which I am using is correct or not</p>
Alex Man
<p>You can consider performing the <code>sed</code> substitutions first, then applying all the files at once with <code>kubectl apply -f .</code> instead of going one by one. Append <code>--recursive</code> if you also have files in sub-folders to apply, as sketched below.</p>
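<p>A minimal sketch of that approach, reusing the placeholder names from the question (the <code>rendered</code> directory and the loop are assumptions, adjust to your repo layout):</p> <pre><code># render the templates first
mkdir -p rendered
for f in secrets.yml service.yml ingress.yml deployment.yml; do
  sed -e 's/$SEC/${{ secrets.SEC }}/g' \
      -e 's/$APP/${{ env.APP_NAME }}/g' \
      -e 's/$ES/${{ env.ES }}/g' &quot;$f&quot; &gt; &quot;rendered/$f&quot;
done
# then apply everything in one go
kubectl apply -f rendered/
</code></pre>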
gohm'c
<p>I want to set up liveness and readiness probes for Celery worker pods. Since these worker pods don't have a specific port associated with them, I am finding it difficult. The main Django app's nginx server was easier to set up.</p> <p>I am very new to k8s, so I'm not very familiar with the different ways to do it.</p>
Dev
<p>Liveness probe for a Celery worker: the following command only works when remote control is enabled.</p> <pre><code>$ celery inspect ping -d &lt;worker_name&gt; --timeout=&lt;timeout_time&gt; </code></pre> <p>When a Celery worker uses a solo pool, the health check waits for the task to finish. In this case, you must increase the timeout waiting for a response.</p> <p>So in YAML:</p> <pre><code> livenessProbe: initialDelaySeconds: 45 periodSeconds: 60 timeoutSeconds: &lt;timeout_time&gt; exec: command: - &quot;/bin/bash&quot; - &quot;-c&quot; - &quot;celery inspect ping -d &lt;worker_name&gt; | grep -q OK&quot; </code></pre> <p>Of course you have to change the <strong>worker name</strong> and <strong>timeout</strong> to your own values.</p>
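<p>Since the question also asks about readiness, the same <code>celery inspect ping</code> trick can be reused for a readiness probe. This is only a sketch with placeholder values, to be tuned the same way as the liveness probe above:</p> <pre><code> readinessProbe:
   initialDelaySeconds: 30
   periodSeconds: 60
   timeoutSeconds: &lt;timeout_time&gt;
   exec:
     command:
     - &quot;/bin/bash&quot;
     - &quot;-c&quot;
     - &quot;celery inspect ping -d &lt;worker_name&gt; | grep -q OK&quot;
</code></pre>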
kjaw
<p>I am running a GPU intensive workload on demand on GKE Standard, where I have created the appropriate node pool with minimum 0 and maximum 5 nodes. However, when a Job is scheduled on the node pool, GKE presents the following error:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 59s (x2 over 60s) default-scheduler 0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector. Normal NotTriggerScaleUp 58s cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had taint {nvidia.com/gpu: present}, that the pod didn't tolerate, 1 in backoff after failed scale-up </code></pre> <p>I have set up nodeSelector according to the documentation and I have autoscaling enabled, I can confirm it does find the node pool in spite of the error saying &quot;didn't match Pod's node affinity/selector&quot; and tries to scale up the cluster. But then it fails shortly thereafter saying 0/1 nodes are available? Which is completely false, seeing there are 0/5 nodes used in the node pool. What am I doing wrong here?</p>
Walter Morawa
<p><code>1 node(s) had taint {nvidia.com/gpu: present}, that the pod didn't tolerate...</code></p> <p>Try adding <code>tolerations</code> to your job's pod spec:</p> <pre><code>... spec: containers: - name: ... ... tolerations: - key: nvidia.com/gpu value: present operator: Equal </code></pre>
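<p>Purely as an illustration, a fuller pod spec fragment that combines the toleration with a node selector and a GPU resource request -- the accelerator label value here is a placeholder, check the labels on your GPU node pool (e.g. with <code>kubectl describe node</code>):</p> <pre><code>spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4   # placeholder, use your pool's accelerator type
  tolerations:
  - key: nvidia.com/gpu
    operator: Equal
    value: present
    effect: NoSchedule
  containers:
  - name: gpu-job            # placeholder container name
    image: ...
    resources:
      limits:
        nvidia.com/gpu: 1    # request one GPU
</code></pre>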
gohm'c
<p>I can find plenty of documentation on the fact that since Kubernetes 1.21, service account tokens are now no longer non-expiring, but are time and scope bound.</p> <p>What I can't find anywhere is if there is still a way to generate a &quot;legacy token&quot; for an existing service account in Kubernetes 1.21+; that is, a token that lives for a very long time or forever.</p> <p>Is this described anywhere in the Kubernetes documentation?</p>
riccardo_92
<blockquote> <p>Is this described anywhere in the Kubernetes documentation?</p> </blockquote> <p>You can find the details <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md#serviceaccount-admission-controller-migration" rel="nofollow noreferrer">here</a>.</p>
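<p>Not from the link above, but for completeness: the Kubernetes docs also describe a way to mint a long-lived token by creating a Secret of type <code>kubernetes.io/service-account-token</code> that points at an existing ServiceAccount -- a minimal sketch (names are placeholders):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-sa-legacy-token                        # placeholder
  annotations:
    kubernetes.io/service-account.name: my-sa     # existing ServiceAccount
type: kubernetes.io/service-account-token
</code></pre> <p>The token controller then fills in the token, which you can read back with <code>kubectl get secret my-sa-legacy-token -o yaml</code>.</p>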
gohm'c
<p>am trying to deploy angular frontend app on kubernetes, but i always get this error:</p> <pre><code>NAME READY STATUS RESTARTS AGE common-frontend-f74c899cc-p6tdn 0/1 CrashLoopBackOff 7 15m </code></pre> <p>when i try to see logs of pod, it print just empty line, so how can i find out where could be problem</p> <p>this is dockerfile, build pipeline with this dockerfile alwys passed:</p> <pre><code>### STAGE 1: Build ### # We label our stage as 'builder' FROM node:10.11 as builder COPY package.json ./ COPY package-lock.json ./ RUN npm set progress=false &amp;&amp; npm config set depth 0 &amp;&amp; npm cache clean --force ARG NODE_OPTIONS=&quot;--max_old_space_size=4096&quot; ## Storing node modules on a separate layer will prevent unnecessary npm installs at each build RUN npm i &amp;&amp; mkdir /ng-app &amp;&amp; cp -R ./node_modules ./ng-app WORKDIR /ng-app COPY . . ## Build the angular app in production mode and store the artifacts in dist folder RUN $(npm bin)/ng build --prod --output-hashing=all ### STAGE 2: Setup ### FROM nginx:1.13.3-alpine ## Copy our default nginx config COPY nginx/default.conf /etc/nginx/conf.d/ ## Remove default nginx website RUN rm -rf /usr/share/nginx/html/* ## From 'builder' stage copy the artifacts in dist folder to default nginx public folder COPY --from=builder /ng-app/dist /usr/share/nginx/html CMD [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;] </code></pre> <p>and deployment.yaml</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: common-frontend labels: app: common-frontend spec: type: ClusterIP selector: app: common-frontend ports: - port: 80 targetPort: 8080 --- apiVersion: apps/v1 kind: Deployment metadata: name: common-frontend labels: app: common-frontend spec: replicas: 1 selector: matchLabels: app: common-frontend strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 33% template: metadata: labels: app: common-frontend spec: containers: - name: common-frontend image: skunkstechnologies/common-frontend:&lt;VERSION&gt; ports: - containerPort: 8080 livenessProbe: httpGet: path: /health port: 8080 initialDelaySeconds: 30 timeoutSeconds: 1 </code></pre> <p>I really dont know what could be problem,can anyone help? Thanks!</p>
Sizor
<p>It looks like Kubernetes is failing the liveness probe and keeps restarting the pod. Try commenting out the 'liveness probe' section and deploying it again. If that helps, correct the probe parameters -- timeout, delay, etc.</p>
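<p>If commenting it out confirms the probe is the culprit, one possible adjustment -- assuming the nginx container from the Dockerfile actually listens on port 80 and does not serve <code>/health</code> -- might look like this (a sketch, not a verified fix):</p> <pre><code>        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 5
</code></pre>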
Ivan
<p>I have ingress as:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: mongoexpress-ingress spec: rules: - host: mylocalmongoexpress.com http: paths: - backend: serviceName: mongoexpress-service servicePort: 8081 </code></pre> <p>When I run 'kubectl apply -f mongoexpress-ingress.yaml', I get error:</p> <blockquote> <p>error: error validating &quot;mongoexpress-ingress.yaml&quot;: error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field &quot;serviceName&quot; in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field &quot;servicePort&quot; in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0]): missing required field &quot;pathType&quot; in io.k8s.api.networking.v1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p>Going through online resources, I couldn't find issue in yaml file.</p> <p>So what am I missing here?</p>
Mandroid
<p>Ingress specification <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122" rel="nofollow noreferrer">has changed</a> from v1beta1 to v1. Try:</p> <pre><code>... spec: rules: - host: mylocalmongoexpress.com http: paths: - path: / pathType: Prefix backend: service: name: mongoexpress-service port: number: 8081 </code></pre>
gohm'c
<p>I'm new at Kubernetes and trying to do a simple project to connect MySQL and PhpMyAdmin using Kubernetes on my Ubuntu 20.04. I created the components needed and here is the components.</p> <p>mysql.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mysql-deployment labels: app: mysql spec: replicas: 1 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-secret key: mysql-root-password - name: MYSQL_USER valueFrom: secretKeyRef: name: mysql-secret key: mysql-user-username - name: MYSQL_PASSWORD valueFrom: secretKeyRef: name: mysql-secret key: mysql-user-password - name: MYSQL_DATABASE valueFrom: configMapKeyRef: name: mysql-configmap key: mysql-database --- apiVersion: v1 kind: Service metadata: name: mysql-service spec: selector: app: mysql ports: - protocol: TCP port: 3306 targetPort: 3306 </code></pre> <p>phpmyadmin.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: phpmyadmin labels: app: phpmyadmin spec: replicas: 1 selector: matchLabels: app: phpmyadmin template: metadata: labels: app: phpmyadmin spec: containers: - name: phpmyadmin image: phpmyadmin ports: - containerPort: 3000 env: - name: PMA_HOST valueFrom: configMapKeyRef: name: mysql-configmap key: database_url - name: PMA_PORT value: &quot;3306&quot; - name: PMA_USER valueFrom: secretKeyRef: name: mysql-secret key: mysql-user-username - name: PMA_PASSWORD valueFrom: secretKeyRef: name: mysql-secret key: mysql-user-password --- apiVersion: v1 kind: Service metadata: name: phpmyadmin-service spec: selector: app: phpmyadmin ports: - protocol: TCP port: 8080 targetPort: 3000 </code></pre> <p>ingress-service.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: defaultBackend: service: name: phpmyadmin-service port: number: 8080 rules: - host: test.com http: paths: - path: / pathType: Prefix backend: service: name: phpmyadmin-service port: number: 8080 </code></pre> <p>when I execute <code>microk8s kubectl get ingress ingress-service</code>, the output is:</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE ingress-service public test.com 127.0.0.1 80 45s </code></pre> <p>and when I tried to access test.com, that's when I got 502 error.</p> <p>My kubectl version:</p> <pre><code>Client Version: v1.22.2-3+9ad9ee77396805 Server Version: v1.22.2-3+9ad9ee77396805 </code></pre> <p>My microk8s' client and server version:</p> <pre><code>Client: Version: v1.5.2 Revision: 36cc874494a56a253cd181a1a685b44b58a2e34a Go version: go1.15.15 Server: Version: v1.5.2 Revision: 36cc874494a56a253cd181a1a685b44b58a2e34a UUID: b2bf55ad-6942-4824-99c8-c56e1dee5949 </code></pre> <p>As for my microk8s' own version, I followed the installation instructions from <a href="https://microk8s.io/docs" rel="nofollow noreferrer">here</a>, so it should be <code>1.21/stable</code>. (Couldn't find the way to check the exact version from the internet, if someone know how, please tell me how)</p> <p><code>mysql.yaml</code> logs:</p> <pre><code>2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started. 2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started. 
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Initializing database files 2021-10-14T07:05:38.960693Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.26) initializing of server in progress as process 41 2021-10-14T07:05:38.967970Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2021-10-14T07:05:39.531763Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2021-10-14T07:05:40.591862Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main 2021-10-14T07:05:40.592247Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main 2021-10-14T07:05:40.670594Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option. 2021-10-14 07:05:45+00:00 [Note] [Entrypoint]: Database files initialized 2021-10-14 07:05:45+00:00 [Note] [Entrypoint]: Starting temporary server 2021-10-14T07:05:45.362827Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 90 2021-10-14T07:05:45.486702Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2021-10-14T07:05:45.845971Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2021-10-14T07:05:46.022043Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main 2021-10-14T07:05:46.022189Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main 2021-10-14T07:05:46.023446Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. 2021-10-14T07:05:46.023728Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. 2021-10-14T07:05:46.026088Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. 2021-10-14T07:05:46.044967Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock 2021-10-14T07:05:46.045036Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL. 2021-10-14 07:05:46+00:00 [Note] [Entrypoint]: Temporary server started. Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it. 2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Creating database testing-database 2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Creating user testinguser 2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Giving user testinguser access to schema testing-database 2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Stopping temporary server 2021-10-14T07:05:48.422053Z 13 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 8.0.26). 2021-10-14T07:05:50.543822Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.26) MySQL Community Server - GPL. 2021-10-14 07:05:51+00:00 [Note] [Entrypoint]: Temporary server stopped 2021-10-14 07:05:51+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up. 
2021-10-14T07:05:51.711889Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 1 2021-10-14T07:05:51.725302Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2021-10-14T07:05:51.959356Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2021-10-14T07:05:52.162432Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main 2021-10-14T07:05:52.162568Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main 2021-10-14T07:05:52.163400Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. 2021-10-14T07:05:52.163556Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. 2021-10-14T07:05:52.165840Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. 2021-10-14T07:05:52.181516Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock 2021-10-14T07:05:52.181562Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL. </code></pre> <p><code>phpmyadmin.yaml</code> logs:</p> <pre><code>AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.114.139. Set the 'ServerName' directive globally to suppress this message AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.114.139. Set the 'ServerName' directive globally to suppress this message [Thu Oct 14 03:57:32.653011 2021] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.51 (Debian) PHP/7.4.24 configured -- resuming normal operations [Thu Oct 14 03:57:32.653240 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND' </code></pre> <p>Here is also my <code>Allocatable</code> on <code>describe nodes</code> command:</p> <pre><code>Allocatable: cpu: 4 ephemeral-storage: 113289380Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 5904508Ki pods: 110 </code></pre> <p>and the <code>Allocated resources</code>:</p> <pre><code>Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 550m (13%) 200m (5%) memory 270Mi (4%) 370Mi (6%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) </code></pre> <p>Any help? Thanks in advance.</p>
felixbmmm
<p>Turns out it was a fudged-up mistake of mine: I specified phpMyAdmin's container port as 3000, while the default image listens on port 80. After changing the <code>containerPort</code> and the <code>phpmyadmin-service</code>'s <code>targetPort</code> to 80, the phpMyAdmin page opens.</p> <p>So sorry to kkopczak and AndD for the fuss, and also big thanks for trying to help! :)</p>
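<p>For anyone hitting the same thing, the relevant part of the fix looks roughly like this (only the ports changed, the rest of the manifests stayed as posted in the question):</p> <pre><code># phpmyadmin Deployment container
        ports:
        - containerPort: 80
---
# phpmyadmin-service
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
</code></pre>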
felixbmmm
<p>I am deploying Kafka on a kube cluster. For that, I found the Bitnami chart, which seems excellent. I deploy the chart with this command:</p> <pre><code> helm install kafka-release bitnami/kafka --set persistence.enabled=false </code></pre> <p>It seems that the chart considers the option persistence.enabled=false for the Kafka pods but not for the Zookeeper one. The Kafka pods are scheduled but not the Zookeeper pod.</p> <pre><code>ubuntu@ip-172-31-23-248:~$ kubectl get pods NAME READY STATUS RESTARTS AGE kafka-release-0 0/1 CrashLoopBackOff 1 30s kafka-release-zookeeper-0 0/1 Pending 0 30s </code></pre> <p>In the documentation of the Bitnami Kafka chart, I found no option to disable persistence at the Zookeeper pod level. Could you help please? Regards</p>
lemahdois
<p>To configure Zookeeper without persistence, you can run: <code>helm install kafka-release bitnami/kafka --set persistence.enabled=false --set zookeeper.persistence.enabled=false</code></p>
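<p>If you prefer a values file over <code>--set</code> flags, the equivalent should be roughly the following (a sketch derived from the flags above -- double-check the keys against the chart's values.yaml for your chart version):</p> <pre><code># values.yaml
persistence:
  enabled: false
zookeeper:
  persistence:
    enabled: false
</code></pre> <p>and then <code>helm install kafka-release bitnami/kafka -f values.yaml</code>.</p>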
gohm'c
<p>Sorry I'm quite a noob at Go so hope this isn't a stupid question. I know pointers in a general sense but struggling with Go semantics.</p> <p>I can't get this to work:</p> <pre><code>func DeleteOldCronJob(c client.Client, ctx context.Context, namespace string, name string) error { cronJob := batchv1beta1.CronJob{} key := client.ObjectKey{ Namespace: namespace, Name: name, } return DeleteOldAny(c, ctx, name, key, &amp;cronJob) } func DeleteOldAny(c client.Client, ctx context.Context, name string, key client.ObjectKey, resource interface{}) error { err := c.Get(ctx, key, resource) if err == nil { err := c.Delete(ctx, resource) if err != nil { return err } } else { return err } return nil } </code></pre> <p>I get an error:</p> <pre><code>interface {} does not implement &quot;k8s.io/apimachinery/pkg/runtime&quot;.Object (missing DeepCopyObject method) </code></pre> <p>The point is so that I can reuse DeleteOldAny on multiple different types, to make my codebase more compact (I could just copy+paste DeleteOldCronjob and change the type). As far as I read, pointers to interfaces in Go are usually wrong. Also, <a href="https://pkg.go.dev/k8s.io/api/batch/v1beta1?utm_source=gopls#CronJob" rel="nofollow noreferrer">the k8s type I'm importing</a> is just a struct. So, I thought since it's a struct not an interface I should pass resource a a pointer like:</p> <pre><code> err := c.Get(ctx, key, &amp;resource) </code></pre> <p>But that gives me another error:</p> <pre><code>*interface {} is pointer to interface, not interface </code></pre> <p>So I'm a bit stuck. Am I doomed to copy+paste the same function for each type or is it a simple syntax mistake I'm making?</p>
james hedley
<pre><code>func (client.Reader).Get(ctx context.Context, key types.NamespacedName, obj client.Object) error Get retrieves an obj for the given object key from the Kubernetes Cluster. obj must be a struct pointer so that obj can be updated with the response returned by the Server. </code></pre> <p>So the idea is right: have a modular approach and avoid repeating the code.</p> <p>But the implementation is wrong; the obj is the resource you are trying to fetch from the cluster, and it should be passed into the function as a struct pointer.</p> <pre><code>err := c.Get(ctx, key, &amp;resource) </code></pre> <p>Here <strong>resource</strong> should be a struct, as <strong>Get, Delete, etc.</strong> expect a pointer to the respective object to be passed.</p>
Abhijeet Jha
<p>I currently have a Kubernetes cluster running on GCP. In this cluster I have a working NGINX Ingress, but now I'm trying to add a certificate to it using cert-manager.</p> <p>Everything works fine except the ACME challenge. When I do a <code>kubectl describe challenge</code> I get the following:</p> <pre><code>Status: Presented: true Processing: true Reason: Waiting for HTTP-01 challenge propagation: failed to perform self check GET request </code></pre> <p>When the ACME challenge creates a solver service, I get the following error message on GCP:</p> <pre><code>&quot;All hosts are taken by other resources&quot; </code></pre> <p><a href="https://i.stack.imgur.com/S0p5f.png" rel="noreferrer">Image of the error I'm getting in google cloud</a></p> <p>I have tried to create an <strong>Issuer</strong> and a <strong>ClusterIssuer</strong>, but the same problem keeps popping up.</p>
Modx
<p>After trying to solve the issues and browsing the web, I have figured out the solution. It is possible to add the following annotation:</p> <pre><code>annotations: acme.cert-manager.io/http01-edit-in-place: &quot;true&quot; </code></pre> <p>After adding this line to my Ingress resource everything seemed to work perfectly. When this annotation is not passed in, cert-manager will create an extra Ingress for the acme challenge</p> <p>See: <a href="https://cert-manager.io/docs/usage/ingress/#:%7E:text=acme.cert%2Dmanager.io%2Fhttp01%2Dedit%2D,the%20annotation%20assumes%20%E2%80%9Cfalse%E2%80%9D." rel="noreferrer">Cert-manager, using an Ingress</a></p>
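<p>Applied to an nginx Ingress, this looks roughly as follows -- host, secret and service names are placeholders, and the issuer annotation assumes a ClusterIssuer named <code>letsencrypt-prod</code>:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    acme.cert-manager.io/http01-edit-in-place: &quot;true&quot;
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
</code></pre>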
Modx
<p>I have a Django project deployed in Kubernetes and I am trying to deploy Prometheus as a monitoring tool. I have successfully done all the steps needed to include <code>django_prometheus</code> in the project and locally I can go go <code>localhost:9090</code> and play around with querying the metrics.</p> <p>I have also deployed Prometheus to my Kubernetes cluster and upon running a <code>kubectl port-forward ...</code> on the Prometheus pod I can see some metrics of my Kubernetes resources.</p> <p>Where I am a bit confused is how to make the deployed Django app metrics available on the Prometheus dashboard just like the others. I deployed my app in <code>default</code> namespace and prometheus in a <code>monitoring</code> dedicated namespace. I am wondering what am I missing here. Do I need to expose the ports on the service and deployment from 8000 to 8005 according to the number of workers or something like that?</p> <p>My Django app runs with gunicorn using <code>supervisord</code> like so:</p> <pre><code>[program:gunicorn] command=gunicorn --reload --timeout 200000 --workers=5 --limit-request-line 0 --limit-request-fields 32768 --limit-request-field_size 0 --chdir /code/ my_app.wsgi </code></pre> <ul> <li><code>my_app</code> service:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: my_app namespace: default spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: my-app sessionAffinity: None type: ClusterIP </code></pre> <ul> <li>Trimmed version of the <code>deployment.yaml</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: my-app name: my-app-deployment namespace: default spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: my-app strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: labels: app: my-app spec: containers: - image: ... imagePullPolicy: IfNotPresent name: my-app ports: - containerPort: 80 name: http protocol: TCP dnsPolicy: ClusterFirst imagePullSecrets: - name: regcred restartPolicy: Always schedulerName: default-scheduler terminationGracePeriodSeconds: 30 </code></pre> <ul> <li><code>prometheus configmap</code></li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 data: prometheus.rules: |- ... some rules prometheus.yml: |- global: scrape_interval: 5s evaluation_interval: 5s rule_files: - /etc/prometheus/prometheus.rules scrape_configs: - job_name: prometheus static_configs: - targets: - localhost:9090 - job_name: my-app metrics_path: /metrics static_configs: - targets: - localhost:8000 - job_name: 'node-exporter' kubernetes_sd_configs: - role: endpoints relabel_configs: - source_labels: [__meta_kubernetes_endpoints_name] regex: 'node-exporter' action: keep kind: ConfigMap metadata: labels: name: prometheus-config name: prometheus-config namespace: monitoring </code></pre>
everspader
<p>You do not have to expose services if Prometheus is installed on the same cluster as your app. You can communicate with apps across namespaces by using Kubernetes DNS resolution, going by the rule:</p> <pre><code>SERVICENAME.NAMESPACE.svc.cluster.local </code></pre> <p>So one way is to change your Prometheus job target to something like this:</p> <pre class="lang-yaml prettyprint-override"><code> - job_name: speedtest-ookla metrics_path: /metrics static_configs: - targets: - 'my_app.default.svc.cluster.local:9000' </code></pre> <p>And this is the &quot;manual&quot; way. A better approach is to use the Prometheus <code>kubernetes_sd_config</code>. It will autodiscover your services and try to scrape them.</p> <p>Reference: <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config</a></p>
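<p>To make the autodiscovery approach concrete, a commonly used scrape job keyed off service annotations looks roughly like this (a sketch following the Prometheus documentation; it assumes you annotate the Django Service with <code>prometheus.io/scrape: &quot;true&quot;</code> and that Prometheus has RBAC permissions to list services/endpoints):</p> <pre><code>- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # keep only services annotated with prometheus.io/scrape=true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # allow overriding the metrics path via prometheus.io/path
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  # allow overriding the scrape port via prometheus.io/port
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
</code></pre>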
Cloudziu
<p>Kubernetes supports <code>load-balancing</code>.</p> <p>Let's take a simple scenario:</p> <ul> <li>A process runs on one node</li> <li>That process creates multiple processes</li> <li>Will all new processes run on the same Kubernetes node? Or on multiple nodes (while supporting load-balancing)?</li> </ul> <p>Simple example:</p> <pre><code>from multiprocessing import Pool def f(x): return x*x if __name__ == '__main__': with Pool(5) as p: print(p.map(f, [1, 2, 3])) </code></pre> <p>In the example above, we are creating 3 processes.</p> <ul> <li>Do all 3 processes run on the same k8s node? Or could it be that 2 processes run on one node and the third process runs on another node?</li> </ul>
user3668129
<p>I think you don't quite understand Docker yet. Since a container contains this application, all of the memory/executions will happen inside this container only, with or without K8s.</p> <p>K8s only serves as a Docker orchestrator for multiple containers; normally there should not be any communication at all between containers.</p> <p>So the answer is: all of the processes the application creates will be executed inside the same container, and therefore on that particular node only.</p>
kennysliding
<p>Let's say we have a Kubernetes service which serves both a RESTful HTTP API and a gRPC API:</p> <pre><code>apiVersion: v1 kind: Service metadata: namespace: mynamespace name: myservice spec: type: ClusterIP selector: app: my-app ports: - port: 80 targetPort: 80 protocol: TCP name: http - port: 8080 targetPort: 8080 protocol: TCP name: grpc </code></pre> <p>We want to be able to reach those service endpoints externally, for example from another Kubernetes cluster.</p> <p>This could be achieved by changing the service type from <code>ClusterIP</code> to <code>LoadBalancer</code>. However, let's assume that this is not desirable, for example because it requires additional public IP addresses.</p> <p>An alternative approach would be to use the <a href="https://kubernetes.io/docs/concepts/cluster-administration/proxies" rel="nofollow noreferrer">apiserver proxy</a> which</p> <blockquote> <p>connects a user outside of the cluster to cluster IPs which otherwise might not be reachable</p> </blockquote> <p>This works with the http endpoint. For example, if the http API exposes an endpoint <code>/api/foo</code>, it can be reached like this:</p> <pre><code>http://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo </code></pre> <p>Is it somehow possible to also reach the gRPC service via the apiserver proxy? It would seem that since gRPC uses HTTP/2, the apiserver proxy won't support it out of the box. e.g. doing something like this on the client side...</p> <pre><code>grpc.Dial(&quot;myapiserver/api/v1/namespaces/mynamespace/services/myservice:grpc/proxy&quot;) </code></pre> <p>... won't work.</p> <p>Is there a way to connect to a gRPC service via the apiserver proxy?</p> <p>If not, is there a different way to connect to the gRPC service from external, without using a <code>LoadBalancer</code> service?</p>
Max
<p><code>...not to require an additional public IP</code></p> <p>NodePort is not bound to a public IP. That is, your worker node can sit in a private network and be reachable at the node's private IP:nodePort#. In the meantime, you can use <code>kubectl port-forward --namespace mynamespace service/myservice 8080:8080</code> and connect through localhost.</p>
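<p>As an illustration of the NodePort variant for the gRPC port only (a sketch -- the nodePort value is arbitrary within the default 30000-32767 range):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-grpc-nodeport   # assumed name, any name works
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
    nodePort: 30080
    protocol: TCP
</code></pre> <p>A gRPC client outside the cluster can then dial <code>&lt;node-private-ip&gt;:30080</code>, provided it can reach the node's network.</p>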
gohm'c
<p>I try to test Kubernetes Ingress on Minikube. My OS is Windows 10. Minikube is installed successfully as well as Nginx ingress controller.</p> <pre><code>&gt; minikube addons enable ingress </code></pre> <p>Below is my Kubernetes manifest file:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress namespace: ingress-nginx annotations: nginx.ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.class: 'nginx' nginx.ingress.kubernetes.io/default-backend: app-nginx-svc spec: rules: - host: boot.aaa.com http: paths: - path: /path pathType: Prefix backend: service: name: app-nginx-svc port: number: 80 --- apiVersion: v1 kind: Service metadata: name: app-nginx-svc namespace: ingress-nginx spec: type: NodePort selector: app: test-nginx ports: - name: http port: 80 targetPort: 80 nodePort: 30000 --- apiVersion: v1 kind: Pod metadata: name: app-nginx namespace: ingress-nginx labels: app: test-nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <p>Kubernetes Pod and Service are generated on Minikube without errors. When I test service with the below commands, the pod shows the right values.</p> <pre><code>&gt; minikube service -n ingress-nginx app-nginx-svc --url * app-nginx-svc 서비스의 터널을 시작하는 중 |--------------------|---------------|-------------|------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |--------------------|---------------|-------------|------------------------| | ingress-nginx | app-nginx-svc | | http://127.0.0.1:63623 | |-------------------|---------------|-------------|------------------------| http://127.0.0.1:63623 </code></pre> <p>But the problem occurs in the Ingress object. The Minikube ingress generates the endpoint and host domain.</p> <p><a href="https://i.stack.imgur.com/LpFyx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LpFyx.png" alt="enter image description here" /></a></p> <p>I type in the domain mapping hostname in Windows 10 host file</p> <pre><code>192.168.49.2 boot.aaa.com </code></pre> <p>But I can not receive any response from Nginx container:</p> <p><a href="http://boot.aaa.com/path" rel="nofollow noreferrer">http://boot.aaa.com/path</a></p> <p>The above URL does not work at all.</p>
Joseph Hwang
<p>When you try to access <a href="http://boot.aaa.com/path" rel="nofollow noreferrer">http://boot.aaa.com/path</a> - do you provide the port on which it listens? From what I see from the output of:</p> <pre class="lang-sh prettyprint-override"><code>minikube service -n ingress-nginx app-nginx-svc --url * app-nginx-svc 서비스의 터널을 시작하는 중 |--------------------|---------------|-------------|------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |--------------------|---------------|-------------|------------------------| | ingress-nginx | app-nginx-svc | | http://127.0.0.1:63623 | |--------------------|---------------|-------------|------------------------| ==&gt; http://127.0.0.1:63623 &lt;== </code></pre> <p>I think that you need to make request on: <strong><a href="http://boot.aaa.com:63623/path" rel="nofollow noreferrer">http://boot.aaa.com:63623/path</a></strong></p> <p>If you don't want to use hostname in you Ingress, just remove it from manifest.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress namespace: ingress-nginx annotations: nginx.ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.class: 'nginx' nginx.ingress.kubernetes.io/default-backend: app-nginx-svc spec: rules: - http: paths: - path: /path pathType: Prefix backend: service: name: app-nginx-svc port: number: 80 </code></pre> <p>You should be able then to access your pod by only <strong>http://{IP}:{PORT}/path</strong></p> <p>My additional questions:</p> <ul> <li>Are you trying to make request from the same OS where the minikube is installed?</li> <li>Is the hostfile edited on the OS you are making requests from?</li> <li>If yes, is the Windows firewall turned on?</li> </ul> <p>Also, I see that you <strong>Service</strong> expose a <strong>NodePort</strong> directly to your App on <strong>port 30000</strong> (it will not pass through Ingress controller).</p> <p>Usually if we are setting up an Ingress endpoint to a Pod, we do it to avoid exposing it directly by the NodePort. Using <strong>ClusterIP</strong> service type will do so.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: app-nginx-svc namespace: ingress-nginx spec: type: ClusterIP selector: app: test-nginx ports: - name: http port: 80 targetPort: 80 </code></pre>
Cloudziu
<p>I have the following parameters in a shell script:</p> <pre><code>#!/bin/bash DOCKER_REGISTRY=gcr.io GCP_PROJECT=sample ELASTICSEARCH_VERSION=:7.3.1 </code></pre> <p>When I run this command, the parameters are all replaced correctly:</p> <pre><code>ELASTICSEARCH_IMAGE=$DOCKER_REGISTRY/$GCP_PROJECT/elasticsearch$ELASTICSEARCH_VERSION echo $ELASTICSEARCH_IMAGE gcr.io/sample/elasticsearch:7.3.1 </code></pre> <p>I need to replace the ELASTICSEARCH_IMAGE in a Kubernetes deployment file:</p> <blockquote> <pre><code> containers: - image: {{ELASTICSEARCH}} name: elasticsearch </code></pre> </blockquote> <p>So when I run the command below, it is not replaced. The issue is because of the &quot;/&quot;. I tried various sed commands but was not able to resolve it:</p> <p>YAML_CONTENT=<code>cat &quot;app-search-application.yaml.template&quot; | sed &quot;s/{{ELASTICSEARCH_IMAGE}}/$ELASTICSEARCH_IMAGE/g&quot;</code></p> <p>My intention is to dynamically change the image in the Kubernetes yaml file with the sed command.</p>
klee
<p>The <code>s</code> command in <code>sed</code> can have different delimiters, if your pattern has <code>/</code> use something else in <code>s</code> command</p> <pre><code>ELASTICSEARCH_IMAGE='gcr.io/sample/elasticsearch:7.3.1' $ echo " containers: - image: {{ELASTICSEARCH}} name: elasticsearch " | sed "s|{{ELASTICSEARCH}}|$ELASTICSEARCH_IMAGE|" containers: - image: gcr.io/sample/elasticsearch:7.3.1 name: elasticsearch </code></pre>
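<p>For reference, a rough sketch of how the same substitution could be wired into the original workflow; it assumes the placeholder in the template is <code>{{ELASTICSEARCH_IMAGE}}</code> and that the file name matches the question:</p> <pre><code>ELASTICSEARCH_IMAGE=&quot;$DOCKER_REGISTRY/$GCP_PROJECT/elasticsearch$ELASTICSEARCH_VERSION&quot;

# Render the template and apply it directly...
sed &quot;s|{{ELASTICSEARCH_IMAGE}}|$ELASTICSEARCH_IMAGE|g&quot; app-search-application.yaml.template | kubectl apply -f -

# ...or keep the rendered manifest in a file first
sed &quot;s|{{ELASTICSEARCH_IMAGE}}|$ELASTICSEARCH_IMAGE|g&quot; app-search-application.yaml.template &gt; app-search-application.yaml
</code></pre>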
Ivan
<p>I want to update a k8s deployment image from 22.41.70 to 22.41.73, as follows:</p> <pre><code>NewReplicaSet: hiroir-deployment-5b9f574565 (3/3 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 13m deployment-controller Scaled up replica set hiroir-deployment-7ff8845548 to 3 Normal ScalingReplicaSet 8m56s deployment-controller Scaled up replica set hiroir-deployment-5b9f574565 to 1 Normal ScalingReplicaSet 8m56s deployment-controller Scaled down replica set hiroir-deployment-7ff8845548 to 2 Normal ScalingReplicaSet 8m56s deployment-controller Scaled up replica set hiroir-deployment-5b9f574565 to 2 Normal ScalingReplicaSet 8m52s deployment-controller Scaled down replica set hiroir-deployment-7ff8845548 to 1 Normal ScalingReplicaSet 8m52s deployment-controller Scaled up replica set hiroir-deployment-5b9f574565 to 3 Normal ScalingReplicaSet 8m52s deployment-controller Scaled down replica set hiroir-deployment-7ff8845548 to 0 </code></pre> <p>I want to know how to confirm that the scale-down of the old replica set was successful.</p>
jixiang8320216_container
<p>You can check using <code>kubectl get deployment &lt;name, eg. hiroir&gt; --namespace &lt;namespace if not default&gt; -o wide</code>. Look at the &quot;AVAILABLE&quot; column and check that the count matches the replica count you scaled to, and at the &quot;IMAGES&quot; column for the image that you have updated.</p>
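<p>If it helps, two commands that make the check more direct; this assumes the deployment is named <code>hiroir-deployment</code> as in the events above:</p> <pre><code># Waits until the new ReplicaSet is fully rolled out and the old one is scaled down
kubectl rollout status deployment/hiroir-deployment

# The old ReplicaSet (hiroir-deployment-7ff8845548) should report DESIRED/CURRENT/READY as 0
kubectl get rs
</code></pre>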
gohm'c
<p>I am new to Kubernetes and am using an AWS EKS cluster (1.21). I am trying to write the nginx ingress config for my k8s cluster and block some requests using <strong>server-snippet</strong>. My ingress config is below:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: abc-ingress-external namespace: backend annotations: nginx.ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.class: nginx-external nginx.ingress.kubernetes.io/server-snippet: | location = /ping { deny all; return 403; } spec: rules: - host: dev-abc.example.com http: paths: - backend: service: name: miller port: number: 80 path: / pathType: Prefix </code></pre> <p>When I apply this config, I get this error:</p> <pre><code>for: &quot;ingress.yml&quot;: admission webhook &quot;validate.nginx.ingress.kubernetes.io&quot; denied the request: nginx.ingress.kubernetes.io/server-snippet annotation contains invalid word location </code></pre> <p>I looked into this and found that it is related to <em><strong>annotation-value-word-blocklist</strong></em>. However, I don't know how to resolve this. Any help would be appreciated.</p>
Rahul Kumar Aggarwal
<p>Seems there's <a href="https://github.com/kubernetes/ingress-nginx/issues/5738#issuecomment-971799464" rel="nofollow noreferrer">issue</a> using <code>location</code> with some versions. The following was tested successfully on EKS cluster.</p> <p>Install basic ingress-nginx on EKS:</p> <p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/aws/deploy.yaml</code></p> <p><strong>Note:</strong> If your cluster version is &lt; 1.21, you need to comment out <code>ipFamilyPolicy</code> and <code>ipFamilies</code> in the service spec.</p> <p>Run a http service:</p> <p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/http-svc.yaml</code></p> <p>Create an ingress for the service:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: http-svc annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/server-snippet: | location = /ping { deny all; return 403; } spec: rules: - host: test.domain.com http: paths: - path: / pathType: ImplementationSpecific backend: service: name: http-svc port: number: 8080 </code></pre> <p>Return 200 as expected: <code>curl -H 'HOST: test.domain.com' http://&lt;get your nlb address from the console&gt;</code></p> <p>Return 200 as expected: <code>curl -H 'HOST: test.domain.com' -k https://&lt;get your nlb address from the console&gt;</code></p> <p>Return 403 as expected, the snippet is working: <code>curl -H 'HOST: test.domain.com' -k https://&lt;get your nlb address from the console&gt;/ping</code></p> <p><a href="https://i.stack.imgur.com/n8BRc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n8BRc.png" alt="enter image description here" /></a></p> <p>Use the latest release to avoid the &quot;annotation contains invalid word location&quot; issue.</p>
gohm'c
<p>I have a CronJob named &quot;cronX&quot; and a Job named &quot;JobY&quot;. How can I configure Kubernetes to run &quot;JobY&quot; after &quot;cronX&quot; has finished?</p> <p>I know I can do it with an API call from &quot;cronX&quot; to start &quot;JobY&quot;, but I don't want to do that using an API call.</p> <p>Is there any Kubernetes configuration to schedule this?</p>
larry ckey
<p><code>is it possible that this pod will contain 2 containers and one of them will run only after the second container finish?</code></p> <p>Negative, more details <a href="https://github.com/kubernetes/kubernetes/issues/1996" rel="nofollow noreferrer">here</a>. If you only have 2 containers to run, you can place the first one under <code>initContainers</code> and the other under <code>containers</code>, and schedule the pod (see the sketch below).</p> <p>There is no built-in K8s configuration for workflow orchestration. You can try Argo <a href="https://argoproj.github.io/argo-workflows/cron-workflows/" rel="nofollow noreferrer">workflow</a> to do this.</p>
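<p>A minimal sketch of the <code>initContainers</code> approach, assuming the cronX and JobY workloads can be expressed as plain container commands; the images, commands and schedule here are placeholders, and on clusters older than 1.21 the apiVersion would be <code>batch/v1beta1</code>:</p> <pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronx
spec:
  schedule: &quot;0 * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          initContainers:
          - name: cronx-task            # runs first and must exit successfully
            image: busybox
            command: [&quot;sh&quot;, &quot;-c&quot;, &quot;echo doing cronX work&quot;]
          containers:
          - name: joby-task             # starts only after the init container finishes
            image: busybox
            command: [&quot;sh&quot;, &quot;-c&quot;, &quot;echo doing JobY work&quot;]
</code></pre>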
gohm'c
<p>I'm following <a href="https://aws.amazon.com/it/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/" rel="nofollow noreferrer">this AWS documentation</a> which explains how to properly configure AWS Secrets Manager to let it works with EKS through Kubernetes Secrets.</p> <p>I successfully followed step by step all the different commands as explained in the documentation.</p> <p>The only difference I get is related to <a href="https://aws.amazon.com/it/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/#:%7E:text=kubectl%20get%20po%20%2D%2Dnamespace%3Dkube%2Dsystem" rel="nofollow noreferrer">this step</a> where I have to run:</p> <pre><code>kubectl get po --namespace=kube-system </code></pre> <p>The expected output should be:</p> <pre><code>csi-secrets-store-qp9r8 3/3 Running 0 4m csi-secrets-store-zrjt2 3/3 Running 0 4m </code></pre> <p>but instead I get:</p> <pre><code>csi-secrets-store-provider-aws-lxxcz 1/1 Running 0 5d17h csi-secrets-store-provider-aws-rhnc6 1/1 Running 0 5d17h csi-secrets-store-secrets-store-csi-driver-ml6jf 3/3 Running 0 5d18h csi-secrets-store-secrets-store-csi-driver-r5cbk 3/3 Running 0 5d18h </code></pre> <p>As you can see the names are different, but I'm quite sure it's ok :-)</p> <p>The real problem starts <a href="https://aws.amazon.com/it/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/#:%7E:text=kubectl%20apply%20%2Df%20%2D-,Step%204,-%3A%20Create%20and%20deploy" rel="nofollow noreferrer">here in step 4</a>: I created the following YAML file (as you ca see I added some parameters):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1 kind: SecretProviderClass metadata: name: aws-secrets spec: provider: aws parameters: objects: | - objectName: &quot;mysecret&quot; objectType: &quot;secretsmanager&quot; </code></pre> <p>And finally I created a deploy (as explain <a href="https://aws.amazon.com/it/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/#:%7E:text=MySecret2%22%0A%20%20%20%20%20%20%20%20objectType%3A%20%22secretsmanager%22-,Step%205,-%3A%20Configure%20and%20deploy" rel="nofollow noreferrer">here in step 5</a>) using the following yaml file:</p> <pre class="lang-yaml prettyprint-override"><code># test-deployment.yaml kind: Pod apiVersion: v1 metadata: name: nginx-secrets-store-inline spec: serviceAccountName: iamserviceaccountforkeyvaultsecretmanagerresearch containers: - image: nginx name: nginx volumeMounts: - name: mysecret-volume mountPath: &quot;/mnt/secrets-store&quot; readOnly: true volumes: - name: mysecret-volume csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: &quot;aws-secrets&quot; </code></pre> <p>After the deployment through the command:</p> <pre><code>kubectl apply -f test-deployment.yaml -n mynamespace </code></pre> <p>The pod is not able to start properly because the following error is generated:</p> <pre><code>Error from server (BadRequest): container &quot;nginx&quot; in pod &quot;nginx-secrets-store-inline&quot; is waiting to start: ContainerCreating </code></pre> <p>But, for example, if I run the deployment with the following yaml <strong>the POD will be successfully created</strong></p> <pre class="lang-yaml prettyprint-override"><code># test-deployment.yaml kind: Pod apiVersion: v1 metadata: name: nginx-secrets-store-inline spec: 
serviceAccountName: iamserviceaccountforkeyvaultsecretmanagerresearch containers: - image: nginx name: nginx volumeMounts: - name: keyvault-credential-volume mountPath: &quot;/mnt/secrets-store&quot; readOnly: true volumes: - name: keyvault-credential-volume emptyDir: {} # &lt;&lt;== !! LOOK HERE !! </code></pre> <p>as you can see I used</p> <pre><code>emptyDir: {} </code></pre> <p>So as far as I can see the <strong>problem</strong> here is related to the following YAML lines:</p> <pre class="lang-yaml prettyprint-override"><code> csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: &quot;aws-secrets&quot; </code></pre> <p>To be honest, it's not even clear in my mind what's happening here. Perhaps I didn't properly enable the volume permissions in EKS?</p> <p>Sorry but I'm a newbie in both AWS and Kubernetes configurations. Thanks for your time.</p> <p>--- NEW INFO ---</p> <p>If I run</p> <pre><code>kubectl describe pod nginx-secrets-store-inline -n mynamespace </code></pre> <p>where <em>nginx-secrets-store-inline</em> is the name of the pod, I get the following output:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 30s default-scheduler Successfully assigned mynamespace/nginx-secrets-store-inline to ip-10-0-24-252.eu-central-1.compute.internal Warning FailedMount 14s (x6 over 29s) kubelet MountVolume.SetUp failed for volume &quot;keyvault-credential-volume&quot; : rpc error: code = Unknown desc = failed to get secretproviderclass mynamespace/aws-secrets, error: SecretProviderClass.secrets-store.csi.x-k8s.io &quot;aws-secrets&quot; not found </code></pre> <p>Any hints?</p>
brian enno
<p>Finally I realized why it wasn't working. As explained <a href="https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html#common-errors" rel="nofollow noreferrer">here</a>, the error:</p> <pre><code> Warning FailedMount 3s (x4 over 6s) kubelet, kind-control-plane MountVolume.SetUp failed for volume &quot;secrets-store-inline&quot; : rpc error: code = Unknown desc = failed to get secretproviderclass default/azure, error: secretproviderclasses.secrets-store.csi.x-k8s.io &quot;azure&quot; not found </code></pre> <p>is related to namespace:</p> <blockquote> <p>The SecretProviderClass being referenced in the volumeMount needs to exist in the same namespace as the application pod.</p> </blockquote> <p>So both the yaml file should be deployed in the same namespace (adding, for example, the <em>-n mynamespace</em> argument). Finally I got it working!</p>
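<p>In other words, apply both manifests into the pod's namespace and check that the SecretProviderClass is visible there; the file names below are placeholders:</p> <pre><code>kubectl apply -f secret-provider-class.yaml -n mynamespace
kubectl apply -f test-deployment.yaml -n mynamespace

# The SecretProviderClass must show up in the same namespace as the pod
kubectl get secretproviderclass -n mynamespace
kubectl describe pod nginx-secrets-store-inline -n mynamespace
</code></pre>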
brian enno
<p>When I try to create a Jenkins X Kubernetes cluster with GKE using this command:</p> <pre><code>jx create cluster gke --skip-login </code></pre> <p>The following exception is thrown at the end of the installation:</p> <pre><code>error creating cluster configuring Jenkins: creating Jenkins API token: after 3 attempts, last error: creating Jenkins Auth configuration: secrets &quot;jenkins&quot; not found </code></pre> <p>During installation I select the default settings and provide my own GitHub settings, including a generated personal access token, but I don't think the GitHub token is the issue in this case (I'm pretty sure all my GitHub settings are correct).</p>
Mykhailo Skliar
<p>The problem has been solved by using --tekton flag:</p> <pre><code>jx create cluster gke --skip-login --tekton </code></pre>
Mykhailo Skliar
<p>I want to deploy an app on a K8s cluster using Deployment and Service objects. I want to create two Service objects and map both to the Deployment (using labels and selectors). I want to know if this is possible.</p> <pre><code>deployment: name: test-deploy labels: test service 1: name: test-service-1 selectors: test service 2: name: test-service-2 selectors: test </code></pre> <p>I'm confused about how K8s resolves this conflict. Can I try this in order to create two services?</p> <pre><code>deployment: name: test-deploy labels: - test - svc1 - svc2 service 1: name: test-service-1 selectors: - test - svc1 service 2: name: test-service-2 selectors: - test - svc2 </code></pre>
enigma
<blockquote> <p>I want to create two service objects and map it to the deployment (using labels and selectors). I want to know if this possible?</p> </blockquote> <p>Yes. Apply the following spec:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: replicas: 2 selector: matchLabels: key1: test key2: svc1 key3: svc2 template: metadata: labels: key1: test key2: svc1 key3: svc2 spec: containers: - name: nginx image: nginx:alpine ports: - name: http containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx-service-1 spec: selector: key1: test key2: svc1 ports: - name: http port: 8080 targetPort: http --- apiVersion: v1 kind: Service metadata: name: nginx-service-2 spec: selector: key1: test key3: svc2 ports: - name: http port: 8080 targetPort: http ... </code></pre> <p>Test with:</p> <p>kubectl port-forward svc/nginx-service-1 8080:http</p> <p>kubectl port-forward svc/nginx-service-2 8081:http</p> <p>curl localhost:8080</p> <p>curl localhost:8081</p> <p>You get response from respective service that back'ed by the same Deployment.</p>
gohm'c
<p>When defining the Taints &amp; Tolerations, we defined the Taint as below:</p> <pre><code>kubectl taint nodes node1 key1=value1:NoSchedule </code></pre> <p>Now any pod that does not have toleration defined as below will not be scheduled on node1. And the one that has toleration defined, gets scheduled on this node. But, why do we need to define NoSchedule on the POD? It is already defined on the node.</p> <pre><code>tolerations: - key: &quot;key1&quot; operator: &quot;Equal&quot; value: &quot;value1&quot; effect: &quot;NoSchedule&quot; </code></pre> <p>What impact does it have if:</p> <ol> <li>The node effect is NoSchedule</li> </ol> <pre><code>kubectl taint nodes node1 key1=value1:NoSchedule </code></pre> <ol start="2"> <li>But the POD toleration is NoExecute</li> </ol> <pre><code>tolerations: - key: &quot;key1&quot; operator: &quot;Equal&quot; value: &quot;value1&quot; effect: &quot;NoExecute&quot; </code></pre> <p>Note: I understand that it is trying to match not just &quot;taint value&quot; but also the &quot;taint effect&quot;. But is there any use case for matching &quot;taint effect&quot; as well?</p> <blockquote> <p>tolerations.effect (string) Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.</p> </blockquote> <p>Thanks</p>
Buggy B
<blockquote> <p>What impact does it have if:</p> <ol> <li>The node effect is NoSchedule</li> </ol> <p>kubectl taint nodes node1 key1=value1:NoSchedule</p> <ol start="2"> <li>But the POD toleration is NoExecute</li> </ol> </blockquote> <p>The pod will not be scheduled on a node whose taint it fails to tolerate, e.g. your sample pod will not be scheduled on a node tainted with <code>NoSchedule</code> because it only tolerates <code>NoExecute</code>.</p> <p><code>...use case for matching &quot;taint effect&quot;</code></p> <p>Not sure what is meant here, but it is possible to tolerate a key with any effect by specifying only the key and value, as in the snippet below.</p>
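<p>For reference, a toleration that omits <code>effect</code> matches all effects for that key/value pair:</p> <pre><code>tolerations:
- key: &quot;key1&quot;
  operator: &quot;Equal&quot;
  value: &quot;value1&quot;
  # no &quot;effect&quot; field: tolerates NoSchedule, PreferNoSchedule and NoExecute taints with key1=value1
</code></pre>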
gohm'c
<p>I just updated an old kubernetes cluster from version 1.12 to 1.13, and I'm trying to generate new config files for it so I can continue to use the cluster after the old expiration date.</p> <p>It doesn't appear that I am able to generate new config files in this version of kubeadm though, which seems odd. So I'm hoping I'm missing some painfully obvious solution here.</p> <p>I know in other versions of kubeadm (both older and newer), you can run commands similar to</p> <pre><code>sudo kubeadm alpha phase kubeconfig all </code></pre> <p>or</p> <pre><code>kubeadm alpha certs renew admin.conf </code></pre> <p>to generate new confs, but from what I can tell, kubeadm in 1.13 does not have options for the conf files, only the certs. So I was hoping someone might know of a way to generate new versions of the following files for a v1.13 kubernetes cluster using kubeadm:</p> <ul> <li>admin.conf</li> <li>kubelet.conf</li> <li>controller-manager.conf</li> <li>scheduler.conf</li> </ul>
Shirkie
<p>Found the answer. I was missing something painfully obvious, as I expected. The command to generate the configs lives under the init group of commands rather than under the alpha group, where I was used to finding it.</p> <pre><code>kubeadm init phase kubeconfig all </code></pre>
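<p>If you only need to regenerate one of the files, the same phase can be run per file (these are the sub-phases I recall being available in v1.13; check <code>kubeadm init phase kubeconfig --help</code> on your version to confirm):</p> <pre><code>kubeadm init phase kubeconfig admin
kubeadm init phase kubeconfig kubelet
kubeadm init phase kubeconfig controller-manager
kubeadm init phase kubeconfig scheduler
</code></pre>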
Shirkie
<p>In one of our customer's kubernetes cluster(v1.16.8 with kubeadm) RBAC does not work at all. We creating a ServiceAccount, read-only ClusterRole and ClusterRoleBinding with the following yamls but when we login trough dashboard or kubectl user can almost do anything in the cluster. What can cause this problem?</p> <pre class="lang-yaml prettyprint-override"><code>kind: ServiceAccount apiVersion: v1 metadata: name: read-only-user namespace: permission-manager secrets: - name: read-only-user-token-7cdx2 </code></pre> <pre class="lang-yaml prettyprint-override"><code>kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-only-user___template-namespaced-resources___read-only___all_namespaces labels: generated_for_user: '' subjects: - kind: ServiceAccount name: read-only-user namespace: permission-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: template-namespaced-resources___read-only </code></pre> <pre class="lang-yaml prettyprint-override"><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: template-namespaced-resources___read-only rules: - verbs: - get - list - watch apiGroups: - '*' resources: - configmaps - endpoints - persistentvolumeclaims - pods - pods/log - pods/portforward - podtemplates - replicationcontrollers - resourcequotas - secrets - services - events - daemonsets - deployments - replicasets - ingresses - networkpolicies - poddisruptionbudgets </code></pre> <p>Here is the cluster's kube-apiserver.yaml file content:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: component: kube-apiserver tier: control-plane name: kube-apiserver namespace: kube-system spec: containers: - command: - kube-apiserver - --advertise-address=192.168.1.42 - --allow-privileged=true - --authorization-mode=Node,RBAC - --client-ca-file=/etc/kubernetes/pki/ca.crt - --enable-admission-plugins=NodeRestriction - --enable-bootstrap-token-auth=true - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key - --etcd-servers=https://127.0.0.1:2379 - --insecure-port=0 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key - --requestheader-allowed-names=front-proxy-client - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt - --requestheader-extra-headers-prefix=X-Remote-Extra- - --requestheader-group-headers=X-Remote-Group - --requestheader-username-headers=X-Remote-User - --secure-port=6443 - --service-account-key-file=/etc/kubernetes/pki/sa.pub - --service-cluster-ip-range=10.96.0.0/12 - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key image: k8s.gcr.io/kube-apiserver:v1.16.8 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 192.168.1.42 path: /healthz port: 6443 scheme: HTTPS initialDelaySeconds: 15 timeoutSeconds: 15 name: kube-apiserver resources: requests: cpu: 250m volumeMounts: - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /etc/ca-certificates name: etc-ca-certificates readOnly: true - mountPath: 
/etc/kubernetes/pki name: k8s-certs readOnly: true - mountPath: /usr/local/share/ca-certificates name: usr-local-share-ca-certificates readOnly: true - mountPath: /usr/share/ca-certificates name: usr-share-ca-certificates readOnly: true hostNetwork: true priorityClassName: system-cluster-critical volumes: - hostPath: path: /etc/ssl/certs type: DirectoryOrCreate name: ca-certs - hostPath: path: /etc/ca-certificates type: DirectoryOrCreate name: etc-ca-certificates - hostPath: path: /etc/kubernetes/pki type: DirectoryOrCreate name: k8s-certs - hostPath: path: /usr/local/share/ca-certificates type: DirectoryOrCreate name: usr-local-share-ca-certificates - hostPath: path: /usr/share/ca-certificates type: DirectoryOrCreate name: usr-share-ca-certificates status: {} </code></pre>
Zekeriya Akgül
<p>What you have defined is only control the service account. Here's a tested spec; create a yaml file with:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: v1 kind: ServiceAccount metadata: name: test-sa namespace: test --- kind: ClusterRoleBinding # &lt;-- REMINDER: Cluster wide and not namespace specific. Use RoleBinding for namespace specific. apiVersion: rbac.authorization.k8s.io/v1 metadata: name: test-role-binding subjects: - kind: ServiceAccount name: test-sa namespace: test - kind: User name: someone apiGroup: rbac.authorization.k8s.io roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: test-cluster-role --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: test-cluster-role rules: - verbs: - get - list - watch apiGroups: - '*' resources: - configmaps - endpoints - persistentvolumeclaims - pods - pods/log - pods/portforward - podtemplates - replicationcontrollers - resourcequotas - secrets - services - events - daemonsets - deployments - replicasets - ingresses - networkpolicies - poddisruptionbudgets </code></pre> <p>Apply the above spec: <code>kubectl apply -f &lt;filename&gt;.yaml</code></p> <p>Work as expected:</p> <p><a href="https://i.stack.imgur.com/OmWuP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OmWuP.png" alt="enter image description here" /></a></p> <p>Delete the test resources: <code>kubectl delete -f &lt;filename&gt;.yaml</code></p>
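<p>A quick way to check the effective permissions without logging in through the dashboard is <code>kubectl auth can-i</code> with impersonation, shown here against the test service account from the spec above:</p> <pre><code># Allowed by the read-only ClusterRole
kubectl auth can-i list pods --as=system:serviceaccount:test:test-sa

# Should be denied, since no rule grants write access
kubectl auth can-i delete deployments --as=system:serviceaccount:test:test-sa

# RBAC is additive: also check whether the user/service account is covered
# by other bindings (e.g. cluster-admin) that would explain the extra access
kubectl get clusterrolebindings -o wide
</code></pre>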
gohm'c
<p>What is the default value for allowVolumeExpansion? I create my volumes through a StatefulSet (apiVersion: apps/v1) using volumeClaimTemplates. In case the answer is false, how can I change it to true?</p> <p>Potentially relevant info: the cluster is running on GKE Autopilot.</p>
Alex Skotner
<p>You can find out by looking at the StorageClass that your claim is using: <code>kubectl describe StorageClass &lt;name&gt;</code></p> <pre><code>volumeClaimTemplates: - ... spec: storageClassName: &lt;name&gt; # &lt;-- check using this name </code></pre> <p>On recent GKE versions the default is true. More about this field can be found <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-expansion" rel="nofollow noreferrer">here</a>.</p>
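<p>If it turns out to be false for your class, the field can be patched on the existing StorageClass (<code>kubectl patch storageclass &lt;name&gt; -p '{&quot;allowVolumeExpansion&quot;: true}'</code>), or you can define your own class and point the <code>volumeClaimTemplates</code> at it. A rough sketch, assuming the GCE PD CSI driver that current GKE clusters ship with; the class name and disk type are placeholders:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd                  # arbitrary name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd                          # or pd-balanced / pd-standard
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
# In the StatefulSet:
# volumeClaimTemplates:
# - metadata:
#     name: data
#   spec:
#     storageClassName: expandable-ssd
#     accessModes: [&quot;ReadWriteOnce&quot;]
#     resources:
#       requests:
#         storage: 10Gi
</code></pre>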
gohm'c
<p>I have got 2 deployments in my cluster UI and USER. Both of these are exposed by Cluster IP service. There is an ingress which makes both the services publicly accessible.</p> <p>Now when I do &quot;kubectl exec -it UI-POD -- /bin/sh&quot; and then try to &quot;ping USER-SERVICE-CLUSTER-IP:PORT&quot; it doesn't work.</p> <p>All I get is No packet returned i.e. a failure message.</p> <p>Attaching my .yml file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: user-service-app labels: app: user-service-app spec: replicas: 1 selector: matchLabels: app: user-service-app template: metadata: labels: app: user-service-app spec: containers: - name: user-service-app image: &lt;MY-IMAGE-URL&gt; imagePullPolicy: Always ports: - containerPort: 3000 livenessProbe: httpGet: path: /ping port: 3000 readinessProbe: httpGet: path: /ping port: 3000 --- apiVersion: &quot;v1&quot; kind: &quot;Service&quot; metadata: name: &quot;user-service-svc&quot; namespace: &quot;default&quot; labels: app: &quot;user-service-app&quot; spec: type: &quot;ClusterIP&quot; selector: app: &quot;user-service-app&quot; ports: - protocol: &quot;TCP&quot; port: 80 targetPort: 3000 --- apiVersion: apps/v1 kind: Deployment metadata: name: ui-service-app labels: app: ui-service-app spec: replicas: 1 selector: matchLabels: app: ui-service-app template: metadata: labels: app: ui-service-app spec: containers: - name: ui-service-app image: &lt;MY-IMAGE-URL&gt; imagePullPolicy: Always ports: - containerPort: 3000 --- apiVersion: &quot;v1&quot; kind: &quot;Service&quot; metadata: name: &quot;ui-service-svc&quot; namespace: &quot;default&quot; labels: app: &quot;ui-service-app&quot; spec: type: &quot;ClusterIP&quot; selector: app: &quot;ui-service-app&quot; ports: - protocol: &quot;TCP&quot; port: 80 targetPort: 3000 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: awesome-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: ingressClassName: nginx defaultBackend: service: name: ui-service-svc port: number: 80 rules: - http: paths: - path: /login pathType: Prefix backend: service: name: ui-service-svc port: number: 80 - path: /user(/|$)(.*) pathType: Prefix backend: service: name: user-service-svc port: number: 80 </code></pre>
Yashvardhan Nathawat
<p><a href="https://en.wikipedia.org/wiki/Ping_(networking_utility)" rel="nofollow noreferrer">Ping</a> operates by means of Internet Control Message Protocol (ICMP) packets. This is not what your service is serving. You can try <code>curl USER-SERVICE-CLUSTER-IP/ping</code> or <code>curl http://user-service-svc/ping</code> within your UI pod.</p>
gohm'c
<p>I am working with k8s and Istio as the service mesh. I wonder, if a pod behind service A is not ready (its readiness probe is unhealthy), how will Istio treat this pod? Is there a way I can configure load-balancing rules (load balancing at the L3/L4 layer)?</p>
Blind
<p>You might want to check <a href="https://istio.io/latest/docs/ops/configuration/mesh/app-health-check/" rel="nofollow noreferrer">Health Checking of Istio Services</a> to check the health of your pods. As mentioned in the post, you would have to configure the containers with <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">liveness probes</a> using kubectl before you can actually do health checking.</p>
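<p>For completeness, a minimal probe sketch; the path, port and timings are placeholders for whatever health endpoint your service A containers expose:</p> <pre><code>containers:
- name: service-a
  image: service-a:latest          # placeholder image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz               # hypothetical health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
</code></pre> <p>While the readiness probe is failing, the pod is removed from the service's endpoints, so the Istio sidecars stop routing new requests to it, the same way plain kube-proxy would.</p>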
CyG
<p>I'm trying to deploy a docker image (<a href="https://hub.docker.com/r/digitorus/eramba-db" rel="nofollow noreferrer">https://hub.docker.com/r/digitorus/eramba-db</a>) to Kubernetes. My workflow is using <code>docker pull digitorus/eramba-db</code> to pull the image and using the below .yaml file to deploy to a separate namespace (eramba-1)</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: eramba namespace: eramba-1 labels: app: eramba spec: replicas: 1 selector: matchLabels: app: eramba template: metadata: labels: app: eramba spec: containers: - name: eramba image: docker.io/digitorus/eramba:latest ports: - containerPort: 80 </code></pre> <p>The master node has a status of (notReady) and the pod is pending.</p>
Bryan
<pre><code>Taints: node.kubernetes.io/not-ready:NoSchedule ... Namespace Name --------- ---- kube-system etcd-osboxes kube-system kube-apiserver-osboxes kube-system kube-controller-manager-osboxes kube-system kube-proxy-hhgwr kube-system kube-scheduler-osboxes ... </code></pre> <p>After running kubeadm, which installs the core k8s components, your cluster needs a network (CNI) plugin installed and functioning so that the node can become Ready for workload deployment. Then you can remove the &quot;master&quot; taint with <code>kubectl taint nodes --all node-role.kubernetes.io/master-</code> so that pods can be deployed on this single node.</p>
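<p>A rough sequence of what that looks like in practice; the node name is taken from the pod names above, and the CNI manifest URL is deliberately left as a placeholder since it depends on which plugin (flannel, Calico, ...) and version you pick:</p> <pre><code># The node stays NotReady and keeps the not-ready taint until a CNI is running
kubectl get nodes
kubectl describe node osboxes | grep -i -A2 taint

# Install the CNI plugin of your choice
kubectl apply -f &lt;cni-manifest.yaml&gt;

# Once the node reports Ready, allow workloads on this single (master) node
kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>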
gohm'c
<p>I've got a database running in a private network (say IP 1.2.3.4).</p> <p>In my own computer, I can do these steps in order to access the database:</p> <ul> <li>Start a Docker container using something like <code>docker run --privileged --sysctl net.ipv4.ip_forward=1 ...</code></li> <li>Get the container IP</li> <li>Add a routing rule, such as <code>ip route add 1.2.3.4/32 via $container_ip</code></li> </ul> <p>And then I'm able to connect to the database as usual.</p> <p>I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way.</p> <p>PS: I'm aware of the sidecar pattern, but I don't think this would be ideal for our use case, as our jobs are short-lived tasks, and we are not able to run multiple &quot;gateway&quot; containers at the same time.</p>
Gabriel Milan
<p><code>I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way.</code></p> <p>You can start a GKE cluster in a fully private network like <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">this</a>, then run the applications that need to be fully private in that cluster. Access to such a cluster is only possible when explicitly granted, similar in spirit to the commands you used in your question, except that now you use the cloud platform's controls (e.g. service controls, a bastion host, etc.), so there is no need to &quot;route traffic through a specific pod in Kubernetes for certain IPs&quot;. But if you have to run everything in one cluster, a fully private cluster will likely not work for you; in that case you can use a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy" rel="nofollow noreferrer">network policy</a> to control access to your database pod (a sketch follows below).</p>
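<p>A rough NetworkPolicy sketch for the in-cluster option; it assumes the database pod is labelled <code>app: database</code> and that only pods labelled <code>role: db-client</code> should reach it on port 5432 (adjust labels, namespace and port to your setup), and it requires a network-policy-capable dataplane, which on GKE means enabling network policy or using Dataplane V2:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-clients
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: database            # hypothetical label on the database pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: db-client      # hypothetical label on the allowed clients
    ports:
    - protocol: TCP
      port: 5432
</code></pre>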
gohm'c
<p>Following the instructions on the Keycloak docs site below, I'm trying to set up Keycloak to run in a Kubernetes cluster. I have an Ingress Controller set up which successfully works for a simple test page. Cloudflare points the domain to the ingress controllers IP.</p> <p>Keycloak deploys successfully (<code>Admin console listening on http://127.0.0.1:9990</code>), but when going to the domain I get a message from NGINX: <code>503 Service Temporarily Unavailable</code>.</p> <p><a href="https://www.keycloak.org/getting-started/getting-started-kube" rel="nofollow noreferrer">https://www.keycloak.org/getting-started/getting-started-kube</a></p> <p>Here's the Kubernetes config:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: keycloak-cip spec: type: ClusterIP ports: - port: 80 targetPort: 8080 selector: name: keycloak --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: test-ingress annotations: kubernetes.io/ingress.class: nginx service.beta.kubernetes.io/linode-loadbalancer-default-protocol: https service.beta.kubernetes.io/linode-loadbalancer-port-443: '{ &quot;tls-secret-name&quot;: &quot;my-secret&quot;, &quot;protocol&quot;: &quot;https&quot; }' spec: rules: - host: my.domain.com http: paths: - backend: serviceName: keycloak-cip servicePort: 8080 tls: - hosts: - my.domain.com secretName: my-secret --- apiVersion: apps/v1 kind: Deployment metadata: name: keycloak namespace: default labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak spec: containers: - name: keycloak image: quay.io/keycloak/keycloak:12.0.3 env: - name: KEYCLOAK_USER value: &quot;admin&quot; - name: KEYCLOAK_PASSWORD value: &quot;admin&quot; - name: PROXY_ADDRESS_FORWARDING value: &quot;true&quot; ports: - name: http containerPort: 8080 - name: https containerPort: 8443 readinessProbe: httpGet: path: /auth/realms/master port: 8080 initialDelaySeconds: 90 periodSeconds: 5 failureThreshold: 30 successThreshold: 1 revisionHistoryLimit: 1 </code></pre> <hr /> <p>Edit:</p> <p>TLS should be handled by the ingress controller.</p> <p>--</p> <p>Edit 2:</p> <p>If I go into the controller using kubectl exec, I can do <code>curl -L http://127.0.0.1:8080/auth</code> which successfully retrieves the page: <code>&lt;title&gt;Welcome to Keycloak&lt;/title&gt;</code>. So I'm sure that keycloak is running. It's just that either traffic doesn't reach the pod, or keycloak doesn't respond.</p> <p>If I use the ClusterIP instead but otherwise keep the call above the same, I get a <code>Connection timed out</code>. I tried both ports 80 and 8080 with the same result.</p>
Martin01478
<p>The following configuration is required to run <strong>keycloak</strong> behind <strong>ingress controller</strong>:</p> <pre><code>- name: PROXY_ADDRESS_FORWARDING value: &quot;true&quot; - name: KEYCLOAK_HOSTNAME value: &quot;my.domain.com&quot; </code></pre> <p>So I think adding correct <strong>KEYCLOAK_HOSTNAME</strong> value should solve your issue.</p> <p>I had a similar issue with Traefik Ingress Controller: <strong><a href="https://stackoverflow.com/questions/67828817/cant-expose-keycloak-server-on-aws-with-traefik-ingress-controller-and-aws-http">Can&#39;t expose Keycloak Server on AWS with Traefik Ingress Controller and AWS HTTPS Load Balancer</a></strong></p> <p>You can find the full code of my configuration here: <strong><a href="https://github.com/skyglass-examples/user-management-keycloak" rel="nofollow noreferrer">https://github.com/skyglass-examples/user-management-keycloak</a></strong></p>
Mykhailo Skliar
<p>I'm working with a microservice architecture using Azure AKS with Istio.</p> <p>I configured it all, and developers work on the microservices to create the web platform, APIs, etc.</p> <p>But with this, I have a doubt. There is a lot of YAML to configure for Istio and Kubernetes, e.g. <code>Ingress</code>, <code>VirtualService</code>, <code>Gateway</code>, etc.</p> <p>Is this configuration part of the developer's responsibility? Should they create and configure it? Or are these configuration files the responsibility of the DevOps team, so that developers are only responsible for creating the Node.js project, and the DevOps team configures it to run on the k8s architecture?</p>
mpanichella
<p>A developer needs to</p> <ol> <li>focus on their business logic, and</li> <li>know where their code is going to run and under what kind of environment.</li> </ol> <p>1) is quite obvious here. 2) is often left implicit, and I think that if developers believe they do not own the runtime configuration, it's like throwing the responsibility over the wall.</p> <p>Let's say, for example, the app is going to be exposed by an ingress controller. The app developer needs to ensure</p> <ul> <li>that the app works well with http and https traffic (in case we are doing ssl passthrough), and</li> <li>that all the resource URLs/paths and the right ports are exposed and registered with the ingress.</li> </ul> <p>The same argument can be extended to other resource types, say Virtual Machines or Deployment specs.</p> <p>Now, if developers believe it is not their responsibility to write these yaml files, they still need to document the contract of what their service needs so that another &quot;person&quot; can write the configs. But aren't the yamls themselves that contract?</p>
Neel
<p>I am trying to create another Issuer can for another subdomain. I am following this example: <a href="https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/03-setup-ingress-controller/nginx.md" rel="nofollow noreferrer">Digital Ocean Kubernetes tutorial</a> and in this example the author gives an example for the <a href="http://echo.starter-kit.online/" rel="nofollow noreferrer">http://echo.starter-kit.online/</a> subdomain which I was able to get working using my own subdomain.</p> <p>I am trying to get this working for the quote.starter-kit.online example by creating a new Issuer like following:</p> <pre><code>--- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: quote-letsencrypt-nginx namespace: backend spec: # ACME issuer configuration # `email` - the email address to be associated with the ACME account (make sure it's a valid one) # `server` - the URL used to access the ACME server’s directory endpoint # `privateKeySecretRef` - Kubernetes Secret to store the automatically generated ACME account private key acme: email: [email protected] server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: quote-letsencrypt-nginx-private-key solvers: # Use the HTTP-01 challenge provider - http01: ingress: class: nginx </code></pre> <p>And the following Ingress rule for the quote subdomain:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-quote namespace: backend annotations: cert-manager.io/issuer: letsencrypt-nginx spec: tls: - hosts: - quote.mydomain.com secretName: quote-letsencrypt rules: - host: quote.mydomain.com http: paths: - path: / pathType: Prefix backend: service: name: quote port: number: 8080 ingressClassName: nginx </code></pre> <p>when I do the following:</p> <pre><code>&gt;kubectl get certificates -n backend NAME READY SECRET AGE letsencrypt-nginx True letsencrypt-nginx 5d2h quote-letsencrypt-nginx False quote-letsencrypt-nginx 2s </code></pre> <p>I can see the certs. However, when I do the following I see the https is not working:</p> <pre><code> curl -Li quote.mydomain.com HTTP/1.1 308 Permanent Redirect Date: Sun, 02 Jan 2022 23:49:40 GMT Content-Type: text/html Content-Length: 164 Connection: keep-alive Location: https://quote.mydomain.com curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. </code></pre>
Katlock
<p>Try:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-quote namespace: backend annotations: cert-manager.io/issuer: quote-letsencrypt-nginx # &lt;-- changed spec: tls: - hosts: - quote.mydomain.com secretName: quote-letsencrypt-tls rules: ... </code></pre>
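<p>Once the annotation points at the right Issuer, the chain can be checked with the usual cert-manager resources (assuming its CRDs are installed as in the tutorial):</p> <pre><code># The certificate for the quote host should eventually report Ready=True
kubectl get certificate -n backend
kubectl describe certificate &lt;certificate-name&gt; -n backend

# If it stays False, walk down the chain to see where the HTTP-01 challenge is stuck
kubectl get certificaterequest,order,challenge -n backend
</code></pre>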
gohm'c
<p>I was able to successfully start <strong>keycloak server</strong> on <strong>AWS K3S Kubernetes Cluster</strong> with <strong>Istio Gateway</strong> and <strong>AWS HTTPS Application Load Balancer</strong>.</p> <p>I can successfully see <strong>Keycloak Home Page</strong>: <strong><a href="https://keycloak.skycomposer.net/auth/" rel="nofollow noreferrer">https://keycloak.skycomposer.net/auth/</a></strong></p> <p>But when I click on <strong>Admin Console</strong> link, then the <strong>Blank Page</strong> is shown: <strong><a href="https://keycloak.skycomposer.net/auth/admin/master/console/" rel="nofollow noreferrer">https://keycloak.skycomposer.net/auth/admin/master/console/</a></strong></p> <p><strong>Browser Inspect Tool</strong> shows that: <strong><a href="http://keycloak.skycomposer.net/auth/js/keycloak.js?version=rk826" rel="nofollow noreferrer">http://keycloak.skycomposer.net/auth/js/keycloak.js?version=rk826</a></strong> link returns the following status:</p> <pre><code>(blocked:mixed-content) </code></pre> <p>I did some research on the internet and the reason seems to be related with redirection from <strong>https</strong> to <strong>http</strong>, which is not correctly handled by <strong>istio gateway</strong> and <strong>aws load balancer</strong></p> <p>But unfortunately, I couldn't find the solution, how to solve it for my particular environment.</p> <p>Here are my configuration files:</p> <p><strong>keycloak-config.yaml:</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: keycloak data: KEYCLOAK_USER: admin@keycloak KEYCLOAK_MGMT_USER: mgmt@keycloak JAVA_OPTS_APPEND: '-Djboss.http.port=8080' PROXY_ADDRESS_FORWARDING: 'true' KEYCLOAK_HOSTNAME: 'keycloak.skycomposer.net' KEYCLOAK_FRONTEND_URL: 'https://keycloak.skycomposer.net/auth' KEYCLOAK_LOGLEVEL: INFO ROOT_LOGLEVEL: INFO DB_VENDOR: H2 </code></pre> <p><strong>keycloak-deployment.yaml:</strong></p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: keycloak labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak annotations: sidecar.istio.io/rewriteAppHTTPProbers: &quot;true&quot; spec: containers: - name: keycloak image: jboss/keycloak:13.0.1 imagePullPolicy: Always ports: - containerPort: 8080 hostPort: 8080 volumeMounts: - name: keycloak-data mountPath: /opt/jboss/keycloak/standalone/data env: - name: KEYCLOAK_USER valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_USER - name: KEYCLOAK_MGMT_USER valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_MGMT_USER - name: JAVA_OPTS_APPEND valueFrom: configMapKeyRef: name: keycloak key: JAVA_OPTS_APPEND - name: DB_VENDOR valueFrom: configMapKeyRef: name: keycloak key: DB_VENDOR - name: PROXY_ADDRESS_FORWARDING valueFrom: configMapKeyRef: name: keycloak key: PROXY_ADDRESS_FORWARDING - name: KEYCLOAK_HOSTNAME valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_HOSTNAME - name: KEYCLOAK_FRONTEND_URL valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_FRONTEND_URL - name: KEYCLOAK_LOGLEVEL valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_LOGLEVEL - name: ROOT_LOGLEVEL valueFrom: configMapKeyRef: name: keycloak key: ROOT_LOGLEVEL - name: KEYCLOAK_PASSWORD valueFrom: secretKeyRef: name: keycloak key: KEYCLOAK_PASSWORD - name: KEYCLOAK_MGMT_PASSWORD valueFrom: secretKeyRef: name: keycloak key: KEYCLOAK_MGMT_PASSWORD volumes: - name: keycloak-data persistentVolumeClaim: claimName: keycloak-pvc </code></pre> <p><strong>keycloak-service.yaml:</strong></p> 
<pre><code>apiVersion: v1 kind: Service metadata: name: keycloak spec: ports: - protocol: TCP name: http port: 80 targetPort: 8080 selector: app: keycloak </code></pre> <p><strong>istio-gateway.yaml:</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - &quot;keycloak.skycomposer.net&quot; </code></pre> <p><strong>istio-virtualservice.yaml:</strong></p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: keycloak spec: hosts: - keycloak.skycomposer.net gateways: - istio-gateway http: - match: - uri: prefix: / route: - destination: host: keycloak.default.svc.cluster.local port: number: 80 </code></pre> <p>I successfully installed <strong>istio 1.9.1</strong> with <strong>istioctl</strong>:</p> <pre><code>istioctl install \ --set meshConfig.accessLogFile=/dev/stdout \ --skip-confirmation </code></pre> <p>Also, I labelled default namespace with <strong>istio injection</strong>, so all my pods in default namespace have <strong>istio sidecar container</strong>:</p> <pre><code>kubectl label namespace default istio-injection=enabled NAME READY STATUS RESTARTS AGE whoami-6c4757bbb5-9zkbl 2/2 Running 0 13m notification-microservice-5dfcf96b95-ll8lm 2/2 Running 0 13m customermgmt-6b48586868-ddlnw 2/2 Running 0 13m usermgmt-c5b65964-df2vc 2/2 Running 0 13m keycloak-d48f9bbbf-tsm5h 2/2 Running 0 13m </code></pre> <p>Here is also <strong>terraform</strong> configuration of <strong>AWS Load Balancer</strong>:</p> <pre><code>resource &quot;aws_lb&quot; &quot;mtc_lb&quot; { name = &quot;mtc-loadbalancer&quot; subnets = var.public_subnets security_groups = [var.public_sg] idle_timeout = 400 } resource &quot;aws_lb_target_group&quot; &quot;mtc_tg&quot; { name = &quot;mtc-lb-tg-${substr(uuid(), 0, 3)}&quot; port = var.tg_port protocol = var.tg_protocol vpc_id = var.vpc_id lifecycle { create_before_destroy = true ignore_changes = [name] } health_check { healthy_threshold = var.elb_healthy_threshold unhealthy_threshold = var.elb_unhealthy_threshold timeout = var.elb_timeout interval = var.elb_interval } } resource &quot;aws_lb_listener&quot; &quot;mtc_lb_listener_http&quot; { load_balancer_arn = aws_lb.mtc_lb.arn port = 80 protocol = &quot;HTTP&quot; default_action { type = &quot;redirect&quot; redirect { port = &quot;443&quot; protocol = &quot;HTTPS&quot; status_code = &quot;HTTP_301&quot; } } } resource &quot;aws_lb_listener&quot; &quot;mtc_lb_listener&quot; { load_balancer_arn = aws_lb.mtc_lb.arn port = 443 protocol = &quot;HTTPS&quot; depends_on = [aws_lb_target_group.mtc_tg] certificate_arn = var.certificate_arn default_action { type = &quot;forward&quot; target_group_arn = aws_lb_target_group.mtc_tg.arn } } </code></pre>
Mykhailo Skliar
<p>Investigating <strong>request headers</strong>, I finally found the cause of the issue.</p> <p>This header was always &quot;<strong>http</strong>&quot; by default:</p> <pre><code>X-Forwarded-Proto: http </code></pre> <p>Changing the value to:</p> <pre><code>X-Forwarded-Proto: https </code></pre> <p>solved the issue.</p> <p>Here is the example of <strong>Istio Virtual Service</strong>, which sets &quot;<strong>X-Forwarded-Proto</strong>&quot; request header to &quot;<strong>https</strong>&quot; for all requests:</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: keycloak spec: hosts: - keycloak.skycomposer.net gateways: - istio-gateway http: - match: - uri: prefix: / route: - destination: host: keycloak.default.svc.cluster.local port: number: 80 headers: request: set: x-forwarded-proto: https </code></pre> <p>P.S. Ideal solution would be to set this value in <strong>AWS Application Load Balancer</strong>, but I wasn't sure how to do it with my <strong>terraform configuration</strong> of <strong>aws load balancer</strong>, so I decided to solve it on <strong>Istio Virtual Service</strong> level.</p>
Mykhailo Skliar
<p>I've been trying to run few services in AWS EKS Cluster. I followed the ingress-nginx guide to get https with AWS ACM certificate</p> <blockquote> <p><a href="https://kubernetes.github.io/ingress-nginx/deploy/#aws" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#aws</a></p> </blockquote> <p>Used tls termination at ingress controller</p> <p>I used 3 routes for each services as</p> <p><strong>adminer.xxxx.com</strong> - points to an adminer service</p> <p><strong>socket.xxxx.com</strong> - points to the wss service written in nodejs</p> <p><strong>service.xxxx.com</strong> - points to a program that returns a page which connects to socket url</p> <p>Without TLS Termination, in http:// everything works fine, <strong>ws://socket.xxxx.com/socket.io</strong> gets connected and responds well.</p> <p>When I add TLS, the request goes to <strong>wss://socket.xxxx.com/socket.io</strong> and the nginx returns 400. I Can't figure out why it happens.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-body-size: 100m nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header Host $http_host; # nginx.ingress.kuberenetes.io/use-regex: &quot;true&quot; spec: rules: - host: adminer.xxxx.com http: paths: - path: / backend: serviceName: adminer-svc servicePort: 8080 - host: socket.xxxx.com http: paths: - path: / backend: serviceName: nodejs-svc servicePort: 2020 - host: service.xxxx.com http: paths: - path: / backend: serviceName: django-svc servicePort: 8000 </code></pre> <p>I Tried with and without these configurations</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header Host $http_host; </code></pre> <p>Also I've tried changing the <strong>socket.xxxx.com</strong> into <strong>service.xxxx.com</strong> and assigned to be forwarded for <em><strong>/socket.io</strong></em> path</p> <p>I've also put a url in nodejs with express to test if its working at all, and it responds properly in https://</p> <p>Only the wss:// has the issue.</p> <p>PS : This entire Service works when nginx is setup in a normal system with nginx configuration</p> <pre><code>location / { proxy_pass http://localhost:2020/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection &quot;upgrade&quot;; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } </code></pre> <p>I tried request like this as well</p> <p><a href="https://node-socket.xxxx.com/socket.io/?EIO=3&amp;transport=polling" rel="noreferrer">https://node-socket.xxxx.com/socket.io/?EIO=3&amp;transport=polling</a> this works</p> <p><a href="https://node-socket.xxxx.comsocket.io/?EIO=3&amp;transport=websocket" rel="noreferrer">https://node-socket.xxxx.comsocket.io/?EIO=3&amp;transport=websocket</a> this doesnt.</p> <p>Combinations I tried</p> <pre><code>protocol, balancer, backendproto, transport =&gt; result wss://, ELB, TCP, websocket =&gt; 400 wss://, NLB, TCP, websocket =&gt; 400 wss://, ELB, HTTP, websocket =&gt; 400 wss://, NLB, HTTP, websocket =&gt; 400 ws://, ELB, TCP, websocket =&gt; 400 ws://, ELB, HTTP, websocket =&gt; 400 ws://, NLB, TCP, websocket =&gt; 400 ws://, NLB, HTTP, websocket =&gt; 400 </code></pre> <p>polling worked in every cases</p>
Adharsh M
<p>You seem to be missing the</p> <pre><code>nginx.org/websocket-services </code></pre> <p>annotation.</p> <p>Its value should be the name of the Kubernetes service that serves the websocket traffic. See <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/</a></p>
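<p>For completeness, a sketch of how that would look on the ingress from the question; it takes a comma-separated list of service names, and note that this particular annotation belongs to the NGINX Inc. controller described in the linked docs:</p> <pre><code>metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/websocket-services: &quot;nodejs-svc&quot;   # the service that handles socket.xxxx.com
</code></pre>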
Piotr
<p>I have deployed an EKS cluster with a private endpoint (with the help of vpc endpoints). There is no public endpoint and there is no internet gateway.</p> <p>I need to understand how to access the Kubernetes API from an EC2 instance launched in one of the private subnets in the same VPC. I am using Session Manager with vpc endpoint to run commands on the EC2 instance.</p> <p>Any advice on how to install and configure kubectl to manage the cluster in this case?</p>
Morariu
<p><code>...how to access the Kubernetes API from an EC2 instance launched in one of the private subnets in the same VPC.</code></p> <p>Typically you use SSM connect on the EC2 console to start a session with the EC2 instance launched in the private subnet, and interact with your fully private cluster from there.</p>
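<p>A sketch of what that session typically looks like, assuming the instance profile is allowed <code>eks:DescribeCluster</code> and is mapped in the cluster's <code>aws-auth</code> ConfigMap; the kubectl version and names are placeholders:</p> <pre><code># 1. Install kubectl (pick a version matching your cluster)
curl -LO https://dl.k8s.io/release/v1.23.0/bin/linux/amd64/kubectl
chmod +x kubectl &amp;&amp; sudo mv kubectl /usr/local/bin/

# 2. Generate a kubeconfig pointing at the private endpoint
aws eks update-kubeconfig --name &lt;cluster-name&gt; --region &lt;region&gt;

# 3. Test; the endpoint resolves to private IPs inside the VPC
kubectl get nodes
</code></pre>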
gohm'c
<p>Can anyone explain to me why running my load test against one pod gives better TPS than when scaling to two pods?</p> <p>I expected that running the same scenario with the same configuration on 2 pods would increase the TPS, but this is not what happened.</p> <p>Is it normal behaviour that scaling horizontally does not improve the total number of requests?</p> <p>Please note that I didn't get any failures on one pod; I only scaled to 2 for high availability.</p>
Marwa Mohamed Mahmoud
<p>It really depends on what your pod does, as @spencer mentioned. Besides that, there are still many factors that can affect the outcome you expect:</p> <ol> <li>Does your pod use leader election?</li> <li>QPS/Burst settings (for a controller, since I have no idea what your pod does).</li> <li>...</li> </ol> <p>Based on your case, I guess your pods are not the TPS-limiting factor.</p> <p>Basically, increasing the number of pod replicas should at least not lower the TPS.</p>
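<p>One quick way to see whether the pods themselves are saturated during the test (this needs metrics-server, which most managed clusters already run):</p> <pre><code>kubectl top pods
kubectl top nodes
</code></pre> <p>If CPU/memory stay low on both replicas while TPS does not improve, the bottleneck is most likely outside the pods, e.g. a shared database, an upstream dependency or the load generator itself.</p>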
vincent pli
<p>I am using the Deployment kind to define a pod. The pod definition contains two containers and 1 sidecar.</p> <p>In the Kubernetes events, the containers are started sequentially. How can we start both containers in parallel?</p> <p>Container 1 (6s) + Container 2 (5s) + NEG ready (2s) = 13s</p> <p>If we start the containers in parallel we can minimize the total pod startup time.</p>
Uday Chauhan
<p><code>In the Kubernetes events, the containers are started sequentially. How can we start both containers in parallel?</code></p> <p>Although containers are started in <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L935" rel="nofollow noreferrer">sequence</a>, kubelet does <strong>not</strong> wait for a container to enter the running state before starting the next one. The time it takes kubelet to loop through them is only a fraction of a second. It is the startup and readiness time your containers need that defines how fast the pod can start receiving requests.</p>
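<p>If you want to see how small that gap actually is, the container start timestamps can be read from the pod status; the pod name is a placeholder:</p> <pre><code>kubectl get pod &lt;pod-name&gt; -o jsonpath='{range .status.containerStatuses[*]}{.name}{&quot;\t&quot;}{.state.running.startedAt}{&quot;\n&quot;}{end}'
</code></pre>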
gohm'c
<p>I have a .pfx file that a Java container needs to use.</p> <p>I have created a tls secret using the command</p> <p><code>kubectl create secret tls secret-pfx-key --dry-run=client --cert tls.crt --key tls.key -o yaml</code></p> <pre><code>apiVersion: v1 kind: Secret type: kubernetes.io/tls metadata: name : secret-pfx-key namespace: default data: #cat tls.crt | base64 tls.crt: base64-gibberish.... #cat tls.key | base64 tls.key: base64-gibberish.... </code></pre> <p>However, now I cannot understand how to use it. When I add the secret as volume in the pod I can see the two files that are created. But I need the combination of the two in one .pfx file.</p> <p>Am I missing something? Thanks.</p> <p>Note: I have read the related stackoverflow questions but could not understand how to use it.</p>
Kostas Demiris
<p>You can <a href="https://www.sslshopper.com/ssl-converter.html" rel="nofollow noreferrer">convert</a> to pfx first, then <code>kubectl create secret generic mypfx --from-file=pfx-cert=&lt;converted pfx file&gt;</code></p> <p>Mount the secret as a volume in your pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-mypfx spec: restartPolicy: OnFailure volumes: - name: pfx-volume secret: secretName: mypfx containers: - name: busybox image: busybox command: [&quot;ash&quot;,&quot;-c&quot;,&quot;cat /path/in/the/container/pfx-cert; sleep 5&quot;] volumeMounts: - name: pfx-volume mountPath: /path/in/the/container </code></pre> <p>The above example dump the cert, wait for 5s and exit.</p>
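<p>If you prefer the command line over the linked converter, one way to build the .pfx from the two files you already have (the output file name and export password are up to you):</p> <pre><code># Combine the certificate and private key into a PKCS#12 (.pfx) bundle
openssl pkcs12 -export -in tls.crt -inkey tls.key -out cert.pfx -passout pass:changeit

# Then create the secret from it
kubectl create secret generic mypfx --from-file=pfx-cert=cert.pfx
</code></pre>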
gohm'c
<p>Here is the issue: we have several microk8s clusters running on different networks, yet each has access to our storage network where our NAS appliances are.</p> <p>Within Kubernetes, we create disks with an NFS provisioner (nfs-externalsubdir). Some disks were created with the IP of the NAS server specified. Once we had to change that IP, we discovered that the disk was bound to the IP, and changing the IP meant creating a new storage resource.</p> <p>To avoid this, we would like to be able to set a DNS record at the Kubernetes cluster level, so we could create storage resources with the NFS provisioner based on a name and not an IP, and alter the DNS record when needed (when we upgrade or migrate our external NAS appliances, for instance). For instance, I'd like to tell every microk8s environment that:</p> <p>192.168.1.4 my-nas.mydomain.local</p> <p>... like I would within the /etc/hosts file.</p> <p>Is there a proper way to achieve this? I tried to follow the advice at this link: <a href="https://stackoverflow.com/questions/37166822/is-there-a-way-to-add-arbitrary-records-to-kube-dns">Is there a way to add arbitrary records to kube-dns?</a> (the answer upvoted 15 times, the cluster-wise section) and restarted a deployment, but it didn't work.</p> <p>I cannot use the hostAliases feature since it isn't provided in every chart we are using; that's why I'm looking for a more global solution.</p> <p>Best Regards,</p>
seblel
<p><code>...we could create storage resources with the NFS provisioner based on a name and not an IP, and we could alter the DNS record when needed...</code></p> <p>For this you can try a Service without a selector, backed by a manually managed Endpoints object, without touching CoreDNS:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-nas namespace: kube-system # &lt;-- you can place it somewhere else labels: app: my-nas spec: ports: - protocol: TCP port: &lt;nas port&gt; --- apiVersion: v1 kind: Endpoints metadata: name: my-nas namespace: kube-system # &lt;-- must match the Service's name and namespace subsets: - addresses: - ip: 192.168.1.4 ports: - port: &lt;nas port&gt; </code></pre> <p>Use it as: <code>my-nas.kube-system.svc.cluster.local</code></p>
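<p>To check the record from inside the cluster, a throwaway pod works; <code>busybox:1.28</code> is used here because its nslookup behaves well with cluster DNS:</p> <pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup my-nas.kube-system.svc.cluster.local
</code></pre> <p>The name resolves to the Service's cluster IP, and kube-proxy forwards traffic on &lt;nas port&gt; to 192.168.1.4, so updating the Endpoints object is all that is needed when the NAS IP changes.</p>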
gohm'c