Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have been trying to deploy a Kubernetes cluster in DigitalOcean. Everything seems to work except when I try to apply the TLS certificates. I have been following these steps, but with <code>Nginx Ingress Controller</code> v1.0.0 and <code>cert-manager</code> v1.5.0.</p>
<p>I have two urls, let's say <code>api.example.com</code> and <code>www.example.com</code></p>
<p>Checking the challenge I saw <code>Waiting for HTTP-01 challenge propagation: failed to perform self check GET request...</code></p>
<p>I tried adding the following annotations to the ingress:</p>
<pre><code>kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
cert-manager.io/cluster-issuer: "letsencrypt-prod"
</code></pre>
<p>Or using this service as a workaround:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "www.example.com"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # CHANGE/ADD THIS
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
</code></pre>
<p>If I go to the URL challenge I am able to see the hash, but I am stuck, I am not sure why it is failing or the steps to solve this.</p>
| agusgambina | <p>As <a href="https://stackoverflow.com/users/2853555/agusgambina">agusgambina</a> has mentioned in the comments, the problem is solved:</p>
<blockquote>
<p>I was able to make this work. First I needed to get the load balancer ID by executing <code>k describe svc ingress-nginx-controller --namespace=ingress-nginx</code> and then pasting it in the annotation <code>kubernetes.digitalocean.com/load-balancer-id: "xxxx-xxxx-xxxx-xxxx-xxxxx"</code>. Thanks for your comments, it helped me to solve the issue.</p>
</blockquote>
<p>This problem is also described <a href="https://www.digitalocean.com/community/questions/issue-with-waiting-for-http-01-challenge-propagation-failed-to-perform-self-check-get-request-from-acme-challenges" rel="nofollow noreferrer">here</a>, and there is also a tutorial:<br />
<a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes#step-2-%E2%80%94-setting-up-the-kubernetes-nginx-ingress-controller" rel="nofollow noreferrer">How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes</a>.</p>
| Mikołaj Głodziak |
<p>I have multiple kubernetes clusters, whose versions are 1.13, 1.16, 1.19.</p>
<p>I'm trying to monitor the total number of threads so that I need the metric "container_threads".</p>
<p>But for clusters at version 1.16 or lower, the container_threads metric looks wrong.</p>
<p>On 1.16 the metric value is always 0, and on 1.13 no container_threads metric exists at all.</p>
<p>I know that the metric comes from cAdvisor, which is included in the kubelet.</p>
<p>I want to confirm from which version onward cAdvisor includes (or lacks) container_threads.</p>
<p>I know how to check the kubelet version with <code>kubelet --version</code>.</p>
<p>But I don't know how to find the version of cAdvisor.</p>
<p>Does anyone know about it?</p>
<p>Thanks!</p>
| JAESANGPARK | <p>There is no specific command to find the version of <a href="https://github.com/google/cadvisor/releases" rel="nofollow noreferrer">cAdvisor</a>. However, its metrics can be accessed using commands like <code>kubectl top</code>.</p>
<p>For the latest version of cAdvisor, you can use the official cAdvisor Docker image from Google hosted on <a href="https://hub.docker.com/r/google/cadvisor/" rel="nofollow noreferrer">Docker Hub</a>.</p>
<p>For more information about the <a href="https://www.rancher.cn/blog/2019/native-kubernetes-monitoring-tools-part-1" rel="nofollow noreferrer">cAdvisor UI</a> overview and processes, head over to the cAdvisor section of that article. Also note that cAdvisor’s UI was marked deprecated as of Kubernetes version 1.10 and the interface was scheduled to be completely removed in version 1.12.</p>
<p>If you run Kubernetes version 1.12 or later, the UI has been removed. However, the metrics are still there since cAdvisor is part of the kubelet binary.</p>
<p>The kubelet binary exposes all its runtime metrics and all the <a href="http://localhost:8001/api/v1/nodes/gke-c-plnf4-default-pool-5eb56043-23p5/proxy/metrics/cadvisor" rel="nofollow noreferrer">cAdvisor metrics at the /metrics endpoint</a> using the Prometheus exposition format.</p>
<p><strong>Note:</strong> cAdvisor doesn’t store metrics for long-term use, so if you want that functionality, you’ll need to look for a dedicated monitoring tool.</p>
| Khaja Shaik |
<p>Hi, I am working in Kubernetes. I have two pods running, and I want to call one pod from the other. I am trying this as below:</p>
<pre><code>HttpClient req = new HttpClient();
var content = await req.GetAsync("https://cepserviceone.cep-dev.svc.cluster.local/api/values");
string response = await content.Content.ReadAsStringAsync();
return response;
</code></pre>
<p>I have exposed both services as cluster IP as below.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cep            #### Insert your application service name here ####
  namespace: cep-dev   #### Insert your application's namespace. Omit this line to use default namespace. ####
  labels:
    app: cep           #### Insert your application service name here ####
spec:
  # Use one of ClusterIP, LoadBalancer or NodePort. See https://kubernetes.io/docs/concepts/services-networking/service/
  type: ClusterIP
  selector:
    app: cep           #### Insert your application deployment name here. This must match the deployment name specified in the deployment manifest ####
    instance: app
  ports:
    - port: 8080       #### Replace with appropriate port
      targetPort: 80   #### Replace with the port name defined in deployment
</code></pre>
<p>This is another service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cepserviceone  #### Insert your application service name here ####
  namespace: cep-dev   #### Insert your application's namespace. Omit this line to use default namespace. ####
  labels:
    app: cepserviceone #### Insert your application service name here ####
spec:
  # Use one of ClusterIP, LoadBalancer or NodePort. See https://kubernetes.io/docs/concepts/services-networking/service/
  type: ClusterIP
  selector:
    app: cepservice    #### Insert your application deployment name here. This must match the deployment name specified in the deployment manifest ####
    instance: app
  ports:
    - port: 8080       #### Replace with appropriate port
      targetPort: 80   #### Replace with the port name defined in deployment
</code></pre>
<p>I have an ingress which routes requests accordingly. When I try to access the serviceone application I get the error below:</p>
<pre><code>An invalid request URI was provided. Either the request URI must be an absolute URI or BaseAddress must be set.
</code></pre>
<p>May I know what I am doing wrong here? Any help would be greatly appreciated. Thanks.</p>
| Niranjan godbole | <p>Use your service port 8080:</p>
<p><code>var content = await req.GetAsync("https://cepserviceone.cep-dev.svc.cluster.local:8080/api/values");</code></p>
| gohm'c |
<p>I'm trying to install Jenkins on Kubernetes using Helm 3 and following the official instructions, but I'm running up against a permission issue.</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  namespace: jenkins
spec:
  storageClassName: jenkins-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/jenkins-volume/
</code></pre>
<p>Then pull down the <code>values.yaml</code> file: <code>wget https://raw.githubusercontent.com/jenkinsci/helm-charts/main/charts/jenkins/values.yaml</code></p>
<p>I adjust the <code>adminPassword</code> (this is a demo system): <code>adminPassword: "mySecret"</code></p>
<p>Finally I change <code>storageClass:</code> to be <code>storageClass: jenkins-pv</code></p>
<h1>Output / Debug Logs</h1>
<pre><code>$ kubectl logs -n jenkins jenkins-0 init
disable Setup Wizard
/var/jenkins_config/apply_config.sh: 4: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/jenkins.install.UpgradeWizard.state: Permission denied
$ kubectl describe pod -n jenkins jenkins-0
Name: jenkins-0
Namespace: jenkins
Priority: 0
Node: ip-172-31-40-127/172.31.40.127
Start Time: Mon, 30 Nov 2020 10:37:19 +0000
Labels: app.kubernetes.io/component=jenkins-controller
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=jenkins
controller-revision-hash=jenkins-57958b7d49
statefulset.kubernetes.io/pod-name=jenkins-0
Annotations: checksum/config: 2a4c2b3ea5dea271cb7c0b8e8582b682814d39f8e933e0348725b0b9a7dbf258
Status: Pending
IP: 10.42.0.44
IPs:
IP: 10.42.0.44
Controlled By: StatefulSet/jenkins
Init Containers:
init:
Container ID: containerd://64862ebd6791966db07981196d5dbd4c3b583d9e3e6543a31b252d19c2f9405b
Image: jenkins/jenkins:lts
Image ID: docker.io/jenkins/jenkins@sha256:980d55fd29a287d2d085c08c2bb6c629395ab2e3dd7547641035b4f126acc322
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Terminated
Reason: Error
Exit Code: 2
Started: Mon, 30 Nov 2020 10:53:41 +0000
Finished: Mon, 30 Nov 2020 10:53:41 +0000
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Mon, 30 Nov 2020 10:48:29 +0000
Finished: Mon, 30 Nov 2020 10:48:29 +0000
Ready: False
Restart Count: 8
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment: <none>
Mounts:
/usr/share/jenkins/ref/plugins from plugins (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-zjzdt (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=3
Startup: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=12
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
JAVA_OPTS: -Dcasc.reload.token=$(POD_NAME)
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
CASC_JENKINS_CONFIG: /var/jenkins_home/casc_configs
Mounts:
/run/secrets/chart-admin-password from admin-secret (ro,path="jenkins-admin-password")
/run/secrets/chart-admin-username from admin-secret (ro,path="jenkins-admin-user")
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-zjzdt (ro)
config-reload:
Container ID:
Image: kiwigrid/k8s-sidecar:0.1.275
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
LABEL: jenkins-jenkins-config
FOLDER: /var/jenkins_home/casc_configs
NAMESPACE: jenkins
REQ_URL: http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)
REQ_METHOD: POST
REQ_RETRY_CONNECT: 10
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-zjzdt (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins
Optional: false
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins
ReadOnly: false
sc-config-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
admin-secret:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins
Optional: false
jenkins-token-zjzdt:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-token-zjzdt
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned jenkins/jenkins-0 to ip-172-31-40-127
Normal Pulled 15m (x4 over 16m) kubelet, ip-172-31-40-127 Successfully pulled image "jenkins/jenkins:lts"
Normal Created 15m (x4 over 16m) kubelet, ip-172-31-40-127 Created container init
Normal Started 15m (x4 over 16m) kubelet, ip-172-31-40-127 Started container init
Normal Pulling 14m (x5 over 16m) kubelet, ip-172-31-40-127 Pulling image "jenkins/jenkins:lts"
Warning BackOff 74s (x71 over 16m) kubelet, ip-172-31-40-127 Back-off restarting failed container
</code></pre>
| J Smith | <p>I see that this happens when using <code>hostPath</code> on a Minikube single-node cluster, like in the documentation. The issue is that the <code>/data/jenkins-volume</code> folder on the Minikube node is created with <code>root</code> ownership.</p>
<p>So, if you don't want to run as root with <code>runAsUser: 0</code>, you can just change the permissions on <code>/data/jenkins-volume</code> by entering the node with:</p>
<pre><code>$ minikube ssh
$ sudo chown -R 1000:1000 /data/jenkins-volume
</code></pre>
<p>Once you do that you can create the <code>pv</code> and deploy Jenkins with Helm with values:</p>
<pre><code>runAsUser: 1000
fsGroup: 1000
</code></pre>
<p>It worked for me.</p>
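<p>In plain Kubernetes terms those two values end up as the pod-level security context. A minimal sketch of the equivalent pod spec fragment is shown below; the exact key under which the Jenkins chart exposes this depends on the chart version, so treat the placement as illustrative:</p>
<pre><code>spec:
  securityContext:
    runAsUser: 1000   # UID of the jenkins user inside the jenkins/jenkins image
    fsGroup: 1000     # group applied to mounted volumes such as /var/jenkins_home
</code></pre>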
| dcanadillas |
<p>Why I'd want to have multiple replicas of my DB?</p>
<ol>
<li>Redundancy: I have > 1 replicas of my app code. Why? In case one node fails, another can fill its place when run behind a load balancer.</li>
<li>Load: A load balancer can distribute traffic to multiple instances of the app.</li>
<li>A/B testing. I can have one node serve one version of the app, and another serve a different one.</li>
<li>Maintenance. I can bring down one instance for maintenance, and keep the other one up with 0 down-time.</li>
</ol>
<p>So, I assume I'd want to do the same with the backing db if possible too.</p>
<p>I realize that many nosql dbs are better configured for multiple instances, but I am interested in relational dbs.</p>
<p>I've played with operators like <a href="https://github.com/CrunchyData/postgres-operator" rel="nofollow noreferrer">this</a> and <a href="https://github.com/mysql/mysql-operator" rel="nofollow noreferrer">this</a> but have found problems with the docs, have not been able to get them up and running and found the community a bit lacking. Relying on this kind of thing in production makes me nervous. The Mysql operator has a note even, saying it's not for production use.</p>
<p>I see that native k8s <a href="https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/" rel="nofollow noreferrer">statefulsets have scaling</a> but these docs aren't specific to dbs at all. I assume the complication is that dbs need to write persistently to disk via a volume and that data has to be synced and routed somehow if you have more than one instance.</p>
<p>So, is this something that's non-trivial to do myself? Or, am I better off having a dev environment that uses a one-replica db image in the cluster in order to save on billing, and a prod environment that uses a fully managed db, something like <a href="https://cloud.google.com/sql/docs/postgres#docs" rel="nofollow noreferrer">this</a> that takes care of the scaling/HA for me? Then I'd use kustomize to manage the yaml variances.</p>
<p><strong>Edit</strong>:</p>
<p>I actually found a postgres operator that worked great. Followed the docs one time through and it all worked, and it's from <a href="https://www.kubegres.io/doc/getting-started.html" rel="nofollow noreferrer">postgres docs</a>.</p>
| Aaron | <p>I have created this community wiki answer to summarize the topic and to make pertinent information more visible.</p>
<p>As <a href="https://stackoverflow.com/users/4216641/turing85" title="14,085 reputation">Turing85</a> well mentioned in the comment:</p>
<blockquote>
<p>Do NOT share a pvc to multiple db instances. Even if you use the right backing volume (it must be an object-based storage in order to be read-write many), with enough scaling, performance will take a hit (after all, everything goes to one file system, this will stress the FS). The proper way would be to configure clustering. All major relational databases (mssql, mysql, postgres, oracle, ...) do support clustering. To be on the secure side, however, I would recommend to buy a scalable database "as a service" unless you know exactly what you are doing.</p>
</blockquote>
<blockquote>
<p>The good solution might be to use a single replica StatefulSet for development, to avoid billing, and use a fully managed cloud-based SQL solution in prod. Unless you have the knowledge or a sufficiently professional operator to deploy a clustered DBMS.</p>
</blockquote>
<p>Another solution may be to use a different operator as <a href="https://stackoverflow.com/users/299180/aaron">Aaron</a> did:</p>
<blockquote>
<p>I actually found a postgres operator that worked great. Followed the docs one time through and it all worked, and it's from postgres: <a href="https://www.kubegres.io/doc/getting-started.html" rel="nofollow noreferrer">https://www.kubegres.io/doc/getting-started.html</a></p>
</blockquote>
<p>See also <a href="https://stackoverflow.com/questions/27157227/can-relational-database-scale-horizontally/27162851">this similar question</a>.</p>
| Mikołaj Głodziak |
<p>In the Kubernetes documentation for controlling-access to the API server, under the <a href="https://kubernetes.io/docs/concepts/security/controlling-access/#authorization" rel="nofollow noreferrer">Authorization section</a> it says that authorization is controlled through a Policy.</p>
<p>However, a Policy is not found as an API resource:</p>
<pre><code>❯ k api-resources | grep -i policy
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
❯ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.25.3+k3s1
</code></pre>
<p>So what exactly is a Policy? How is it setup?</p>
<p>The docs are not very clear on this point.</p>
| jersey bean | <blockquote>
<p>...under the Authorization section it says that authorization is controlled through a Policy.</p>
</blockquote>
<p>The sample you saw is for a cluster that uses ABAC. From your output, your cluster is likely using RBAC, in which case such an access policy file is not used.</p>
<blockquote>
<p>So what exactly is a Policy? How is it setup?</p>
</blockquote>
<p>Here's the official documentation for <a href="https://kubernetes.io/docs/reference/access-authn-authz/abac/" rel="nofollow noreferrer">Attribute-based access control</a></p>
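<p>For a rough idea of what such a policy looks like: with ABAC, the API server is started with <code>--authorization-mode=ABAC</code> and <code>--authorization-policy-file</code> pointing at a file containing one JSON policy object per line, along the lines of the example below (an illustrative line based on the ABAC docs; the user, namespace and resource values are placeholders):</p>
<pre><code>{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
</code></pre>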
| gohm'c |
<p>I need to know the data traffic of one of my kubernetes nodes in a specific month.</p>
<p>This query gives me metrics for the last 30 days and the value is in MiB/s</p>
<pre><code>fetch k8s_node
| metric 'kubernetes.io/node/network/received_bytes_count'
| align rate(30d)
| every 30d
| group_by ['node_name'],
[value_received_bytes_count_aggregate: aggregate(value.received_bytes_count)]
</code></pre>
<p>How can I get this value in bytes and specify a month(only February of 2023 for example)</p>
<p>Thank you</p>
| Thiago Gabriel Leite Ferreira | <p>Try with below code:</p>
<pre><code>fetch k8s_node
| metric 'kubernetes.io/node/network/received_bytes_count'
| filter (time >= '2023-02-01T00:00:00Z' and time < '2023-03-01T00:00:00Z')
| align rate(1m)
| every 1d
| group_by ['node_name'],
    [value_received_bytes_count_aggregate: aggregate(value.received_bytes_count)]
</code></pre>
<p>By adding the filter clause we can limit the data only for the month February 2023. Check this <a href="https://dataschool.com/learn-sql/dates/" rel="nofollow noreferrer">document for Date and Time Functions</a></p>
| Abhijith Chitrapu |
<p>I'm using kubeadm to build a k8s cluster, and the default SSL certs expire in 1 year.
I plan to use cfssl or openssl to generate new certs valid for 10 years.
Could anyone please help me?</p>
<p>Thanks all</p>
| Thanhvanptit | <p>To renew Kubernetes certs for 10 years (not recommended).
<img src="https://i.stack.imgur.com/9FSfj.png" alt="" /></p>
<ul>
<li><p>Check certs expiration</p>
<p><code>kubeadm alpha certs check-expiration --config="/etc/kubernetes/kubeadm-config.yaml"</code></p>
</li>
<li><p>Back up the existing Kubernetes certificates</p>
<p><code>mkdir -p $HOME/fcik8s-old-certs/pki</code></p>
<p><code>/bin/cp -p /etc/kubernetes/pki/*.* $HOME/fcik8s-old-certs/pki</code></p>
</li>
<li><p>Back up the existing configurtion files</p>
<p><code>/bin/cp -p /etc/kubernetes/*.conf $HOME/fcik8s-old-certs</code></p>
</li>
<li><p>Back up your home configuration</p>
<p><code>mkdir -p $HOME/fcik8s-old-certs/.kube</code></p>
<p><code>/bin/cp -p ~/.kube/config $HOME/fcik8s-old-certs/.kube/.</code></p>
</li>
<li><p>Add <em>--cluster-signing-duration</em> flag (<em>--experimental-cluster-signing-duration</em> prior to 1.19) for <em>kube-controller-manager</em></p>
<blockquote>
<p>Edit /etc/kubernetes/manifests/kube-controller-manager.yaml</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    ...
    - --experimental-cluster-signing-duration=87600h
    ...
...
</code></pre>
<p>87600h ~ 10 years</p>
</li>
<li><p>Renew all certs</p>
<p><code>kubeadm alpha certs renew all --config /etc/kubernetes/kubeadm-config.yaml --use-api</code></p>
</li>
<li><p>Approve the cert request</p>
<p><code>kubectl get csr</code></p>
<p><code>kubectl certificate approve <cert_request></code></p>
</li>
<li><p>Update the kubeconfig file</p>
<p><code>kubeadm init phase kubeconfig all --config /etc/kubernetes/kubeadm-config.yaml</code></p>
</li>
<li><p>Overwrite the original admin file with the newly generated admin configuration file</p>
<p><code>cp -i /etc/kubernetes/admin.conf $HOME/.kube/config</code></p>
<p><code>chown $(id -u):$(id -g) $HOME/.kube/config</code></p>
</li>
<li><p>Restart components</p>
<p><code>docker restart $(docker ps | grep etcd | awk '{ print $1 }')</code></p>
<p><code>docker restart $(docker ps | grep kube-apiserver | awk '{ print $1 }')</code></p>
<p><code>docker restart $(docker ps | grep kube-scheduler | awk '{ print $1 }')</code></p>
<p><code>docker restart $(docker ps | grep kube-controller | awk '{ print $1 }')</code></p>
<p><code>systemctl daemon-reload && systemctl restart kubelet</code></p>
</li>
<li><p>Check api-server cert expiration</p>
<p><code>echo | openssl s_client -showcerts -connect 127.0.0.1:6443 -servername api 2>/dev/null | openssl x509 -noout -enddate</code></p>
</li>
</ul>
| Tinh Huynh |
<p>I'm trying to mount a file from the host running a minikube cluster on Hyper-V and pass it into a MySQL container with a deployment YAML. I tried adding the file to the minikube VM (via SSH) and then mounting it into the deployment with a PV and claim, and I also tried mounting from the localhost that runs minikube (my computer), but I still don't see the file.</p>
<p>The current configuration is: on the Hyper-V VM running minikube I have a folder named data, and inside this folder is the file I want to transfer to the container (pod).</p>
<p><strong>PV Yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqlvolume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data
</code></pre>
<p><strong>claim.yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: sqlvolume
  name: sqlvolume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
</code></pre>
<p><strong>deployment.yaml (MySQL)</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
    kompose.version: 1.24.0 (7c629530)
  labels:
    io.kompose.service: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
        kompose.version: 1.24.0 (7c629530)
      creationTimestamp: null
      labels:
        io.kompose.service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: crud
            - name: MYSQL_ROOT_PASSWORD
              value: root
          image: mysql
          name: mysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /data
              name: sqlvolume
          # resources:
          #   requests:
          #     memory: "64Mi"
          #     cpu: "250m"
          #   limits:
          #     memory: "128Mi"
          #     cpu: "500m"
      hostname: mysql
      restartPolicy: Always
      volumes:
        - name: sqlvolume
          persistentVolumeClaim:
            claimName: sqlvolume
status: {}
</code></pre>
<p>I don't mind how it's achieved: <strong>I have a Hyper-V minikube running on my computer and I want to transfer the file mysql.sql from the host (or from the PV I created) to the pod.</strong></p>
<p>how can I achieve that ?</p>
| ITBYD | <p>You can try with a hostPath type PersistentVolume</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/<file_name>"
</code></pre>
<p>PersistentVolumeClaim</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  volumeName: "pv-volume"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>Deployment ( changed pvc name )</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
    kompose.version: 1.24.0 (7c629530)
  labels:
    io.kompose.service: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: C:\Users\itayb\Desktop\K8S-Statefulset-NodeJs-App-With-MySql\kompose.exe convert
        kompose.version: 1.24.0 (7c629530)
      creationTimestamp: null
      labels:
        io.kompose.service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: crud
            - name: MYSQL_ROOT_PASSWORD
              value: root
          image: mysql
          name: mysql
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: /data
              name: sqlvolume
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
      hostname: mysql
      restartPolicy: Always
      volumes:
        - name: sqlvolume
          persistentVolumeClaim:
            claimName: pv-claim
status: {}
</code></pre>
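<p>A side note, not part of the original answer: if the intent is for MySQL to execute <code>mysql.sql</code> automatically on first initialization, the official <code>mysql</code> image runs any <code>.sql</code> files it finds in <code>/docker-entrypoint-initdb.d</code>, so the volume containing the file could be mounted there instead of <code>/data</code>, roughly like this:</p>
<pre><code>          volumeMounts:
            - mountPath: /docker-entrypoint-initdb.d   # scripts here run only on first init of an empty data dir
              name: sqlvolume
</code></pre>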
| Eyal Solomon |
<p>When I access an Istio gateway <code>NodePort</code> from the Nginx server using <code>curl</code>, I get a proper response, like below:</p>
<pre><code>curl -v "http://52.66.195.124:30408/status/200"
* Trying 52.66.195.124:30408...
* Connected to 52.66.195.124 (52.66.195.124) port 30408 (#0)
> GET /status/200 HTTP/1.1
> Host: 52.66.195.124:30408
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Sat, 18 Sep 2021 04:33:35 GMT
< content-type: text/html; charset=utf-8
< access-control-allow-origin: *
< access-control-allow-credentials: true
< content-length: 0
< x-envoy-upstream-service-time: 2
<
* Connection #0 to host 52.66.195.124 left intact
</code></pre>
<p>However, when I configure the same thing through the Nginx proxy below, I get <code>HTTP ERROR 426</code> through the domain.</p>
<p>Note: my domain is HTTPS - <a href="https://dashboard.example.com" rel="noreferrer">https://dashboard.example.com</a></p>
<pre><code>server {
    server_name dashboard.example.com;
    location / {
        proxy_pass http://52.66.195.124:30408;
    }
}
</code></pre>
<p>Can anyone help me to understand the issue?</p>
| Rahul Radhakrishnan | <p>HTTP 426 error means <a href="https://httpstatuses.com/426" rel="noreferrer">upgrade required</a>:</p>
<blockquote>
<p>The server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol.</p>
</blockquote>
<p>or <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/426" rel="noreferrer">another info</a>:</p>
<blockquote>
<p>The HTTP <strong><code>426 Upgrade Required</code></strong> client error response code indicates that the server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol.</p>
</blockquote>
<p>In your situation, you need to check what version of the HTTP protocol you are using. It seems too low. Look at <a href="https://github.com/envoyproxy/envoy/issues/2506" rel="noreferrer">this thread</a>. In that case, you had to upgrade from <code>1.0</code> to <code>1.1</code>.</p>
<p>You need to upgrade the HTTP protocol version in your NGINX config, as described there:</p>
<blockquote>
<p>This route is for a legacy API, which enabled NGINX cache for performance reason, but in this route's proxy config, it missed a shared config <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version" rel="noreferrer"><code>proxy_http_version 1.1</code></a>, which default to use HTTP 1.0 for all NGINX upstream.</p>
<p>And Envoy will return <code>HTTP 426</code> if the request is <code>HTTP 1.0</code>.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I'm having some issues with permissions and I'm really hoping someone can point me to where I'm going wrong...</p>
<p>I've got a Kube cluster set up and functioning (for example, I'm running a mysql pod and connecting to it without issue), and I've been trying to get a Postgresql pod running with TLS support. The service that will be connecting to this pod requires TLS, so going without TLS is unfortunately not an option.</p>
<p>Here's where things get a bit messy, everything functions - except for the fact that for some reason Postgres init can't seem to read my certificate files that are stored in Kube secrets. Seems like whatever options I choose, Postgres init returns the following:</p>
<pre><code>$ kubectl logs data-server-97469df55-8wd6q
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgres ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
sh: locale: not found
2021-09-11 20:03:54.323 UTC [32] WARNING: no usable system locales were found
performing post-bootstrap initialization ... ok
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgres -l logfile start
waiting for server to start....2021-09-11 20:04:01.882 GMT [37] FATAL: could not load server certificate file "/var/lib/postgres-secrets/server.crt": Permission denied
2021-09-11 20:04:01.882 GMT [37] LOG: database system is shut down
pg_ctl: could not start server
Examine the log output.
stopped waiting
</code></pre>
<p>I HIGHLY suspect my issue is the very first line, but I'm not sure how to go about resolving this in Kubernetes. How do I tell Kubernetes that I need to mount my secrets so that user 'postgres' can read them (being lazy and doing a <code>chmod 0777</code> does not work)?</p>
<p>These are my configs:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: data-server
  labels:
    app: data-server
spec:
  ports:
    - name: data-server
      targetPort: 5432
      protocol: TCP
      port: 5432
  selector:
    app: data-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-server
spec:
  selector:
    matchLabels:
      app: data-server
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: data-server
    spec:
      serviceAccountName: default
      containers:
        - name: postgres
          image: postgres:alpine
          imagePullPolicy: IfNotPresent
          args:
            - -c
            - hba_file=/var/lib/postgres-config/pg_hba.conf
            - -c
            - config_file=/var/lib/postgres-config/postgresql.conf
          env:
            - name: PGDATA
              value: /var/lib/postgres
            - name: POSTGRES_PASSWORD_FILE
              value: /var/lib/postgres-secrets/postgres-pwd.txt
          ports:
            - name: data-server
              containerPort: 5432
              hostPort: 5432
              protocol: TCP
          volumeMounts:
            - name: postgres-config
              mountPath: /var/lib/postgres-config
            - name: postgres-storage
              mountPath: /var/lib/postgres-data
            - name: postgres-secrets
              mountPath: /var/lib/postgres-secrets
      volumes:
        - name: postgres-config
          configMap:
            name: data-server
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: gluster-claim
        - name: postgres-secrets
          secret:
            secretName: data-server
            defaultMode: 0640
</code></pre>
<p>Secrets:</p>
<pre><code>$ kubectl get secret
NAME TYPE DATA AGE
data-server Opaque 5 131m
default-token-nq7pv kubernetes.io/service-account-token 3 5d5h
</code></pre>
<p>PV / PVC</p>
<pre><code>$ kubectl describe pv,pvc
Name: gluster-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: default/gluster-claim
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 50Gi
Node Affinity: <none>
Message:
Source:
Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
EndpointsName: gluster-cluster
EndpointsNamespace: <unset>
Path: /gv0
ReadOnly: false
Events: <none>
Name: gluster-claim
Namespace: default
StorageClass:
Status: Bound
Volume: gluster-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 50Gi
Access Modes: RWX
VolumeMode: Filesystem
Used By: data-server-97469df55-8wd6q
dnsutils
mysql-6f47967858-xngbr
</code></pre>
| The Kaese | <p>Figured it out.. Turns out it was just a necessary block in the template/spec:</p>
<pre><code>securityContext:
  runAsUser: 70
  fsGroup: 70
</code></pre>
<p>Took way too long to find a <a href="https://stackoverflow.com/questions/66316489/dockerfile-run-addgroup-group-postgres-in-use">reference</a> to this using the googles. It seems a bit odd too: what happens if I want to switch off of alpine to something else? The UID/GID aren't going to be the same, so I'll have to find those and change them here too. Seems silly to use IDs rather than names.</p>
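<p>For anyone hitting the same thing with the manifest above: that block belongs at the pod level of the Deployment, i.e. under <code>spec.template.spec</code> (70 being the postgres UID/GID in the Alpine-based image, as the answer uses), roughly:</p>
<pre><code>    spec:
      securityContext:
        runAsUser: 70
        fsGroup: 70   # makes mounted volumes, including the secrets volume, group-readable by postgres
      serviceAccountName: default
      containers:
        - name: postgres
          image: postgres:alpine
</code></pre>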
| The Kaese |
<p>I'm creating EKS cluster using the <a href="https://eksctl.io/" rel="nofollow noreferrer">eksctl</a>. While developing the yaml configurations for the underlying resources, I came to know that spot instance is also supported with AWS EKS cluster(<a href="https://eksctl.io/usage/spot-instances/" rel="nofollow noreferrer">here</a>). However while referring the <a href="https://eksctl.io/usage/schema/" rel="nofollow noreferrer">documentation/schema</a>, I didn't find anything to limit the bidding price for spot instance. So by default, it will bid with on demand pricing which is not ideal. Am I missing anything here or it's just not possible at the moment?</p>
<p>Sample yaml config for spot(cluster-config-spot.yaml) -</p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: spot-cluster
  region: us-east-2
  version: "1.23"
managedNodeGroups:
  - name: spot-managed-node-group-1
    instanceType: ["c7g.xlarge","c6g.xlarge"]
    minSize: 1
    maxSize: 10
    spot: true
</code></pre>
<p>AWS EKS cluster creation command -</p>
<pre><code>eksctl create cluster -f cluster-config-spot.yaml
</code></pre>
| Swanand | <p><code>maxPrice</code> can be set for self-managed node group <a href="https://eksctl.io/usage/spot-instances/#unmanaged-nodegroups" rel="nofollow noreferrer">this way</a>; but this is not supported for managed node group. You can upvote the feature <a href="https://github.com/aws/containers-roadmap/issues/1575" rel="nofollow noreferrer">here</a>.</p>
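<p>For illustration, a self-managed node group with a price cap looks roughly like this in eksctl config (a sketch based on the spot-instances docs linked above; the field names are from memory of that schema, and the instance types and price are placeholders):</p>
<pre><code>nodeGroups:
  - name: spot-ng-1
    minSize: 1
    maxSize: 10
    instancesDistribution:
      maxPrice: 0.05                           # bid cap in USD/hour
      instanceTypes: ["c6g.xlarge", "c7g.xlarge"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0   # 100% spot
      spotInstancePools: 2
</code></pre>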
| gohm'c |
<p>I'm trying to deploy postgres as a <code>StatefulSet</code> on <code>k3d</code>.</p>
<p>I set the environment variable <code>POSTGRES_USER</code>, but when I try to connect to the db, it isn't taken into account and I see that authentication has failed. I can login with the password <code>pgpassword</code> and default user <code>postgres</code>.</p>
<p>Why isn't the <code>POSTGRES_USER</code> taken into account?</p>
<p>This is my yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql-db
spec:
serviceName: postgresql-db-service
selector:
matchLabels:
app: postgresql-db
replicas: 2
template:
metadata:
labels:
app: postgresql-db
spec:
containers:
- name: postgresql-db
image: postgres:latest
volumeMounts:
- name: postgresql-db-disk
mountPath: /data
env:
- name: POSTGRES_USER
value: pguser
- name: POSTGRES_PASSWORD
value: pgpassword
- name: PGDATA
value: /data/pgdata
volumes:
- name: postgresql-db-disk
hostPath:
# directory location on host
path: /opt/x/projects/tilt_test/data/
# this field is optional
type: DirectoryOrCreate
</code></pre>
| Preston | <blockquote>
<p>but when i try to connect to the db, it isn't taken into account, and i see that authentication has failed.</p>
</blockquote>
<p>It works exactly as it should. The <code>POSTGRES_USER</code> and <code>POSTGRES_PASSWORD</code> system variables are used to create the default user and password when creating the database. <strong>If, on the other hand, you intend to log in to an existing user later, you must set and use system variables: <code>PGUSER</code> and <code>PGPASSWORD</code>.</strong> <a href="https://www.postgresql.org/docs/10/libpq-envars.html" rel="nofollow noreferrer">Here</a> is the explanation:</p>
<blockquote>
<ul>
<li><p><code>PGUSER</code> behaves the same as the <a href="https://www.postgresql.org/docs/10/libpq-connect.html#LIBPQ-CONNECT-USER" rel="nofollow noreferrer">user</a> connection parameter.</p>
</li>
<li><p><code>PGPASSWORD</code> behaves the same as the <a href="https://www.postgresql.org/docs/10/libpq-connect.html#LIBPQ-CONNECT-PASSWORD" rel="nofollow noreferrer">password</a> connection parameter. Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using a password file (see <a href="https://www.postgresql.org/docs/10/libpq-pgpass.html" rel="nofollow noreferrer" title="33.15. The Password File">Section 33.15</a>).</p>
</li>
</ul>
</blockquote>
<p>You can also find a complete tutorial <a href="https://devopscube.com/deploy-postgresql-statefulset/" rel="nofollow noreferrer">How to Deploy PostgreSQL Statefulset in Kubernetes With High Availability</a>. Additionally, you will find an explanation for <code>POSTGRES_USER</code> and <code>POSTGRES_PASSWORD</code> system variables:</p>
<blockquote>
<p>POSTGRES_USER: The user that should be created automatically when the Postgres process starts.
POSTGRES_PASSWORD: The password for the user created by default.</p>
</blockquote>
<p>In this case, logging into the database takes place during readiness and liveness probes.</p>
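<p>A minimal sketch of such a readiness probe for the StatefulSet above, assuming the <code>pguser</code> user from the manifest (this uses <code>pg_isready</code>, which ships in the postgres image and only checks that the server accepts connections):</p>
<pre><code>        readinessProbe:
          exec:
            command: ["pg_isready", "-U", "pguser"]   # checks server readiness for the created user
          initialDelaySeconds: 15
          periodSeconds: 10
</code></pre>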
| Mikołaj Głodziak |
<p>I am trying to convert an Istio service mesh running on k8s from <code>http</code> to <code>https</code> but have stumbled upon many problems. I don't really understand all the steps required to do that.</p>
<p>As I know, there are 2 kinds of traffic that requires TLS in a mesh:</p>
<ul>
<li><p><strong>between internal services</strong>: skimming through the Istio docs tells me that Istio will somehow automatically configure mTLS between services so that all of them communicate securely without any extra configuration. However, I still don't understand deeply how this mTLS is implemented. How does it differ from normal TLS, and what is mTLS's role in the other kind of traffic (client outside to service inside)?</p>
</li>
<li><p><strong>from client outside to a service inside</strong>: this is where I don't know what to do. I know that in order for a service to have TLS it needs a TLS certificate from a trusted CA. However, the outside client will not talk directly to the service inside, only to the Istio ingress gateway. Do I need to provide a cert for every service or only for the ingress gateway? All of my services currently expose port 80 for <code>HTTP</code>. Do I need to convert all of them to port 443 and <code>HTTPS</code>, or is the ingress gateway enough?</p>
</li>
</ul>
<p>Regarding the certificates, if I just use self-signed certs for now, can I create a cert and key with openssl, create secrets from them (maybe synced between namespaces with <code>kubed</code>), and have all services use the same cert and key? Everyone suggests using <code>cert-manager</code>, but I don't know if it is worth the effort.</p>
<p>I would be really thankful if anyone can explain with some illustrations.</p>
| Nolan Edric | <p>In general, if you need a good explanation of the issues related to Istio (also with pictures), I recommend that you check the documentation. You can find around <a href="https://istio.io/latest/search/?q=tls&site=docs" rel="nofollow noreferrer">540 topics</a> related to TLS in Istio.</p>
<p>Istio is a very well documented service. Here you can find more information about <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/tls-configuration/" rel="nofollow noreferrer">Understanding TLS Configuration</a>. You can also find good article about <a href="https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/" rel="nofollow noreferrer">Mutual TLS Migration</a>.</p>
<blockquote>
<p>However I still don't understand deeply how they implement this mTLS, how does it differ from normal TLS and what is mTLS role in the other kind of traffic (client outside to service inside).</p>
</blockquote>
<p>Mutual TLS, or mTLS for short, is a method for <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-authentication/" rel="nofollow noreferrer">mutual authentication</a>. mTLS ensures that the parties at each end of a network connection are who they claim to be by verifying that they both have the correct private <a href="https://www.cloudflare.com/learning/ssl/what-is-a-cryptographic-key/" rel="nofollow noreferrer">key</a>. The information within their respective <a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate" rel="nofollow noreferrer">TLS certificates</a> provides additional verification. You can read more about it <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/" rel="nofollow noreferrer">here</a>. Additionally yo can also see page about <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-http/" rel="nofollow noreferrer">HTTP Traffic</a> (mTLS is required for this case).</p>
<blockquote>
<p>All of my services are now exposing port 80 for HTTP. Do I need to convert all of them to port 443 and HTTPS or just the ingress gateway is enough?</p>
</blockquote>
<p>It is possible to create <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/" rel="nofollow noreferrer">Ingress Gateway without TLS Termination</a>:</p>
<blockquote>
<p>The <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/" rel="nofollow noreferrer">Securing Gateways with HTTPS</a> task describes how to configure HTTPS ingress access to an HTTP service. This example describes how to configure HTTPS ingress access to an HTTPS service, i.e., configure an ingress gateway to perform SNI passthrough, instead of TLS termination on incoming requests.</p>
</blockquote>
<p><strong>EDIT (added more explanation and documentation links):</strong></p>
<blockquote>
<p><a href="https://istio.io/latest/about/service-mesh/" rel="nofollow noreferrer">Service mesh</a> uses a proxy to intercept all your network traffic, allowing a broad set of application-aware features based on configuration you set.</p>
</blockquote>
<blockquote>
<p>Istio securely provisions strong identities to every workload with X.509 certificates. Istio agents, running alongside each Envoy proxy, work together with istiod to automate key and certificate rotation at scale. The <a href="https://istio.io/latest/docs/concepts/security/#pki" rel="nofollow noreferrer">following diagram</a> shows the identity provisioning flow.</p>
</blockquote>
<blockquote>
<p>Peer <a href="https://istio.io/latest/docs/concepts/security/#authentication" rel="nofollow noreferrer">authentication</a>: used for service-to-service authentication to verify the client making the connection. Istio offers <a href="https://en.wikipedia.org/wiki/Mutual_authentication" rel="nofollow noreferrer">mutual TLS</a> as a full stack solution for transport authentication, which can be enabled without requiring service code changes.</p>
</blockquote>
<p>Peer authentication modes that are supported: <code>Permissive</code>, <code>Strict</code>, and <code>Disable</code>.</p>
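<p>For illustration (a sketch using the standard Istio API, not taken from the answer's links verbatim): mesh-wide strict mTLS between workloads is typically enabled with a single <code>PeerAuthentication</code> resource in the root namespace:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT            # sidecars only accept mTLS traffic
</code></pre>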
<p>In order to answer this question:</p>
<blockquote>
<p>All of my services are now exposing port 80 for HTTP. Do I need to convert all of them to port 443 and HTTPS or just the ingress gateway is enough?</p>
</blockquote>
<p>more fully: an Istio Gateway can expose services from the Istio service mesh to the outside using plain HTTP, with TLS termination, or in PASSTHROUGH TLS mode. Incoming TLS termination could be improved by using a TLS certificate signed by a trusted CA, or by using cert-manager with the Istio Gateway. You can read more about this topic <a href="https://istio.io/latest/docs/ops/integrations/certmanager/" rel="nofollow noreferrer">here</a>.</p>
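<p>As a sketch of the outside-client case: with TLS termination configured on the <code>Gateway</code> resource, the certificate only needs to exist as a secret next to the ingress gateway rather than in every backend service. The hostnames below come from the question and the secret name is a placeholder:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE                      # terminate TLS at the ingress gateway
        credentialName: example-com-tls   # kubernetes.io/tls secret in the ingress gateway's namespace
      hosts:
        - "www.example.com"
        - "api.example.com"
</code></pre>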
| Mikołaj Głodziak |
<p>Prometheus question:
I am using Prometheus on Helm, and I want to mount several .yml files into the same location, /etc/config/alertingRules.
It is vital that these remain separate files in the git repo.
I have tried mounting each one into its own ConfigMap and using "extraConfigmapMounts" to put them all in that location, but I am facing difficulties.</p>
<p>I've tried two configurations:</p>
<p>first:</p>
<pre><code>  extraConfigmapMounts:
    - name: recording-rules
      mountPath: /etc/config/recording-rules.yml
      subPath: recording-rules.yml
      configMap: recording-rules
      readOnly: true
    - name: dummytest-alerting
      mountPath: /etc/config/alertingRules/dummytest.yml
      subPath: dummytest.yml
      configMap: dummytest-alerting
      readOnly: true
    - name: app1-alerting
      mountPath: /etc/config/alertingRules/app1.yml
      subPath: app1.yml
      configMap: app1-alerting
      readOnly: true
    - name: app2-alerting
      mountPath: /etc/config/alertingRules/app2.yml
      subPath: app2.yml
      configMap: app2-alerting
      readOnly: true
</code></pre>
<p>This Helm upgrade succeeds, but then prometheus-server fails to load with this error:</p>
<blockquote>
<p>ts=2022-06-13T08:25:35.322Z caller=manager.go:968 level=error
component="rule manager" msg="loading groups failed"
err="/etc/config/alertingRules/dummytest.yml: read
/etc/config/alertingRules/dummytest.yml: is a directory</p>
</blockquote>
<p>second:</p>
<pre><code>  extraConfigmapMounts:
    - name: recording-rules
      mountPath: /etc/config/
      subPath: recording-rules.yml
      configMap: recording-rules
      readOnly: true
    - name: dummytest-alerting
      mountPath: /etc/config/alertingRules/
      subPath: dummytest.yml
      configMap: dummytest-alerting
      readOnly: true
    - name: app1-alerting
      mountPath: /etc/config/alertingRules/
      subPath: app1.yml
      configMap: app1-alerting
      readOnly: true
    - name: app2-alerting
      mountPath: /etc/config/alertingRules/
      subPath: app2.yml
      configMap: app2-alerting
      readOnly: true
</code></pre>
<p>With this configuration, Helm fails giving this error:</p>
<blockquote>
<p>Error: UPGRADE FAILED: failed to create patch: The order in patch
list: [map[mountPath:/etc/config name:server-recording-rules
readOnly:true subPath:recording-rules.yml] map[mountPath:/etc/config
name:config-volume] map[mountPath:/data subPath:]
map[mountPath:/etc/config/alertingRules name:server-app2-alerting
readOnly:true subPath:app2.yml]
map[mountPath:/etc/config/alertingRules
name:server-app1-alerting readOnly:true subPath:app1.yml]
map[mountPath:/etc/config/alertingRules name:server-dummytest-alerting
readOnly:true subPath:dummytest.yml]] doesn't match $setElementOrder
list: [map[mountPath:/etc/config] map[mountPath:/data]
map[mountPath:/etc/config] map[mountPath:/etc/config/alertingRules]
map[mountPath:/etc/config/alertingRules]
map[mountPath:/etc/config/alertingRules]]</p>
</blockquote>
<p>Any suggestions as to how to mount several .yml files into the /etc/config of prometheus server?</p>
| Yonatan Huber | <p>The solution I found is this:
I create an empty ConfigMap called "alerting-rules".
Every microservice gets its own YAML file containing only a "data" field. For example:</p>
<pre><code>data:
  app1.yaml: |
    groups:
    ...
    ...
    ...
    ...
    ...
</code></pre>
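<p>The empty ConfigMap itself can be as minimal as the sketch below (the <code>prometheus</code> namespace is assumed from the patch command that follows):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: alerting-rules
  namespace: prometheus
</code></pre>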
<p>I mount the various alerting yamls into alerting-rules using the "kubectl patch" command, for example:</p>
<blockquote>
<pre><code>- kubectl patch configmap -n prometheus alerting-rules --patch-file path/to/app1.yaml
</code></pre>
</blockquote>
<p>In the prometheus values I add this:</p>
<pre><code>server:
  extraConfigmapMounts:
    - name: recording-rules
      mountPath: /etc/config/recordingRules
      configMap: recording-rules
      readOnly: true
    - name: alerting-rules
      mountPath: /etc/config/alertingRules
      configMap: alerting-rules
      readOnly: true
</code></pre>
<p>And further down, under <code>serverFiles</code>, I add this:</p>
<pre><code>serverFiles:
  prometheus.yml:
    rule_files:
      - /etc/config/recordingRules/recording-rules.yaml
      - /etc/config/alertingRules/dummytest.yaml
      - /etc/config/alertingRules/app1.yaml
      - /etc/config/alertingRules/app2.yaml
</code></pre>
<p>This loads all alerts into prometheus, thus allowing us to manage our alerts in a per-microservice yaml.</p>
| Yonatan Huber |
<p>I'm invoking Kaniko (a Docker image that can build Docker images) successfully in this way (EKS environment):</p>
<pre><code>cat build.tar.gz | kubectl run kaniko-httpd-ex --quiet --stdin --rm --restart=Never --image=748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0 --env=AWS_SDK_LOAD_CONFIG=true -- --destination=748960220740.dkr.ecr.eu-west-1.amazonaws.com/httpd-ex:23-04-26_08-54-DV-6525-kube --context tar://stdin --label commit=8e3a236f702c689891a50a60acf7e05658fa3939 --label build_url=Sin-Jenkins
</code></pre>
<p>This works OK, except when there is not enough ephemeral storage available.</p>
<p>Now I want to specify limits, like <code>ephemeral-storage</code>.
As the <code>--limits</code> option has been removed in recent versions of Kubernetes, I have to use <code>--overrides</code>, and I have to change many things.</p>
<p>Here is how I do it:</p>
<pre><code>cat build.tar.gz | kubectl run kaniko-httpd-ex --quiet --restart=Never -i --rm --image=748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0 --overrides='{"apiVersion":"v1",
  "spec":
    {"containers":[{
      "name":"kaniko",
      "stdin": true,
      "restartPolicy":"Never",
      "image":"748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0",
      "env":[{"name":"AWS_SDK_LOAD_CONFIG","value":"true"}],
      "resources":{"requests":{
        "ephemeral-storage":"116Mi",
        "memory": "50Mi",
        "cpu": "50m"
      }},
      "args":[
        "--destination=748960220740.dkr.ecr.eu-west-1.amazonaws.com/httpd-ex:23-04-26_08-57-DV-6525-kube",
        "--context","tar://stdin"
        ,"--label","commit=8e3a236f702c689891a50a60acf7e05658fa3939","--label","build_url=Sin-Jenkins"]
    }]}'
</code></pre>
<p>It launches correctly kaniko, but after 65 seconds, kaniko is killed and the only error message is:</p>
<pre><code>error: timed out waiting for the condition
</code></pre>
<pre><code>$ kubectl get ev | grep kaniko
19m Normal Scheduled pod/kaniko-httpd-ex Successfully assigned sbt-david/kaniko-httpd-ex to ip-10-120-134-171.eu-west-1.compute.internal
19m Normal Pulled pod/kaniko-httpd-ex Container image "748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0" already present on machine
19m Normal Created pod/kaniko-httpd-ex Created container kaniko
19m Normal Started pod/kaniko-httpd-ex Started container kaniko
17m Normal Killing pod/kaniko-httpd-ex Stopping container kaniko
</code></pre>
<p>How to specify limits on resources when using <code>stdin</code>?</p>
<p>Related info:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubectl/issues/1101" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/issues/1101</a></li>
</ul>
<p>Notes:</p>
<ul>
<li>I cannot use the <code>--limits</code> flag from <code>kubectl run</code> as it has been removed recently.</li>
</ul>
| david.perez | <p>Your sample:</p>
<pre><code>...
"resources":{"requests":{
"ephemeral-storage":"116Mi"}, # <-- incorrect
...
</code></pre>
<p>Try:</p>
<pre><code>...
"resources":{
  "requests":{
    "ephemeral-storage": "116Mi",
    "memory": "50Mi",
    "cpu": "50m"
  },
...
</code></pre>
<p>Here's a working example that runs a pod for 5m. Do <code>kubectl describe pod busybox</code> and check the <code>Requests</code> section:</p>
<pre><code>kubectl run busybox --image busybox --overrides '
{
  "apiVersion": "v1",
  "spec": {
    "restartPolicy": "Never",
    "containers": [
      {
        "name": "busybox",
        "image": "busybox",
        "command": ["ash", "-c", "sleep 300"],
        "resources": {
          "requests":{
            "ephemeral-storage": "116Mi",
            "memory": "50Mi",
            "cpu": "50m"
          }}}]}}'
</code></pre>
| gohm'c |
<p>I can't figure out what's wrong with my role binding. I keep getting this error while trying to get metrics for my pod.</p>
<p><code>"pods.metrics.k8s.io "my-pod-name" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "metrics.k8s.io" in the namespace "default""</code></p>
<p>Here is the Cluster role yaml file</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: ["", "metrics.k8s.io"] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
</code></pre>
<p>Then I ran this command</p>
<pre><code>kubectl create clusterrolebinding pod-reader \
--clusterrole=pod-reader \
--serviceaccount=default:default
</code></pre>
| Daniel Kobe | <p>ClusterRole &amp; ClusterRoleBinding are non-namespaced (cluster-scoped) resources, so remove the namespace line from your ClusterRole YAML file.</p>
<p>Alternatively, use Role & RoleBinding if you want to scope to a namespace.</p>
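<p>For reference, a sketch of that namespaced alternative, granting the <code>default</code> ServiceAccount in <code>default</code> the same read access to pods and pod metrics:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: ["", "metrics.k8s.io"]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>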
| Nathan |
<p>What is the difference between <code>$kubectl create deploy</code> and <code>$kubectl create deployment</code>? Google Cloud Fundamental's Lab is using kubectl create deploy command, but in the Kubernetes documentation/Interactive Tutorial (<a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/</a>), it is using the command kubectl create deployment. So just wanted to ask this group, which one is the correct/latest?</p>
| Sudipta Deb | <p>They mean the same thing; <code>deploy</code> is simply the short name for <code>deployment</code>. You can find the SHORTNAMES for K8s resources with <code>kubectl api-resources</code>.</p>
| gohm'c |
<p>Any idea how to go about this? I can't find much clear info on Google about measuring errors (4xx and 5xx) on my service endpoints. My services are up, and when I delete pods just as a test I can see in the Blackbox metrics that Prometheus registers an error, but it is not broken down by type such as 4xx or 5xx.</p>
<p>Edit 1:</p>
<ul>
<li>Yes, I have set up my cluster; at this stage it is experimental, running on VirtualBox + Vagrant + K3s. I have created two simple services, one frontend and one backend, and configured Prometheus jobs to discover the services and probe their uptime via the Blackbox monitor. My goal is to somehow get metrics on a Grafana dashboard that measure the number of 4xx or 5xx errors for all requests to these services within a period of time. Currently what's on my mind is measuring the number of 2xx responses and reporting only non-2xx status codes, but that would include more errors/statuses than just 4xx and 5xx.</li>
</ul>
<p>Prometheus is deployed as a helm stack, same with the Blackbox monitor. Everything is deployed on the default namespace, because at this stage is just for testing on how to achieve this goal.</p>
| Nesim Pllana | <p>Based on <a href="https://groups.google.com/g/prometheus-users/c/QUY9NsLPsZk" rel="nofollow noreferrer">this topic</a>:</p>
<blockquote>
<p>Services in Kubernetes are kind of like load-balancers - they just route requests to underlying pods. The pods themselves actually contain the application that does the work and returns the status code.
You don't monitor kubernetes services <em>per-se</em> for 4xx or 5xx errors, you need to monitor the underlying application itself.</p>
</blockquote>
<p>So, you need to create an architecture for monitoring your application. Prometheus only collects metrics and makes graphs out of them; it does not process anything by itself. Metrics must be exposed by the application. <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus/" rel="nofollow noreferrer">Here</a> you can find the topic - Kubernetes monitoring with Prometheus, the ultimate guide. It is very comprehensive and explains perfectly how to monitor an application. For you, the most interesting part should be <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus/#services" rel="nofollow noreferrer">How to monitor a Kubernetes service with Prometheus</a>. You can also find there a <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus-operator-part3/" rel="nofollow noreferrer">Prometheus Operator Tutorial</a>. It could help you automate the deployment of Prometheus, Alertmanager and Grafana.</p>
<p>Once you've installed everything, you'll be able to collect metrics. It is good practice to use <a href="https://prometheus.io/docs/practices/instrumentation/#use-labels" rel="nofollow noreferrer">labels</a>. This allows you to easily distinguish between different response codes from your application.</p>
<blockquote>
<p>For example, rather than <code>http_responses_500_total</code> and <code>http_responses_403_total</code>, create a single metric called <code>http_responses_total</code> with a <code>code</code> label for the HTTP response code. You can then process the entire metric as one in rules and graphs.</p>
</blockquote>
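<p>Once such a metric exists, a hedged PromQL sketch for counting only the 4xx/5xx responses (the metric and label names follow the quote above and may differ in your application):</p>
<pre><code># per-status-code rate of 4xx and 5xx responses over the last 5 minutes
sum by (code) (rate(http_responses_total{code=~"4..|5.."}[5m]))
</code></pre>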
| Mikołaj Głodziak |
<p>In CKAD exam I have been asked to SSH to other node in cluster to do some kubectl operations like <code>kubectl get all</code>, though with that getting below:</p>
<blockquote>
<p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
</blockquote>
<p>Tried doing sudo, but did not work and did check kubectl config view (can see empty file in client node)</p>
<p>How to do this?</p>
| Arpit Saklecha | <p>You need to list the available nodes in the cluster, but first, make sure you're using the correct context:</p>
<pre><code>k get nodes
</code></pre>
<p>You will get the available nodes, e.g.:<br />
<code>node-0 node-1</code> (check which one is the worker node, or if you were asked to ssh to a specific node, copy-paste its name), then run:</p>
<pre><code>ssh node-0
</code></pre>
<p>This lets you create files/directories on that node (e.g. to persist data); once you finish, return to the master node to complete your task with kubectl.</p>
| ahmedabouhamed |
<p>We have multi-node cluster setup, now try to install dashboard.</p>
<p>I ran command <code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml </code> and it deploy all resource.</p>
<p>I check the service is created.</p>
<pre><code># kubectl get services -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.103.75.109 <none> 8000/TCP 4m53s
kubernetes-dashboard ClusterIP 10.106.194.108 <none> 443/TCP 4m53s
</code></pre>
<p>Then try to access the service url.</p>
<pre><code># curl https://10.106.194.108 -k
<!--
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--><!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<title>Kubernetes Dashboard</title>
<link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png">
<meta name="viewport" content="width=device-width">
<style>body,html{height:100%;margin:0;}</style><link rel="stylesheet" href="styles.aa0538c9a91ebbb04705.css" media="print" onload="this.media='all'"><noscript><link rel="stylesheet" href="styles.aa0538c9a91ebbb04705.css"></noscript></head>
<body>
<kd-root></kd-root>
<script src="runtime.1a20bc8321eb559541a1.js" defer></script><script src="polyfills.2565916e4afd13edaa84.js" defer></script><script src="scripts.f76573725d49abb057d3.js" defer></script><script src="en.main.7f7baee1f12d075d7cb9.js" defer></script>
</body></html>
</code></pre>
<p>Then I follow the steps to convert <code>ClusterIP</code> to <code>NodePort</code>,</p>
<p>Service before conversion.</p>
<pre><code># kubectl -n kubernetes-dashboard describe service kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.106.194.108
IPs: 10.106.194.108
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints: 192.168.211.130:8443
Session Affinity: None
Events: <none>
</code></pre>
<p>Service after edit using <code>kubectl -n kubernetes-dashboard edit service kubernetes-dashboard</code> command.</p>
<pre><code># kubectl -n kubernetes-dashboard describe service kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.106.194.108
IPs: 10.106.194.108
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32358/TCP
Endpoints: 192.168.211.130:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>then <code>curl https://10.106.194.108:32358 -k</code> never return data :(.</p>
<p>I check the pod IP, and its working.</p>
<pre><code># curl https://192.168.211.130:8443 -k
<!--
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--><!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<title>Kubernetes Dashboard</title>
<link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png">
<meta name="viewport" content="width=device-width">
<style>body,html{height:100%;margin:0;}</style><link rel="stylesheet" href="styles.aa0538c9a91ebbb04705.css" media="print" onload="this.media='all'"><noscript><link rel="stylesheet" href="styles.aa0538c9a91ebbb04705.css"></noscript></head>
<body>
<kd-root></kd-root>
<script src="runtime.1a20bc8321eb559541a1.js" defer></script><script src="polyfills.2565916e4afd13edaa84.js" defer></script><script src="scripts.f76573725d49abb057d3.js" defer></script><script src="en.main.7f7baee1f12d075d7cb9.js" defer></script>
</body></html>
</code></pre>
<p>How to check why its not working after changing to <code>NodePort</code> ?</p>
| Nilesh | <p>A Kubernetes NodePort service works like this:</p>
<p>which means your valid endpoint is the node's IP address plus the NodePort (not the ClusterIP you curled), e.g.:</p>
<pre><code>http(s)://{NODE_IP}:{NODE_PORT}
</code></pre>
<p><a href="https://i.stack.imgur.com/3wepF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3wepF.png" alt="enter image description here" /></a></p>
| Eyal Solomon |
<p>I'm using <code>microk8s</code> on an Unbuntu 20.04 LTS server hosted in AWS (EC2). I want to install <code>kubeflow</code> in order to setup a pipeline for machine learning jobs. I followed this official installation guide: <a href="https://ubuntu.com/ai/install-kubeflow" rel="nofollow noreferrer">https://ubuntu.com/ai/install-kubeflow</a></p>
<p>Everything worked fine and I am also able to access the kubeflow dashboard, however, the problem is that most of the menu points are missing and I get the following error when opening the dashboard: "Cannot load dashboard menu link".</p>
<p>I did a lot of research and tried many different options (like intalling kubeflow on top of microk8s using juju), but I always get the same error and I ran out of options.</p>
<p>What could be the reason for that and how can I fix that?</p>
| C3d | <p>I finally found the solution, it is described here: <a href="https://github.com/canonical/bundle-kubeflow/issues/344#issuecomment-827936799" rel="nofollow noreferrer">https://github.com/canonical/bundle-kubeflow/issues/344#issuecomment-827936799</a></p>
<p>This fixed the issue for me.</p>
| C3d |
<p>I have built a structure for a microservice app. auth is the first Dockerfile I have made and as far as I can tell is not building.</p>
<pre><code>C:\Users\geeks\Desktop\NorthernHerpGeckoSales\NorthernHerpGeckoSales\auth>docker build -t giantgecko/auth .
[+] Building 0.1s (9/9) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 206B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 53B 0.0s
=> [internal] load metadata for docker.io/library/node:alpine 0.0s
=> [1/5] FROM docker.io/library/node:alpine 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 479B 0.0s
=> [2/5] WORKDIR /app 0.0s
=> CACHED [3/5] COPY package.json . 0.0s
=> CACHED [4/5] RUN npm install 0.0s
=> ERROR [5/5] COPY /Users/geeks/Desktop/NorthernHerpGeckoSales/NorthernHerpGeckoSales/auth . 0.0s
------
> [5/5] COPY /Users/geeks/Desktop/NorthernHerpGeckoSales/NorthernHerpGeckoSales/auth .:
------
failed to compute cache key: "/Users/geeks/Desktop/NorthernHerpGeckoSales/NorthernHerpGeckoSales/auth" not found: not found
</code></pre>
| Jonathan Lang | <p>I was able to resolve this by changing my Dockerfile to:</p>
<pre class="lang-sh prettyprint-override"><code>FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
<p>The fix came after I changed <code>COPY</code> from the absolute path to <code>. .</code>, and cleared the npm cache with <code>npm cache clean -f</code>.</p>
| Jonathan Lang |
<p>I'm in the process of creating fargate profiles for my AWS EKS cluster using terraform. In <a href="https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v19.5.1/examples/karpenter/main.tf#L114" rel="nofollow noreferrer">this example</a> for the Terraform Karpenter module, they have a loop that creates one profile for each of the 3 availability zones used in the example:</p>
<pre><code>
fargate_profiles = {
for i in range(3) :
"karpenter-${element(split("-", local.azs[i]), 2)}" => {
selectors = [
{ namespace = "karpenter" }
]
# We want to create a profile per AZ for high availability
subnet_ids = [element(module.vpc.private_subnets, i)]
}
}
</code></pre>
<p>Why is this necessary? What's the difference between this and creating a single profile that is attached to 3 zones? Something like this:</p>
<pre><code> fargate_profiles = {
"karpenter" = {
selectors = [
{ namespace = "karpenter" }
]
subnet_ids = var.vpc_private_subnets
}
}
</code></pre>
| kenske | <p>One profile one AZ allows you to deploy pod in specific AZ. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Amazon EKS and Fargate spread pods across each of the subnets that's
defined in the Fargate profile. However, you might end up with an
uneven spread. If you must have an even spread, use two Fargate
profiles. Even spread is important in scenarios where you want to
deploy two replicas and don't want any downtime. We recommend that
each profile has only one subnet.</p>
</blockquote>
| gohm'c |
<p>I am testing istio 1.10.3 to add headers with minikube but I am not able to do so.</p>
<p><strong>Istio</strong> is installed in the <code>istio-system</code> namespaces.
The namespace where the deployment is deployed is labeled with <code>istio-injection=enabled</code>.</p>
<p>In the <code>config_dump</code> I can see the LUA code only when the context is set to <code>ANY</code>. When I set it to <code>SIDECAR_OUTBOUND</code> the code is not listed:</p>
<pre><code>"name": "envoy.lua",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua",
"inline_code": "function envoy_on_request(request_handle)\n request_handle:headers():add(\"request-body-size\", request_handle:body():length())\nend\n\nfunction envoy_on_response(response_handle)\n response_handle:headers():add(\"response-body-size\", response_handle:body():length())\nend\n"
}
</code></pre>
<p>Someone can give me some tips?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: headers-envoy-filter
namespace: nginx-echo-headers
spec:
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_OUTBOUND
listener:
filterChain:
filter:
name: envoy.filters.network.http_connection_manager
subFilter:
name: envoy.filters.http.router
patch:
operation: INSERT_BEFORE
value:
name: envoy.lua
typed_config:
'@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
inline_code: |
function envoy_on_request(request_handle)
request_handle:headers():add("request-body-size", request_handle:body():length())
end
function envoy_on_response(response_handle)
response_handle:headers():add("response-body-size", response_handle:body():length())
end
workloadSelector:
labels:
app: nginx-echo-headers
version: v1
</code></pre>
<p>Below is my deployment and Istio configs:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-echo-headers-v1
namespace: nginx-echo-headers
labels:
version: v1
spec:
selector:
matchLabels:
app: nginx-echo-headers
version: v1
replicas: 2
template:
metadata:
labels:
app: nginx-echo-headers
version: v1
spec:
containers:
- name: nginx-echo-headers
image: brndnmtthws/nginx-echo-headers:latest
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginx-echo-headers-svc
namespace: nginx-echo-headers
labels:
version: v1
service: nginx-echo-headers-svc
spec:
type: ClusterIP
ports:
- name: http
port: 80
targetPort: 8080
selector:
app: nginx-echo-headers
version: v1
---
# ISTIO GATEWAY
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: nginx-echo-headers-gateway
namespace: istio-system
spec:
selector:
app: istio-ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "api.decchi.com.ar"
# ISTIO VIRTUAL SERVICE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginx-echo-headers-virtual-service
namespace: nginx-echo-headers
spec:
hosts:
- 'api.decchi.com.ar'
gateways:
- istio-system/nginx-echo-headers-gateway
http:
- route:
- destination:
# k8s service name
host: nginx-echo-headers-svc
port:
# Services port
number: 80
# workload selector
subset: v1
## ISTIO DESTINATION RULE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: nginx-echo-headers-dest
namespace: nginx-echo-headers
spec:
host: nginx-echo-headers-svc
subsets:
- name: "v1"
labels:
app: nginx-echo-headers
version: v1
</code></pre>
<p>It is only working when I configure the context in <code>GATEWAY</code>. The <code>envoyFilter</code> is running in the <code>istio-system</code> namespace and the <code>workloadSelector</code> is configured like this:</p>
<pre><code>workloadSelector:
labels:
istio: ingressgateway
</code></pre>
<p>But my idea is to configure it in <code>SIDECAR_OUTBOUND</code>.</p>
| Little crazy | <blockquote>
<p>it is only working when I configure the context in <strong>GATEWAY</strong>, the envoyFilter is running in the <strong>istio-system</strong> namespace</p>
</blockquote>
<p>That's correct! You should apply your <code>EnvoyFilter</code> in the config root namespace, which is <code>istio-system</code> in your case.</p>
<p>The most important part: omit the <code>context</code> field when matching your <code>configPatches</code>, so that the patch applies to both sidecars and gateways. You can see examples of usage in <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">this Istio doc</a>.</p>
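<p>A hedged sketch of the adjusted filter, based on the manifest from the question: moved to <code>istio-system</code> and with the <code>context</code> field omitted, while the Lua code stays the same:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: headers-envoy-filter
  namespace: istio-system   # config root namespace
spec:
  workloadSelector:
    labels:
      app: nginx-echo-headers
      version: v1
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      # no "context" field, so the match covers sidecars and gateways
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inline_code: |
            function envoy_on_request(request_handle)
              request_handle:headers():add("request-body-size", request_handle:body():length())
            end
            function envoy_on_response(response_handle)
              response_handle:headers():add("response-body-size", response_handle:body():length())
            end
</code></pre>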
| Mikołaj Głodziak |
<p><strong>Background</strong></p>
<p>I am trying to learn to automate deployments with Jenkins on my laptop computer. I did not check the resource settings in the helm chart when I deployed Jenkins and I ended up over provisioned the memory and cpu requests.</p>
<p>The pod was initializing for several minutes and then eventually ended up in the status of CrashLoopBackOf.</p>
<p><strong>Software and Versions</strong></p>
<pre><code>$ minikube start
😄 minikube v1.17.1 on Microsoft Windows 10 Enterprise 10.0.19042 Build 19042
...
...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.2
...
</code></pre>
<p>Note that Docker was installed from Visual Studio Code with Docker Desktop and Windows 10 WSL Ubuntu 20.04 LTS enabled.</p>
<pre><code>$ helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
</code></pre>
<p><strong>Installation</strong></p>
<pre><code>$ helm repo add stable https://charts.jenkins.io
$ helm repo ls
NAME URL
stable https://charts.jenkins.io
$ kubectl create namespace devops-cicd
namespace/devops-cicd created
$ helm install jenkins stable/jenkins --namespace devops-cicd
$ kubectl get svc -n devops-cicd -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
jenkins ClusterIP 10.108.169.104 <none> 8080/TCP 7m1s app.kubernetes.io/component=jenkins-controller,app.kubernetes.io/instance=jenkins
jenkins-agent ClusterIP 10.103.213.213 <none> 50000/TCP 7m app.kubernetes.io/component=jenkins-controller,app.kubernetes.io/instance=jenkins
$ kubectl get pod -n devops-cicd --output wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
jenkins-0 1/2 Running 1 8m13s 172.17.0.10 minikube <none> <none>
</code></pre>
<p>The pod failed eventually, ending with the status of CrashLoopBackOff</p>
<p>Unfortunately, I forgot to extract the logs for the pod.</p>
<p>In full disclosure, I got it deployed successfully by pulling the chart to my local file system and halved the size of the memory and cpu settings.</p>
<p><strong>Questions</strong></p>
<p>I fear that the situation of over provisioning in the Production environment one day. So how does one stop a failed pod from respawning/restarting and undo/rollback the deployment?</p>
<p>I tried to set Deployment replicas=0 but it had no effect. Actually, the only resources I could see were a couple of Services, the Pod itself, a PersistentVolume and some secrets.</p>
<p>I had to delete the namespace to remove the pod. This is not ideal. So what is the best way to tackle this situation (i.e. just deal with the problematic pod)?</p>
| absolutelynewbie | <p>Drawing on the feedback I have gathered, I confirmed that the pod is scheduled by a <code>StatefulSet</code>. I am attempting to answer my own question in the hope that it is useful for newbies like me.</p>
<p>My question was how to stop a pod (from respawning).</p>
<p>So here I get the info on the StatefulSet:</p>
<pre><code>$ kubectl get statefulsets -n devops-cicd -o wide
NAME READY AGE CONTAINERS IMAGES
jenkins 0/1 33s jenkins,config-reload jenkins/jenkins:2.303.1-jdk11,kiwigrid/k8s-sidecar:1.12.2
</code></pre>
<p>Then scale it down to zero replicas:</p>
<pre><code>$ kubectl scale statefulset jenkins --replicas=0 -n devops-cicd
statefulset.apps/jenkins scaled
</code></pre>
<p>Result:</p>
<pre><code>$ kubectl get statefulsets -n devops-cicd -o wide
NAME READY AGE CONTAINERS IMAGES
jenkins 0/0 6m35s jenkins,config-reload jenkins/jenkins:2.303.1-jdk11,kiwigrid/k8s-sidecar:1.12.2
</code></pre>
| absolutelynewbie |
<p>I have a redis DB server container in a microk8s cluster, and I'd like to keep a running instance of the client so that I can attach to it from time to time to talk to the server container.</p>
<p>In <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">one official example</a>, the redis client is called interactively as follows:</p>
<pre><code>kubectl run -i --tty temp --image redis --command "/bin/sh"
</code></pre>
<p>and then, the user enters data or interacts with the server manually. But I need to do this repeatedly from a shell script. What I tried so far is something like:</p>
<pre><code>kubectl run -i temp --rm --image redis -- bash -c "redis-cli -h redis LRANGE myqueue 0 -1 | cut -f 2- -d ' '"
</code></pre>
<p>This works some times, but occasionally gives an error:</p>
<blockquote>
<p>error: timed out waiting for the condition</p>
</blockquote>
<p>I am not sure exactly what condition is being waited for, and I guess it may be related to the fact that a temporary container is created and immediately deleted when the job is done. But I don't necessarily want to repeatedly create and delete containers like this just to run a few shell commands each time. It's slow and now seems to produce unpredictable errors.</p>
<p><em>Is there a way in kubernetes to keep a running instance of the container (redis client here), and send bash commands to it via the pipeline or some other means, from time to time? (instead of deleting and recreating it)</em></p>
| thor | <p>Try <code>kubectl run temp --image=redis --command -- sleep infinity</code> (note the <code>--</code> separator, so that <code>sleep infinity</code> is passed as the container command). This will start a redis pod that just sleeps. You can then use the pod to execute commands like <code>kubectl exec -it temp -- redis-cli -h redis LRANGE myqueue 0 -1 | cut -f 2- -d ' '</code>. The pod will not exit upon command execution; it will only exit if you delete the pod.</p>
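<p>A minimal sketch of how the long-lived pod could be reused from a shell script; the pod name <code>redis-client</code> and the server hostname <code>redis</code> are assumptions based on the question:</p>
<pre><code>#!/bin/sh
# One-time setup: a long-lived client pod that just sleeps
kubectl run redis-client --image=redis --command -- sleep infinity
kubectl wait --for=condition=Ready pod/redis-client

# Reusable helper: run any redis-cli command through the existing pod
redis_exec() {
  kubectl exec redis-client -- redis-cli -h redis "$@"
}

# Example calls, repeated as often as needed without recreating containers
redis_exec LLEN myqueue
redis_exec LRANGE myqueue 0 -1
</code></pre>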
| gohm'c |
<p>I'm running a bare metal Kubernetes cluster with 1 master node and 3 worker Nodes. I have a bunch of services deployed inside with Istio as an Ingress-gateway.</p>
<p>Everything works fine since I can access my services from outside using the ingress-gateway NodePort.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.106.9.2 <pending> 15021:32402/TCP,80:31106/TCP,443:31791/TCP 2d23h
istiod ClusterIP 10.107.220.130 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d23h
</code></pre>
<p>In our case the port <code>31106</code>.</p>
<p>The issues is, I don't want my customer to access my service on port 31106. that's not user friendly. So is there a way to expose the port 80 to the outside ?</p>
<p>In other word, instead of typing <a href="http://example.com:31106/" rel="nofollow noreferrer">http://example.com:31106/</a> , I want them to be able to type <a href="http://example.com/" rel="nofollow noreferrer">http://example.com/</a></p>
<p>Any solution could help.</p>
| hubert | <p>Based on <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>If the <code>EXTERNAL-IP</code> value is set, your environment has an external load balancer that you can use for the ingress gateway. If the <code>EXTERNAL-IP</code> value is <code><none></code> (or perpetually <code><pending></code>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">node port</a>.</p>
</blockquote>
<p>This is in line with what <a href="https://stackoverflow.com/users/10008173/david-maze">David Maze</a> wrote in the comment:</p>
<blockquote>
<p>A LoadBalancer-type service would create that load balancer, but only if Kubernetes knows how; maybe look up <code>metallb</code> for an implementation of that. The <code>NodePort</code> port number will be stable unless the service gets deleted and recreated, which in this case would mean wholesale uninstalling and reinstalling Istio.</p>
</blockquote>
<p>In your situation you need to access the gateway using the <code>NodePort</code>. Then you can configure Istio. Everything is described step by step in <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control" rel="nofollow noreferrer">this doc</a>. You need to choose the instructions corresponding to <code>NodePort</code> and then set the ingress IP depending on the cluster provider. You can also find sample YAML files in the documentation.</p>
| Mikołaj Głodziak |
<p>I'm trying to SSH into AKS windows node using this <a href="https://learn.microsoft.com/en-us/azure/aks/ssh" rel="nofollow noreferrer">reference</a> which created debugging Linux node, and ssh into the windows node from the debugging node. Once I enter the Linux node and try to SSH into the windows node, it asks me to type in azureuser password like below:</p>
<pre><code>[email protected]'s password:
Permission denied, please try again.
</code></pre>
<p>What is <code>azureuser@(windows node internal IP address)'s</code> password? Is it my azure service password or is it a <code>WindowsProfileAdminUserPassword</code> that I pass in when I create an AKS cluster using <code>New-AzAksCluster</code> <code>cmdlet</code>? Or is it my ssh keypair password? If I do not know what it is, is there a way I can reset it? Or is there a way I can create a Windows node free from credentials? Any help is appreciated. Thanks ahead!</p>
| yunlee | <p>It looks like you're trying to log in with your password, not your SSH key. See the <a href="https://www.thorntech.com/passwords-vs-ssh/" rel="nofollow noreferrer">explanation</a> of the difference between those methods; they are two different authentication mechanisms. If you want to ssh to your node, you need to choose SSH key authentication. You can do this by running the command:</p>
<pre><code>ssh -i <id_rsa> azureuser@<your.ip.adress>
</code></pre>
<p>But before this, you need to create a key pair. It is described well in <a href="https://learn.microsoft.com/en-us/azure/aks/ssh#before-you-begin" rel="nofollow noreferrer">this section</a>. Then you can <a href="https://learn.microsoft.com/en-us/azure/aks/ssh#create-the-ssh-connection-to-a-linux-node" rel="nofollow noreferrer">create the SSH connection to a Linux node</a>. Everything is described in detail, step by step, in the documentation you linked.</p>
<p>When you configure everything correctly, you will be able to log into the node using the ssh key pair. You won't need a password. When you execute the command</p>
<pre><code>ssh -i <id_rsa> azureuser@<your.ip.adress>
</code></pre>
<p>you should see an output like this:</p>
<pre><code>The authenticity of host '10.240.0.67 (10.240.0.67)' can't be established.
ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG.
Are you sure you want to continue connecting (yes/no)? yes
[...]
Microsoft Windows [Version 10.0.17763.1935]
(c) 2018 Microsoft Corporation. All rights reserved.
</code></pre>
<p>When you see <code>Are you sure you want to continue connecting (yes/no)?</code> you need to write <code>yes</code> and confirm using <code>Enter</code>.</p>
| Mikołaj Głodziak |
<p>I'm trying to retrieve pods using a JSONPATH query which matches the name with a certain pattern matching as specified below and I get the error as shown. Any reason what would be the reason for the failure.</p>
<pre><code>kubectl get po -n sdfd -o jsonpath='{.items[?(@.metadata.generateName =~ /abc.*?/i)].status.podIP}'
error: error parsing jsonpath {.items[?(@.metadata.generateName =~ /abc.*?/i)].status.podIP}, unrecognized character in action: U+007E '~'
</code></pre>
<p>Please find the kubectl cli version as shown below :-</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T21:54:15Z", GoVersion:"go1.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.9", GitCommit:"454b5b515582f8ac8419435dc9c230fc97fb844b", GitTreeState:"clean", BuildDate:"2021-11-01T19:59:05Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| Avi | <p>According to the <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>JSONPath regular expressions are not supported. If you want to match using regular expressions, you can use a tool such as <strong>jq</strong>.</p>
</blockquote>
<p>Run the following command to achieve the desired result:</p>
<pre><code>kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).spec.containers[].image'
</code></pre>
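<p>Adapted to the fields from the question (matching <code>generateName</code> against a prefix, case-insensitively, and printing the pod IP), a hedged sketch could look like this; the <code>abc</code> prefix and the namespace are placeholders:</p>
<pre><code>kubectl get pods -n sdfd -o json | jq -r '.items[] | select(.metadata.generateName // "" | test("^abc"; "i")).status.podIP'
</code></pre>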
<p>Refer to this Github <a href="https://github.com/kubernetes/kubernetes/issues/61406" rel="nofollow noreferrer">issue</a> and <a href="https://stackoverflow.com/questions/36211618/how-to-parse-json-format-output-of-kubectl-get-pods-using-jsonpath">stackpost</a> for more information.</p>
| Fariya Rahmat |
<p>I was trying to resize persistent volumes associated with a stateful set today. I am using Azure Kubernetes service v1.26.6. The persistent voluem is created from a storage class of type "default".</p>
<p>As per the official Kubernetes documentation at: <a href="https://kubernetes.io/blog/2022/05/05/volume-expansion-ga/" rel="nofollow noreferrer">https://kubernetes.io/blog/2022/05/05/volume-expansion-ga/</a>, it is now possible to expand the size of a persistent volume without any downtime, just by updating the spec field of the PVC of the stateful set and re-deploying the stateful set. However, I attempted the same today(i.e tried to redeploy an already deployed stateful set but with increased size for the PVC) and i ran into the usual error:</p>
<blockquote>
<p>Error: UPGRADE FAILED: cannot patch "grafana-loki-querier" with kind
StatefulSet: StatefulSet.apps "grafana-loki-querier" is invalid: spec:
Forbidden: updates to statefulset spec for fields other than
'replicas', 'ordinals', 'template', 'updateStrategy',
'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are
forbidden</p>
</blockquote>
<p>To overcome this error, I then followed the instructions at: <a href="https://www.giffgaff.io/tech/resizing-statefulset-persistent-volumes-with-zero-downtime" rel="nofollow noreferrer">https://www.giffgaff.io/tech/resizing-statefulset-persistent-volumes-with-zero-downtime</a> and was then finally able to successfully deploy the <strong>helm chart</strong> containing updated sizes for the resized PVC.</p>
<p><strong>Questions</strong></p>
<ul>
<li><p>Am I misunderstanding the official Kubernetes documentation that I have linked in the question?</p>
</li>
<li><p>I am curious. Technically, if the size of the PVC can be increased by editing the spec field in the PVC object directly(kubectl edit pvc), why is Kubernetes imposing a restriction to do the same by directly updating the size in the PVC manifest files ie. why cannot I increase the size of the PVC using a <strong>helm deployment</strong>?</p>
</li>
</ul>
<p>Why is the workflow not an integral part of Kubernetes deployments using Helm?</p>
<p>Why does it involve deleting the stateful set followed by recreation of the stateful set? What is the reason for the same?</p>
<p><a href="https://i.stack.imgur.com/MLT6T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MLT6T.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/Kehzj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Kehzj.png" alt="enter image description here" /></a></p>
| Kiran Hegde | <blockquote>
<blockquote>
<p>Questions -Am i misunderstanding the official Kubernetes documentation that i have linked in the question?</p>
</blockquote>
</blockquote>
<p>Not really, but you are only partially right.</p>
<p>Even if StorageClass supports resizing, Kubernetes doesn't support (yet) volume expansion through StatefulSets. Please refer to the official enhancement issue at <a href="https://github.com/kubernetes/enhancements/issues/661" rel="nofollow noreferrer">Support Volume Expansion Through StatefulSets</a>
and Kubernetes <a href="https://github.com/kubernetes/kubernetes/issues/68737" rel="nofollow noreferrer">StatefulSet: support resize pvc storage</a>.</p>
<h3>More Insights</h3>
<blockquote>
<p>Why is the workflow not an integral part of Kubernetes deployments using Helm?</p>
</blockquote>
<p>In this case, this has nothing to do with Helm charts but with the Kubernetes controller involved in the application stack: the StatefulSet controller continuously monitors the app and reconciles it against its defined manifest, which is why, as mentioned in your workaround document, you first had to scale it to <code>0</code>.</p>
<p>Second, you had to delete the StatefulSet with the orphan strategy, so that the actual pods (the working application) are not deleted and only the StatefulSet object itself is removed. Because of that, you are allowed to update the PVC, and it gets resized as long as resizing is supported and enabled on your StorageClass / storage driver, which seems to be the case here.</p>
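<p>For reference, a minimal command-level sketch of that workaround; the StatefulSet and PVC names below are placeholders based on the question, the new size is an example, and <code>--cascade=orphan</code> is <code>--cascade=false</code> on older kubectl versions:</p>
<pre><code># Delete only the StatefulSet object; pods and PVCs keep running
kubectl delete statefulset grafana-loki-querier --cascade=orphan

# Resize each PVC in place (expansion must be allowed by the StorageClass)
kubectl patch pvc data-grafana-loki-querier-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Re-create the StatefulSet with the new size in its volumeClaimTemplates,
# e.g. by re-running the helm upgrade with the updated values
helm upgrade <release> <chart> -f values.yaml
</code></pre>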
<blockquote>
<blockquote>
<p>Normally helm has quite a lot of flaws but in this case, it's not the culprit.😬</p>
</blockquote>
</blockquote>
| ishuar |
<p>In my <code>/mnt/</code> I have a number of hard drives mounted (e.g. at <code>/mnt/hdd1/</code>, <code>/mnt/hdd2/</code>). Is there any way to make a K8s Persistent Volume on <code>/mnt</code> that can see the content of the hard drives mounted under <code>/mnt</code>? When I make a Local Persistent Volume on <code>/mnt</code>, the K8s pods see the directories hdd1 and hdd2, but they appear as empty.</p>
<p>The following is what I have tested:</p>
<h3>Undesired solution 1:</h3>
<p>I can make a Local Persistent Volume on <code>/mnt/hdd1</code> and then my K8s pod will be able to see the contents of hdd1 hard drive. But as I mentioned before, I want my pod to see <strong>all</strong> the hard drives and I don't want to make a persistent volume for each hard drive especially when I mount a new hard drive under <code>/mnt</code>.</p>
<h3>Undesired solution 2:</h3>
<p>I can mount a Local Persistent Volume on <code>/mnt/</code> with the K8s option of <code>mountPropagation: HostToContainer</code> in the yaml file for my deployment. In this case my pod will see the content of the hard drive if I <strong>remount</strong> the hard drive. But this is not desired because if the pod restarts, I need to remount the hard drive again for the pod to see its content! (Only works when hard drive is remounted when the pod is alive)</p>
| Ali_MM | <p>This approach, the <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner" rel="nofollow noreferrer">Local Persistent Volume Static Provisioner</a>, fits better with the Kubernetes way of working.</p>
<p>It supports metrics, storage lifecycle management (e.g. cleanup) and node/PV affinity, and it is extensible (e.g. dynamic ephemeral storage). For example, with <a href="https://github.com/brunsgaard/eks-nvme-ssd-provisioner" rel="nofollow noreferrer">eks-nvme-ssd-provisioner</a>, a DaemonSet can run to provision fast storage as local volumes. This is ideal for workloads that require ephemeral local storage for data caching or fast compute, with no need to manually perform the mount on the EC2 node before pods start.</p>
<p>Usage yaml examples are here, <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/tree/master/examples" rel="nofollow noreferrer">sig-storage-local-static-provisioner/examples</a>.</p>
| gohm'c |
<p>I just configured a Kubernetes Cluster with this environment</p>
<p>4 VPS nodes with one public IP for each node</p>
<p>K3S Cluster with embebed etcd (k3s version v1.22.7+k3s1)</p>
<ul>
<li>3 master nodes</li>
<li>1 Worker node just for testing</li>
</ul>
<p>MetalLB for Internal Load Balancer (metallb/v0.12.1)</p>
<ul>
<li>IP Range 10.10.0.200-10.10.0.250</li>
</ul>
<p>Traefik as default Kubernetes Ingress Class (Chart v10.19.4 & App v2.6.3)</p>
<p>Every thing is running as expected, I can access all services inside each node in the cluster.</p>
<p>Now, how to finally expose services to Internet Acesss?</p>
<ul>
<li>Cloud Provider Firewall already exposing ports 80 and 443</li>
<li>Internal iptables firewall accept public traffict from those ports</li>
</ul>
<p>I thought Traefik automatically expose port 80 and 443, but lsof actually is not showing as "LISTEN". and pubic ips not responding anything. I am really confused at this, I am newby in kubernetes world.</p>
<p>I have tried port forwarding private ip to metallb load balancer ip but it actually not solve the route.</p>
<pre><code>iptables -t nat -I PREROUTING -p tcp -d <enp0s3-local-ip> --dport 80 -j DNAT --to-destination <load-balancer-ip>:80
iptables -I FORWARD -m state -d <load-balancer-subnet>/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT
</code></pre>
<p>Edit: <a href="https://i.postimg.cc/mD00Fk4L/Screen-Shot-2022-04-06-at-22-04-02.png" rel="nofollow noreferrer">The nodes and traefik already showing the public ip</a></p>
<p>But response from outside the cluster still <code>curl: (56) Recv failure: Connection reset by peer</code></p>
| joepa37 | <p>Try using the <code>kubectl</code> expose command:</p>
<pre><code>$ kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]
</code></pre>
<blockquote>
<p>--external-ip=Additional external IP address (not managed by Kubernetes) to accept for the service. If this IP is routed to a node,
the service can be accessed by this IP in addition to its generated
service IP.</p>
</blockquote>
<p>Or, when you install Traefik, add this values file (saved as traefik.yaml in this case):</p>
<pre><code>service:
externalIPs:
- <your_external_static_ip_here_without_the_brackets>
</code></pre>
<p>and then install it like this:</p>
<pre><code>helm install --values=./traefik.yaml traefik traefik/traefik -n traefik --create-namespace
</code></pre>
<p>Refer to the <a href="https://stackoverflow.com/questions/62559281/expose-kubernetes-cluster-to-internet">stackpost</a> and a document on <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#:%7E:text=From%20the%20Service%20type%20drop,Kubernetes%20assigned%20to%20your%20Service." rel="nofollow noreferrer">Exposing applications using services</a> for more information.</p>
| Fariya Rahmat |
<p>I am trying to mirror traffic to two copies of the same service in different namespaces. I can access both services by curling their FQDN from a pod running in the default namespace but when i apply the following virtual service nothing gets mirrored. What am i doing wrong?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: testservice-virtual-service
spec:
hosts:
- testservice.default.svc.cluster.local
gateways:
- istio-system/my-gateway
http:
- route:
- destination:
host: testservice.default.svc.cluster.local
weight: 100
mirror:
host: testservice.staging.svc.cluster.local
mirrorPercentage:
value: 100.0
</code></pre>
<p><strong>EDIT</strong> - I have tried adding just the service from the default namespace using the FQDN to the hosts field as well as adding both the default and staging namespace services using the FQDN and still do not see any traffic being mirrored to the staging service.</p>
| arezee | <p>I have posted community wiki answer for better visibility.</p>
<p>As OP has mentioned in the comment, the problem is resolved:</p>
<blockquote>
<p>I was doing something extremely stupid and did not have istio enabled in the default namespace.</p>
</blockquote>
| Mikołaj Głodziak |
<p>We have setup a Kubernetes cluster with a specific <code>service-cluster-ip-range</code>. Now, this range is fully used with IP addresses, so we cannot create new services. When trying to do so, we get the following error:</p>
<pre><code>Warning ProvisioningFailed 2s persistentvolume-controller Failed to provision volume with StorageClass "glusterfs-storage": failed to create volume: failed to create endpoint/service default/glusterfs-dynamic-gluster-vol-mongodb-data-03: error creating service: Internal error occurred: failed to allocate a serviceIP: range is full
</code></pre>
<p>We must increase or change the cluster IP range.</p>
<p>We havn't found any documentation on how to change the cluster IP range. Is that even possible? What would be the steps to do it?</p>
<p>Thanks.</p>
| Florian Coulmier | <p>You can change the service cluster IP range by modifying the <code>--service-cluster-ip-range</code> flag in the static pod manifests under /etc/kubernetes/manifests/ (kube-apiserver.yaml, and kube-controller-manager.yaml if the flag is set there as well). However, you should cordon off the affected nodes and stop/delete all services and deployments so they can get recreated. You may also need to change the settings of your CNI plugin; I use Calico and it needed the cluster CIDR changed, but not the service IP range. After changing the range, be sure to restart the kubelet.</p>
| Steve Starling |
<p>With kubectl we can run the following command</p>
<pre><code>kubectl exec -ti POD_NAME -- pwd
</code></pre>
<p>Can I do that from API level? I checked the POD API and seems it is missing there <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/</a></p>
<p>What I am looking for, is a UI tool to view the files in POD without extra dependency</p>
<p>UPDATE:</p>
<p>I found the following code to exec command in pod</p>
<pre><code>package main
import (
"bytes"
"context"
"flag"
"fmt"
"path/filepath"
corev1 "k8s.io/api/core/v1"
_ "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/remotecommand"
"k8s.io/client-go/util/homedir"
//
// Uncomment to load all auth plugins
// _ "k8s.io/client-go/plugin/pkg/client/auth"
//
// Or uncomment to load specific auth plugins
// _ "k8s.io/client-go/plugin/pkg/client/auth/azure"
// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
// _ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// create the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
namespace := "stage"
pods, err := clientset.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
podName := "ubs-job-qa-0"
containerName := "ubs-job"
// https://github.com/kubernetes/kubernetes/blob/release-1.22/test/e2e/framework/exec_util.go
// https://zhimin-wen.medium.com/programing-exec-into-a-pod-5f2a70bd93bb
req := clientset.CoreV1().
RESTClient().
Post().
Resource("pods").
Name(podName).
Namespace(namespace).
SubResource("exec").
Param("container", containerName)
scheme := runtime.NewScheme()
if err := corev1.AddToScheme(scheme); err != nil {
panic("Cannot add scheme")
}
parameterCodec := runtime.NewParameterCodec(scheme)
req.VersionedParams(&corev1.PodExecOptions{
Stdin: false,
Stdout: true,
Stderr: true,
TTY: true,
        Container: containerName,
Command: []string{"ls", "-la", "--time-style=iso", "."},
}, parameterCodec)
exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
if err != nil {
panic(err)
}
var stdout, stderr bytes.Buffer
err = exec.Stream(remotecommand.StreamOptions{
Stdin: nil,
Stdout: &stdout,
Stderr: &stderr,
})
if err != nil {
panic(err)
}
text := string(stdout.Bytes())
fmt.Println(text)
}
</code></pre>
| Mr.Wang from Next Door | <p>In your case, using kubectl is the same as calling the api-server, which in turn calls the kubelet on the node and execs your command inside the pod's namespaces.</p>
<p>You can experiment like this (note that the <code>exec</code> subresource is a POST endpoint that expects a streaming protocol upgrade, which is what the <code>remotecommand</code> SPDY executor in your Go snippet takes care of):</p>
<pre><code>kubectl proxy --port=8080 &
curl "localhost:8080/api/v1/namespaces/<namespace>/pods/<pod>/exec?command=pwd&stdin=false"
</code></pre>
<p>To copy file you can use: <code>kubectl cp --help</code></p>
| gohm'c |
<p>When I want to restart the kubernetes(<code>v1.21.2</code>) statefulset pod, the pod are stuck with terminating status, and the log shows like this:</p>
<pre><code>error killing pod: failed to "KillPodSandbox" for "8aafe99f-53c1-4bec-8cb8-abd09af1448f" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to check network namespace closed: remove netns: unlinkat /var/run/netns/cni-f9ccb1de-ed43-dff6-1b86-1260e07178e6: device or resource busy"
</code></pre>
<p>the pod terminate for hours but still stuck. why did this happen? what should I do to fixed this problem?</p>
| Dolphin | <p>I think force deletion can be a workaround for this issue.</p>
<p>In order to delete the affected pod that is in the terminating state, please refer to the <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#delete-pods" rel="nofollow noreferrer">documentation</a>. In case the pod still does not get deleted then you can do the force deletion by following <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#force-deletion" rel="nofollow noreferrer">documentation</a>.</p>
<p>Please note that when you force delete a StatefulSet pod, you are asserting that the Pod in question will never again make contact with other Pods in the StatefulSet and its name can be safely freed up for a replacement to be created.</p>
<p>You can also try these workarounds to quickly mitigate this</p>
<ol>
<li>Run the command below to remove all pods in the terminating state.</li>
</ol>
<blockquote>
<p>for p in $(kubectl get pods | grep Terminating | awk '{print $1}'); do
kubectl delete pod $p --grace-period=0 --force;done</p>
</blockquote>
<p>2. Set the <code>finalizers</code> value in the stuck resource's YAML (e.g. the pod) to null.</p>
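<p>A hedged sketch of that second workaround using <code>kubectl patch</code>; the pod name and namespace are placeholders:</p>
<pre><code>kubectl patch pod <pod-name> -n <namespace> -p '{"metadata":{"finalizers":null}}'
</code></pre>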
| Fariya Rahmat |
<p>I deployed Istio <a href="https://istio.io/latest/docs/setup/install/operator/" rel="nofollow noreferrer">using the operator</a> and added a custom ingress gateway which is only accessible from a certain source range (our VPN).</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: ground-zero-ingressgateway
spec:
profile: empty
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
- name: istio-vpn-ingressgateway
label:
app: istio-vpn-ingressgateway
istio: vpn-ingressgateway
enabled: true
k8s:
serviceAnnotations:
...
service:
loadBalancerSourceRanges:
- "x.x.x.x/x"
</code></pre>
<p>Now I want to configure Istio to expose a service outside of the service mesh cluster, using the <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/" rel="nofollow noreferrer">Kubernetes Ingress resource</a>. I use the <code>kubernetes.io/ingress.class</code> annotation to tell the Istio gateway controller that it should handle this <code>Ingress</code>.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: istio
spec:
...
</code></pre>
<ul>
<li>Kubernetes version (EKS): 1.19</li>
<li>Istio version: 1.10.3</li>
</ul>
<p>Which ingress gateway controller is now used (<code>istio-ingressgateway</code> or <code>istio-vpn-ingressgateway</code>)? Is there a way to specify which one should be used?</p>
<p>P.S. I know that I could create a <code>VirtualService</code> and specify the correct gateway but we want to write a manifest that also works without Istio by specifying the correct ingress controller with an annotation.</p>
| ammerzon | <p>You can create an ingress class that references the ingress controller that is deployed by default in the istio-system namespace. This configuration with ingress will work, however to my current knowledge, this is only used for backwards compatibility. If you want to use <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/" rel="nofollow noreferrer">istio ingress</a> controller functionality, you should use istio gateway and virtual service instead:</p>
<blockquote>
<p>Using the <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">Istio Gateway</a>, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.</p>
</blockquote>
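<p>For completeness, a hedged sketch of how a Gateway chooses which of your two ingress gateways handles the traffic: the selector below matches the labels you gave <code>istio-vpn-ingressgateway</code> in the IstioOperator manifest, while the hostname and backend service are placeholders:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: vpn-gateway
  namespace: istio-system
spec:
  selector:
    istio: vpn-ingressgateway   # selects the istio-vpn-ingressgateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.example.internal"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app
spec:
  hosts:
  - "app.example.internal"
  gateways:
  - istio-system/vpn-gateway
  http:
  - route:
    - destination:
        host: app-svc
        port:
          number: 80
</code></pre>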
<p>If this solution is not optimal for you, you could use e.g. the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">nginx ingress controller</a> instead, and you can still bind it with <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">annotations</a> (deprecated) or using <code>IngressClass</code>. To my present knowledge <strong>it is not possible to bind this ingress class with an additional ingress controller.</strong> If you need further explanation or documentation, you can create an <a href="https://github.com/istio/istio/issues" rel="nofollow noreferrer">issue on GitHub</a>.</p>
<p><strong>Summary:</strong> The recommended option is to use a Gateway together with a VirtualService. Another possibility is to use a standalone nginx ingress controller with different ingress classes and an Ingress resource for each.</p>
| Mikołaj Głodziak |
<p>I have kube cluster & its control plane endpoint is haproxy. I want to use hostname of system where haproxy lies and use it as hostname in the ingress resource. Is it possible to achieve this. The request ha proxy backend config is below:</p>
<pre><code>frontend k8s_frontend
bind *:6443
mode tcp
default_backend k8s_backend
backend k8s_backend
mode tcp
balance roundrobin
server master1 10.50.8.117:6443
server master2 10.50.8.118:6443
server master3 10.50.8.119:6443
frontend http_frontend
bind :80
bind :443 ssl crt /com.pem
default_backend servers
backend servers
balance roundrobin
server worker1 10.50.8.120:443 ssl verify none
server worker2 10.50.8.121:443 ssl verify none
</code></pre>
<p>Below is my ingress resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
annotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
rules:
- host: "HAPROXY_HOSTNAME"
http:
paths:
- pathType: Prefix
path: "/k8s"
backend:
service:
name: kubernetes-dashboard
port:
number: 443
</code></pre>
| Surya Teja | <p>Yes, you can use the hostname of the HAProxy machine as the <code>host</code> in the Ingress resource. The ingress controller node can be resolved as a hostname when deploying and exposing the echo server service, as shown below. Kindly refer to this <a href="https://haproxy-ingress.github.io/docs/getting-started/" rel="nofollow noreferrer">document</a>.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: haproxy
  name: echoserver
spec:
  rules:
  - host: $HOST
    http:
      paths:
      - backend:
          serviceName: echoserver
          servicePort: 8080
        path: /
</code></pre>
<p>More details on HAProxy Ingress Controller can be found <a href="https://www.haproxy.com/fr/blog/haproxy_ingress_controller_for_kubernetes/" rel="nofollow noreferrer">here</a>.</p>
| Anbu Thirugnana Sekar |
<p>What I would like is to use podAntiAffinity to limit the number of pods I run on a host of the same version of code.</p>
<p>Specifically, I would like to run 1 pod of version A, and 1 pod of version B. This is to allow to canary deploys without spinning up a large number of new nodes.</p>
<p>I have tried setting my podAntiAffinity</p>
<pre><code> podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
- matchExpressions:
- key: "k8s.git/commit-sha"
operator: In
values:
- valueFrom:
fieldRef:
fieldPath: "metadata.labels['k8s.git/commit-sha']"
topologyKey: kubernetes.io/hostname
weight: 100
</code></pre>
<p>But looking at the source code for k8s, it expected a <code>string</code> object instead of a <code>map</code> object.</p>
<p>Is there another way to accomplish this? Has anyone implemented something similar? I'm running Kubernetes 1.18.</p>
| Mark Robinson | <p>I had this thought before and I resolved it using the method described here: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">Inter-pod affinity</a>. In short, the method schedules pods onto (or away from) nodes that already run a pod matching the intended label, which in your case is the commit SHA.</p>
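<p>A minimal sketch of what that could look like here, assuming the commit SHA is rendered into the manifest as a literal string at deploy time (e.g. by CI or Helm); <code>valueFrom</code>/<code>fieldRef</code> is not supported inside affinity terms, which is why the original attempt was rejected:</p>
<pre><code>affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: k8s.git/commit-sha
          operator: In
          values:
          - "a1b2c3d"   # literal SHA, templated in at deploy time
      topologyKey: kubernetes.io/hostname
</code></pre>
<p>This allows at most one pod per commit SHA on each node, so during a canary rollout a node can host one pod of version A and one of version B.</p>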
| gohm'c |
<p>For at least some of the ingress controllers out there, 2 variables must be supplied: <code>POD_NAME</code> and <code>POD_NAMESPACE</code>. The nginx ingress controller makes sure to inject these 2 variables in the container(s) as seen <a href="https://github.com/kubernetes/ingress-nginx/blob/402f21bcb7402942f91258c8971ff472d81f5322/deploy/static/provider/cloud/deploy.yaml#L349-L356" rel="nofollow noreferrer">here</a> (link for Azure deployment templates), HAProxy is using it (as shown <a href="https://www.haproxy.com/blog/dissecting-the-haproxy-kubernetes-ingress-controller/" rel="nofollow noreferrer">here</a>) and probably others are doing it as well.</p>
<p>I get why these 2 values are needed. For the nginx ingress controller the value of the <code>POD_NAMESPACE</code> variable is used to potentially restrict the ingress resource objects the controller will be watching out for to just the namespace it's deployed in, through the <code>--watch-namespace</code> parameter (the Helm chart showing this in action is <a href="https://github.com/kubernetes/ingress-nginx/blob/402f21bcb7402942f91258c8971ff472d81f5322/charts/ingress-nginx/templates/controller-deployment.yaml#L99" rel="nofollow noreferrer">here</a>). As for <code>POD_NAME</code>, not having this will cause some errors in the ingress internal code (the function <a href="https://github.com/kubernetes/ingress-nginx/blob/402f21bcb7402942f91258c8971ff472d81f5322/internal/k8s/main.go#L91" rel="nofollow noreferrer">here</a>) which in turn will probably prevent the ingress from running without the variables set.</p>
<p>Couldn't the ingress controller obtain this information automatically, based on the permissions it has to run (after all it can watch for changes at the Kubernetes level, so one would assume it's "powerful" enough to see its own pod name and the namespace where it was deployed)? In other words, can't the ingress controller do a sort of "whoami" and get its own data? Or is this perhaps a common pattern used across Kubernetes?</p>
| Mihai Albert | <p><strong>It is done by design</strong>; this is how the community that develops this functionality approaches the subject.</p>
<p>By the time the container starts, the variables are already known: <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Kubernetes provides these variables</a> via the Downward API, and the pod can use them as soon as it runs.</p>
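<p>For reference, a minimal sketch of that Downward API injection (the same pattern as in the deployment manifest linked in the question):</p>
<pre><code>env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
</code></pre>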
<p>Of course, if you have a better idea to solve this, you can suggest it in the official thread on <a href="https://github.com/kubernetes/ingress-nginx/issues" rel="nofollow noreferrer">github</a>.</p>
<p>However, bear in mind that this potential solution:</p>
<blockquote>
<p>Couldn't the ingress controller obtain this information automatically, based on the permissions it has to run (after all it can watch for changes at the Kubernetes level, so one would assume it's "powerful" enough to see its own pod name and the namespace where it was deployed)? In other words, can't the ingress controller do a sort of "whoami" and get its own data? Or is this perhaps a common pattern used across Kubernetes?</p>
</blockquote>
<p>would require an extra step. Firstly, the pod would need additional privileges to query the API server for its own metadata; secondly, when it starts it would not have these values yet and would have to fetch them first.</p>
| Mikołaj Głodziak |
<p>As the title suggests, GCP-LB or the HAProxy Ingress Controller Service which is exposed as type LoadBalancer is distributing traffic unevenly to HAProxy Ingress Controller Pods.</p>
<p><strong>Setup:</strong><br />
I am running the GKE cluster in GCP, and using HAProxy as the ingress controller.<br />
The HAProxy Service is exposed as a type Loadbalancer with staticIP.</p>
<p><strong>YAML for HAProxy service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: haproxy-ingress-static-ip
namespace: haproxy-controller
labels:
run: haproxy-ingress-static-ip
annotations:
cloud.google.com/load-balancer-type: "Internal"
networking.gke.io/internal-load-balancer-allow-global-access: "true"
cloud.google.com/network-tier: "Premium"
cloud.google.com/neg: '{"ingress": false}'
spec:
selector:
run: haproxy-ingress
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
- name: https
port: 443
protocol: TCP
targetPort: 443
- name: stat
port: 1024
protocol: TCP
targetPort: 1024
type: LoadBalancer
loadBalancerIP: "10.0.0.76"
</code></pre>
<p><strong>YAML for HAProxy Deployment:</strong></p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
namespace: haproxy-controller
spec:
replicas: 2
selector:
matchLabels:
run: haproxy-ingress
template:
metadata:
labels:
run: haproxy-ingress
spec:
serviceAccountName: haproxy-ingress-service-account
containers:
- name: haproxy-ingress
image: haproxytech/kubernetes-ingress
args:
- --configmap=haproxy-controller/haproxy
- --default-backend-service=haproxy-controller/ingress-default-backend
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: stat
containerPort: 1024
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: run
operator: In
values:
- haproxy-ingress
topologyKey: kubernetes.io/hostname
</code></pre>
<p><strong>HAProxy ConfigMap:</strong></p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: haproxy
namespace: haproxy-controller
data:
</code></pre>
<p><strong>Problem:</strong><br />
While debugging another issue, I found out that traffic distribution across the HAProxy pods is uneven. For example, one Pod was receiving 540k requests/sec while another Pod was receiving 80k requests/sec.</p>
<p>On further investigation, it was also found that newly started Pods don't start receiving traffic for the next 20-30 minutes. And even after that, only a small chunk of traffic is routed through them.</p>
<p>Check the graph below:
<a href="https://i.stack.imgur.com/FaXyz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FaXyz.png" alt="enter image description here" /></a></p>
<p>Another version of uneven traffic distribution. This doesn't seem to be random at all, looks like a weighted traffic distribution:
<a href="https://i.stack.imgur.com/jkh3U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jkh3U.png" alt="enter image description here" /></a></p>
<p>Yet another version of uneven traffic distribution. Traffic from one Pod seems to be shifting towards the other Pod.</p>
<p><a href="https://i.stack.imgur.com/FbD93.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FbD93.png" alt="enter image description here" /></a></p>
<p>What could be causing this uneven traffic distribution, and why is traffic not sent to new pods for such a long time?</p>
| kadamb | <p>Kubernetes is integrated with the GCP load balancers. K8s provides primitives such as Ingress and Service for users to expose Pods through L4/L7 load balancers. Before the introduction of NEGs, the load balancer distributed traffic to the VM instances and "kube-proxy" programmed iptables rules to forward the traffic to the backend Pods. This could lead to uneven traffic distribution, unreliable load balancer health checks and a network performance impact.</p>
<p>I suggest you use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#using-pod-readiness-feedback" rel="nofollow noreferrer">container-native load balancing</a>, which allows load balancers to target Kubernetes Pods directly and to distribute traffic evenly across them. Traffic is delivered straight to the Pods that should receive it, eliminating the extra network hop. It also improves health checking, since the load balancer checks the Pods directly, and you get visibility into the latency from the HTTP(S) load balancer to each individual Pod rather than a value aggregated at the node-IP level. This makes troubleshooting your services at the NEG level easier.</p>
<p>Container-native load balancing does not support internal TCP/UDP load balancers or network load balancers, so if you want to use it you would have to split your service into HTTP (80), HTTPS (443) and TCP (1024) parts. To use it, your cluster must have HTTP load balancing enabled; GKE clusters have it enabled by default and you must not disable it.</p>
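<p>For illustration only (a minimal sketch, not a drop-in replacement for your setup): container-native load balancing is enabled by annotating the backing Service with <code>cloud.google.com/neg: '{"ingress": true}'</code> and exposing it through a GKE Ingress; note that your current Service explicitly disables NEGs with <code>'{"ingress": false}'</code>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress-neg
  namespace: haproxy-controller
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # create NEGs so the LB targets Pods directly
spec:
  type: ClusterIP
  selector:
    run: haproxy-ingress
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>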
| Siva Mannani |
<p>I was trying to use the quota feature in Kubernetes, but every time my container is stuck on "ContainerCreating" and not moving forward.</p>
<p>I'm not sure what the issue could be.</p>
<p>MyQuota yaml:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
creationTimestamp: null
name: awesome-quota
spec:
hard:
pods: 2
requests.cpu: 1
requests.memory: 1024m
limits.cpu: 4
limits.memory: 4096m
status: {}
</code></pre>
<p>My nginx yaml having quota details:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx:1.18.0
name: nginx
resources:
limits:
cpu: "1"
memory: "1024m"
requests:
cpu: "0.5"
memory: "512m"
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
</code></pre>
<p>Thanks
Abdul</p>
| Abdul Mohsin | <p>Change the memory unit to "Mi". The lowercase "m" suffix means milli (one-thousandth), so "1024m" of memory is roughly one byte, which no container can run with; "1024Mi" means 1024 mebibytes. Note your sample was modified slightly below to limit the scope to the kube-public namespace.</p>
<pre><code>> cat << EOF | kubectl apply -f -
> apiVersion: v1
> kind: ResourceQuota
> metadata:
> name: test-quota
> namespace: kube-public
> spec:
> hard:
> pods: 2
> requests.cpu: 1
> requests.memory: 1024Mi
> limits.cpu: 4
> limits.memory: 4096Mi
> EOF
resourcequota/test-quota created
</code></pre>
<p>Run your sample pod:</p>
<pre><code>> cat << EOF | kubectl apply -f -
> apiVersion: v1
> kind: Pod
> metadata:
> labels:
> app: nginx
> name: nginx
> namespace: kube-public
> spec:
> containers:
> - image: nginx:latest
> name: nginx
> resources:
> limits:
> cpu: "1"
> memory: "1024Mi"
> requests:
> cpu: "0.5"
> memory: "512Mi"
> dnsPolicy: ClusterFirst
> restartPolicy: Never
> EOF
pod/nginx created
</code></pre>
<p>The pod runs as expected:</p>
<pre><code>> kubectl get pods -n kube-public
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 37s
</code></pre>
<p>If resources.limits is not specified at all:</p>
<pre><code>> cat << EOF | kubectl apply -f -
> apiVersion: v1
> kind: Pod
> metadata:
> labels:
> app: nginx
> name: nginx
> namespace: kube-public
> spec:
> containers:
> - image: nginx:latest
> name: nginx
> resources:
> requests:
> cpu: "0.5"
> memory: "512Mi"
> dnsPolicy: ClusterFirst
> restartPolicy: Never
> EOF
Error from server (Forbidden): error when creating "STDIN": pods "nginx" is forbidden: failed quota: test-quota: must specify limits.cpu,limits.memory
</code></pre>
| gohm'c |
<p>Can someone explain to me what the role of the keyword "template" is in this code:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: {{ template "identity-openidconnect" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "microService.name" . }}
release: "{{ .Release.Name }}"
xxxx
xxxxxxxxxxxx
</code></pre>
| AllaouaA | <p>The keyword "template" tells Helm to look up a previously defined named template and render the YAML according to it. The named template has to be defined in advance (typically in a <code>_helpers.tpl</code> file). This kind of construction allows you to refer to the same snippet many times.</p>
<p>For example, we can define a template to encapsulate a Kubernetes block of labels:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- define "mychart.labels" }}
labels:
generator: helm
date: {{ now | htmlDate }}
{{- end }}
</code></pre>
<p>Now we can embed this template inside of our existing ConfigMap, and then include it with the <code>template</code> action:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- define "mychart.labels" }}
labels:
generator: helm
date: {{ now | htmlDate }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
{{- template "mychart.labels" }}
data:
myvalue: "Hello World"
{{- range $key, $val := .Values.favorite }}
{{ $key }}: {{ $val | quote }}
{{- end }}
</code></pre>
<p>When the template engine reads this file, it will store away the reference to <code>mychart.labels</code> until <code>template "mychart.labels"</code> is called. Then it will render that template inline. So the result will look like this:</p>
<pre class="lang-yaml prettyprint-override"><code># Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: running-panda-configmap
labels:
generator: helm
date: 2016-11-02
data:
myvalue: "Hello World"
drink: "coffee"
food: "pizza"
</code></pre>
<p>Note: a <code>define</code> does not produce output unless it is called with a template, as in this example.</p>
<p>For more info about <code>templates</code> you can read <a href="https://helm.sh/docs/chart_template_guide/named_templates/" rel="noreferrer">this page</a>.</p>
| Mikołaj Głodziak |
<p>Quote</p>
<blockquote>
<p>Exposure to knowledge in Dockerization and Kubernetes ( <strong>mSaaS</strong>
technologies) is preferred</p>
</blockquote>
<p>What is mSaaS in context of Quickbooks cloud engineering?</p>
<p><a href="https://i.stack.imgur.com/Gg2qG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gg2qG.png" alt="enter image description here" /></a></p>
| Vy Do | <p>MSaaS stands for managed software as a service. SaaS applications are standardized software solutions, meaning they can be downloaded and implemented as-is but offer little room for configuration to a specific client's needs. MSaaS applications offer a core software solution which can be configured to a specific client's needs.</p>
<p>MSaaS provides greater support, training and account service for subscribers. Project managers and the development team can be looped in for more substantial needs and requests (i.e., custom-building features).</p>
<p>While MSaaS applications can often be used immediately upon download, they are meant to be configured for each account prior to implementation to offer seamless adoption and maximum value.</p>
| Fariya Rahmat |
<p>I'm trying to capture some logs that are file-based in an application pod on GKE and view them from Google Cloud Logging.</p>
<p>For various reasons, these application logs are not sent to STDOUT or STDERR (since those logs are automatically sent to Cloud Logging). It has been suggested that I implement a scripting solution that tails the logs and sends them to STDOUT, but I was hoping for a sidecar approach with a Fluentd (or Fluent Bit) logging agent that tails the logs and sends them to Cloud Logging.</p>
<p>Using the sidecar image <code>"k8s.gcr.io/fluentd-gcp:1.30"</code>, I've tried out the YAML file below (containing the fluentd ConfigMap and Deployment):</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: app-log-config
data:
fluentd.conf: |
<source>
type tail
format none
path /var/log/execution*.log
pos_file /var/log/execution.pos
tag app.*
</source>
<match **>
type google_cloud
</match>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app.kubernetes.io/name: app
app.kubernetes.io/instance: app
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: app
app.kubernetes.io/instance: app
template:
metadata:
labels:
app.kubernetes.io/name: app
app.kubernetes.io/instance: app
spec:
serviceAccountName: app
volumes:
- name: executionlogs
emptyDir: {}
- name: fluentdconfig
configMap:
name: app-log-config
containers:
- name: app
image: appimage:version
imagePullPolicy: IfNotPresent
volumeMounts:
- name: executionlogs
mountPath: /tmp/executionLogs
ports:
- name: http
containerPort: 8080
protocol: TCP
- name: log-agent
image: "k8s.gcr.io/fluentd-gcp:1.30"
imagePullPolicy: IfNotPresent
env:
- name: FLUENTD_ARGS
value: "-c /etc/fluentd-config/fluentd.conf"
volumeMounts:
- name: executionlogs
mountPath: /var/log
- name: fluentdconfig
mountPath: /etc/fluentd-config
</code></pre>
<p>Initially, the sidecar logs were throwing a 403 error, since I hadn't given the service account the requisite permissions (I was using GKE workload identity and the corresponding GCP IAM service account required logWriter permissions to be added). After fixing the error, I got the following logs:</p>
<pre><code>2021-06-27 12:49:09 +0000 [info]: fluent/supervisor.rb:471:read_config: reading config file path="/etc/fluentd-config/fluentd.conf"
2021-06-27 12:49:09 +0000 [info]: fluent/supervisor.rb:337:supervise: starting fluentd-0.12.29
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-mixin-config-placeholders' version '0.4.0'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-google-cloud' version '0.5.2'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-kafka' version '0.3.1'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-mongo' version '0.7.15'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-record-reformer' version '0.8.2'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.5'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-s3' version '0.7.1'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-scribe' version '0.10.14'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-systemd' version '0.0.5'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-td' version '0.10.29'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-td-monitoring' version '0.2.2'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-webhdfs' version '0.4.2'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluentd' version '0.12.29'
2021-06-27 12:49:09 +0000 [info]: fluent/agent.rb:129:add_match: adding match pattern="**" type="google_cloud"
2021-06-27 12:49:10 +0000 [info]: plugin/out_google_cloud.rb:519:block in detect_platform: Detected GCE platform
2021-06-27 12:49:10 +0000 [info]: plugin/out_google_cloud.rb:290:configure: Logs viewer address: https://console.developers.google.com/
project/projectname/logs?service=compute.googleapis.com&key1=instance&key2=9071465168741286442
2021-06-27 12:49:10 +0000 [info]: fluent/root_agent.rb:147:add_source: adding source type="tail"
2021-06-27 12:49:10 +0000 [info]: fluent/engine.rb:133:configure: using configuration file: <ROOT>
<source>
type tail
format none
path /var/log/execution*.log
pos_file /var/log/execution.pos
tag app.*
</source>
<match **>
type google_cloud
</match>
</ROOT>
2021-06-27 12:52:10 +0000 [info]: plugin/in_tail.rb:557:initialize: following tail of /var/log/execution1.log
2021-06-27 12:53:10 +0000 [info]: plugin/out_google_cloud.rb:451:block in write: Successfully sent to Google Cloud Logging API.
</code></pre>
<p>Despite the successful message, I don't see anything on the Cloud Logging end.</p>
<p>So, here are my questions:</p>
<ol>
<li>Is there a better solution for my use-case?</li>
<li>Is this the sidecar image I should be using? I wasn't able to find any other fluentd images and the one I'm using is 3 years old. I'd prefer to use something recommended by Google rather than create my own.</li>
<li>What do I additionally need to do to see the logs on Cloud Logging? How do I debug this further?</li>
</ol>
<p>Thanks!</p>
| Sayon Roy Choudhury | <p>I tried the configuration you described and faced the same issue. I then configured all sources to stream their output to STDOUT and was able to view the logs on the Cloud Logging dashboard, since GKE's node-level logging agent forwards anything written to STDOUT/STDERR.</p>
<p>Below are the sample configurations I’ve used.</p>
<p>Sample_map-config.yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config
data:
fluentd.conf: |
<source>
type tail
format none
path /var/log/1.log
pos_file /var/log/1.log.pos
tag count.format1
</source>
<source>
type tail
format none
path /var/log/2.log
pos_file /var/log/2.log.pos
tag count.format2
</source>
<match **>
type stdout
</match>
</code></pre>
<p>Sample-pod.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: busybox
args:
- /bin/sh
- -c
- >
i=0;
while true;
do
echo "$i: $(date)" >> /var/log/1.log;
echo "$(date) INFO $i" >> /var/log/2.log;
i=$((i+1));
sleep 1;
done
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-agent
image: k8s.gcr.io/fluentd-gcp:1.30
env:
- name: FLUENTD_ARGS
value: -c /etc/fluentd-config/fluentd.conf
volumeMounts:
- name: varlog
mountPath: /var/log
- name: config-volume
mountPath: /etc/fluentd-config
volumes:
- name: varlog
emptyDir: {}
- name: config-volume
configMap:
name: fluentd-config
</code></pre>
| Gellaboina Ashish |
<p>I have a sidecar container in a MySQL Pod which will use the MySQL socket file in order to access the database.</p>
<p>I would like to be sure MySQL has successfully started, and has therefore created the socket file, before this sidecar container starts.</p>
<p>I tried to add a <code>readiness</code> probe with an <code>exec.command</code> being <code>test -S /var/run/mysqld/mysqld.sock</code> but it fails with:</p>
<pre><code>Readiness probe failed: OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "test -S /var/run/mysqld/mysqld.sock": stat test -S /var/run/mysqld/mysqld.sock: no such file or directory: unknown
</code></pre>
<p>When I open a terminal session in the sidecar container, I can <code>ls</code> the socket file and it's there.</p>
<p>So it looks like my <code>test -S <path></code> command doesn't work as expected in the context of the probe.</p>
<p>How can I write my probe so that as soon as the socket file is available my sidecar container starts?</p>
| ZedTuX | <p>The probe fails because an <code>exec</code> probe runs its command directly rather than through a shell, so the whole string <code>test -S /var/run/mysqld/mysqld.sock</code> is treated as the name of a single executable. Wrap it in a shell invocation instead:</p>
<pre><code>...
readinessProbe:
exec:
command:
- sh
- -c
- test -S /var/run/mysqld/mysqld.sock
</code></pre>
| gohm'c |
<p>I created a <code>Deployment</code>, <code>Service</code> and an <code>Ingress</code>. Unfortunately, the <code>ingress-nginx-controller</code> pods are complaining that my <code>Service</code> does not have an Active Endpoint:</p>
<p><code>controller.go:920] Service "<namespace>/web-server" does not have any active Endpoint.</code></p>
<p>My <code>Service</code> definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/should_be_scraped: "false"
creationTimestamp: "2021-06-22T07:07:18Z"
labels:
chart: <namespace>-core-1.9.2
release: <namespace>
name: web-server
namespace: <namespace>
resourceVersion: "9050796"
selfLink: /api/v1/namespaces/<namespace>/services/web-server
uid: 82b3c3b4-a181-4ba2-887a-a4498346bc81
spec:
clusterIP: 10.233.56.52
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: web-server
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>My <code>Deployment</code> definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2021-06-22T07:07:19Z"
generation: 1
labels:
app: web-server
chart: <namespace>-core-1.9.2
release: <namespace>
name: web-server
namespace: <namespace>
resourceVersion: "9051062"
selfLink: /apis/apps/v1/namespaces/<namespace>/deployments/web-server
uid: fb085727-9e8a-4931-8067-fd4ed410b8ca
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: web-server
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: web-server
spec:
containers:
- env:
<removed environment variables>
image: <url>/<namespace>/web-server:1.10.1
imagePullPolicy: IfNotPresent
name: web-server
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 8082
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /actuator/health
port: 8080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 1Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /config
name: <namespace>-config
dnsPolicy: ClusterFirst
hostAliases:
- hostnames:
- <url>
ip: 10.0.1.178
imagePullSecrets:
- name: registry-pull-secret
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: <namespace>-config
name: <namespace>-config
status:
conditions:
- lastTransitionTime: "2021-06-22T07:07:19Z"
lastUpdateTime: "2021-06-22T07:07:19Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2021-06-22T07:17:20Z"
lastUpdateTime: "2021-06-22T07:17:20Z"
message: ReplicaSet "web-server-6df6d6565b" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 1
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
</code></pre>
<p>In the same namespace, I have more <code>Service</code> and <code>Deployment</code> resources; all of them work except this one (+ another, see below).</p>
<pre><code># kubectl get endpoints -n <namespace>
NAME ENDPOINTS AGE
activemq 10.233.64.3:61613,10.233.64.3:8161,10.233.64.3:61616 + 1 more... 26d
content-backend 10.233.96.17:8080 26d
datastore3 10.233.96.16:8080 26d
web-server 74m
web-server-metrics 26d
</code></pre>
<p>As you can see, the selector/label are the same (<code>web-server</code>) in the <code>Service</code> as well as in the <code>Deployment</code> definition.</p>
| C-nan | <p>As <a href="https://stackoverflow.com/users/13524500/c-nan">C-Nan</a> has mentioned in the comment, the problem is solved:</p>
<blockquote>
<p>I found the issue. The Pod was started, but not in Ready state due to a failing readinessProbe. I wasn't aware that an endpoint wouldn't be created until the Pod is in Ready state. Removing the readinessProbe created the Endpoint.</p>
</blockquote>
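<p>For anyone debugging a similar situation: while a Pod's readiness probe is failing, its IP is listed under <code>subsets[].notReadyAddresses</code> instead of <code>subsets[].addresses</code> in the Endpoints object, so it receives no traffic. A quick way to check (the namespace is a placeholder):</p>
<pre><code># the Pod IP shows up under notReadyAddresses until the readiness probe passes
kubectl get endpoints web-server -n <namespace> -o yaml
</code></pre>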
| Mikołaj Głodziak |
<p>I am following the document below to install the <code>jenkins-operator</code> using <code>helm</code> under the <code>jenkins</code> namespace. I am using the following values:</p>
<pre><code> jenkins.enabled: true
namespace: jenkins
replicaCount: 3
image: virtuslab/jenkins-operator:v0.7.0
webhook.enabled: true
</code></pre>
<p>The installation is successful, as shown below:</p>
<pre><code> $ helm list -n jenkins
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
jenkins jenkins 2023-09-17 14:28:16.815839332 +0000 UTC deployed jenkins-operator-0.8.0 0.8.0
</code></pre>
<p>But it is listing one of the components inside the <code>default</code> namespace</p>
<pre><code> jenkins-jenkins 2/2 Running 0 10m
</code></pre>
<p>Can somebody please explain why it is installing one of the components in the <code>default</code> namespace when I specified it should be installed in the <code>jenkins</code> namespace?</p>
<p><a href="https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/installing-the-operator/" rel="nofollow noreferrer">https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/installing-the-operator/</a></p>
| Zama Ques | <p>Good day!
I've just tried creating a values.yaml in proper nested YAML format (note that flat dotted keys such as <code>jenkins.enabled: true</code> in a values file are read as literal key names rather than nested settings):</p>
<pre><code>jenkins:
enabled: true
namespace: jenkins
</code></pre>
<p>After <code>helm install jenkins jenkins/jenkins-operator -n jenkins -f values.yaml</code> I have no resources in the default namespace.</p>
<p>An example of filling values.yaml can be found <a href="https://artifacthub.io/packages/helm/jenkins/jenkins-operator?modal=values" rel="nofollow noreferrer">here</a>.</p>
<p>I hope this helps.</p>
| Cvitoybamp |
<p>The kubernetes ingress controller is deployed in my AKS cluster. I am using the following ingress rules configuration for client certificate authentication.</p>
<p>tls-secret, ca-secret are the two secrets containing certificates required for authentication.</p>
<p>Now, my requirement is to get these certificates from Vault or Consul KV store and ingress rules should pick up these certificates from there.</p>
<ol>
<li><p>Is there a way nginx ingress rule picks up certificates in other form than kubernetes secrets?</p>
</li>
<li><p>If not, how do we convert vault secrets to kubernetes secrets. Is this a standard way of going about this?</p>
</li>
</ol>
<p>Ingress-rules.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
meta.helm.sh/release-name: service
meta.helm.sh/release-namespace: mynamespace
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
nginx.ingress.kubernetes.io/auth-tls-secret: microservices/ca-secret
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
nginx.ingress.kubernetes.io/auth-tls-verify-depth: "3"
nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
name: service-nginx-ingress
namespace: mynamespace
spec:
ingressClassName: nginx
rules:
- host: ingress-nginx-controller.microservices.com
http:
paths:
- backend:
service:
name: service
port:
number: 8080
path: /api/v1/service
pathType: Prefix
tls:
- hosts:
- ingress-nginx-controller.microservices.com
secretName: tls-secret
</code></pre>
<p>I tried getting certificates on a pod using vault agent, but couldn't create secrets in the cluster using that data from Vault.</p>
| user1514910 | <p>As @larks mentioned, you can use the <a href="https://github.com/external-secrets/external-secrets" rel="nofollow noreferrer">External Secrets Operator</a>. The nginx ingress itself can only read certificates from Kubernetes Secrets, but the operator keeps those Secrets in sync with an external store such as Vault, which addresses your second question.</p>
<blockquote>
<p>The goal of External Secrets Operator is to synchronize secrets from external APIs into Kubernetes. ESO is a collection of custom API resources - ExternalSecret, SecretStore and ClusterSecretStore that provide a user-friendly abstraction for the external API that stores and manages the lifecycle of the secrets for you.</p>
</blockquote>
<p>A SecretStore resource tells the operator how to reach the external API (for example your Vault instance) and with which credentials; ExternalSecret resources then reference that store and describe which keys to pull and into which Kubernetes Secret to write them.</p>
<p><strong>Note</strong>- The minimum supported version of Kubernetes is 1.16.0. Users still running Kubernetes v1.15 or below should upgrade to a supported version before installing external-secrets.</p>
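<p>As a rough sketch of how a certificate could be synced from Vault into the <code>tls-secret</code> your Ingress already references (hedged: this assumes a <code>SecretStore</code> named <code>vault-backend</code> has already been configured for your Vault instance, and the Vault path/property names are placeholders):</p>
<pre><code>apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: tls-secret-sync
  namespace: mynamespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend        # a SecretStore pointing at your Vault instance
    kind: SecretStore
  target:
    name: tls-secret           # the Secret name the Ingress expects
    template:
      type: kubernetes.io/tls
  data:
  - secretKey: tls.crt
    remoteRef:
      key: secret/ingress-tls  # placeholder Vault path
      property: tls.crt
  - secretKey: tls.key
    remoteRef:
      key: secret/ingress-tls
      property: tls.key
</code></pre>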
| Fariya Rahmat |
<p>Hope all is well. I am stuck with this Pod executing a shell script, using the BusyBox image. The one below works,</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: loop
name: busybox-loop
spec:
containers:
- args:
- /bin/sh
- -c
- |-
for i in 1 2 3 4 5 6 7 8 9 10; \
do echo "Welcome $i times"; done
image: busybox
name: loop
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
</code></pre>
<p>But this one doesn't work, as I am using "- >" as the operator:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: busybox-loop
name: busybox-loop
spec:
containers:
- image: busybox
name: busybox-loop
args:
- /bin/sh
- -c
- >
- for i in {1..10};
- do
- echo ("Welcome $i times");
- done
restartPolicy: Never
</code></pre>
<p>Is it because the for syntax "for i in {1..10};" will not work in the sh shell (as we know we don't have any other shells in BusyBox), or is the "- >" operator incorrect? I don't think so, because it works for other shell scripts.</p>
<p>Also, when can we use the "- |" multiline operator (I hope the term is correct) versus the "- >" operator? I know the syntax below is easy to use, but the problem is that when we use double quotes in the script, the escaping gets confusing and never works.</p>
<p>args: ["-c", "while true; do echo hello; sleep 10;done"]</p>
<p>Appreciate your support.</p>
| Celtic Bean | <p><code>...But this one doesn't works as I am using "- >" as the operator...</code></p>
<p>You don't need the '-' prefixes after '>' in this case. The '>' folded block scalar turns the following indented lines into a single string, so the leading '-' characters would become part of the command itself and the shell would try to run <code>- for i in ...</code>, which fails. Try:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
labels:
run: busybox
spec:
containers:
- name: busybox
image: busybox
args:
- ash
- -c
- >
for i in 1 2 3 4 5 6 7 8 9 10;
do
echo "hello";
done
</code></pre>
<p><code>kubectl logs busybox</code> will print hello 10 times.</p>
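<p>Note also that the <code>{1..10}</code> brace expansion from your second example is a bash feature that BusyBox <code>ash</code> does not support, which is another reason that loop would not run as written. If you prefer not to list the numbers explicitly, something like <code>seq</code> (available in the standard BusyBox image) should work:</p>
<pre><code>    args:
    - ash
    - -c
    - >
      for i in $(seq 1 10); do echo "hello $i"; done
</code></pre>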
| gohm'c |
<p>How to achieve internal service to service communication in Anthos multiple clusters Example
service A is deployed in a GKE cluster and service B is deployed in an AKS cluster; how can we call service A from service B (internally)?</p>
| Aadesh kale | <p>As suggested by @Harsh Manver you can <a href="https://cloud.google.com/service-mesh/docs/unified-install/off-gcp-multi-cluster-setup" rel="nofollow noreferrer">set up a multi cluster mesh outside Google Cloud</a> to achieve internal service to service communication in Anthos multiple clusters.</p>
<p>As mentioned in the <a href="https://cloud.google.com/service-mesh/docs/unified-install/multi-cloud-hybrid-mesh" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>The cluster's Kubernetes control plane address and the gateway address
need to be reachable from every cluster in the mesh. The Google Cloud
project in which GKE clusters are located should be allowed to create
external load balancing types.</p>
<p>We recommend that you use authorized networks and VPC firewall rules to restrict access and ensure traffic is not exposed to the public internet.</p>
</blockquote>
| Fariya Rahmat |
<p>Someone deleted the deployment and I tried to find out who from the event logs, but I got the response below:
No resources found in prometheus namespace.</p>
<p>Is there a plugin or something to let me know who deleted this resource?</p>
<p>thanks a lot in advance.</p>
| Yassen Fouad Anis | <p>Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster. The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself.</p>
<p>As per the <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>Auditing allows cluster administrators to answer the following
questions:</p>
<p>what happened?</p>
<p>when did it happen?</p>
<p>who initiated it?</p>
<p>on what did it happen?</p>
<p>where was it observed?</p>
<p>from where was it initiated?</p>
<p>to where was it going?</p>
</blockquote>
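<p>Keep in mind that audit logging has to be enabled before the deletion happens; it cannot tell you who deleted something retroactively. On a self-managed cluster you point the API server at a policy file (for example via <code>--audit-policy-file</code> and <code>--audit-log-path</code>); on managed clusters (GKE, EKS, AKS) audit logs are enabled through the cloud provider instead. A minimal policy sketch that records who deleted Deployments could look like this (hedged example, adjust to your needs):</p>
<pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# record metadata (user, verb, resource, timestamp) for every delete of a Deployment
- level: Metadata
  verbs: ["delete"]
  resources:
  - group: "apps"
    resources: ["deployments"]
</code></pre>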
| Fariya Rahmat |
<p>I have this repo <a href="https://github.com/kokizzu/terraform1/pull/1" rel="nofollow noreferrer">terraform1#prometheus-operator</a>. I previously got it working with plain Prometheus (without the operator); now I am testing the prometheus-operator so that if the pod is scaled horizontally, it can still correctly scrape metrics from all pods, not just from one pod.</p>
<p>This <a href="//kokizzu.blogspot.com/2023/07/keda-kubernetes-event-driven-autoscaling.html" rel="nofollow noreferrer">article</a> explains step by step how to run minikube and Terraform.</p>
<p>After deploying, the pods show up properly:</p>
<pre><code>k get pods -n pf1ns -w 1 ↵
NAME READY STATUS RESTARTS AGE
keda-admission-webhooks-76cd6c4b59-6b59r 1/1 Running 0 20h
keda-operator-5bb494667b-bb2bl 1/1 Running 0 20h
keda-operator-metrics-apiserver-68d9f78869-d65vj 1/1 Running 0 20h
prometheus-pf1prom-0 2/2 Running 0 3m9s
promfiberdeploy-868697d555-2jfgv 1/1 Running 0 20h
</code></pre>
<p>but there's an error on <code>prometheus-pf1prom-0</code>:</p>
<pre><code>ts=2023-07-04T17:23:40.085Z caller=klog.go:116 level=error component=k8s_client_runtime func=ErrorDepth msg="pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:pf1ns:pf1promsvcacc\" cannot list resource \"pods\" in API group \"\" in the namespace \"pf1ns\""
</code></pre>
<p>What roles are required to list pods? There's already a rule granting pod permissions:</p>
<pre><code> rule {
api_groups = [""]
resources = ["services", "endpoints", "pods"]
verbs = ["get", "list", "watch"]
}
</code></pre>
| Kokizzu | <p>Make sure you attach the created <code>Role</code> to the <code>ServiceAccount</code> using a <code>RoleBinding</code>.</p>
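<p>A minimal sketch of such a RoleBinding (hedged: the namespace and ServiceAccount names are taken from your error message, while the Role name is a placeholder for whatever your Terraform created; in Terraform itself this would be a <code>kubernetes_role_binding_v1</code> resource):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pf1prom-rolebinding
  namespace: pf1ns
subjects:
- kind: ServiceAccount
  name: pf1promsvcacc
  namespace: pf1ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pf1prom-role   # placeholder: the Role that contains the pods/services/endpoints rule
</code></pre>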
| Zasda Yusuf Mikail |
<p>I am trying to get a volume mounted as a non-root user in one of my containers. I'm trying an approach from <a href="https://stackoverflow.com/a/51195446/1322">this</a> SO post using an initContainer to set the correct user, but when I try to start the configuration I get an "unbound immediate PersistentVolumeClaims" error. I suspect it's because the volume is mounted in both my initContainer and container, but I'm not sure why that would be the issue: I can see the initContainer taking the claim, but I would have thought when it exited that it would release it, letting the normal container take the claim. Any ideas or alternatives to getting the directory mounted as a non-root user? I did try using securityContext/fsGroup, but that seemed to have no effect. The /var/rdf4j directory below is the one that is being mounted as root.</p>
<p>Configuration:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: triplestore-data-storage-dir
labels:
type: local
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
storageClassName: local-storage
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Delete
hostPath:
path: /run/desktop/mnt/host/d/workdir/k8s-data/triplestore
type: DirectoryOrCreate
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: triplestore-data-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: local-storage
volumeName: "triplestore-data-storage-dir"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: triplestore
labels:
app: demo
role: triplestore
spec:
selector:
matchLabels:
app: demo
role: triplestore
replicas: 1
template:
metadata:
labels:
app: demo
role: triplestore
spec:
containers:
- name: triplestore
image: eclipse/rdf4j-workbench:amd64-3.5.0
imagePullPolicy: Always
ports:
- name: http
protocol: TCP
containerPort: 8080
resources:
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: storage
mountPath: /var/rdf4j
initContainers:
- name: take-data-dir-ownership
image: eclipse/rdf4j-workbench:amd64-3.5.0
command:
- chown
- -R
- 100:65533
- /var/rdf4j
volumeMounts:
- name: storage
mountPath: /var/rdf4j
volumes:
- name: storage
persistentVolumeClaim:
claimName: "triplestore-data-storage"
</code></pre>
<p>kubectl get pvc</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
triplestore-data-storage Bound triplestore-data-storage-dir 10Gi RWX local-storage 13s
</code></pre>
<p>kubectl get pv</p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
triplestore-data-storage-dir 10Gi RWX Delete Bound default/triplestore-data-storage local-storage 17s
</code></pre>
<p>kubectl get events</p>
<pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE
21s Warning FailedScheduling pod/triplestore-6d6876f49-2s84c 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
19s Normal Scheduled pod/triplestore-6d6876f49-2s84c Successfully assigned default/triplestore-6d6876f49-2s84c to docker-desktop
3s Normal Pulled pod/triplestore-6d6876f49-2s84c Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
3s Normal Created pod/triplestore-6d6876f49-2s84c Created container take-data-dir-ownership
3s Normal Started pod/triplestore-6d6876f49-2s84c Started container take-data-dir-ownership
2s Warning BackOff pod/triplestore-6d6876f49-2s84c Back-off restarting failed container
46m Normal Pulled pod/triplestore-6d6876f49-9n5kt Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
79s Warning BackOff pod/triplestore-6d6876f49-9n5kt Back-off restarting failed container
21s Normal SuccessfulCreate replicaset/triplestore-6d6876f49 Created pod: triplestore-6d6876f49-2s84c
21s Normal ScalingReplicaSet deployment/triplestore Scaled up replica set triplestore-6d6876f49 to 1
</code></pre>
<p>kubectl describe pods/triplestore-6d6876f49-tw8r8</p>
<pre><code>Name: triplestore-6d6876f49-tw8r8
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 17 Jan 2022 10:17:20 -0500
Labels: app=demo
pod-template-hash=6d6876f49
role=triplestore
Annotations: <none>
Status: Pending
IP: 10.1.2.133
IPs:
IP: 10.1.2.133
Controlled By: ReplicaSet/triplestore-6d6876f49
Init Containers:
take-data-dir-ownership:
Container ID: docker://89e7b1e3ae76c30180ee5083624e1bf5f30b55fd95bf1c24422fabe41ae74408
Image: eclipse/rdf4j-workbench:amd64-3.5.0
Image ID: docker-pullable://registry.com/publicrepos/docker_cache/eclipse/rdf4j-workbench@sha256:14621ad610b0d0269dedd9939ea535348cc6c147f9bd47ba2039488b456118ed
Port: <none>
Host Port: <none>
Command:
chown
-R
100:65533
/var/rdf4j
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 17 Jan 2022 10:22:59 -0500
Finished: Mon, 17 Jan 2022 10:22:59 -0500
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/rdf4j from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8wdv (ro)
Containers:
triplestore:
Container ID:
Image: eclipse/rdf4j-workbench:amd64-3.5.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 200Mi
Environment: <none>
Mounts:
/var/rdf4j from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8wdv (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: triplestore-data-storage
ReadOnly: false
kube-api-access-s8wdv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m24s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 6m13s default-scheduler Successfully assigned default/triplestore-6d6876f49-tw8r8 to docker-desktop
Normal Pulled 4m42s (x5 over 6m12s) kubelet Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
Normal Created 4m42s (x5 over 6m12s) kubelet Created container take-data-dir-ownership
Normal Started 4m42s (x5 over 6m12s) kubelet Started container take-data-dir-ownership
Warning BackOff 70s (x26 over 6m10s) kubelet Back-off restarting failed container
</code></pre>
<h1>Solution</h1>
<p>As it turns out, the problem was that the initContainer wasn't running as root; it was running as the container image's default user and so didn't have permission to run the <code>chown</code> command. In the linked SO post this was raised in the first comment on the answer, with the response being that initContainers ran as root; this has apparently changed in newer versions of Kubernetes. There is a solution though: you can set the <code>securityContext</code> on the initContainer to run as root, giving it permission to run <code>chown</code>, and that successfully allows the volume to be used by a non-root user. Here's the final configuration of the initContainer.</p>
<pre><code>initContainers:
- name: take-data-dir-ownership
image: eclipse/rdf4j-workbench:amd64-3.5.0
securityContext:
runAsUser: 0
command:
- chown
- -R
- 100:65533
- /var/rdf4j
volumeMounts:
- name: storage
mountPath: /var/rdf4j
</code></pre>
| Matt McMinn | <p><code>1 pod has unbound immediate PersistentVolumeClaims.</code> - this error means the pod cannot bind to the PVC on the node it has been scheduled to run on. This can happen when the PVC is bound to a PV that refers to a location that is not valid on that node. It would be helpful if you could post the complete output of <code>kubectl get nodes -o wide</code>, <code>kubectl describe pvc triplestore-data-storage</code> and <code>kubectl describe pv triplestore-data-storage-dir</code> in the question.</p>
<p>In the meantime, a PVC/PV is optional when using <code>hostPath</code>; can you try the following spec and see if the pod can come online:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: triplestore
labels:
app: demo
role: triplestore
spec:
selector:
matchLabels:
app: demo
role: triplestore
replicas: 1
template:
metadata:
labels:
app: demo
role: triplestore
spec:
containers:
- name: triplestore
image: eclipse/rdf4j-workbench:amd64-3.5.0
imagePullPolicy: IfNotPresent
ports:
- name: http
protocol: TCP
containerPort: 8080
resources:
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: storage
mountPath: /var/rdf4j
initContainers:
- name: take-data-dir-ownership
image: eclipse/rdf4j-workbench:amd64-3.5.0
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
command:
- chown
- -R
- 100:65533
- /var/rdf4j
volumeMounts:
- name: storage
mountPath: /var/rdf4j
volumes:
- name: storage
hostPath:
path: /run/desktop/mnt/host/d/workdir/k8s-data/triplestore
type: DirectoryOrCreate
</code></pre>
| gohm'c |
<p>The domain configured is <code>ticket.devaibhav.live</code></p>
<p><code>ping ticket.devaibhav.live</code> is pointing to the correct IP address of the load balancer provisioned by Digital Ocean. I haven't configured SSL on the cluster yet, but if I try to access my website <a href="http://ticket.devaibhav.live" rel="nofollow noreferrer">http://ticket.devaibhav.live</a> gives an 400 bad request. I am new to kubernetes and networking inside a cluster.</p>
<p>According to my understanding, when the browser sends a request to <a href="http://ticket.devaibhav.live" rel="nofollow noreferrer">http://ticket.devaibhav.live</a>, the request is sent to the Digital Ocean load balancer and then the ingress service (ingress-nginx from Kubernetes in my case) routes the traffic based on the rules I have defined.</p>
<p>ingress-nginx service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
service.beta.kubernetes.io/do-loadbalancer-hostname: 'ticket.devaibhav.live'
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
</code></pre>
<p>ingress resource rules</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticket.devaibhav.live
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
- path: /api/tickets/?(.*)
pathType: Prefix
backend:
service:
name: tickets-srv
port:
number: 3000
- path: /api/orders/?(.*)
pathType: Prefix
backend:
service:
name: orders-srv
port:
number: 3000
- path: /api/payments/?(.*)
pathType: Prefix
backend:
service:
name: payments-srv
port:
number: 3000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
</code></pre>
<p>Essentially, when I hit <a href="http://ticket.devaibhav.live" rel="nofollow noreferrer">http://ticket.devaibhav.live</a> the request should be mapped to the last rule, where it must be routed to client-srv.</p>
<p>client deployment and service</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: client-depl
spec:
replicas: 1
selector:
matchLabels:
app: client
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: vaibhav908/client
---
apiVersion: v1
kind: Service
metadata:
name: client-srv
spec:
selector:
app: client
ports:
- name: client
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>The above configuration works well on the development server where I am using minikube.
I am unable to understand where I am going wrong with the configuration. I will provide more details as I feel it would be necessary.</p>
<p>[edit]
on the cluster that is deployed
<code>kubectl get services</code></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
client-srv ClusterIP 10.245.100.25 <none> 3000/TCP 2d17h
and some other services
</code></pre>
<p><code>kubectl describe ingress</code></p>
<pre><code>Name: ingress-service
Labels: <none>
Namespace: default
Address: ticket.devaibhav.live
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
ticket.devaibhav.live
/api/users/?(.*) auth-srv:3000 (10.244.1.76:3000)
/api/tickets/?(.*) tickets-srv:3000 (10.244.0.145:3000)
/api/orders/?(.*) orders-srv:3000 (10.244.1.121:3000)
/api/payments/?(.*) payments-srv:3000 (10.244.1.48:3000)
/?(.*) client-srv:3000 (10.244.1.32:3000)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: true
Events: <none>
</code></pre>
| Vaibhav07 | <p>Make sure you have your ingress controller configured to respect the proxy protocol settings in the LB. Try adding a proxy <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol" rel="nofollow noreferrer">protocol</a> directive to your config map.</p>
<p>As given in the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>Enables or disables the PROXY protocol to receive client connection
(real IP address) information passed through proxy servers and load
balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).</p>
</blockquote>
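<p>A minimal sketch of that ConfigMap entry (hedged: the ConfigMap name and namespace must match whatever your controller was started with; check the <code>--configmap</code> flag on the controller Deployment):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # placeholder: use the name your controller actually reads
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
</code></pre>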
| Fariya Rahmat |
<p>In a multiple node cluster we want to expose a service handling UDP traffic. There are two requirements:</p>
<ol>
<li>We want the service to be backed up by multiple pods (possibly running on different nodes) in order to scale horizontally.</li>
<li>The service needs the UDP source IP address of the client (i.e., should use DNAT instead of SNAT)</li>
</ol>
<p>Is that possible?</p>
<p>We currently use a <code>NodePort</code> service with <code>externalTrafficPolicy: local</code>. This forces DNAT but only the pod running on the requested node is receiving the traffic.
There doesn't seem to be a way to spread the load over multiple pods on multiple nodes.</p>
<p>I already looked at <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">this Kubernetes tutorial</a> and also this article <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">here</a>.</p>
| nimrodm | <p><strong>The Problem</strong></p>
<p>I feel like there is a need for some explanation before facing the actual issue(s) in order to understand <em>why</em> things do not work as expected:</p>
<p>Usually what happens when using <code>NodePort</code> is that you expose a port on every node in your cluster. When making a call to <code>node1:port</code> the traffic will then (same as with a <code>ClusterIP</code> type) be forwarded to one Pod that matches the <code>selector</code>, regardless of that Pod being on <code>node1</code> or another node.</p>
<p>Now comes the tricky part.
When using <code>externalTrafficPolicy: Local</code>, packets that arrive on a node that does not have a Pod on it will be dropped.
Perhaps the following illustration explains the behavior in a more understandable way.</p>
<p><code>NodePort</code> with default <code>externalTrafficPolicy: Cluster</code>:</p>
<pre><code>package --> node1 --> forwards to random pod on any node (node1 OR node2 OR ... nodeX)
</code></pre>
<p><code>NodePort</code> with <code>externalTrafficPolicy: Local</code>:</p>
<pre><code>package --> node1 --> forwards to pod on node1 (if pod exists on node1)
package --> node1 --> drops package (if there is no pod on node1)
</code></pre>
<p>So in essence to be able to properly distribute the load when using <code>externalTrafficPolicy: Local</code> two main issues need to be addressed:</p>
<ol>
<li>There has to be a Pod running on every node in order for packets not to be dropped</li>
<li>The client has to send packets to multiple nodes in order for the load to be distributed</li>
</ol>
<hr />
<p><strong>The solution</strong></p>
<p>The first issue can be resolved rather easily by using a <code>DaemonSet</code>. It will ensure that one instance of the Pod runs on every node in the cluster.</p>
<p>Alternatively one could also use a simple <code>Deployment</code>, manage the <code>replicas</code> manually and ensure proper distribution across the nodes <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">by using <code>podAntiAffinity</code></a>. This approach would take more effort to maintain since <code>replicas</code> must be adjusted manually but can be useful if you want to have more than just 1 Pod on each node.</p>
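<p>A minimal DaemonSet sketch for the first approach (hedged: the image name and UDP port are placeholders; your existing <code>NodePort</code> Service with <code>externalTrafficPolicy: Local</code> would select these Pods via the <code>app: udp-app</code> label):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: udp-app
spec:
  selector:
    matchLabels:
      app: udp-app
  template:
    metadata:
      labels:
        app: udp-app
    spec:
      containers:
      - name: udp-app
        image: registry.example.com/udp-app:latest   # placeholder image
        ports:
        - containerPort: 5000                        # placeholder UDP port
          protocol: UDP
</code></pre>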
<p>Now for the second issue.
The easiest solution would be to let the client implement logic on its side and send requests to all the nodes in a round-robin fashion; however, that is not a very practical and/or realistic way of doing it.</p>
<p>Usually when using <code>NodePort</code> there is still a load balancer of some kind in front of it to distribute the load (not talking about the Kubernetes service type <code>LoadBalancer</code> here). This may seem redundant since by default <code>NodePort</code> will distribute the traffic across all the Pods anyways; however, the node that gets requested still receives the traffic and then another hop happens. Furthermore, if only the same node is addressed at all times, once that node goes down (for whatever reason) traffic will never reach any of the Pods anyways. So for those (and many other) reasons a load balancer should <em>always</em> be used in combination with <code>NodePort</code>. To solve the issue, simply configure the load balancer to preserve the source IP of the original client.</p>
<p>Furthermore, depending on what cloud you are running on, there is a chance you are able to configure a service of type <code>LoadBalancer</code> instead of <code>NodePort</code> (which basically is a <code>NodePort</code> service + a load balancer in front of it as described above), configure it with <code>externalTrafficPolicy: Local</code>, and address the first issue as described earlier, and you have achieved what you wanted to do.</p>
| F1ko |
<p>I am using a Nginx Ingress Controller in a Kubernetes Cluster. I've got an application within the cluster, which was available over the internet. Now I'm using the Ingress Controller to access the application, with the intent of showing some custom errors.</p>
<p>If I access the application (which is not written by myself, so I can't change things there), it sees the IP address of the <code>nginx-ingress-controller-pod</code>. The logs of the <code>nginx-ingress-controller-pod</code> indicate that the remote address is a different one.</p>
<p>I've already tried things like <code>use-proxy-protocol</code>, with which I would be able to use <code>$remote_addr</code> and get the right IP. But as I mentioned, I am not able to change my application, so I have to "trick" the ingress controller into using the <code>$remote_addr</code> as its own.
How can I configure the ingress so the application sees the request coming from the remote IP and not from the <code>nginx-ingress-controller-pod</code> IP? Is there a way to do this?</p>
<p>Edit: I'm using a bare metal kubernetes installation with kubernetes v1.19.2 and the nginx chart <code>ingress-nginx-3.29.0</code>.</p>
| black_hawk | <p><strong>This is not achievable with a layer 7 ingress controller.</strong></p>
<p>If the Ingress preserved the source IP, the response would go directly from the app pod to the client, so the client would get a response from an IP:port different from the one it connected to. Or even worse: the client's NAT would drop the response completely because it doesn't match any existing connection.</p>
<p>You can take a look at this <a href="https://stackoverflow.com/questions/63836681/preserve-source-ip-on-kubernetes-bare-metal-with-ingress-nginx-iptables-and-met">similar question</a> on stackoverflow with accepted answer:</p>
<blockquote>
<p>As the ingress is an above-layer-4 proxy, there is no way to preserve the source IP at the layer 3 IP protocol. The best option (and I think Nginx Ingress already does this by default) is to add the "X-Forwarded-For" header to every forwarded HTTP request.
Your app is supposed to log the X-Forwarded-For header</p>
</blockquote>
<p>You can try to workaround by following <a href="https://docs.ovh.com/gb/en/kubernetes/getting-source-ip-behind-loadbalancer/" rel="nofollow noreferrer">this article</a>. It could help you to preserve your IP.</p>
<p>I also recommend this <a href="https://blog.envoyproxy.io/introduction-to-modern-network-load-balancing-and-proxying-a57f6ff80236" rel="nofollow noreferrer">very good article</a> about load balancing and proxying. You will also learn a bit about load balancing on L7:</p>
<blockquote>
<p>L7 load balancing and the OSI model
As I said above in the section on L4 load balancing, using the OSI model for describing load balancing features is problematic. The reason is that L7, at least as described by the OSI model, itself encompasses multiple discrete layers of load balancing abstraction. e.g., for HTTP traffic consider the following sublayers:</p>
<ul>
<li>Optional Transport Layer Security (TLS). Note that networking people argue about which OSI layer TLS falls into. For the sake of this discussion we will consider TLS L7.</li>
<li>Physical HTTP protocol (HTTP/1 or HTTP/2).</li>
<li>Logical HTTP protocol (headers, body data, and trailers).</li>
<li>Messaging protocol (gRPC, REST, etc.).</li>
</ul>
</blockquote>
| Mikołaj Głodziak |
<p>I am making some tests, and I believe I may be misunderstanding how "allocatable" works.
I have 2 nodes running on Azure AKS, both with a memory capacity of 65 Gi, and nothing but some basic daemons running on them.
When describing these nodes, both show about 59 Gi of "allocatable" memory.</p>
<p>Then I start a new pod, with a resource request and limit of 30 Gi, on one of these nodes. In practice, the pod uses just a couple of MB.
I would expect the "allocatable" to drop to 29 Gi, but it stays at 59 Gi.</p>
<p>If I do a describe, the "Allocated resources" section correctly displays what I actually requested.</p>
<pre><code>Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
memory 31190Mi (53%) 32170Mi (55%)
....
</code></pre>
<p>But not the allocatable:</p>
<pre><code>Allocatable:
attachable-volumes-azure-disk: 16
cpu: 7820m
ephemeral-storage: 119716326407
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 59350320Ki
pods: 30
</code></pre>
<p><strong>So, is allocatable supposed to take in account the requested resources of the pod ?</strong></p>
<p>If yes, why is the value not decreasing here?</p>
<p>If no, how can I know what is the total amount of request/limit on one specific node (in a parsable format, such as json) ?</p>
| Djoby | <p><code>is allocatable supposed to take in account the requested resources of the pod ?</code></p>
<p>No. Allocatable is your underlying instance capacity <strong>minus</strong> all the reserved (eg. system-reserved, kube-reserved, eviction threshold).</p>
<p><code>...how can I know what is the total amount of request/limit on one specific node?</code></p>
<p>You can continue to use <code>kubectl describe node</code>, <code>kubectl top node</code>, the Kubernetes dashboard, or one of the many other monitoring tools out there.</p>
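<p>If you want the numbers in a parsable format, a couple of plain kubectl calls are usually enough (the node name is a placeholder):</p>
<pre><code># Allocatable capacity of a node as JSON:
kubectl get node <node-name> -o jsonpath='{.status.allocatable}'

# Sum of requests/limits as computed by the scheduler (human-readable):
kubectl describe node <node-name> | grep -A 8 "Allocated resources"
</code></pre>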
| gohm'c |
<p>What is the difference between</p>
<pre><code>--dry-run
--dry-run=client
--dry-run=server
</code></pre>
<p>options?</p>
<p>And is there any purpose other than creating a definition file?</p>
<p>Thank you for your time.</p>
| Alidmrc | <p>Passage from the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">official Kubernetes <code>kubectl</code> references</a>:</p>
<blockquote>
<p>[--dry-run] Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.</p>
</blockquote>
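<p>In practice, <code>--dry-run=client</code> is often used to generate manifests locally, while <code>--dry-run=server</code> lets the API server validate them without persisting anything. For example:</p>
<pre><code># Generate a Deployment manifest without touching the cluster:
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deployment.yaml

# Ask the API server to validate it (schema, admission) without creating anything:
kubectl apply -f deployment.yaml --dry-run=server
</code></pre>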
<p>The following table should explain it in a much simpler way:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: center;">sends data to server/cluster</th>
<th style="text-align: center;">perform change on server/cluster</th>
<th style="text-align: center;">validation by the server/cluster</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;"><code>--dry-run=client</code></td>
<td style="text-align: center;">no</td>
<td style="text-align: center;">no</td>
<td style="text-align: center;">no</td>
</tr>
<tr>
<td style="text-align: left;"><code>--dry-run=server</code></td>
<td style="text-align: center;">yes</td>
<td style="text-align: center;">no</td>
<td style="text-align: center;">yes</td>
</tr>
<tr>
<td style="text-align: left;"><code>--dry-run=none</code></td>
<td style="text-align: center;">yes</td>
<td style="text-align: center;">yes</td>
<td style="text-align: center;">yes</td>
</tr>
</tbody>
</table>
</div> | F1ko |
<p>I have a Kubernetes cluster of 3 nodes in Amazon EKS. It's running 3 pods of Cockroachdb in a StatefulSet. Now I want to use another instance type for all nodes of my cluster.
So my plan was this:</p>
<ol>
<li>Add 1 new node to the cluster, increase replicas in my StatefulSet to 4 and wait for the new Cockroachdb pod to fully sync.</li>
<li>Decommission and stop one of the old Cockroachdb nodes.</li>
<li>Decrease replicas of the StatefulSet back to 3 to get rid of one of the old pods.</li>
<li>Repeat steps 1-3 two more times.</li>
</ol>
<p>Obviously, that doesn't work because a StatefulSet deletes the most recent pods first when scaling down, so my new pod gets deleted instead of the old one.
I guess I could just create a new StatefulSet and make it use existing PVs, but that doesn't seem like the best solution for me. Is there any other way to do the migration?</p>
| etherman | <p>You can consider making a copy of your ASG's current launch template -> upgrading the instance type in the copied template -> pointing your ASG to the new launch template -> performing an ASG instance refresh. With a cluster of 3 nodes and a minimum healthy percentage of 90%, only one instance is replaced at a time. Pods on the drained node will stay in Pending for 5~10 minutes and then be redeployed on the new node. This way you do not need to scale the StatefulSet up unnecessarily.</p>
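<p>A minimal sketch of triggering the refresh with the AWS CLI (the ASG name is a placeholder, and it assumes the ASG already points at the new launch template version):</p>
<pre><code>aws autoscaling start-instance-refresh \
  --auto-scaling-group-name <my-asg-name> \
  --preferences '{"MinHealthyPercentage": 90, "InstanceWarmup": 300}'

# Watch progress:
aws autoscaling describe-instance-refreshes --auto-scaling-group-name <my-asg-name>
</code></pre>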
| gohm'c |
<p>I have a question: as part of the kubectl deployment process we currently run, we sometimes need to run migrations on the database.</p>
<p>We do what we call a rollout restart after we have re-built a tagged Docker image.</p>
<p>However, we usually have anywhere between 2 and 4 pods, with randomly assigned names, e.g.:</p>
<pre><code>web-7f54669b5f-c6z8m 1/1 Running 0 55s
web-7f54669b5f-fp2kw 1/1 Running 0 67s
</code></pre>
<p>To run, say, a database migration command, we could do:</p>
<pre><code>kubectl exec --stdin --tty web-7f54669b5f-fp2kw -- python manage.py migrate --plan
</code></pre>
<p>That's fine. But we were wondering if there is a way to just target any of the "web" pods ...?</p>
<p>I'm assuming it's not possible, but not 100% sure why...</p>
| Micheal J. Roberts | <p>So, what you can do is add unique labels to all pods and fetch each one of them using labels.</p>
<p>Example:</p>
<pre><code>kubectl exec --stdin --tty $(kubectl get po -l "<label_name>=<label_value>" \
-o jsonpath='{.items[0].metadata.name}') -- python manage.py migrate --plan
</code></pre>
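<p>For example, assuming the Deployment labels its pods with <code>app=web</code> (you can check with <code>kubectl get pods --show-labels</code>), the call becomes:</p>
<pre><code>kubectl exec --stdin --tty $(kubectl get po -l "app=web" \
  -o jsonpath='{.items[0].metadata.name}') -- python manage.py migrate --plan
</code></pre>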
| Gautam Rajotya |
<p>Is it possible to create a Kubernetes service and pod in different namespaces, for example, having myweb-svc pointing to the actual running myweb-pod, while myweb-svc and myweb-pod are in different namespaces?</p>
| Ian | <p>You can write a YAML manifest that creates the pod and the service in their respective namespaces. You need to set the <code>namespace</code> field in the <code>metadata</code> section of both the Pod and Service objects to control where each one is created.</p>
<p>Also, if you want to point your Service to a Service in a different namespace or on another cluster you can use service without a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">pod selector</a>.</p>
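<p>As a minimal sketch (all names and namespaces below are placeholders), an <code>ExternalName</code> Service lets clients in one namespace resolve a Service that fronts the pod in another namespace:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
  namespace: frontend            # namespace where clients look the Service up
spec:
  type: ExternalName
  # DNS name of the "real" Service that selects myweb-pod in its own namespace
  externalName: myweb-svc.backend.svc.cluster.local
</code></pre>
<p>Note that the pod still needs a regular Service in its own namespace for this DNS name to resolve.</p>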
<p>Refer to this link on <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/" rel="nofollow noreferrer">Understanding kubernetes Object</a> for more information.</p>
| Fariya Rahmat |
<p>I'm trying to schedule a CronJob to launch a kubectl command. The CronJob does not start a pod.
This is my cronjob</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: mariadump
namespace: my-namespace
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: mariadbdumpsa
containers:
- name: kubectl
image: garland/kubectl:1.10.4
command:
- /bin/sh
- -c
- kubectl get pods;echo 'DDD'
restartPolicy: OnFailure
</code></pre>
<p>I create the cronjob on openshift by:</p>
<pre><code>oc create -f .\cron.yaml
</code></pre>
<p>Obtaining the following results</p>
<pre><code>PS C:\Users\mymachine> oc create -f .\cron.yaml
cronjob.batch/mariadump created
PS C:\Users\mymachine> oc get cronjob -w
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
mariadump */1 * * * * False 0 <none> 22s
mariadump */1 * * * * False 1 10s 40s
mariadump */1 * * * * False 0 20s 50s
PS C:\Users\mymachine> oc get pods -w
NAME READY STATUS RESTARTS AGE
</code></pre>
<p>The CronJob does not start a pod, but if I change it to this CronJob (removing the service account)</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: mariadump
namespace: my-namespace
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: kubectl
image: garland/kubectl:1.10.4
command:
- /bin/sh
- -c
- kubectl get pod;echo 'DDD'
restartPolicy: OnFailure
</code></pre>
<p>it works as expected, except that it lacks the required permissions.</p>
<pre><code>PS C:\Users\myuser> oc get cronjob -w
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
mariadump */1 * * * * False 0 <none> 8s
mariadump */1 * * * * False 1 3s 61s
PS C:\Users\myuser> oc get pods -w
NAME READY STATUS RESTARTS AGE
mariadump-1616089500-mnfxs 0/1 CrashLoopBackOff 1 8s
PS C:\Users\myuser> oc logs mariadump-1616089500-mnfxs
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:default" cannot list resource "pods" in API group "" in the namespace "my-namespace"
</code></pre>
<p>To give the CronJob the proper permissions, I used this template to create the Role, the RoleBinding, and the ServiceAccount.</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: my_namespace
name: mariadbdump
rules:
- apiGroups:
- extensions
- apps
resources:
- deployments
- replicasets
verbs:
- 'patch'
- 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: mariadbdump
namespace: my_namespace
subjects:
- kind: ServiceAccount
name: mariadbdumpsa
namespace: my_namespace
roleRef:
kind: Role
name: mariadbdump
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: mariadbdumpsa
namespace: my_namespace
</code></pre>
<p>Can anyone help me understand why the CronJob with the ServiceAccount is not working?</p>
<p>Thanks</p>
| randex17 | <p>The following YAML actually works. Note that every resource now uses the same <code>my-namespace</code> (the original RBAC manifests used <code>my_namespace</code>), and the Role grants <code>get</code>/<code>list</code>/<code>watch</code>/<code>create</code> on <code>pods</code> and <code>pods/exec</code> in the core API group:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: my-namespace
name: mariadbdump
rules:
- apiGroups:
- ""
- ''
resources:
- deployments
- replicasets
- pods
- pods/exec
verbs:
- 'watch'
- 'get'
- 'create'
- 'list'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: mariadbdump
namespace: my-namespace
subjects:
- kind: ServiceAccount
name: mariadbdumpsa
namespace: my-namespace
roleRef:
kind: Role
name: mariadbdump
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: mariadbdumpsa
namespace: my-namespace
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: mariadump
namespace: my-namespace
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: mariadbdumpsa
containers:
- name: kubectl
image: garland/kubectl:1.10.4
command:
- /bin/sh
- -c
- kubectl exec $(kubectl get pods | grep Running | grep 'mariadb' | awk '{print $1}') -- /opt/rh/rh-mariadb102/root/usr/bin/mysqldump --skip-lock-tables -h 127.0.0.1 -P 3306 -u userdb --password=userdbpass databasename >/tmp/backup.sql;kubectl cp my-namespace/$(kubectl get pods | grep Running | grep 'mariadbdump' | awk '{print $1}'):/tmp/backup.sql my-namespace/$(kubectl get pods | grep Running | grep 'mariadb' | awk '{print $1}'):/tmp/backup.sql;echo 'Backup done'
restartPolicy: OnFailure
</code></pre>
| randex17 |
<p>I have a single-node Kubernetes cluster running in a VM in Azure. I have a service running an SCTP server on port 38412. I need to expose that port externally. I have tried changing the port type to NodePort, but with no success. I am using Flannel as an overlay network and Kubernetes version 1.23.3.</p>
<p>This is my service.yaml file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
meta.helm.sh/release-name: fivegcore
meta.helm.sh/release-namespace: open5gs
creationTimestamp: "2022-02-11T09:24:09Z"
labels:
app.kubernetes.io/managed-by: Helm
epc-mode: amf
name: fivegcore-amf
namespace: open5gs
resourceVersion: "33072"
uid: 4392dd8d-2561-49ab-9d57-47426b5d951b
spec:
clusterIP: 10.111.94.85
clusterIPs:
- 10.111.94.85
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: tcp
nodePort: 30314
port: 80
protocol: TCP
targetPort: 80
- name: ngap
nodePort: 30090
port: 38412
protocol: SCTP
targetPort: 38412
selector:
epc-mode: amf
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p>As you can see I changed the port type to NodePort.</p>
<pre><code>open5gs fivegcore-amf NodePort 10.111.94.85 <none> 80:30314/TCP,38412:30090/SCTP
</code></pre>
<p>This is my configmap.yaml. In this ConfigMap, the <code>ngap</code> <code>dev</code> entry is the server I want to connect to, which uses the default eth0 interface in the container.</p>
<pre><code>apiVersion: v1
data:
amf.yaml: |
logger:
file: /var/log/open5gs/amf.log
#level: debug
#domain: sbi
amf:
sbi:
- addr: 0.0.0.0
advertise: fivegcore-amf
ngap:
dev: eth0
guami:
- plmn_id:
mcc: 208
mnc: 93
amf_id:
region: 2
set: 1
tai:
- plmn_id:
mcc: 208
mnc: 93
tac: 7
plmn_support:
- plmn_id:
mcc: 208
mnc: 93
s_nssai:
- sst: 1
sd: 1
security:
integrity_order : [ NIA2, NIA1, NIA0 ]
ciphering_order : [ NEA0, NEA1, NEA2 ]
network_name:
full: Open5GS
amf_name: open5gs-amf0
nrf:
sbi:
name: fivegcore-nrf
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: fivegcore
meta.helm.sh/release-namespace: open5gs
creationTimestamp: "2022-02-11T09:24:09Z"
labels:
app.kubernetes.io/managed-by: Helm
epc-mode: amf
</code></pre>
<p>I exec into the container and check whether the server is running or not.
This is the netstat output of the container.</p>
<pre><code>Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 10.244.0.31:37742 10.105.167.186:80 ESTABLISHED 1/open5gs-amfd
sctp 10.244.0.31:38412 LISTEN 1/open5gs-amfd
</code></pre>
<p>sctp module is also loaded in the host.</p>
<pre><code>$lsmod | grep sctp
sctp 356352 8
xt_sctp 20480 0
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,ip_vs,sctp
x_tables 49152 18 ip6table_filter,xt_conntrack,xt_statistic,iptable_filter,iptable_security,xt_tcpudp,xt_addrtype,xt_nat,xt_comment,xt_owner,ip6_tables,xt_sctp,ipt_REJECT,ip_tables,ip6table_mangle,xt_MASQUERADE,iptable_mangle,xt_mark
</code></pre>
<p>Is it possible to expose this server externally?</p>
| vigneshwaran s | <p><strong>Neither AKS nor Flannel supports SCTP at the time of writing.</strong> Here are some <a href="https://learn.microsoft.com/en-us/answers/questions/573521/does-azure-aks-support-sctp-and-how-to-enable-it.html" rel="nofollow noreferrer">details</a> about it.</p>
| gohm'c |
<p>In short, there are two services that communicate with each other via HTTP REST APIs. My deployment is running in an AKS cluster. As the ingress controller, I installed this Nginx controller helm chart:
<a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx</a> <br><br>
The load balancer has a fixed IP attached. My deployment running in the cluster should send usage info to the other service periodically and vice versa. However, that service has an IP whitelist and I need to provide a static IP for whitelisting my deployment. Currently, the problem is that my cURL call carries the node's IP, which keeps changing depending on which node my deployment is running on. Also, the number of nodes is scaled dynamically. My goal is to send egress traffic through the load balancer, something like this:<br> <a href="https://i.stack.imgur.com/LQVGc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQVGc.png" alt="enter image description here" /></a> <br><br>
Is there any way to route the outbound traffic from my pods to the loadbalancer?</p>
| Joci Ivanics | <p>This is possible with Azure Load Balancer <a href="https://learn.microsoft.com/en-us/azure/load-balancer/egress-only" rel="nofollow noreferrer">outbound rules</a>: the LB performs SNAT, so your "other service" will see the fixed frontend public IP. Another method is to use <a href="https://learn.microsoft.com/en-us/samples/azure-samples/aks-nat-agic/aks-nat-agic/" rel="nofollow noreferrer">Virtual Network NAT</a>, where your "other service" will see the fixed NAT gateway public IP. You can then whitelist the fixed public IP either way.</p>
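<p>A rough sketch of both options with the Azure CLI (all resource names are placeholders, and exact flags depend on your CLI version):</p>
<pre><code># Option 1: let AKS manage a fixed outbound IP on its Standard Load Balancer
az aks update --resource-group <rg> --name <aks-cluster> \
  --load-balancer-managed-outbound-ip-count 1

# Option 2: attach a NAT gateway with a fixed public IP to the AKS subnet
az network nat gateway create --resource-group <rg> --name <nat-gw> \
  --public-ip-addresses <public-ip-name>
az network vnet subnet update --resource-group <rg> --vnet-name <vnet> \
  --name <aks-subnet> --nat-gateway <nat-gw>
</code></pre>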
| gohm'c |
<p>Forum,</p>
<p>I am currently looking into Azure Synapse as an option for migrating our on-prem data architecture. I am excited by the functionality it offers - SQL Pools, Spark Pools, and the accompanying notebooks. I get that Synapse can function as an all-in-one data platform, where my data scientists and data analysts can use its functionality to deliver insights at will. However, a large part of the work my team does is creating <em>data products</em>.</p>
<p>We currently have a kubernetes cluster with several stand-alone API's that perform data-science operations in the larger whole of our software. They can be thought of as microservices. Most of the ETL is done in our SQL-server, and the microservices in our K8S cluster (usually python + some python packages + FastAPI) typically get the required data from our SQL-server through some SQL-query with an ODBC connector.</p>
<p>Now my question is, how suitable is Synapse for such an architecture? Can I call upon the SQL-pool or spark-pool to do the heavy data-lifting from outside the azure environment, say from a kubernetes pod?</p>
| Psychotechnopath | <p>Unfortunately you can't integrate Azure Synapse Analytics with Kubernetes Services.</p>
<p>While Synapse SQL helps perform SQL queries, Apache Spark executes batch/stream processing on Big Data. SQL Pool is used to work with data stored in Dedicated SQL Pool while Spark SQL can be integrated with existing data preparation or data science projects that you may hold in Azure Databricks or Azure Machine Learning Services.</p>
<p>Also, as per this <a href="https://sourceforge.net/software/compare/Azure-Synapse-vs-Azure-Kubernetes-Service-AKS/" rel="nofollow noreferrer">third-party document</a>, Azure Synapse Analytics can't integrate with Kubernetes Services.</p>
<p><a href="https://i.stack.imgur.com/w9BAM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w9BAM.png" alt="enter image description here" /></a></p>
<p>As a workaround, you can copy/move your data from Kubernetes to Azure Services like Azure Dedicated SQL Pool, Azure Blob Storage or Azure Data Lake Storage and then integrate it with Azure Synapse pipeline or Spark Pool.</p>
| Utkarsh Pal |
<p>I am a k8s and EKS noob.
I'm trying to spin up a cluster and keep running into the <em>desiredCapacity</em> key. I just cannot find where it is documented.<br>
Don't bother pointing to the <a href="https://eksctl.io/usage/schema/#nodeGroups-desiredCapacity" rel="nofollow noreferrer">schema</a>, it's not described there.</p>
| Skulas | <p><code>When kicking a managed EKS cluster using a yaml config file, what does desiredCapacity in ClusterConfig specify?</code></p>
<p>This field is actually used by the ASG: your node group will start by launching the <strong>desired capacity</strong> (number of EC2 instances), and the ASG will monitor and replace any instance that has a health issue to maintain the <strong>desired capacity</strong> for your cluster.</p>
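<p>A minimal sketch of where the key sits in an eksctl cluster config (the node group name and instance type are just examples):</p>
<pre><code>nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 3   # initial/target number of EC2 instances the ASG keeps running
    minSize: 1
    maxSize: 5
</code></pre>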
| gohm'c |
<p>I am trying to run this command in my Argo workflow</p>
<p><code>kubectl cp /tmp/appendonly.aof redis-node-0:/data/appendonly.aof -c redis -n redis</code></p>
<p>but I get this error</p>
<pre><code>Error from server (InternalError): an error on the server ("invalid upgrade response: status code 200") has prevented the request from succeeding (get pods redis-node-0)
</code></pre>
<p>surprisingly when I am copying the file from a pod to local system then it is working, like this command <code>kubectl cp redis-node-0:/data/appendonly.aof tmp/appendonly.aof -c redis -n redis</code></p>
<p>Any idea what might be causing it?</p>
| Viplove | <p>Solution -
Not sure what was causing this issue, but I found this command in the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#copy-files-and-directories-to-and-from-containers" rel="nofollow noreferrer">docs</a> that worked fine:</p>
<p><code>tar cf - appendonly.aof | kubectl exec -i -n redis redis-node-0 -- tar xf - -C /data</code></p>
| Viplove |
<p>I'm trying to generate a <strong>Unique Key</strong> in the application, using date/time and a number sequence managed by the application. It works fine as long as we don't run multiple application instances.</p>
<p>The application is running in a Kubernetes pod with auto scaling configured.</p>
<p>Is there any way to generate or get a <strong>unique and numeric identifier per pod</strong> and put it in the container environment variables? There is no need for the identifier to be stable, so we don't have to use StatefulSets.</p>
<p><strong>UPDATE</strong></p>
<p>The problem we are having with the UID is its size in our collections; that's why we are looking for a solution that is about the size of a bigint, or any other similar numeric unique ID that could be used as an alternative to the UID.</p>
| lego | <p><code>...get a unique and numeric identifier per pod and put them in the container environment variables?</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
labels:
run: busybox
spec:
restartPolicy: Never
containers:
- name: busybox
image: busybox
command: ["ash","-c","echo ${MY_UID} && sleep 3600"]
env:
- name: MY_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
</code></pre>
<p>Run <code>kubectl logs <pod></code> will print you the unique ID assigned to the environment variable in your pod.</p>
| gohm'c |
<p>I am a neophyte. I'm trying to configure my project on GitLab so I can integrate it with a Kubernetes cluster infrastructure pipeline.
While configuring it, GitLab asked for a certificate and a token. Since Kubernetes is deployed on Azure, how can I create/retrieve the certificate and the required token?
Also, which user / secret in the Kubernetes service does it refer to?</p>
<p><a href="https://i.stack.imgur.com/XFZTx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XFZTx.png" alt="enter image description here" /></a></p>
| Jonio | <p>You can get the default values of <em><strong>CA certificate</strong></em> using the below steps :</p>
<p><strong>CA Certificate:</strong></p>
<p><em><strong>CA certificate</strong></em> is nothing but the Kubernetes certificate that we use in the config file for authenticating to the cluster.</p>
<ol>
<li>Connect to the AKS cluster: <code>az aks get-credentials --resource-group <RG> --name <KubeName></code></li>
<li>Run <code>kubectl get secrets</code>; the output will contain a default token name, which you can copy.</li>
<li>Run <code>kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode</code> to get the
certificate. You can copy the certificate and use it when setting up the
runner.</li>
</ol>
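<p>Put together (cluster, resource group, and secret names are placeholders), the commands look like this:</p>
<pre><code>az aks get-credentials --resource-group <RG> --name <KubeName>
kubectl get secrets
kubectl get secret <secret-name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
</code></pre>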
<p><strong>Output:</strong></p>
<p><a href="https://i.stack.imgur.com/Chbgo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Chbgo.png" alt="enter image description here" /></a></p>
<p><strong>Token :</strong></p>
<p>The token will be of the <em><strong>service account with cluster-admin permissions</strong></em> which <code>Gitlab</code> will use to access the AKS cluster , so you can create a <em><strong>new admin service account</strong></em> if not created earlier by using below steps:</p>
<ol>
<li><p>Create a Yaml file with below contents :</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab-admin
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: gitlab-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: gitlab-admin
namespace: kube-system
</code></pre>
</li>
<li><p>Run <code>kubectl apply -f <filename>.yaml</code> to apply and bind the service
account to the cluster.</p>
</li>
<li><p>Run <code>kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')</code> to get the token
for the Gitlab Admin we created in the file and bind with the
cluster in the previous step. You can copy the token value and use it in
the runner setting .</p>
</li>
</ol>
<p><strong>Output:</strong></p>
<p><a href="https://i.stack.imgur.com/nXlQ2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nXlQ2.png" alt="enter image description here" /></a></p>
| Ansuman Bal |
<p>I have a Spring Boot app that consumes from RabbitMQ. I deployed the app on k8s and created a KEDA ScaledObject to scale it when the queue size passes x messages. That works fine and new pods get created, but now I want to add a rule to the scale-down part: I want to make sure that no user is using a pod before it is scaled down, so I don't interrupt any activity. I searched on Google and ChatGPT and found things like preStop and readinessProbe. Does anyone have an idea how this can be done? I'll appreciate any kind of help :)</p>
| Mohamed Nalouti | <p>I assume there is a bounded time your application needs to process a request, e.g. 45s from receiving it to completing it.</p>
<p>In that case I would use <code>terminationGracePeriodSeconds</code>, which defaults to <code>30s</code> but can be extended to any value.</p>
<p>What happens under the hood: when <code>KEDA</code> starts scaling down and your pod enters the <code>Terminating</code> state, it is immediately removed from the <code>service</code> endpoints and stops receiving new requests (if a <code>service</code> exists for that pod). Kubernetes then sends a <code>SIGTERM</code> signal and waits for the process to finish its work (make sure your application handles <code>SIGTERM</code> properly). After processing the current request, the pod should exit before the time defined by <code>terminationGracePeriodSeconds</code>. If processing takes longer than <code>terminationGracePeriodSeconds</code>, the pod is killed without further waiting and the message should go back to the queue.</p>
<p>So if processing a request takes e.g. <code>120s</code>, a sample manifest should look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
terminationGracePeriodSeconds: 130
</code></pre>
| Michał Lewndowski |
<p>I am new to the DevOps and Terraform domain, and I would like to ask the following. I have already create a VNET (using portal) which called "myVNET" in the resource group "Networks". I am trying to implement a AKS cluster using Terraform. My main.tf file is below</p>
<pre><code>provider "azurerm" {
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.client_secret
tenant_id = var.tenant_id
features {}
}
resource "azurerm_resource_group" "MyPlatform" {
name = var.resourcename
location = var.location
}
resource "azurerm_kubernetes_cluster" "aks-cluster" {
name = var.clustername
location = azurerm_resource_group.MyPlatform.location
resource_group_name = azurerm_resource_group.MyPlatform.name
dns_prefix = var.dnspreffix
default_node_pool {
name = "default"
node_count = var.agentnode
vm_size = var.size
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
network_profile {
network_plugin = "azure"
load_balancer_sku = "standard"
network_policy = "calico"
}
}
</code></pre>
<p>My question is the following, how can I attach my cluster to my VNET?</p>
| p4pe | <p>You do that by assigning the subnet ID to the node pool <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#vnet_subnet_id" rel="nofollow noreferrer">vnet_subnet_id</a>.</p>
<pre><code>data "azurerm_subnet" "subnet" {
name = "<name of the subnet to run in>"
virtual_network_name = "MyVNET"
resource_group_name = "Networks"
}
...
resource "azurerm_kubernetes_cluster" "aks-cluster" {
...
default_node_pool {
name = "default"
...
vnet_subnet_id = data.azurerm_subnet.subnet.id
}
...
</code></pre>
<p>You can reference this <a href="https://github.com/edalferes/terraform-azure-aks" rel="nofollow noreferrer">existing module</a> to build your own module if not use it directly.</p>
| gohm'c |
<p>I have pods created by a <code>CronJob</code> running in parallel. They complete their task and run again after a fixed interval of 20 minutes as per the cron expression. I noticed that some pods restart 2-3 times before completing the task.</p>
<p>I checked the details with <code>kubectl describe pod</code> and found that the pod exits with <code>exit code 2</code> when it restarts due to some error:</p>
<pre><code>Last State: Terminated
Reason: Error
Exit Code: 2
</code></pre>
<p>I searched for exit code 2 and found that it indicates misuse of a <code>shell builtin command</code>. How can I find which shell builtin is misused? How can I debug the cause of exit code 2?</p>
<p>Thanks in advance.</p>
| anujprashar | <p>An exit code of 2 indicates either that the application chose to return that error code, or (by convention) there was a misuse of a shell built-in. Check your pod’s command specification to ensure that the command is correct. If you think it is correct, try running the image locally with a shell and run the command directly.</p>
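<p>For example, these checks usually narrow it down (pod and image names are placeholders):</p>
<pre><code># Logs of the previous (crashed) container instance:
kubectl logs <pod-name> --previous

# Termination reason / last state details:
kubectl describe pod <pod-name>

# Run the same image locally with a shell and try the command by hand:
docker run -it --rm --entrypoint /bin/sh <your-image>
</code></pre>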
<p>Refer to this <a href="https://www.datree.io/resources/kubernetes-error-codes-crashloopbackoff" rel="nofollow noreferrer">link</a> for more information.</p>
| Fariya Rahmat |
<p>I'm reading helm documentation,</p>
<blockquote>
<p>The templates/ directory is for template files. When Tiller evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. Tiller then collects the results of those templates and sends them on to Kubernetes.</p>
</blockquote>
<p>I have lots of different templates in my templates folder. I'm looking for a way to skip the templates whose names start with "y" or "z" and not send them to Kubernetes. Is there any way I can achieve that? I want to be flexible: let's say if statementProvider is x, skip all manifests starting with y or z and do not send them to Kubernetes.</p>
<p>I wrote this helper function to extract the list of resources that should be deployed in Kubernetes but I don't know how I can use it:</p>
<pre><code>{{- define "statement.resource"}}
{{- $statementProvider := lower ( $.Values.statementProvider ) -}}
{{- $statementFiles := list -}}
{{- range $path, $bytes := .Files.Glob "templates/**" }}
{{- if eq $statementProvider "x" -}}
{{- if not (or (hasPrefix $path "y") (hasPrefix $path "z")) -}}
{{- $statementFiles = append $statementFiles $path -}}
{{- end }}
{{- $statementFiles -}}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
| someone | <p>It can be done with a simple <code>if</code> statement.</p>
<p>Your <code>template</code> file</p>
<pre><code>{{- if .Values.serviceAccount.enabled -}}
...
{{- end }}
</code></pre>
<p>Your <code>values</code> file</p>
<pre><code>serviceAccount:
enabled: true/false
</code></pre>
<p>You can also do the same for nested resources, for example to conditionally add a <code>volume</code> to your <code>deployment</code>.</p>
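<p>Applied to your case, each template file that should only be rendered for certain providers can wrap its whole content in such a guard (a sketch; <code>statementProvider</code> comes from your values, and the ConfigMap below is just a hypothetical resource that would normally live in a "y*" template):</p>
<pre><code>{{- if ne (lower .Values.statementProvider) "x" }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: y-example-config
data:
  key: value
{{- end }}
</code></pre>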
| Michał Lewndowski |
<p>I have a deployment file with replicas set to 1. So when I do 'kubectl get ...' I get 1 record each for deployment, replicaset and pod.</p>
<p>Now I set replicas to 2 in deployment.yaml, apply it and when I run 'kubectl get ..' command, I get 2 records each for deployments, replicaset and pods each.</p>
<p>Shouldn't the previous deployment be overwritten, resulting in a single deployment, and similarly for the replicaset (2 pods are OK since replicas is now set to 2)?</p>
<p>This is deployment file content:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.16
ports:
- containerPort: 80
</code></pre>
| Mandroid | <p><code>...Now I set replicas to 2 in deployment.yaml, apply it and when I run 'kubectl get ..' command, I get 2 records each for deployments, replicaset and pods each.</code></p>
<p>Can you try <code>kubectl get deploy --field-selector metadata.name=nginx-deployment</code>? You should get just one deployment. The number of pods should follow this deployment's <code>replicas</code>.</p>
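<p>A couple of additional checks can show what actually belongs to this Deployment (the label and names come from your manifest):</p>
<pre><code># Everything carrying this Deployment's label in one view:
kubectl get deploy,rs,pods -l app=nginx

# Revision history of the Deployment (a replicas-only change does not create a new revision):
kubectl rollout history deployment/nginx-deployment
</code></pre>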
| gohm'c |
<p>I want to create an Azure Kubernetes Service resource which supports GPU computing. I have a huge amount of data and a docker image which requires Nvidia drivers. When I attempt to create it I get:</p>
<pre><code>Size not available
This size is currently unavailable in eastus for this subscription: NotAvailableForSubscription.
</code></pre>
<p>I get this message for every location I choose. I suppose the problem is that I use Azure Pass Sponsorship. Is there any way to do it on this kind of subscription?</p>
| kkorniszuk | <p>You receive this error when the resource SKU you have selected (such as an AKS node VM size) is not available in the location you have selected for your subscription.</p>
<p>You can check product availability in the selected region via <a href="https://azure.microsoft.com/en-us/global-infrastructure/services/" rel="nofollow noreferrer">Products available by region</a>.</p>
<p>To determine which SKUs are available in a region/zone, use the <a href="https://learn.microsoft.com/en-us/powershell/module/az.compute/get-azcomputeresourcesku" rel="nofollow noreferrer">Get-AzComputeResourceSku</a> command. Filter the results by location. You must have the latest version of PowerShell for this command.</p>
<pre><code>Get-AzComputeResourceSku | where {$_.Locations -icontains "centralus"}
</code></pre>
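<p>If you prefer the Azure CLI, a similar check could look like this (the region and size prefix are just examples):</p>
<pre><code>az vm list-skus --location eastus2 --size Standard_N --output table
</code></pre>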
<p>Refer to this <a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/troubleshooting/error-sku-not-available" rel="nofollow noreferrer">documentation</a> for more information.</p>
<p>Please refer to this <a href="https://learn.microsoft.com/api/Redirect/en-in/documentation/articles/azure-subscription-service-limits/" rel="nofollow noreferrer">document</a> for a list of common Microsoft Azure limits, quotas and constraints for Azure Sponsorship Subscription.</p>
<p>The following monthly usage quotas are applied. If you need more than these limits, please contact <a href="https://azure.microsoft.com/en-in/support/options/" rel="nofollow noreferrer">customer service</a> at any time so that they can understand your needs and adjust these limits appropriately.</p>
<p>Reference: <a href="https://azure.microsoft.com/en-in/offers/ms-azr-0036p/" rel="nofollow noreferrer">Microsoft Azure Sponsorship Offer</a></p>
| RahulKumarShaw |
<p>I have a kubernetes cluster running in AWS EKS, with the cluster-autoscaler [1] installed (using the helm provider for terraform [2]).</p>
<p>The cluster-autoscaler docs list a number of supported startup parameters[3], but it's not clear to me how to set them: can anybody point me in the right direction?</p>
<p>[1] <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler</a></p>
<p>[2] <a href="https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release</a></p>
<p>[3] <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca</a></p>
| Renoa | <p>Since you are deploying <code>cluster-autoscaler</code> with the <code>terraform</code> <code>helm</code> provider, you set additional parameters the same way you would with the <code>helm chart</code> itself.</p>
<p>All parameters are passed <a href="https://github.com/kubernetes/autoscaler/blob/master/charts/cluster-autoscaler/values.yaml#L166" rel="nofollow noreferrer">here</a>.</p>
<p>Some example:</p>
<pre><code>resource "helm_release" "autoscaler" {
name = "cluster-autoscaler"
repository = "https://kubernetes.github.io/autoscaler"
chart = "cluster-autoscaler"
values = [
"${file("values.yaml")}"
]
set {
name = "extraArgs.leader-elect"
value = "true"
}
set {
name = "extraArgs.scale-down-utilization-threshold"
value = "0.8"
}
}
</code></pre>
<p>As a bonus, I would advise moving to <a href="https://karpenter.sh/" rel="nofollow noreferrer">Karpenter</a>, since it is a much better option if you are on <code>AWS</code>.</p>
| Michał Lewndowski |
<p>I'm trying to follow their docs and create this pod monitoring
I apply it and I see nothing in metrics.</p>
<p>What am I doing wrong?</p>
<pre><code>apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
name: monitoring
spec:
selector:
matchLabels:
app: blah
namespaceSelector:
any: true
endpoints:
- port: metrics
interval: 30s
</code></pre>
| deagleshot | <p>As mentioned in the official <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed" rel="nofollow noreferrer">documentation</a>:</p>
<p>The following manifest defines a PodMonitoring resource, prom-example, in the NAMESPACE_NAME namespace. The resource uses a Kubernetes label selector to find all pods in the namespace that have the label app with the value prom-example. The matching pods are scraped on a port named metrics, every 30 seconds, on the /metrics HTTP path.</p>
<pre><code>apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
name: prom-example
spec:
selector:
matchLabels:
app: prom-example
endpoints:
- port: metrics
interval: 30s
</code></pre>
<p>To apply this resource, run the following command:</p>
<pre><code>kubectl -n NAMESPACE_NAME apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.5.0/examples/pod-monitoring.yaml
</code></pre>
<p>Also check the document on <a href="https://cloud.google.com/stackdriver/docs/solutions/gke/observing" rel="nofollow noreferrer">Observing your GKE clusters</a>.</p>
<p><strong>UPDATE:</strong></p>
<p>After applying the manifests, the managed collection will be running but no metrics will be generated. You must deploy a PodMonitoring resource that scrapes a valid metrics endpoint to see any data in the Query UI.</p>
<p>Check the logs by running the below commands:</p>
<pre><code>kubectl logs -f -ngmp-system -lapp.kubernetes.io/part-of=gmp
kubectl logs -f -ngmp-system -lapp.kubernetes.io/name=collector -c prometheus
</code></pre>
<p>If you see any error follow this <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#ingest-problems" rel="nofollow noreferrer">link</a> to troubleshoot.</p>
| Fariya Rahmat |
<p>It's been 60 minutes now and my persistent volume claim is still pending.</p>
<p>My storage class:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Minikube did not supply this one, I had to add it with the yaml above. In the dashboard I can click on it and it references the persistent volume which is green/ok.</p>
<p>My persistent volume (green, ok):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: small-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minikube
</code></pre>
<p>The reason I need persistent storage is that Node-RED stores its data in /data, so that's what I'm trying to do here: provide it with a persistent volume to store data. And since this is running locally using minikube, I can take advantage of the /data folder on the minikube instance, which per the documentation is persistent.</p>
<p>My persistent volume claim for my nodered app.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nodered-claim
spec:
storageClassName: local-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>Whether I add the deployment or not, the persistent volume claim stays yellow/pending in the dashboard. Any reason for that? What am I missing here?</p>
<p><strong>Update:</strong></p>
<p>kubectl describe pvc/nodered-claim:</p>
<pre><code>Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 2m52s (x162 over 42m) persistentvolume-controller waiting for first consumer to be created before binding
</code></pre>
| bobby2947 | <p>Update your StorageClass to immediate:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate # <-- bind as soon as PVC is created
</code></pre>
<p><code>WaitForFirstConsumer</code> will only bind when a Pod uses your PVC.</p>
<p><code>...If I add the deployment or not, the persistent storage claim is still yellow/pending in the dashboard.</code></p>
<p>Your deployment will also enter pending state if the PVC it needs failed to bind.</p>
| gohm'c |
<p>So, I need to reverse engineer how an application works, and it is deployed on a Kubernetes pod. This has been assigned to me after the previous developers just "left".</p>
<p>I need to download some files from that pod. I access it using the Kubernetes Dashboard by clicking the View Logs button of the pod.</p>
<p>How can I copy files from this pod to my local? There is a zip file and I want to inspect and read it contents.</p>
<p>How can I download other files?</p>
<p>kubectl cp didn't work for me.</p>
<p><a href="https://i.stack.imgur.com/NsnGK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NsnGK.jpg" alt="enter image description here" /></a></p>
| 10101010 | <p>As a first step please install <a href="https://kubernetes.io/docs/tasks/tools/#kubectl" rel="nofollow noreferrer">kubectl</a>.</p>
<p>Then make sure that you have access to the <code>k8s</code> cluster where the pod is running. This is done via <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="nofollow noreferrer">kubeconfig</a>. Since you are on <code>AWS</code>, it can be done with the following command:</p>
<pre><code>aws eks update-kubeconfig --region region-code --name my-cluster
</code></pre>
<p>Then you can copy files out of the pod with:</p>
<pre><code>kubectl cp <pod-name>:<fully-qualified-file-name> /<path-to-your-file>/<file-name> -c <container-name>
</code></pre>
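<p>For example, to pull the zip file mentioned above (namespace, pod, container, and path are placeholders):</p>
<pre><code>kubectl cp <namespace>/<pod-name>:/path/to/archive.zip ./archive.zip -c <container-name>

# If kubectl cp fails (it needs tar inside the container), streaming the file often works:
kubectl exec -n <namespace> <pod-name> -c <container-name> -- cat /path/to/archive.zip > archive.zip
</code></pre>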
| Michał Lewndowski |
<p>Below is my yaml file to create a container group with two containers named fluentd and mapp.
For the mapp container I want to pull the image from a private repository. I am not using Azure Container Registry and I don't have any experience with it either.
I want to push the logs to Log Analytics.</p>
<pre><code>apiVersion: 2019-12-01
location: eastus2
name: mycontainergroup003
properties:
containers:
- name: mycontainer003
properties:
environmentVariables: []
image: fluent/fluentd
ports: []
resources:
requests:
cpu: 1.0
memoryInGB: 1.5
- name: mapp-log
properties:
image: reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest
resources:
requests:
cpu: 1
memoryInGb: 1.5
ports:
- port: 80
- port: 8080
      command:
        - /bin/sh
        - -c
        - >
          i=0; while true; do echo "$i: $(date)" >> /var/log/1.log; echo "$(date) INFO $i" >> /var/log/2.log; i=$((i+1)); sleep 1; done
imageRegistryCredentials:
- server: reg-dev.rx.com
username: <username>
password: <password>
osType: Linux
restartPolicy: Always
diagnostics:
logAnalytics:
workspaceId: <id>
workspaceKey: <key>
tags: null
type: Microsoft.ContainerInstance/containerGroups
</code></pre>
<p>I am executing below command to run the yaml:</p>
<pre><code>>az container create -g rg-np-tp-ip01-deployt-docker-test --name mycontainergroup003 --file .\azure-deploy-aci-2.yaml
(InaccessibleImage) The image 'reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest' in container group 'mycontainergroup003' is not accessible. Please check the image and registry credential.
Code: InaccessibleImage
Message: The image 'reg-dev.rx.com/gl/xg/iss/mapp/com.corp.mapp:1.0.0-SNAPSHOT_latest' in container
group 'mycontainergroup003' is not accessible. Please check the image and registry credential.
</code></pre>
<p>How can I make the imageregistry reg-dev.rx.com accessible from Azure. Till now, I used the same imageregistry in every yaml and ran 'kubectl apply' command. But now I am trying to run the yaml via Azure cli.
Can someone please help?</p>
| UnicsSol | <p>The error you are getting usually appears when the login server name, the credentials, or the image reference you are trying to pull is wrong.</p>
<p>I could not test against the private registry you are using, but the same thing can be achieved with Azure Container Registry. I tested it in my environment and it works fine; you can apply the same approach in yours.</p>
<p><strong>You can push your existing image into ACR using the commands below.</strong></p>
<p>Example : you can apply like this below</p>
<p>Step 1 : <strong>login in azure</strong></p>
<pre><code> az login
</code></pre>
<p>Step 2: <strong>Created Container Registry</strong></p>
<pre><code>az acr create -g "<resource group>" -n "TestMyAcr90" --sku Basic --admin-enabled true
</code></pre>
<p>.</p>
<p>Step 3 :<strong>Tag docker image in the following format <code>loginserver/imagename</code></strong></p>
<p><code>docker tag 0e901e68141f testmyacr90.azurecr.io/my_nginx</code><br />
<a href="https://i.stack.imgur.com/CNuwF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CNuwF.png" alt="enter image description here" /></a></p>
<p>Step 4 : <strong>login to ACR</strong>.</p>
<p><code>docker login testmyacr90.azurecr.io</code></p>
<p>Step 5 : <strong>Push docker images into container registry</strong></p>
<pre><code>docker push testmyacr90.azurecr.io/my_nginx
</code></pre>
<p><a href="https://i.stack.imgur.com/m6fo8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m6fo8.png" alt="enter image description here" /></a></p>
<p><strong>YAML FILE</strong></p>
<pre><code>apiVersion: 2019-12-01
location: eastus2
name: mycontainergroup003
properties:
containers:
- name: mycontainer003
properties:
environmentVariables: []
image: fluent/fluentd
ports: []
resources:
requests:
cpu: 1.0
memoryInGB: 1.5
- name: mapp-log
properties:
image: testmyacr90.azurecr.io/my_nginx:latest
resources:
requests:
cpu: 1
memoryInGb: 1.5
ports:
- port: 80
- port: 8080
command:
- /bin/sh
- -c
- >
i=0;
while true;
do
echo "$i: $(date)" >> /var/log/1.log;
echo "$(date) INFO $i" >> /var/log/2.log;
i=$((i+1));
sleep 1;
done
imageRegistryCredentials:
- server: testmyacr90.azurecr.io
username: TestMyAcr90
password: SJ9I6XXXXXXXXXXXZXVSgaH
osType: Linux
restartPolicy: Always
diagnostics:
logAnalytics:
workspaceId: dc742888-fd4d-474c-b23c-b9b69de70e02
workspaceKey: ezG6IXXXXX_XXXXXXXVMsFOosAoR+1zrCDp9ltA==
tags: null
type: Microsoft.ContainerInstance/containerGroups
</code></pre>
<p>You can get the login server name, username, and password of the ACR from here.</p>
<p><a href="https://i.stack.imgur.com/YGqN7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YGqN7.png" alt="enter image description here" /></a></p>
<p><strong>Successfully ran the file and was able to create the container group along with the two containers declared in the file.</strong></p>
<p><a href="https://i.stack.imgur.com/kAn0r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kAn0r.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/57j1A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/57j1A.png" alt="enter image description here" /></a></p>
| RahulKumarShaw |
<p>I have a PostgreSQL Kubernetes Service based on <a href="https://github.com/zalando/patroni" rel="nofollow noreferrer">Patroni</a>/<a href="https://github.com/zalando/spilo" rel="nofollow noreferrer">Spilo</a>.
This Kubernetes service deploys a cluster of three PostgreSQL pods + three Etcd pods.
During maintenance, I had a failure and I wasn't able to restore the old configuration that worked fine before the rolling update.</p>
<p>I searched the documentation and it seems StatefulSets don't support rollbacks the way Deployments do. I found <a href="https://stackoverflow.com/questions/62425011/statefulset-unable-to-rollback-if-the-pods-are-not-in-running-state">this thread</a> that <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#forced-rollback" rel="nofollow noreferrer">references this doc</a>.</p>
<p>To be honest, however, I didn't understand how to proceed.</p>
<p>My cluster has the following pods:</p>
<pre><code>postgres-0
postgres-1
postgres-2
etcd-0
etcd-1
etcd-2
</code></pre>
<p>my rolling update simply needed to upgrade the etcd image from 3.3.20 to 3.5.1. The upgrade started to update etcd-2 and the pod crashed for several reasons. So my intention was to stop the update and revert the etcd-2 to 3.3.20.</p>
<p>How should I proceed in a situation like this? How can liveness and readiness probes help me here?
At the moment, the solution proposed in that thread is a bit confusing to me.</p>
| Salvatore D'angelo | <p>To undo changes that have been made, first check the rollout history: <code>kubectl rollout history sts <name> -n <namespace if not default></code>.</p>
<p>Get more details about a revision <code>kubectl rollout history sts <name> --revision <number> -n <namespace if not default></code>.</p>
<p>Undo the changes <code>kubectl rollout undo sts <name> --to-revision <number> -n <namespace if not default></code></p>
| gohm'c |
<p>My AKS cluster and storage account are in the same Region: East US 2.
I have created secret:
<code>kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=fastorage --from-literal=azurestorageaccountkey='OWd7e9Ug' secret/fa-fileshare-secret created</code></p>
<p>In that storage account I have file share: <code>containershare</code></p>
<p>I have checked in the Configuration of the secret and values are being matched for account name and key (as this is stated in similar questions which did not help me).
I think VNET for storage account and AKS cluster are different, and also Subscription and Resource group are different (if relevant.)</p>
<p>When I try to execute deployment for my app, I am getting:</p>
<pre><code> Mounting arguments: -t cifs -o actimeo=30,mfsymlinks,file_mode=0777,dir_mode=0777,
<masked> //fastorage.file.core.windows.net/containershare
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/#fa-fileshare-secret#containershare#ads-volume#default/globalmount
Output: mount error(13): Permission denied
</code></pre>
<p>In <code>deployment.yaml</code> definition:</p>
<pre><code>........
volumes:
- name: ads-volume
azureFile:
secretName: fa-fileshare-secret
shareName: containershare
readOnly: false
............
</code></pre>
<p>What can be the problem (since different region and wrong credentials are not the issue). I am accessing the cluster through the kubectl from remote windows machine.</p>
| vel | <p>Thank You <a href="https://stackoverflow.com/users/12990185/andreys">AndreyS</a> for confirming you resolve your issue. Here is few more additional details that can help to know cause of your issue.</p>
<p>As Per Microsoft <a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#akssubnetnotallowed" rel="nofollow noreferrer">Document</a> here is the possible cause for this error <strong><code>Mount error(13): Permission denied</code></strong></p>
<blockquote>
<ul>
<li><a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#secretnotusecorrectstorageaccountkey" rel="nofollow noreferrer">Cause 1: Kubernetes secret doesn't reference the correct storage account name or
key</a></li>
<li><a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#akssubnetnotallowed" rel="nofollow noreferrer">Cause 2: AKS's VNET and subnet aren't allowed for the storage account</a></li>
<li><a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#aksnotawareprivateipaddress" rel="nofollow noreferrer">Cause 3: Connectivity is via a private link but nodes and the private endpoint are in different
VNETs</a></li>
</ul>
</blockquote>
<p>For mounting the storage file share with AKS Cluster(Pod) you should deploy both the resource in same resource group and same region and also to make sure to both resource in same VNET if not then you have to allow access to your AKS VNET in Storage is set to Selected networks, check if the VNET and subnet of the AKS cluster are added.</p>
<p><a href="https://i.stack.imgur.com/ubEdj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ubEdj.png" alt="enter image description here" /></a></p>
<p>It may take a few moments for the changes to take effect. After the VNET and subnet are added, check if the pod status changes from ContainerCreating to Running and mounted the File share as well.</p>
| RahulKumarShaw |
<p>I have an image in ECR that I want to use as a container in my Jenkins pipeline. My Kubernetes cluster is a k3s cluster running locally. I am unable to pull the image (I am guessing) because I am not properly passing my AWS creds (stored in a username-with-password secret, homelab-k3s-ecr). Through my searching I cannot find how to set the AWS creds when pulling from ECR. Below is my Jenkinsfile and the error. Any guidance on how to pass the AWS creds to the Kubernetes agent config so that it can authenticate when I attempt to pull?</p>
<pre><code>pipeline {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
metadata:
name: vapi
namespace: jenkins
spec:
containers:
- name: homelab
image: <id>.dkr.ecr.us-east-2.amazonaws.com/homelab:1.0.0
imagePullSecrets:
- name: homelab-k3s-ecr
"""
}
}
stages {
stage('Build') {
steps {
container('homelab') {
sh 'echo "Running build inside the ECR container"'
}
}
}
}
}
</code></pre>
<pre><code>ERROR: Unable to pull Docker image "<id>.dkr.ecr.us-east-2.amazonaws.com/homelab:1.0.0". Check if image tag name is spelled correctly.
</code></pre>
<p>I have tried using the imagePullSecrets value as I found in the docs <a href="https://www.jenkins.io/doc/pipeline/steps/kubernetes/" rel="nofollow noreferrer">https://www.jenkins.io/doc/pipeline/steps/kubernetes/</a>, but I could not get it to work.</p>
| Danny | <p>The main problem is that you cannot use a raw <code>AWS Access Key</code> and <code>Secret Access Key</code> to pull images from ECR. You need to execute the login command <code>aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com</code> at 12h intervals, since the resulting credentials are only valid for that time.</p>
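<p>As a rough sketch, the pull secret referenced in your pod spec can be (re)created from a fresh token like this (the secret name and namespace come from your pipeline; the account id and region are placeholders):</p>
<pre><code>kubectl create secret docker-registry homelab-k3s-ecr \
  --namespace jenkins \
  --docker-server=<account-id>.dkr.ecr.us-east-2.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-2)" \
  --dry-run=client -o yaml | kubectl apply -f -
</code></pre>
<p>Running something like this on a schedule (e.g. from a CronJob) keeps the secret valid despite the 12h token lifetime.</p>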
<p><a href="https://skryvets.com/blog/2021/03/15/kubernetes-pull-image-from-private-ecr-registry/" rel="nofollow noreferrer">Here</a> is a nice article describing how to build a workaround for this and update the <code>ECR</code> credentials automatically.</p>
| Michał Lewndowski |
<p>I am getting the error below after installing KEDA in my k8s cluster and creating some scaled objects...</p>
<p>Whatever command I run, e.g. "kubectl get pods", the response comes with the error message below.</p>
<p>How do I get rid of this error message?</p>
<p>E0125 11:45:32.766448 316 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1</p>
| Senthilraj Chettykumar | <p>This error comes from <code>client-go</code> when there are no resources available in <code>external.metrics.k8s.io/v1beta1</code>; client-go fetches all ServerGroups.
When KEDA is not installed, external.metrics.k8s.io/v1beta1 is not part of the ServerGroups, so it is never queried and there is no issue.</p>
<p>But when KEDA is installed then it creates an ApiService</p>
<pre><code>$ kubectl get apiservice | grep keda-metrics
v1beta1.external.metrics.k8s.io keda/keda-metrics-apiserver True 20m
</code></pre>
<p>But it doesn't create any external.metrics.k8s.io resources</p>
<pre><code>$ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq .
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "external.metrics.k8s.io/v1beta1",
"resources": []
}
</code></pre>
<p>Since there are no resources, client-go throws an error.</p>
<p>The workaround is registering a dummy resource in the empty resource group.</p>
<p>Refer to this <a href="https://github.com/kubeshop/botkube/issues/829" rel="noreferrer">Github</a> link for more detailed information.</p>
| Fariya Rahmat |
<p>We are migrating from <code>Jenkins Master/Slave</code> installation to <code>Kubernetes deployment</code> using <code>Jenkins</code> operator from <a href="https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/" rel="nofollow noreferrer">https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/</a></p>
<p>This is a snippet from our initial <code>jenkinsfile</code></p>
<pre><code> pipeline {
agent {
label ('jenkins-slave')
}
stages {
stage ('Cloud Authentication') {
steps {
script {
withCredentials([file(credentialsId: 'mygcpproject-8e16', variable: 'key')]) {
sh """
cat ${key} > mykeys.json
chmod 400 mykeys.json
"""
}
..........
..........
</code></pre>
<p>Now since we are moving to <code>Jenkins running as kubernetes pods</code> , what should be our value for <code>agent</code> inside the <code>pipeline</code></p>
<pre><code> pipeline {
agent {
label ('jenkins-slave')
</code></pre>
<p>Any suggestions or reference to any documentation will be highly appreciated.</p>
| Zama Ques | <p>When you run <code>Jenkins</code> in <code>k8s</code>, agents can be defined in two ways. One option is directly from <code>yaml</code> in the <code>Jenkinsfile</code> (see the sketch below). The second way is to include a pod template from the repository where you store your <code>Jenkins</code> pipeline configuration file.</p>
<p>Please refer to <a href="https://plugins.jenkins.io/kubernetes/#plugin-content-declarative-pipeline" rel="nofollow noreferrer">this</a> part of the <code>k8s</code> plugin documentation for more detailed instructions.</p>
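<p>A minimal sketch of the first option (the container name and image below are placeholders, not a recommendation) could look like this in a declarative pipeline:</p>
<pre><code>pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: builder
    image: alpine:3.16        # placeholder build image
    command: ['sleep']
    args: ['99d']
"""
        }
    }
    stages {
        stage('Build') {
            steps {
                container('builder') {
                    sh 'echo "running inside the pod-template container"'
                }
            }
        }
    }
}
</code></pre>
<p>The second option works the same way, but with <code>yamlFile 'path/to/pod-template.yaml'</code> pointing at a file in your repository instead of the inline <code>yaml</code> block.</p>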
| Michał Lewndowski |
<p>My goal is to create an environment variable for the pod out of a mounted secret volume. I want to skip the intermediate step of creating a Kubernetes Secret (and referencing that Secret for the env), so nothing is stored in etcd.</p>
<p>I am using the CSI Driver to mount the secrets of my Azure Key Vault. The volume is working correctly.</p>
<p>Deployment.yaml:</p>
<pre><code>...
spec:
volumes:
- name: keyvault-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: kevault-secrets
containers:
- name: busybox
image: k8s.gcr.io/e2e-test-images/busybox:1.29
command:
- /bin/sh
args:
- '-c'
- >-
SECRET1=$(cat /mnt/keyvault-secrets/secret1); export SECRET1;echo
$SECRET1; sleep 1d;
volumeMounts:
- name: keyvault-secrets
readOnly: true
mountPath: /mnt/keyvault-secrets
</code></pre>
<p>On startup the Pod is able to populate the environment variable and even prints its value correctly on the console. If I log into the Pod the environment variable is gone.</p>
<p>Any ideas why the environment variable vanishes?</p>
| Michael Kemmerzell | <p>Environment variables set in a shell session (like the one started by your <code>command</code>/<code>args</code>) are local to that session only. A new session opened with <code>kubectl exec</code> does not inherit them.</p>
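<p>If you need the variable in an interactive session or in the real application process, it has to be exported in that process itself. A rough sketch (paths as in your manifest, <code>my-app</code> is a placeholder):</p>
<pre><code># inside the pod, e.g. after `kubectl exec -it <pod> -- sh`
export SECRET1=$(cat /mnt/keyvault-secrets/secret1)
echo "$SECRET1"

# or as the container entrypoint, so the application inherits it
/bin/sh -c 'export SECRET1=$(cat /mnt/keyvault-secrets/secret1); exec my-app'
</code></pre>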
| gohm'c |
<p>We need to configure a health check for our service, which uses apscheduler to schedule and run jobs. The idea is to check whether apscheduler is running the jobs at the specified time and working as expected.</p>
<p>We tried <code>scheduler.running</code>, but it shows true even when it's not able to pick up the next jobs. Any suggestions here?</p>
| summercostanza | <p>You can use <code>add_listener()</code> and listen only to particular types of events by giving it the appropriate mask argument, OR'ing the different event constants together. The listener callable is called with one argument, the event object.</p>
<p>Example:</p>
<pre><code>from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED

def my_listener(event):
    # event.exception is set when the job raised; otherwise the job ran fine
    if event.exception:
        print('The job crashed :(')
    else:
        print('The job worked :)')

scheduler.add_listener(my_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
</code></pre>
<p>Refer to this <a href="https://apscheduler.readthedocs.io/en/3.x/userguide.html#scheduler-events" rel="nofollow noreferrer">document</a> for more information.</p>
| Fariya Rahmat |
<p>I am getting the error <code>ResourceGroupNotFound</code> while running this command: <code>az aks get-credentials --resource-group AKSResourceGroupName --name AKSClusterName</code></p>
<p>Error message:</p>
<pre><code>az aks get-credentials --resource-group AKSResourceGroupName --name AKSClusterName
(ResourceGroupNotFound) Resource group 'AKSResourceGroupName' could not be found.
Code: ResourceGroupNotFound
Message: Resource group 'AKSResourceGroupName' could not be found.
</code></pre>
| user17784455 | <p>The <code>az aks get-credentials</code> command is used to get access credentials for a managed Kubernetes cluster.</p>
<p>Example:</p>
<pre><code>az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup
</code></pre>
<p><strong>name</strong> and <strong>resource-group</strong> are required parameters: provide the name of your managed cluster and the name of the resource group that actually contains it. The <code>ResourceGroupNotFound</code> error means that no resource group with the given name exists in the subscription you are currently logged into.</p>
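<p>Assuming the placeholders in your command should point at real resources, you can list what actually exists in the subscription you are logged into, for example:</p>
<pre><code>az account show --output table   # confirm the active subscription
az group list --output table     # resource groups in that subscription
az aks list --output table       # AKS clusters and the resource groups they live in
az aks get-credentials --resource-group <your-real-resource-group> --name <your-real-cluster-name>
</code></pre>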
| RKM |
<p>this is my deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.12.2
ports:
- containerPort: 80
</code></pre>
<hr />
<p>these are my pods</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5cc6c7559b-6vk87 0/1 ContainerCreating 0 51m <none> k8s-node2 <none> <none>
nginx-deployment-5cc6c7559b-g7wpz 0/1 ContainerCreating 0 51m <none> k8s-node1 <none> <none>
nginx-deployment-5cc6c7559b-s6k2s 0/1 ContainerCreating 0 51m <none> k8s-node1 <none> <none>
</code></pre>
<hr />
<p>this is my description of a pod</p>
<pre><code>Name: nginx-deployment-5cc6c7559b-6vk87
Namespace: default
Priority: 0
Node: k8s-node2/192.168.74.136
Start Time: Mon, 22 Mar 2021 03:02:36 -0400
Labels: app=nginx
pod-template-hash=5cc6c7559b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/nginx-deployment-5cc6c7559b
Containers:
nginx:
Container ID:
Image: nginx:1.12.2
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s7x98 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-s7x98:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s7x98
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/nginx-deployment-5cc6c7559b-6vk87 to k8s-node2
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "688ae6e1b403f8cf0f56bb41ef6e2341044c949304874400a3f4ced159c40f08" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d8b9f498bf0407ebc5e8e47700af9cec559632f38d12252b1edcde723ce9863f" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2c72ed28e5672a1da32f7941ba0b638eb459048ff9e70aec42bd125a569faf3f" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7dd06af29506a6e4f22c9484b47ca23412a57b61398ee6caa89edec59e2dcfa5" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6c14c33fdbb3bb8e42d7e33c991bc51220dcbfd5acc71115c26f966a759fff29" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dacb90c7ab07cc55c83dba82286e65dd89e30be569e9b5744202c2ae65f54830" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d03004bff912d6f9aaf614e892d2b43c153392e8fcc03e7988c43d4dfb46ebf0" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7eaf53ffba761c30bfa13f2b3cae2ca2957f9fefee47edf6c0b46943bb09d7a3" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 33m kubelet, k8s-node2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5047e9ad878b99b69090cf96e5534dfe10ec46830cdcc7e73a8afc96dc11e98c" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 18m (x859 over 33m) kubelet, k8s-node2 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 3m34s (x1712 over 33m) kubelet, k8s-node2 (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "501ff9b578eac098d6f763a0bc6212423b71714c9d2b1c83ea94b25e7a30e374" network for pod "nginx-deployment-5cc6c7559b-6vk87": networkPlugin cni failed to set up pod "nginx-deployment-5cc6c7559b-6vk87_default" network: open /run/flannel/subnet.env: no such file or directory
</code></pre>
<p>I found a similar question: <a href="https://stackoverflow.com/questions/51169728/failed-create-pod-sandbox-rpc-error-code-unknown-desc-networkplugin-cni-fa">Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod network</a></p>
<p>I have <code>/etc/cni/net.d</code> and <code>/opt/cni/bin</code>:</p>
<pre><code>[root@k8s-master bin]# cd /etc/cni/net.d/
[root@k8s-master net.d]# ll -a
total 4
drwxr-xr-x. 2 root root 33 Feb 25 11:08 .
drwxr-xr-x. 3 root root 19 Feb 25 11:08 ..
-rw-r--r--. 1 root root 292 Feb 25 11:08 10-flannel.conflist
[root@k8s-master net.d]# cd /opt/cni/bin
[root@k8s-master bin]# ll -a
total 56484
drwxr-xr-x. 2 root root 239 Feb 25 10:01 .
drwxr-xr-x. 3 root root 17 Feb 25 10:01 ..
-rwxr-xr-x. 1 root root 3254624 Sep 9 2020 bandwidth
-rwxr-xr-x. 1 root root 3581192 Sep 9 2020 bridge
-rwxr-xr-x. 1 root root 9837552 Sep 9 2020 dhcp
-rwxr-xr-x. 1 root root 4699824 Sep 9 2020 firewall
-rwxr-xr-x. 1 root root 2650368 Sep 9 2020 flannel
-rwxr-xr-x. 1 root root 3274160 Sep 9 2020 host-device
-rwxr-xr-x. 1 root root 2847152 Sep 9 2020 host-local
-rwxr-xr-x. 1 root root 3377272 Sep 9 2020 ipvlan
-rwxr-xr-x. 1 root root 2715600 Sep 9 2020 loopback
-rwxr-xr-x. 1 root root 3440168 Sep 9 2020 macvlan
-rwxr-xr-x. 1 root root 3048528 Sep 9 2020 portmap
-rwxr-xr-x. 1 root root 3528800 Sep 9 2020 ptp
-rwxr-xr-x. 1 root root 2849328 Sep 9 2020 sbr
-rwxr-xr-x. 1 root root 2503512 Sep 9 2020 static
-rwxr-xr-x. 1 root root 2820128 Sep 9 2020 tuning
-rwxr-xr-x. 1 root root 3377120 Sep 9 2020 vlan
</code></pre>
<p>I have three nodes named k8s-master, k8s-node1 and k8s-node2, but I haven't added any rules for the nodes.</p>
<p>Something is not right:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-m7bjr 0/1 CrashLoopBackOff 171 24d
coredns-7ff77c879f-x4xjf 0/1 Running 170 24d
etcd-k8s-master 1/1 Running 0 24d
kube-apiserver-k8s-master 1/1 Running 8 24d
kube-controller-manager-k8s-master 1/1 Running 2 24d
kube-proxy-6wxcp 1/1 Running 1 24d
kube-proxy-cmhn6 1/1 Running 0 24d
kube-proxy-pzhqc 1/1 Running 0 24d
kube-scheduler-k8s-master 1/1 Running 2 24d
</code></pre>
<p>My network plugin (flannel) isn't working; maybe that is causing this problem.</p>
| 暴躁铁蛋 | <p>Just execute one command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
<p>That resolved the issue.</p>
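<p>A quick way to verify (adjust the namespace if your flannel version installs into <code>kube-flannel</code> instead of <code>kube-system</code>) is to check that the flannel pods are running and that the file the error complained about now exists on the nodes:</p>
<pre><code>kubectl get pods --all-namespaces -o wide | grep flannel
# on each node:
cat /run/flannel/subnet.env
</code></pre>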
| 暴躁铁蛋 |
<p>I have created a Kubernetes ingress with a frontend config and an ECDSA P-384 TLS cert on Google Cloud Platform; a few seconds into the creation process I received the following error:</p>
<blockquote>
<p>Error syncing to GCP: error running load balancer syncing routine:
loadbalancer <em><strong><strong><strong>-default-</strong></strong></strong>-ingress-</em>****** does not exist:
Cert creation failures -
k8s2-cr-<em><strong><strong>-</strong></strong></em><em><strong><strong><strong><strong><strong><strong>-</strong></strong></strong></strong></strong></strong></em>***** Error:googleapi:
Error 400: The ECDSA curve is not supported.,
sslCertificateUnsupportedCurve</p>
</blockquote>
<p>Why is the ECDSA curve not supported? Is there any way to enable this support?</p>
<p>Create tls-secret command:</p>
<pre><code>kubectl create secret tls tls --key [key-path] --cert [cert-path]
</code></pre>
<p>Frontend-config:</p>
<pre><code>apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: frontend-config
spec:
redirectToHttps:
enabled: true
responseCodeName: MOVED_PERMANENTLY_DEFAULT
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
namespace: default
labels:
kind: ingress
annotations:
networking.gke.io/v1beta1.FrontendConfig: frontend-config
spec:
tls:
- hosts:
- '*.mydomain.com'
secretName: tls
rules:
- host: mydomain.com
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: spa-ingress-service
port:
number: 80
- host: api.mydomain.com
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: api-ingress-service
port:
number: 80
</code></pre>
<p>spa services:</p>
<pre><code># SERVICE LOAD BALANCER
apiVersion: v1
kind: Service
metadata:
name: spa-service
labels:
app/name: spa
spec:
type: LoadBalancer
selector:
app/template: spa
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
# SERVICE NODE PORT - FOR INGRESS
apiVersion: v1
kind: Service
metadata:
name: spa-ingress-service
labels:
app/name: ingress.spa
spec:
type: NodePort
selector:
app/template: spa
ports:
- name: https
protocol: TCP
port: 80
targetPort: http
</code></pre>
<p>api services:</p>
<pre><code># SERVICE LOAD BALANCER
apiVersion: v1
kind: Service
metadata:
name: api-service
labels:
app/name: api
spec:
type: LoadBalancer
selector:
app/template: api
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
# SERVICE NODE PORT - FOR INGRESS
apiVersion: v1
kind: Service
metadata:
name: api-ingress-service
labels:
app/name: ingress.api
spec:
type: NodePort
selector:
app/template: api
ports:
- name: https
protocol: TCP
port: 80
targetPort: http
</code></pre>
<p>kubectl describe ingress response:</p>
<p><img src="https://i.stack.imgur.com/Ywsie.jpg" alt="describe" /></p>
| Mikolaj | <p>The GCP <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#private-key" rel="nofollow noreferrer">load balancer</a> supports <strong>RSA-2048 or ECDSA P-256</strong> certificates. DownstreamTlsContexts also support multiple TLS certificates, which may be a mix of RSA and <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/security/ssl#certificate-selection" rel="nofollow noreferrer">P-256 ECDSA</a> certificates.</p>
<p>The error above is caused by the P-384 certificate currently in use, which is not supported; switch to a P-256 (or RSA-2048) certificate instead.</p>
<p>For additional information refer to the <a href="https://cloud.google.com/load-balancing/docs/l7-internal" rel="nofollow noreferrer">Load Balancing Overview</a>.</p>
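<p>For example, a self-signed P-256 certificate for testing could be generated and loaded into the same secret like this (for production you would instead request a new certificate from your CA using a P-256 or RSA-2048 key):</p>
<pre><code>openssl ecparam -name prime256v1 -genkey -noout -out tls.key    # prime256v1 is P-256
openssl req -new -x509 -key tls.key -out tls.crt -days 365 -subj "/CN=*.mydomain.com"
kubectl delete secret tls
kubectl create secret tls tls --key tls.key --cert tls.crt
</code></pre>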
| Bakul Mitra |
<p>I have deployed a mongodb pod on kubernetes and I am able to connect and insert data into the database. But when the pod gets restarted, all my data is lost. I have also created a persistentvolume for mongodb, but the data is still being lost. Below are my yamls. Can someone please explain what I am doing wrong here?</p>
<p>Statefulset.yaml</p>
<pre><code>---
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
name: "mongo-development"
namespace: "development"
spec:
serviceName: "mongo-development"
replicas: 1
selector:
matchLabels:
app: "mongo-development"
template:
metadata:
labels:
app: "mongo-development"
spec:
containers:
-
name: "mongo-development"
image: "mongo"
imagePullPolicy: "Always"
env:
-
name: "MONGO_INITDB_ROOT_USERNAME"
value: "xxxx"
-
name: "MONGO_INITDB_ROOT_PASSWORD"
value: "xxxx"
ports:
-
containerPort: 27017
name: "mongodb"
volumeMounts:
-
name: "mongodb-persistent-storage"
mountPath: "/var/lib/mongodb"
volumes:
-
name: "mongodb-persistent-storage"
persistentVolumeClaim:
claimName: "mongodb-pvc-development"
</code></pre>
<p>pvc.yaml</p>
<pre><code>---
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
name: "mongodb-pvc-development"
namespace: "development"
labels:
app: "mongo-development"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: gp2
</code></pre>
<p>pv.yaml</p>
<pre><code>---
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "mongo-pv-development"
namespace: "development"
labels:
type: local
app: "mongo-development"
spec:
storageClassName: "local-storage"
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
claimRef:
namespace: "development"
name: "mongodb-pvc-development"
nfs:
path: /mnt
server: xxxx
</code></pre>
<p>service.yaml</p>
<pre><code>---
apiVersion: "v1"
kind: "Service"
metadata:
name: "mongo-development"
namespace: "development"
labels:
app: "mongo-development"
spec:
ports:
-
name: "mongodb"
port: 27017
targetPort: 27017
clusterIP: "None"
selector:
app: "mongo-development"
</code></pre>
| SVD | <p>The issue got resolved: the <code>mountPath</code> I had defined was incorrect; it should be <code>/data/db</code> in the statefulset.yaml file.</p>
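<p>For reference, the relevant part of the StatefulSet then looks like this:</p>
<pre><code>        volumeMounts:
          -
            name: "mongodb-persistent-storage"
            mountPath: "/data/db"    # mongod stores its data here, not in /var/lib/mongodb
</code></pre>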
| SVD |
<p>My team and I are trying to deploy a BERT NLP model to production using Kubernetes and Kubeflow.</p>
<p>We almost got it. Everything was processed correctly and we got our desired output, as we can see in our log:</p>
<p><a href="https://i.stack.imgur.com/7GWgb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7GWgb.jpg" alt="enter image description here" /></a></p>
<p><strong>But the process simply doesn't finish</strong> and the last line in our DAG keeps showing the 'loading' icon, as you can see here:</p>
<p><a href="https://i.stack.imgur.com/Gv4S7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gv4S7.jpg" alt="enter image description here" /></a></p>
<p>Eventually, checking on Kubernetes Engine, we noticed an 'attention' icon on our job with the description <strong>'containers with unready status: [main]'</strong>.</p>
<p><a href="https://i.stack.imgur.com/MWMWT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MWMWT.jpg" alt="enter image description here" /></a></p>
<p>We can't figure out what might be happening, and we've been trying for a whole week. I won't list everything we tried, to keep this post from getting too verbose.</p>
<p>If you want to check more about our deployment code:</p>
<pre><code>import kfp
from kfp.components import OutputPath, InputPath
perform_query_op = kfp.components.create_component_from_func(
perform_dataset_query,
output_component_file='components/clf_perform_query.component.yaml',
packages_to_install=["pandas", "pyarrow", "gcsfs", "google-cloud-bigquery"])
treat_dataset_op = kfp.components.create_component_from_func(
treatment_data,
output_component_file="components/clf_treat_dataset.component.yaml",
packages_to_install=["pandas", "pyarrow", "gcsfs", "gensim"])
balance_categories_clf_op = kfp.components.create_component_from_func(
balance_categories_clf,
output_component_file="components/clf_balance_categories.component.yaml",
packages_to_install=["pandas", "pyarrow", "gcsfs"])
create_train_test_split_op = kfp.components.create_component_from_func(
create_train_test_split,
output_component_file="components/clf_create_train_test_split.component.yaml",
packages_to_install=["pandas", "pyarrow", "gcsfs", "sklearn"])
train_model_op = kfp.components.create_component_from_func(
train_transformers_model,
output_component_file="components/clf_train_transformers_model.component.yaml",
packages_to_install=["pandas", "pyarrow", "gcsfs", "sklearn", "torch", "torchvision", "tqdm", "transformers", "google-cloud-storage"],
base_image='pytorch/pytorch:1.9.0-cuda10.2-cudnn7-runtime')
@kfp.dsl.pipeline()
def train_classifier_pipeline():
# Step 1: get data
queried = perform_query_op()
# Step 2: data treatment
treated = treat_dataset_op(queried.output)
# Step 3: Filtering and balancing categories
balanced = balance_categories_clf_op(treated.output)
# Step 4: Split datasets
splits_A = create_train_test_split_op(balanced.outputs["brand_A"], "brand_A")
splits_B = create_train_test_split_op(balanced.outputs["brand_B"], "brand_B")
splits_C = create_train_test_split_op(balanced.outputs["brand_C"], "brand_C")
splits_D = create_train_test_split_op(balanced.outputs["brand_D"], "brand_D")
# Step 5: Train transformers model
    train_model_op(splits_A.outputs["train"], splits_A.outputs["test"], "brand_A")\
.set_memory_request('20G')\
.set_memory_limit('23G')\
.set_gpu_limit('1').add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-t4')
    train_model_op(splits_B.outputs["train"], splits_B.outputs["test"], "brand_B")\
.set_memory_request('20G')\
.set_memory_limit('23G')\
.set_gpu_limit('1').add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-t4')
    train_model_op(splits_C.outputs["train"], splits_C.outputs["test"], "brand_C")\
.set_memory_request('20G')\
.set_memory_limit('23G')\
.set_gpu_limit('1').add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-t4')
    train_model_op(splits_D.outputs["train"], splits_D.outputs["test"], "brand_D")\
.set_memory_request('20G')\
.set_memory_limit('23G')\
.set_gpu_limit('1').add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-t4')
if __name__ == '__main__':
from kfp.compiler.compiler import Compiler
Compiler().compile(train_classifier_pipeline, "components/train_classifier.pipeline.yaml")
# Execute single run
kfp_client = kfp.Client(host='my_path') # PRD
kfp_client.create_run_from_pipeline_func(train_classifier_pipeline, arguments={})
</code></pre>
<p>I will be grateful for any insights!</p>
| Guilherme Giuliano Nicolau | <p>Without the logs it will be hard to determine the exact root cause. Given the piece of code where you indicate using T4 GPU nodes, the cause can be one of those described below:</p>
<ol>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus?_ga=2.132041740.-2000871821.1632153301&_gac=1.190167385.1634247741.Cj0KCQjwqp-LBhDQARIsAO0a6aKfHj4LVumCjt_9pHZu4NIpjcDC9pznenO9oixNxro6jG_Ou7BcAc4aAsHrEALw_wcB#availability" rel="nofollow noreferrer">Availability</a></li>
</ol>
<p>There are not enough resources in your zone, which is common when using GPUs. GPUs are available only in specific regions and zones. When you request GPU quota, consider the regions in which you intend to run your clusters.</p>
<p>For a complete list of applicable regions and zones, refer to GPUs on Compute Engine.
You can also see GPUs available in your zone using the gcloud command-line tool. To see a list of all GPU accelerator types supported in each zone, run the following command:</p>
<p><code>gcloud compute accelerator-types list</code></p>
<ol start="2">
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus?_ga=2.132041740.-2000871821.1632153301&_gac=1.190167385.1634247741.Cj0KCQjwqp-LBhDQARIsAO0a6aKfHj4LVumCjt_9pHZu4NIpjcDC9pznenO9oixNxro6jG_Ou7BcAc4aAsHrEALw_wcB#gpu_quota" rel="nofollow noreferrer">GPU quota</a></li>
</ol>
<p>Your project quota is not enough to satisfy your request (not enough resources allocated for your project to provision the requested GPU instances). Your GPU quota is the total number of GPUs that can run in your Google Cloud project. To create clusters with GPUs, your project must have sufficient GPU quota.</p>
<p>Your GPU quota should be at least equivalent to the total number of GPUs you intend to run in your cluster. If you enable cluster autoscaling, you should request GPU quota at least equivalent to your cluster's maximum number of nodes multiplied by the number of GPUs per node.</p>
<p>For example, if you create a cluster with three nodes that runs two GPUs per node, your project requires at least six GPU quota.</p>
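<p>To check how much GPU quota the project currently has in a region, you could inspect the region description and look at the NVIDIA_* quota metrics (the region name below is only an example):</p>
<pre><code>gcloud compute regions describe us-central1 | grep -i -B1 -A1 nvidia
</code></pre>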
<ol start="3">
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus?_ga=2.132041740.-2000871821.1632153301&_gac=1.190167385.1634247741.Cj0KCQjwqp-LBhDQARIsAO0a6aKfHj4LVumCjt_9pHZu4NIpjcDC9pznenO9oixNxro6jG_Ou7BcAc4aAsHrEALw_wcB#limitations" rel="nofollow noreferrer">Limitations</a></li>
</ol>
<p>Before using GPUs on GKE, keep in mind the following limitations:</p>
<ul>
<li>You cannot add GPUs to existing node pools.</li>
<li>GPU nodes cannot be live migrated during maintenance events.</li>
<li>GPUs are only supported with general-purpose N1 machine types.</li>
<li>GPUs are not supported in Windows Server node pools.</li>
</ul>
<p>You can find the full list of aspects to take into account when using GPUs on a Kubernetes cluster in this guide: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus?_ga=2.132041740.-2000871821.1632153301&_gac=1.190167385.1634247741.Cj0KCQjwqp-LBhDQARIsAO0a6aKfHj4LVumCjt_9pHZu4NIpjcDC9pznenO9oixNxro6jG_Ou7BcAc4aAsHrEALw_wcB" rel="nofollow noreferrer">Running GPUs</a>, which includes several other details beyond the ones described above.</p>
| Jaime López |
<p>I'm trying to understand the concept of a client-side load balancer architecture (i.e. Eureka + Spring Cloud API gateway + Ribbon).
Assuming we have a Kubernetes environment with a native discovery service and load balancer, why would we use a client-side LB and Eureka?
Do you have any idea or use case in mind, or is this approach perhaps useful in a different environment?</p>
<p><a href="https://i.stack.imgur.com/juYJs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/juYJs.png" alt="Architecture diagram" /></a></p>
| brick | <p>It depends on your use-case. There can be situations where you need to directly use Eureka server registry and Eureka client discovery offered by Spring Cloud Netflix. Ribbon is the client side load balancer provided by Spring Cloud Netflix.
In my experience, it is possible to use Eureka in any environment, whether that is your local data centre or cloud infrastructure. However, when it comes to the deployment environment, there are many alternative ways to achieve the service registry mechanism, and sometimes those alternatives are the better solutions. I will give you an example below…</p>
<p>If you host your application in your local server (Local data centre)</p>
<p>Now in this scenario you can use Eureka and continue your server registry and discovery mechanism. (That is not the only way. I mentioned Eureka for this scenario because it would be a good use case for it)</p>
<p>If you host your application in AWS infrastructure</p>
<p>The AWS environment gives you lots of benefits and services, so you can drop the burden of implementing and maintaining Eureka. You can achieve the same behaviour with AWS load balancers and target groups, and even more by adding AWS auto scaling groups. Within AWS there are many other ways to achieve this as well.
Long story short: for your scenario you can keep using the power of Kubernetes, unless you have a specific reason to use Eureka and are willing to put in the effort to implement it. Select what suits you best depending on time, effort, maintainability, performance, etc.
Hope this helps you get an idea. Happy coding!</p>
<p>Spring micro service K8s:</p>
<p>This sample application tests Spring service discovery with spring-cloud-kubernetes in a K8s environment. For other environments you can use Eureka (which is actually the default).</p>
<p>For more information refer to this <a href="https://github.com/dhananjay12/spring-microservices-using-spring-kubernetes" rel="nofollow noreferrer">document</a>.</p>
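<p>A nice side effect is that the <code>DiscoveryClient</code> abstraction stays the same whether the registry behind it is Eureka or Kubernetes; a minimal sketch (the service name is a placeholder) could look like:</p>
<pre><code>import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableDiscoveryClient
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    // Lists instances of a service; works unchanged whether Eureka or
    // spring-cloud-kubernetes provides the registry behind DiscoveryClient.
    @Bean
    CommandLineRunner listInstances(DiscoveryClient discoveryClient) {
        return args -> discoveryClient.getInstances("whoami-service")
                .forEach(instance -> System.out.println(instance.getUri()));
    }
}
</code></pre>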
| Ramesh kollisetty |
<p>Running microk8s v1.23.3 on Ubuntu 20.04.4 LTS. I have set up a minimal pod+service:</p>
<pre><code>kubectl create deployment whoami --image=containous/whoami --namespace=default
</code></pre>
<p>This works as expected, curl <code>10.1.76.4:80</code> gives the proper reply from <code>whoami</code>.
I have a service configured, see content of <code>service-whoami.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: whoami
namespace: default
spec:
selector:
app: whoami
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>This also works as expected, the pod can be reached through the clusterIP on <code>curl 10.152.183.220:80</code>.
Now I want to expose the service using the <code>ingress-whoami.yaml</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: whoami-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
defaultBackend:
service:
name: whoami
port:
number: 80
rules:
- http:
paths:
- path: /whoami
pathType: Prefix
backend:
service:
name: whoami
port:
number: 80
</code></pre>
<p>ingress addon is enabled.</p>
<pre><code>microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
</code></pre>
<p>ingress seems to point to the correct pod and port. <code>kubectl describe ingress</code> gives</p>
<pre><code>Name: whoami-ingress
Labels: <none>
Namespace: default
Address:
Default backend: whoami:80 (10.1.76.12:80)
Rules:
Host Path Backends
---- ---- --------
*
/whoami whoami:80 (10.1.76.12:80)
Annotations: <none>
Events: <none>
</code></pre>
<p>Trying to reach the pod from outside with <code>curl 127.0.0.1/whoami</code> gives a <code>404</code>:</p>
<pre><code><html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
</code></pre>
<p>Where did I go wrong? This setup worked a few weeks ago.</p>
| petwri | <p>Ok, figured it out.
I had forgotten to specify the <code>ingress.class</code> in the annotations-block.
I updated <code>ingress-whoami.yaml</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: whoami-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: public
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /whoami
pathType: Prefix
backend:
service:
name: whoami
port:
number: 80
</code></pre>
<p>Now everything is working.</p>
| petwri |
<p>Say I have a kubernetes Cron Job that runs every day at 10am but it needs to provision new nodes in order for it to run. Will k8 wait until 10am to start provisioning those resources (therefore actually running some time after 10am)? Or will it have everything ready to go at 10am? Are there settings to control this?</p>
| owise1 | <p><code>Will k8 wait until 10am to start provisioning those resources...</code></p>
<p>No. K8s does not provision a computer node and make the computer join the cluster.</p>
<p><code>...will it have everything ready to go at 10am?</code></p>
<p>The job will be created at the scheduled time and pod will be spawn. At that time if your cluster runs out of node to run the pod, the pod will enter pending state. At this point if you have configured <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">autoscaler</a> in your cluster, autoscaler will request for a new node to join your cluster so that the pending pod can run.</p>
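<p>In practice this means the Job object appears right at 10am, but its pod sits in Pending for the few minutes the autoscaler needs to add a node, so the work itself starts slightly after 10am. A minimal hedged sketch of such a CronJob (name and image are placeholders) looks like:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-job                  # placeholder name
spec:
  schedule: "0 10 * * *"
  startingDeadlineSeconds: 600     # only matters if the Job object itself cannot be created on time
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: worker
            image: busybox:1.35    # placeholder image
            command: ["sh", "-c", "date; echo running"]
</code></pre>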
| gohm'c |
<p>I'm creating a pipeline to deploy some application in kubernetes.</p>
<p>I've been given the authentication credentials as a yaml file similar to the following:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tL******0tLS0t
server: https://api.whatever.com
name: gs-name-clientcert
contexts:
- context:
cluster: gs-name-clientcert
user: gs-name-clientcert-user
name: gs-name-clientcert
current-context: gs-name-clientcert
kind: Config
preferences: {}
users:
- name: gs-name-clientcert-user
user:
client-certificate-data: LS************RS0tLS0t
client-key-data: LS0tL***********tLQ==
</code></pre>
<p>How can I tell kubectl to use that config file when I use the apply command?
Thanks.</p>
| DeejonZ | <p><strong>kubeconfig file path:</strong></p>
<p><code>kubectl config</code> modifies kubeconfig files using subcommands like <code>kubectl config set current-context my-context</code>.
The loading order follows these rules:</p>
<ol>
<li><p>If the <code>--kubeconfig</code> flag is set, then only that file is loaded. The flag may only be set once and no merging takes place (see the example after this list).</p>
</li>
<li><p>If the $KUBECONFIG environment variable is set, then it uses a list of paths (normal path delimiting rules for your system). These paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, it creates the last file in the list.</p>
</li>
<li><p>Otherwise, ${HOME}/.kube/config is used and no merging takes place.</p>
<p>kubectl config SUBCOMMAND</p>
<p>Options</p>
<p>--kubeconfig="": use a particular kubeconfig file</p>
</li>
</ol>
<p>For more information refer to the command <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_config/" rel="nofollow noreferrer">kubectl config</a> and also follow the path <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">config file</a>.</p>
<p><strong>Also check the indentation of your config file:</strong></p>
<ul>
<li><p>Do not use TAB characters for indentation (or anywhere else); only use SPACE characters.</p>
</li>
<li><p>To spot indentation errors, use a monospaced font to view and edit the YAML.</p>
</li>
</ul>
<p>For more information about indentation refer to <a href="https://www.tutorialspoint.com/yaml/yaml_indentation_and_separation.htm#:%7E:text=YAML%20in%20detail.-,Indentation%20of%20YAML,-YAML%20does%20not" rel="nofollow noreferrer">Indentation to YAML</a></p>
| Mayur Kamble |
<p>I'm trying to deploy some .yaml files to Kubernetes with an Ansible playbook, but I get an error:</p>
<pre><code>TASK [/cur/develop/inno/777/name.k8s/roles/deploy_k8s_dashboard : Apply the Kubernetes dashboard] **************************************************************************************************************************************************************************************
Monday 17 October 2022 13:52:07 +0200 (0:00:00.836) 0:00:01.410 ********
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named kubernetes.dynamic.resource
fatal: [cibd1]: FAILED! => changed=false
error: No module named kubernetes.dynamic.resource
msg: Failed to import the required Python library (kubernetes) on bvm's Python /usr/bin/python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named kubernetes.dynamic.resource
fatal: [cibd1]: FAILED! => changed=false
error: No module named kubernetes.dynamic.resource
msg: Failed to import the required Python library (kubernetes) on bvm's Python /usr/bin/python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
</code></pre>
<p>Could you please advise how I can fix it?</p>
| Oleg | <p>I've had a similar problem. Locally I had to execute the following:</p>
<pre><code>ansible-galaxy collection install kubernetes.core
</code></pre>
<p>On the target server, make sure that you have Python 3 installed, as Python 2 won't be enough. Once that is installed, I also had to define the following in my vars:</p>
<pre><code>ansible_python_interpreter: /bin/python3
</code></pre>
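<p>If the error persists after that, the target host's Python 3 usually also needs the <code>kubernetes</code> package itself, which is the library the traceback says it cannot import:</p>
<pre><code>pip3 install kubernetes
</code></pre>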
| workinginitisnotstressfulatall |
<p>I'd like to do all k8s installation, configuration, and maintenance using Helm v3 (v3.7.2).</p>
<p>Thus I have set up YAML templates for:</p>
<ul>
<li>deployment</li>
<li>configmap</li>
<li>service</li>
<li>ingress</li>
</ul>
<p>Yet I can't find any information in the Helm v3 docs on setting up an HPA (<em>HorizontalPodAutoscaler</em>). Can this be done using an hpa.yaml that pulls from values.yaml?</p>
| paiego | <p>Yes. For example, <code>helm create nginx</code> will create a template project called "nginx", and inside the "nginx" directory you will find a <code>templates/hpa.yaml</code> example. Inside <code>values.yaml</code>, the <code>autoscaling</code> block is what controls the HPA resources:</p>
<pre><code>autoscaling:
enabled: false # <-- change to true to create HPA
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
</code></pre>
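<p>The generated <code>templates/hpa.yaml</code> pulls those values in roughly the following way (trimmed to a sketch here; the real generated file also templates labels and the memory metric, and the <code>apiVersion</code> may be <code>autoscaling/v2beta1</code> or <code>v2beta2</code> depending on the chart and cluster version):</p>
<pre><code>{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "nginx.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "nginx.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
</code></pre>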
| gohm'c |