Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>Assume there is a backend application with a private key stored in a .env file.</p>
<p>For the project file structure:</p>
<pre><code>|-App files
|-Dockerfile
|-.env
</code></pre>
<p>If I run the Docker image locally, the application can be reached normally by using a valid public key during the API request. However, if I deploy the container into an AKS cluster using the same Docker image, the application fails.</p>
<p>I am wondering how containers in an AKS cluster handle the .env file. What should I do to solve this problem?</p>
| Yuk Chan | <p>Moving this out of comments for better visibility.</p>
<hr />
<p>First and most important: Docker is not the same as Kubernetes, and what works on Docker won't necessarily work directly on Kubernetes. Docker is a container runtime, while Kubernetes is a container orchestration tool which sits on top of a container runtime (not always Docker these days; <code>containerd</code> is used as well).</p>
<p>There are many resources on the internet which describe the key differences, for example this one from the <a href="https://azure.microsoft.com/en-us/topic/kubernetes-vs-docker/" rel="nofollow noreferrer">Microsoft docs</a>.</p>
<hr />
<p>First <code>configmaps</code> and <code>secrets</code> should be created:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap" rel="nofollow noreferrer">Creating and managing configmaps</a> and <a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret" rel="nofollow noreferrer">creating and managing secrets</a></p>
<p>There are different types of <a href="https://kubernetes.io/docs/concepts/configuration/secret/#secret-types" rel="nofollow noreferrer">secrets</a> which can be created.</p>
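<p>For the <code>.env</code> file from the question, a minimal sketch of turning it into a secret (the secret name <code>app-env</code> and the namespace are just examples):</p>
<pre><code># creates one secret key per line of the .env file
kubectl create secret generic app-env --from-env-file=.env -n <your-namespace>
</code></pre>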
<hr />
<ol>
<li>Use configmaps/secrets as environment variables.</li>
</ol>
<p>Referencing <code>configMaps</code> and <code>secrets</code> as <code>environment variables</code> looks like this (configmaps and secrets are referenced with the same structure):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
    - ...
      env:
        - name: ADMIN_PASS
          valueFrom:
            secretKeyRef: # here secretref is used for sensitive data
              key: admin
              name: admin-password
        - name: MYSQL_DB_STRING
          valueFrom:
            configMapKeyRef: # this is not sensitive data so can be used configmap
              key: db_config
              name: connection_string
  ...
</code></pre>
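<p>As a side note, if the goal is to load every key from such a <code>.env</code>-style secret at once, a short sketch (the secret name <code>app-env</code> and the image are only examples) using <code>envFrom</code> instead of listing each variable:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
    - name: app
      image: my-app:latest # example image
      envFrom:
        - secretRef:
            name: app-env # every key in the secret becomes an environment variable
</code></pre>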
<ol start="2">
<li>Use configmaps/secrets as volumes (they will be presented as files).</li>
</ol>
<p>Below is an example of using secrets as files mounted in a specific directory:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
    - ...
      volumeMounts:
        - name: secrets-files
          mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
          subPath: secret.file1
  volumes:
    - name: secrets-files
      secret:
        secretName: my-secret # name of the Secret
</code></pre>
<p>There's a <a href="https://medium.com/avmconsulting-blog/secrets-management-in-kubernetes-378cbf8171d0" rel="nofollow noreferrer">good article</a> which explains and shows use cases of secrets as well as their limitations, e.g. the size is limited to 1MiB.</p>
| moonkotte |
<p>I have microservices running within EKS 1.22.
Is there a way to check the microservice communication, as well as pod-to-pod communication, within the cluster through the CLI?</p>
| dev | <p>Everything out of the box should work properly (assuming you have <a href="https://aws.amazon.com/eks/" rel="nofollow noreferrer">AWS EKS</a>).</p>
<p>I think this article - <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Debug Services</a> has very helpful hints.</p>
<p>Let's first check pod-to-pod communication using a trivial method - the <code>ping</code> command. I created two NGINX deployments (one in the default namespace, the second one in the namespace <code>test</code>):</p>
<pre><code>kubectl create deployment nginx --image=nginx
kubectl create deployment nginx --image=nginx -n test
</code></pre>
<p>Now I will check IP addresses of both of them:</p>
<pre><code>user@shell:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP
nginx-6799fc88d8-jxpj4 1/1 Running 0 3m13s 172.17.0.2
</code></pre>
<p>And also in <code>test</code> namespace:</p>
<pre><code>user@shell:~$ kubectl get pods -o wide -n test
NAME READY STATUS RESTARTS AGE IP
nginx-6799fc88d8-z2glk 1/1 Running 0 103s 172.17.0.4
</code></pre>
<p>Now I will <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">execute into one pod</a> and check connectivity to the second one:</p>
<pre><code>user@shell:~$ kubectl exec -it nginx-6799fc88d8-jxpj4 -- sh
# apt update
Hit:1 http://security.debian.org/debian-security bullseye-security InRelease
...
All packages are up to date.
# apt-get install inetutils-ping
Reading package lists... Done
...
Setting up inetutils-ping (2:2.0-1) ...
# ping 172.17.0.4
PING 172.17.0.4 (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: icmp_seq=0 ttl=64 time=0.058 ms
64 bytes from 172.17.0.4: icmp_seq=1 ttl=64 time=0.123 ms
</code></pre>
<p>Okay, so the pod-to-pod connection is working. Keep in mind that container images are minimal, so you may need to install <code>ping</code> as I did in my example. Also, depending on your application you can use different methods for checking connectivity - I could use the <code>curl</code> command as well and would get the standard NGINX home page.</p>
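<p>If you prefer not to install tools inside application containers, a throwaway debug pod works too (a sketch; the <code>busybox</code> image is just one option):</p>
<pre><code>kubectl run tmp-debug --rm -it --image=busybox --restart=Never -- sh
# inside the pod, test HTTP connectivity to the other pod's IP
wget -qO- http://172.17.0.4
</code></pre>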
<p>Now it's time to create <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> (I am assuming this is what you mean by microservice) and test connectivity. A Service is an abstract mechanism for exposing pods on a network. So we can test connectivity either by getting the list of endpoints - the IP addresses of the pods associated with the service - with <code>kubectl get endpoints my-service</code> and then checking the pod-to-pod connection as in the previous example, or we can just <code>curl</code> the service IP address/hostname. For hostnames across namespaces it's a little bit different! Check below:</p>
<p>Let's create deployments with 3 replicas:</p>
<pre><code>kubectl create deployment nginx --image=nginx --replicas=3
kubectl create deployment nginx --image=nginx --replicas=3 -n test
</code></pre>
<p>For each deployment we will <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/#creating-a-service-for-an-application-running-in-five-pods" rel="nofollow noreferrer">create service using <code>kubectl expose</code></a>:</p>
<pre><code>kubectl expose deployment nginx --name=my-service --port=80
kubectl expose deployment nginx --name=my-service-test --port=80 -n test
</code></pre>
<p>Time to get IP addresses of the services:</p>
<pre><code>user@shell:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 64d
my-service ClusterIP 10.107.224.54 <none> 80/TCP 12m
</code></pre>
<p>And in <code>test</code> namespace:</p>
<pre><code>user@shell:~$ kubectl get svc -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service-test ClusterIP 10.110.51.62 <none> 80/TCP 8s
</code></pre>
<p>I will exec into pod in default namespace and <code>curl</code> IP address of the <code>my-service-test</code> in second namespace:</p>
<pre><code>user@shell:~$ kubectl exec -it nginx-6799fc88d8-w5q8s -- sh
# curl 10.110.51.62
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
</code></pre>
<p>Okay, it's working... Let's try with hostname:</p>
<pre><code># curl my-service-test
curl: (6) Could not resolve host: my-service-test
</code></pre>
<p>Not working... why? Let's check <code>/etc/resolv.conf</code> file:</p>
<pre><code># cat resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p>It's looking for <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-work-by-dns-name" rel="nofollow noreferrer">hostnames only in the namespace where the pod is located</a>.</p>
<p>So a pod in the <code>test</code> namespace will have something like:</p>
<pre><code># cat resolv.conf
nameserver 10.96.0.10
search test.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p>Let's try to curl <code>my-service-test.test.svc.cluster.local</code> from pod in default namespace:</p>
<pre><code># curl my-service-test.test.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</code></pre>
<p>It's working.</p>
<p>To sum up:</p>
<ul>
<li>communication between pods in the cluster should work properly across all namespaces (assuming that you have a proper <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">CNI plugin</a> installed; on AWS EKS you do)</li>
<li>pod containers are in most cases Linux containers, so just use <a href="https://geekflare.com/linux-test-network-connectivity/" rel="nofollow noreferrer">Linux tools</a> to check connectivity (like <code>ping</code> or <code>curl</code>)</li>
<li>Services are an abstract mechanism for exposing pods on a network; you can connect to them for example using the <code>curl</code> command</li>
<li>IP addresses are cluster-wide, hostnames are namespace-wide - if you want to connect to a resource from another namespace you need to use its fully qualified name, as shown in the pattern below (this applies to services, pods, ...)</li>
</ul>
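<p>For reference, the full service DNS pattern (a sketch, assuming the default <code>cluster.local</code> cluster domain):</p>
<pre><code># <service-name>.<namespace>.svc.cluster.local
curl my-service-test.test.svc.cluster.local
</code></pre>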
<p>Also check these articles:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer">Access Services Running on Clusters | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Debug Services | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods | Kubernetes</a></li>
</ul>
| Mikolaj S. |
<p>In Kubernetes, I have the following section in deployment.yaml. I am using ConfigMap and I want to set the path dynamically based on the pod metadata or label or env variable in pod. Does ConfigMap support setting path dynamically?</p>
<pre><code>spec:
  volumes:
    - name: configmap
      configMap:
        name: devconfig
        items:
          - key: config
            path: $(ENVIRONMENT)
        defaultMode: 420
</code></pre>
| developthou | <p>This is called substitution, which kubectl does not support out of the box. However, you can easily achieve what you want by using the <a href="https://stackoverflow.com/questions/14155596/how-to-substitute-shell-variables-in-complex-text-files">envsubst</a> command, which will substitute <code>$ENVIRONMENT</code> in your YAML with the environment variable set in your current shell.</p>
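<p>A minimal sketch of that workflow (assuming <code>ENVIRONMENT</code> is exported in the current shell and the manifest is named <code>deployment.yaml</code>; note that <code>envsubst</code> replaces the <code>$ENVIRONMENT</code> / <code>${ENVIRONMENT}</code> forms, not <code>$(ENVIRONMENT)</code>):</p>
<pre><code>export ENVIRONMENT=dev
# render the manifest with the variable substituted, then apply it
envsubst < deployment.yaml | kubectl apply -f -
</code></pre>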
| gohm'c |
<p>I am using <code>nginx ingress controller</code> below is the ingress rule file for 2 services:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
namespace: kube-system
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- rewrite.bar.com.com
secretName: ingress-tls
rules:
- host: rewrite.bar.com.com
- http:
paths:
- path: /my-service-1/(.*)
pathType: Prefix
backend:
service:
name: my-service-1
port:
number: 36995
- path: /my-service-2/(.*)
pathType: Prefix
backend:
service:
name: my-service-2
port:
number: 32243
</code></pre>
<p>Now, using the command below from a shell in service-2, I can curl the service-1 API endpoint; here I need to pass the host ('wire.com'), which is TLS enabled as well:</p>
<pre><code>curl --resolve wire.com:443:10.22.148.179 https://wire.com:32243/GetData
</code></pre>
<p>The above curl using the host address gives me a response successfully - no issue here!</p>
<p>Now I am using the IP address of the pod instead of the host address, but this doesn't give me a response; it always gives an error like <code>curl: (52) Empty reply from server</code>. Here <code>10.22.148.179</code> is my ingress public IP address and <code>10.2.0.58</code> is my pod IP address.</p>
<pre><code>curl --resolve enabledservices-dev-aks.honeywell.com:443:10.22.148.179 http//10.2.0.58:32243/GetData
</code></pre>
<p>My goal is to hit the pod/service API endpoint through the IP address - is this possible in the context of the integrated Ingress?</p>
| user584018 | <p>Moving this from comments to answer.</p>
<hr />
<p>The issue was that the curl request used the HTTP protocol while the server is serving HTTPS. This is the reason for the <code>(52) Empty reply from server</code> error.</p>
<p>The curl request should be done by specifying the protocol, like:</p>
<pre><code>curl https://test.example.com:8888
</code></pre>
<hr />
<p><code>Ingress</code> is used as a single entry point to the cluster, so all internal services can be exposed inside the cluster using the <code>cluster-ip</code> service type - see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">kubernetes service types</a>.</p>
<p>If any internal service/pod needs to be tested, the request should be executed from inside the cluster to be able to hit a <code>cluster-ip</code>, since a <code>cluster-ip</code> is only accessible within the cluster.</p>
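<p>For example, one way to run such a request from inside the cluster is a throwaway curl pod (a sketch; the image and the target IP/port are placeholders):</p>
<pre><code>kubectl run curl-test --rm -i --image=curlimages/curl --restart=Never -- \
  curl -k https://<cluster-ip-or-pod-ip>:32243/GetData
</code></pre>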
| moonkotte |
<p>I have microk8s v1.22.2 running on Ubuntu 20.04.3 LTS.</p>
<p>Output from <code>/etc/hosts</code>:</p>
<pre><code>127.0.0.1 localhost
127.0.1.1 main
</code></pre>
<p>Excerpt from <code>microk8s status</code>:</p>
<pre><code>addons:
enabled:
dashboard # The Kubernetes dashboard
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metrics-server # K8s Metrics Server for API access to service metrics
</code></pre>
<p>I checked for the running dashboard (<code>kubectl get all --all-namespaces</code>):</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-node-2jltr 1/1 Running 0 23m
kube-system pod/calico-kube-controllers-f744bf684-d77hv 1/1 Running 0 23m
kube-system pod/metrics-server-85df567dd8-jd6gj 1/1 Running 0 22m
kube-system pod/kubernetes-dashboard-59699458b-pb5jb 1/1 Running 0 21m
kube-system pod/dashboard-metrics-scraper-58d4977855-94nsp 1/1 Running 0 21m
ingress pod/nginx-ingress-microk8s-controller-qf5pm 1/1 Running 0 21m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 23m
kube-system service/metrics-server ClusterIP 10.152.183.81 <none> 443/TCP 22m
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.103 <none> 443/TCP 22m
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.197 <none> 8000/TCP 22m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 23m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 22m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 23m
kube-system deployment.apps/metrics-server 1/1 1 1 22m
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 22m
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 22m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 0 0 0 23m
kube-system replicaset.apps/calico-kube-controllers-f744bf684 1 1 1 23m
kube-system replicaset.apps/metrics-server-85df567dd8 1 1 1 22m
kube-system replicaset.apps/kubernetes-dashboard-59699458b 1 1 1 21m
kube-system replicaset.apps/dashboard-metrics-scraper-58d4977855 1 1 1 21m
</code></pre>
<p>I want to expose the microk8s dashboard within my local network to access it through <code>http://main/dashboard/</code></p>
<p>To do so, I did the following <code>nano ingress.yaml</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
    - host: main
      http:
        paths:
          - backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
            path: /
</code></pre>
<p>Enabling the ingress-config through <code>kubectl apply -f ingress.yaml</code> gave the following error:</p>
<pre><code>error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
</code></pre>
<p>Help would be much appreciated, thanks!</p>
<p><strong>Update:</strong>
@harsh-manvar pointed out a mismatch in the config version. I have rewritten ingress.yaml to a very stripped down version:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
spec:
  rules:
    - http:
        paths:
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
</code></pre>
<p>Applying this works. Also, the ingress rule gets created.</p>
<pre><code>NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
kube-system dashboard public * 127.0.0.1 80 11m
</code></pre>
<p>However, when I access the dashboard through <code>http://<ip-of-kubernetes-master>/dashboard</code>, I get a <code>400</code> error.</p>
<p>Log from the ingress controller:</p>
<pre><code>192.168.0.123 - - [10/Oct/2021:21:38:47 +0000] "GET /dashboard HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" 466 0.002 [kube-system-kubernetes-dashboard-443] [] 10.1.76.3:8443 48 0.000 400 ca0946230759edfbaaf9d94f3d5c959a
</code></pre>
<p>Does the dashboard also need to be exposed using the <code>microk8s proxy</code>? I thought the ingress controller would take care of this, or did I misunderstand this?</p>
| petwri | <p>To fix the error <code>error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1</code> you need to set <code>apiVersion</code> to the <code> networking.k8s.io/v1</code>. From the <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="noreferrer">Kubernetes v1.16 article about deprecated APIs</a>:</p>
<blockquote>
<ul>
<li>NetworkPolicy in the <strong>extensions/v1beta1</strong> API version is no longer served
- Migrate to use the <strong>networking.k8s.io/v1</strong> API version, available since v1.8. Existing persisted data can be retrieved/updated via the new version.</li>
</ul>
</blockquote>
<p>Now moving to the second issue. You need to add a few annotations and make a few changes in your Ingress definition to make the dashboard properly exposed on the microk8s cluster:</p>
<ul>
<li>add <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer"><code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code> annotation</a></li>
<li>add <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="noreferrer"><code>nginx.ingress.kubernetes.io/configuration-snippet: | rewrite ^(/dashboard)$ $1/ redirect;</code> annotation</a></li>
<li>change <code>path: /dashboard</code> to <code>path: /dashboard(/|$)(.*)</code></li>
</ul>
<p>We need them to properly forward the request to the backend pods - <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-dashboard-custom-path/" rel="noreferrer">good explanation in this article</a>:</p>
<blockquote>
<p><strong>Note:</strong> The "nginx.ingress.kubernetes.io/rewrite-target" annotation rewrites the URL before forwarding the request to the backend pods. In <strong>/dashboard(/|$)(.*)</strong> for <strong>path</strong>, <strong>(.*)</strong> stores the dynamic URL that's generated while accessing the Kubernetes Dashboard. The "nginx.ingress.kubernetes.io/rewrite-target" annotation replaces the captured data in the URL before forwarding the request to the <strong>kubernetes-dashboard</strong> service. The "nginx.ingress.kubernetes.io/configuration-snippet" annotation rewrites the URL to add a trailing slash ("/") only if <strong>ALB-URL/dashboard</strong> is accessed.</p>
</blockquote>
<p>Also we need another two changes:</p>
<ul>
<li>add <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="noreferrer"><code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"</code> annotation</a> to tell NGINX Ingress to communicate with Dashboard service using HTTPs</li>
<li>add <code>kubernetes.io/ingress.class: public</code> annotation <a href="https://stackoverflow.com/a/67041204/16391991">to use NGINX Ingress created by microk8s <code>ingress</code> plugin</a></li>
</ul>
<p>After implementing everything above, the final YAML file looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite ^(/dashboard)$ $1/ redirect;
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
kubernetes.io/ingress.class: public
name: dashboard
namespace: kube-system
spec:
rules:
- http:
paths:
- path: /dashboard(/|$)(.*)
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 443
</code></pre>
<p>It should work fine. No need to run <code>microk8s proxy</code> command.</p>
| Mikolaj S. |
<p>I am creating a POD file with multiple containers. One is a webserver container and another is my PostgreSQL container. Here is my pod file named <code>simple.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-01-01T16:28:15Z"
labels:
app: eltask
name: eltask
spec:
containers:
- name: el_web
command:
- ./entrypoints/entrypoint.sh
env:
- name: PATH
value: /usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: RUBY_MAJOR
value: "2.7"
- name: BUNDLE_SILENCE_ROOT_WARNING
value: "1"
- name: BUNDLE_APP_CONFIG
value: /usr/local/bundle
- name: LANG
value: C.UTF-8
- name: RUBY_VERSION
value: 2.7.2
- name: RUBY_DOWNLOAD_SHA256
value: 1b95ab193cc8f5b5e59d2686cb3d5dcf1ddf2a86cb6950e0b4bdaae5040ec0d6
- name: GEM_HOME
value: /usr/local/bundle
image: docker.io/hmtanbir/elearniotask
ports:
- containerPort: 3000
hostPort: 3000
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- CAP_MKNOD
- CAP_NET_RAW
- CAP_AUDIT_WRITE
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
tty: true
workingDir: /app
- name: el_db
image: docker.io/library/postgres:10-alpine3.13
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: PG_MAJOR
value: "10"
- name: PG_VERSION
value: "10.17"
- name: PGDATA
value: /var/lib/postgresql/data
- name: LANG
value: en_US.utf8
- name: PG_SHA256
value: 5af28071606c9cd82212c19ba584657a9d240e1c4c2da28fc1f3998a2754b26c
- name: POSTGRES_PASSWORD
value: password
args:
- postgres
command:
- docker-entrypoint.sh
ports:
- containerPort: 5432
hostPort: 9876
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- CAP_MKNOD
- CAP_NET_RAW
- CAP_AUDIT_WRITE
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
tty: true
workingDir: /
dnsConfig: {}
restartPolicy: Never
status: {}
</code></pre>
<p>I am running the webserver container on port <code>3000:3000</code> and the DB container port is <code>9876:5432</code>.
But when I run the command with Podman, <code>podman play kube simple.yaml</code>, the DB container is running on <code>127.0.0.1:9876</code> but the webserver can't connect to the DB server.</p>
<p><a href="https://i.stack.imgur.com/q4U9y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q4U9y.png" alt="enter image description here" /></a></p>
<p>My webserver DB config:</p>
<pre><code>ELTASK_DATABASE_HOST=localhost
ELTASK_DATABASE_PORT=9876
ELTASK_DATABASE_USERNAME=postgres
ELTASK_DATABASE_PASSWORD=password
</code></pre>
<p>If I run the webserver without Podman, the server can connect to the DB using port <code>9876</code>.</p>
<p>So why can't the webserver connect to the database container while it is running through Podman?</p>
| HM Tanbir | <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>There are two parts of the answer:</p>
<ol>
<li><strong>Which port to use and why</strong></li>
</ol>
<p>For <code>postgresql</code> as a backend service, you can omit the <code>hostPort</code>, since the frontend service will access <code>postgresql</code> using the <code>Cluster-IP</code> and therefore it will be available on port <code>5432</code>. Completely deleting this part is not a recommended approach - when you have a lot of pods with containers inside, it's much better to be able to quickly see which container is exposed on which port.</p>
<p>Also, in general <code>hostPort</code> shouldn't be used unless it's the only way; consider using <code>NodePort</code> instead:</p>
<blockquote>
<p>Don't specify a hostPort for a Pod unless it is absolutely necessary.
When you bind a Pod to a hostPort, it limits the number of places the
Pod can be scheduled, because each <hostIP, hostPort, protocol>
combination must be unique.</p>
</blockquote>
<p>See <a href="https://kubernetes.io/docs/concepts/configuration/overview/#services" rel="nofollow noreferrer">best practices</a>.</p>
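<p>As a practical note - containers in the same pod share a network namespace, so with the layout from the question the webserver can reach PostgreSQL directly over <code>localhost</code> on the container port. A sketch of the adjusted DB config (assuming the rest stays the same):</p>
<pre><code>ELTASK_DATABASE_HOST=localhost
ELTASK_DATABASE_PORT=5432   # the container port, not the hostPort 9876
ELTASK_DATABASE_USERNAME=postgres
ELTASK_DATABASE_PASSWORD=password
</code></pre>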
<ol start="2">
<li><strong>Frontend and backend deployment</strong></li>
</ol>
<p>It's always advisable and a best practice to separate the backend and frontend into different <code>deployments</code>, so they can be managed fully separately (upgrades, replicas, etc.). The same goes for <code>services</code> - you don't need to expose the backend service outside the cluster.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">a frontend and backend example</a>.</p>
<p>Also, @David Maze correctly said that the database should use a <code>StatefulSet</code> - see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">more details about StatefulSets</a>.</p>
| moonkotte |
<p>I have a deployment on Minikube for a .NET Core 5 API that is not returning a response when I try to invoke it from Postman. When I run a GET from Postman to the exposed port (32580) and endpoint <code>http://localhost:32580/api/platforms/</code> I get:</p>
<pre><code>Error: getaddrinfo ENOTFOUND
</code></pre>
<p>Oddly enough, I was previously getting a <code>Connection refused</code> (before I restarted my Docker Desktop). The container works perfectly when I use Docker, but once I deployed it to the Kubernetes context it no longer works.</p>
<p>I am unsure how exactly I can debug the container and get more meaningful error detail.</p>
<p>I have tried the following:</p>
<ol>
<li>Checking status of Deployment (platforms-depl)</li>
</ol>
<hr />
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
hello-minikube 1/1 1 1 134d
ping-google 0/1 1 0 2d2h
platforms-depl 1/1 1 1 115m
</code></pre>
<ol start="2">
<li>Checking status of Pod (platforms-depl-84d7f5bdc6-sgxcp)</li>
</ol>
<hr />
<pre><code>NAME READY STATUS RESTARTS AGE
hello-minikube-6ddfcc9757-6mfmf 1/1 Running 21 134d
ping-google-5f84d66fcc-kbb7j 0/1 ImagePullBackOff 151 2d2h
platforms-depl-84d7f5bdc6-sgxcp 1/1 Running 1 115m
</code></pre>
<ol start="3">
<li>Running <code>kubectl describe pod platforms-depl-84d7f5bdc6-sgxcp</code> gives below output (truncated):</li>
</ol>
<hr />
<pre><code>Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/platforms-depl-84d7f5bdc6
Containers:
platformservice:
Container ID: docker://a73ce2dc737206e502df94066247299a6dcb6a038087d0f42ffc6e3b9dd194dd
Image: golide/platformservice:latest
Image ID: docker-pullable://golide/platformservice@sha256:bbed5f1d7238d2c466a6782333f8512d2e464f94aa64d8670214646a81b616c7
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 28 Sep 2021 15:12:22 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rl5kf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
</code></pre>
<ol start="4">
<li><p>When I run <code>docker ps</code> I cannot see the container and it also doesn't appear in the list of running containers in VS Code Docker/Containers extension.</p>
</li>
<li><p><code>kubectl get services</code> gives me the following:</p>
</li>
</ol>
<hr />
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube NodePort 10.99.23.44 <none> 8080:31378/TCP 134d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 134d
paymentsapi NodePort 10.111.243.3 <none> 5000:30385/TCP 108d
platformnpservice-srv NodePort 10.98.131.95 <none> 80:32580/TCP 2d2h
</code></pre>
<p>Then tried pinging the ClusterIP:</p>
<pre><code>Pinging 10.98.131.95 with 32 bytes of data:
Request timed out.
Request timed out.
Ping statistics for 10.98.131.95:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
</code></pre>
<p>What am I missing?</p>
<p>I read in suggestions that I have to exec into the pod to get meaningful output, but I'm not sure of the exact commands to run. I tried:</p>
<pre><code>kubectl exec POD -p platforms-depl-84d7f5bdc6-sgxcp
</code></pre>
<p>only to get error:</p>
<pre><code>kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "POD" not found
</code></pre>
<p>My environment is Docker Linux containers with WSL2 on Windows 10.</p>
<p>What am I missing?</p>
| Golide | <p>The first thing worth noting is that minikube generally has a lot of possible <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">drivers to choose from</a> - in my case I found the <code>docker</code> driver to be the easiest to use.</p>
<p>My setup is:</p>
<ul>
<li>WSL2</li>
<li>Docker Desktop with <a href="https://docs.docker.com/desktop/windows/wsl/" rel="nofollow noreferrer">Docker Desktop WLS 2 backend enabled</a></li>
</ul>
<p>I used the following command to start minikube: <code>minikube start --driver=docker</code>.
If you are using another driver, I suggest moving to the <code>docker</code> one.</p>
<p>Answering your question:</p>
<p><em>What am I missing ?</em></p>
<p>By setting the nodePort service type you are exposing your deployment / replica set using the node IP address, which is not accessible from the Windows host (when using the <code>docker</code> driver). That's because all Kubernetes cluster resources are set up inside a Docker container, which is isolated.</p>
<p>However, minikube offers a simple solution to make a specified nodePort service available to your Windows host. <a href="https://minikube.sigs.k8s.io/docs/commands/service/" rel="nofollow noreferrer">Just run the <code>minikube service</code> command which will create a tunnel</a>. Let's check it.</p>
<p>You set up the <code>platformnpservice-srv</code> service, so you need to use this name in the <code>minikube service</code> command instead of <code>testmini</code>, which I used:</p>
<pre><code>minikube service --url testmini
🏃 Starting tunnel for service testmini.
|-----------|----------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------|-------------|------------------------|
| default | testmini | | http://127.0.0.1:33849 |
|-----------|----------|-------------|------------------------|
http://127.0.0.1:33849
❗ Because you are using a Docker driver on linux, the terminal needs to be open to run it.
</code></pre>
<p>Note the last sentence - we need to keep this terminal window open, otherwise the tunnel won't be established.
Now, on my Windows host, in the browser I'm opening the <code>http://127.0.0.1:33849/api/platforms</code> website. The output is the following:</p>
<pre><code>[{"id":1,"name":"Dot Net","publisher":"Microsoft","cost":"Free"},{"id":2,"name":"Linux","publisher":"Ubuntu","cost":"Free"},{"id":3,"name":"Kubernetes","publisher":"CNCF","cost":"Free"},{"id":4,"name":"SQL Express","publisher":"Microsoft","cost":"Free"}]
</code></pre>
<p>Voila! Seems that everything is working properly.</p>
<p>Also, other notes:</p>
<blockquote>
<p>tried pinging the ClusterIP</p>
</blockquote>
<p>The ClusterIP is an <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">internal address which is only accessible from within the cluster</a>, so you can't ping it from the Windows host or from WSL2.</p>
<blockquote>
<p><em>kubectl exec POD -p platforms-depl-84d7f5bdc6-sgxcp</em></p>
</blockquote>
<p>As the output suggests, you need to <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer">specify the command you want to exec on the pod</a>. If you just want to get a shell, use the following:</p>
<pre><code>kubectl exec -it platforms-depl-84d7f5bdc6-sgxcp -- sh
</code></pre>
| Mikolaj S. |
<p>I have an Access Point created on AWS EFS and now I need to share it across multiple Persistent Volumes in Kubernetes, which would eventually be used by multiple namespaces.</p>
<p>Is there a way I can do this, or would I need to create a separate volume with a size allocation under the same mount point?</p>
| Pasan Chamikara | <p><code>...share it across multiple Persistent Volumes in Kubernetes which would eventually be used by multiple namespaces</code></p>
<p>First, <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html#efs-install-driver" rel="nofollow noreferrer">install the EFS CSI driver</a>.</p>
<p>Then create the StorageClass and PersistentVolume representing the EFS volume and access point you have created:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <name>
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name>
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <name> # <-- match this to the StorageClass name
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <fs-handle-id>::<access-point-id>
</code></pre>
<p>In each of the namespaces where you wish to mount the access point, create a PersistentVolumeClaim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <name>
  namespace: <namespace>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <name> # <-- match this to the StorageClass name
  resources:
    requests:
      storage: 1Gi # <-- match this to the PersistentVolume
</code></pre>
<p>As usual, you specify the volume in your spec to use it:</p>
<pre><code>...
volumes:
  - name: <name>
    persistentVolumeClaim:
      claimName: <name>
</code></pre>
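<p>And, assuming the container should see the files under <code>/data</code> (the mount path and image are just examples), the matching <code>volumeMounts</code> entry in the container spec:</p>
<pre><code>containers:
  - name: app
    image: my-app:latest # example image
    volumeMounts:
      - name: <name> # <-- match this to the volume name above
        mountPath: /data
</code></pre>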
| gohm'c |
<p>In Kubernetes <code>CustomResourceDefinitions</code> (CRDs), we can specify <code>additionalPrinterColumns</code>, which (for example) are used for <code>kubectl get</code> with a CRD. The value for a column is usually extracted from the status of a CRD using a <code>jsonPath</code>. From the <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#type" rel="nofollow noreferrer">Kubernetes docs</a>, we can also see that timestamps are rendered in a user friendly way (e.g., <em>5m</em> or <em>2h</em>, representing the duration from this timestamp to now):</p>
<pre class="lang-yaml prettyprint-override"><code>additionalPrinterColumns:
- name: Duration
type: date
jsonPath: .status.completitionTimestamp
</code></pre>
<p>The Kubernetes <em>Job</em> resource is an example of a resource which does not only show since when it exists, but also <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">for how long it was running</a>:</p>
<pre><code>NAME COMPLETIONS DURATION AGE
hello-4111706356 0/1 0s
hello-4111706356 0/1 0s 0s
hello-4111706356 1/1 5s 5s
</code></pre>
<p>I'm looking to build something similar for my CRD, that is: showing the duration between two timestamps in the same way. More specifically, I would like to get the duration between two status fields such as <code>.status.startTimestamp</code> and <code>.status.completitionTimestamp</code> evaluated and formatted by Kubernetes.</p>
<p>Since exactly the same thing is done for the <em>Job</em> resource, I'm wondering whether this is somehow possible or whether it is special behavior built into <code>kubectl</code>?</p>
| Sören Henning | <p>I will answer your question partially, so you have some understanding and ideas of the what/how/where.</p>
<hr />
<p><strong>kubectl get</strong></p>
<p>When <code>kubectl get jobs</code> is executed, <code>kubernetes API server</code> decides which fields to provide in response:</p>
<blockquote>
<p>The <code>kubectl</code> tool relies on server-side output formatting. Your
cluster's API server decides which columns are shown by the <code>kubectl get</code> command</p>
</blockquote>
<p>See <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#additional-printer-columns" rel="nofollow noreferrer">here</a>.</p>
<p>The <code>Duration</code> field for <code>jobs</code> is also calculated on the server's side. This happens because <code>job</code> is a well-known resource for the kubernetes API server, and how to print the response is built into the code. See <a href="https://github.com/kubernetes/kubernetes/blob/d1a5513cb044ad805007cbea6327bdfb1cc73aab/pkg/printers/internalversion/printers.go#L1031-L1046" rel="nofollow noreferrer">JobDuration - printer</a>.</p>
<p>This also can be checked by running regular command:</p>
<pre><code>kubectl get job job-name --v=8
</code></pre>
<p>And then using <a href="https://kubernetes.io/docs/reference/kubectl/overview/#server-side-columns" rel="nofollow noreferrer"><code>server-print</code></a> flag set to <code>false</code> (default is <code>true</code> for human-readable reasons):</p>
<pre><code>kubectl get job job-name --v=8 --server-print=false
</code></pre>
<p>With the last command only general information will be returned, and only <code>name</code> and <code>age</code> will be shown in the output.</p>
<hr />
<p><strong>What can be done</strong></p>
<p>Let's start with <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-controllers" rel="nofollow noreferrer">CRDs and controllers</a>:</p>
<blockquote>
<p>On their own, custom resources let you store and retrieve structured
data. When you combine a custom resource with a custom controller,
custom resources provide a true declarative API.</p>
<p>The Kubernetes declarative API enforces a separation of
responsibilities. You declare the desired state of your resource. The
Kubernetes controller keeps the current state of Kubernetes objects in
sync with your declared desired state. This is in contrast to an
imperative API, where you instruct a server what to do.</p>
</blockquote>
<p>Moving forward to <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates" rel="nofollow noreferrer"><code>feature gates</code></a>. We're interested in <code>CustomResourceSubresources</code>:</p>
<blockquote>
<p>Enable <code>/status</code> and <code>/scale</code> subresources on resources created from
<code>CustomResourceDefinition</code>.</p>
</blockquote>
<p>This <code>feature gate</code> is enabled by default starting from kubernetes <code>1.16</code>.</p>
<p>Therefore a custom field like <code>duration-execution</code> could be created within the CRD <code>subresource</code>'s status, and a custom controller could update the value of the given field whenever it changes, using the <code>watch update function</code>.</p>
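<p>For example, a minimal sketch, assuming the controller writes an already formatted duration string into a field such as <code>.status.duration</code> (the field name is only an example):</p>
<pre><code>additionalPrinterColumns:
  - name: Duration
    type: string
    jsonPath: .status.duration
</code></pre>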
<p><strong>Part 2</strong></p>
<p>There's <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#controlling-pruning" rel="nofollow noreferrer">field pruning</a> that should be taken into consideration:</p>
<blockquote>
<p>By default, all unspecified fields for a custom resource, across all
versions, are pruned. It is possible though to opt-out of that for
specific sub-trees of fields by adding
<code>x-kubernetes-preserve-unknown-fields: true</code> in the <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema" rel="nofollow noreferrer">structural OpenAPI
v3 validation schema</a>.</p>
</blockquote>
<p>Here's a very similar <a href="https://stackoverflow.com/a/66766843/15537201">answer</a> about custom field and <code>additionalPrinterColumns</code>.</p>
| moonkotte |
<p>I am using a manually created Kubernetes Cluster (using <code>kubeadm</code>) deployed on AWS ec2 instances (manually created ec2 instances). I want to use AWS EBS volumes for Kubernetes persistent volume. How can I use AWS EBS volumes within Kubernetes cluster for persistent volumes?</p>
<p>Cluster details:</p>
<ul>
<li><code>kubectl</code> version: 1.19</li>
<li><code>kubeadm</code> version: 1.19</li>
</ul>
| Suresh | <p>Posting a community wiki answer for better visibility with a general solution, as there are no further details/logs provided. Feel free to expand it.</p>
<hr />
<p>The officially supported way to mount <a href="https://aws.amazon.com/ebs/" rel="nofollow noreferrer">Amazon Elastic Block Store</a> as a <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Kubernetes volume</a> on a self-managed Kubernetes cluster running on AWS is to use the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore" rel="nofollow noreferrer"><code>awsElasticBlockStore</code> volume type</a>.</p>
<p>To manage the lifecycle of Amazon EBS volumes on a self-managed Kubernetes cluster running on AWS, install the <a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver" rel="nofollow noreferrer">Amazon Elastic Block Store Container Storage Interface Driver</a>.</p>
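<p>Once the driver is installed, a minimal sketch of a <code>StorageClass</code> backed by it (the name is arbitrary) could look like:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
</code></pre>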
| Mikolaj S. |
<p>I create a cluster:</p>
<pre><code>eksctl create cluster \
--version 1.21 \
--region eu-central-1 \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 3 \
--name cluster
</code></pre>
<p>After that, I install Grafana/Prometheus.</p>
<p><strong>a3a4b626096bc4cf5836786f7d1b2ae2-1272374646.eu-central-1.elb.amazonaws.com</strong>
When I ping this link, the IP is different every time. How can I make it static? I mean an Elastic IP.
I tried this solution but it is still not working:
<a href="https://blog.tooljet.com/aws-nat-eks/" rel="nofollow noreferrer">https://blog.tooljet.com/aws-nat-eks/</a></p>
| no name | <p><code>...how can I make it static? I mean an Elastic IP</code></p>
<p>You can use the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#other-elb-annotations" rel="nofollow noreferrer">annotation</a> <code>service.beta.kubernetes.io/aws-load-balancer-eip-allocations: <AllocationId></code> to associate EIPs with the provisioned NLB. Note: one EIP per availability zone.</p>
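<p>A minimal sketch of a <code>LoadBalancer</code> service using that annotation (the allocation ID, selector and ports are placeholders; one EIP allocation is needed per subnet/availability zone used by the NLB):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxxxxxxxxxxxxxxx
spec:
  type: LoadBalancer
  selector:
    app: grafana # example selector
  ports:
    - port: 80
      targetPort: 3000
</code></pre>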
| gohm'c |
<p>I have installed ingress controller via helm as a daemonset. I have configured the ingress as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webapp-ingress
namespace: rcc
annotations:
haproxy.org/check: 'true'
haproxy.org/check-http: /serviceCheck
haproxy.org/check-interval: 5s
haproxy.org/cookie-persistence: SERVERID
haproxy.org/forwarded-for: 'true'
haproxy.org/load-balance: leastconn
kubernetes.io/ingress.class: haproxy
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: webapp-frontend
port:
number: 8080
</code></pre>
<pre><code>kubectl get ingress -n rcc
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
webapp-ingress <none> example.com 10.110.186.170 80 11h
</code></pre>
<p>The type chosen was LoadBalancer.
I can ping the IP address of the ingress from any node, and can also curl it on port 80 just fine. I can also browse any of the ingress pods' IP addresses from the node just fine. But when I browse the node IP on port 80 I get connection refused. Is there anything I am missing here?</p>
| zozo6015 | <p>I installed the latest <code>haproxy ingress</code>, which is version <code>0.13.4</code>, using Helm.</p>
<p>By default it's installed with <code>LoadBalancer</code> service type:</p>
<pre><code>$ kubectl get svc -n ingress-haproxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-ingress LoadBalancer 10.102.166.149 <pending> 80:30312/TCP,443:32524/TCP 3m45s
</code></pre>
<p>Since I have the same <code>kubeadm</code> cluster, <code>EXTERNAL-IP</code> will be pending. And as you correctly mentioned in question, <code>CLUSTER-IP</code> is accessible on the nodes when cluster is set up using <code>kubeadm</code>.</p>
<hr />
<p>There are two options how to access your ingress:</p>
<ol>
<li>Using <code>NodePort</code>:</li>
</ol>
<p>From output above there's a <code>NodePort 30312</code> for internally exposed port <code>80</code>. Therefore from outside the cluster it should be accessed by <code>Node_IP:NodePort</code>:</p>
<pre><code>curl NODE_IP:30312 -IH "Host: example.com"
HTTP/1.1 200 OK
</code></pre>
<ol start="2">
<li>Set up <a href="https://metallb.universe.tf/installation/" rel="nofollow noreferrer"><code>metallb</code></a>:</li>
</ol>
<p>Follow the <a href="https://metallb.universe.tf/installation/" rel="nofollow noreferrer">installation guide</a>; the second step is to configure <code>metallb</code>. I use <a href="https://metallb.universe.tf/configuration/#layer-2-configuration" rel="nofollow noreferrer">layer 2</a>. Be careful to assign an IP range that is not already in use!</p>
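<p>For reference, a sketch of the legacy ConfigMap-based layer 2 configuration (the address range must be adapted to your network; newer MetalLB releases use the <code>IPAddressPool</code> CRD instead):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.1.240-172.16.1.250
</code></pre>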
<p>After I installed and set up the <code>metallb</code>, my haproxy has <code>EXTERNAL-IP</code> now:</p>
<pre><code>$ kubectl get svc -n ingress-haproxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-ingress LoadBalancer 10.102.166.149 172.16.1.241 80:30312/TCP,443:32524/TCP 10m
</code></pre>
<p>And now I can access ingress by <code>EXTERNAL-IP</code> on port <code>80</code>:</p>
<pre><code>curl 172.16.1.241 -IH "Host: example.com"
HTTP/1.1 200 OK
</code></pre>
<hr />
<p>Useful to read:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Kubernetes service types</a></li>
</ul>
| moonkotte |
<p>We've been using <code>/stats/summary</code> to get <code>fs</code> metrics, which is like:</p>
<pre><code>"fs": {
"time": "2021-10-14T03:46:05Z",
"availableBytes": 17989276262,
"capacityBytes": 29845807308,
"usedBytes": 5856531046,
"inodesFree": 16799593,
"inodes": 17347097,
"inodesUsed": 57504
},
</code></pre>
<p>And due to this <a href="https://github.com/elastic/beats/issues/12792" rel="nofollow noreferrer">Move away from kubelet stats/summary</a>, we need to get the same data in another way.</p>
<p>We've tried <code>/metrics/cadvisor</code> and <code>/metrics/resources</code>, but were not successful in getting the <code>fs</code> data.
Also, it seems that cAdvisor will be deprecated as well (in TBD+2 <a href="https://github.com/kubernetes/kubernetes/issues/68522" rel="nofollow noreferrer">here</a>).</p>
<p>We've been searching the net for possible solution but can't seem to find any.</p>
<p>Any ideas on how this can be done?
Or could you point us in the right direction or to the relevant documentation?</p>
<p>Thank you in advance.</p>
| Miguel Luiz | <p>Posting a community wiki answer based on a GitHub topic. Feel free to expand it.</p>
<hr />
<p>Personally, I have not found any equivalent of this call (<code>/api/v1/nodes/<node name>/proxy/stats/summary</code>), and as it is still working and not deprecated in the newest Kubernetes versions (<code>1.21</code> and <code>1.22</code>), I'd recommend just using it and waiting for information about a replacement from the Kubernetes team. Check the information below:</p>
<p>Information from this <a href="https://github.com/kubernetes/kubernetes/issues/68522" rel="nofollow noreferrer">GitHub topic - # Reduce the set of metrics exposed by the kubelet #68522</a> (last edited: November 2020, issue open):</p>
<p>It seems that <code>/stats/summary/</code> does not have any replacement recommendation ready:</p>
<blockquote>
<p>[TBD] Propose out-of-tree replacements for kubelet monitoring endpoints</p>
</blockquote>
<p>They will keep the Summary API for the next four versions counting from the version in which replacement will be implemented:</p>
<blockquote>
<p>[TBD+4] Remove the Summary API, cAdvisor prometheus metrics and remove the <code>--enable-container-monitoring-endpoints</code> flag.</p>
</blockquote>
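<p>Until then, the endpoint can still be queried directly through the API server proxy, for example (the node name is a placeholder):</p>
<pre><code>kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"
</code></pre>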
<hr />
<p>In Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md" rel="nofollow noreferrer"><code>v1.23</code> changelog</a> there is no information about changing anything related to the Summary API.</p>
<p>I'd suggest observing and pinging Kubernetes developers directly in <a href="https://github.com/kubernetes/kubernetes/issues/68522" rel="nofollow noreferrer">this GitHub topic</a> for more information.</p>
| Mikolaj S. |
<p>I'm using Terraform v0.14.2 and I'm trying to create an EKS cluster, but I'm having a problem when the nodes are joining the cluster. The status stays stuck in "Creating" until I get an error:</p>
<blockquote>
<p>Error: error waiting for EKS Node Group (EKS_SmartSteps:EKS_SmartSteps-worker-node-uk) creation: NodeCreationFailure: Instances failed to join the kubernetes cluster. Resource IDs: [i-00c4bac08b3c42225]</p>
</blockquote>
<p>My code to deploy is:</p>
<pre><code>resource "aws_eks_node_group" "managed_workers" {
for_each = local.ob
cluster_name = aws_eks_cluster.cluster.name
node_group_name = "${var.cluster_name}-worker-node-${each.value}"
node_role_arn = aws_iam_role.managed_workers.arn
subnet_ids = aws_subnet.private.*.id
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
launch_template {
id = aws_launch_template.worker-node[each.value].id
version = aws_launch_template.worker-node[each.value].latest_version
}
depends_on = [
kubernetes_config_map.aws_auth_configmap,
aws_iam_role_policy_attachment.eks-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.eks-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.eks-AmazonEC2ContainerRegistryReadOnly,
]
lifecycle {
create_before_destroy = true
ignore_changes = [scaling_config[0].desired_size, scaling_config[0].min_size]
}
}
resource "aws_launch_template" "worker-node" {
for_each = local.ob
image_id = data.aws_ssm_parameter.cluster.value
name = "${var.cluster_name}-worker-node-${each.value}"
instance_type = "t3.medium"
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_size = 20
volume_type = "gp2"
}
}
tag_specifications {
resource_type = "instance"
tags = {
"Instance Name" = "${var.cluster_name}-node-${each.value}"
Name = "${var.cluster_name}-node-${each.value}"
}
}
}
</code></pre>
<p>In fact, in EC2 and EKS I can see the nodes attached to the EKS cluster, but with this status error:</p>
<blockquote>
<p>"Instances failed to join the kubernetes cluster"</p>
</blockquote>
<p>I can't inspect where the error is because the error messages don't show more info.</p>
<p>Any idea?</p>
<p>thx</p>
| Humberto Lantero | <p>So others can follow, you need to include a user data script to get the nodes to join the cluster. Something like:</p>
<p>userdata.tpl</p>
<pre><code>MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
set -ex
/etc/eks/bootstrap.sh ${CLUSTER_NAME} --b64-cluster-ca ${B64_CLUSTER_CA} --apiserver-endpoint ${API_SERVER_URL}
--==MYBOUNDARY==--\
</code></pre>
<p>Where you render it like so</p>
<pre><code>locals {
user_data_values = {
CLUSTER_NAME = var.cluster_name
B64_CLUSTER_CA = var.cluster_certificate_authority
API_SERVER_URL = var.cluster_endpoint
}
}
resource "aws_launch_template" "cluster" {
image_id = "ami-XXX" # Make sure the AMI is an EKS worker
user_data = base64encode(templatefile("userdata.tpl", local.user_data_values))
...
}
</code></pre>
<p>Aside from that, make sure the node group is part of the worker security group and has the required IAM roles and you should be fine.</p>
| robcxyz |
<p>I have 1 deployment and 1 service configuration:</p>
<p><strong>Deployment</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: dashboard-backend-deployment
spec:
replicas: 2
selector:
matchLabels:
app: dashboard-backend
template:
metadata:
labels:
app: dashboard-backend
spec:
containers:
- name: dashboard-backend
image: $BACKEND_IMAGE
imagePullPolicy: Always
env:
- name: NODE_ENV
value: $NODE_ENV
- name: PORT
value: '3000'
- name: ACCESS_TOKEN_JWT_KEY
value: $ACCESS_TOKEN_JWT_KEY
- name: REFRESH_TOKEN_JWT_KEY
value: $REFRESH_TOKEN_JWT_KEY
- name: GOOGLE_OAUTH_CLIENT_ID
value: $GOOGLE_OAUTH_CLIENT_ID
- name: GOOGLE_OAUTH_CLIENT_SECRET
value: $GOOGLE_OAUTH_CLIENT_SECRET
- name: GOOGLE_OAUTH_REDIRECT_URI
value: $GOOGLE_OAUTH_REDIRECT_URI
- name: GH_OAUTH_CLIENT_ID
value: $GH_OAUTH_CLIENT_ID
- name: GH_OAUTH_CLIENT_SECRET
value: $GH_OAUTH_CLIENT_SECRET
- name: GITHUB_OAUTH_REDIRECT_URI
value: $GITHUB_OAUTH_REDIRECT_URI
- name: MIXPANEL_TOKEN
value: $MIXPANEL_TOKEN
- name: FRONTEND_URL
value: $FRONTEND_URL
- name: CLI_TOKEN_JWT_KEY
value: $CLI_TOKEN_JWT_KEY
- name: DATABASE_URL
value: $DATABASE_URL
</code></pre>
<p><strong>Service</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: backend-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: $SSL_CERTIFICATE_ARN
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
selector:
app: dashboard-backend
type: LoadBalancer
ports:
- name: https
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>I have an AWS EKS cluster configured. I run this command:
<code>kubectl apply -f=./k8s/backend-deployment.yaml -f=./k8s/backend-service.yaml</code>, of course, when <code>kubectl</code> is "connected" to my AWS EKS cluster.</p>
<p>Output of the command:</p>
<pre><code>Using kubectl version: Client Version: v1.26.0
Kustomize Version: v4.5.7
Using aws-iam-authenticator version: {"Version":"0.6.2","Commit":"..."}
deployment.apps/dashboard-backend-deployment unchanged
service/backend-service unchanged
</code></pre>
<p>When I open the Load Balancers section of the EC2 service in AWS, there are no load balancers at all. Why?</p>
<p>These are the Terraform files I used to deploy my cluster:</p>
<p><strong>eks-cluster</strong>:</p>
<pre><code>data "aws_iam_policy_document" "eks_cluster_policy" {
version = "2012-10-17"
statement {
actions = ["sts:AssumeRole"]
effect = "Allow"
principals {
type = "Service"
identifiers = ["eks.amazonaws.com"]
}
}
}
resource "aws_iam_role" "cluster" {
name = "${var.project}-Cluster-Role"
assume_role_policy = data.aws_iam_policy_document.eks_cluster_policy.json
tags = merge(
var.tags,
{
Stack = "backend"
Name = "${var.project}-eks-cluster-iam-role",
}
)
}
resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.cluster.name
}
resource "aws_eks_cluster" "main" {
name = "${var.project}-cluster"
role_arn = aws_iam_role.cluster.arn
version = "1.24"
vpc_config {
subnet_ids = flatten([aws_subnet.public[*].id, aws_subnet.private[*].id])
endpoint_private_access = true
endpoint_public_access = true
public_access_cidrs = ["0.0.0.0/0"]
}
tags = merge(
var.tags,
{
Stack = "backend"
Name = "${var.project}-eks-cluster",
}
)
depends_on = [
aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy
]
}
resource "aws_security_group" "eks_cluster" {
name = "${var.project}-cluster-sg"
description = "Cluster communication with worker nodes"
vpc_id = aws_vpc.main.id
tags = merge(
var.tags,
{
Stack = "backend"
Name = "${var.project}-cluster-sg"
}
)
}
resource "aws_security_group_rule" "cluster_inbound" {
description = "Allow worker nodes to communicate with the cluster API Server"
from_port = 443
protocol = "tcp"
security_group_id = aws_security_group.eks_cluster.id
source_security_group_id = aws_security_group.eks_nodes.id
to_port = 443
type = "ingress"
}
resource "aws_security_group_rule" "cluster_outbound" {
description = "Allow cluster API Server to communicate with the worker nodes"
from_port = 1024
protocol = "tcp"
security_group_id = aws_security_group.eks_cluster.id
source_security_group_id = aws_security_group.eks_nodes.id
to_port = 65535
type = "egress"
}
</code></pre>
<p><strong>EKS WORKER NODES</strong></p>
<pre><code>data "aws_iam_policy_document" "eks_node_policy" {
version = "2012-10-17"
statement {
actions = ["sts:AssumeRole"]
effect = "Allow"
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}
resource "aws_iam_role" "node" {
name = "${var.project}-Worker-Role"
assume_role_policy = data.aws_iam_policy_document.eks_node_policy.json
tags = merge(
var.tags,
{
Stack = "backend"
Name = "${var.project}-eks-node-iam-role",
}
)
}
resource "aws_iam_role_policy_attachment" "node_AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.node.name
}
resource "aws_iam_role_policy_attachment" "node_AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.node.name
}
resource "aws_iam_role_policy_attachment" "node_AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.node.name
}
resource "aws_eks_node_group" "main" {
cluster_name = aws_eks_cluster.main.name
node_group_name = var.project
node_role_arn = aws_iam_role.node.arn
subnet_ids = aws_subnet.private[*].id
scaling_config {
desired_size = 1
max_size = 2
min_size = 1
}
ami_type = "AL2_x86_64"
capacity_type = "ON_DEMAND"
disk_size = 20
instance_types = ["t3.small"]
tags = merge(
var.tags,
{
Stack = "backend"
Name = "${var.project}-eks-node-group",
}
)
depends_on = [
aws_iam_role_policy_attachment.node_AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.node_AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.node_AmazonEC2ContainerRegistryReadOnly,
]
}
resource "aws_security_group" "eks_nodes" {
name = "${var.project}-node-sg"
description = "Security group for all nodes in the cluster"
vpc_id = aws_vpc.main.id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = merge(
var.tags,
{
Stack = "backend"
Name = "${var.project}-node-sg"
"kubernetes.io/cluster/${var.project}-cluster" = "owned"
}
)
}
resource "aws_security_group_rule" "nodes_internal" {
description = "Allow nodes to communicate with each other"
from_port = 0
protocol = "-1"
security_group_id = aws_security_group.eks_nodes.id
source_security_group_id = aws_security_group.eks_nodes.id
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "nodes_cluster_inbound" {
description = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
from_port = 1025
protocol = "tcp"
security_group_id = aws_security_group.eks_nodes.id
source_security_group_id = aws_security_group.eks_cluster.id
to_port = 65535
type = "ingress"
}
</code></pre>
<p><strong>VPC</strong></p>
<pre><code>resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
enable_dns_support = true
tags = merge(
var.tags,
{
Name = "${var.project}-vpc",
"kubernetes.io/cluster/${var.project}-cluster" = "shared"
}
)
}
resource "aws_subnet" "public" {
count = var.availability_zones_count
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = merge(
var.tags,
{
Name = "${var.project}-public-subnet",
"kubernetes.io/cluster/${var.project}-cluster" = "shared"
"kubernetes.io/role/elb" = 1
}
)
map_public_ip_on_launch = true
}
resource "aws_subnet" "private" {
count = var.availability_zones_count
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, count.index + var.availability_zones_count)
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = merge(
var.tags,
{
Name = "${var.project}-private-sg"
"kubernetes.io/cluster/${var.project}-cluster" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
)
}
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main.id
tags = merge(
var.tags,
{
Name = "${var.project}-igw",
}
)
depends_on = [aws_vpc.main]
}
resource "aws_route_table" "primary" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
tags = merge(
var.tags,
{
Name = "${var.project}-primary-route-table",
}
)
}
resource "aws_route_table_association" "internet_access" {
count = var.availability_zones_count
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.primary.id
}
resource "aws_eip" "main" {
vpc = true
tags = merge(
var.tags,
{
Name = "${var.project}-ngw-ip"
}
)
}
resource "aws_nat_gateway" "main" {
allocation_id = aws_eip.main.id
subnet_id = aws_subnet.public[0].id
tags = merge(
var.tags,
{
Name = "${var.project}-ngw"
}
)
}
resource "aws_route" "main" {
route_table_id = aws_vpc.main.default_route_table_id
nat_gateway_id = aws_nat_gateway.main.id
destination_cidr_block = "0.0.0.0/0"
}
resource "aws_security_group" "public_sg" {
name = "${var.project}-Public-sg"
vpc_id = aws_vpc.main.id
tags = merge(
var.tags,
{
Name = "${var.project}-Public-sg",
}
)
}
resource "aws_security_group_rule" "sg_ingress_public_443" {
security_group_id = aws_security_group.public_sg.id
type = "ingress"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "sg_ingress_public_80" {
security_group_id = aws_security_group.public_sg.id
type = "ingress"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "sg_egress_public" {
security_group_id = aws_security_group.public_sg.id
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group" "data_plane_sg" {
name = "${var.project}-Worker-sg"
vpc_id = aws_vpc.main.id
tags = merge(
var.tags,
{
Name = "${var.project}-Worker-sg",
}
)
}
resource "aws_security_group_rule" "nodes" {
description = "Allow nodes to communicate with each other"
security_group_id = aws_security_group.data_plane_sg.id
type = "ingress"
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks = flatten([cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 0), cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 1), cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 2), cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 3)])
}
resource "aws_security_group_rule" "nodes_inbound" {
description = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
security_group_id = aws_security_group.data_plane_sg.id
type = "ingress"
from_port = 1025
to_port = 65535
protocol = "tcp"
cidr_blocks = flatten([cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 2), cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 3)])
}
resource "aws_security_group_rule" "node_outbound" {
security_group_id = aws_security_group.data_plane_sg.id
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group" "control_plane_sg" {
name = "${var.project}-ControlPlane-sg"
vpc_id = aws_vpc.main.id
tags = merge(
var.tags,
{
Name = "${var.project}-ControlPlane-sg",
}
)
}
resource "aws_security_group_rule" "control_plane_inbound" {
security_group_id = aws_security_group.control_plane_sg.id
type = "ingress"
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = flatten([cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 0), cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 1), cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 2), cidrsubnet(var.vpc_cidr, var.subnet_cidr_bits, 3)])
}
resource "aws_security_group_rule" "control_plane_outbound" {
security_group_id = aws_security_group.control_plane_sg.id
type = "egress"
from_port = 0
to_port = 65535
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
</code></pre>
<p>There are more files of course, but not sure they are relevant.</p>
| Tal Rofe | <p>In order to have load balancers created automatically for <code>Service</code> and <code>Ingress</code> resources, you need to deploy the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/" rel="nofollow noreferrer"><code>aws-load-balancer-controller</code></a> in your EKS cluster.</p>
<p>AWS also has official documentation <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">here</a>.</p>
<blockquote class="spoiler">
<p>Disclaimer: I have created the repo mentioned below.</p>
</blockquote>
<p>You can also find terraform code to deploy and use the <code>aws-load-balancer-controller</code> in this <a href="https://github.com/ishuar/terraform-eks/tree/main/examples/cluster_with_alb" rel="nofollow noreferrer">ishuar/terraform-eks GitHub Repo</a> along with EKS module for practical referencing.</p>
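<p>For reference, a typical Helm-based install of the controller looks roughly like the sketch below. The cluster name and the pre-created IAM service account are assumptions here - the IAM policy / IRSA setup described in the AWS docs above still has to be done first:</p>
<pre><code>helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<your-cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
</code></pre>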
| ishuar |
<p>I have a script that uses curl and that script should be run in Kubernetes agent on Jenkins. Here is my original agent configuration:</p>
<pre><code> pipeline {
agent {
kubernetes {
customWorkspace 'ng-cleaner'
yaml """
kind: Pod
metadata:
spec:
imagePullSecrets:
- name: jenkins-docker
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: agentpool
operator: In
values:
- build
schedulerName: default-scheduler
tolerations:
- key: type
operator: Equal
value: jenkins
effect: NoSchedule
containers:
- name: jnlp
env:
- name: CONTAINER_ENV_VAR
value: jnlp
- name: build
image: tixartifactory-docker.jfrog.io/baseimages/helm:helm3.2.1-helm2.16.2-kubectl.0
ttyEnabled: true
command:
- cat
tty: true
"""
}
}
</code></pre>
<p>The error message is "curl ....
/home/jenkins/agent/ng-cleaner@tmp/durable-0d154ecf/script.sh: 2: curl: not found"</p>
<p>What I tried:</p>
<ol>
<li>added shell step to main "build" container:
shell: sh "apk add --no-cache curl", also tried "apt install curl"- didn't help</li>
<li>added a new container with a curl image - didn't help as well:
<pre><code>- name: curl
  image: curlimages/curl:7.83.1
  ttyEnabled: true
  tty: true
  command:
    - cat
</code></pre>
</li>
</ol>
<p>Any suggestions on how I can make it work?</p>
| user15824359 | <p>I resolved it.
I needed to add a shell step to the main container:</p>
<pre><code>shell: sh "apk add --no-cache curl"
</code></pre>
<p>and then place my script inside container block:</p>
<pre><code>stages {
stage('MyStage') {
steps {
container('build'){
script {
</code></pre>
| user15824359 |
<p>I got a K8S+DinD issue:</p>
<ul>
<li>launch Kubernetes cluster</li>
<li>start a main docker image and a DinD image inside this cluster</li>
<li>when running a job requesting GPU, got error <code>could not select device driver "nvidia" with capabilities: [[gpu]]</code></li>
</ul>
<p>Full error</p>
<pre><code>http://localhost:2375/v1.40/containers/long-hash-string/start: Internal Server Error ("could not select device driver "nvidia" with capabilities: [[gpu]]")
</code></pre>
<p><code>exec</code> to the DinD image inside of K8S pod, <code>nvidia-smi</code> is not available.</p>
<p>After some debugging, it seems the DinD image is missing the NVIDIA Docker toolkit. I had the same error when I ran the same job directly on my laptop's local Docker, and I fixed it there by installing <strong>nvidia-docker2</strong>: <code>sudo apt-get install -y nvidia-docker2</code>.</p>
<p>I'm thinking maybe I can try to install nvidia-docker2 into the DinD 19.03 image (docker:19.03-dind), but I'm not sure how to do it. With a multi-stage Docker build?</p>
<p>Thank you very much!</p>
<hr />
<p>update:</p>
<p>pod spec:</p>
<pre><code>spec:
containers:
- name: dind-daemon
image: docker:19.03-dind
</code></pre>
| Elon Bezos | <p>I got it working myself.</p>
<p>Referring to</p>
<ul>
<li><a href="https://github.com/NVIDIA/nvidia-docker/issues/375" rel="noreferrer">https://github.com/NVIDIA/nvidia-docker/issues/375</a></li>
<li><a href="https://github.com/Henderake/dind-nvidia-docker" rel="noreferrer">https://github.com/Henderake/dind-nvidia-docker</a></li>
</ul>
<blockquote>
<p>First, I modified the ubuntu-dind image (<a href="https://github.com/billyteves/ubuntu-dind" rel="noreferrer">https://github.com/billyteves/ubuntu-dind</a>) to install nvidia-docker (i.e. added the instructions in the nvidia-docker site to the Dockerfile) and changed it to be based on nvidia/cuda:9.2-runtime-ubuntu16.04.</p>
</blockquote>
<blockquote>
<p>Then I created a pod with two containers, a frontend ubuntu container and the a privileged docker daemon container as a sidecar. The sidecar's image is the modified one I mentioned above.</p>
</blockquote>
<p>But since that post is from 3 years ago, I did spend quite some time matching up dependency versions, handling repo migrations over those 3 years, etc.</p>
<p>My modified version of Dockerfile to build it</p>
<pre><code>ARG CUDA_IMAGE=nvidia/cuda:11.0.3-runtime-ubuntu20.04
FROM ${CUDA_IMAGE}
ARG DOCKER_CE_VERSION=5:18.09.1~3-0~ubuntu-xenial
RUN apt-get update -q && \
apt-get install -yq \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" && \
apt-get update -q && apt-get install -yq docker-ce docker-ce-cli containerd.io
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN set -eux; \
apt-get update -q && \
apt-get install -yq \
btrfs-progs \
e2fsprogs \
iptables \
xfsprogs \
xz-utils \
# pigz: https://github.com/moby/moby/pull/35697 (faster gzip implementation)
pigz \
# zfs \
wget
# set up subuid/subgid so that "--userns-remap=default" works out-of-the-box
RUN set -x \
&& addgroup --system dockremap \
&& adduser --system -ingroup dockremap dockremap \
&& echo 'dockremap:165536:65536' >> /etc/subuid \
&& echo 'dockremap:165536:65536' >> /etc/subgid
# https://github.com/docker/docker/tree/master/hack/dind
ENV DIND_COMMIT 37498f009d8bf25fbb6199e8ccd34bed84f2874b
RUN set -eux; \
wget -O /usr/local/bin/dind "https://raw.githubusercontent.com/docker/docker/${DIND_COMMIT}/hack/dind"; \
chmod +x /usr/local/bin/dind
##### Install nvidia docker #####
# Add the package repositories
RUN curl -fsSL https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add --no-tty -
RUN distribution=$(. /etc/os-release;echo $ID$VERSION_ID) && \
echo $distribution && \
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
tee /etc/apt/sources.list.d/nvidia-docker.list
RUN apt-get update -qq --fix-missing
RUN apt-get install -yq nvidia-docker2
RUN sed -i '2i \ \ \ \ "default-runtime": "nvidia",' /etc/docker/daemon.json
RUN mkdir -p /usr/local/bin/
COPY dockerd-entrypoint.sh /usr/local/bin/
RUN chmod 777 /usr/local/bin/dockerd-entrypoint.sh
RUN ln -s /usr/local/bin/dockerd-entrypoint.sh /
VOLUME /var/lib/docker
EXPOSE 2375
ENTRYPOINT ["dockerd-entrypoint.sh"]
#ENTRYPOINT ["/bin/sh", "/shared/dockerd-entrypoint.sh"]
CMD []
</code></pre>
<p>When I use <code>exec</code> to log in to the Docker-in-Docker container, I can successfully run <code>nvidia-smi</code> (which previously returned a not-found error, and any GPU-related <code>docker run</code> failed).</p>
<p>Feel free to pull my image at <code>brandsight/dind:nvidia-docker</code>.</p>
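<p>For completeness, here is a hedged sketch of the two-container pod layout described above (the frontend image, port and volume are assumptions; the DinD sidecar image is the one built from the Dockerfile above):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dind-gpu-pod
spec:
  containers:
  - name: frontend
    image: ubuntu:20.04            # any client image that talks to the DinD daemon
    command: ["sleep", "infinity"]
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375  # reach the sidecar daemon over the pod's loopback
  - name: dind-daemon
    image: brandsight/dind:nvidia-docker
    securityContext:
      privileged: true             # required for Docker-in-Docker
    volumeMounts:
    - name: docker-graph-storage
      mountPath: /var/lib/docker
  volumes:
  - name: docker-graph-storage
    emptyDir: {}
</code></pre>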
| Elon Bezos |
<p>I am trying to setup a local cluster using minikube in a Windows machine. Following some tutorials in <code>kubernetes.io</code>, I got the following manifest for the cluster:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: external-nginx-deployment
labels:
app: external-nginx
spec:
selector:
matchLabels:
app: external-nginx
replicas: 2
template:
metadata:
labels:
app: external-nginx
spec:
containers:
- name: external-nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: expose-nginx
labels:
service: expose-nginx
spec:
type: NodePort
selector:
app: external-nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 32000
</code></pre>
<p>If I got things right, this should create a pod with an nginx instance and expose it to the host machine at port 32000.
However, when I run <code>curl http://$(minikube ip):32000</code>, I get a <strong>connection refused</strong> error.</p>
<p>I ran bash inside the service <em>expose-nginx</em> via <code>kubectl exec svc/expose-nginx -it bash</code> and from there I was able to access the <em>external-nginx</em> pods normally, which led me to believe it is not a problem within the cluster.</p>
<p>I also tried to change the type of the service to LoadBalancer and enable the <code>minikube tunnel</code>, but got the same result.</p>
<p>Is there something I am missing?</p>
| Luís Gabriel de Andrade | <p>By default <code>minikube</code> almost always uses the <code>docker</code> driver to create the <code>minikube</code> VM. On the host system this "VM" is just a big Docker container, inside which the other Kubernetes components run as containers as well. Based on tests, <code>NodePort</code> services often don't work the way they are supposed to, i.e. accessing a service exposed via <code>NodePort</code> on the <code>minikube_IP:NodePort</code> address frequently fails.</p>
<p>Solutions are:</p>
<ul>
<li><p>for local testing use <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer"><code>kubectl port-forward</code></a> to expose service to the local machine (which OP did)</p>
</li>
<li><p>use <a href="https://minikube.sigs.k8s.io/docs/commands/service/" rel="nofollow noreferrer"><code>minikube service</code></a> command which will expose the service to the host machine. Works in a very similar way as <code>kubectl port-forward</code>.</p>
</li>
<li><p>instead of <code>docker</code> driver use proper virtual machine which will get its own IP address (<code>VirtualBox</code> or <code>hyperv</code> drivers - depends on the system). <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">Reference</a>.</p>
</li>
<li><p>(Not related to <code>minikube</code>) Use the built-in Kubernetes feature of Docker Desktop for Windows. I've already <a href="https://stackoverflow.com/a/69113528/15537201">tested it</a>; the service type should be <code>LoadBalancer</code> and it will be exposed to the host machine on <code>localhost</code>.</p>
</li>
</ul>
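<p>For example, hedged versions of the first two options, using the names from the manifest above:</p>
<pre><code># option 1: forward a local port to the service
kubectl port-forward service/expose-nginx 8080:80
curl http://localhost:8080

# option 2: let minikube expose the service and print a reachable URL
minikube service expose-nginx --url
</code></pre>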
| moonkotte |
<p>My kubernetes K3s cluster gives this error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 17m default-scheduler 0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
Warning FailedScheduling 17m default-scheduler 0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
</code></pre>
<p>In order to list the taints in the cluster I executed:</p>
<pre><code>kubectl get nodes -o json | jq '.items[].spec'
</code></pre>
<p>which outputs:</p>
<pre><code>{
"podCIDR": "10.42.0.0/24",
"podCIDRs": [
"10.42.0.0/24"
],
"providerID": "k3s://antonis-dell",
"taints": [
{
"effect": "NoSchedule",
"key": "node.kubernetes.io/disk-pressure",
"timeAdded": "2021-12-17T10:54:31Z"
}
]
}
{
"podCIDR": "10.42.1.0/24",
"podCIDRs": [
"10.42.1.0/24"
],
"providerID": "k3s://knodea"
}
</code></pre>
<p>When I use <code>kubectl describe node antonis-dell</code> I get:</p>
<pre><code>Name: antonis-dell
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=k3s
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=antonis-dell
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=true
node-role.kubernetes.io/master=true
node.kubernetes.io/instance-type=k3s
Annotations: csi.volume.kubernetes.io/nodeid: {"ch.ctrox.csi.s3-driver":"antonis-dell"}
flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"f2:d5:6c:6a:85:0a"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.1.XX
k3s.io/hostname: antonis-dell
k3s.io/internal-ip: 192.168.1.XX
k3s.io/node-args: ["server"]
k3s.io/node-config-hash: YANNMDBIL7QEFSZANHGVW3PXY743NWWRVFKBKZ4FXLV5DM4C74WQ====
k3s.io/node-env:
{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/e61cd97f31a54dbcd9893f8325b7133cfdfd0229ff3bfae5a4f845780a93e84c","K3S_KUBECONFIG_MODE":"644"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 17 Dec 2021 12:11:39 +0200
Taints: node.kubernetes.io/disk-pressure:NoSchedule
</code></pre>
<p>where it seems that node has a disk-pressure taint.</p>
<p>This command doesn't work: <code>kubectl taint node antonis-dell node.kubernetes.io/disk-pressure:NoSchedule-</code> and it seems to me that even if it worked, this is not a good solution because the taint is assigned by the control plane.</p>
<p>Furthermore, at the end of the <code>kubectl describe node antonis-dell</code> output I observed this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FreeDiskSpaceFailed 57m kubelet failed to garbage collect required amount of images. Wanted to free 32967806976 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 52m kubelet failed to garbage collect required amount of images. Wanted to free 32500092928 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 47m kubelet failed to garbage collect required amount of images. Wanted to free 32190205952 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 42m kubelet failed to garbage collect required amount of images. Wanted to free 32196628480 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 37m kubelet failed to garbage collect required amount of images. Wanted to free 32190926848 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 2m21s (x7 over 32m) kubelet (combined from similar events): failed to garbage collect required amount of images. Wanted to free 30909374464 bytes, but freed 0 bytes
</code></pre>
<p>Maybe the disk-pressure is related to this? How can I delete the unwanted images?</p>
| e7lT2P | <p>Posting the answer as a community wiki, feel free to edit and expand.</p>
<hr />
<p>The <code>node.kubernetes.io/disk-pressure:NoSchedule</code> taint indicates that the node is under disk pressure (as the name suggests).</p>
<blockquote>
<p>The <code>kubelet</code> detects disk pressure based on <code>imagefs.available</code>, <code>imagefs.inodesFree</code>, <code>nodefs.available</code> and <code>nodefs.inodesFree</code>(Linux only) observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the <code>kubelet</code> to determine if the Node condition and taint should be added/removed.</p>
</blockquote>
<p>More details on <code>disk-pressure</code> are available in <a href="https://medium.com/kubernetes-tutorials/efficient-node-out-of-resource-management-in-kubernetes-67f158da6e59" rel="nofollow noreferrer">Efficient Node Out-of-Resource Management in Kubernetes</a> under <code>How Does Kubelet Decide that Resources Are Low?</code> section:</p>
<blockquote>
<p><code>memory.available</code> — A signal that describes the state of cluster
memory. The default eviction threshold for the memory is 100 Mi. In
other words, the kubelet starts evicting Pods when the memory goes
down to 100 Mi.</p>
<p><code>nodefs.available</code> — The nodefs is a filesystem used by
the kubelet for volumes, daemon logs, etc. By default, the kubelet
starts reclaiming node resources if the nodefs.available < 10%.</p>
<p><code>nodefs.inodesFree</code> — A signal that describes the state of the nodefs
inode memory. By default, the kubelet starts evicting workloads if the
nodefs.inodesFree < 5%.</p>
<p><code>imagefs.available</code> — The imagefs filesystem is
an optional filesystem used by a container runtime to store container
images and container-writable layers. By default, the kubelet starts
evicting workloads if the imagefs.available < 15 %.</p>
<p><code>imagefs.inodesFree</code> — The state of the imagefs inode memory. It has no default eviction threshold.</p>
</blockquote>
<hr />
<p><strong>What to check</strong></p>
<p>There are different things that can help, such as:</p>
<ul>
<li><p>prune unused objects like images (with Docker CRI) - <a href="https://docs.docker.com/config/pruning/#prune-images" rel="nofollow noreferrer">prune images</a>.</p>
<blockquote>
<p>The docker image prune command allows you to clean up unused images. By default, docker image prune only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by any container.</p>
</blockquote>
</li>
<li><p>check files/logs on the node if they take a lot of space.</p>
</li>
<li><p>any other reason why disk space was consumed.</p>
</li>
</ul>
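<p>For example (hedged, since k3s uses containerd by default, so the <code>crictl</code> variant is usually the relevant one there):</p>
<pre><code># with Docker as the container runtime
docker image prune -a

# with k3s / containerd (uses the bundled crictl)
sudo k3s crictl rmi --prune
</code></pre>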
| moonkotte |
<p>I have a k8s cluster with three masters, two workers, and an external haproxy and use flannel as a cni.
The coredns pods have problems: their status is Running, but they never become Ready.</p>
<p>Coredns log</p>
<p><img src="https://i.stack.imgur.com/lFNGV.png" alt="1" /></p>
<p>I get the logs of this pod, and I get this message:</p>
<blockquote>
<p>[INFO] plugin/ready: Still waiting on: "Kubernetes."</p>
</blockquote>
<p>What I did to try to solve this problem, without any result:<br />
1- check ufw and disable it.<br />
2- check IPtables and flush them.<br />
3- check Kube-proxy logs.<br />
4- check the haproxy, and it is accessible from out and all servers in the cluster.<br />
5- check nodes network.<br />
7- reboot all servers at the end. :))</p>
<p>I get describe po :</p>
<p><a href="https://i.stack.imgur.com/1S5ux.png" rel="nofollow noreferrer">describe pod</a></p>
| mona moghadampanah | <h3>Let's see if your CoreDNS works at all</h3>
<ul>
<li>You can create a simple pod, go inside it, and from there curl Services via <code>IP:PORT</code> & <code>Service-name:PORT</code>
<pre class="lang-bash prettyprint-override"><code>kubectl run -it --rm test-nginx-svc --image=nginx -- bash
</code></pre>
</li>
</ul>
<ol>
<li>IP:PORT
<pre class="lang-bash prettyprint-override"><code>curl http://<SERVICE-IP>:8080
</code></pre>
</li>
<li>DNS
<pre class="lang-bash prettyprint-override"><code>curl http://nginx-service:8080
</code></pre>
</li>
</ol>
<p>If you couldn't curl your service via <code>Service-name:PORT</code> (but it works via <code>IP:PORT</code>), then you probably have a DNS issue...</p>
<hr />
<h4>CoreDNS</h4>
<p>Service Name Resolution Problems?</p>
<ul>
<li>Check CoreDNS Pods are running and accessible?</li>
<li>Check CoreDNS logs</li>
</ul>
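<p>For example (assuming the default <code>k8s-app=kube-dns</code> label that CoreDNS ships with):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
</code></pre>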
<pre class="lang-bash prettyprint-override"><code>kubectl run -it test-nginx-svc --image=nginx -- bash
</code></pre>
<ul>
<li>Inside the Pod
<pre class="lang-bash prettyprint-override"><code>cat /etc/resolv.conf
</code></pre>
</li>
<li>The result would look like:
<pre><code>nameserver 10.96.0.10 # IP address of CoreDNS
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
</li>
</ul>
<hr />
<h3>If it is NOT working:</h3>
<p>I suggest trying to re-install it via the <a href="https://github.com/flannel-io/flannel" rel="nofollow noreferrer">official docs</a> or <a href="https://github.com/flannel-io/flannel/issues/1426" rel="nofollow noreferrer">Helm chart</a>.</p>
<p>OR</p>
<p>Try other CNIs like <a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/" rel="nofollow noreferrer">Weave</a>.</p>
<hr />
<p><a href="https://github.com/alifiroozi80/CKA/tree/main/CKA#coredns" rel="nofollow noreferrer">Source</a></p>
| Ali |
<p>I have a Django app that uses the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">official Kubernetes client for python</a> and works fine but it only deploys (rightly) public registries.</p>
<p>Is there a way to execute a login and then let Kubernetes client pull a private image freely? I wouldn't like to execute direct <code>cmd</code> commands for the login and the image pull.. Thanks!</p>
| Marco Frag Delle Monache | <p>Actually it's pretty easy to do using official Kubernetes Python Client. You need to do two steps:</p>
<ul>
<li>create a secret of type <code>dockerconfigjson</code> (could be done by command line or using Python client) - you are putting here your credentials</li>
<li>add this secret into your deployment / pod definition using <code>imagePullSecrets</code> so Kubernetes client can pull images from private repositories</li>
</ul>
<p><strong>Create secret of type <code>dockerconfigjson</code>:</strong></p>
<p>Replace <code><something></code> with your data.</p>
<p>Command line:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret docker-registry private-registry \
--docker-server=<your-registry-server> --docker-username=<your-name> \
--docker-password=<your-pword> --docker-email=<your-email>
</code></pre>
<p>Equivalent in Kubernetes Python Client (remember to pass in secure way variable <code>password</code>, for example check <a href="https://stackoverflow.com/questions/15209978/where-to-store-secret-keys-django">this</a>):</p>
<pre class="lang-py prettyprint-override"><code>import base64
import json
from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
# Credentials
username = <your-name>
password = <your-pword>
mail = <your-email>
secret_name = "private-registry"
namespace = "default"
# Address of Docker repository - in case of Docker Hub just use https://index.docker.io/v1/
docker_server = <your-registry-server>
# Create auth token
auth_decoded = username + ":" + password
auth_decoded_bytes = auth_decoded.encode('ascii')
base64_auth_message_bytes = base64.b64encode(auth_decoded_bytes)
base64_auth_message = base64_auth_message_bytes.decode('ascii')
cred_payload = {
"auths": {
docker_server: {
"username": username,
"password": password,
"email": mail,
"auth": base64_auth_message
}
}
}
data = {
".dockerconfigjson": base64.b64encode(
json.dumps(cred_payload).encode()
).decode()
}
secret = client.V1Secret(
api_version="v1",
data=data,
kind="Secret",
metadata=dict(name=secret_name, namespace=namespace),
type="kubernetes.io/dockerconfigjson",
)
v1.create_namespaced_secret(namespace, body=secret)
</code></pre>
<p><strong>Add this secret into your deployment / pod definition using <code>imagePullSecrets</code>: option</strong></p>
<p>Now, let's move on to using the newly created secret. Depending on how you want to deploy the pod / deployment in Python code, there are two ways: apply a <code>yaml</code> file, or create the pod / deployment manifest directly in the code. I will show both ways. As before, replace <code><something></code> with your data.</p>
<p>Example <code>yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: private-registry-pod
spec:
containers:
- name: private-registry-container
image: <your-private-image>
imagePullSecrets:
- name: private-registry
</code></pre>
<p>In the last line we are referring to the <code>private-registry</code> secret created in the previous step.</p>
<p>Let's apply this <code>yaml</code> file using Kubernetes Python client:</p>
<pre><code>from os import path
import yaml
from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
config_yaml = "pod.yaml"
with open(path.join(path.dirname(__file__), config_yaml)) as f:
dep = yaml.safe_load(f)
resp = v1.create_namespaced_pod(body=dep, namespace="default")
print("Deployment created. status='%s'" % str(resp.status))
</code></pre>
<p>All in Python code - both pod definition and applying process:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
import time
config.load_kube_config()
v1 = client.CoreV1Api()
pod_name = "private-registry-pod"
secret_name = "private-registry"
namespace = "default"
container_name = "private-registry-container"
image = <your-private-image>
# Create a pod
print("Creating pod...")
pod_manifest= {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": pod_name
},
"spec": {
"containers": [
{
"name": container_name,
"image": image
}
],
"imagePullSecrets": [
{
"name": secret_name
}
]
}
}
resp = v1.create_namespaced_pod(body=pod_manifest, namespace=namespace)
# Wait for a pod
while True:
resp = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
if resp.status.phase != 'Pending':
break
time.sleep(1)
print("Done.")
</code></pre>
<p>Sources:</p>
<ul>
<li><a href="https://github.com/kubernetes-client/python/issues/501" rel="nofollow noreferrer">Github thread</a></li>
<li><a href="https://stackoverflow.com/questions/56673919/kubernetes-python-api-client-execute-full-yaml-file">Stackoverflow topic</a></li>
<li><a href="https://github.com/kubernetes-client/python/blob/master/examples/pod_exec.py" rel="nofollow noreferrer">Official Kubernetes Python client example</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#inspecting-the-secret-regcred" rel="nofollow noreferrer">Kubernetes docs</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">Another Kubernetes docs</a></li>
<li><a href="https://github.com/kubeflow/kubeflow/issues/1748" rel="nofollow noreferrer">Github topic</a></li>
</ul>
| Mikolaj S. |
<p>I am trying to patch a cronjob, but somehow it doesn't work as I would expect. I use the same folder structure for a deployment and that works.</p>
<p>This is the folder structure:</p>
<pre class="lang-sh prettyprint-override"><code>.
├── base
│ ├── kustomization.yaml
│ └── war.cron.yaml
└── overlays
└── staging
├── kustomization.yaml
├── war.cron.patch.yaml
└── war.cron.staging.env
</code></pre>
<p>base/kustomization.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: Kustomization
resources:
- war.cron.yaml
</code></pre>
<p>base/war.cron.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: batch/v1
kind: CronJob
metadata:
name: war-event-cron
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: war-event-cron
image: my-registry/war-service
imagePullPolicy: IfNotPresent
command:
- python
- run.py
args:
- sync-events
envFrom:
- secretRef:
name: war-event-cron-secret
restartPolicy: OnFailure
</code></pre>
<p>Then I am trying to patch this in the staging overlay.</p>
<p>overlays/staging/kustomization.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: Kustomization
namespace: staging
bases:
- "../../base"
patchesStrategicMerge:
- war.cron.patch.yaml
secretGenerator:
- name: war-event-cron-secret
behavior: create
envs:
- war.cron.staging.env
</code></pre>
<p>overlays/staging/war.cron.patch.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: batch/v1
kind: CronJob
metadata:
name: war-event-cron
spec:
jobTemplate:
spec:
template:
spec:
containers:
- name: war-event-cron
image: my-registry/war-service:nightly
args:
- sync-events
- --debug
</code></pre>
<p>But the result of <code>kustomize build overlays/staging/</code> is not what I want. The <code>command</code> is gone and the <code>secret</code> is not referenced.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
...
kind: Secret
metadata:
name: war-event-cron-secret-d8m6bh7284
namespace: staging
type: Opaque
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: war-event-cron
namespace: staging
spec:
jobTemplate:
spec:
template:
spec:
containers:
- args:
- sync-events
- --debug
image: my-registry/war-service:nightly
name: war-event-cron
restartPolicy: OnFailure
schedule: '*/5 * * * *'
</code></pre>
| The Fool | <p>It's a known bug in <code>kustomize</code> - check and follow <a href="https://github.com/kubernetes-sigs/kustomize/issues/4062" rel="nofollow noreferrer">this</a> topic (created ~ one month ago) on GitHub for more information.</p>
<p>For now, the fix for your issue is to use <code>apiVersion: batch/v1beta1</code> instead of <code>apiVersion: batch/v1</code> in the <code>base/war.cron.yaml</code> and <code>overlays/staging/war.cron.patch.yaml</code> files.</p>
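<p>For reference, a sketch of the patch file with that workaround applied would look like this (only the <code>apiVersion</code> changes; the same change goes into <code>base/war.cron.yaml</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: war-event-cron
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: war-event-cron
              image: my-registry/war-service:nightly
              args:
                - sync-events
                - --debug
</code></pre>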
| Mikolaj S. |
<p>I learnt that to run a container as rootless, you need to specify either the SecurityContext:runAsUser 1000 or specify the USER directive in the DOCKERFILE.</p>
<p>Question on this is that there is no UID 1000 on the Kubernetes/Docker host system itself.</p>
<p>I learnt before that Linux User Namespacing allows a user to have a different UID outside its original NS.</p>
<p>Hence, how does UID 1000 exist under the hood? Did the original root (UID 0) create a new user namespace which is represented by UID 1000 in the container?</p>
<p>What happens if we specify UID 2000 instead?</p>
| transcend3nt | <p>Hope this answer helps you</p>
<blockquote>
<p>I learnt that to run a container as rootless, you need to specify
either the SecurityContext:runAsUser 1000 or specify the USER
directive in the DOCKERFILE</p>
</blockquote>
<p>You are correct, except for <code>runAsUser: 1000</code>: you can specify any UID, not only <code>1000</code>. Remember that whatever UID you want to use (<code>runAsUser: UID</code>), that <code>UID</code> should already be there!</p>
<hr />
<p>Often, base images will already have a user created and available but leave it up to the development or deployment teams to leverage it. For example, the official Node.js image comes with a user named node at UID <code>1000</code> that you can run as, but they do not explicitly set the current user to it in their Dockerfile. We will either need to configure it at runtime with a <code>runAsUser</code> setting or change the current user in the image using a <code>derivative Dockerfile</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>runAsUser: 1001 # hardcode user to non-root if not set in Dockerfile
runAsGroup: 1001 # hardcode group to non-root if not set in Dockerfile
runAsNonRoot: true # hardcode to non-root. Redundant to above if Dockerfile is set USER 1000
</code></pre>
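<p>If you prefer baking the user into the image instead, a minimal derivative Dockerfile sketch could look like the following (the base image tag and entrypoint are just examples based on the Node.js case above):</p>
<pre><code>FROM node:18
# the official node image already ships a non-root user "node" with UID 1000
USER node
# "server.js" is a hypothetical entrypoint for illustration only
CMD ["node", "server.js"]
</code></pre>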
<p>Remember that <code>runAsUser</code> and <code>runAsGroup</code> <strong>ensure</strong> container processes do not run as the <code>root</code> user, but don’t rely on the <code>runAsUser</code> or <code>runAsGroup</code> settings alone to guarantee this. Be sure to also set <code>runAsNonRoot: true</code>.</p>
<hr />
<p>Here is full example of <code>securityContext</code>:</p>
<pre class="lang-yaml prettyprint-override"><code># generic pod spec that's usable inside a deployment or other higher level k8s spec
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
# basic container details
- name: my-container-name
# never use reusable tags like latest or stable
image: my-image:tag
# hardcode the listening port if Dockerfile isn't set with EXPOSE
ports:
- containerPort: 8080
protocol: TCP
readinessProbe: # I always recommend using these, even if your app has no listening ports (this affects any rolling update)
httpGet: # Lots of timeout values with defaults, be sure they are ideal for your workload
path: /ready
port: 8080
livenessProbe: # only needed if your app tends to go unresponsive or you don't have a readinessProbe, but this is up for debate
httpGet: # Lots of timeout values with defaults, be sure they are ideal for your workload
path: /alive
port: 8080
resources: # Because if limits = requests then QoS is set to "Guaranteed"
limits:
memory: "500Mi" # If container uses over 500MB it is killed (OOM)
#cpu: "2" # Not normally needed, unless you need to protect other workloads or QoS must be "Guaranteed"
requests:
memory: "500Mi" # Scheduler finds a node where 500MB is available
cpu: "1" # Scheduler finds a node where 1 vCPU is available
# per-container security context
# lock down privileges inside the container
securityContext:
allowPrivilegeEscalation: false # prevent sudo, etc.
privileged: false # prevent acting like host root
terminationGracePeriodSeconds: 600 # default is 30, but you may need more time to gracefully shutdown (HTTP long polling, user uploads, etc)
# per-pod security context
# enable seccomp and force non-root user
securityContext:
seccompProfile:
type: RuntimeDefault # enable seccomp and the runtimes default profile
runAsUser: 1001 # hardcode user to non-root if not set in Dockerfile
runAsGroup: 1001 # hardcode group to non-root if not set in Dockerfile
runAsNonRoot: true # hardcode to non-root. Redundant to above if Dockerfile is set USER 1000
</code></pre>
<hr />
<p>sources:</p>
<ul>
<li><a href="https://github.com/BretFisher/podspec" rel="nofollow noreferrer">Kubernetes Pod Specification Good Defaults</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context" rel="nofollow noreferrer">Configure a Security Context for a Pod or Container</a></li>
<li><a href="https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/" rel="nofollow noreferrer">10 Kubernetes Security Context settings you should understand</a></li>
</ul>
| Ali |
<p>I am having a k3s cluster with my application pods running. In all the pods when I login ( with <code>kubectl exec <pod_name> -n <ns> -it /bin/bash</code> command ) there is <strong><code>kubernetes.io</code></strong> directory which contain secret token that anyone can get if they do <code>cat token</code> :</p>
<pre><code>root@Ubuntu-VM: kubectl exec app-test-pod -n app-system -it /bin/bash
root@app-test-pod:/var/run/secrets/kubernetes.io/serviceaccount# ls -lhrt
total 0
lrwxrwxrwx 1 root root 12 Oct 11 12:07 token -> ..data/token
lrwxrwxrwx 1 root root 16 Oct 11 12:07 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 13 Oct 11 12:07 ca.crt -> ..data/ca.crt
</code></pre>
<p>This seems like a security threat (or vulnerability). Can someone let me know if there is a way to remove this dependency from the pod, so that I can restrict users (even root users) from accessing this secret if they log in to the pod? Also, if this is possible, then how will pods communicate with the API Server?</p>
| solveit | <p>To clarify a couple of things:</p>
<blockquote>
<p>This seems a security threat (or vulnerability).</p>
</blockquote>
<p>It actually isn't a vulnerability unless you configured it to be one.
The ServiceAccount you are talking about is the <code>default</code> one, which exists in every namespace.
By default that ServiceAccount does not have any permissions that make it unsafe.
If you want to, you <em>can</em> add certain rights to the <code>default</code> ServiceAccount <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">using RBAC</a>. For example, you can configure it to be able to list all Pods in the same namespace, but unless you do that, the ServiceAccount is not considered a vulnerability at all and will not be able to retrieve any useful information.
This applies to <em>all</em> ServiceAccounts, not only the <code>default</code> one.</p>
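<p>Purely as an illustration (do not actually do this unless you need it), granting the <code>default</code> ServiceAccount the right to list Pods in its namespace would look roughly like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-lister
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-sa-pod-lister
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role
  name: pod-lister
  apiGroup: rbac.authorization.k8s.io
</code></pre>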
<blockquote>
<p>Can someone let me know if there is a way to remove this dependency from pod so that I can restrict users (even root users also) to access this secret if they login to pod ?</p>
</blockquote>
<p>Yes it is possible, actually there are two options:</p>
<p>Firstly there is a field called <code>automountServiceAccountToken</code> for the <code>spec</code> section in Pods which you can set to <code>false</code> if you do not want the <code>default</code> ServiceAccount to be mounted at all.</p>
<p>Here is an example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
automountServiceAccountToken: false
[...]
</code></pre>
<p>Other than that you can create/edit a ServiceAccount and assign it the <code>automountServiceAccountToken: false</code> field:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
namespace: default
[...]
</code></pre>
<blockquote>
<p>Also If this is possible then how will pods do communicate with the API Server ?</p>
</blockquote>
<p>Pods actually do not need to communicate with the API server at all.
Even when using features like a <code>livenessProbe</code> it is not necessary for Pods to communicate with the API server at all.
As a matter of fact most Pods <em>never</em> communicate with the API server.
The only reason a Pod would need to communicate with the API server is if it is planning on directly interacting with the cluster. Usually this is never required unless you want to write a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="noreferrer">custom operator</a> or something similar.
You will still be able to use all the functionality a Pod has to offer you even if you do not mount the ServiceAccount because all those features are based around a Kubernetes communicating with your Pod not the other way around (a <code>livenessProbe</code> for example is being evaluated by <code>kubelet</code>, there is no need at all for the Pod to communicate with the API).</p>
| F1ko |
<p>I have a <code>k3s</code> cluster that have system pods with <code>calico</code> policy applied:</p>
<pre><code>kube-system pod/calico-node-xxxx
kube-system pod/calico-kube-controllers-xxxxxx
kube-system pod/metrics-server-xxxxx
kube-system pod/local-path-provisioner-xxxxx
kube-system pod/coredns-xxxxx
app-system pod/my-app-xxxx
</code></pre>
<p>I ran <a href="https://rancher.com/docs/k3s/latest/en/upgrades/killall/" rel="nofollow noreferrer">/usr/local/bin/k3s-killall.sh</a> to clean up containers and networks. Will this clean/remove/reset my calico networking also? (though after <code>killall.sh</code> the iptables of calico still present)</p>
<p><strong>Quoting from the killall.sh link:</strong></p>
<blockquote>
<p>The killall script cleans up containers, K3s directories, and networking components while also removing the iptables chain with all the associated rules.</p>
</blockquote>
<p>It says that networking components will also be cleaned up, but is it Kubernetes networking or any networking applied to the cluster?</p>
| solveit | <p>When you install <code>k3s</code> based on the instructions <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/" rel="nofollow noreferrer">here</a> it won't install Calico CNI by default. There is a need to install Calico CNI <a href="https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart" rel="nofollow noreferrer">separately</a>.</p>
<p>To answer your question, let's analyse the <code>/usr/local/bin/k3s-killall.sh</code> file, especially the part with the <code>iptables</code> command:</p>
<pre><code>...
iptables-save | grep -v KUBE- | grep -v CNI- | iptables-restore
</code></pre>
<p>As you can see, this command only removes <code>iptables</code> <a href="https://unix.stackexchange.com/questions/506729/what-is-a-chain-in-iptables">chains</a> starting with <code>KUBE</code> or <code>CNI</code>.</p>
<p>If you run the command <code>iptables -S</code> on a cluster set up with <code>k3s</code> and the Calico CNI, you can see that the chains used by Calico start with <code>cali-</code>:</p>
<pre><code> iptables -S
-A cali-FORWARD -m comment --comment "cali:vjrMJCRpqwy5oRoX" -j MARK --set-xmark 0x0/0xe0000
-A cali-FORWARD -m comment --comment "cali:A_sPAO0mcxbT9mOV" -m mark --mark 0x0/0x10000 -j cali-from-hep-forward
...
</code></pre>
<p>Briefly answering your questions:</p>
<blockquote>
<p>I ran <a href="https://rancher.com/docs/k3s/latest/en/upgrades/killall/" rel="nofollow noreferrer">/usr/local/bin/k3s-killall.sh</a> to clean up containers and networks. Will this clean/remove/reset my calico networking also ?</p>
</blockquote>
<p>No. There will still be some Calico CNI components left, for example the earlier mentioned <code>iptables</code> chains but also network interfaces:</p>
<pre><code> ip addr
6: calicd9e5f8ac65@if4: <BROADCAST,MULTICAST,UP,LOWER_UP>
7: cali6fcd2eeafde@if4: <BROADCAST,MULTICAST,UP,LOWER_UP>
...
</code></pre>
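<p>You can quickly check for such leftovers yourself, for example:</p>
<pre><code># iptables chains created by Calico (not touched by k3s-killall.sh)
iptables-save | grep -i cali
# leftover Calico network interfaces
ip -o link show | grep cali
</code></pre>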
<blockquote>
<p>It says that networking component will also be cleaned up though, but is it kubernetes networking or any networking applied to cluster ?</p>
</blockquote>
<p>Those are the network components provided by <code>k3s</code> by default, like the earlier mentioned <code>KUBE-</code> and <code>CNI-</code> <code>iptables</code> chains. To get more information on what exactly the <code>k3s-killall.sh</code> script does, I'd recommend reading its <a href="https://get.k3s.io/" rel="nofollow noreferrer">code</a> (the <code>k3s-killall.sh</code> script starting from <code># --- create killall script ---</code>, line 575).</p>
| Mikolaj S. |
<p>Today one of the coredns pods ran into an issue. I checked the coredns pod and its log shows this:</p>
<pre><code>.:53
2022-05-23T08:41:36.664Z [INFO] CoreDNS-1.3.1
2022-05-23T08:41:36.665Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2022-05-23T08:41:36.665Z [INFO] plugin/reload: Running configuration MD5 = 8646128cd34ade07719a0787cce6943e
2022-05-23T09:40:08.490Z [ERROR] plugin/errors: 2 oap. A: dial udp 100.100.2.136:53: i/o timeout
</code></pre>
<p>Currently coredns has 2 pods, and one of the pods has this issue. The DNS pod IP is <code>172.30.112.19</code>, so why did the DNS pod try to connect to <code>100.100.2.136</code>? Why did this happen? What should I do to make it work?</p>
| Dolphin | <p><code>why the dns tried to connect 100.100.2.136?</code></p>
<p>When CoreDNS gets a request for a name outside the cluster domain (e.g. <a href="http://www.google.com" rel="nofollow noreferrer">www.google.com</a>), it forwards the request to the upstream nameserver (likely <code>100.100.2.136:53</code> in your case). You can check the CoreDNS ConfigMap for the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options" rel="nofollow noreferrer">forward . /etc/resolv.conf</a> line.</p>
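<p>For example, you can inspect the Corefile like this (the ConfigMap is usually called <code>coredns</code>, but the name may differ per distribution):</p>
<pre><code>kubectl -n kube-system get configmap coredns -o yaml
</code></pre>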
<p><code>why did this happen? what should I do to make it work?</code></p>
<p>If the other node that runs a CoreDNS pod is functioning correctly, there could be a discrepancy in the nameserver and/or this node's network settings. It is also possible that the CNI agent on the node has malfunctioned and messed up the IP tables. You can try cordoning this node and moving the pod to another node.</p>
| gohm'c |
<p>If I have 2 worker nodes in the k8s cluster like this.</p>
<ul>
<li>worker-1 <- bd1</li>
<li>worker-2</li>
</ul>
<p>I will use openebs device localpv as a storage solution.
Suppose I attach the device to the node worker 2 and delete the worker-1.</p>
<ul>
<li>worker-2 <- bd1</li>
</ul>
<p>Is everything still working? If not, what solutions do you guys use for this case?</p>
<p>Thanks.</p>
| Souji Tendo | <p>In the case of OpenEBS, the node device manager will detect that the device has been moved. Subsequent workloads that claim the PV will be scheduled on worker-2 to consume the resource.</p>
| gohm'c |
<p>I deployed a service <code>myservice</code> to the k8s cluster. Using <code>kubectl describe service ...</code>, I can find that the service IP is <code>172.20.127.114</code>. At the same time, the service endpoints are <code>10.34.188.30:5000,10.34.89.157:5000</code>. How does Kubernetes handle service address to endpoint address translation? Does <code>kube-proxy</code> handle the NAT? Which Linux module does <code>kube-proxy</code> use to handle NAT?</p>
<pre><code>kubectl describe service myservice
Name: myservice
Namespace: default
Labels: app=myservice
app.kubernetes.io/instance=myservice
Annotations: argocd.argoproj.io/sync-wave: 3
Selector: app=myservice
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.127.114
IPs: 172.20.127.114
Port: <unset> 80/TCP
TargetPort: 5000/TCP
Endpoints: 10.34.188.30:5000,10.34.89.157:5000
Session Affinity: None
Events: <none>
</code></pre>
<p><code>iptables</code> is used for setting node firewall rules. My understanding is that <code>iptables</code> does not do NAT.</p>
| yuyang | <p>I hope this helps you.</p>
<h4>Pod to Pod communication</h4>
<ul>
<li>No built-in solution</li>
<li>Expects you to implement a networking solution</li>
<li>But impose fundamental requirements on any implementation to be pluggable
into Kubernetes</li>
</ul>
<h5>K8s requirements of CNI Plugins</h5>
<ul>
<li>Every Pod gets its unique IP address</li>
<li>Pods on the same Node can Communicate with that IP address</li>
<li>Pods on different Node can Communicate with that IP address <strong>without NAT</strong>
(Network Address Translation)</li>
</ul>
<hr />
<h4>Kubernetes Networking Model</h4>
<ul>
<li>All nodes must be able to reach each other, <strong>without NAT</strong></li>
<li>All pods must be able to reach each other, <strong>without NAT</strong></li>
<li>Pods and nodes must be able to reach each other, <strong>without NAT</strong></li>
<li>Each pod is aware of its IP address (<strong>no NAT</strong>)</li>
<li>Pod IP addresses are assigned by the network implementation</li>
</ul>
<hr />
<h4>Summary</h4>
<ul>
<li>The "pod-to-pod network" or "pod network":
<ul>
<li>Provides communication between pods and nodes</li>
<li>Is generally implemented with CNI plugins</li>
</ul>
</li>
<li>The "pod-to-service network":
<ul>
<li>Provides internal communication and load balancing</li>
<li>Is generally implemented with <code>kube-proxy</code></li>
</ul>
</li>
<li>Network policies:
<ul>
<li>Provide firewalling and isolation</li>
<li>Can be bundled with the "pod network" or provided by another component</li>
</ul>
</li>
<li>Inbound traffic can be handled by multiple components:
<ul>
<li>Something like kube-proxy (for NodePort services)</li>
<li>Load balancers (ideally, connected to the pod network)</li>
</ul>
</li>
<li>It is possible to use multiple pod networks in parallel (with "meta-plugins" like CNI-Genie or Multus)</li>
</ul>
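<p>To directly address the NAT part of the question: with its default iptables mode, <code>kube-proxy</code> programs netfilter DNAT rules in the <code>nat</code> table, so the ClusterIP (<code>172.20.127.114:80</code>) is rewritten to one of the endpoint addresses (<code>10.34.x.x:5000</code>). A hedged way to inspect this on a node is shown below (chain names are generated, so yours will differ; in IPVS mode you would use <code>ipvsadm -Ln</code> instead):</p>
<pre><code># service chains programmed by kube-proxy
sudo iptables -t nat -S KUBE-SERVICES | grep myservice
# the actual DNAT rules pointing at the endpoints
sudo iptables -t nat -S | grep DNAT | grep 5000
</code></pre>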
<hr />
<h4>Useful Links</h4>
<ul>
<li><a href="https://github.com/alifiroozi80/CKA/tree/main/CKA#how-cni-plugins-implement-it" rel="nofollow noreferrer">How CNI works?</a></li>
<li><a href="https://github.com/alifiroozi80/CKA/tree/main/CKA#dns-in-kubernetes" rel="nofollow noreferrer">DNS in Kubernetes</a></li>
<li><a href="https://github.com/alifiroozi80/CKA/tree/main/CKA#how-requests-are-forwarded-from-service-to-pod" rel="nofollow noreferrer">How requests are forwarded from Service to Pod
</a></li>
</ul>
| Ali |
<p>One of my projects, which uses Kubernetes to manage the containers, has something called <strong>test</strong>, where the developer has defined connectivity.</p>
<p>I tried to search for it over the internet but found nothing that clears this up.</p>
<p>Can somebody help me to understand this kind and connectivity in test.yml?</p>
<pre><code>kind: test
spec:
connectivity:
to:
from
</code></pre>
| Saurabh Sharma | <p><code>kind: Test</code> is a CustomResource (CR).
CRs can be used in a cluster after <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#create-a-customresourcedefinition" rel="nofollow noreferrer">applying a CustomResourceDefinition (CRD) that describes all the fields</a>.</p>
<p>By doing so you extend Kubernetes with new objects that can be used. This is useful if you want to <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">write your own operator or controller</a>.
Without an operator/controller the CR is <strong>not doing anything</strong>. It will just hold some information that you can look up (similar to a ConfigMap).</p>
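<p>For example, a hypothetical CRD that would register such a <code>Test</code> kind could look like the sketch below. The group name and the schema are pure assumptions here, since they depend on whatever operator your project ships:</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: tests.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Test
    plural: tests
    singular: test
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                connectivity:
                  type: object
                  properties:
                    to:
                      type: string
                    from:
                      type: string
</code></pre>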
<p>Here is an explanation on how Kubernetes works for in-built objects such as a Deployment:</p>
<ol>
<li>You use <code>kubectl apply -f some-deployment.yaml</code></li>
<li>Your call will be sent to the kube-apiserver</li>
<li>The kube-apiserver will save the information (<code>name</code> of the Deployment, <code>replicas</code>, <code>image</code> to use, ...) in etcd</li>
<li>The kube-controller-manager continuously communicates to the kube-apiserver and asks him to show him all information there is regarding Deployments.</li>
<li>The kube-apiserver retrieves the information from etcd and sends it back to the kube-controller-manager</li>
<li>The kube-controller-manager sees that there is a new Deployment (the one you applied) and will now proceed on creating the Pods (and before that the ReplicaSet).</li>
</ol>
<p>As you can see the one that <em>actually</em> creates the Pods is the kube-controller-manager.</p>
<p>In case the kube-controller-manager does not support all the features you expect from Kubernetes, you are able to create your own controller which is called an operator by <a href="https://github.com/operator-framework/operator-sdk" rel="nofollow noreferrer">using the operator SDK</a>. Your operator can then watch all objects that you will create as a CustomResource (such as <code>Test</code>).</p>
<p>To check all the CRDs you applied on your cluster execute:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get crd
</code></pre>
<p>To get a list of <em>all</em> objects that you can apply to Kubernetes execute the following:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl api-resources
</code></pre>
| F1ko |
<p>I have installed a Kubernetes cluster using <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">kind k8s</a> as it was easier to setup and run in my local VM. I also installed Docker separately. I then created a docker image for Spring boot application I built for printing messages to the stdout. It was then added to <strong>kind k8s</strong> local registry. Using this newly created local image, I created a deployment in the kubernetes cluster using <strong>kubectl apply -f config.yaml</strong> CLI command. Using similar method I've also deployed <strong>fluentd</strong> hoping to collect logs from <code>/var/log/containers</code> that would be mounted to fluentD container.</p>
<p>I noticed the <code>/var/log/containers/</code> symlink doesn't exist. However there is <code>/var/lib/docker/containers/</code> and it has folders for some containers that were created in the past. None of the new container IDs seem to exist in <code>/var/lib/docker/containers/</code> either.</p>
<p>I can see logs in the console when I run <code>kubectl logs pod-name</code> even though I'm unable to find the logs in the local storage.</p>
<p>Following the answer given by a Stack Overflow member in another <a href="https://stackoverflow.com/questions/67931771/logs-of-pods-missing-from-var-logs-in-kubernetes">thread</a>, I was able to get some information but not all.</p>
<p>I have confirmed Docker is configured with json logging driver by running the following command.
<code>docker info | grep -i logging</code></p>
<p>When I run the following command (found in the thread given above) I can get the container ID.
<code>kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'</code></p>
<p>However I cannot use it with <code>docker inspect</code>, as Docker is not aware of any such container, which I assume is due to the fact that it is managed by the <strong>kind</strong> control plane.</p>
<p>I would appreciate it if the experts in the forum could assist in identifying where the logs are written and recreating the <code>/var/log/containers</code> symbolic link to access the container logs.</p>
| Jason Nanay | <p>It's absolutely normal that your local installed Docker doesn't have containers running in pod created by kind Kubernetes. Let me explain why.</p>
<p>First, we need to figure out, why kind Kubernetes actually needs Docker. It needs it <strong>not</strong> for running containers inside pods. It needs Docker to <strong><a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">create container which will be Kubernetes node</a></strong> - and on this container you will have pods which will have containers that are you looking for.</p>
<blockquote>
<p><a href="https://sigs.k8s.io/kind" rel="nofollow noreferrer">kind</a> is a tool for running local Kubernetes clusters using Docker container “nodes”.</p>
</blockquote>
<p>So basically the layers are: your VM -> a container hosted on your VM's Docker which acts as a Kubernetes node -> on this container there are pods -> in those pods are containers.</p>
<p>In kind <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">quickstart section</a> you can find more detailed information about image used by kind:</p>
<blockquote>
<p>This will bootstrap a Kubernetes cluster using a pre-built <a href="https://kind.sigs.k8s.io/docs/design/node-image" rel="nofollow noreferrer">node image</a>. Prebuilt images are hosted at<a href="https://hub.docker.com/r/kindest/node/" rel="nofollow noreferrer"><code>kindest/node</code></a>, but to find images suitable for a given release currently you should check the <a href="https://github.com/kubernetes-sigs/kind/releases" rel="nofollow noreferrer">release notes</a> for your given kind version (check with <code>kind version</code>) where you'll find a complete listing of images created for a kind release.</p>
</blockquote>
<p>Back to your question, let's find missing containers!</p>
<p>On my local VM, I set up <a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installation" rel="nofollow noreferrer">kind Kubernetes</a> and installed the <code>kubectl</code> tool. Then, I created an example <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">nginx-deployment</a>. By running <code>kubectl get pods</code> I can confirm pods are working.</p>
<p>Let's find container which is acting as node by running <code>docker ps -a</code>:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2892110866 kindest/node:v1.21.1 "/usr/local/bin/entr…" 50 minutes ago Up 49 minutes 127.0.0.1:43207->6443/tcp kind-control-plane
</code></pre>
<p>Okay, now we can exec into it and find containers. Note that the <code>kindest/node</code> image is not using Docker as the container runtime (it uses containerd), so we will list the containers with <a href="https://github.com/kubernetes-sigs/cri-tools" rel="nofollow noreferrer"><code>crictl</code></a>.</p>
<p>Let's exec into node: <code>docker exec -it 1d2892110866 sh</code>:</p>
<pre><code># ls
bin boot dev etc home kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
#
</code></pre>
<p>Now we are in node - time to check if containers are here:</p>
<pre><code># crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
135c7ad17d096 295c7be079025 47 minutes ago Running nginx 0 4e5092cab08f6
ac3b725061e12 295c7be079025 47 minutes ago Running nginx 0 6ecda41b665da
a416c226aea6b 295c7be079025 47 minutes ago Running nginx 0 17aa5c42f3512
455c69da57446 296a6d5035e2d 57 minutes ago Running coredns 0 4ff408658e04a
d511d62e5294d e422121c9c5f9 57 minutes ago Running local-path-provisioner 0 86b8fcba9a3bf
116b22b4f1dcc 296a6d5035e2d 57 minutes ago Running coredns 0 9da6d9932c9e4
2ebb6d302014c 6de166512aa22 57 minutes ago Running kindnet-cni 0 6ef310d8e199a
2a5e0a2fbf2cc 0e124fb3c695b 57 minutes ago Running kube-proxy 0 54342daebcad8
1b141f55ce4b2 0369cf4303ffd 57 minutes ago Running etcd 0 32a405fa89f61
28c779bb79092 96a295389d472 57 minutes ago Running kube-controller-manager 0 2b1b556aeac42
852feaa08fcc3 94ffe308aeff9 57 minutes ago Running kube-apiserver 0 487e06bb5863a
36771dbacc50f 1248d2d503d37 58 minutes ago Running kube-scheduler 0 85ec6e38087b7
</code></pre>
<p>Here they are. You can also notice that there are other containers which are acting as <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">Kubernetes components</a>.</p>
<p>For further debugging containers I would suggest reading documentation about <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/" rel="nofollow noreferrer">debugging Kubernetes nodes with crictl</a>.</p>
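<p>For example, still inside the node shell, you can read a container's logs directly. The container ID below is taken from the <code>crictl ps -a</code> output above; on your cluster the IDs will of course differ:</p>
<pre><code># print the logs of the first nginx container from the listing above
crictl logs 135c7ad17d096

# the raw log files written by the kubelet live under /var/log/pods;
# /var/log/containers only holds symlinks into that directory
ls /var/log/pods
</code></pre>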
<p>Please also note that on your local VM there is file <code>~/.kube/config</code> which has information needed for <code>kubectl</code> to communicate between your VM and the Kubernetes cluster (in case of kind Kubernetes - docker container running locally).</p>
<p>Hope It will help you. Feel free to ask any question.</p>
<p><strong>EDIT - ADDED INFO HOW TO SETUP MOUNT POINTS</strong></p>
<p>Answering the question from the comment about mounting a directory from the node to the local VM: we need to set up <a href="https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts" rel="nofollow noreferrer">"Extra Mounts"</a>. Let's create the definition needed for kind Kubernetes:</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
# add a mount from /path/to/my/files on the host to /files on the node
extraMounts:
- hostPath: /tmp/logs/
containerPath: /var/log/pods
# optional: if set, the mount is read-only.
# default false
readOnly: false
# optional: if set, the mount needs SELinux relabeling.
# default false
selinuxRelabel: false
# optional: set propagation mode (None, HostToContainer or Bidirectional)
# see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
# default None
propagation: Bidirectional
</code></pre>
<p>Note that I'm using <code>/var/log/pods</code> instead of <code>/var/log/containers/</code> - it is because on the cluster created by kind Kubernetes <code>containers</code> directory has only symlinks to logs in <code>pod</code> directory.</p>
<p>Save this <code>yaml</code>, for example as <code>cluster-with-extra-mount.yaml</code> , then create a cluster using this (create a directory <code>/tmp/logs</code> before applying this command!):</p>
<pre><code>kind create cluster --config=/tmp/cluster-with-extra-mount.yaml
</code></pre>
<p>Then all containers logs will be in <code>/tmp/logs</code> on your VM.</p>
| Mikolaj S. |
<p>I have defined a deployment with two containers:</p>
<pre><code>selector:
matchLabels:
{{ $appLabelKey }}: {{ $appLabelValue }}
template:
metadata:
annotations:
....
labels:
{{ $appLabelKey }}: {{ $appLabelValue }}
app: james
spec:
containers:
- name: container1
image: image1
command: ['sh', '-c', 'while true; do echo Waiting tests execution finished; sleep 3600;done']
imagePullPolicy: IfNotPresent
env:
volumeMounts:
- mountPath: "/mnt/nfs"
name: nfs-volume
- name: james
image: linagora/james-jpa-sample:3.4.0
ports:
- name: smtp-port
containerPort: 25
- name: imap-port
containerPort: 143
</code></pre>
<p>I have a service for the Apache James that's in the second container:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: james-service
spec:
selector:
app: james
ports:
- port: 143
targetPort: 143
protocol: TCP
name: imap-port
- port: 25
targetPort: 25
protocol: TCP
name: smtp-port
type: LoadBalancer
</code></pre>
<p>However, the labels apply to both containers, not just the james container. Is there a way to specify labels per container rather than per deployment?
The service sort of also applies to the other container, which is not working properly: its mount is there, but not working.</p>
| Pavitx | <p>Labels are applied per Pod, not per container, so you cannot label the two containers differently. What you can do is create one Service per container, both selecting the same Pod but targeting different container ports:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: container-1
spec:
selector:
app: web
ports:
- name: container-1
port: 80
targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: container-2
spec:
selector:
app: web
ports:
- name: container-2
port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
spec:
replicas: 1
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: container-1
image: traefik/whoami
ports:
- containerPort: 80
env:
- name: WHOAMI_NAME
value: "container-1"
- name: container-2
image: traefik/whoami
ports:
- containerPort: 8080
env:
- name: WHOAMI_NAME
value: "container-2"
- name: WHOAMI_PORT_NUMBER
value: "8080"
...
</code></pre>
<p>Apply the spec and run <code>kubectl port-forward svc/container-1 8001:80</code> and <code>kubectl port-forward svc/container-2 8002:80</code>. To reach container 1 use <code>curl localhost:8001</code>; to reach the other container in the same pod use <code>curl localhost:8002</code>.</p>
| gohm'c |
<p>Hey there, so I was trying to deploy my first simple web app (no database) on minikube, but this ImagePullBackOff error keeps appearing in the pod.
Yes, I have checked the image name and tag several times;
Here are the logs and yml files.</p>
<pre><code>Namespace: default
Priority: 0
Service Account: default
Labels: app=nodeapp1
pod-template-hash=589c6bd468
Annotations: <none>
Status: Pending
Controlled By: ReplicaSet/nodeapp1-deployment-589c6bd468
Containers:
nodeserver:
Container ID:
Image: ayushftw/nodeapp1:latest
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k6mkb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-k6mkb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapOptional: <nil>
DownwardAPI: true
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m3s default-scheduler Successfully assigned default/nodeapp1-deployment-589c6bd468-5lg2n to minikube
Normal Pulling 2m2s kubelet Pulling image "ayushftw/nodeapp1:latest"
Warning Failed 3s kubelet Failed to pull image "ayushftw/nodeapp1:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 3s kubelet Error: ErrImagePull
Normal BackOff 2s kubelet Back-off pulling image "ayushftw/nodeapp1:latest"
Warning Failed 2s kubelet Error: ImagePullBackOff
</code></pre>
<p>deployment.yml file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodeapp1-deployment
labels:
app: nodeapp1
spec:
replicas: 1
selector:
matchLabels:
app: nodeapp1
template:
metadata:
labels:
app: nodeapp1
spec:
containers:
- name: nodeserver
image: ayushftw/nodeapp1:latest
ports:
- containerPort: 3000
</code></pre>
<p>service.yml file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nodeapp1-service
spec:
selector:
app: nodeapp1
type: LoadBalancer
ports:
- protocol: TCP
port: 3000
targetPort: 3000
nodePort: 31011
</code></pre>
<p>Please help if anybody knows anything about this.</p>
| ayush | <p>I think your internet connection is slow. The timeout for pulling an image is <code>120</code> seconds, so the kubelet could not pull the image in under <code>120</code> seconds (hence the <code>context deadline exceeded</code> error).</p>
<p>First, pull the image via <code>Docker</code></p>
<pre class="lang-bash prettyprint-override"><code>docker image pull ayushftw/nodeapp1:latest
</code></pre>
<p>Then load the downloaded image to <code>minikube</code></p>
<pre class="lang-bash prettyprint-override"><code>minikube image load ayushftw/nodeapp1:latest
</code></pre>
<p>And then everything will work because now the kubelet will use the image that is already stored on the node.</p>
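<p>One thing to keep in mind: with the <code>:latest</code> tag the default <code>imagePullPolicy</code> is <code>Always</code>, so the kubelet may still try to contact the registry. If you want to be sure the locally loaded image is used, set the pull policy explicitly in the Deployment, for example:</p>
<pre><code>      containers:
        - name: nodeserver
          image: ayushftw/nodeapp1:latest
          imagePullPolicy: Never   # or IfNotPresent - use the image loaded into minikube
          ports:
            - containerPort: 3000
</code></pre>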
| Ali |
<p>So I've a cron job like this:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-cron-job
spec:
schedule: "0 0 31 2 *"
failedJobsHistoryLimit: 3
successfulJobsHistoryLimit: 1
concurrencyPolicy: "Forbid"
startingDeadlineSeconds: 30
jobTemplate:
spec:
backoffLimit: 0
activeDeadlineSeconds: 120
...
</code></pre>
<p>Then i trigger the job manually like so:</p>
<pre><code>kubectl create job my-job --namespace precompile --from=cronjob/my-cron-job
</code></pre>
<p>But it seems like I can trigger the job as often as I want and the <code>concurrencyPolicy: "Forbid"</code> is ignored.</p>
<p>Is there a way so that manually triggered jobs will respect this or do I have to check this manually?</p>
| natschz | <blockquote>
<p>Note that concurrency policy only applies to the jobs created by the same cron job.</p>
</blockquote>
<p>The <code>concurrencyPolicy</code> field only applies to jobs created by the same cron job, as stated in the documentation: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy</a></p>
<p>When executing <code>$ kubectl create job my-job --namespace precompile --from=cronjob/my-cron-job</code> you are essentially creating a one-time job on its own that uses the <code>spec.jobTemplate</code> field as a reference to create it. Since <code>concurrencyPolicy</code> is a cronjob field, it is not even being evaluated.</p>
<p><strong>TL;DR</strong></p>
<p>This actually is the expected behavior. Manually created jobs are not affected by <code>concurrencyPolicy</code>. There is no flag you could pass to change this behavior.</p>
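<p>If you want a manual trigger that behaves <em>as if</em> <code>concurrencyPolicy: "Forbid"</code> applied, you have to check for running jobs yourself before creating a new one. A rough sketch (it looks at all Jobs in the namespace - narrow it down with a label selector if your <code>jobTemplate</code> sets labels):</p>
<pre><code># only create the manual job when no job in the namespace is still active
active=$(kubectl get jobs -n precompile -o jsonpath='{.items[*].status.active}')
if [ -z "$active" ]; then
  kubectl create job my-job --namespace precompile --from=cronjob/my-cron-job
else
  echo "Skipping: there are still active jobs"
fi
</code></pre>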
| F1ko |
<p>While working with Kubernetes for some months now, I have found a nice way to use a single existing domain name and expose the cluster IP through a sub-domain, and also most of the microservices through different sub-sub-domains, using the ingress controller.</p>
<p>My ingress example code:</p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
name: cluster-ingress-basic
namespace: ingress-basic
selfLink: >-
/apis/networking.k8s.io/v1beta1/namespaces/ingress-basic/ingresses/cluster-ingress-basic
uid: 5d14e959-db5f-413f-8263-858bacc62fa6
resourceVersion: '42220492'
generation: 29
creationTimestamp: '2021-06-23T12:00:16Z'
annotations:
kubernetes.io/ingress.class: nginx
managedFields:
- manager: Mozilla
operation: Update
apiVersion: networking.k8s.io/v1beta1
time: '2021-06-23T12:00:16Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:kubernetes.io/ingress.class': {}
'f:spec':
'f:rules': {}
- manager: nginx-ingress-controller
operation: Update
apiVersion: networking.k8s.io/v1beta1
time: '2021-06-23T12:00:45Z'
fieldsType: FieldsV1
fieldsV1:
'f:status':
'f:loadBalancer':
'f:ingress': {}
spec:
rules:
- host: microname1.subdomain.domain.com
http:
paths:
- pathType: ImplementationSpecific
backend:
serviceName: kylin-job-svc
servicePort: 7070
- host: microname2.subdomain.domain.com
http:
paths:
- pathType: ImplementationSpecific
backend:
serviceName: superset
servicePort: 80
- {}
status:
loadBalancer:
ingress:
- ip: xx.xx.xx.xx
</code></pre>
<p>With this configuration:</p>
<ol>
<li>microname1.subdomain.domain.com points to Apache Kylin</li>
<li>microname2.subdomain.domain.com points to Apache Superset</li>
</ol>
<p>This way all microservices can be exposed using the same cluster load balancer (IP) but different sub-sub-domains.</p>
<p>I tried to do the same for the SQL Server but it is not working. I am not sure why, but I have the feeling the reason is that SQL Server communicates using TCP and not HTTP.</p>
<pre><code>- host: microname3.subdomain.domain.com
http:
paths:
- pathType: ImplementationSpecific
backend:
serviceName: mssql-linux
servicePort: 1433
</code></pre>
<p>Any ideas on how I can do the same for TCP services?</p>
| Stavros Koureas | <p>Your understanding is good: by default the NGINX Ingress Controller only supports HTTP and HTTPS traffic configuration (Layer 7), so this is probably why your SQL Server is not working.</p>
<p>Your SQL service operates over plain TCP connections, so it <a href="https://stackoverflow.com/questions/56798909/resolve-domain-name-uri-when-listening-to-tcp-socket-in-c-sharp/56799133#56799133">does not take into consideration the custom domains that you are trying to set up, as they resolve to the same IP address anyway</a>.</p>
<p>The solution for your issue is not to use custom sub-domain(s) for this service but to set up <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">TCP service exposure in the NGINX Ingress Controller</a>. For example, you can make this SQL service available on the ingress IP on port 1433:</p>
<blockquote>
<p>Ingress controller uses the flags <code>--tcp-services-configmap</code> and <code>--udp-services-configmap</code> to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <code><namespace/service name>:<service port>:[PROXY]:[PROXY]</code></p>
</blockquote>
<p>To set it up you can follow the steps provided in the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">official NGINX Ingress documentation</a>, but there are also some more detailed instructions on StackOverflow, for example <a href="https://stackoverflow.com/questions/61430311/exposing-multiple-tcp-udp-services-using-a-single-loadbalancer-on-k8s/61461960#61461960">this one</a>.</p>
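<p>As a rough sketch (the namespaces are assumptions - adjust them to wherever your ingress controller and the SQL service actually live), the ConfigMap referenced by <code>--tcp-services-configmap</code> could look like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-basic          # namespace of the ingress controller
data:
  # external port : <namespace>/<service name>:<service port>
  "1433": "default/mssql-linux:1433"
</code></pre>
<p>Remember to also expose port 1433 on the ingress controller's Service and to start the controller with <code>--tcp-services-configmap=ingress-basic/tcp-services</code>, otherwise the mapping will never be used.</p>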
| Mikolaj S. |
<p>Currently it takes quite a long time before the pod can be terminated after a kubectl delete command. I have the feeling that it could be because of the sleep command.</p>
<p>How can I make the container stop faster?
What best practices should I use here?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
template:
spec:
containers:
- image: alpine
..
command:
- /bin/sh
- -c
- |
trap : TERM INT
while true; do
# some code to check something
sleep 10
done
</code></pre>
<p>Is my approach with "trap: TERM INT" correct? At the moment I don't see any positive effect...</p>
<p>When I terminate the pod it takes several seconds for the command to come back.</p>
<pre><code>kubectl delete pod my-pod
</code></pre>
| user5580578 | <p>Adding <code>terminationGracePeriodSeconds</code> to your spec will do it:</p>
<pre><code>...
spec:
template:
spec:
terminationGracePeriodSeconds: 10 # <-- default is 30, can go as low as 0 to send SIGTERM immediately.
containers:
- image: alpine
</code></pre>
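<p>If you also want the loop itself to react to the signal right away - a trapped signal is only handled after the foreground <code>sleep</code> finishes - a common pattern is to run <code>sleep</code> in the background and <code>wait</code> on it. A minimal sketch, assuming that simply exiting on SIGTERM/SIGINT is acceptable:</p>
<pre><code>#!/bin/sh
# exit immediately when SIGTERM/SIGINT arrives
trap 'exit 0' TERM INT
while true; do
  # some code to check something
  sleep 10 &            # sleep in the background ...
  wait $!               # ... so the trap can interrupt the wait
done
</code></pre>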
| gohm'c |
<p>Let us assume we are the owners of a Kubernetes cluster and we give other users in our organization access to individual namespaces, so they are not supposed to know what is going on in other namespaces.</p>
<p>If user A deploys a certain resource like a Grafana-Prometheus monitoring stack to namespace A, how do we ensure that he cannot use the monitoring stack to see anything from namespace B, to which he should not have any access?</p>
<p>Of course, we will have to limit the rights of user A anyhow, but how do we automatically limit the rights of the resources he deploys in namespace A? If you have any suggestions, perhaps with some Kubernetes configuration examples, that would be great.</p>
| tobias | <p>The most important aspects of this question are controlling the access permissions of the service accounts which will be used in the Pods, and a network policy which will limit the traffic within the namespace.</p>
<p>Hence we arrive to this algorithm:</p>
<p><strong>Prerequisite:</strong>
Creating the user and namespace</p>
<pre><code>sudo useradd user-a
kubectl create ns ns-user-a
</code></pre>
<ol>
<li>limiting access permission of user-a to the namespace ns-user-a.</li>
</ol>
<pre><code>kubectl create clusterrole permission-users --verb=* --resource=*
kubectl create rolebinding permission-users-a --clusterrole=permission-users --user=user-a --namespace=ns-user-a
</code></pre>
<ol start="2">
<li>limiting all the service accounts access permission of namespace ns-user-a.</li>
</ol>
<pre><code>kubectl create clusterrole permission-serviceaccounts --verb=* --resource=*
kubectl create rolebinding permission-serviceaccounts --clusterrole=permission-serviceaccounts --namespace=ns-user-a --group=system:serviceaccounts:ns-user-a
kubectl auth can-i create pods --namespace=ns-user-a --as-group=system:serviceaccounts:ns-user-a --as sa
</code></pre>
<ol start="3">
<li>A network policy in namespace ns-user-a to limit incoming traffic from other namespaces.</li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces
namespace: ns-user-a
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- podSelector: {}
</code></pre>
<p><strong>Edit: Allowing traffic from selective namespaces</strong></p>
<p>Assign a custom label to the monitoring namespace.</p>
<pre><code>kubectl label ns monitoring nsname=monitoring
</code></pre>
<p>Or use the following <strong>reserved label from Kubernetes</strong> to make sure nobody can edit or update it. By convention this label has "monitoring" as its value for the "monitoring" namespace.</p>
<p><a href="https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetes-io-metadata-name" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetes-io-metadata-name</a></p>
<pre><code>kubernetes.io/metadata.name
</code></pre>
<p>Applying a network policy to allow traffic from internal and monitoring namespace.</p>
<p>Note: Network policies always add up. So you can keep both or you can only keep the new one. I am keeping both here, for example purposes.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-only-monitoring-and-inernal
namespace: ns-user-a
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- podSelector: {} # allows traffic from ns-user-a namespace (same as earlier)
- namespaceSelector: # allows traffic from monitoring namespace
matchLabels:
kubernetes.io/metadata.name: monitoring
</code></pre>
| Rajesh Dutta |
<p>Last year, I built a K8S cluster on AWS with <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a></p>
<p>Now, one year later, I don't remember which private keys I used. I must connect to my VM through SSH, but I don't remember where I put the private key on my local machine. Is there any way to find its location?</p>
| Juliatzin | <p>Connecting to a Kubernetes cluster usually has nothing to do with SSH keys. In order to be able to execute commands onto a Kubernetes cluster you need to have a <code>kubeconfig</code> which will be used by <code>kubectl</code>.
In case you are searching for your <code>kubeconfig</code> it is usually located at <code>~/.kube/config</code> but the path can vary depending onto your setup (you can set the path to anything by defining an environment variable like so: <code>export KUBECONFIG=/some/path/to/a/kubeconfig</code>).</p>
<p>SSH keys on the other hand are stored in a hidden folder called <code>.ssh</code> inside your home directory (<code>~/.ssh/</code>). In there you will find both private and public keys; public keys usually end on <code>.pub</code>.
In case you do not have a <code>~/.ssh/</code> directory check other users on your machine, perhaps you created them for the root user.
Other than that you might also want to check your <code>history</code> as the private key location can also be specified with <code>ssh -i</code> as in <code>ssh -i /path/to/private_key user@remote-machine</code>, perhaps you saved it somewhere else and specified the key that way.</p>
| F1ko |
<p>I need to deploy several containers to a Kubernetes cluster. The objective is automating the deployment of Kafka, Kafka Connect, PostgreSQL, and others. Some of them already provide a Helm operator that we could use. So my question is, can we somehow use those helm operators inside our operator? If so, what would be the best approach?</p>
<p>The only method I can think of so far is calling the helm setup console commands from within a deployment app.
Another approach, without using those helm files, would be implementing the functionality of each operator in my own operator, which doesn't seem to make much sense since what I need was already developed and is public.</p>
<p>I'm very new to operator development so please excuse me if this is a silly question.</p>
<p><strong>Edit</strong>:
The main purpose of the operator is to deploy X databases. Along with that we would like to have a single operator/bundle that deploys the whole system right away. Does it even make sense to use an operator to bundle, even if we have additional tasks for some of the containers? With this, the user would specify in the yaml file:</p>
<pre><code>databases
- type: "postgres"
name: "users"
- type: "postgres"
name: "purchases"
</code></pre>
<p>and 2 PostgreSQL databases would be created. Those databases could then be mentioned in other yaml files or further down in the same yaml file. Case in point: the information from the databases will be pulled by Debezium (another container), so Debezium needs to know their addresses. So the operator should create a service and associate the service address with the database name.</p>
<p>This is part of an ETL system. The idea is that the operator would allow an easy deployment of the whole system by taking care of most of the configuration.
With this in mind, we were thinking if it wasn't possible to pick on existing Helm operators (or another kind of operator) and deploy them with small modifications to the configurations such as different ports for different databases.</p>
<p>But after reading F1ko's reply I gained new perspectives. Perhaps this is not possible with an operator as initially expected?</p>
<p><strong>Edit2</strong>: Clarification of edit1.</p>
| KON | <p>Just for clarification purposes:</p>
<ul>
<li><p>Helm is a package manager with which you can install an application onto the cluster in a bundled matter: it basically provides you with all the necessary YAMLs, such as ConfigMaps, Services, Deployments, and whatever else is needed to get the desired application up and running in a proper way.</p>
</li>
<li><p>An Operator is essentially a controller. In Kubernetes, there are lots of different controllers that define the "logic" whenever you do something (e.g. the replication-controller adds more replicas of a Pod if you decide to increment the <code>replicas</code> field). There simply are too many controllers to list them all and run them individually, which is why they are compiled into a single binary known as the kube-controller-manager.
Custom-built controllers are called operators for easier distinction. These operators simply watch over the state of certain "things" and are going to perform an action if needed. Most of the time these "things" are going to be CustomResources (CRs) which are essentially new Kubernetes objects that were introduced to the cluster by applying CustomResourceDefinitions (CRDs).</p>
</li>
</ul>
<p>With that being said, it is not uncommon to use helm to deploy operators, however, try to avoid the term "helm operator" as it is actually referring to a very specific operator and may lead to confusion in the future: <a href="https://github.com/fluxcd/helm-operator" rel="nofollow noreferrer">https://github.com/fluxcd/helm-operator</a></p>
<blockquote>
<p>So my question is, can we somehow use those helm operators inside our operator?</p>
</blockquote>
<p>Although you may build your own operator with the <a href="https://github.com/operator-framework/operator-sdk" rel="nofollow noreferrer">operator-sdk</a> which then lets you deploy or trigger certain events from other operators (e.g. by editing their CRDs) there is no reason to do so.</p>
<blockquote>
<p>The only method I can think of so far is calling the helm setup console commands from within a deployment app.</p>
</blockquote>
<p>Most likely what you are looking for is a proper CI/CD workflow.
Simply commit the helm chart and <a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer"><code>values.yaml</code></a> files that you are using during <code>helm install</code> inside a <a href="https://git-scm.com/" rel="nofollow noreferrer">Git</a> repository and have a CI/CD tool (such as <a href="https://docs.gitlab.com/ee/ci/" rel="nofollow noreferrer">GitLab</a>) deploy them to your cluster every time you make a new commit.</p>
<p><strong>Update:</strong> As the OP edited the question and left a comment, I decided to update this post:</p>
<blockquote>
<p>The main purpose of the operator is to deploy X databases. Along with that we would like to have a single operator/bundle that deploys the whole system right away.</p>
</blockquote>
<blockquote>
<p>Do you think it makes sense to bundle operators together in another operator, as one would do with Helm?</p>
</blockquote>
<p>No, it does not make sense at all. That's exactly what helm is there for. With helm you can bundle things; you can even bundle multiple helm charts together, which may be what you are actually looking for. You can have one helm chart that passes the needed values down to the actual operator helm charts and therefore use something like the service-name in multiple locations.</p>
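<p>As a small sketch of such an "umbrella" chart - the chart names, versions and repository URLs below are placeholders for whatever you actually depend on - the <code>Chart.yaml</code> simply lists the existing charts as dependencies:</p>
<pre><code>apiVersion: v2
name: etl-stack
version: 0.1.0
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: "https://charts.bitnami.com/bitnami"
  - name: kafka
    version: "22.x.x"
    repository: "https://charts.bitnami.com/bitnami"
</code></pre>
<p>Values for each sub-chart are then set in the umbrella chart's <code>values.yaml</code> under the sub-chart's name, which is how you would reuse something like a service name in several places.</p>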
<blockquote>
<p>In the case of operators inside operators, is it still necessary to configure every sub-operator individually when configuring the operator?</p>
</blockquote>
<p>As mentioned above, it does not make any sense to do it like that, it is just an over-engineered approach. However, if you truly want to go with the operator approach there are basically two approaches you could take:</p>
<ul>
<li>Write an operator that configures the other operators by changing their CRs, ConfigMaps etc.; with this approach you will have a <em>somewhat</em> lightweight operator, however you will have to ensure it is compatible at all times with all the different operators you want it to interfere with (when they change to a new <code>apiVersion</code> with breaking changes, introduce new CRs or anything of that kind, you will have to adapt again).</li>
<li>Extract the entire logic from the existing operators into your operator (i.e. rebuild something that already exists); with this approach you will have a big monolithic application that will be a huge pain to maintain, as you will continuously have to update your code whenever there is an update in the upstream operator.</li>
</ul>
<p>Hopefully it is clear by now that building your own operator for "operating" other operators comes with lot of painful dependencies and should not be the way to go.</p>
<blockquote>
<p>Is it possible to deploy different configurations of images? Such as databases configured with different ports?</p>
</blockquote>
<p>Good operators and helm charts let you do that out of the box, either via a respective CR / ConfigMap or a <code>values.yaml</code> file, however, that now depends on what solutions you are going to use. So in general the answer is: yes, it is possible if supported.</p>
| F1ko |
<p>I have a service running as a <code>DaemonSet</code> across a number of kubernetes nodes. I would like to make some policy decisions based on the labels of the node on which each <code>DaemonSet</code> pod is running. From within a container, how do I know on which node this pod is running? Given this information, looking up the node labels through the API should be relatively easy.</p>
<p>What I'm doing right now is passing in the node name as an environment variable, like this:</p>
<pre><code> env:
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
</code></pre>
<p>Is that the correct way of getting at this information? More specifically, is there any sort of API call that will answer the question, "where am I?"?</p>
| larsks | <p>Answering your question <em>"How do I know what node I'm on?"</em>:</p>
<p>Your approach seems to be the best one - to use <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api" rel="nofollow noreferrer">the Downward API</a>, which allows us to access some of the pod's or container's fields. In our case it is the pod's field <code>spec.nodeName</code> that is accessible.</p>
<p>You can use two options to expose pod information using this API:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">environment variable(s)</a></li>
<li><a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">through file(s)</a></li>
</ul>
<p>You can also access the <a href="https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/" rel="nofollow noreferrer">Kubernetes API from a pod</a> and get this information from there, but that is more of a workaround. It's better to take advantage of the fact that the node name is one of the pod's fields and use the previously described Downward API, which is made for this purpose and officially described in the Kubernetes documentation.</p>
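<p>If you then need the node's labels at runtime, you can combine both: inject <code>spec.nodeName</code> as shown in the question and query the API with the pod's service account token. A minimal sketch, assuming the service account has been granted RBAC permission to <code>get</code> nodes:</p>
<pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# returns the full Node object; the labels are under .metadata.labels
curl -sS --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/nodes/${NODE_NAME}"
</code></pre>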
| Mikolaj S. |
<p>I am currently facing the current situation. I want to give users access to individual namespaces, such that they can</p>
<ul>
<li>create and deploy ressources with Helm charts (for instance, from Bitnami)</li>
</ul>
<p>On the other hand the users are not supposed to</p>
<ul>
<li>create/retrieve/modify/delete RBAC settings like ServiceAccounts, RoleBindings, Roles, NetworkPolicies</li>
<li>get hands on secrets associated to ServiceAccounts</li>
</ul>
<p>Of course, the crucial thing is to define the best Role for it here. Likely, the following is not the best idea here:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: role
namespace: example-namespace
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
</code></pre>
<p>Hence, it would be great if you could suggest a sensible approach so that users can work as freely as possible, yet do not get their hands on the more "dangerous" resources.</p>
<p>In essence, I want to follow the workflow outlined here (<a href="https://jeremievallee.com/2018/05/28/kubernetes-rbac-namespace-user.html" rel="nofollow noreferrer">https://jeremievallee.com/2018/05/28/kubernetes-rbac-namespace-user.html</a>). So what matters most is that individual users in one namespace cannot read the secrets of other users in the same namespace, such that they cannot authenticate with the credentials of someone else.</p>
| tobias | <p>In my opinion the following strategy will help:</p>
<ol>
<li>RBAC to limit access to service accounts of own namespace only.</li>
<li>Make sure <strong><code>automountServiceAccountToken: false</code></strong> is set at the ServiceAccount and Pod level using policies. This helps in protecting secrets when there is a node security breach. The token will then only be available at execution time and will not be stored in the Pod.</li>
</ol>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server</a></p>
<ol start="3">
<li>Encrypt secrets stored in etcd using KMS (recommended). But if you don't have a KMS provider then you can also choose other providers to ensure minimum security.</li>
</ol>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#providers" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#providers</a></p>
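<p>To make point 1 more concrete: Kubernetes RBAC is purely additive (there is no "deny" rule), so a practical pattern is to enumerate what the users may manage and simply leave out Secrets and the RBAC resources. A rough sketch - the resource list is an assumption, extend it to whatever your users actually need:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: restricted-user
  namespace: example-namespace
rules:
  - apiGroups: ["", "apps", "batch", "autoscaling"]
    resources:
      - pods
      - pods/log
      - services
      - configmaps
      - persistentvolumeclaims
      - deployments
      - statefulsets
      - daemonsets
      - replicasets
      - jobs
      - cronjobs
      - horizontalpodautoscalers
    verbs: ["*"]
  # secrets, serviceaccounts, roles, rolebindings and networkpolicies are
  # intentionally not listed, so they remain forbidden
</code></pre>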
| Rajesh Dutta |
<p>My purpose is to get all PDBs in AKS across all namespaces. If a namespace doesn't have a PDB, the script should just skip it and move on to the next namespace.
In my local env, the bash script can list PDBs in all namespaces.
But in the Azure pipeline, if a namespace doesn't contain a PDB, the whole bash script just ends and throws an error:
No resources found in default namespace.
##[error]Bash exited with code '1'.</p>
<p>My Bash script:</p>
<pre><code>namespace=$(kubectl get namespace|egrep -iv 'name|kube-system'|awk '{ print $1 }')
echo "${namespace}"
for ns in ${namespace};
do
poddisruptionbudget=$(kubectl get pdb -n ${ns}|awk '{ print $1 }'|grep -iv 'name');
for pdb in ${poddisruptionbudget};
do
echo "PDB $pdb in namespace $ns"
kubectl get -o=yaml pdb $pdb -n $ns >/azp/pdb/$ns-${pdb}.yaml;
kubectl delete pdb $pdb -n $ns
done;
done
</code></pre>
<p>Azure Pipeline Code:</p>
<pre><code>- task: Bash@3
displayName: Upgrade AKS cluster
inputs:
filePath: '$(System.DefaultWorkingDirectory)/aks-operation/aks_upgrade.sh'
</code></pre>
<p>I try to run a shell script in an Azure DevOps pipeline to get all PDBs in AKS.
However, some namespaces don't contain a PDB, and the Azure pipeline exits with a failure:</p>
<pre><code>No resources found in default namespace.
##[error]Bash exited with code '1'.
</code></pre>
| hola | <p>In my whole script I used to have:</p>
<pre><code>#!/bin/bash
set -e
</code></pre>
<p>Deleting <code>set -e</code> fixed it - the Azure pipeline is weird.</p>
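<p>For reference, the likely underlying cause is that <code>grep</code> exits with a non-zero code when it finds no match (a namespace without PDBs), and with <code>set -e</code> the whole script aborts on that. If you prefer to keep <code>set -e</code>, you can swallow the harmless non-match instead, for example:</p>
<pre><code>poddisruptionbudget=$(kubectl get pdb -n "${ns}" | awk '{ print $1 }' | grep -iv 'name' || true)
</code></pre>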
| hola |
<p>Upon digging deeper into the Kubernetes architecture it seems that all Kubernetes clusters (both on-premises and cloud) must use Linux as their control-plane (a.k.a. master) nodes.</p>
<p>With that being said the following questions come to mind:</p>
<ul>
<li>How come this is the case?</li>
<li>Why couldn't Windows be used as the control-plane?</li>
</ul>
| Hajed.Kh | <p>First of all I want to say that from a technical perspective it <em>would</em> be possible to have a control plane running onto Windows. It is totally doable, however, no one wants to invest time into a solution which is worse than what already exist and it would take quite some time in order to make this work. Why eat soup with a fork if you already have a spoon?</p>
<p>Now one might wonder if I am exaggerating or not. So I'll try to explain some of the issues that Windows has when it comes to containerization. For that to happen I'll have to explain <strong>how containers work</strong> first:</p>
<p>Nowadays whenever people are talking about containers they are talking about Linux containers (which I am also going to do in this answer unless stated otherwise). Containers are essentially using Linux Kernel features, most importantly (but not limited to) <a href="https://man7.org/linux/man-pages/man7/namespaces.7.html" rel="nofollow noreferrer">Linux namespaces</a>. There are many different namespaces (PID, Network, ...) that can be used for "isolation". As an example one can create a new PID namespace, assign it a process and that process will only be able to see itself as the running process (because it is "isolated"). Sounds familiar? Well, if you ever executed <code>ps aux</code> in a container this is what is going to happen.
Since it is not possible to cover all the different kinds of Linux features that are essential in order for containers to work in a single post, I hope that by now it is clear that "normal" containers are essentially dependent on Linux.</p>
<p>Okay, so if what I am saying is true, <strong>how can containers work on Windows at all</strong>?</p>
<p>Guess what...they don't. What Windows is actually doing is spinning up a lightweight Linux machine in the background which then hosts containers. Sounds ridiculous? Well, it is. <a href="https://learn.microsoft.com/en-us/dotnet/architecture/microservices/container-docker-introduction/docker-defined" rel="nofollow noreferrer">Here</a> is a passage out of Microsoft's documentation:</p>
<blockquote>
<p>However, Windows images can run only on Windows hosts and Linux images can run on Linux hosts and Windows hosts (using a Hyper-V Linux VM, so far), where host means a server or a VM.</p>
</blockquote>
<p>So what about <strong>Windows containers</strong> then (as opposed to Linux containers)?</p>
<p>Windows containers do run natively on Windows by using features of the Windows kernel, similar to how Linux containers do. Developers tried to mimic the behavior of Linux containers as much as possible, however, due to poor design of the Windows kernel this simply is not possible and many hacks had to be used. As one can imagine, many issues come with that decision, too many to actually mention them all. Just to mention one: Windows containers are way bigger than Linux containers. It is very common for Windows containers to actually reach gigabyte sizes. <a href="https://devblogs.microsoft.com/dotnet/we-made-windows-server-core-container-images-40-smaller/" rel="nofollow noreferrer">Even after making Windows Server Core images smaller by 40% back in 2019</a> the insider image was still over 1GB (uncompressed even over 2.5GB).</p>
<p>With all that overhead in mind, Linux simply is superior in every way when it comes to containerization (and many other things as well) and there has never been a need to have a Windows control plane.</p>
<p><strong>TL;DR</strong></p>
<p>Because Windows is a poor operating system when it comes to containerization (and many other things).</p>
| F1ko |
<p>Each node has a PodCIDR assigned, is it unique cross all nodes? I did a test on kind multiple nodes, looks like they are unique:</p>
<pre><code>➜ ~ sh -x a.sh
+ kubectl describe nodes multi-nodes-worker
+ grep PodCIDRs
PodCIDRs: 10.244.3.0/24
+ + grep PodCIDRs
kubectl describe nodes multi-nodes-worker2
PodCIDRs: 10.244.5.0/24
+ grep PodCIDRs
+ kubectl describe nodes multi-nodes-worker3
PodCIDRs: 10.244.4.0/24
+ kubectl describe nodes multi-nodes-worker4
+ grep PodCIDRs
PodCIDRs: 10.244.1.0/24
+ kubectl describe nodes multi-nodes-worker5
+ grep PodCIDRs
PodCIDRs: 10.244.2.0/24
</code></pre>
| Daniel Wu | <p>A unique CIDR range is assigned to each node by the CNI plugin (Flannel, or kindnet in kind's case) so that pod IPs are unique across the cluster and it is known which node to reach for a particular pod.</p>
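<p>Instead of grepping <code>kubectl describe</code> output, you can also list the assigned range of every node in one go:</p>
<pre><code>kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
</code></pre>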
| gohm'c |
<p>I have successfully deployed</p>
<ul>
<li>prometheus via helm chart <strong>kube-prometheus-stack</strong> (<a href="https://prometheus-community.github.io/helm-charts" rel="nofollow noreferrer">https://prometheus-community.github.io/helm-charts</a>)</li>
<li>prometheus-adapter via helm chart <strong>prometheus-adapter</strong> (<a href="https://prometheus-community.github.io/helm-charts" rel="nofollow noreferrer">https://prometheus-community.github.io/helm-charts</a>)</li>
</ul>
<p>using default configuration with slight customization.</p>
<p>I can access prometheus, grafana and alertmanager, query metrics and see fancy charts.</p>
<p>But prometheus-adapter keeps complaining on startup that it can't access/discover metrics:</p>
<pre><code>I0326 08:16:52.266095 1 adapter.go:98] successfully using in-cluster auth
I0326 08:16:52.330094 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/var/run/serving-cert/tls.crt::/var/run/serving-cert/tls.key"
E0326 08:16:52.334710 1 provider.go:227] unable to update list of all metrics: unable to fetch metrics for query "{namespace!=\"\",__name__!~\"^container_.*\"}": bad_response: unknown response code 404
</code></pre>
<p>I've tried various prometheus URLs in the prometheus-adapter Deployment command line argument but the problem is more or less the same.</p>
<p>E.g. some of the URLs I've tried are</p>
<pre><code>--prometheus-url=http://prometheus-operated.prom.svc:9090
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090
</code></pre>
<p>There are the following services / pods running:</p>
<pre><code>$ kubectl -n prom get pods
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 16h
prometheus-adapter-76fcc79b7b-7xvrm 1/1 Running 0 10m
prometheus-grafana-559b79b564-bh85n 2/2 Running 0 16h
prometheus-kube-prometheus-operator-8556f58759-kl84l 1/1 Running 0 16h
prometheus-kube-state-metrics-6bfcd6f648-ms459 1/1 Running 0 16h
prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 1 16h
prometheus-prometheus-node-exporter-2x6mt 1/1 Running 0 16h
prometheus-prometheus-node-exporter-bns9n 1/1 Running 0 16h
prometheus-prometheus-node-exporter-sbcjb 1/1 Running 0 16h
$ kubectl -n prom get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 16h
prometheus-adapter ClusterIP 10.0.144.45 <none> 443/TCP 16h
prometheus-grafana ClusterIP 10.0.94.160 <none> 80/TCP 16h
prometheus-kube-prometheus-alertmanager ClusterIP 10.0.0.135 <none> 9093/TCP 16h
prometheus-kube-prometheus-operator ClusterIP 10.0.170.205 <none> 443/TCP 16h
prometheus-kube-prometheus-prometheus ClusterIP 10.0.250.223 <none> 9090/TCP 16h
prometheus-kube-state-metrics ClusterIP 10.0.135.215 <none> 8080/TCP 16h
prometheus-operated ClusterIP None <none> 9090/TCP 16h
prometheus-prometheus-node-exporter ClusterIP 10.0.70.247 <none> 9100/TCP 16h
kubectl -n kube-system get deployment/metrics-server
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 15d
</code></pre>
<p>Prometheus-adapter helm chart gets deployed using the following values:</p>
<pre><code>prometheus:
url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
certManager:
enabled: true
</code></pre>
<p>What is the correct value for <code>--prometheus-url</code> for <strong>prometheus-adapter</strong> in my setup ?</p>
| mko | <p>I'm using both helm charts (<em>kube-prometheus-stack</em> and <em>prometheus-adapter</em>).</p>
<p>The additional path prefix that works for me is "/", but the Prometheus URL must contain the release name you used for the stack when running <code>helm install</code>. I'm using "<strong>prostack</strong>" as the stack name. So finally, this works for me:</p>
<pre><code>helm install <adapter-name> -n <namespace> --set prometheus.url=http://prostack-kube-prometheus-s-prometheus.monitoring.svc.cluster.local --set prometheus.port=9090 --set prometheus.path=/
</code></pre>
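<p>To verify that the adapter can now reach Prometheus, you can query the custom metrics API directly - if discovery works this returns a list of metrics instead of an error (<code>jq</code> is optional and only pretty-prints the output):</p>
<pre><code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
</code></pre>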
| Adal_gt |
<p>I'm setting up an on-premise kubernetes cluster with kubeadm.</p>
<p>Here is the Kubernestes version</p>
<pre><code>clientVersion:
buildDate: "2022-10-12T10:57:26Z"
compiler: gc
gitCommit: 434bfd82814af038ad94d62ebe59b133fcb50506
gitTreeState: clean
gitVersion: v1.25.3
goVersion: go1.19.2
major: "1"
minor: "25"
platform: linux/amd64
kustomizeVersion: v4.5.7
serverVersion:
buildDate: "2022-10-12T10:49:09Z"
compiler: gc
gitCommit: 434bfd82814af038ad94d62ebe59b133fcb50506
gitTreeState: clean
gitVersion: v1.25.3
goVersion: go1.19.2
major: "1"
minor: "25"
platform: linux/amd64
</code></pre>
<p>I have installed metallb version 0.13.7</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
</code></pre>
<p>Everything is running</p>
<pre><code>$ kubectl get all -n metallb-system
NAME READY STATUS RESTARTS AGE
pod/controller-84d6d4db45-l2r55 1/1 Running 0 35s
pod/speaker-48qn4 1/1 Running 0 35s
pod/speaker-ds8hh 1/1 Running 0 35s
pod/speaker-pfbcp 1/1 Running 0 35s
pod/speaker-st7n2 1/1 Running 0 35s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/webhook-service ClusterIP 10.104.14.119 <none> 443/TCP 35s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 4 4 4 4 4 kubernetes.io/os=linux 35s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 35s
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-84d6d4db45 1 1 1 35s
</code></pre>
<p>But when i try to apply an IPaddressPool CRD i get an error</p>
<pre><code>kubectl apply -f ipaddresspool.yaml
</code></pre>
<p>ipaddresspool.yaml file content</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
- 192.168.2.100-192.168.2.199
</code></pre>
<p>The error is a fail to call the validation webhook no route to host</p>
<pre><code>Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host
</code></pre>
<p>Here is the same error with line brakes</p>
<pre><code>Error from server (InternalError):
error when creating "ipaddresspool.yaml":
Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io":
failed to call webhook:
Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s":
dial tcp 10.104.14.119:443: connect: no route to host
</code></pre>
<p>The IP -address is correct</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
webhook-service ClusterIP 10.104.14.119 <none> 443/TCP 18m
</code></pre>
<p>I have also tried installing metallb v 0.13.7 using helm but with the same result</p>
<p>Does someone know why the webhook cannot be called?</p>
<p><strong>EDIT</strong></p>
<p>As an answer to Thomas's question, here is the description of the webhook-service. NOTE that this is from <strong>another cluster</strong> with the <strong>same problem</strong>, because I deleted the last cluster, so the IP is not the same as last time.</p>
<pre><code>$ kubectl describe svc webhook-service -n metallb-system
Name: webhook-service
Namespace: metallb-system
Labels: <none>
Annotations: <none>
Selector: component=controller
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.105.157.72
IPs: 10.105.157.72
Port: <unset> 443/TCP
TargetPort: 9443/TCP
Endpoints: 172.17.0.3:9443
Session Affinity: None
Events: <none>
</code></pre>
| AxdorphCoder | <p>Once understood the issue is fairly simple.</p>
<p>The metallb setup described above works as it is supposed to.
However, the Kubernetes setup does not. Most likely due to bad network configuration.</p>
<hr />
<h3>Understanding the error</h3>
<p>The key to understanding what is going on is the following error:</p>
<pre><code>Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host
</code></pre>
<p>Part of the applied metallb manifest is going to deploy a so-called <code>ValidatingWebhookConfiguration</code>.</p>
<p><a href="https://i.stack.imgur.com/gWdSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gWdSM.png" alt="enter image description here" /></a></p>
<p>In the case of metallb this validating webhook will force the kube-apiserver to:</p>
<ol>
<li>send metallb-related objects like <code>IPAddressPool</code> to the webhook whenever someone creates or updates such an object</li>
<li>wait for the webhook to perform some checks on the object (e.g. validate that CIDRs and IPs are valid and not something like <code>481.9.141.12.27</code>)</li>
<li>and finally receive an answer from the webhook whether or not that object satisfies metallb's requirements and is allowed to be created (persisted to etcd)</li>
</ol>
<p>The error above pretty clearly suggests that the first out of the three outlined steps is failing.</p>
<hr />
<h3>Debugging</h3>
<p>To fix this error one has to debug the current setup, particularly the connection from the kube-apiserver to <code>webhook-service.metallb-system.svc:443</code>.</p>
<p>There is a wide range of possible network misconfigurations that could lead to the error. However, with the information available to us it is most likely going to be an error with the configured CNI.</p>
<p>Knowing that here is some help and a bit of guidance regarding the further debugging process:</p>
<p>Since the kube-apiserver is hardened by default it won't be possible to execute a shell into it.
For that reason one should deploy a debug application with the same network configuration as the kube-apiserver onto one of the control-plane nodes.
This can be achieved by executing the following command:</p>
<pre><code>kubectl debug -n kube-system node/<control-plane-node> -it --image=nicolaka/netshoot
</code></pre>
<p>Using common tools one can now reproduce the error inside the interactive shell. The following command is expected to fail (in a similar fashion to the kube-apiserver):</p>
<pre><code>curl -m 10 -k https://<webhook-service-ip>:443/
</code></pre>
<p>Given above error message it should fail due to bad routing on the node.
To check the routing table execute the following command:</p>
<pre><code>routel
</code></pre>
<blockquote>
<p>Does someone know why the webhook cannot be called?</p>
</blockquote>
<p>The output should show multiple CIDR ranges configured one of which is supposed to include the IP queried earlier.
Most likely the CIDR range in question will either be missing or a bad gateway configured which leads to the <code>no route to host</code> error.
It is the CNIs job to update routing tables on all nodes and ensure that nodes can reach these addresses so adding or editing new Kubernetes related entries to the routing table manually is not recommended.
Further debugging is dependent on the exact setup.
Depending on the setup and CNI of choice kube-proxy may or may not be involved in the issue as well.
However, inspecting the CNI configuration and logs is a good next step.</p>
<hr />
<h3>Some bonus information</h3>
<p>Some CNIs require the user to pay more attention to certain features and configuration as there can be issues involved otherwise.
Here are some popular CNIs that fall into this category:</p>
<ul>
<li>Calico (see <a href="https://metallb.universe.tf/configuration/calico/" rel="nofollow noreferrer">here</a>)</li>
<li>Weave (see <a href="https://metallb.universe.tf/configuration/weave/" rel="nofollow noreferrer">here</a>)</li>
<li>Kube-Router (see <a href="https://metallb.universe.tf/configuration/kube-router/" rel="nofollow noreferrer">here</a>)</li>
</ul>
| F1ko |
<p>I have one question which I couldn't find a clear explaination.</p>
<p>If I have a service :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-app-svc
namespace: myns
spec:
type: LoadBalancer
ports:
- name: http
port: 8080
targetPort: 8282
selector:
app: my-app
</code></pre>
<p>As you can see above, I explicitly declared <code>type: LoadBalancer</code>. I understand what it means. I am using AWS EKS. I wonder from traffic perspective, does it mean the incoming http traffic flow is :</p>
<pre><code>Load Balancer --> Node port --> service port(8080) --> Pod port(8282)
</code></pre>
<p>Or:</p>
<pre><code>Load Balancer --> service port(8080) --> Pod port(8282)
</code></pre>
<p>Which one is correct? If neither is correct, what would be the traffic flow in terms of the order in which each k8s component is involved?</p>
| user842225 | <p><code>Load Balancer --> Node port --> service port(8080) --> Pod port(8282)</code></p>
<p>Your diagram is correct for instance mode:</p>
<blockquote>
<p>Traffic reaching the ALB is routed to NodePort for your service and then proxied to your pods. This is the default traffic mode.</p>
</blockquote>
<p>There is an option of using IP mode where you have AWS LB Controller installed and set <code>alb.ingress.kubernetes.io/target-type: ip</code>:</p>
<blockquote>
<p>Traffic reaching the ALB is directly routed to pods for your service.</p>
</blockquote>
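<p>As a rough sketch (assuming the AWS Load Balancer Controller is installed; names are placeholders), IP mode is selected with an annotation on the Ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # route directly to pod IPs
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc
            port:
              number: 8080
</code></pre>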
<p>More details can be found <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">here</a>.</p>
| gohm'c |
<p>My story is:</p>
<p>1. I create a spring-boot project, with a Dockerfile inside.
2. I successfully build the docker image LOCALLY with the above Dockerfile.
3. I have minikube running a K8s cluster on my local machine.
4. However, when I try to apply the k8s.yaml, it tells me that there is no such docker image. Apparently Kubernetes searches the public Docker Hub for the image, so what can I do?</p>
<p>Below is my dockerfile</p>
<pre><code>FROM openjdk:17-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
</code></pre>
<p>Below is my k8s.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: pkslow-springboot-deployment
spec:
selector:
matchLabels:
app: springboot
replicas: 2
template:
metadata:
labels:
app: springboot
spec:
containers:
- name: springboot
image: cicdstudy/apptodocker:latest
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
labels:
app: springboot
name: pkslow-springboot-service
spec:
ports:
- port: 8080
name: springboot-service
protocol: TCP
targetPort: 8080
nodePort: 30080
selector:
app: springboot
type: NodePort
</code></pre>
| JoseLuis | <p>In Kubernetes there is no centralized built-in container image registry.
Depending on the container runtime on the K8S cluster nodes you have, it will usually search Docker Hub first to pull images.
Since anonymous free pulls are now rate-limited by Docker Hub, it is suggested to create an account for development purposes. You will get 1 private repository and <a href="https://www.docker.com/pricing" rel="nofollow noreferrer">unlimited public repositories</a>, which means that anybody can access whatever you push to a public repository.
If there is not much concern about intellectual property, you can keep using that free account for development purposes. But when going to production you should replace it with a service/robot account.</p>
<ol>
<li><p>Create an Account on DockerHub <a href="https://id.docker.com/login/" rel="nofollow noreferrer">https://id.docker.com/login/</a></p>
</li>
<li><p>Login into your DockerHub account locally on the machine where you are building your container image</p>
<p>docker login --username=yourhubusername [email protected]</p>
</li>
<li><p>Build,re-tag and push your image once more (go to the folder where Dockerfile resides)</p>
</li>
</ol>
<ul>
<li>docker build -t mysuperimage:v1 .</li>
<li>docker tag mysuperimage:v1 yourhubusername/mysuperimage:v1</li>
<li>docker push yourhubusername/mysuperimage:v1</li>
</ul>
<ol start="4">
<li>Create a secret for image registry credentials</li>
</ol>
<p>kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your-username> --docker-password=<your-password> --docker-email=<your-email></p>
<ol start="5">
<li>Create a service account for deployment</li>
</ol>
<p>kubectl create serviceaccount yoursupersa</p>
<ol start="6">
<li>Attach secret to the service account named "yoursupersa"</li>
</ol>
<p>kubectl patch serviceaccount yoursupersa -p '{"imagePullSecrets": [{"name": "regcred"}]}'</p>
<ol start="7">
<li>Now create your application as deployment resource object in K8S</li>
</ol>
<p>kubectl create deployment mysuperapp --image=yourhubusername/mysuperimage:v1 --port=8080</p>
<ol start="8">
<li>Then patch your deployment with service account which has attached registry credentials.(which will cause for re-deployment)</li>
</ol>
<p>kubectl patch deployment mysuperapp -p '{"spec":{"template":{"spec":{"serviceAccountName":"yoursupersa"}}}}'</p>
<ol start="9">
<li>the last step is expose your service</li>
</ol>
<p>kubectl expose deployment/mysuperapp</p>
<p>Then everything is awesome! :)</p>
| Burak Cansizoglu |
<p>I am having issues with a certain compiled binary file that has to run as a health check for a certain container in a deployment pod. On AWS environment it runs as expected, but on premises the script fails with a vague error (it's no use even sharing it). I understand that perhaps the problem is in the kubernetes environment not being configured as expected by the script, but I do not know where exactly to look because I don't know what the script does and have no access to its source code.</p>
<p>I've read online about debugging utilities for linux, but none of them seem useful for this purpose, or I didn't understand how to use them properly. Is it possible to find which files are being accessed by the script? That would allow me to compare those files between the on-premises environment and the AWS env. and see what is different, so I can solve this problem.</p>
| xtda4664 | <p>Solved, keeping this up for reference if anyone needs it in the future.
You can list all files accessed by your program using <code>strace /path/to/bin 2>&1 | grep openat</code>. Very useful for debugging without source code.</p>
| xtda4664 |
<p>I'm upgrading some AKS clusters for an app and have been testing out the <code>az aks nodepool upgrade</code> <code>--max-surge</code> flag to speed up the process. Our prod environment has 50+ nodes, and at the clocked speed per node I have seen on our lowers I estimate prod will take 9+ hours to complete. On one of the lower upgrades I ran a max surge at 50% which did help a little bit on speed, and all deployments kept a minimum available pods of 50%.</p>
<p>For this latest upgrade I tried out a max surge of 100%. Which spun up 6 new nodes(6 current nodes in the pool) on the correct version....but then it migrated every deployment/pod at the same time and took everything down to 0/2 pods. Before I started this process I made sure to have a pod disruption budget for every single deployment set at min available of 50%. This has worked on all of my other upgrades, except this one, which to me means the 100% surge is the cause.</p>
<p>I just can't figure out why my minimum available percentage was ignored. Below are the descriptions of an example PDB, and the corresponding deployment.</p>
<p>Pod disruption budget:</p>
<pre><code>Name: myapp-admin
Namespace: front-svc
Min available: 50%
Selector: role=admin
Status:
Allowed disruptions: 1
Current: 2
Desired: 1
Total: 2
Events:
</code></pre>
<p>Deployment(snippet):</p>
<pre><code>Name: myapp-admin
Namespace: front-svc
CreationTimestamp: Wed, 26 May 2021 16:17:00 -0500
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 104
Selector: agency=myorg,app=myapp,env=uat,organization=myorg,role=admin
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 15
RollingUpdateStrategy: 25% max unavailable, 1 max surge
Pod Template:
Labels: agency=myorg
app=myapp
buildnumber=1234
env=uat
organization=myorg
role=admin
Annotations: kubectl.kubernetes.io/restartedAt: 2022-03-12T09:00:11Z
Containers:
myapp-admin-ctr:
</code></pre>
<p>Is there something obvious I am doing wrong here?</p>
| cashman04 | <blockquote>
<p>... a max surge value of 100% provides the fastest possible upgrade
process (doubling the node count) but also causes <strong>all nodes</strong> in the
node pool to be drained simultaneously.</p>
</blockquote>
<p>From the official <a href="https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster#customize-node-surge-upgrade" rel="nofollow noreferrer">documentation</a>. You may want to consider lowering your max surge.</p>
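<p>If you want to dial it back, a hedged example (resource group, cluster, and pool names are placeholders; requires a reasonably recent Azure CLI) of lowering the surge value on an existing node pool:</p>
<pre><code>az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --max-surge 33%
</code></pre>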
| gohm'c |
<p>I have the airflow deployed in Kubernetes and it is using the persistent volume method for dag deployment. I am trying to write a script (using GitHub action for CI/CD) for the deployment of my airflow dags which is somewhat like -</p>
<blockquote>
<pre><code>DAGS=(./*.py)
for dag in ${DAGS[@]};
do
kubectl cp "${dag}" --namespace=${NAMESPACE} ${WEB_SERVER_POD_NAME}:/path/to/dags/folder
done
</code></pre>
</blockquote>
<p>I can successfully deploy new dags and even update them.</p>
<p>But the problem is, I am unable to remove old dags (I used for testing purpose) present in the dags folder of airflow.</p>
<p>Is there a way I can do it?</p>
<p><em>P.S.</em> I cannot use the below command as it would delete any running dags -</p>
<pre><code>kubectl exec --namespace=${NAMESPACE} ${WEB_SERVER_POD_NAME} -- bash -c "rm -rf /path/to/dags/folder/*"
</code></pre>
| Satty | <p>I don't think this was an option when you originally posted, but for others:</p>
<p>Github Actions lets you create workflows that are manually triggered, and accept input values. <a href="https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/" rel="nofollow noreferrer">https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/</a></p>
<p>The command would be something like:</p>
<pre><code>kubectl exec --namespace=${NAMESPACE} ${WEB_SERVER_POD_NAME} -- bash -c "rm /path/to/dags/folder/${USER_INPUT_DAG_FILE_NAME}"
</code></pre>
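<p>A rough sketch of such a workflow (the input name is hypothetical, and it assumes <code>kubectl</code> plus the <code>NAMESPACE</code>/<code>WEB_SERVER_POD_NAME</code> variables are already set up in the job):</p>
<pre class="lang-yaml prettyprint-override"><code>name: remove-dag
on:
  workflow_dispatch:
    inputs:
      dag_file_name:
        description: 'DAG file to remove from the dags folder'
        required: true
jobs:
  remove-dag:
    runs-on: ubuntu-latest
    steps:
      - name: Delete the selected DAG file
        run: |
          kubectl exec --namespace=${NAMESPACE} ${WEB_SERVER_POD_NAME} -- \
            bash -c "rm /path/to/dags/folder/${{ github.event.inputs.dag_file_name }}"
</code></pre>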
| Constance Martineau |
<p>I installed docker and minikube through <code>Docker for Windows Installer.exe</code>. And this installed Docker Desktop 2.1.0.1.
<a href="https://i.stack.imgur.com/ta9HH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ta9HH.png" alt="enter image description here" /></a></p>
<p>Docker Version -</p>
<pre><code>PS C:\myk8syamls> docker version
Client: Docker Engine - Community
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:17:08 2019
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:17:52 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
</code></pre>
<p>k8s version -</p>
<pre><code>PS C:\myk8syamls> kubectl.exe version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>After I have created k8s services, I am not able to access them through my local machine.</strong></p>
<pre><code>PS C:\myk8syamls> kubectl.exe get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 101d
nginx-clusterip-svc ClusterIP 10.96.214.171 <none> 80/TCP 26m
nginx-nodeport-svc NodePort 10.101.9.117 <none> 80:30007/TCP,8081:30008/TCP 26m
postgres NodePort 10.103.103.87 <none> 5432:32345/TCP 101d
</code></pre>
<p>I have tried - accessing nodeport service, <strong>nginx-nodeport-svc</strong> by hitting<br />
10.101.9.117:30007 and 10.101.9.117:80 - did not work</p>
<p>and</p>
<p>I have tried - accessing the clusterip service, <strong>nginx-clusterip-svc</strong> by hitting<br />
10.96.214.171:80 - did not work</p>
<p><strong>How can I access these service from local machine??</strong> This is quite critical for me to resolve, so any help is greatly appreciated.</p>
<hr />
<p><strong>Edit - following answer from @rriovall</strong><br />
i did this -</p>
<pre><code>kubectl expose deployment nginx-deployment --type=NodePort --name=nginx-nodeport-expose-svc
</code></pre>
<p>and on querying -</p>
<pre><code>PS C:\myk8syamls> kubectl.exe get svc nginx-nodeport-expose-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-nodeport-expose-svc NodePort 10.107.212.76 <none> 80:30501/TCP 42s
</code></pre>
<p>Still there is no external IP and accessing <code>http://10.107.212.76:30501/</code> still does not work</p>
<p>or</p>
<pre><code>PS C:\myk8syamls> kubectl.exe get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
docker-desktop Ready master 102d v1.14.3 192.168.65.3 <none> Docker Desktop 4.9.184-linuxkit docker://19.3.1
</code></pre>
<p>accessing <code>http://192.168.65.3:30501/</code> does not work either.</p>
| samshers | <p>You need to expose the nginx cluster as an external service.</p>
<pre><code>$ kubectl expose deployment nginx --port=80 --target-port=80 \
--type=LoadBalancer
service "nginx" exposed
</code></pre>
<p>It may take several minutes to see the value of EXTERNAL_IP.</p>
<p>You can then visit http://EXTERNAL_IP/ to see the server being served through network load balancing.</p>
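<p>As a quick check (on Docker Desktop's built-in Kubernetes the external IP typically shows up as <code>localhost</code>), you can watch the service until the address appears:</p>
<pre><code>kubectl get service nginx --watch
</code></pre>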
| rriovall |
<p>I am instantiating a deployment from Helm. a few pods are getting created but the deployment stops right after creating few pods. Although I cannot share much info on the deployment as it is related to my company, how can I debug this kind of issue? The created pods have no problem as seen from logs and events.</p>
| Anvay | <p>To debug your application you should first of all:</p>
<ul>
<li>Check the pod logs using <code>kubectl logs <pod-name></code></li>
<li>Check the events using <code>kubectl get events</code></li>
</ul>
<p>Sometimes if a pod crashes you can no longer see its logs, so you need to add a flag to the logs command:
<code>kubectl logs <pod-name> --previous=true</code></p>
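<p>For example (namespace and pod name are placeholders):</p>
<pre><code>kubectl -n <namespace> get events --sort-by='.lastTimestamp'
kubectl -n <namespace> logs <pod-name> --previous
</code></pre>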
<p>I hope that can help you to resolve your issue.</p>
| rassakra |
<p>I am trying to build a series of Micro-Frontends using Webpack 5 and the <strong>ModuleFederationPlugin</strong>.</p>
<p>In the webpack config of my container app I have to configure how the container is going to reach out to the other microfrontends so I can make use of those micro-frontends.</p>
<p>This all works fine when I am serving locally, not using Docker and Kubernetes and my Ingress Controller.</p>
<p>However because I am using Kubernetes and an Ingress Controller, I am unsure what the <strong>remote host</strong> would be.</p>
<p><a href="https://github.com/CodeNameNinja/Micro-Service-Ticketing" rel="nofollow noreferrer">Link to Repo</a></p>
<p>Here is my container webpack.dev.js file</p>
<pre class="lang-js prettyprint-override"><code>const { merge } = require("webpack-merge");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const ModuleFederationPlugin = require("webpack/lib/container/ModuleFederationPlugin");
const commonConfig = require("./webpack.common");
const packageJson = require("../package.json");
const devConfig = {
mode: "development",
devServer: {
host: "0.0.0.0",
port: 8080,
historyApiFallback: {
index: "index.html",
},
compress: true,
disableHostCheck: true,
},
plugins: [
new ModuleFederationPlugin({
name: "container",
remotes: {
marketing:
"marketing@https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js",
},
shared: packageJson.dependencies,
}),
],
};
module.exports = merge(commonConfig, devConfig);
</code></pre>
<p>and here is my <strong>Ingress Config</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: ticketing.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
- path: /marketing?(.*)
pathType: Prefix
backend:
service:
name: marketing-srv
port:
number: 8081
- path: /?(.*)
pathType: Prefix
backend:
service:
name: container-srv
port:
number: 8080
</code></pre>
<p>and here is my marketing webpack.dev.js file</p>
<pre class="lang-js prettyprint-override"><code>const { merge } = require("webpack-merge");
const ModuleFederationPlugin = require("webpack/lib/container/ModuleFederationPlugin");
const commonConfig = require("./webpack.common");
const packageJson = require("../package.json");
const devConfig = {
mode: "development",
devServer: {
host: "0.0.0.0",
port: 8081,
historyApiFallback: {
index: "index.html",
},
compress: true,
disableHostCheck: true, // That solved it
},
plugins: [
new ModuleFederationPlugin({
name: "marketing",
filename: "remoteEntry.js",
exposes: {
"./core": "./src/bootstrap",
},
shared: packageJson.dependencies,
}),
],
};
module.exports = merge(commonConfig, devConfig);
</code></pre>
<p>I am totally stumped as to what the remote host would be to reach out to my marketing micro-frontend</p>
<p>serving it as usual without running it in a docker container or kubernetes cluster, the remote host would be</p>
<p><code>https://localhost:8081/remoteEntry.js</code></p>
<p>but that doesn't work in a kubernetes cluster</p>
<p>I tried using the ingress controller and namespace, but that too, does not work</p>
<p><code>https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js</code></p>
<p>This is the error I get
<a href="https://i.stack.imgur.com/HrPID.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HrPID.png" alt="enter image description here" /></a></p>
| Mitchell Yuen | <blockquote>
<p><a href="https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js" rel="nofollow noreferrer">https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js</a></p>
</blockquote>
<p>If your client and the node are on the same network (eg. can ping each other), do <code>kubectl get service ingress-nginx --namespace ingress-nginx</code> and take note of the nodePort# (TYPE=NodePort, PORT(S) 443:<nodePort#>/TCP). Your remote entry will be <code>https://<any of the worker node IP>:<nodePort#>/remoteEntry.js</code></p>
<p>If your client is on the Internet and your worker node has a public IP, your remote entry will be <code>https://<public IP of the worker node>:<nodePort#>/remoteEntry.js</code></p>
<p>If your client is on the Internet and your worker node doesn't have a public IP, you need to expose your ingress-nginx service with <code>LoadBalancer</code>. Do <code>kubectl get service ingress-nginx --namespace ingress-nginx</code> and take note of the <code>EXTERNAL IP</code>. Your remote entry becomes <code>https://<EXTERNAL IP>/remoteEntry.js</code></p>
| gohm'c |
<p>I am using Dockerfile to run shell script of jupyter notebook. When this jupyter terminal starts up, it's starting at /root path, but I want terminal to start with default path /nfs.
What change can be made in the Dockerfile such that this terminal starts at /nfs path ?</p>
| divya krishana | <p>You can add the entry below to your Dockerfile; every step after the WORKDIR instruction, as well as the container's default working directory, will be the directory you have specified.</p>
<pre><code>WORKDIR /nfs
</code></pre>
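<p>A minimal sketch (the base image and start command below are assumptions; keep your own):</p>
<pre><code>FROM jupyter/base-notebook
# everything from here on, including the terminal's starting directory, uses /nfs
WORKDIR /nfs
CMD ["start-notebook.sh"]
</code></pre>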
| Sam-Sundar |
<p>I have a GKE cluster which doesn't scale up when a particular deployment needs more resources.
I've checked the cluster autoscaler logs and it has entries with this error:
<code>no.scale.up.nap.pod.zonal.resources.exceeded</code>. The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler-visibility#noscaleup-reasons" rel="noreferrer">documentation</a> for this error says:</p>
<blockquote>
<p>Node auto-provisioning did not provision any node group for the Pod in
this zone because doing so would violate resource limits.</p>
</blockquote>
<p>I don't quite understand which resource limits are mentiond in the documentation and why it prevents node-pool from scaling up?</p>
<p>If I scale cluster up manually - deployment pods are scaled up and everything works as expected, so, seems it's not a problem with project quotas.</p>
| Oleksandr Bushkovskyi | <ul>
<li><p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#limits_for_clusters" rel="noreferrer">Limits for clusters</a> that you define are enforced based on the total CPU and memory resources used across your cluster, not just auto-provisioned pools.</p>
</li>
<li><p>When you are not using node auto provisioning (NAP), <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#disable" rel="noreferrer">disable node auto provisioning feature for the cluster.</a></p>
</li>
<li><p>When you are using NAP, then <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#enable" rel="noreferrer">update the cluster wide resource</a> limits defined in NAP for the cluster .</p>
</li>
<li><p>Try a workaround by specifying the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#custom_machine_family" rel="noreferrer">machine type explicitly</a> in the workload spec. Ensure to use a supported machine family with GKE node auto-provisioning</p>
</li>
</ul>
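<p>For the second point above, a hedged example (cluster name and limit values are placeholders) of updating the cluster-wide limits used by node auto-provisioning:</p>
<pre><code>gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --min-cpu 1 --max-cpu 64 \
    --min-memory 1 --max-memory 256
</code></pre>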
| rriovall |
<p>I'm trying to deploy Elasticsearch on RKE cluster.</p>
<p>Following instructions with this tutorial.</p>
<p><a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html</a></p>
<p>Kube deployment is on VMs behind proxy.</p>
<p>Due to lack of provisioner I provisioned pv myself and this is not the problem.</p>
<p>The error I'm getting from the proble is as follows:</p>
<p><strong>Readiness probe failed: {"timestamp": "2021-10-06T12:44:37+00:00", "message": "readiness probe failed", "curl_rc": "7"}</strong></p>
<p>In addition if I curl on master node i get diffrent error</p>
<p>curl <a href="https://127.0.0.0:9200" rel="nofollow noreferrer">https://127.0.0.0:9200</a>
curl: (56) Received HTTP code 503 from proxy after CONNECT</p>
<p>Inside the container i get
bash-4.4# curl <a href="https://127.0.0.0:9200" rel="nofollow noreferrer">https://127.0.0.0:9200</a>
curl: (7) Couldn't connect to server</p>
<p>also inside container:
curl <a href="https://0.0.0.0:9200" rel="nofollow noreferrer">https://0.0.0.0:9200</a>
curl: (56) Received HTTP code 503 from proxy after CONNECT</p>
<p>I established that readiness proble fails on performing curl command which is par of the script.
/mnt/elastic-internal/scripts/readiness-probe-script.sh</p>
<p>I attach the scrip and the contents of the pod describe output:</p>
<p>script:</p>
<pre><code>#!/usr/bin/env bash
# fail should be called as a last resort to help the user to understand why the probe failed
function fail {
timestamp=$(date --iso-8601=seconds)
echo "{\"timestamp\": \"${timestamp}\", \"message\": \"readiness probe failed\", "$1"}" | tee /proc/1/fd/2 2> /dev/null
exit 1
}
labels="/mnt/elastic-internal/downward-api/labels"
version=""
if [[ -f "${labels}" ]]; then
# get Elasticsearch version from the downward API
version=$(grep "elasticsearch.k8s.elastic.co/version" ${labels} | cut -d '=' -f 2)
# remove quotes
version=$(echo "${version}" | tr -d '"')
fi
READINESS_PROBE_TIMEOUT=${READINESS_PROBE_TIMEOUT:=3}
# Check if PROBE_PASSWORD_PATH is set, otherwise fall back to its former name in 1.0.0.beta-1: PROBE_PASSWORD_FILE
if [[ -z "${PROBE_PASSWORD_PATH}" ]]; then
probe_password_path="${PROBE_PASSWORD_FILE}"
else
probe_password_path="${PROBE_PASSWORD_PATH}"
fi
# setup basic auth if credentials are available
if [ -n "${PROBE_USERNAME}" ] && [ -f "${probe_password_path}" ]; then
PROBE_PASSWORD=$(<${probe_password_path})
BASIC_AUTH="-u ${PROBE_USERNAME}:${PROBE_PASSWORD}"
else
BASIC_AUTH=''
fi
# Check if we are using IPv6
if [[ $POD_IP =~ .*:.* ]]; then
LOOPBACK="[::1]"
else
LOOPBACK=127.0.0.1
fi
# request Elasticsearch on /
# we are turning globbing off to allow for unescaped [] in case of IPv6
ENDPOINT="${READINESS_PROBE_PROTOCOL:-https}://${LOOPBACK}:9200/"
status=$(curl -o /dev/null -w "%{http_code}" --max-time ${READINESS_PROBE_TIMEOUT} -XGET -g -s -k ${BASIC_AUTH} $ENDPOINT)
curl_rc=$?
if [[ ${curl_rc} -ne 0 ]]; then
fail "\"curl_rc\": \"${curl_rc}\""
fi
# ready if status code 200, 503 is tolerable if ES version is 6.x
if [[ ${status} == "200" ]] || [[ ${status} == "503" && ${version:0:2} == "6." ]]; then
exit 0
else
fail " \"status\": \"${status}\", \"version\":\"${version}\" "
fi
</code></pre>
<p>The following is the describe pod output:</p>
<pre><code>Name: quickstart-es-default-0
Namespace: default
Priority: 0
Node: rke-worker-1/10.21.242.216
Start Time: Wed, 06 Oct 2021 14:43:11 +0200
Labels: common.k8s.elastic.co/type=elasticsearch
controller-revision-hash=quickstart-es-default-666db95c77
elasticsearch.k8s.elastic.co/cluster-name=quickstart
elasticsearch.k8s.elastic.co/config-hash=2374451611
elasticsearch.k8s.elastic.co/http-scheme=https
elasticsearch.k8s.elastic.co/node-data=true
elasticsearch.k8s.elastic.co/node-data_cold=true
elasticsearch.k8s.elastic.co/node-data_content=true
elasticsearch.k8s.elastic.co/node-data_hot=true
elasticsearch.k8s.elastic.co/node-data_warm=true
elasticsearch.k8s.elastic.co/node-ingest=true
elasticsearch.k8s.elastic.co/node-master=true
elasticsearch.k8s.elastic.co/node-ml=true
elasticsearch.k8s.elastic.co/node-remote_cluster_client=true
elasticsearch.k8s.elastic.co/node-transform=true
elasticsearch.k8s.elastic.co/node-voting_only=false
elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
elasticsearch.k8s.elastic.co/version=7.15.0
statefulset.kubernetes.io/pod-name=quickstart-es-default-0
Annotations: cni.projectcalico.org/containerID: 1e03a07fc3a1cb37902231b69a5f0fcaed2d450137cb675c5dfb393af185a258
cni.projectcalico.org/podIP: 10.42.2.7/32
cni.projectcalico.org/podIPs: 10.42.2.7/32
co.elastic.logs/module: elasticsearch
update.k8s.elastic.co/timestamp: 2021-10-06T12:43:23.93263325Z
Status: Running
IP: 10.42.2.7
IPs:
IP: 10.42.2.7
Controlled By: StatefulSet/quickstart-es-default
Init Containers:
elastic-internal-init-filesystem:
Container ID: docker://cc72c63cb1bb5406a2edbcc0488065c06a130f00a73d2e38544cd7e9754fbc57
Image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:6ae227c688e05f7d487e0cfe08a5a3681f4d60d006ad9b5a1f72a741d6091df1
Port: <none>
Host Port: <none>
Command:
bash
-c
/mnt/elastic-internal/scripts/prepare-fs.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 06 Oct 2021 14:43:20 +0200
Finished: Wed, 06 Oct 2021 14:43:42 +0200
Ready: True
Restart Count: 0
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
NODE_NAME: (v1:spec.nodeName)
NAMESPACE: default (v1:metadata.namespace)
HEADLESS_SERVICE_NAME: quickstart-es-default
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
/mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
Containers:
elasticsearch:
Container ID: docker://9fb879f9f0404a9997b5aa0ae915c788569c85abd008617447422ba5de559b54
Image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:6ae227c688e05f7d487e0cfe08a5a3681f4d60d006ad9b5a1f72a741d6091df1
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Wed, 06 Oct 2021 14:46:26 +0200
Last State: Terminated
Reason: Error
Exit Code: 134
Started: Wed, 06 Oct 2021 14:43:46 +0200
Finished: Wed, 06 Oct 2021 14:46:22 +0200
Ready: False
Restart Count: 1
Limits:
memory: 2Gi
Requests:
memory: 2Gi
Readiness: exec [bash -c /mnt/elastic-internal/scripts/readiness-probe-script.sh] delay=10s timeout=5s period=5s #success=1 #failure=3
Environment:
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
NODE_NAME: (v1:spec.nodeName)
NAMESPACE: default (v1:metadata.namespace)
PROBE_PASSWORD_PATH: /mnt/elastic-internal/probe-user/elastic-internal-probe
PROBE_USERNAME: elastic-internal-probe
READINESS_PROBE_PROTOCOL: https
HEADLESS_SERVICE_NAME: quickstart-es-default
NSS_SDB_USE_CACHE: no
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
/usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
/usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
/usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
elasticsearch-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-data-quickstart-es-default-0
ReadOnly: false
downward-api:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
elastic-internal-elasticsearch-bin-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-elasticsearch-config:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-default-es-config
Optional: false
elastic-internal-elasticsearch-config-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-elasticsearch-plugins-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-http-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-http-certs-internal
Optional: false
elastic-internal-probe-user:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-internal-users
Optional: false
elastic-internal-remote-certificate-authorities:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-remote-ca
Optional: false
elastic-internal-scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: quickstart-es-scripts
Optional: false
elastic-internal-transport-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-default-es-transport-certs
Optional: false
elastic-internal-unicast-hosts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: quickstart-es-unicast-hosts
Optional: false
elastic-internal-xpack-file-realm:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-xpack-file-realm
Optional: false
elasticsearch-logs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22m default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 22m default-scheduler Successfully assigned default/quickstart-es-default-0 to rke-worker-1
Normal Pulled 21m kubelet Container image "docker.elastic.co/elasticsearch/elasticsearch:7.15.0" already present on machine
Normal Created 21m kubelet Created container elastic-internal-init-filesystem
Normal Started 21m kubelet Started container elastic-internal-init-filesystem
Normal Pulled 21m kubelet Container image "docker.elastic.co/elasticsearch/elasticsearch:7.15.0" already present on machine
Normal Created 21m kubelet Created container elasticsearch
Normal Started 21m kubelet Started container elasticsearch
Warning Unhealthy 21m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:43:57+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 21m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:44:02+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 21m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:44:07+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 21m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:44:12+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 21m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:44:17+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 20m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:44:22+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 20m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:44:27+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 20m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:44:32+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 20m kubelet Readiness probe failed: {"timestamp": "2021-10-06T12:44:37+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Warning Unhealthy 115s (x223 over 20m) kubelet (combined from similar events): Readiness probe failed: {"timestamp": "2021-10-06T13:03:22+00:00", "message": "readiness probe failed", "curl_rc": "7"}
</code></pre>
<p>after probe restart I get following output:</p>
<pre><code>{"type": "deprecation.elasticsearch", "timestamp": "2021-10-07T11:58:28,007Z", "level": "DEPRECATION", "component": "o.e.d.c.r.OperationRouting", "cluster.name": "quickstart", "node.name": "quickstart-es-default-0", "message": "searches will not be routed based on awareness attributes starting in version 8.0.0; to opt into this behaviour now please set the system property [es.search.ignore_awareness_attributes] to [true]", "key": "searches_not_routed_on_awareness_attributes" }
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fc63c3eb122, pid=7, tid=261
#
# JRE version: OpenJDK Runtime Environment Temurin-16.0.2+7 (16.0.2+7) (build 16.0.2+7)
# Java VM: OpenJDK 64-Bit Server VM Temurin-16.0.2+7 (16.0.2+7, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# J 711 c1 org.yaml.snakeyaml.scanner.Constant.has(I)Z (42 bytes) @ 0x00007fc63c3eb122 [0x00007fc63c3eb100+0x0000000000000022]
#
# Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport %p %s %c %d %P %E" (or dumping to /usr/share/elasticsearch/core.7)
#
# An error report file with more information is saved as:
# logs/hs_err_pid7.log
Compiled method (c1) 333657 4806 3 org.yaml.snakeyaml.scanner.Constant::hasNo (15 bytes)
total in heap [0x00007fc63c50c010,0x00007fc63c50c7d0] = 1984
relocation [0x00007fc63c50c170,0x00007fc63c50c1f8] = 136
main code [0x00007fc63c50c200,0x00007fc63c50c620] = 1056
stub code [0x00007fc63c50c620,0x00007fc63c50c680] = 96
oops [0x00007fc63c50c680,0x00007fc63c50c688] = 8
metadata [0x00007fc63c50c688,0x00007fc63c50c6a8] = 32
scopes data [0x00007fc63c50c6a8,0x00007fc63c50c718] = 112
scopes pcs [0x00007fc63c50c718,0x00007fc63c50c7b8] = 160
dependencies [0x00007fc63c50c7b8,0x00007fc63c50c7c0] = 8
nul chk table [0x00007fc63c50c7c0,0x00007fc63c50c7d0] = 16
Compiled method (c1) 333676 4806 3 org.yaml.snakeyaml.scanner.Constant::hasNo (15 bytes)
total in heap [0x00007fc63c50c010,0x00007fc63c50c7d0] = 1984
relocation [0x00007fc63c50c170,0x00007fc63c50c1f8] = 136
main code [0x00007fc63c50c200,0x00007fc63c50c620] = 1056
stub code [0x00007fc63c50c620,0x00007fc63c50c680] = 96
oops [0x00007fc63c50c680,0x00007fc63c50c688] = 8
metadata [0x00007fc63c50c688,0x00007fc63c50c6a8] = 32
scopes data [0x00007fc63c50c6a8,0x00007fc63c50c718] = 112
scopes pcs [0x00007fc63c50c718,0x00007fc63c50c7b8] = 160
dependencies [0x00007fc63c50c7b8,0x00007fc63c50c7c0] = 8
nul chk table [0x00007fc63c50c7c0,0x00007fc63c50c7d0] = 16
Compiled method (c1) 333678 4812 3 org.yaml.snakeyaml.scanner.ScannerImpl::scanLineBreak (99 bytes)
total in heap [0x00007fc63c583990,0x00007fc63c584808] = 3704
relocation [0x00007fc63c583af0,0x00007fc63c583bf8] = 264
main code [0x00007fc63c583c00,0x00007fc63c584420] = 2080
stub code [0x00007fc63c584420,0x00007fc63c5844c0] = 160
oops [0x00007fc63c5844c0,0x00007fc63c5844c8] = 8
metadata [0x00007fc63c5844c8,0x00007fc63c584500] = 56
scopes data [0x00007fc63c584500,0x00007fc63c5845f0] = 240
scopes pcs [0x00007fc63c5845f0,0x00007fc63c5847b0] = 448
dependencies [0x00007fc63c5847b0,0x00007fc63c5847b8] = 8
nul chk table [0x00007fc63c5847b8,0x00007fc63c584808] = 80
Compiled method (c1) 333679 4693 2 java.lang.String::indexOf (7 bytes)
total in heap [0x00007fc63c6e0190,0x00007fc63c6e0568] = 984
relocation [0x00007fc63c6e02f0,0x00007fc63c6e0338] = 72
main code [0x00007fc63c6e0340,0x00007fc63c6e0480] = 320
stub code [0x00007fc63c6e0480,0x00007fc63c6e04d0] = 80
metadata [0x00007fc63c6e04d0,0x00007fc63c6e04e0] = 16
scopes data [0x00007fc63c6e04e0,0x00007fc63c6e0510] = 48
scopes pcs [0x00007fc63c6e0510,0x00007fc63c6e0560] = 80
dependencies [0x00007fc63c6e0560,0x00007fc63c6e0568] = 8
#
# If you would like to submit a bug report, please visit:
# https://github.com/adoptium/adoptium-support/issues
#
</code></pre>
| Michal | <p>The solution to my problem was so simple that I could not believe it.<br>
I narrowed down the problem to the TLS handshakes failing.<br>
The times on the nodes were different.<br>
I synced the times and dates on all the nodes and all problems vanished.<br>
It was due to that difference.<br>
The proxy was blocking services like NTP from syncing the time.<br></p>
<pre><code>NAME READY STATUS RESTARTS AGE
quickstart-es-default-0 1/1 Running 0 3m2s
quickstart-es-default-1 1/1 Running 0 3m2s
quickstart-es-default-2 1/1 Running 0 3m2s
kubectl get elasticsearch
NAME HEALTH NODES VERSION PHASE AGE
quickstart green 3 7.15.0 Ready 3m21s
</code></pre>
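<p>If you need to verify clock drift on your own nodes, a quick check (assuming systemd-based hosts; <code>chronyc</code> is only available where chrony is installed) is:</p>
<pre><code>timedatectl status
chronyc tracking
</code></pre>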
| Michal |
<p>I have a question regarding Kubernetes Liveness/Readiness probes configuration.</p>
<p>I have an application developed in <strong>netCore</strong> 3.1 that at this moment, in production env (version 1.0.0), doesn't have configured health checks.
I have implemented the <strong>health</strong> endpoints in the second release (version 2.0.0) but how can I manage the Kubernetes deployment template file in order to be compliant with version v1 that does not have an endpoint?</p>
<p>If I will deploy my template with probes configured, all container that runs on v1 will fail cause no endpoint are reachable.
I would like to understand if I can maintain one deployment yml file that will be compatible with v1 (without health) and v2 (with health).</p>
<p>Here I post an example of my actual deployment yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
namespace: "#{tenant}#-test-app"
name: "#{tenant}#-test-app"
labels:
app: "#{tenant}#-test-app"
product: "#{tenant}#-test-app"
app.kubernetes.io/name: "#{tenant}#-test-app"
app.kubernetes.io/version: "#{server_image_version}#"
app.kubernetes.io/component: "test-app"
app.kubernetes.io/part-of: "#{tenant}#-test-app"
app.kubernetes.io/managed-by: "#{managed_by}#"
spec:
selector:
matchLabels:
app: "#{tenant}#-test-app"
template:
metadata:
labels:
app: "#{tenant}#-test-app"
spec:
containers:
- name: "#{tenant}#-test-app"
image: mycontainerregistryurl/test-app:#{server_image_version}#
ports:
- containerPort: 80
envFrom:
- configMapRef:
name: "#{tenant}#-test-app-variables-config"
env:
- name: DD_AGENT_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: DD_SERVICE_NAME
value: "#{tenant}#-test-app"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- NET_RAW
imagePullSecrets:
- name: test-registries
</code></pre>
<p>server_image_version variable could be used to identify if I have to perform an health check or not.</p>
<p>Thanks in advance, Dave.</p>
| DavideP | <p>To check liveness in k8s you can use a command like the one below: we define an environment variable and then, in the liveness section, use a command with an if-else that checks the current version and runs what is needed for each case.</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: version
value: v2
livenessProbe:
exec:
command:
- /bin/bash
- -exc
- |
set +x
echo "running below scripts"
if [[ $version == "v1" ]]; then
echo "Running script or command for version 1 "
else
echo "Running wget to check the http healthy "
wget api/v2
fi
</code></pre>
<p>I hope that my idea can help you to resolve your issue .</p>
| rassakra |
<p>I am trying to use VS code editor for creating kubernetes yaml files, by some reason, vscode is not showing auto-complete commands or space in my yaml files even though I have installed Yaml Support by Redhat extension and configured yaml.schema file as per below:</p>
<p><strong>{
"yaml.schemas": {
"Kubernetes": "*.yaml"
}
}</strong></p>
<p>Any help would be appreciated.
Thanks</p>
| Anil | <p><a href="https://i.stack.imgur.com/WzKsf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WzKsf.jpg" alt="enter image description here" /></a><b>Fix This Problem On The Vscode Step by Step</b></p><br>
<b>Step 1 :</b> install yaml plugin on the vscode <br>
<b>Step 2 :</b> Edit this path vscode <b>file>prefrences>settings>Extention>YAML</b><br>
<b>Step 3 :</b> After Click Yaml on the right side find and edit <b> YAML: Custom Tag Edit in setings.json</b><br>
<b>Step 4 :</b> Append This lines in File Settings.json<br></p>
<p><i><b>"https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/v1.22.4-standalone-strict/all.json": ["/*.yaml","/*.yml"]</b></i></p>
<b>Step 5 :</b> Final Reload vscode You can use <b>Ctrl+Shift+p</b> and search <b>Reload Window </b> On The Vscode
| ramin |
<p>What happens when I stop aks cluster and start?
Will my pods remain in the same state?
Do the node pool and nodes inside that stop?
Do the services inside the cluster still runs and cost me if it is a load balancer?</p>
| SmartestVEGA | <p>Stopping the cluster will terminate all the pods; starting it again will create new pods with the same names, but the IP addresses of the pods will change.</p>
<p>Pods are only scheduled once in their lifetime. Once a Pod is scheduled (assigned) to a Node, the Pod runs on that Node until it stops or is terminated.</p>
<blockquote>
<p>Do the node pool and nodes inside that stop?Do the services inside the cluster still runs and cost me if it is a load balancer?</p>
</blockquote>
<p>Yes, it will stop the nodes and the complete node pool as well. Services inside the cluster will also stop, and they will not incur cost while the cluster is stopped.</p>
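<p>For reference, a minimal sketch of the stop/start commands (resource group and cluster name are placeholders):</p>
<pre><code>az aks stop --resource-group myResourceGroup --name myAKSCluster
az aks start --resource-group myResourceGroup --name myAKSCluster
</code></pre>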
<p><em><strong>Reference : <a href="https://learn.microsoft.com/en-us/azure/aks/start-stop-cluster?tabs=azure-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/start-stop-cluster?tabs=azure-cli</a></strong></em></p>
| RahulKumarShaw |
<p>Im trying to create an internal service only within kubernetes. Its a clamav service, the docker container runs on port 3310. I have created a deployment and a service.</p>
<p>I have put the service and deployment in a 'clamav' namespace</p>
<p>I have port forwarded both the service and pods and can confirm it works as expected.</p>
<p>If I connect directly to the pod and run <code>curl localhost:3310</code> its good, if i <code>curl clamav</code> (no port as ive configured 80 as the incoming service port) (which clamav being the service name) I get the following 503 error:</p>
<pre><code>upstream connect error or disconnect/reset before headers. reset reason: connection termination
</code></pre>
<p>If I connect to a different service and try the same <code>curl clamav.clamav</code> I get the same error. (no port as ive configured 80 as the incoming service port)</p>
<p>Do I need something else other than a service and deployment or is something misconfigured?</p>
<p>The service spec is shown below.</p>
<pre><code>spec:
clusterIP: 10.10.255.194
ports:
- name: http
port: 80
protocol: TCP
targetPort: 3310
selector:
app: clamav
sessionAffinity: None
type: ClusterIP
</code></pre>
<p>I have tried changing the name to "3310-3310" and changing the port to "3310" as well... had to update the curl commands to have the :3310 for the port... but same error.</p>
<p>Also just to add, I do have the istio service mesh running in this cluster and injected.</p>
<p>Thanks
Kevin</p>
| Kevin | <p>The answer to this was the service, changing it to:</p>
<pre><code>spec:
clusterIP: 10.10.255.194
ports:
- name: tcp-clamav
port: 80
targetPort: 3310
selector:
app: clamav
sessionAffinity: None
type: ClusterIP
</code></pre>
<p>Fixed the issue. The port name prefix is how Istio selects the protocol for a service port, so naming it <code>tcp-clamav</code> instead of <code>http</code> makes the sidecar treat the clamd traffic as raw TCP rather than HTTP, which stops the 503s.</p>
| Kevin |
<p>I was following the steps mentioned here <a href="https://stackoverflow.com/questions/64121941/how-to-get-a-heap-dump-from-kubernetes-k8s-pod">How to get a heap dump from Kubernetes k8s pod?</a></p>
<p>I'm able to get the the process id using top command inside the pod. However, when I run jmap I get this:</p>
<pre><code>~ $ jmap
sh: jmap: not found
</code></pre>
<p>I access the pod with this command: kubectl exec -it -- sh</p>
<p>I also tried this command:</p>
<pre><code>kubectl exec -it <pod> -- jmap -dump:live,format=b,file=heapdump.bin 1
</code></pre>
<p>But I was getting:</p>
<pre><code>OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"jmap\": executable file not found in $PATH": unknown command terminated with exit code 126
</code></pre>
<p>Is there any other way to get the java heap dump from the pod?</p>
| MahmooDs | <p>Normally containers are limited in the tools available (e.g. you get 'more', but you don't get 'less'), so what is available to you depends on your container image.</p>
<p>The 2 tools that are used to get a heap dump are jmap and jcmd, check if you got jcmd in the container.</p>
<p><a href="https://www.adam-bien.com/roller/abien/entry/taking_a_heap_dump_with" rel="nofollow noreferrer">https://www.adam-bien.com/roller/abien/entry/taking_a_heap_dump_with</a></p>
<p>If not, I recommend putting the Java app in a container image that has either jmap or jcmd and running it there; even if the image is "heavier", that won't affect the Java app or the heap dump, so the result will be the same.</p>
<p>If that's not an option, maybe this will be
<a href="https://techblog.topdesk.com/coding/extracting-a-heap-dump-from-a-running-openj9-java-process-in-kubernetes/" rel="nofollow noreferrer">https://techblog.topdesk.com/coding/extracting-a-heap-dump-from-a-running-openj9-java-process-in-kubernetes/</a>
(not mine).</p>
| Fernando Carrillo Castro |
<p>I have spot instance nodes in Azure Kubernetes Cluster. I want to simulate the eviction of a node so as to debug my code but not able to. All I could find in azure docs is we can simulate eviction for a single spot instance, using the following:</p>
<pre><code>az vm simulate-eviction --resource-group test-eastus --name test-vm-26
</code></pre>
<p>However, I need to simulate the eviction of a spot node pool or a spot node in an AKS cluster.</p>
| bor | <p>For simulating evictions, there is no AKS REST API or Azure CLI command, because eviction of the underlying infrastructure is not handled by the AKS RP.
Only during creation of the AKS cluster can the AKS RP set the eviction policy on the underlying infrastructure, by instructing the Azure Compute RP to do so.
Instead, to simulate the eviction of the node infrastructure, the customer can use the az vmss simulate-eviction command or the corresponding REST API.</p>
<p><strong>az vmss simulate-eviction</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>az vmss simulate-eviction --instance-id
--name
--resource-group
[--subscription]</code></pre>
</div>
</div>
</p>
<p>Reference Documents:</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/cli/azure/vmss?view=azure-cli-latest#az_vmss_simulate_eviction" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/cli/azure/vmss?view=azure-cli-latest#az_vmss_simulate_eviction</a></li>
<li><a href="https://learn.microsoft.com/en-us/rest/api/compute/virtual-machine-scale-set-vms/simulate-eviction" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/rest/api/compute/virtual-machine-scale-set-vms/simulate-eviction</a></li>
</ul>
<hr />
<hr />
<p>Use the following commands to get the name of the vmss with nodepool:</p>
<p>1.</p>
<pre><code> az aks nodepool list -g $ClusterRG --cluster-name $ClusterName -o
table
</code></pre>
<p>Get the desired node pool name from the output</p>
<p>2.</p>
<pre><code> CLUSTER_RESOURCE_GROUP=$(az aks show –resource-group YOUR_Resource_Group --name YOUR_AKS_Cluster --query
nodeResourceGroup -o tsv)
</code></pre>
<ol start="3">
<li></li>
</ol>
<pre><code>az vmss list -g $CLUSTER_RESOURCE_GROUP --query "[?tags.poolName == '<NODE_POOL_NAME>'].{VMSS_Name:name}" -o tsv
</code></pre>
<p>References:</p>
<ol>
<li><a href="https://louisshih.gitbooks.io/kubernetes/content/chapter1.html" rel="nofollow noreferrer">https://louisshih.gitbooks.io/kubernetes/content/chapter1.html</a></li>
<li><a href="https://ystatit.medium.com/azure-ssh-into-aks-nodes-471c07ad91ef" rel="nofollow noreferrer">https://ystatit.medium.com/azure-ssh-into-aks-nodes-471c07ad91ef</a></li>
<li><a href="https://learn.microsoft.com/en-us/cli/azure/vmss?view=azure-cli-latest#az_vmss_list_instances" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/cli/azure/vmss?view=azure-cli-latest#az_vmss_list_instances</a></li>
</ol>
<p>(you may create vmss if you dont have it configured. Refer :<a href="https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal" rel="nofollow noreferrer">create a VMSS</a>)</p>
| kavyaS |
<p>The use case is such that I need both JDK and Mongo images in a single container, the java process starts up the Mongo daemon process.</p>
| Devesh Lohumi | <p>Here's the minimum Dockerfile that bake JRE 11 to the mongo image.</p>
<pre><code>FROM mongo:latest
# Replace the version if desired
RUN apt-get update -y && apt-get install openjdk-11-jre-headless -y
# Install your app and stuffs here...
# Override for your own command
CMD ["java","-version"]
</code></pre>
<p>Build the image <code>docker build -t mongodb-java .</code></p>
<p>Test the image <code>docker run -t --rm mongodb-java</code> will output the JRE version.</p>
<p>Test the image <code>docker run -t --rm mongodb-java mongo --version</code> will output the MongoDB version.</p>
<p>You can then follow Kaniko <a href="https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-docker" rel="nofollow noreferrer">steps</a> to build the image.</p>
| gohm'c |
<p>I am trying to apply ingress rule in minikube but I am getting this error</p>
<pre><code>error: resource mapping not found for name: "dashboard-ingress" namespace: "kubernetes-dashboard" from "Desktop/minikube/dashboard-ingress.yaml": no matches for kind "Ingress" in version "networking.k8.io/v1"
</code></pre>
<p>dashboard-ingress.yaml</p>
<pre><code>apiVersion: networking.k8.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- backend:
serviceName: kubernetes-dashboard
servicePort: 80
</code></pre>
| etranz | <p>I have found the solution: the API group was misspelled (<code>networking.k8.io</code> instead of <code>networking.k8s.io</code>), and the <code>networking.k8s.io/v1</code> Ingress also requires <code>pathType</code> and the <code>service.name</code>/<code>service.port.number</code> backend syntax instead of <code>serviceName</code>/<code>servicePort</code>.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: kubernetes-dashboard
port:
number: 80
</code></pre>
| etranz |
<p>Can you please help me figure out why kubectl apply fails?
When I try to run <code>kubectl apply -k k8s/overlays/dev</code> it fails with error message "error: rawResources failed to read Resources: Load from path ../../base failed: '../../base' must be a file"
But if I run <code>kustomize build k8s/overlays/dev</code> it works fine.</p>
<p>folder structure</p>
<pre><code>|____k8s
| |____overlays
| | |____dev
| | | |____kustomization.yaml
| |____base
| | |____deployment.yaml
| | |____kustomization.yaml
</code></pre>
<p>k8s/base/deployment.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
template:
spec:
containers:
- name: my-app
image: my-app:v1
ports:
- containerPort: 8080
protocol: TCP
</code></pre>
<p>k8s/base/kustomization.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
commonLabels:
app: my-app
</code></pre>
<p>k8s/overlays/dev/kustomization.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
</code></pre>
| Aleksandras Artemjevas | <p>Upgrading kubectl to v1.21.0 solved the issue.</p>
| Aleksandras Artemjevas |
<p>I'm using Terraform to provision an EKS cluster (mostly following the example <a href="https://learn.hashicorp.com/terraform/aws/eks-intro" rel="nofollow noreferrer">here</a>). At the end of the tutorial, there's a method of outputting the configmap through the <code>terraform output</code> command, and then applying it to the cluster via <code>kubectl apply -f <file></code>. I'm attempting to wrap this <code>kubectl</code> command into the Terraform file using the <code>kubernetes_config_map</code> resource, however when running Terraform for the first time, I receive the following error:</p>
<pre><code>Error: Error applying plan:
1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: 1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: the server could not find the requested resource (post configmaps)
</code></pre>
<p>The strange thing is, every subsequent <code>terraform apply</code> works, and applies the configmap to the EKS cluster. This leads me to believe it is perhaps a timing issue? I tried to preform a bunch of actions in between the provisioning of the cluster and applying the configmap but that didn't work. I also put an explicit <code>depends_on</code> argument to ensure that the cluster has been fully provisioned first before attempting to apply the configmap.</p>
<pre><code>provider "kubernetes" {
config_path = "kube_config.yaml"
}
locals {
map_roles = <<ROLES
- rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
ROLES
}
resource "kubernetes_config_map" "config_map_aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${local.map_roles}"
}
depends_on = ["aws_eks_cluster.eks_cluster"]
}
</code></pre>
<p>I expect for this to run correctly the first time, however it only runs after applying the same file with no changes a second time. </p>
<p>I attempted to get more information by enabling the <code>TRACE</code> debug flag for terraform, however the only output I got was the exact same error as above.</p>
| jcploucha | <p>I don't know if this is still relevant, but I was dealing with the same trouble and found the explanation here:</p>
<p><a href="https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543" rel="nofollow noreferrer">https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543</a></p>
<p>In other words, I changed the cluster's name in the <strong>aws_eks_cluster_auth</strong> block to a <strong>static name</strong>, and it worked. Perhaps this is a bug in Terraform.</p>
| Daniel Andrade |
<p>Within one helm chart I deploy a PersistentVolume (EFS):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .Release.Namespace }}-{{ .Release.Name }}
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: {{ .Values.pv.storageClassName }}
csi:
driver: efs.csi.aws.com
volumeHandle: {{ .Values.pv.volumeHandle | quote }}
claimRef:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
</code></pre>
<p>And PersistentVolumeClaim for it:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}
spec:
accessModes:
- ReadWriteMany
storageClassName: {{ .Values.pv.storageClassName }}
resources:
requests:
storage: 5Gi # Required but ignored in case of EFS
volumeName: {{ .Release.Namespace }}-{{ .Release.Name }}
</code></pre>
<p>And a pod uses the PVC as usual:</p>
<pre><code>volumeMounts:
- name: persistent-storage
mountPath: /efs
...
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: {{ .Release.Name }}
</code></pre>
<p>When I do helm install, I have an intermittent issue: pods are stuck in a Pending state with the following event:</p>
<blockquote>
<p>Warning FailedScheduling 56s fargate-scheduler Pod not supported on Fargate: volumes not supported: persistent-storage not supported because: PVC someRelease not bound</p>
</blockquote>
<p>If I check the state of the PVC, it is Bound, and after a pod restart it works as expected. It looks like the PV is not created yet at the moment the PVC tries to bind to it, hence the pod can't be created. Should I specify an order for the templates somehow, or is there another solution?</p>
| Anton Balashov | <p>Try the following. The names are swapped so the PVC name matches the PV's <code>claimRef</code>, and <code>storageClassName</code> is set to an empty string on both objects so no dynamic provisioner gets involved:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .Release.Name }}
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
csi:
driver: efs.csi.aws.com
volumeHandle: {{ .Values.pv.volumeHandle | quote }}
claimRef:
name: {{ .Release.Namespace }}-{{ .Release.Name }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Namespace }}-{{ .Release.Name }}
namespace: {{ .Release.Namespace }}
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 5Gi # Required but ignored in case of EFS
</code></pre>
| gohm'c |
<p>I've just finished setting up AKS with AGIC and using Azure CNI. I'm trying to deploy NGINX to test if I set the AKS up correctly with the following configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: azure/application-gateway
kubernetes.io/ingress.allow-http: "false"
appgw.ingress.kubernetes.io/use-private-ip: "false"
appgw.ingress.kubernetes.io/override-frontend-port: "443"
spec:
tls:
- hosts:
- my.domain.com
secretName: aks-ingress-tls
rules:
- host: my.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-service
port:
number: 80
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
component: nginx
template:
metadata:
labels:
component: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: ClusterIP
selector:
component: nginx
ports:
- port: 80
protocol: TCP
</code></pre>
<p>There's no error or any other log message when applying the above configurations.</p>
<pre class="lang-bash prettyprint-override"><code>> k apply -f nginx-test.yml
deployment.apps/nginx-deployment created
service/nginx-service created
ingress.networking.k8s.io/nginx-ingress created
</code></pre>
<hr />
<p>But after a further investigation in the Application Gateway I found these entries in the Activity log popped up at the same time I applied the said configuration.</p>
<p><a href="https://i.stack.imgur.com/RbXI6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RbXI6.png" alt="Activity log in AGIC" /></a></p>
<p>Further details in one of the entries is as follows:</p>
<ul>
<li><strong>Operation name</strong>: Create or Update Application Gateway</li>
<li><strong>Error code</strong>: RequestDisallowedByPolicy</li>
<li><strong>Message</strong>: Resource 'my-application-gateway' was disallowed by policy.
<pre class="lang-json prettyprint-override"><code>[
{
"policyAssignment": {
"name": "Encryption In Transit",
"id": "/providers/Microsoft.Management/managementGroups/***/providers/Microsoft.Authorization/policyAssignments/EncryptionInTransit"
},
"policyDefinition": {
"name": "HTTPS protocol only on Application Gateway listeners",
"id": "/providers/microsoft.management/managementgroups/***/providers/Microsoft.Authorization/policyDefinitions/HttpsOnly_App_Gateways"
},
"policySetDefinition": {
"name": "Encryption In Transit",
"id": "/providers/Microsoft.Management/managementgroups/***/providers/Microsoft.Authorization/policySetDefinitions/EncryptionInTransit"
}
}
]
</code></pre>
</li>
</ul>
<p>My organization has a policy to enforce TLS, but from my configuration I'm not sure what I did wrong, as I have already configured the ingress to only use HTTPS and also have the certificate (from the secret) installed.</p>
<p>I'm not sure where to look and wish someone could guide me in the correct direction. Thanks!</p>
| TacticalBacon | <p>• As you said, your organization has a <strong>policy enforcing TLS for encrypted communication over HTTPS</strong>. When the <strong>NGINX deployment from the posted YAML</strong> is created, <strong>the backing service talks to the Application Gateway Ingress Controller over port 80, which is reserved for plain HTTP</strong>. Combined with disallowing private IPs on the AGIC, traffic for 'my.domain.com' ends up bypassing the HTTPS 443 listener and going over port 80 without the SSL/TLS certificate-based port, and that is exactly what the policy blocks.</p>
<p>I would therefore suggest you <em><strong>configure the NGINX application with port 443 as the frontend port for the cluster IP and make sure ‘SSL redirection’ is enabled</strong></em>, so that when the NGINX application is deployed it no longer trips the policy restriction and fails. Also, refer to the snapshots below of the listeners on the Application Gateway and the load balancer when provisioning an AGIC for an AKS cluster.</p>
<p><a href="https://i.stack.imgur.com/lpx4N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lpx4N.png" alt="AKS application gateway backend port" /></a></p>
<p><a href="https://i.stack.imgur.com/QJNYT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QJNYT.png" alt="AKS application gateway frontend port" /></a></p>
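<p>As a rough sketch of the SSL-redirection suggestion above (treat the annotation as an assumption to verify against your AGIC version, not a confirmed fix), it is commonly expressed on the ingress metadata like this:</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    kubernetes.io/ingress.allow-http: "false"
    # assumed addition: have the gateway redirect any HTTP traffic to the HTTPS listener
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
</code></pre>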
<p>Also, for more detailed information on deploying the NGINX application and its ports in an AKS cluster, kindly refer to the documentation link below:</p>
<p><strong><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli</a></strong></p>
| Kartik Bhiwapurkar |
<p>I am trying to run the bookinfo example locally with WSL2 and Docker Desktop. I am having issues when trying to access the productpage service via the gateway, as I get connection refused. I am not sure whether I missed anything. Here is what I have done after googling a lot on the internet:</p>
<ol>
<li>Deployed all services from bookinfo example and all up running, I can curl productpage from other service using kubectl exec</li>
<li>Deployed bookinfo-gateway using the file from the example without any change under the default namespace</li>
</ol>
<pre><code>Name: bookinfo-gateway
Namespace: default
Labels: <none>
Annotations: <none>
API Version: networking.istio.io/v1beta1
Kind: Gateway
Metadata:
Creation Timestamp: 2021-06-06T20:47:18Z
Generation: 1
Managed Fields:
API Version: networking.istio.io/v1alpha3
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:selector:
.:
f:istio:
f:servers:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2021-06-06T20:47:18Z
Resource Version: 2053564
Self Link: /apis/networking.istio.io/v1beta1/namespaces/default/gateways/bookinfo-gateway
UID: aa390a1d-2e34-4599-a1ec-50ad7aa9bdc6
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http
Number: 80
Protocol: HTTP
Events: <none>
</code></pre>
<ol start="3">
<li><p>The istio-ingressgateway is exposed to the outside via localhost (not sure how this is configured, as it is deployed during the Istio installation) on port 80, which as I understand will be used by bookinfo-gateway:
kubectl get svc istio-ingressgateway -n istio-system
<a href="https://i.stack.imgur.com/8eKH6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8eKH6.png" alt="enter image description here" /></a></p>
</li>
<li><p>following Determining the ingress IP and ports section in <a href="https://istio.io/latest/docs/setup/getting-started/" rel="nofollow noreferrer">the instruction</a>.</p>
</li>
</ol>
<p>My INGRESS_HOST=127.0.0.1 and INGRESS_PORT is 80</p>
<ol start="5">
<li>curl -v -s <a href="http://127.0.0.1:80/productpage" rel="nofollow noreferrer">http://127.0.0.1:80/productpage</a> | grep -o ".*"</li>
</ol>
<pre><code>* Trying 127.0.0.1:80...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to 127.0.0.1 port 80: Connection refused
* Closing connection 0
</code></pre>
<ol start="6">
<li><p>trying this <a href="http://127.0.0.1/productpage" rel="nofollow noreferrer">http://127.0.0.1/productpage</a> on browser, return 404. Does this 404 mean the gateway is kind of up but virtual service is not working??</p>
</li>
<li><p>A further question, if it is relevant: I am a bit confused about how WSL2 works now. It looks like localhost in the Windows browser and in the WSL2 terminal are not the same thing, though I know there is some forwarding from Windows to the WSL2 VM (whose IP I can get from /etc/resolv.conf). If they were the same, why does one return connection refused and the other return 404?</p>
</li>
<li><p>On windows I have tried to disable IIS or anything running on port 80 (net stop http). Somehow, I still can see something listen to port 80</p>
</li>
</ol>
<pre><code>netstat -aon | findstr :80
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING 4
tasklist /svc /FI "PID eq 4"
Image Name PID Services
========================= ======== ============================================
System 4 N/A
</code></pre>
<p>I am wondering whether this is what causes the difference in point 7? As windows is running on another http server on port 80?</p>
<p>I know this is a lot of questions. I believe many of us who are new to Istio and WSL2 may have similar questions, so hopefully this helps others as well. Please advise.</p>
| user1619397 | <p>I managed to get this working: This is what I did.
Shell into the distro (mine was Ubuntu 20.04 LTS)</p>
<p>Run:</p>
<pre><code> sudo apt-get -y install socat
sudo apt update
sudo apt upgrade
exit
</code></pre>
<p>The above adds socat (the Istio logs were showing "connection refused" errors related to it) and brings the distro fully up to date.</p>
<p>Now you have to run a port-forward so that localhost can reach the Istio ingress gateway:</p>
<pre><code> kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
</code></pre>
<p>If port 8080 is already in use, drop it from the command and just use :80; the port-forward will then pick a free local port.</p>
<p>Now go to</p>
<pre><code> http://localhost:8080/productpage
</code></pre>
<p>You should hit the page and the port-forward should output</p>
<pre><code> Handling connection for 8080
</code></pre>
<p>Hope that helps...
The good thing is I no longer have to use Hyper-V or another cluster installer like minikube/microk8s; I can use the built-in Kubernetes in Docker Desktop, and my laptop doesn't seem to be under much load for what I'm doing either.</p>
| Shaun Cartwright |
<p>I'm running rancher in centos with the master node being the same machine.
I can do everything but when i try to "apt-get update" inside the pods i get:</p>
<pre><code> Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
Temporary failure resolving 'archive.ubuntu.com'
Err:2 http://security.ubuntu.com/ubuntu focal-security InRelease
Temporary failure resolving 'security.ubuntu.com'
Err:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Temporary failure resolving 'archive.ubuntu.com'
Err:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Temporary failure resolving 'archive.ubuntu.com'
Reading package lists... Done
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal/InRelease Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease Temporary failure resolving 'security.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
</code></pre>
<p>The problem is in the firewalld of centos because when i disable the firewall i have internet access inside the pods. I have already added the ports provided in this <a href="https://rancher.com/docs/rancher/v2.5/en/installation/resources/advanced/firewall/" rel="nofollow noreferrer">link</a>. But still i cant have access to the internet.
Is there another way without disabling the centos firewall?</p>
<p>I'm using Centos 8 and Rancher 2.</p>
| Joao Marono | <p>I was able to solve it. The problem was that Docker could not resolve DNS queries inside containers. The workaround was to first add the <a href="https://rancher.com/docs/rancher/v2.0-v2.4/en/installation/resources/advanced/firewall/" rel="nofollow noreferrer">ports</a> and then execute the following commands:</p>
<pre><code># Check what interface docker is using, e.g. 'docker0'
ip link show
# Check available firewalld zones, e.g. 'public'
sudo firewall-cmd --get-active-zones
# Check what zone the docker interface it bound to, most likely 'no zone' yet
sudo firewall-cmd --get-zone-of-interface=docker0
# So add the 'docker0' interface to the 'public' zone. Changes will be visible only after firewalld reload
sudo nmcli connection modify docker0 connection.zone public
# Masquerading allows for docker ingress and egress (this is the juicy bit)
sudo firewall-cmd --zone=public --add-masquerade --permanent
# Reload firewalld
sudo firewall-cmd --reload
# Reload dockerd
sudo systemctl restart docker
</code></pre>
| Joao Marono |
<p>While creating a deployment using command</p>
<pre><code>kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
</code></pre>
<p>I am getting error <code>Error: unknown flag: --replicas</code></p>
<pre><code>controlplane $ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
controlplane $ kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
Error: unknown flag: --replicas
See 'kubectl create deployment --help' for usage.
</code></pre>
<p>Could anyone please help me with the reason for this as this command is working on other Kubernetes clusters?</p>
| Ashu | <p>You may try putting a blank character between <code>--</code> and the option. For example:</p>
<p>kubectl create deploy nginx --image=nginx:1.7.8 -- replicas=2</p>
<p>It works for me.</p>
| Tim Wang |
<p>I'm running Docker Desktop for MacOS and I don't know how to stop the Docker service. It runs all the time using up the MacBook battery.</p>
<img src="https://i.stack.imgur.com/buRf4.png" width="200" />
<p>On a simple search, there are docs showing how to stop the containers but not the docker service itself.</p>
<p>I might be missing something obvious, but is there a way to stop both Kubernetes and Docker service without having to kill the desktop app?</p>
| Ébe Isaac | <p>The docker desktop app starts a qemu vm, so the desktop app has no control over the PIDs.
To overcome the "situation" do the following:</p>
<ul>
<li><p>open the Terminal app</p>
</li>
<li><p>edit the file <code>~/.bash_profile</code></p>
</li>
<li><p>add the following lines</p>
</li>
</ul>
<pre>
#macro to kill the docker desktop app and the VM (excluding vmnetd -> it's a service)
function kdo() {
ps ax|grep -i docker|egrep -iv 'grep|com.docker.vmnetd'|awk '{print $1}'|xargs kill
}
</pre>
<ul>
<li>save the file</li>
</ul>
<p>Quit the terminal app and open it again.</p>
<p>Type <code>kdo</code> to kill all the dependent processes (hypervisor, docker daemon, etc.).</p>
| Hannes Stoolmann |
<p>We have a kubernetes system that among other activities handling thousands of incoming inputs from sensors. Some sensors can stop reporting from time to time, so we can have an alert about the event of disconnection. When sensor is back we would like also to get an event for this as well. So, between these events (connection and disconnection) the status of a specific sensor can be OK or NOK and we would like to see the status of currently disconnected sensors without going over all the issued events and finding out each time.</p>
<p>Can we do that with Prometheus Alertmanager?
If yes, can you please refer to the possible ways to accomplish this?
If no, what will be your default way to handle this requirement?</p>
| Alex Levit | <p>This has to be managed on the Prometheus server side by adding self-monitoring alerts, more precisely the PrometheusTargetMissing alert in your case:</p>
<pre><code> - alert: PrometheusTargetMissing
expr: up == 0
for: 0m
labels:
severity: critical
annotations:
summary: Prometheus target missing (instance {{ $labels.instance }})
description: A Prometheus target has disappeared. An exporter might be crashed.\n VALUE = {{ $value }}\n LABELS: {{ $labels }}
</code></pre>
<p>Reference:
<a href="https://awesome-prometheus-alerts.grep.to/rules.html#rule-prometheus-self-monitoring-2" rel="nofollow noreferrer">https://awesome-prometheus-alerts.grep.to/rules.html#rule-prometheus-self-monitoring-2</a></p>
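<p>If you just want the current list of disconnected targets at any moment, without going through the emitted events, the same expression can also be queried ad hoc against the Prometheus HTTP API (the Prometheus address below is a placeholder):</p>
<pre><code># returns every scrape target that is currently down
curl -sG 'http://<prometheus-host>:9090/api/v1/query' --data-urlencode 'query=up == 0'
</code></pre>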
| Yiadh TLIJANI |
<p>As far as I understand from the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">VPA documentation</a> the vertical pod autoscaler stop/restart the pod based-on the predicted request/limit's lower/upper bounds and target.
In the "auto" mode it says that the pod will be stopped and restarted, however, I don't get the point of doing a prediction and restarting the pod while it is still working because although we know that it might go out of resource eventually it is still working and we can wait to rescale it once it has really gone out of memory/cpu. Isn't it more efficient to just wait for the pod to go out of memory/cpu and then restart it with the new predicted request?</p>
<p>Is recovering from a dead container more costly than stopping and restarting the pod ourselves? If yes, in what ways?</p>
| Saeid Ghafouri | <blockquote>
<p>Isn't it more efficient to just wait for the pod to go out of
memory/cpu and then restart it with the new predicted request?</p>
</blockquote>
<p>In my opinion this is not the best solution. If a pod tries to use more CPU than its limit, the container's CPU usage is throttled; if a container tries to use more memory than its limit, Kubernetes OOM-kills it. Because of limit overcommit, the sum of pod limits can be higher than the node's capacity, so this can lead to memory exhaustion on the node and cause the death of other workloads/pods.</p>
<p>Answering your question - VPA was designed to simplify those scenarios:</p>
<blockquote>
<p>Vertical Pod Autoscaler (VPA) frees the users from necessity of
setting up-to-date resource limits and requests for the containers in
their pods. When configured, it will set the requests automatically
based on usage and thus allow proper scheduling onto nodes so that
appropriate resource amount is available for each pod. It will also
maintain ratios between limits and requests that were specified in
initial containers configuration.</p>
</blockquote>
<p>In addition, VPA is not only responsible for scaling up but also for scaling down:
it can both down-scale pods that are over-requesting resources, and also up-scale pods that are under-requesting resources based on their usage over time.</p>
<blockquote>
<p>Is recovering from a dead container more costly than stopping and
restarting the pod ourselves? If yes, in what ways?</p>
</blockquote>
<p>As for the cost of recovering from a dead container: the main possible cost is requests that can get lost during the OOM-killing process, as per the official <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#quick-start" rel="nofollow noreferrer">doc</a>.</p>
<p>As per the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#quick-start" rel="nofollow noreferrer">official documentation</a> VPAs operates in those mode:</p>
<blockquote>
<p>"Auto": VPA assigns resource requests on pod creation as well as
updates them on existing pods using the preferred update mechanism.
Currently this is equivalent to "Recreate".</p>
<p>"Recreate": VPA assigns resource requests on pod creation as well as
updates them on existing pods by evicting them when the requested
resources differ significantly from the new recommendation (respecting
the Pod Disruption Budget, if defined).</p>
<p>"Initial": VPA only assigns resource requests on pod creation and
never changes them later.</p>
<p>"Off": VPA does not automatically change resource requirements of the
pods.</p>
</blockquote>
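<p>For reference, here is a minimal sketch of a VerticalPodAutoscaler object selecting one of these modes (the names below are placeholders):</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # workload whose requests VPA should manage
  updatePolicy:
    updateMode: "Auto"    # or "Recreate", "Initial", "Off"
</code></pre>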
<p>NOTE:
VPA Limitations</p>
<ul>
<li>VPA recommendation might exceed available resources, such as you cluster capacity or your team’s quota. Not enough available resources may cause pods to go pending.</li>
<li>VPA in Auto or Recreate mode won’t evict pods with one replica as this would cause disruption.</li>
<li>Quick memory growth might cause the container to be out of memory killed. As out of memory killed pods aren’t rescheduled, VPA won’t apply new resource.</li>
</ul>
<p>Please also take a look at some of the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#known-limitations" rel="nofollow noreferrer">VPA Known limitations</a>:</p>
<blockquote>
<ul>
<li>Updating running pods is an experimental feature of VPA. Whenever VPA updates the pod resources the pod is recreated, which causes all
running containers to be restarted. The pod may be recreated on a
different node.</li>
<li>VPA does not evict pods which are not run under a controller. For such pods Auto mode is currently equivalent to Initial.</li>
<li>VPA reacts to most out-of-memory events, but not in all situations.</li>
</ul>
</blockquote>
<p>Additional resources:
<a href="https://povilasv.me/vertical-pod-autoscaling-the-definitive-guide/" rel="nofollow noreferrer">VERTICAL POD AUTOSCALING: THE DEFINITIVE GUIDE</a></p>
| Jakub Siemaszko |
<p>I'd like to create an nginx ingress controller with an AWS internal NLB. The requirement is to fix the IP address of the NLB endpoint. For example, currently the NLB DNS of the nginx ingress service is abc.elb.eu-central-1.amazonaws.com, which resolves to the IP address 192.168.1.10; if I delete and re-create the nginx ingress controller, I want the NLB DNS to stay the same as before.
Having a look at the Kubernetes service annotations, I did not see any way to re-use an existing NLB. However, I found the annotation service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses in this <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/" rel="nofollow noreferrer">link</a>, which as far as I understand allows me to set the IP address of the NLB, but it did not work as I expected: every time I re-created the nginx controller, the IP address was different. Below is the K8s service yaml file.</p>
<pre><code># Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "10.136.103.251"
service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-00df069133b22"
labels:
helm.sh/chart: ingress-nginx-3.23.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.44.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
spec:
type: LoadBalancer
externalTrafficPolicy: Local
</code></pre>
<p>I know this requirement is weird, but is it possible to do that?</p>
| Tien Dung Tran | <p>If your Kubernetes cluster runs on a VPC with more than one subnet (which is probably the case), you must provide a private ip address for each subnet.</p>
<p>I installed the <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">AWS Load balancer controller</a> with the helm chart, then i installed the nginx ingress controller with this helm chart :</p>
<pre><code>helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress -f internal-ingress-values.yaml
</code></pre>
<p>Here the content of internal-ingress-values.yaml</p>
<pre><code>controller:
ingressClass: nginx
service:
enableHttp: false
enableHttps: true
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-scheme: internal
service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.136.103.251, 10.136.104.251
service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-00a1a7f9949aa0ba1, subnet-12ea9f1df24aa332c
ingressClassResource:
enabled: true
default: true
</code></pre>
<p>According to the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#subnets" rel="nofollow noreferrer">documentation</a> the <strong>service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses</strong> annotation <em>length/order must match subnets</em></p>
<p>So, you must provide the IP addresses and subnets in the same order (don't mismatch them).
If you take my example above, you must make sure that:</p>
<ul>
<li>10.136.103.251 is included in subnet-00a1a7f9949aa0ba1</li>
<li>10.136.104.251 is included in subnet-12ea9f1df24aa332c</li>
</ul>
<p>It's a good idea to tag your subnets according to the <a href="https://aws.amazon.com/fr/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/" rel="nofollow noreferrer">documentation</a>:</p>
<p>Key: kubernetes.io/cluster/my-cluster-name
Value: shared</p>
<p>Key: kubernetes.io/role/internal-elb
Value: 1</p>
<p>I tested this on K8s 1.20 and it works for my project.
Don't provide "ingressClassResource" if you're on K8S <= 1.17.</p>
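<p>To verify that the load balancer actually picked up the requested private IPs after a redeploy, something like this can help (the namespace and service name depend on your Helm release, and the DNS name is a placeholder):</p>
<pre><code>kubectl -n nginx-ingress get svc
nslookup <internal-nlb-dns-name>   # should resolve to 10.136.103.251 and 10.136.104.251
</code></pre>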
| fboulais |
<p>I'm aware of the concept "provisioner" but i do not understand what intree ebs driver means.
Is ebs.csi.aws.com the csi driver maintained by the aws and the other maintained by k8s itself?
Is one better than the other?</p>
| pandawithcat | <p>As per <a href="https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi" rel="noreferrer">the official documentation</a>:</p>
<blockquote>
<p>Prior to CSI, Kubernetes provided a powerful volume plugin system. These volume plugins were “in-tree” meaning their code was part of the core Kubernetes code and shipped with the core Kubernetes binaries. However, adding support for new volume plugins to Kubernetes was challenging. Vendors that wanted to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) were forced to align with the Kubernetes release process. In addition, third-party storage code caused reliability and security issues in core Kubernetes binaries and the code was often difficult (and in some cases impossible) for Kubernetes maintainers to test and maintain. Using the Container Storage Interface in Kubernetes resolves these major issues.</p>
</blockquote>
<blockquote>
<p>As more CSI Drivers were created and became production ready, we wanted all Kubernetes users to reap the benefits of the CSI model. However, we did not want to force users into making workload/configuration changes by breaking the existing generally available storage APIs. The way forward was clear - we would have to replace the backend of the “in-tree plugin” APIs with CSI.</p>
</blockquote>
<p>So answering your question - yes, ebs.csi.aws.com is maintained by AWS while the in-tree plugin is maintained by Kubernetes but it seems like they've stopped implementing new features as per <a href="https://grepmymind.com/its-all-about-the-data-a-journey-into-kubernetes-csi-on-aws-f2b998676ce9" rel="noreferrer">this article</a>:</p>
<blockquote>
<p>The idea of this journey started picking up steam when I realized that the in-tree storage plugins were deprecated and no new enhancements were being made to them starting with Kubernetes 1.20. When I discovered that simply switching from gp2 to gp3 volumes meant I had to start using the AWS CSI Driver I realized I was behind the times.</p>
</blockquote>
<p>Answering your last question it's probably better to use ebs.csi.aws.com as per <a href="https://aws.amazon.com/about-aws/whats-new/2021/05/amazon-ebs-container-storage-interface-driver-is-now-generally-available/" rel="noreferrer">this note</a>:</p>
<blockquote>
<p>The existing in-tree EBS plugin is still supported, but by using a CSI
driver, you benefit from the decoupling between the Kubernetes
upstream release cycle and the CSI driver release cycle.</p>
</blockquote>
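<p>To make the difference concrete, here is a minimal sketch of how the two provisioners typically show up in a StorageClass (the names are made up, and gp3 volumes require the CSI driver):</p>
<pre><code># in-tree (legacy) provisioner, maintained as part of core Kubernetes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-intree
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# out-of-tree CSI driver, maintained by AWS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-csi
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
</code></pre>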
| Jakub Siemaszko |
<p>I'm trying to make a list of all deployments' variables in kubernetes cluster with kubectl command. I'm doing something like</p>
<pre><code>kubectl get deploy --all-namespaces -o custom-columns='NAME:metadata.name,ENV:spec.template.spec.containers.env.name'
</code></pre>
<p>but something always goes wrong. How should I write the kubectl command to get a table of deployments and their variables with values?</p>
| cusl | <p>Here is the right command; <code>containers</code> and <code>env</code> are arrays, so you need the <code>[*]</code> wildcards to expand them:</p>
<pre><code>kubectl get deploy --all-namespaces -o custom-columns='NAME:metadata.name,ENV:spec.template.spec.containers[*].env[*].name'
</code></pre>
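<p>If you also want the values (for variables set directly rather than via <code>valueFrom</code>), you can extend it along these lines:</p>
<pre><code>kubectl get deploy --all-namespaces -o custom-columns='NAME:metadata.name,ENV:spec.template.spec.containers[*].env[*].name,VALUE:spec.template.spec.containers[*].env[*].value'
</code></pre>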
| Yiadh TLIJANI |
<p>When I run the command</p>
<pre><code>kubectl create -f .k8s/deployment.yaml --context=cluster-1
</code></pre>
<p>I get the error</p>
<blockquote>
<p>error: error validating ".k8s/deployment.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "volumes" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false</p>
</blockquote>
<p><code>deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
containers:
...
volumes:
- name: auth
secret:
secretName: d-secrets
items:
- key: SECRETS
path: foobar.json
</code></pre>
<p>What can it be?</p>
| Rodrigo | <p><code>...unknown field "volumes" in io.k8s.api.core.v1.Container</code></p>
<p>Your <code>volumes</code> section is in the wrong place: it belongs at the pod <code>spec</code> level, as a sibling of <code>containers</code>, not inside the container. Try:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
...
template:
...
spec:
containers:
- name: ...
...
volumes: <-- should be same level as `containers`
- name: auth
secret:
secretName: d-secrets
items:
- key: SECRETS
path: foobar.json
</code></pre>
| gohm'c |
<p>The Kubernetes documentation says:</p>
<blockquote>
<p>The administrator creates one <code>ResourceQuota</code> for each namespace.</p>
</blockquote>
<p>However, the Kubernetes API does not prevent creating more than one <code>ResourceQuota</code> per namespace.</p>
<p><strong>What happens if I have two <code>ResourceQuota</code>s in one namespace?</strong> Which one is used? The one with lower limits or the one with higher limits?</p>
<p>I cannot find the answer without testing it, which takes some time.</p>
| Daniel Andrzejewski | <p>Yes, that's true: it's possible to create multiple ResourceQuotas in the same namespace, and it has been mentioned in a few cases in <a href="https://github.com/kubernetes/kubernetes/issues/23698" rel="nofollow noreferrer">this issue</a>. It's important to remember that ResourceQuota is one of the Admission Controllers turned on by default, as per <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#resourcequota" rel="nofollow noreferrer">the official documentation</a>:</p>
<blockquote>
<p>This admission controller will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ResourceQuota object in a Namespace.</p>
</blockquote>
<p>That means that regardless of the number of ResourceQuotas in the same namespace, a request is admitted only if it satisfies all of them, so effectively the most restrictive quota wins. If any ResourceQuota would be violated, the request simply fails.</p>
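<p>To see how all the quotas in a namespace are evaluated together, you can list and inspect them with:</p>
<pre><code>kubectl get resourcequota -n <namespace>
kubectl describe resourcequota -n <namespace>   # shows used vs. hard limits for each quota
</code></pre>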
| Jakub Siemaszko |
<p>How can I use crictl to get the dangling images? The server does not have docker installed right now. Is it possible to use crictl to get the dangling images? I have tried this command:</p>
<pre><code>crictl images
</code></pre>
<p>but could not recognize which images could be removed.</p>
| Dolphin | <p>It is not possible to get only the dangling images using crictl. The safest and easiest way to clean up dangling images is by using <a href="https://stackoverflow.com/questions/45142528/what-is-a-dangling-image-and-what-is-an-unused-image">docker</a>.</p>
<p>You can use the <code>$ docker image prune</code> command to clean up images; by default, <code>docker image prune</code> only cleans up dangling images.</p>
<p>Try listing your images with <code>crictl images</code>, and if you want to remove all unused images, run the command below:</p>
<pre><code>crictl rmi --prune
</code></pre>
<p>You need a rather current crictl for that. From the help:</p>
<pre><code>$ crictl rmi --help
NAME:
crictl rmi - Remove one or more images
USAGE:
crictl rmi [command options] IMAGE-ID [IMAGE-ID...]
OPTIONS:
--all, -a Remove all images (default: false)
--prune, -q Remove all unused images (default: false)
--help, -h show help (default: false)
</code></pre>
<p>Refer to the <a href="https://stackoverflow.com/questions/69981852/how-to-use-local-docker-images-in-kubernetes-deployments-not-minikube">stackpost</a> for more information.</p>
| Fariya Rahmat |
<p>I am trying to deploy this <code>docker-compose</code> app on GCP kubernetes.</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.5"
x-environment:
&default-back-environment
# Database settings
POSTGRES_DB: taiga
POSTGRES_USER: taiga
POSTGRES_PASSWORD: taiga
POSTGRES_HOST: taiga-db
# Taiga settings
TAIGA_SECRET_KEY: "taiga-back-secret-key"
TAIGA_SITES_SCHEME: "http"
TAIGA_SITES_DOMAIN: "localhost:9000"
TAIGA_SUBPATH: "" # "" or "/subpath"
# Email settings. Uncomment following lines and configure your SMTP server
# EMAIL_BACKEND: "django.core.mail.backends.smtp.EmailBackend"
# DEFAULT_FROM_EMAIL: "[email protected]"
# EMAIL_USE_TLS: "False"
# EMAIL_USE_SSL: "False"
# EMAIL_HOST: "smtp.host.example.com"
# EMAIL_PORT: 587
# EMAIL_HOST_USER: "user"
# EMAIL_HOST_PASSWORD: "password"
# Rabbitmq settings
# Should be the same as in taiga-async-rabbitmq and taiga-events-rabbitmq
RABBITMQ_USER: taiga
RABBITMQ_PASS: taiga
# Telemetry settings
ENABLE_TELEMETRY: "True"
x-volumes:
&default-back-volumes
- taiga-static-data:/taiga-back/static
- taiga-media-data:/taiga-back/media
# - ./config.py:/taiga-back/settings/config.py
services:
taiga-db:
image: postgres:12.3
environment:
POSTGRES_DB: taiga
POSTGRES_USER: taiga
POSTGRES_PASSWORD: taiga
volumes:
- taiga-db-data:/var/lib/postgresql/data
networks:
- taiga
taiga-back:
image: taigaio/taiga-back:latest
environment: *default-back-environment
volumes: *default-back-volumes
networks:
- taiga
depends_on:
- taiga-db
- taiga-events-rabbitmq
- taiga-async-rabbitmq
taiga-async:
image: taigaio/taiga-back:latest
entrypoint: ["/taiga-back/docker/async_entrypoint.sh"]
environment: *default-back-environment
volumes: *default-back-volumes
networks:
- taiga
depends_on:
- taiga-db
- taiga-back
- taiga-async-rabbitmq
taiga-async-rabbitmq:
image: rabbitmq:3.8-management-alpine
environment:
RABBITMQ_ERLANG_COOKIE: secret-erlang-cookie
RABBITMQ_DEFAULT_USER: taiga
RABBITMQ_DEFAULT_PASS: taiga
RABBITMQ_DEFAULT_VHOST: taiga
volumes:
- taiga-async-rabbitmq-data:/var/lib/rabbitmq
networks:
- taiga
taiga-front:
image: taigaio/taiga-front:latest
environment:
TAIGA_URL: "http://localhost:9000"
TAIGA_WEBSOCKETS_URL: "ws://localhost:9000"
TAIGA_SUBPATH: "" # "" or "/subpath"
networks:
- taiga
# volumes:
# - ./conf.json:/usr/share/nginx/html/conf.json
taiga-events:
image: taigaio/taiga-events:latest
environment:
RABBITMQ_USER: taiga
RABBITMQ_PASS: taiga
TAIGA_SECRET_KEY: "taiga-back-secret-key"
networks:
- taiga
depends_on:
- taiga-events-rabbitmq
taiga-events-rabbitmq:
image: rabbitmq:3.8-management-alpine
environment:
RABBITMQ_ERLANG_COOKIE: secret-erlang-cookie
RABBITMQ_DEFAULT_USER: taiga
RABBITMQ_DEFAULT_PASS: taiga
RABBITMQ_DEFAULT_VHOST: taiga
volumes:
- taiga-events-rabbitmq-data:/var/lib/rabbitmq
networks:
- taiga
taiga-protected:
image: taigaio/taiga-protected:latest
environment:
MAX_AGE: 360
SECRET_KEY: "taiga-back-secret-key"
networks:
- taiga
taiga-gateway:
image: nginx:1.19-alpine
ports:
- "9000:80"
volumes:
- ./taiga-gateway/taiga.conf:/etc/nginx/conf.d/default.conf
- taiga-static-data:/taiga/static
- taiga-media-data:/taiga/media
networks:
- taiga
depends_on:
- taiga-front
- taiga-back
- taiga-events
volumes:
taiga-static-data:
taiga-media-data:
taiga-db-data:
taiga-async-rabbitmq-data:
taiga-events-rabbitmq-data:
networks:
taiga:
</code></pre>
<p>I have used <code>Kompose</code> to generate my Kubernetes deployment files. All the pods are running except two; however, they show no error apart from this:</p>
<blockquote>
<p>Unable to attach or mount volumes: unmounted
volumes=[taiga-static-data taiga-media-data], unattached
volumes=[kube-api-access-9c74v taiga-gateway-claim0 taiga-static-data
taiga-media-data]: timed out waiting for the condition</p>
</blockquote>
<p>Pod Status</p>
<pre><code>taiga-async-6c7d9dbd7b-btv79 1/1 Running 19 16h
taiga-async-rabbitmq-86979cf759-lvj2m 1/1 Running 0 16h
taiga-back-7bc574768d-hst2v 0/1 ContainerCreating 0 6m34s
taiga-db-59b554854-qdb65 1/1 Running 0 16h
taiga-events-74f494df97-8rpjd 1/1 Running 0 16h
taiga-events-rabbitmq-7f558ddf88-wc2js 1/1 Running 0 16h
taiga-front-6f66c475df-8cmf6 1/1 Running 0 16h
taiga-gateway-77976dc77-w5hp4 0/1 ContainerCreating 0 3m6s
taiga-protected-7794949d49-crgbt 1/1 Running 0 16h
</code></pre>
<p>It is a problem with mounting the volume, I am certain as it seems from an earlier error that <code>taiga-back</code> and <code>taiga-db</code> share a volume.</p>
<p>This is the <code>Kompose</code> file I have.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.service: taiga-gateway
name: taiga-gateway
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: taiga-gateway
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.network/taiga: "true"
io.kompose.service: taiga-gateway
spec:
containers:
- image: nginx:1.19-alpine
name: taiga-gateway
ports:
- containerPort: 80
resources: {}
volumeMounts:
- mountPath: /etc/nginx/conf.d/default.conf
name: taiga-gateway-claim0
- mountPath: /taiga/static
name: taiga-static-data
- mountPath: /taiga/media
name: taiga-media-data
restartPolicy: Always
volumes:
- name: taiga-gateway-claim0
persistentVolumeClaim:
claimName: taiga-gateway-claim0
- name: taiga-static-data
persistentVolumeClaim:
claimName: taiga-static-data
- name: taiga-media-data
persistentVolumeClaim:
claimName: taiga-media-data
status: {}
</code></pre>
<p>Perhaps if I can fix one I can figure out the other pod as well. This is the application
<a href="https://github.com/kaleidos-ventures/taiga-docker" rel="nofollow noreferrer">https://github.com/kaleidos-ventures/taiga-docker</a> . Any pointers are welcome. <code>kubectl describe pod</code> output</p>
<pre><code>Name: taiga-gateway-77976dc77-w5hp4
Namespace: default
Priority: 0
Node: gke-taiga-cluster-default-pool-9e5ed1f4-0hln/10.128.0.18
Start Time: Wed, 13 Apr 2022 05:32:10 +0000
Labels: io.kompose.network/taiga=true
io.kompose.service=taiga-gateway
pod-template-hash=77976dc77
Annotations: kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.26.1 (a9d05d509)
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/taiga-gateway-77976dc77
Containers:
taiga-gateway:
Container ID:
Image: nginx:1.19-alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/etc/nginx/conf.d/default.conf from taiga-gateway-claim0 (rw)
/taiga/media from taiga-media-data (rw)
/taiga/static from taiga-static-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c74v (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
taiga-gateway-claim0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: taiga-gateway-claim0
ReadOnly: false
taiga-static-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: taiga-static-data
ReadOnly: false
taiga-media-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: taiga-media-data
ReadOnly: false
kube-api-access-9c74v:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned default/taiga-gateway-77976dc77-w5hp4 to gke-taiga-cluster-default-pool-9e5ed1f4-0hln
Warning FailedMount 5m49s (x4 over 14m) kubelet Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[taiga-gateway-claim0 taiga-static-data taiga-media-data kube-api-access-9c74v]: timed out waiting for the condition
Warning FailedMount 81s (x3 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[kube-api-access-9c74v taiga-gateway-claim0 taiga-static-data taiga-media-data]: timed out waiting for the condition
</code></pre>
| Abhishek Rai | <pre><code>volumes:
taiga-static-data:
taiga-media-data:
taiga-db-data:
taiga-async-rabbitmq-data:
taiga-events-rabbitmq-data:
</code></pre>
<p>Based on your original docker-compose spec, you can replace <code>persistentVolumeClaim</code> with <code>emptyDir</code>:</p>
<pre><code>volumes:
- name: taiga-gateway-claim0
emptyDir: {}
- name: taiga-static-data
emptyDir: {}
- name: taiga-media-data
emptyDir: {}
</code></pre>
<p>Or, if you want to persist your data (i.e. continue using <code>persistentVolumeClaim</code>), you should create the PVCs first:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-gateway-claim0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-static-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: taiga-media-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
...
</code></pre>
<p>The above spec will dynamically provision 3 persistent volumes for your pod using the default StorageClass on your <strong>GKE</strong> cluster.</p>
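<p>After applying, it may be worth confirming that the claims bind before the pods schedule (the pod name below is a placeholder):</p>
<pre><code>kubectl get pvc
kubectl describe pod <taiga-gateway-pod>   # the events should no longer show FailedMount
</code></pre>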
| gohm'c |
<p>I wish to understand the correlation between the "kubectl top pods" output and the Linux command "top" executed inside a pod.
How do they correlate?</p>
<p>While top shows the current load/usage along with 5-min and 15-min load averages, do we have any such functionality for the kubectl top pods command?</p>
<p>Thanks in advance.</p>
| Vijay Gharge | <p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top" rel="nofollow noreferrer">Kubectl top</a> - Allows you to see the resource consumption for nodes or pods. This command requires Metrics Server to be correctly configured and working on the server.</p>
<p>The <strong>top</strong> command is used to show the Linux processes. It provides a dynamic real-time view of the running system.</p>
<p>If you run top inside the pod, it is as if you ran it on the host system, because the pod is using the kernel of the host system. Unix top uses the proc virtual filesystem and reads the /proc/meminfo file to get information about the current memory status. Containers inside pods partially share /proc with the host system, including the paths that carry memory and CPU information.</p>
<p>For more information refer to <a href="https://www.unixtutorial.org/commands/top" rel="nofollow noreferrer">top command in linux</a> and <a href="https://www.containiq.com/post/kubectl-top-pod-node-for-metrics" rel="nofollow noreferrer">Kubernetes Top pod/node</a> authored by Kasper Siig.</p>
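<p>For example, assuming Metrics Server is installed, the typical invocations are:</p>
<pre><code>kubectl top nodes
kubectl top pods --all-namespaces --containers   # per-container CPU/memory usage
</code></pre>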
| Fariya Rahmat |
<p>In my cluster, my servers have different computing power and bandwidth, so sometimes I want to decide which service replicas run on which node. I know we can choose the placement with the docker service create command, but how do I update it after the service is created and running? In the official docs, the update command only allows changing the number of replicas.</p>
| CharlesC | <p><code>...I want to decide which service replicas running on which node.</code></p>
<p>You can modify a live service's constraints by using <code>--constraint-rm</code> and <code>--constraint-add</code>. The example presumes the node(s) are labeled with a key named "type": <code>docker service update --constraint-rm node.labels.type==small --constraint-add node.labels.type==large my-redis</code>.</p>
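<p>For completeness, the node label assumed above would be set with something like (the node name is a placeholder):</p>
<pre><code>docker node update --label-add type=large <node-name>
</code></pre>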
| gohm'c |
<p>I have a local setup of three raspberry pi's and followed the tutorial on <a href="https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-raspberry-pi#1-overview" rel="nofollow noreferrer">here</a>.
I managed to get my microk8s cluster running. Next I wanted to deploy <a href="https://www.jenkins.io/doc/book/installing/kubernetes/#install-jenkins-with-yaml-files" rel="nofollow noreferrer">Jenkins</a>.</p>
<p>Whenever I execute the first command:</p>
<pre><code>kubectl create -f jenkins-deployment.yaml -n jenkins
</code></pre>
<p>I get the following error:</p>
<pre><code>standard_init_linux.go:219: exec user process caused: exec format error
</code></pre>
<p>Some other searches suggest installing docker. However in the <a href="https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-raspberry-pi#1-overview" rel="nofollow noreferrer">tutorial</a> there is nothing about installing docker. Any ideas what is happening here?</p>
| greedsin | <p><strong>Docker vs. containerd</strong></p>
<p>Regarding your suggestion about the docker.
<a href="https://stackoverflow.com/a/55478145/9929015">From Version 1.14.0 of MicroK8s (released 25 March 2019) containerd replaced dockerd</a>.
Starting from version 1.14.0, containerd automatically ships with the MicroK8s installation, so you don't need dockerd as the CRI.
Below you can find the components MicroK8s sets up during installation.
The <a href="https://tharangarajapaksha.medium.com/start-k8s-with-microk8s-85b67738b557" rel="nofollow noreferrer">following systemd services</a> will be running in your system:</p>
<ul>
<li>snap.microk8s.daemon-apiserver, is the kube-apiserver daemon started using the arguments in <code>${SNAP_DATA}/args/kube-apiserver</code></li>
<li>snap.microk8s.daemon-controller-manager, is the kube-controller-manager daemon started using the arguments in <code>${SNAP_DATA}/args/kube-controller-manager</code></li>
<li>snap.microk8s.daemon-scheduler, is the kube-scheduler daemon started using the arguments in <code>${SNAP_DATA}/args/kube-scheduler</code></li>
<li>snap.microk8s.daemon-kubelet, is the kubelet daemon started using the arguments in <code>${SNAP_DATA}/args/kubelet</code></li>
<li>snap.microk8s.daemon-proxy, is the kube-proxy daemon started using the arguments in <code>${SNAP_DATA}/args/kube-proxy</code></li>
<li><strong>snap.microk8s.daemon-containerd</strong>, is the containerd daemon started using the configuration in <code>${SNAP_DATA}/args/containerd</code> and <code>${SNAP_DATA}/args/containerd-template.toml</code>.</li>
<li>snap.microk8s.daemon-etcd, is the etcd daemon started using the arguments in <code>${SNAP_DATA}/args/etcd</code></li>
</ul>
<hr />
<p><strong>ARM architecture</strong></p>
<p>Next, the Raspberry Pi, which, as mentioned previously by the community, is ARM-based.
You cannot use regular amd64-based images on the ARM architecture.</p>
<p><strong>Possible solutions</strong></p>
<p>To solve a problem, I recommend you 2 options below.</p>
<ol>
<li><strong>Use an already prepared ARM-based image</strong> of <a href="https://hub.docker.com/r/mlucken/jenkins-arm" rel="nofollow noreferrer">Jenkins for the ARM architecture</a>. You can also search for Raspberry Pi images with filters; just select which architecture you would like to use: ARM, ARM64, etc.</li>
</ol>
<p>Some images have been ported for other architectures, and many of these are officially supported (to various degrees).</p>
<p>ARMv6 32-bit (<code>arm32v6</code>): <a href="https://hub.docker.com/u/arm32v6/" rel="nofollow noreferrer">https://hub.docker.com/u/arm32v6/</a></p>
<p>ARMv7 32-bit (<code>arm32v7</code>): <a href="https://hub.docker.com/u/arm32v7/" rel="nofollow noreferrer">https://hub.docker.com/u/arm32v7/</a></p>
<p>ARMv8 64-bit (<code>arm64v8</code>): <a href="https://hub.docker.com/u/arm64v8/" rel="nofollow noreferrer">https://hub.docker.com/u/arm64v8/</a></p>
<ol start="2">
<li><strong>Prepare your own image for ARM using <a href="https://github.com/docker/buildx" rel="nofollow noreferrer">buildx</a></strong> (see the sketch below)</li>
</ol>
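<p>As a rough sketch of option 2 (the repository name and tag are placeholders), a cross-build with buildx typically looks like this:</p>
<pre><code># one-time setup: register QEMU emulators and create a builder instance
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --use
# build and push a multi-arch image for 64-bit and 32-bit ARM
docker buildx build --platform linux/arm64,linux/arm/v7 -t myrepo/jenkins-arm:latest --push .
</code></pre>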
<p>References:</p>
<ul>
<li><a href="https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/" rel="nofollow noreferrer">Multi-arch build and images, the simple way</a></li>
<li><a href="https://carlosedp.medium.com/cross-building-arm64-images-on-docker-desktop-254d1e0bc1f9" rel="nofollow noreferrer">Cross building ARM images on Docker Desktop</a></li>
</ul>
| Andrew Skorkin |
<p>I would like to replicate volume data among multiple nodes for redundancy.</p>
<p>I saw that the CSI drivers support <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" rel="nofollow noreferrer">snapshots</a> but I was looking for something more <a href="https://linuxize.com/post/how-to-use-rsync-for-local-and-remote-data-transfer-and-synchronization/" rel="nofollow noreferrer">rsync</a>.</p>
<p>Any help is greatly appreciated.</p>
| sashok_bg | <p>Having analysed the comments it looks like one of the options would be to deploy <code>rsync</code> as a container.</p>
<p>For example <code>rsync</code> deployments, one can visit the links below (a minimal sketch follows them):</p>
<ul>
<li><a href="https://hub.docker.com/r/steveltn/rsync-deploy/" rel="nofollow noreferrer">rsync-deploy</a></li>
<li><a href="https://medium.com/jaequery/a-simple-docker-deployment-with-only-rsync-and-ssh-b283ad5129d1" rel="nofollow noreferrer">A simple Docker deployment with just rsync and SSH</a></li>
</ul>
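<p>A minimal sketch of that idea, assuming two existing PVCs and any image that ships <code>rsync</code> (all names below are placeholders, not a tested setup):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: volume-sync
spec:
  schedule: "*/15 * * * *"              # replicate every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: rsync
            image: instrumentisto/rsync-ssh   # any image containing rsync would do
            command: ["rsync", "-a", "--delete", "/source/", "/replica/"]
            volumeMounts:
            - name: source
              mountPath: /source
            - name: replica
              mountPath: /replica
          volumes:
          - name: source
            persistentVolumeClaim:
              claimName: source-pvc
          - name: replica
            persistentVolumeClaim:
              claimName: replica-pvc
</code></pre>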
| Jakub Siemaszko |
<p>How can I check the timestamp when a Kubernetes pod was added to or removed from a service (endpointslice)?
Maybe kubernetes leaves a log or event of it, but I can't find any documents regarding it.</p>
<p>My deployment replicas scale all the time, but I don't see any events regarding service or endpointslices.</p>
| rigophil | <p>You can use the <code>kubectl get events</code> command to check the timestamps for the Kubernetes pod. This will give you firstTimestamp and lastTimestamp for each event. If you want to check the event timestamps, you can request the output in yaml/json format by running the command below:</p>
<pre><code>kubectl get events -o yaml
</code></pre>
<p>You can also use a combination of custom columns and a field selector by running the command below:</p>
<pre><code>$ kubectl get events \
  -o custom-columns=FirstSeen:.firstTimestamp,LastSeen:.lastTimestamp,Count:.count,From:.source.component,Type:.type,Reason:.reason,Message:.message \
  --field-selector involvedObject.kind=Pod,involvedObject.name=my-pod
</code></pre>
| Fariya Rahmat |
<p>I'm preparing all the Ingress manifest files to keep the latest apiVersion (<strong>networking.k8s.io/v1</strong>) to upgrade my cluster from 1.19 to 1.22.</p>
<p>I'm deleting the previous Ingress rule and then recreating:</p>
<pre><code>k delete ingress/my-ingress
k create -f /tmp/ingress.yaml
</code></pre>
<p>Unfortunately, the Ingress is created but with apiVersion <strong>extensions/v1beta1</strong>, which is different from what I have in my manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>$ k get ingress/my-ingress -o yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
creationTimestamp: "2021-08-11T19:42:08Z"
</code></pre>
<p>Here is an example of the YAML I'm using:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
labels:
app.kubernetes.io/instance: my-app
app.kubernetes.io/name: my-app
name: my-ingress
namespace: default
spec:
rules:
- host: application.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: my-app
port:
number: 443
</code></pre>
<p>Kubernetes version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:09:25Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.13-eks-8df270", GitCommit:"8df2700a72a2598fa3a67c05126fa158fd839620", GitTreeState:"clean", BuildDate:"2021-07-31T01:36:57Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Ingress controller version (I upgraded from 0.41 to avoid any kind of issues):</p>
<pre><code>Image: k8s.gcr.io/ingress-nginx/controller:v0.48.1@sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899
</code></pre>
| fpaganetto | <p><a href="https://github.com/kubernetes/kubernetes/issues/94761" rel="nofollow noreferrer">This is working as expected</a>, in particular check <a href="https://github.com/kubernetes/kubernetes/issues/94761#issuecomment-691982480" rel="nofollow noreferrer">github answer</a></p>
<p>When you create an ingress object, it can be read via any version - the server handles converting into the requested version.
In your request <code>get ingress/my-ingress -o yaml</code> you not specified version, which should be read. In such case kubectl searches documents returned by the server to find the first among them with requested resource. And it can be any version, as in your case.</p>
<p>That is why, if you want to check particular version, you can:</p>
<ol>
<li>Refine your request by adding your manifest file, since the version is specified in the file:</li>
</ol>
<pre><code> $ kubectl get -f ingress.yaml -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
...
</code></pre>
<ol start="2">
<li>Another option is to specify the desired version explicitly in the get request:</li>
</ol>
<pre><code> $ kubectl get ingresses.v1.networking.k8s.io
NAME CLASS HOSTS ADDRESS PORTS AGE
my-ingress <none> application.com 80 12m
$ kubectl get ingresses.v1beta1.networking.k8s.io
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
my-ingress <none> application.com 80 13m
</code></pre>
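<p>If you also want to rewrite your stored manifests to the new API, and you have the <code>kubectl convert</code> plugin installed, a possible approach is:</p>
<pre><code># convert an old manifest to the networking.k8s.io/v1 schema and re-apply it
kubectl convert -f old-ingress.yaml --output-version networking.k8s.io/v1 > ingress-v1.yaml
kubectl apply -f ingress-v1.yaml
</code></pre>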
| Andrew Skorkin |
<p>We are using a self-hosted microk8s cluster (single-node for now) for our internal staging workloads. From time to time, the server becomes unresponsive and I can't even ssh into it. The only way out is a restart.</p>
<p>I can see that before the server crashes, its memory usage goes to the limit and the CPU load shoots up to over 1000. So running out of resources is likely to blame.</p>
<p>That brings me to the question - <strong>how can I set global limits for microk8s to not consume <em>everything</em>?</strong></p>
<hr />
<p>I know there are resource limits that can be assigned to Kubernetes pods, and ResourceQuotas to limit aggregate namespace resources. But that has the downside of low resource utilization (if I understand those right). For simplicity, let's say:</p>
<ul>
<li>each pod is the same</li>
<li>its real memory needs can go from <code>50 MiB</code> to <code>500 MiB</code></li>
<li>each pod is running in its own namespace</li>
<li>there are 30 pods</li>
<li>the server has 8 GiB of RAM</li>
</ul>
<ol>
<li><p>I assign <code>request: 50 Mi</code> and <code>limit: 500 Mi</code> to the pod. As long as the node has at least <code>50 * 30 Mi = 1500 Mi</code> of memory, it should run all the requested pods. But there is nothing stopping all of the pods using <code>450 Mi</code> of memory each, which is under the individual limits, but still in total being <code>450 Mi * 30 = 13500 Mi</code>, which is more than the server can handle. And I suspect this is what leads to the server crash in my case.</p>
</li>
<li><p>I assign <code>request: 500 Mi</code> and <code>limit: 500 Mi</code> to the pod to ensure the total memory usage never goes above what I anticipate. This will of course allow me to only schedule 16 pods. But when the pods run with no real load and using just <code>50 Mi</code> of memory, there is severe RAM underutilization.</p>
</li>
<li><p>I am looking for a third option. Something to let me schedule pods freely and <em>only</em> start evicting/killing them when the total memory usage goes above a certain limit. And that limit needs to be configurable and lower than the total memory of the server, so that it does not die.</p>
</li>
</ol>
<hr />
<p>We are using microk8s but I expect this is a problem all self-hosted nodes face, as well as something AWS/Google/Azure have to deal with too.</p>
<p>Thanks</p>
| Martin Melka | <p>Since microk8s runs directly on the host machine, all resources of the host are allocated to it. That is why, if you want to keep your cluster's resource usage within bounds, you have to manage it in one of the ways below (a combined YAML sketch follows the list):</p>
<ol>
<li>Setup <a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">LimitRange</a> policy for pods in a namespace.</li>
</ol>
<blockquote>
<p>A <em>LimitRange</em> provides constraints that can:</p>
<ul>
<li>Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.</li>
<li>Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.</li>
<li>Enforce a ratio between request and limit for a resource in a namespace.</li>
<li>Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.</li>
</ul>
</blockquote>
<ol start="2">
<li>Use <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#compute-resource-quota" rel="nofollow noreferrer">Resource Quotas</a> per namespace.</li>
</ol>
<blockquote>
<p>A resource quota, defined by a <em>ResourceQuota</em> object, provides
constraints that limit aggregate resource consumption per namespace.
It can limit the quantity of objects that can be created in a
namespace by type, as well as the total amount of compute resources
that may be consumed by resources in that namespace.</p>
</blockquote>
<ol start="3">
<li>Assign necessary <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">requests and limits</a> for each pod.</li>
</ol>
<blockquote>
<p>When you specify the resource <em>request</em> for Containers in a Pod, the
scheduler uses this information to decide which node to place the Pod
on. When you specify a resource <em>limit</em> for a Container, the kubelet
enforces those limits so that the running container is not allowed to
use more of that resource than the limit you set. The kubelet also
reserves at least the <em>request</em> amount of that system resource
specifically for that container to use.</p>
</blockquote>
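<p>For options 1 and 2 above, a minimal combined sketch (the namespace name and the sizes are assumptions, not recommendations) that caps total memory per namespace and injects a default request/limit into every container:</p>
<pre><code># cap the aggregate memory the namespace may request/consume
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota
  namespace: staging
spec:
  hard:
    requests.memory: 4Gi
    limits.memory: 6Gi
---
# give every container a default request/limit if it does not set one
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-defaults
  namespace: staging
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: 50Mi
    default:
      memory: 500Mi
</code></pre>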
| Andrew Skorkin |
<p>I see these errors in the kubelet.log on my worker nodes</p>
<pre><code>Feb 11 19:39:41 my-node-ip kubelet[4358]: I0711 19:39:41.666680 4358 log.go:198] http: TLS handshake error from 172.16.4.71:58965: EOF
Feb 11 19:39:42 my-node-ip kubelet[4358]: I0711 19:39:42.121386 4358 log.go:198] http: TLS handshake error from 172.16.4.40:21053: EOF
Feb 11 19:39:45 my-node-ip kubelet[4358]: I0711 19:39:45.001122 4358 log.go:198] http: TLS handshake error from 172.16.4.71:36777: EOF
Feb 11 19:39:45 my-node-ip kubelet[4358]: I0711 19:39:45.455301 4358 log.go:198] http: TLS handshake error from 172.16.4.40:31905: EOF
Feb 11 19:39:48 my-node-ip kubelet[4358]: I0711 19:39:48.333620 4358 log.go:198] http: TLS handshake error from 172.16.4.71:1877: EOF
</code></pre>
<ul>
<li>What does the error "http: TLS handshake error from 172.16.4.71:58965: EOF" indicate?</li>
<li>What could be these ips <code>172.16.4.71</code> and <code>172.16.4.40</code> and port correspond to?</li>
</ul>
<p>What should I understand by this error message?</p>
| Senthil Kumaran | <p>As mentioned in this <a href="https://aboutssl.org/fix-ssl-tls-handshake-failed-error/" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>Whenever an SSL/TLS Handshake fails, it’s mostly due to certain things going on with the server, website, and the configuration of its installed SSL/TLS. Presently the culprit is TLS configuration as support for SSL 3.0 is deprecated. However, there’s a distinct possibility that a client-side error can be the reason behind the SSL/TLS Handshake Failed error. And, some of the common ones are like incorrect system time or browser updates.</p>
</blockquote>
<p>Also this is an known issue as mentioned in the github <a href="https://github.com/open-policy-agent/gatekeeper/issues/2142" rel="nofollow noreferrer">link</a></p>
<blockquote>
<p>EOF errors seem to be related to a <a href="https://github.com/golang/go/issues/50984" rel="nofollow noreferrer">Go bug</a> and appear on kubernetes 1.22, 1.23 and 1.24. This is not affecting any functional issues and these are generated from core kubernetes. We need to wait until a fix is suggested by kubernetes.</p>
</blockquote>
<p>You can also try this workaround to suppress the TLS handshake errors by following this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#server-side-https-enforcement-through-redirect" rel="nofollow noreferrer">document</a>.</p>
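<p>To answer the second part of the question, you can check whether those source addresses belong to pods or nodes in your cluster (they are often the API server, a load balancer health check or a monitoring agent probing the kubelet port), for example:</p>
<pre><code># check whether the source IPs match any pod or node in the cluster
kubectl get pods -A -o wide | grep -E '172\.16\.4\.(71|40)'
kubectl get nodes -o wide | grep -E '172\.16\.4\.(71|40)'
</code></pre>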
| Fariya Rahmat |
<p>I have the below questions related to the Ingress resource in Kubernetes:</p>
<ol>
<li>Can a single Ingress controller (ex: NginxIngress Controller) be mapped to multiple Ingress resources?</li>
<li>If the Ingress resources are mapped to a single namespace, how are requests routed in the case of multiple Ingress resources?</li>
<li>Is the Ingress resource mapped to unique hostname?</li>
<li>Is the ingress controller (ex: Nginx Ingress controller) bound to a namespace or is it a cluster level resource?</li>
</ol>
| zilcuanu | <ol>
<li>Yes, it's possible, you can have a look here: <a href="https://stackoverflow.com/questions/65171303/is-it-possible-to-have-multiple-ingress-resources-with-a-single-gke-ingress-cont">Is it possible to have multiple ingress resources with a single GKE ingress controller</a></li>
<li>Assuming that by Ingress resources you mean Ingress rules:</li>
</ol>
<blockquote>
<p>If you create an Ingress resource without any hosts defined in the
rules, then any web traffic to the IP address of your Ingress
controller can be matched without a name based virtual host being
required.</p>
<p>For example, the following Ingress routes traffic requested for
first.bar.com to service1, second.bar.com to service2, and any traffic
to the IP address without a hostname defined in request (that is,
without a request header being presented) to service3.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting" rel="nofollow noreferrer">Name based virtual hosting</a></p>
<p>3.</p>
<blockquote>
<p>An optional host. In this example, no host is specified, so the rule
applies to all inbound HTTP traffic through the IP address specified.
If a host is provided (for example, foo.bar.com), the rules apply to
that host.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules" rel="nofollow noreferrer">Ingress rules</a></p>
<p>4.</p>
<blockquote>
<p>Parameters field has a scope and namespace field that can be used to
reference a namespace-specific resource for configuration of an
Ingress class. Scope field defaults to Cluster, meaning, the default
is cluster-scoped resource. Setting Scope to Namespace and setting the
Namespace field will reference a parameters resource in a specific
namespace:</p>
<p>Namespace-scoped parameters avoid the need for a cluster-scoped
CustomResourceDefinition for a parameters resource. This further
avoids RBAC-related resources that would otherwise be required to
grant permissions to cluster-scoped resources.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#namespace-scoped-parameters" rel="nofollow noreferrer">Namespace-scoped parameters</a></p>
| Jakub Siemaszko |
<p>As far as I understand there are two or more helm repos with nginx-ingress.</p>
<p>nginx-stable > <a href="https://helm.nginx.com/stable" rel="nofollow noreferrer">https://helm.nginx.com/stable</a><br />
ingress-nginx > <a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx</a></p>
<p>Firstly I installed from nginx-stable, but that installation uses self-signed certs by default. When I tried to investigate this, I found that the official tutorial <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a> (whose address is very similar to the ingress-nginx helm repo) points to another repo, <a href="https://helm.nginx.com/stable" rel="nofollow noreferrer">https://helm.nginx.com/stable</a>.
When I tried to generate helm templates from both of these repos, I found that the results are different. Could anyone explain why there are two repos and what distinguishes them?</p>
| kaetana | <p><code>...there are two or more helm repos with nginx-ingress</code></p>
<p>There is only one <a href="https://github.com/kubernetes/ingress-nginx/" rel="nofollow noreferrer">ingress-nginx</a> project. The helm charts referred in your question are actually 2 different projects. <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">ingress-nginx</a> is managed by k8s community and <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">kubernetes-ingress</a> is managed by Nginx (F5). Here's a <a href="https://www.nginx.com/blog/guide-to-choosing-ingress-controller-part-4-nginx-ingress-controller-options/#NGINX-vs.-Kubernetes-Community-Ingress-Controller" rel="nofollow noreferrer">guide</a> about their differences.</p>
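<p>In practice that means the two charts come from two different repositories (the repo aliases below are arbitrary):</p>
<pre><code># community-maintained ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# F5/NGINX-maintained kubernetes-ingress
helm repo add nginx-stable https://helm.nginx.com/stable
helm search repo ingress
</code></pre>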
| gohm'c |
<p>I'm setting up a Kubernetes cluster with windows nodes. I accidentally created a local image in the default namespace.</p>
<p>As shown by <code>ctr image ls</code>, my image is in the <strong>default namespace</strong>:</p>
<pre><code>REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/myimage:latest application/vnd.docker.distribution.manifest.v2+json sha256:XXX 6.3 GiB windows/amd64 -
</code></pre>
<p>Therefore, Kubernetes <strong>cannot find the image while creating the pod</strong> (<code>ErrImageNeverPull</code>, <code>imagePullPolicy</code> is set to <code>Never</code>). The reason for this is, the image isn't in the right <strong>namespace k8s.io</strong>:<br />
The command <code>ctr --namespace k8s.io image ls</code> shows the base Kubernetes images:</p>
<pre><code>REF TYPE DIGEST SIZE PLATFORMS LABELS
mcr.microsoft.com/oss/kubernetes/pause:3.6 application/vnd.docker.distribution.manifest.list.v2+json sha256:XXX 3.9 KiB linux/amd64,linux/arm64,windows/amd64 io.cri-containerd.image=managed
mcr.microsoft.com/oss/kubernetes/pause@sha256:DIGEST application/vnd.docker.distribution.manifest.list.v2+json sha256:XXX 3.9 KiB linux/amd64,linux/arm64,windows/amd64 io.cri-containerd.image=managed
...
</code></pre>
<p>The most straight-forward approach I tried, was exporting the image, deleting the image, and importing the image with different namespace. (as mentioned on a <a href="https://github.com/kubernetes-sigs/cri-tools/issues/546#issuecomment-646909445" rel="nofollow noreferrer">Github comment in the cri-tools project</a>)</p>
<pre><code>ctr --namespace k8s.io image import --base-name foo/myimage container_import.tar
</code></pre>
<p>It works. But I wonder, if there is any shorter (less time consuming) way than re-importing the image.
(Maybe by running a simple command or changing a text file.)</p>
<hr />
<p>To clarify my question: I have one node with a container stored in namespace "default". I want to have the same container stored in namespace "k8s.io" on the same node.
What else can I do, instead of running the following two (slow) commands?</p>
<pre><code>ctr -n default image export my-image.tar my-image
ctr -n k8s.io image import my-image.tar
</code></pre>
<p>I assume there is a faster way of changing the namespace, since it is just editing some metadata.</p>
| Tim Wißmann | <p>As @ P Ekambaram suggested, the <code>podman save</code> and <code>podman load</code> commands let you share images across multiple servers and systems when they aren't available locally or remotely.</p>
<p>You can use <a href="https://www.redhat.com/sysadmin/share-container-image-podman-save#:%7E:text=The%20%60podman%20save%60%20and%20%60,t%20available%20locally%20or%20remotely.&text=Container%20images%20are%20the%20foundations%20that%20containers%20run%20on." rel="nofollow noreferrer">Podman</a> to manage images and containers.</p>
<p>The podman save command saves an image to an archive, making it available to be loaded on another server.</p>
<p>For instance, to save a group of images on a host named servera:</p>
<pre><code>[servera]$ podman save --output images.tar \
docker.io/library/redis \
docker.io/library/mysql \
registry.access.redhat.com/ubi8/ubi \
registry.access.redhat.com/ubi8/ubi:8.5-226.1645809065 \
quay.io/centos7/mysql-80-centos7 docker.io/library/nginx
</code></pre>
<p>Once complete, you can take the file images.tar to serverb and load it with podman load:</p>
<pre><code>[serverb]$ podman load --input images.tar
</code></pre>
<p>The newly released <a href="https://www.redhat.com/sysadmin/podman-transfer-container-images-without-registry" rel="nofollow noreferrer">Podman 4.0</a> includes the new <code>podman image scp</code> command, a useful command to help you manage and transfer container images.</p>
<p>With Podman's podman image scp, you can transfer images between local and remote machines without requiring an image registry.</p>
<p>Podman takes advantage of its SSH support to copy images between machines, and it also allows for local transfer. Registryless image transfer is useful in a couple of key scenarios:</p>
<ul>
<li>Doing a local transfer between users on one system</li>
<li>Sharing images over the network</li>
</ul>
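<p>For example, a hypothetical registryless transfer of a single image over SSH (the user and host names are placeholders, and the exact syntax may vary between Podman versions) could look like:</p>
<pre><code># push the local image to serverb over SSH, no registry involved
podman image scp docker.io/library/myimage:latest user@serverb::
</code></pre>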
| Fariya Rahmat |
<p>Where does the NGINX ingress controller stores temporary files?</p>
<p>This is the message I receive and I am pretty sure it is storing the file on a volume attached to one of my pods:</p>
<pre><code>2021/09/27 20:33:23 [warn] 33#33: *26 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000002, client: 10.42.1.0, server: _, request: "POST /api/adm/os/image HTTP/1.1", host: "vzredfish.cic.shidevops.com", referrer: "https://vzredfish.cic.shidevops.com/setting"
</code></pre>
<p>But when I go into the location <code>/var/cache/nginx/client_temp</code> there is nothing.</p>
<p>I checked on the ingress controller pods too and there is nothing there either.</p>
<p>I would like to know how to troubleshoot the issue we have. I'm trying to upload a file directly to the pod memory but instead it uploads it to a temporary location first.</p>
<p>Thanks for the help.</p>
<p>Danilo</p>
| Danilo Patrucco | <p>Answering your question indirectly: there seem to be some ways to skip proxy buffering and achieve your goal of uploading a file directly to the pod's memory. I've found an interesting article <a href="https://www.getpagespeed.com/server-setup/nginx/tuning-proxy_buffer_size-in-nginx" rel="nofollow noreferrer">here</a>; have a look at the <em>Disable proxy buffering</em> section.</p>
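<p>With the community ingress-nginx controller this is usually done per Ingress with annotations, for example (the values are assumptions to be tuned for your workload):</p>
<pre><code>metadata:
  annotations:
    # stream the request body to the upstream pod instead of buffering it to a temporary file first
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    # allow large uploads (0 disables the size check)
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
</code></pre>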
| Jakub Siemaszko |
<p>Running the <code>minikube start</code> command, I am getting this message:<br />
This container is having trouble accessing <a href="https://registry.k8s.io" rel="nofollow noreferrer">https://registry.k8s.io</a></p>
<p>and after this the Booting up control plane process takes a long time then gives the following error:</p>
<p>Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1</p>
<p>I have the right <code>minikube</code>, <code>kubectl</code> , <code>docker</code> ... versions.<br />
<code>$ echo $(minikube docker-env)</code> this command outputs the following error:</p>
<p>Exiting due to GUEST_STATUS: state: unknown state "minikube": docker container inspect minikube --format=: exit status 1</p>
<p>stderr:</p>
<p>Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/minikube/json": dial unix /var/run/docker.sock: connect: permission denied</p>
<p>But what i don't understand, if I run the <code>docker run hello-world</code> , it works (I have the super user permission)</p>
| Oliver Domokos | <p>Try running the below commands:</p>
<p>Remove unused data:</p>
<pre><code>docker system prune
</code></pre>
<p>Clear minikube's local state:</p>
<pre><code>minikube delete
</code></pre>
<p>Start the cluster:</p>
<pre><code>minikube start --driver=<driver_name>
</code></pre>
<p>(In your case driver name is docker as per minikube profile list info shared by you)</p>
<p>Check the cluster status:</p>
<pre><code>minikube status
</code></pre>
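<p>Separately, the "permission denied" error on <code>/var/run/docker.sock</code> usually means your user is not in the <code>docker</code> group (running <code>docker</code> with sudo hides this). A common fix, which takes effect after re-login or <code>newgrp</code>:</p>
<pre><code>sudo usermod -aG docker $USER
newgrp docker
</code></pre>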
<p>Also refer to this <a href="https://github.com/kubernetes/minikube/issues/11530" rel="nofollow noreferrer">Github</a> link.</p>
| Fariya Rahmat |
<p>The <code>help</code> of the <code>kubectl port-forward</code> says <code>The forwarding session ends when the selected pod terminates, and rerun of the command is needed to resume forwarding.</code></p>
<p>Although it does not auto-reconnect when the pod terminates, the command does not return either and just hangs with errors:</p>
<pre><code>E0929 11:57:50.187945 62466 portforward.go:400] an error occurred forwarding 8000 -> 8080: error forwarding port 8080 to pod a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249, uid : network namespace for sandbox "a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249" is closed
Handling connection for 8000
E0929 12:02:44.505938 62466 portforward.go:400] an error occurred forwarding 8000 -> 8080: error forwarding port 8080 to pod a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249, uid : failed to find sandbox "a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249" in store: not found
</code></pre>
<p>I would like it to return so that I can handle this error and make the script that will rerun it.</p>
<p>Is there any way or workaround for how to do it?</p>
| Sergii Bishyr | <p>Based on the information described on the <a href="https://github.com/kubernetes/kubernetes/issues/67059" rel="nofollow noreferrer">Kubernetes issues page on GitHub</a>, I can suppose that this is normal behavior for your case: the port-forward connection cannot be canceled on pod deletion, since there is no connection management inside the REST connectors on the server side.</p>
<blockquote>
<p>A connection being maintained from kubectl all the way through to the kubelet hanging open even if the pod doesn't exist.</p>
</blockquote>
<blockquote>
<p>We'll proxy a websocket connection kubectl->kubeapiserver->kubelet on port-forward.</p>
</blockquote>
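<p>As a workaround you can wrap the command in a small watchdog script that probes the forwarded port and restarts the forward when it stops answering. A minimal sketch (it assumes the forwarded service answers plain HTTP on <code>/</code>; adjust the target and the probe for your application):</p>
<pre><code>#!/usr/bin/env bash
while true; do
  kubectl port-forward deployment/my-app 8000:8080 &
  pf_pid=$!
  sleep 5
  # keep probing the local port; once it stops answering, kill the forward and restart it
  while curl -s --max-time 2 -o /dev/null http://localhost:8000/; do
    sleep 5
  done
  kill "$pf_pid" 2>/dev/null
  wait "$pf_pid" 2>/dev/null
  echo "port-forward lost, restarting..." >&2
done
</code></pre>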
| Andrew Skorkin |
<p>We have several resources deployed as part of a helm (v3) chart. Some time ago, I made changes to resources deployed by that helm chart manually, via <code>kubectl</code>. This caused some drift between the values in the yaml resources deployed by the helm release (as show by <code>helm get values <release></code>) and what is actually deployed in the cluster</p>
<p>Example: <code>kubectl describe deployment <deployment></code> shows an updated image that was manually applied via a <code>kubectl re-apply</code>. Whereas <code>helm show values <release></code> shows the original image used by helm for said deployment.</p>
<p>I realize that I should have performed a <code>helm upgrade</code> with a modified values.yaml file to execute the image change, but I am wondering if there is a way for me to sync the state of the values I manually updated with the values in the helm release. The goal is to create a new default <code>values.yaml</code> that reflect the current state of the cluster resources.</p>
<p>Thanks!</p>
| Andy | <p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>According to the <a href="https://github.com/helm/helm/issues/2730" rel="nofollow noreferrer">Helm issue 2730</a> this feature will not be added in the Helm, as it is outside of the scope of the project.</p>
<p>It looks like there is no existing tool right from the Helm, that would help to port/adapt the life kubernetes resource back into existing or new helm charts/releases.</p>
<p>Based on this, you can use one of the following options:</p>
<ol>
<li>As suggested by @David Maze. The <a href="https://github.com/databus23/helm-diff" rel="nofollow noreferrer">Helm Diff Plugin</a> will show you the difference between the chart output and the cluster, but then you need to manually update values.yaml and templates (see the command sketch after this list).</li>
<li>The <a href="https://github.com/HamzaZo/helm-adopt" rel="nofollow noreferrer">helm-adopt plugin</a> is a helm plugin to adopt existing k8s resources into a new generated helm chart.</li>
</ol>
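<p>For the first option, a minimal command sketch (the release and chart names are placeholders):</p>
<pre><code>helm plugin install https://github.com/databus23/helm-diff
# compare what the chart plus your updated values.yaml would produce with what is currently deployed
helm diff upgrade my-release ./my-chart -f values.yaml
</code></pre>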
| Andrew Skorkin |
<p>I am running EFK using ECK 8.5.3. fluentd <code>ConfigMap</code>:</p>
<pre><code> @type geoip
# Specify one or more geoip lookup field which has ip address (default: host)
geoip_lookup_keys IP
# Specify optional geoip database (using bundled GeoLiteCity databse by default)
# geoip_database "/path/to/your/GeoIPCity.dat"
# Specify optional geoip2 database
# geoip2_database "/path/to/your/GeoLite2-City.mmdb" (using bundled GeoLite2-City.mmdb by default)
# Specify backend library (geoip2_c, geoip, geoip2_compat)
backend_library geoip2_c
# Set adding field with placeholder (more than one settings are required.)
<record>
city ${city.names.en["IP"]}
latitude ${location.latitude["IP"]}
longitude ${location.longitude["IP"]}
country_code ${country.iso_code["IP"]}
country_name ${country.names.en["IP"]}
postal_code ${postal.code["IP"]}
location_properties '{ "lat" : ${location.latitude["IP"]}, "lon" : ${location.longitude["IP"]} }'
location_string ${location.latitude["IP"]},${location.longitude["IP"]}
location_array '[${location.longitude["IP"]},${location.latitude["IP"]}]'
</record>
</code></pre>
<p>ES template:</p>
<pre><code> "mappings": {
"properties": {
"location_properties": { "type": "geo_point" },
"location_string": { "type": "geo_point" },
"location_array": { "type": "geo_point" }
}
}
</code></pre>
<p>I don't see any of these properties in Kibana (ECK 8.5.3) at all. What am I missing?</p>
| Kok How Teh | <p>The issue can be fixed by using a JSON format string for the geo_point field.</p>
<p>As mentioned in the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/geo-point.html" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>As with geo_shape and point, geo_point can be specified in GeoJSON and
Well-Known Text formats. However, there are a number of additional
formats that are supported for convenience and historical reasons. In
total there are six ways that a geopoint may be specified.</p>
</blockquote>
<p>You can also refer to this <a href="https://stackoverflow.com/questions/33132731/missing-geo-point-fields-in-elasticsearch-response">stack post</a> for more information.</p>
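<p>For reference, once indexed, a document using the three mappings above would be expected to look roughly like this (the values are made up); if the fields arrive as plain strings that do not match one of the accepted <code>geo_point</code> formats, Kibana will not treat them as geo data:</p>
<pre><code>{
  "location_properties": { "lat": 41.12, "lon": -71.34 },
  "location_string": "41.12,-71.34",
  "location_array": [ -71.34, 41.12 ]
}
</code></pre>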
| Fariya Rahmat |
<p>I was passing one of the sample tests for CKA and one question says this:</p>
<p>"Configure a LivenessProbe which simply runs <code>true</code>"</p>
<p>This is while creating simple nginx pod(s) in the general question, then they ask that as one of the items. What does that mean and how to do it?</p>
| wti | <p><code>...Configure a LivenessProbe which simply runs true...while creating simple nginx pod...</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- image: nginx:alpine
name: nginx
ports:
- containerPort: 80
livenessProbe:
exec:
command: ["true"]
</code></pre>
<p><code>true</code> is a command that returns zero. In this case it means the probe simply returns no error. Alternatively, you can probe nginx with: <code>command: ["ash","-c","nc -z localhost 80"]</code>.</p>
| gohm'c |
<p>I'm trying to exec a command into a running pod. I'm using go K8sclient to achieve this but facing a issue. I also don't know if solution is correct or not. Can anyone please check and provide correct solution?</p>
<p>This is my code.</p>
<pre><code> namespace := getNamespace()
podName := "maxscale-0"
config, err := rest.InClusterConfig()
if err != nil {
log.Fatal(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
log.Fatal(err)
}
req := clientset.CoreV1().Pods(namespace).Exec(podName, &corev1.PodExecOptions{
Command: []string{"sh", "-c", "grep -oP '\"name\": \"\\K[^\"]*' /var/lib/maxscale/MariaDB-Monitor_journal.json"},
})
// Set up a stream to capture the output
execStream, err := req.Stream()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
// Print the output
buf := new(bytes.Buffer)
buf.ReadFrom(execStream)
fmt.Println(buf.String())
</code></pre>
<p>The error I got is</p>
<pre><code>clientset.CoreV1().Pods(namespace).Exec undefined (type "k8s.io/client-go/kubernetes/typed/core/v1".PodInterface has no field or method Exec)
</code></pre>
| Dhanu | <p>As @David Maze shared, to use the Kubernetes Go client to exec a command in a pod, follow the code below:</p>
<pre><code>import (
"io"
v1 "k8s.io/api/core/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
restclient "k8s.io/client-go/rest"
"k8s.io/client-go/tools/remotecommand"
)
// ExecCmd exec command on specific pod and wait the command's output.
func ExecCmdExample(client kubernetes.Interface, config *restclient.Config, podName string,
command string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error {
cmd := []string{
"sh",
"-c",
command,
}
req := client.CoreV1().RESTClient().Post().Resource("pods").Name(podName).
Namespace("default").SubResource("exec")
option := &v1.PodExecOptions{
Command: cmd,
Stdin: true,
Stdout: true,
Stderr: true,
TTY: true,
}
if stdin == nil {
option.Stdin = false
}
req.VersionedParams(
option,
scheme.ParameterCodec,
)
exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
if err != nil {
return err
}
err = exec.Stream(remotecommand.StreamOptions{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
})
if err != nil {
return err
}
return nil
}
</code></pre>
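<p>A hypothetical way to call this helper from the snippet in the question (the pod name and command are taken from there; error handling is minimal and the <code>bytes</code>, <code>fmt</code>, <code>log</code>, <code>rest</code> and <code>kubernetes</code> imports are assumed):</p>
<pre><code>// build the in-cluster config and clientset, then run the grep inside the pod
config, err := rest.InClusterConfig()
if err != nil {
    log.Fatal(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
    log.Fatal(err)
}
var stdout, stderr bytes.Buffer
cmd := `grep -oP '"name": "\K[^"]*' /var/lib/maxscale/MariaDB-Monitor_journal.json`
if err := ExecCmdExample(clientset, config, "maxscale-0", cmd, nil, &stdout, &stderr); err != nil {
    log.Fatal(err)
}
fmt.Println(stdout.String())
</code></pre>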
<p>Also refer to this <a href="https://github.com/kubernetes/client-go/issues/912" rel="nofollow noreferrer">link</a> for more information.</p>
| Fariya Rahmat |
<p>When running 'kubectl top nodes' I get this error:</p>
<p>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)</p>
<pre><code>k8s version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
[root@manager ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
[root@manager ~]# kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-km7jc 1/1 Running 0 2d21h
kube-system coredns-86c58d9df4-vltm9 1/1 Running 0 2d21h
kube-system etcd-manager 1/1 Running 0 2d23h
kube-system kube-apiserver-manager 1/1 Running 0 5h47m
kube-system kube-controller-manager-manager 1/1 Running 1 2d23h
kube-system kube-flannel-ds-amd64-5g8w8 1/1 Running 0 2d23h
kube-system kube-flannel-ds-amd64-69lcm 1/1 Running 0 2d23h
kube-system kube-flannel-ds-amd64-9hx2f 1/1 Running 0 2d23h
kube-system kube-proxy-9s6zm 1/1 Running 0 2d23h
kube-system kube-proxy-k4qwz 1/1 Running 0 2d23h
kube-system kube-proxy-wnzgd 1/1 Running 0 2d23h
kube-system kube-scheduler-manager 1/1 Running 1 2d23h
kube-system kubernetes-dashboard-79ff88449c-7fpw6 1/1 Running 0 2d23h
kube-system metrics-server-68d85f76bb-pj8bs 1/1 Running 0 111m
kube-system tiller-deploy-5478b6c547-bf82v 1/1 Running 0 4h7m
[root@manager ~]# kubectl logs -f -n kube-system metrics-server-68d85f76bb-pj8bs
I1217 06:42:43.451782 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
[restful] 2018/12/17 06:42:44 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi
[restful] 2018/12/17 06:42:44 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/
I1217 06:42:44.099720 1 serve.go:96] Serving securely on [::]:443
</code></pre>
<p>And there is no system error log.
How can I resolve this problem?</p>
<p>OS is: CentOS Linux release 7.5.1804 (Core)</p>
| 大傻瓜 | <p>I solved this problem by adding <code>hostNetwork: true</code> to metrics-server.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
hostNetwork: true ## add
</code></pre>
<p>docs:</p>
<pre class="lang-sh prettyprint-override"><code>[root@xx yaml]# kubectl explain deployment.spec.template.spec.hostNetwork
KIND: Deployment
VERSION: apps/v1
FIELD: hostNetwork <boolean>
DESCRIPTION:
Host networking requested for this pod. Use the host's network namespace.
If this option is set, the ports that will be used must be specified.
Default to false.
</code></pre>
<p>Background:<br>
metrics-server was running successfully, but <code>kubectl top nodes</code> returned: <code>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)</code>.</p>
| ChenDehua |