Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I am currently trying to identify the required AWS VPC and Subnet structure for EKS.</p>
<p>However, I would like to check whether I can deploy the UI containers (pods) and the Spring Boot based API Gateway pods in a public subnet and deploy all other back-end service pods in a private subnet.</p>
<p>Is it possible to create such Kubernetes YAML deployment configuration?</p>
| Avinash | <p>You deploy the UI containers (pods) and the Spring Boot based API Gateway pods on <code>nodes that run in the public subnet</code>, and all other back-end service pods on <code>nodes that run in the private subnet</code>. Nodes are really just EC2 instances running the kubelet that have joined your EKS cluster. Typically, you use <code>nodeSelector</code> or <code>affinity</code> to direct which nodes your pods run on.</p>
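<p>For illustration, a minimal sketch of how such placement could look, assuming the public-subnet node group carries a (hypothetical) label <code>subnet-type=public</code>; all names and the image below are made up:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway                 # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      # Schedule these pods only onto nodes labelled as public-subnet nodes.
      nodeSelector:
        subnet-type: public         # assumed label on the public-subnet node group
      containers:
      - name: api-gateway
        image: example.registry/api-gateway:latest   # hypothetical image
</code></pre>
<p>Back-end deployments would use the opposite label (e.g. <code>subnet-type: private</code>) in the same way.</p>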
| gohm'c |
<p>I followed the steps from <code>AWS</code> knowledge base to create persistent storage: <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/" rel="noreferrer">Use persistent storage in Amazon EKS</a></p>
<p>Unfortunately, <code>PersistentVolume</code>(PV) wasn't created:</p>
<pre><code>kubectl get pv
No resources found
</code></pre>
<p>When I checked the PVC events, I got the following provisioning failure message:</p>
<pre><code>storageclass.storage.k8s.io "ebs-sc" not found
failed to provision volume with StorageClass "ebs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
</code></pre>
<p>I'm using <code>Kubernetes v1.21.2-eks-0389ca3</code></p>
<hr />
<p>Update:</p>
<p>The storageclass.yaml used in the example has provisioner set to ebs.csi.aws.com</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>When I updated it using @gohm'c answer, it created a pv.</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
</code></pre>
| cloudviz | <pre><code>storageclass.storage.k8s.io "ebs-sc" not found
failed to provision volume with StorageClass "ebs-sc"
</code></pre>
<p>You need to create the storage class "ebs-sc" after the EBS CSI driver is installed, for example:</p>
<pre><code>cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
</code></pre>
<p>See <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">here</a> for more options.</p>
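<p>As a quick sanity check (a sketch; note that with <code>WaitForFirstConsumer</code> the PV is only provisioned once a Pod that uses the PVC is scheduled):</p>
<pre><code>kubectl get sc ebs-sc
kubectl get pvc        # should go from Pending to Bound after a consuming Pod starts
kubectl get pv
</code></pre>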
| gohm'c |
<p>I'd like to get information about k8s cronjob run times.
There are so many jobs in my k8s cluster
that it's hard to see which times they are concentrated on.
I want to distribute my jobs evenly.
Is there a way to count cronjob run times or sort them by time?</p>
| SungsooKim | <p>I have tried to find a suitable tool that can help with your case.
Unfortunately, I did not find anything suitable and easy to use at the same time.</p>
<p>It is possible to use <code>Prometheus + Grafana</code> to monitor <code>CronJobs</code> e.g using this <a href="https://grafana.com/grafana/dashboards/10884" rel="nofollow noreferrer">Kubernetes Cron and Batch Job monitoring</a> dashboard.<br />
However, I don't think you will find any useful information in this way, just a dashboard that displays the number of <code>CronJobs</code> in the cluster.</p>
<hr />
<p>For this reason, I decided to write a Bash script that is able to display the last few <code>CronJobs</code> run in a readable manner.</p>
<p>As described in the <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob documentation</a>:</p>
<blockquote>
<p>A CronJob creates Jobs on a repeating schedule.</p>
</blockquote>
<p>To find out how long a specific <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Job</a> was running, we can check its <code>startTime</code> and <code>completionTime</code> e.g. using the commands below:</p>
<pre><code># kubectl get job <JOB_NAME> --template '{{.status.startTime}}' # "startTime"
# kubectl get job <JOB_NAME> --template '{{.status.completionTime}}' # "completionTime"
</code></pre>
<p>To get the duration of <code>Jobs</code> in seconds, we can convert <code>startTime</code> and <code>completionTime</code> dates to <strong>epoch</strong>:</p>
<pre><code># date -d "<SOME_DATE> +%s
</code></pre>
<p>And this is the entire Bash script:<br />
<strong>NOTE:</strong> We need to pass the namespace name as an argument.</p>
<pre><code>#!/bin/bash
# script name: cronjobs_timetable.sh <NAMESPACE>
namespace=$1
for cronjob_name in $(kubectl get cronjobs -n $namespace --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'); do
  echo "===== CRONJOB_NAME: ${cronjob_name} ==========="
  printf "%-15s %-15s %-15s %-15s\n" "START_TIME" "COMPLETION_TIME" "DURATION" "JOB_NAME"
  for job_name in $(kubectl get jobs -n $namespace --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -w "${cronjob_name}-[0-9]*$"); do
    startTime="$(kubectl get job ${job_name} -n $namespace --template '{{.status.startTime}}')"
    completionTime="$(kubectl get job ${job_name} -n $namespace --template '{{.status.completionTime}}')"
    if [[ "$completionTime" == "<no value>" ]]; then
      continue
    fi
    duration=$[ $(date -d "$completionTime" +%s) - $(date -d "$startTime" +%s) ]
    printf "%-15s %-15s %-15s %-15s\n" "$(date -d ${startTime} +%X)" "$(date -d ${completionTime} +%X)" "${duration} s" "$job_name"
  done
done
</code></pre>
<p>By default, this script only displays the last three <code>Jobs</code>, but this may be modified in the CronJob configuration using the <code>.spec.successfulJobsHistoryLimit</code> and <code>.spec.failedJobsHistoryLimit</code> fields (for more information see <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="nofollow noreferrer">Kubernetes Jobs History Limits</a>).</p>
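<p>For illustration, a sketch of where these fields sit in a CronJob spec (the name and schedule are made up; depending on your cluster version the apiVersion may be <code>batch/v1</code> or <code>batch/v1beta1</code>):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello                      # hypothetical CronJob
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 10   # keep the last 10 successful Jobs instead of the default 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["date"]
          restartPolicy: OnFailure
</code></pre>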
<p>We can check how it works:</p>
<pre><code>$ ./cronjobs_timetable.sh default
===== CRONJOB_NAME: hello ===========
START_TIME COMPLETION_TIME DURATION JOB_NAME
02:23:00 PM 02:23:12 PM 12 s hello-1616077380
02:24:02 PM 02:24:13 PM 11 s hello-1616077440
02:25:03 PM 02:25:15 PM 12 s hello-1616077500
===== CRONJOB_NAME: hello-2 ===========
START_TIME COMPLETION_TIME DURATION JOB_NAME
02:23:01 PM 02:23:23 PM 22 s hello-2-1616077380
02:24:02 PM 02:24:24 PM 22 s hello-2-1616077440
02:25:03 PM 02:25:25 PM 22 s hello-2-1616077500
===== CRONJOB_NAME: hello-3 ===========
START_TIME COMPLETION_TIME DURATION JOB_NAME
02:23:01 PM 02:23:32 PM 31 s hello-3-1616077380
02:24:02 PM 02:24:34 PM 32 s hello-3-1616077440
02:25:03 PM 02:25:35 PM 32 s hello-3-1616077500
===== CRONJOB_NAME: hello-4 ===========
START_TIME COMPLETION_TIME DURATION JOB_NAME
02:23:01 PM 02:23:44 PM 43 s hello-4-1616077380
02:24:02 PM 02:24:44 PM 42 s hello-4-1616077440
02:25:03 PM 02:25:45 PM 42 s hello-4-1616077500
</code></pre>
<p>Additionally, you'll likely want to create exceptions and error handling to make this script work as expected in all cases.</p>
| matt_j |
<p>I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering dept. that I should Dockerize it as one app instead of a separate front end and back end...</p>
<p>I have the app functioning as I need without any issues, my current directory structure is.</p>
<pre class="lang-bash prettyprint-override"><code>.
βββ README.md
βββ backend
β βββ Dockerfile
β βββ Pipfile
β βββ Pipfile.lock
β βββ main.py
βββ frontend
βββ Dockerfile
βββ index.html
βββ package-lock.json
βββ package.json
βββ postcss.config.js
βββ src
β βββ App.jsx
β βββ favicon.svg
β βββ index.css
β βββ logo.svg
β βββ main.jsx
βββ tailwind.config.js
βββ vite.config.js
</code></pre>
<p>I am a bit of a Docker noob and have only ever built images for projects that aren't split into a front end and back end.</p>
<p>I have a <code>.env</code> file in each, only simple things like URLs or hosts.</p>
<p>I currently run the app, with the front end and backend separately as an example.</p>
<pre class="lang-bash prettyprint-override"><code>> ./frontend
> npm run dev
</code></pre>
<pre class="lang-bash prettyprint-override"><code>> ./backend
> uvicorn ....
</code></pre>
<p>Can anyone give me tips /advice on how I can dockerize this as one?</p>
| mrpbennett | <p>Following up on Vinalti's answer. I would also recommend using one Dockerfile for the backend, one for the frontend and a docker-compose.yml file to link them together. Given the following project structure, this is what worked for me.</p>
<p>Project running fastapi (backend) on port 8000 and reactjs (frontend) on port 3006.</p>
<pre><code>.
├── README.md
├── docker-compose.yml
├── backend
│   ├── .env
│   ├── Dockerfile
│   ├── app/
│   ├── venv/
│   ├── requirements.txt
│   └── main.py
└── frontend
    ├── .env
    ├── Dockerfile
    ├── package.json
    ├── package-lock.json
    ├── src/
    └── ...
</code></pre>
<p>backend/Dockerfile</p>
<pre><code>FROM python:3.10
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/
CMD ["uvicorn", "app.api:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>frontend/Dockerfile</p>
<pre><code># pull official base image
FROM node:latest as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# Silent clean install of npm
RUN npm ci --silent
RUN npm install [email protected] -g --silent
# add app
COPY . /app/
# Build production
RUN npm run build
RUN npm install -g serve
## Start the app on port 3006
CMD serve -s build -l 3006
</code></pre>
<p>docker-compose.yml</p>
<pre><code>version: '3.8'
services:
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:8000:8000"
    expose:
      - 8000
  frontend:
    env_file:
      - frontend/.env
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:3006:3006"
    expose:
      - 3006
</code></pre>
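<p>With the files above in place, a typical way to bring both services up from the project root is shown below (assuming a recent Docker with the compose plugin; older installs would use <code>docker-compose</code> instead):</p>
<pre><code>docker compose up --build
# frontend: http://localhost:3006 , backend: http://localhost:8000
</code></pre>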
| Peter Hamfelt |
<p><code>uname -srm</code></p>
<p>This gives the Linux kernel version.</p>
<p>How do I find the Linux kernel version of all the containers running inside my EKS deployments? Can we do it using a <code>kubectl</code> command?</p>
| Biju | <p>You can check with kubectl, as long as the container supports exec and has <code>uname</code> available: <code>kubectl exec --namespace <if not default> <pod> -- uname -srm</code></p>
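<p>To cover all pods at once, a small loop along these lines should work (a sketch; note that containers share their node's kernel, so this effectively reports the host kernel for each pod):</p>
<pre><code>for pod in $(kubectl get pods -n default -o name); do
  echo -n "$pod: "
  kubectl exec -n default "$pod" -- uname -srm 2>/dev/null || echo "exec not available"
done
</code></pre>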
| gohm'c |
<p>I have the following ingress config:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          serviceName: web-service
          servicePort: 5000
      paths:
      - path: /api/tasks/.*
        pathType: Prefix
        backend:
          serviceName: tasks-service
          servicePort: 5004
      paths:
      - path: /api/.*
        pathType: Prefix
        backend:
          serviceName: um-service
          servicePort: 5001
</code></pre>
<p>I intend to load the frontend by default, then use the other paths for loading other services. I would like to get the total tasks count from <code>/api/tasks/total_count</code> and raise a new task from <code>/api/tasks/raise</code>. At the same time, I would like to log in using <code>/api/auth/login/</code> and view other users with <code>/api/users/list</code>, both served by the um-service.
The above configuration only returns the default path of the last service, which is the um-service.
How do I configure it so that the web frontend loads by default, either <code>/api/auth/login</code> or <code>/api/users/list</code> is routed to the um-service, and <code>/api/tasks/</code> is routed to the tasks service? Kindly advise.</p>
| Denn | <p>If I understand you correctly, you want to achieve that result:</p>
<pre><code>$ curl <MY_DOMAIN>/api/auth/login
um-service
$ curl <MY_DOMAIN>/api/users/list
um-service
$ curl <MY_DOMAIN>/api/tasks/
tasks-service
$ curl <MY_DOMAIN>/
web-service
</code></pre>
<p>You almost did everything right, but <code>paths</code> should only be given once</p>
<p>Try this configuration:<br></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 5000
      - path: /api/tasks/.*
        backend:
          serviceName: tasks-service
          servicePort: 5004
      - path: /api/.*
        backend:
          serviceName: um-service
          servicePort: 5001
</code></pre>
| matt_j |
<p>I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on AWS. Creating the deployment keeps bringing up errors that I can't decode for now. This is just a test deployment in preparation for the migration of my company's web apps to kubernetes.</p>
<p>I tried editing the content of the deployment to look like conventional examples I've found. I can't even get this simple example to work. You may find the deployment.yaml content below.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  ports:
    - port: 80
  selector:
    app: ghost
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      containers:
      - image: ghost:4-alpine
        name: ghost
        env:
        - name: database_client
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: client
        - name: database_connection_host
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: host
        - name: database_connection_user
          valueFrom:
            secretKeyRef:tha
        - name: database_connection_password
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: ghostdcp
        - name: database_connection_database
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: ghostdcd
        ports:
        - containerPort: 2368
          name: ghost
        volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost
      volumes:
      - name: ghost-persistent-storage
        persistentVolumeClaim:
          claimName: efs-ghost
</code></pre>
<p>I ran this line in cmd, in the folder containing the file:</p>
<p><code>kubectl create -f deployment-ghost.yaml --validate=false</code></p>
<blockquote>
<p>service/ghost created
Error from server (BadRequest): error when creating "deployment-ghost.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.ValueFrom: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|lueFrom":"secretKeyR|..., bigger context ...|},{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},{"name":"database_connection_pa|...</p>
</blockquote>
<p>I couldn't find any information on this in my searches, and I can't get the deployment created. Can someone who understands this please help me out?</p>
| Vahid | <blockquote>
<p>{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},</p>
</blockquote>
<p>Your spec has error:</p>
<pre><code>...
        - name: database_connection_user   # <-- The error message points to this env variable
          valueFrom:
            secretKeyRef:
              name: <secret name, eg. eks-keys>
              key: <key in the secret>
...
</code></pre>
| gohm'c |
<p>I've tried to measure DNS latency in my docker-compose/kubernetes cluster.</p>
<pre><code>setInterval(() => {
  console.time('dns-test');
  dns.lookup('http://my-service', (_, addresses, __) => {
    console.log('addresses:', addresses);
    console.timeEnd('dns-test');
  });
}, 5000);
</code></pre>
<p>But get <code>addresses: undefined</code>, any ideas?</p>
| Ballon Ura | <p><code>...dns.lookup('http://my-service'...</code></p>
<p>The <a href="https://nodejs.org/api/dns.html#dnslookuphostname-options-callback" rel="nofollow noreferrer">lookup</a> function (with example usage) takes as its first parameter the host name that you want to look up, e.g. google.com. You should remove "http://" from the name you passed in.</p>
| gohm'c |
<p>I am writing Ansible scripts for deploying services using Kubernetes, and I am stuck on a step in the post-deployment process:</p>
<p>I have deployed a service with "<strong>replicas: 3</strong>", and all the replicas are up and running. My problem now is that I have to do a migration, for which I have to get into the container and run a script already present there.</p>
<p>I can do it manually by getting into each container individually and then running the script, but this again requires manual intervention.</p>
<p>What I want to achieve is: once the deployment is done and all the replicas are up and running, I want to run the script by getting into the containers, with all these steps performed by the Ansible script and no manual effort required.</p>
<p>Is there a way to do this?</p>
| Anuj Kishor | <p>@Vasili Angapov is right - <strong>k8s_exec</strong> module is probably the best solution in this case but I would like to add some useful notes.</p>
<hr />
<p>To use <strong>k8s_exec</strong> we need to know the exact <code>Pod</code> name (we need to pass it as the <code>pod</code> parameter in the Ansible task). As you wrote, I assume that your <code>Pods</code> are managed by a <code>Deployment</code>, so every <code>Pod</code> has a random string in its name added by the <code>ReplicaSet</code>. Therefore, you have to find the full names of the <code>Pods</code> somehow.<br><br>
I've created a simple playbook to illustrate how we can find <code>Pod</code> names for all <code>Pods</code> with the label <code>app=web</code> and then run a sample <code>touch file123456789</code> command on these <code>Pods</code>.</p>
<pre><code>---
- hosts: localhost
  collections:
    - community.kubernetes
  tasks:
    - name: "Search for all Pods labelled app=web"
      k8s_info:
        kind: Pod
        label_selectors:
          - app = web
      register: pod_names
    - name: "Get Pod names"
      set_fact:
        pod_names: "{{ pod_names | json_query('resources[*].metadata.name') }}"
    - name: "Run command on every Pod labelled app=web"
      k8s_exec:
        namespace: default
        pod: "{{ item }}"
        command: touch file123456789
      with_items: "{{ pod_names }}"
</code></pre>
<p><strong>NOTE:</strong> Instead of <code>k8s_exec</code> module you can use <code>command</code> module as well.
In our example instead of <code>k8s_exec</code> task we can have:<br></p>
<pre><code>- name: "Run command on every Pod labelled app=web"
command: >
kubectl exec "{{ item }}" -n default -- touch file123456789
with_items: "{{ pod_names }}"
</code></pre>
| matt_j |
<p>What am I doing wrong in the query below?</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /nginx
backend:
serviceName: nginx
servicePort: 80
</code></pre>
<p>The error I am getting:</p>
<pre><code>error validating "ingress.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with
--validate=false
</code></pre>
| Subhajit Das | <p>The Ingress spec has changed since it was updated to v1. Try:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx
            port:
              number: 80
</code></pre>
| gohm'c |
<p>I have a local kubernetes cluster on a VM.
I use containerd as the CRI,
and when I install Calico I get the following error with calico-kube-controllers:</p>
<blockquote>
<p>"Warning FailedCreatePodSandBox 2m41s (x638 over 140m) kubelet,
serverhostname (combined from similar events): Failed to create pod
sandbox: rpc error: code = Unknown desc = failed to setup network for
sandbox
"a46b6b0c52c2adec7749fff781401e481ca911a198e0406d7fa646c6d5d5e781":
error getting ClusterInformation: Get
https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: tls: server selected unsupported protocol version 301"</p>
</blockquote>
<p>P.S. With Docker as the CRI, it works fine.</p>
<p>OS version Red Hat Enterprise Linux Server release 7.7 (Maipo)</p>
<p>Openssl version OpenSSL 1.1.1 11 Sep 2018</p>
<p>Configuring tls-min-version for kubelet and kube-api-server didn't help.</p>
| MAGuire | <p>Solved the problem. My cluster works behind a corporate proxy, and containerd was sending requests to 10.96.0.1 through the proxy.
I just added the IP 10.96.0.1 to the no-proxy list in the containerd proxy configuration.</p>
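<p>For reference, a sketch of what that can look like when containerd's proxy is set through a systemd drop-in (the file path and proxy URL below are assumptions; adjust them to your environment):</p>
<pre><code># /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.corp.example:3128"
Environment="HTTPS_PROXY=http://proxy.corp.example:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.1,10.96.0.0/12"
</code></pre>
<p>Followed by <code>systemctl daemon-reload && systemctl restart containerd</code> to apply the change.</p>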
| MAGuire |
<p>I have built a new Kubernetes cluster <code>v1.20.1</code> with a single master and a single node, using the Calico CNI.</p>
<p>I deployed a <code>busybox</code> pod in the default namespace.</p>
<pre><code># kubectl get pods busybox -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 12m 10.203.0.129 node02 <none> <none>
</code></pre>
<p><strong>nslookup not working</strong></p>
<pre><code>kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
</code></pre>
<p>cluster is running RHEL 8 with latest update</p>
<p>followed this steps: <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a></p>
<p><strong>nslookup command not able to reach nameserver</strong></p>
<pre><code># kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
</code></pre>
<p><strong>resolve.conf file</strong></p>
<pre><code># kubectl exec -ti dnsutils -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
</code></pre>
<p><strong>DNS pods running</strong></p>
<pre><code># kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-472vx 1/1 Running 1 85m
coredns-74ff55c5b-c75bq 1/1 Running 1 85m
</code></pre>
<p><strong>DNS pod logs</strong></p>
<pre><code> kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
</code></pre>
<p><strong>Service is defined</strong></p>
<pre><code># kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 86m
**I can see the endpoints of DNS pod**
# kubectl get endpoints kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 10.203.0.5:53,10.203.0.6:53,10.203.0.5:53 + 3 more... 86m
</code></pre>
<p><strong>enabled the logging, but didn't see traffic coming to DNS pod</strong></p>
<pre><code># kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
</code></pre>
<p><strong>I can ping DNS POD</strong></p>
<pre><code># kubectl exec -i -t dnsutils -- ping 10.203.0.5
PING 10.203.0.5 (10.203.0.5): 56 data bytes
64 bytes from 10.203.0.5: seq=0 ttl=62 time=6.024 ms
64 bytes from 10.203.0.5: seq=1 ttl=62 time=6.052 ms
64 bytes from 10.203.0.5: seq=2 ttl=62 time=6.175 ms
64 bytes from 10.203.0.5: seq=3 ttl=62 time=6.000 ms
^C
--- 10.203.0.5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 6.000/6.062/6.175 ms
</code></pre>
<p><strong>nmap show port filtered</strong></p>
<pre><code># ke netshoot-6f677d4fdf-5t5cb -- nmap 10.203.0.5
Starting Nmap 7.80 ( https://nmap.org ) at 2021-01-15 22:29 UTC
Nmap scan report for 10.203.0.5
Host is up (0.0060s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
53/tcp filtered domain
8080/tcp filtered http-proxy
8181/tcp filtered intermapper
Nmap done: 1 IP address (1 host up) scanned in 14.33 seconds
</code></pre>
<p><strong>If I schedule the POD on the master node, nslookup works and nmap shows the port as open</strong></p>
<pre><code># ke netshoot -- bash
bash-5.0# nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
nmap -p 53 10.96.0.10
Starting Nmap 7.80 ( https://nmap.org ) at 2021-01-15 22:46 UTC
Nmap scan report for kube-dns.kube-system.svc.cluster.local (10.96.0.10)
Host is up (0.000098s latency).
PORT STATE SERVICE
53/tcp open domain
Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds
</code></pre>
<p>Why is nslookup from a POD running on the worker node not working? How can I troubleshoot this issue?</p>
<p>I re-built the server two times, still the same issue.</p>
<p>Thanks</p>
<p>SR</p>
<h2>Update adding kubeadm config file</h2>
<pre><code># cat kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  kubeletExtraArgs:
    cgroup-driver: "systemd"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "master01:6443"
networking:
  dnsDomain: cluster.local
  podSubnet: 10.0.0.0/14
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
</code></pre>
| sfgroups | <p>First of all, according to the docs - please note that <a href="https://docs.projectcalico.org/getting-started/kubernetes/requirements" rel="noreferrer">Calico</a> and <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noreferrer">kubeadm</a> support <strong>Centos/RHEL 7+</strong>.<br />
In both <code>Calico</code> and <code>kubeadm</code> documentation we can see that they only support <strong>RHEL7+</strong>.</p>
<p>By default <strong>RHEL8</strong> uses <code>nftables</code> instead of <code>iptables</code> ( we can still use <code>iptables</code> but "iptables" on <strong>RHEL8</strong> is actually using the kernel's nft framework in the background - look at <a href="https://access.redhat.com/discussions/5114921" rel="noreferrer">"Running Iptables on RHEL 8"</a>).</p>
<blockquote>
<p><a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/networking_considerations-in-adopting-rhel-8" rel="noreferrer">9.2.1. nftables replaces iptables as the default network packet filtering framework</a></p>
</blockquote>
<p>I believe that <code>nftables</code> may cause this network issues because as we can find on nftables <a href="https://wiki.nftables.org/wiki-nftables/index.php/Adoption" rel="noreferrer">adoption page</a>:</p>
<blockquote>
<p>Kubernetes does not support nftables yet.</p>
</blockquote>
<p><strong>Note:</strong> For now I highly recommend you to use <strong>RHEL7</strong> instead of <strong>RHEL8</strong>.</p>
<hr />
<p>With that in mind, I'll present some information that may help you with <strong>RHEL8</strong>.<br />
I have reproduced your issue and found a solution that works for me.</p>
<ul>
<li>First I opened ports required by <code>Calico</code> - these ports can be found
<a href="https://docs.projectcalico.org/getting-started/kubernetes/requirements" rel="noreferrer">here</a> under "Network requirements".<br />
<strong>As workaround:</strong></li>
<li>Next I reverted to the old <code>iptables</code> backend on all cluster
nodes, you can easily do so by setting <code>FirewallBackend</code> in
<code>/etc/firewalld/firewalld.conf</code> to <code>iptables</code> as described<br />
<a href="https://firewalld.org/2018/07/nftables-backend" rel="noreferrer">here</a>.</li>
<li>Finally I restarted <code>firewalld</code> to make the new rules active.</li>
</ul>
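<p>As an example, the backend change from the second step above might look like this on each node (a sketch; the file path is the firewalld default):</p>
<pre><code># Run on every cluster node:
sed -i 's/^FirewallBackend=.*/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
systemctl restart firewalld
</code></pre>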
<p>I've tried <code>nslookup</code> from <code>Pod</code> running on worker node (kworker) and it seems to work correctly.</p>
<pre><code>root@kmaster:~# kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/web 1/1 Running 0 112s 10.99.32.1 kworker <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.99.0.1 <none> 443/TCP 5m51s <none>
root@kmaster:~# kubectl exec -it web -- bash
root@web:/# nslookup kubernetes.default
Server: 10.99.0.10
Address: 10.99.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.99.0.1
root@web:/#
</code></pre>
| matt_j |
<p>I am getting <code>ServiceUnavailable</code> error when I try to run <code>kubectl top nodes</code> or <code>kubectl top pods</code> command in EKS. I am running my cluster in EKS , and I am not finding any solution for this online. If any one have faced this issue in EKS please let me know how we can resolve this issue</p>
<pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
</code></pre>
<p>out put of <code>kubectl get apiservices v1beta1.metrics.k8s.io -o yaml</code></p>
<pre><code>apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiregistration.k8s.io/v1","kind":"APIService","metadata":{"annotations":{},"labels":{"k8s-app":"metrics-server"},"name":"v1beta1.metrics.k8s.io"},"spec":{"group":"metrics.k8s.io","groupPriorityMinimum":100,"insecureSkipTLSVerify":true,"service":{"name":"metrics-server","namespace":"kube-system"},"version":"v1beta1","versionPriority":100}}
  creationTimestamp: "2022-02-03T08:22:59Z"
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
  resourceVersion: "1373088"
  uid: 2066d4cb-8105-4aea-9678-8303595dc47b
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2022-02-03T08:22:59Z"
    message: 'failing or missing response from https://10.16.55.204:4443/apis/metrics.k8s.io/v1beta1:
      Get "https://10.16.55.204:4443/apis/metrics.k8s.io/v1beta1": dial tcp 10.16.55.204:4443:
      i/o timeout'
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available
</code></pre>
<p><code>metrics-server 1/1 1 1 3d22h</code></p>
<p><code>kubectl describe deployment metrics-server -n kube-system</code></p>
<pre><code>Name: metrics-server
Namespace: kube-system
CreationTimestamp: Thu, 03 Feb 2022 09:22:59 +0100
Labels: k8s-app=metrics-server
Annotations: deployment.kubernetes.io/revision: 2
Selector: k8s-app=metrics-server
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=metrics-server
Service Account: metrics-server
Containers:
metrics-server:
Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.0
Port: 4443/TCP
Host Port: 0/TCP
Args:
--cert-dir=/tmp
--secure-port=4443
--kubelet-insecure-tls=true
--kubelet-preferred-address-types=InternalIP
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--kubelet-use-node-status-port
--metric-resolution=15s
Requests:
cpu: 100m
memory: 200Mi
Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Priority Class Name: system-cluster-critical
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: metrics-server-5dcd6cbcb9 (1/1 replicas created)
Events: <none>
</code></pre>
| Waseem Mir | <p>Download the <a href="https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml" rel="nofollow noreferrer">components.yaml</a>, find and replace 4443 with 443, then do a <code>kubectl replace -f components.yaml -n kube-system --force</code>.</p>
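<p>A sketch of those steps on the command line (review the edited file before applying, since the replacement touches every occurrence of 4443; the URL is the one linked above):</p>
<pre><code>curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
sed -i 's/4443/443/g' components.yaml
kubectl replace -f components.yaml -n kube-system --force
</code></pre>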
| gohm'c |
<p>I'm using <a href="https://github.com/kubernetes/client-go" rel="noreferrer">https://github.com/kubernetes/client-go</a> and all works well.</p>
<p>I have a manifest (YAML) for the official Kubernetes Dashboard: <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml" rel="noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml</a></p>
<p>I want to mimic <code>kubectl apply</code> of this manifest in Go code, using client-go.</p>
<p>I understand that I need to do some (un)marshalling of the YAML bytes into the correct API types defined in package: <a href="https://github.com/kubernetes/api" rel="noreferrer">https://github.com/kubernetes/api</a></p>
<p>I have successfully <code>Create</code>ed single API types to my cluster, <strong>but how do I do this for a manifest that contains a list of types that are not the same</strong>? Is there a resource <code>kind: List*</code> that supports these different types?</p>
<p>My current workaround is to split the YAML file using <code>csplit</code> with --- as the delimiter</p>
<pre class="lang-sh prettyprint-override"><code>csplit /path/to/recommended.yaml /---/ '{*}' --prefix='dashboard.' --suffix-format='%03d.yaml'
</code></pre>
<p>Next, I loop over the new (14) parts that were created, read their bytes, switch on the type of the object returned by the UniversalDeserializer's decoder and call the correct API methods using my k8s clientset.</p>
<p>I would like to do this to programmatically to make updates to any new versions of the dashboard into my cluster. I will also need to do this for the Metrics Server and many other resources. The alternative (maybe simpler) method is to ship my code with kubectl installed to the container image and directly call <code>kubectl apply -f -</code>; but that means I also need to write the kube config to disk or maybe pass it inline so that kubectl can use it.</p>
<p>I found this issue to be helpful: <a href="https://github.com/kubernetes/client-go/issues/193" rel="noreferrer">https://github.com/kubernetes/client-go/issues/193</a>
The decoder lives here: <a href="https://github.com/kubernetes/apimachinery/tree/master/pkg/runtime/serializer" rel="noreferrer">https://github.com/kubernetes/apimachinery/tree/master/pkg/runtime/serializer</a></p>
<p>It's exposed in client-go here: <a href="https://github.com/kubernetes/client-go/blob/master/kubernetes/scheme/register.go#L69" rel="noreferrer">https://github.com/kubernetes/client-go/blob/master/kubernetes/scheme/register.go#L69</a></p>
<p>I've also taken a look at the RunConvert method that is used by kubectl: <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/convert/convert.go#L139" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/convert/convert.go#L139</a> and assume that I can provide my own <a href="https://github.com/kubernetes/cli-runtime/blob/master/pkg/genericclioptions/io_options.go#L27" rel="noreferrer">genericclioptions.IOStreams</a> to get the output?</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/convert/convert.go#L141-L145" rel="noreferrer">It looks like RunConvert is on a deprecation path</a></p>
<p>I've also looked at other questions tagged [client-go] but most use old examples or use a YAML file with a single <code>kind</code> defined, and the API has changed since.</p>
<p>Edit: Because I need to do this for more than one cluster and am creating clusters programmatically (AWS EKS API + CloudFormation/<a href="https://github.com/weaveworks/eksctl" rel="noreferrer">eksctl</a>), I would like to minimize the overhead of creating <code>ServiceAccount</code>s across many cluster contexts, across many AWS accounts. Ideally, the only authentication step involved in creating my clientset is using <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator" rel="noreferrer">aws-iam-authenticator</a> to get a token using cluster data (name, region, CA cert, etc). There hasn't been a release of aws-iam-authenticator for a while, but the contents of <code>master</code> allow for the use of a third-party role cross-account role and external ID to be passed. IMO, this is cleaner than using a <code>ServiceAccount</code> (and <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="noreferrer">IRSA</a>) because there are other AWS services the application (the backend API which creates and applies add-ons to these clusters) needs to interact with.</p>
<p>Edit: I have recently found <a href="https://github.com/ericchiang/k8s" rel="noreferrer">https://github.com/ericchiang/k8s</a>. It's definitely simpler to use than client-go, at a high-level, but doesn't support this behavior.</p>
| Simon | <p>It sounds like you've figured out how to deserialize YAML files into Kubernetes <code>runtime.Object</code>s, but the problem is dynamically deploying a <code>runtime.Object</code> without writing special code for each Kind.</p>
<p><code>kubectl</code> achieves this by interacting with the <a href="https://godoc.org/k8s.io/client-go/rest" rel="nofollow noreferrer">REST API</a> directly. Specifically, via <a href="https://godoc.org/k8s.io/cli-runtime/pkg/resource#Helper" rel="nofollow noreferrer">resource.Helper</a>.</p>
<p>In my code, I have something like:</p>
<pre><code>import (
    meta "k8s.io/apimachinery/pkg/api/meta"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/cli-runtime/pkg/resource"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/restmapper"
)

func createObject(kubeClientset kubernetes.Interface, restConfig rest.Config, obj runtime.Object) (runtime.Object, error) {
    // Create a REST mapper that tracks information about the available resources in the cluster.
    groupResources, err := restmapper.GetAPIGroupResources(kubeClientset.Discovery())
    if err != nil {
        return nil, err
    }
    rm := restmapper.NewDiscoveryRESTMapper(groupResources)

    // Get some metadata needed to make the REST request.
    gvk := obj.GetObjectKind().GroupVersionKind()
    gk := schema.GroupKind{Group: gvk.Group, Kind: gvk.Kind}
    mapping, err := rm.RESTMapping(gk, gvk.Version)
    if err != nil {
        return nil, err
    }
    namespace, err := meta.NewAccessor().Namespace(obj)
    if err != nil {
        return nil, err
    }

    // Create a client specifically for creating the object.
    restClient, err := newRestClient(restConfig, mapping.GroupVersionKind.GroupVersion())
    if err != nil {
        return nil, err
    }

    // Use the REST helper to create the object in the "default" namespace.
    restHelper := resource.NewHelper(restClient, mapping)
    return restHelper.Create(namespace, false, obj)
}

func newRestClient(restConfig rest.Config, gv schema.GroupVersion) (rest.Interface, error) {
    restConfig.ContentConfig = resource.UnstructuredPlusDefaultContentConfig()
    restConfig.GroupVersion = &gv
    if len(gv.Group) == 0 {
        restConfig.APIPath = "/api"
    } else {
        restConfig.APIPath = "/apis"
    }
    return rest.RESTClientFor(&restConfig)
}
</code></pre>
| Kevin Lin |
<p>I am using Jenkins and Kubernetes;
each time I trigger a job, a new inbound-agent pod is created to execute the job. So far so good.</p>
<p>However, pipelines that trigger other pipelines underneath them are an issue:</p>
<pre><code>pipeline {
    agent { label 'kubernetePod' }
    stages {
        stage('Building') {
            steps {
                build job: 'Build', parameters: []
            }
        }
        stage('Testing') {
            steps {
                build job: 'Test', parameters: []
            }
        }
    }
}
</code></pre>
<p>In this case, we have three pipelines</p>
<ul>
<li>'main' pipeline having a 'building' stage and a 'testing' stage</li>
<li>'build' pipeline</li>
<li>'test' pipeline</li>
</ul>
<p>So a <strong>pod 'A'</strong> is created to execute the main pipeline,</p>
<p>then a <strong>pod 'B'</strong> is created to checkout and build the solution</p>
<p>finally a <strong>pod 'C'</strong> is created to execute solution tests <strong>but it crashed</strong> because the solution is contained in the <strong>pod 'B'</strong>.</p>
<p><strong>My question is how do I keep the pod 'A' for executing the underneath pipelines!</strong></p>
<p>thank you for your help.</p>
<p>best regards</p>
| itbob | <p>I think that Kubernetes slaves are designed to be stateless and are meant for single use.</p>
<p>The only workaround I think may help in some cases is to use <code>idleMinutes</code> to set how long the <code>Pod</code> should be in the <code>Running</code> state.<br />
In your case if you use <code>idleMinutes</code> with appropriate values, you will be able to run <code>Build</code> and <code>Test</code> jobs using the same <code>Pod</code> (<code>podB</code> from your example).</p>
<hr />
<p>I've created an example to illustrate you how it works.</p>
<p>This is the pipeline snippet:</p>
<pre><code>pipeline {
    agent {
        kubernetes {
            label 'test'
            idleMinutes "60"
        }
    }
    stages {
        stage('Building') {
            steps {
                build job: 'Build'
            }
        }
        stage('Testing') {
            steps {
                build job: 'Test'
            }
        }
    }
}
</code></pre>
<p>This is the <code>Build</code> job:</p>
<pre><code>node('test') {
    stage('Run shell') {
        sh 'touch /home/jenkins/test_file'
        sh 'echo 123abc > /home/jenkins/test_file'
    }
}
</code></pre>
<p>This is the <code>Test</code> job:<br />
<strong>NOTE:</strong> This job reads the file created by the <code>Build</code> job.</p>
<pre><code>node('test') {
    stage('Run shell') {
        sh 'cat /home/jenkins/test_file'
    }
}
</code></pre>
<p>As a result we can see that two Pods were created (<code>podA</code> and <code>podB</code> from your example) and they are still in the <code>Running</code> state:</p>
<pre><code># kubectl get pods
NAME READY STATUS RESTARTS AGE
test-99sfv-7fhw2 1/1 Running 0 4m45s
test-99sfv-vth3j 1/1 Running 0 4m25s
</code></pre>
<p>Additionally in the <code>test-99sfv-vth3j</code> <code>Pod</code> there is <code>test_file</code> created by the <code>Build</code> job:</p>
<pre><code># kubectl exec test-99sfv-vth3j -- cat /home/jenkins/test_file
123abc
</code></pre>
| matt_j |
<p>I have several pods running inside the Kubernetes cluster.
How can I make an HTTP call to a specific Pod, without going through a LoadBalancer service?</p>
| Oleksandr Onopriienko | <p><code>...make http call to a specific Pod, without calling LoadBalancer service?</code></p>
<p>There are several ways, try <code>kubectl port-forward <pod name> 8080:80</code>, then open another terminal and you can now do <code>curl localhost:8080</code> which will forward your request to the pod. More details <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">here</a>.</p>
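<p>For instance (assuming the Pod listens on port 80):</p>
<pre><code># terminal 1: forward local port 8080 to port 80 of the pod
kubectl port-forward <pod name> 8080:80

# terminal 2: call the pod through the forwarded port
curl localhost:8080
</code></pre>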
| gohm'c |
<p>I am new to Kubernetes and AWS and I have a problem. I am trying to run parallel Kubernetes jobs on an EKS cluster. How can I get the environment variable JOB_COMPLETION_INDEX?
I have tested my Java code with Minikube "locally" before, where everything works fine. But when I switch to the EKS cluster, System.getenv("JOB_COMPLETION_INDEX") returns null. What am I missing? What am I doing wrong?</p>
<p>I used EKS version 1.21.2.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: calculator
  labels:
    jobgroup: calculator
spec:
  parallelism: 2
  completions: 4
  completionMode: Indexed
  template:
    metadata:
      name: calculator
    spec:
      containers:
      - name: calculater
        image: fbaensch/calculator_test:latest
        imagePullPolicy: Always
      restartPolicy: Never
</code></pre>
| fbaensch | <p>This is a v1.22 beta feature which is currently not available on EKS v1.21.x.</p>
| gohm'c |
<p>I have a Kubernetes Cluster with a NGINX Ingress Controller. In the cluster I have deployed a <a href="https://gitea.io/" rel="noreferrer">Gitea</a> POD. The Web UI and the HTTPS access is exposed via an Ingress object like this one:</p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
  name: gitea-service
  namespace: gitea-repo
spec:
  selector:
    app: gitea
  ports:
    - name: gitea-http
      port: 3000
    - name: gitea-ssh
      port: 22
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: git-tls
  namespace: gitea-repo
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - git.foo.com
      secretName: tls-gitea
  rules:
    - host: git.foo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea-service
                port:
                  number: 3000
</code></pre>
<p>This all works fine for HTTPS.</p>
<p>But Gitea also provides SSH access through port 22. My question is, how can I tell the NGINX Ingress Controller to also route port 22 to my pod?</p>
<p>As far as I understand I should patch my NGINX controller deployment with something like that:</p>
<pre><code>spec:
  template:
    spec:
      containers:
        - name: controller
          # define custom tcp/udp configmap
          args:
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-configmap-giteassh
          ports:
            - name: ssh
              containerPort: 22
              protocol: TCP
</code></pre>
<p>and providing a config map pointing to my gitea service:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-giteassh
  namespace: ingress-nginx
data:
  22: "gitea-repo/gitea-service:22"
</code></pre>
</code></pre>
<p>Do I also need an additional Ingress configuration in my Gitea POD now?</p>
<p>I wonder if this is the correct approach. Why am I forced to define this in the NGINX controller and not in my POD namespace, as I do for HTTP? This would mean that for each POD exposing a TCP port other than HTTP I would have to adjust my NGINX Ingress Controller.</p>
| Ralph | <p>I decided to improve my answer by adding more details and explanations.<br />
I found your <a href="https://ralph.blog.imixs.com/2021/02/25/running-gitea-on-kubernetes/" rel="noreferrer">blog post</a> which contains all the information I need to reproduce this problem.</p>
<p>In your example you need <strong>TCP traffic</strong> to pass through on port <code>22</code> (it is TCP protocol).<br />
NGINX Ingress Controller doesn't support TCP protocol, so additional configuration is necessary, which can be found in the <a href="https://ralph.blog.imixs.com/2021/02/25/running-gitea-on-kubernetes/" rel="noreferrer">documentation</a>.<br />
You can follow the steps below to expose the TCP service:</p>
<ol>
<li>Create a <code>ConfigMap</code> with the specified TCP service configuration.</li>
<li>Add the <code>--tcp-services-configmap</code> flag to the Ingress controller configuration.</li>
<li>Expose port <code>22</code> in the <code>Service</code> defined for the Ingress.</li>
</ol>
<hr />
<p><em>Ad 1.</em> We need to create a <code>ConfigMap</code> with the key that is the external port to use and the value that indicates the service to expose (in the format <code><namespace/service name>:<service port>:[PROXY]:[PROXY]</code>).<br />
<strong>NOTE:</strong> You may need to change <code>Namespace</code> depending on where the NGINX Ingress controller is deployed.</p>
<pre><code>$ cat ingress-nginx-tcp.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: default
data:
  "22": gitea-repo/gitea-service:22
</code></pre>
<p><em>Ad 2.</em> After creating the <code>ConfigMap</code>, we can point to it using the <code>--tcp-services-configmap</code> flag in the Ingress controller configuration.<br />
<strong>NOTE:</strong> Additionally, if you want to use a name (e.g. <code>22-tcp</code>) in the port definition and then reference this name in the <code>targetPort</code> attribute of a Service, you need to define port <code>22</code> (see: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="noreferrer">Defining a Service</a> documentation).</p>
<pre><code>$ kubectl get deployment ingress-nginx-controller -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: default
spec:
  ...
  template:
    ...
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=$(POD_NAMESPACE)/ingress-nginx-tcp
        ...
</code></pre>
<p><em>Ad 3.</em> Then we need to expose port <code>22</code> in the Service defined for the Ingress.</p>
<pre><code>$ kubectl get svc -oyaml ingress-nginx-controller
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: default
spec:
  ports:
  - name: 22-tcp
    nodePort: 30957
    port: 22
    protocol: TCP
    targetPort: 22
  ...
  type: LoadBalancer
  ...
</code></pre>
<p>Finally, we can check if it works as expected by creating a new repository on the command line:<br />
<strong>Note:</strong> We need to have a gitea user with the proper SSH keys associated with this account.</p>
<pre><code>$ git add README.md
$ git commit -m "first commit"
[master (root-commit) c6fa042] first commit
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 README.md
$ git remote add origin git@<PUBLIC_IP>:<USERNAME>/repository.git
$ git push -u origin master
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 211 bytes | 211.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: . Processing 1 references
remote: Processed 1 references in total
To <PUBLIC_IP>:<USERNAME>/repository.git
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
</code></pre>
<p>In addition, we can log to the NGINX Ingress Controller Pod and check if it is listening on port <code>22</code>:</p>
<pre><code>$ kubectl exec -it ingress-nginx-controller-784d4c9d9-jhvnm -- bash
bash-5.1$ netstat -tulpn | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 :::22 :::* LISTEN -
</code></pre>
| matt_j |
<pre><code>[root@k8s001 ~]# docker exec -it f72edf025141 /bin/bash
root@b33f3b7c705d:/var/lib/ghost# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 1012 4 ? Ss 02:45 0:00 /pause
root 8 0.0 0.0 10648 3400 ? Ss 02:57 0:00 nginx: master process nginx -g daemon off;
101 37 0.0 0.0 11088 1964 ? S 02:57 0:00 nginx: worker process
node 38 0.9 0.0 2006968 116572 ? Ssl 02:58 0:06 node current/index.js
root 108 0.0 0.0 3960 2076 pts/0 Ss 03:09 0:00 /bin/bash
root 439 0.0 0.0 7628 1400 pts/0 R+ 03:10 0:00 ps aux
</code></pre>
<p>The output above comes from the internet; it says that the pause container is the parent process of the other containers in the pod, and that if you attach to the pod or the other containers and run <code>ps aux</code>, you would see that.
Is this correct? When I do it in my k8s it is different: PID 1 is not /pause.</p>
| zhuwei | <p><code>...Is it correct, I do in my k8s,different, PID 1 is not /pause.</code></p>
<p>This has changed: pause no longer holds PID 1 despite being the first container created by the container runtime to set up the pod (e.g. cgroups, namespaces etc.). Pause is isolated (hidden) from the rest of the containers in the pod regardless of your ENTRYPOINT/CMD. See <a href="https://github.com/cri-o/cri-o/issues/91" rel="nofollow noreferrer">here</a> for more background information.</p>
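<p>If you still want to see the pause (sandbox) containers, they are visible from the node through the CRI tooling rather than from inside the pod; a sketch, assuming <code>crictl</code> is installed on the node and pointed at your container runtime:</p>
<pre><code>crictl pods    # lists pod sandboxes, each backed by a pause container
crictl ps -a   # lists the application containers; pause is not shown here
</code></pre>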
| gohm'c |
<p>I'm struggling to deploy the playbook below (adding a namespace to an OpenShift 3.11 cluster):</p>
<pre><code>---
- hosts: kubernetesmastergfm
  gather_facts: false
  vars:
    name_namespace: testingnamespace
  tasks:
    - name: Create a k8s namespace
      k8s:
        host: "https://{{ cluster.endpoint }}"
        ca_cert: "/etc/origin/master/ca.crt"     <--WHERE IS THIS IN OPENSHIFT 3.11?
        api_key: "/etc/origin/master/admin.key"  <--WHERE IS THIS IN OPENSHIFT 3.11?
        validate_certs: no
        name: pippo
        api_version: v1
        kind: Namespace
        state: present
</code></pre>
<p>I'm getting the error:</p>
<pre><code> ...
kubernetes.client.rest.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Date': 'Tue, 16 Feb 2021 16:05:03 GMT', 'Content-Length': '129', 'Content-Type': 'application/json', 'Cache-Control': 'no-store'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
</code></pre>
<p>I suspect that the certificates in the paths below are wrong:<br />
/etc/origin/master/ca.crt<br />
/etc/origin/master/admin.key</p>
<p>Any suggestion is welcome.
Gian Filippo</p>
| Gian Filippo Maniscalco | <p>The <code>api_key</code> parameter is the value of the <code>ServiceAccount</code> token.
I think you should paste this token directly as the <code>api_key</code> parameter value because providing the path to the file with the token doesn't seem to work.</p>
<p>I will describe required steps on a simple example to illustrate you how it works.</p>
<p>To find the token name associated with a specific <code>ServiceAccount</code> you can use:</p>
<pre><code>### kubectl describe sa <SERVICE ACCOUNT NAME> | grep "Token"
# kubectl describe sa namespace-creator | grep "Token"
Tokens: namespace-creator-token-hs6zn
</code></pre>
<p>And then to display the value of this token:</p>
<pre><code>### kubectl describe secret <TOKEN NAME> | grep "token:"
# kubectl describe secret namespace-creator-token-hs6zn | grep "token:"
token: ey(...)3Q
</code></pre>
<p>Finally pass this token value as the <code>api_key</code> parameter value:</p>
<pre><code>---
...
  tasks:
    - name: Create a k8s namespace
      community.kubernetes.k8s:
        ...
        api_key: "ey(...)3Q"
        validate_certs: no
        ...
</code></pre>
<p>To find out where the CA certificate is located, you may look at the <code>--client-ca-file</code> parameter of the API server e.g:</p>
<pre><code># kubectl describe pod kube-apiserver-master -n kube-system | grep "client-ca-file"
--client-ca-file=/etc/kubernetes/ssl/ca.crt
</code></pre>
<p><strong>NOTE:</strong> If you are using <code>validate_certs: no</code>, you don't need to provide <code>ca_cert</code> parameter.</p>
<p>Additionally, instead of <a href="https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html#parameter-api_key" rel="nofollow noreferrer">api_key</a>, you can use <a href="https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html#parameter-kubeconfig" rel="nofollow noreferrer">kubeconfig</a> with the path to an existing Kubernetes config file.</p>
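<p>That variant might look roughly like this (the kubeconfig path is just an example):</p>
<pre><code>- name: Create a k8s namespace
  community.kubernetes.k8s:
    kubeconfig: /path/to/kubeconfig   # example path; point it at your admin kubeconfig
    name: pippo
    api_version: v1
    kind: Namespace
    state: present
</code></pre>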
| matt_j |
<p>Can anyone tell me why the output of this job does not come out as text?</p>
<p><a href="https://i.stack.imgur.com/IBTtY.png" rel="nofollow noreferrer">Job.yaml</a></p>
<p>The output is οΏ½Η« when it's supposed to be <code>user</code>.</p>
<p>the secret looks like this: <a href="https://i.stack.imgur.com/KnVCV.png" rel="nofollow noreferrer">Secret.yaml</a></p>
| Anthony Delgado | <p>Because the secret value was not base64 encoded during creation. Use <code>stringData</code> for un-encoded value:</p>
<pre><code>...
stringData:
  username: user
  password: password
</code></pre>
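<p>For comparison, a couple of equivalent ways to get properly encoded values (the secret name below is a placeholder):</p>
<pre><code># base64-encode the values yourself and use the data: field
data:
  username: dXNlcg==       # base64 of "user"
  password: cGFzc3dvcmQ=   # base64 of "password"

# or let kubectl do the encoding for you
kubectl create secret generic my-secret --from-literal=username=user --from-literal=password=password
</code></pre>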
| gohm'c |
<p>In <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">Google Cloud blog</a> they say that if Readiness probe fails, then traffic will not be routed to a <strong>pod</strong>. And if Liveliness probe fails, a <strong>pod</strong> will be restarted.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">Kubernetes docs</a> they say that the kubelet uses Liveness probes to know if a <strong>container</strong> needs to be restarted. And Readiness probes are used to check if a <strong>container</strong> is ready to start accepting requests from clients.</p>
<p>My current understanding is that a pod is considered Ready and Alive when <strong>all</strong> of its containers are ready. This in turn implies that if 1 out of 3 containers in a pod fails, then the entire pod will be considered as failed (not Ready / not Alive). And if 1 out of 3 containers was restarted, then it means that the entire pod was restarted. Is this correct?</p>
| kamokoba | <p>A <code>Pod</code> is ready only when all of its containers are ready.
When a Pod is ready, it should be added to the load balancing pools of all matching Services because it means that this Pod is able to serve requests.<br />
As you can see in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes" rel="nofollow noreferrer">Readiness Probe documentation</a>:</p>
<blockquote>
<p>The kubelet uses readiness probes to know when a container is ready to start accepting traffic.</p>
</blockquote>
<p>Using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer"><strong>readiness probe</strong></a> can ensure that traffic does not reach a container that is not ready for it.<br />
Using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer"><strong>liveness probe</strong></a> can ensure that a container is restarted when it fails (the kubelet will kill and restart only that specific container).</p>
<p>Additionally, to answer your last question, I will use an example:</p>
<blockquote>
<p>And if 1 out of 3 containers was restarted, then it means that the entire pod was restarted. Is this correct?</p>
</blockquote>
<p>Let's have a simple <code>Pod</code> manifest file with <code>livenessProbe</code> for one container that always fails:</p>
<pre><code>---
# web-app.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: web-app
name: web-app
spec:
containers:
- image: nginx
name: web
- image: redis
name: failed-container
livenessProbe:
httpGet:
path: /healthz # I don't have this endpoint configured so it will always be failed.
port: 8080
</code></pre>
<p>After creating <code>web-app</code> <code>Pod</code> and waiting some time, we can check how the <code>livenessProbe</code> works:</p>
<pre><code>$ kubectl describe pod web-app
Name: web-app
Namespace: default
Containers:
web:
...
State: Running
Started: Tue, 09 Mar 2021 09:56:59 +0000
Ready: True
Restart Count: 0
...
failed-container:
...
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Ready: False
Restart Count: 7
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
...
Normal Killing 9m40s (x2 over 10m) kubelet Container failed-container failed liveness probe, will be restarted
...
</code></pre>
<p>As you can see, only the <code>failed-container</code> container was restarted (<code>Restart Count: 7</code>).</p>
<p>More information can be found in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">Liveness, Readiness and Startup Probes documentation</a>.</p>
| matt_j |
<p>I try to run this command</p>
<pre><code>kubectl patch deployment w-app-kuku-com -n stage -p '{"spec":{"template":{"spec":{"containers":[{"livenessProbe":{"successThreshold": "5"}}]}}}}'
</code></pre>
<p>And get this error</p>
<pre><code>Error from server: map: map[livenessProbe:map[successThreshold:5]] does not contain declared merge key: name
</code></pre>
<p>I'm trying to change livenessProbe or readinessProbe parameters, for example successThreshold, but I can't!</p>
| Dimitri Goldshtein | <blockquote>
<p>successThreshold: Minimum consecutive successes for the probe to be
considered successful after having failed. Defaults to 1. <strong>Must be 1
for liveness and startup Probes</strong>. Minimum value is 1.</p>
</blockquote>
<p>Asserted from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">here</a>, you can't patch <code>successThreshold</code> to other value beside setting it to 1 for <code>livenessProbe</code>.</p>
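<p>Additionally, the <code>does not contain declared merge key: name</code> part of the error means that a strategic merge patch on the <code>containers</code> list must include the container's <code>name</code> so the API server knows which entry to update. So even for a value that is allowed (e.g. setting <code>successThreshold</code> back to the integer <code>1</code>), the patch would look roughly like this (the container name is a placeholder):</p>
<pre><code>kubectl patch deployment w-app-kuku-com -n stage -p '{"spec":{"template":{"spec":{"containers":[{"name":"<CONTAINER_NAME>","livenessProbe":{"successThreshold":1}}]}}}}'
</code></pre>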
| gohm'c |
<p>I am learning kubernetes and have the following question related to command and argument syntax for POD.</p>
<p>Are there any specific syntax that we need to follow to write a shell script kind of code in the arguments of a POD? For example</p>
<p>In the following code, how will I know that the while true need to end with a semicolon ; why there is no semi colon after do but after If etc</p>
<pre><code> while true;
do
echo $i;
if [ $i -eq 5 ];
then
echo "Exiting out";
break;
fi;
i=$((i+1));
sleep "1";
done
</code></pre>
<p>We don't normally write shell scripts with semicolons like this, so why do we have to do it in a POD?</p>
<p>I tried the command in /bin/bash format as well</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: bash
name: bash
spec:
containers:
- image: nginx
name: nginx
args:
- /bin/bash
- -c
- >
for i in 1 2 3 4 5
do
echo "Welcome $i times"
done
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
<p>Error with new code</p>
<pre><code>/bin/bash: -c: line 2: syntax error near unexpected token `echo'
/bin/bash: -c: line 2: ` echo "Welcome $i times"'
</code></pre>
| James Stan | <p><code>Are there any specific syntax that we need to follow to write a shell script kind of code in the arguments of a POD?</code></p>
<p>No, shell syntax is the same across.</p>
<p><code>...how will I know that the while true need to end with a semicolon</code></p>
<p>Use <code>|</code> for your text block to be treated like an ordinary shell script:</p>
<pre><code>...
args:
- /bin/bash
- -c
- |
for i in 1 2 3 4 5
do
echo "Welcome $i times"
done
</code></pre>
<p>When you use <code>></code> your text block is merged into a single line where newlines are replaced with white space. Your command becomes invalid in that case. If you want your command to be a single line, then write it with <code>;</code> like you would in an ordinary terminal. This is standard shell scripting and is not K8s specific.</p>
<p>If you must use <code>></code>, you need to either add blank lines between the statements or indent every line after the first one more deeply than the first, so that the line breaks are preserved:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: bash
name: bash
spec:
containers:
- image: nginx
name: nginx
args:
- /bin/bash
- -c
- >
       for i in 1 2 3 4 5
         do
           echo "Welcome $i times"
         done
restartPolicy: Never
</code></pre>
<p><code>kubectl logs bash</code> to see the 5 echos and <code>kubectl delete pod bash</code> to clean-up.</p>
| gohm'c |
<p>In Kubernetes you can set volume permissions at the pod, pv or pvc level.
You can define a pv / pvc as read only but you can still write to the mount point if the <strong>readOnly</strong> attribute is set to <strong>false</strong>, which is pretty confusing.
I have read a lot of articles about this but still can't fully understand the purpose.</p>
<p>What I inderstand:</p>
<ul>
<li>Permissions at pv level are for requesting available resources from the host file system with at least the same permissions defined in pv.</li>
<li>Permissions at pvc level are for requesting pv with at least the same permissions defined in pvc.</li>
<li>Permissions at pod level are for setting permissions to the mount point.</li>
</ul>
<p>Please correct me if I'm wrong</p>
| Amine Bouzid | <p>The <code>PV</code>'s (and <code>PVC</code>'s) access modes are used only for binding <code>PVC</code> (<code>PV</code>).</p>
<p>As you can see in this <a href="https://github.com/kubernetes/kubernetes/issues/60903#issuecomment-377086770" rel="nofollow noreferrer">github discussion</a>:</p>
<blockquote>
<p>AccessModes as defined today, only describe node attach (not pod mount) semantics, and doesn't enforce anything.</p>
</blockquote>
<p>Additionally you can find useful information in the <a href="https://docs.openshift.com/dedicated/4/storage/understanding-persistent-storage.html#pv-access-modes_understanding-persistent-storage" rel="nofollow noreferrer">PV AccessModes documentation</a>:</p>
<blockquote>
<p>A volumeβs AccessModes are descriptors of the volumeβs capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource.
For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volumeβs ROX capability.</p>
</blockquote>
<p>To enforce <code>readOnly</code> mode you can use:</p>
<p><code>Pod.spec.volumes.persistentVolumeClaim.readOnly</code> - controls if volume is in readonly mode.</p>
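<p>For example, a minimal sketch of enforcing it at the <code>Pod</code> level (names are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
      readOnly: true
</code></pre>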
<p>I think this <a href="https://stackoverflow.com/a/52208276/14801225">answer</a> may be of great help to you.</p>
| matt_j |
<p>I am taking a helm chart class and the 1st lab creates a pod, service and ingress. I am relatively new to k8s and I am running on minikube. The pod and service get created without issue; however the ingress.yaml file gives the following error:</p>
<p><strong>unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1</strong></p>
<p>I am guessing something is obsolete in the ingress.yaml file but have no idea how to fix it. here's the class repo:</p>
<pre><code>https://github.com/phcollignon/helm3
</code></pre>
<p>here's the pod frontend.yaml :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- image: phico/frontend:1.0
imagePullPolicy: Always
name: frontend
ports:
- name: frontend
containerPort: 4200
</code></pre>
<p>here's the frontend_service.yaml :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: frontend
name: frontend
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 4200
selector:
app: frontend
</code></pre>
<p>Here's the problem file ingress.yaml :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: guestbook-ingress
spec:
rules:
- host: frontend.minikube.local
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
- host: backend.minikube.local
http:
paths:
- path: /
backend:
serviceName: backend
servicePort: 80%
</code></pre>
<p>Here's minikube version (kubectrl version) :</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Any help is very much appreciated.</p>
<p>I changed the ingress.yaml file to use
<code>apiVersion: networking.k8s.io/v1:</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: guestbook-ingress
spec:
rules:
- host: frontend.minikube.local
http:
paths:
- path: /
backend:
service:
name: frontend
port:
number: 80
- host: backend.minikube.local
paths:
- path: /
pathType: Prefix
backend:
service:
name: backend
port:
number: 80
</code></pre>
<p>now I am getting an error:</p>
<p><strong>error: error parsing ingress.yaml: error converting YAML to JSON: yaml: line 17: mapping values are not allowed in this context</strong></p>
<p>line 17 is the second "paths:" line.</p>
<p>Again, any help appreciated.</p>
| user15223679 | <p>The Ingress spec <code>apiVersion: extensions/v1beta1</code> has been deprecated and was removed in Kubernetes 1.22, which is why your 1.23 server no longer recognizes it. You can update it to <code>apiVersion: networking.k8s.io/v1</code>.</p>
<p>Second question:</p>
<pre><code>kind: Ingress
metadata:
name: guestbook-ingress
spec:
rules:
- host: frontend.minikube.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
- host: backend.minikube.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: backend
port:
number: 80
</code></pre>
| gohm'c |
<p>Today my colleague asked me a question, I don't know how to answer, he explained that "service-cluster-ip-range=10.96.0.0/12" has been set, but why are there still different cluster ips in the k8s cluster.</p>
<p>Who can help me answer this question ?</p>
<p>thanks</p>
<p><a href="https://i.stack.imgur.com/TmKHE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TmKHE.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/o4PEZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o4PEZ.png" alt="enter image description here" /></a></p>
| yanzhiluo | <p><code>...he explained that "service-cluster-ip-range=10.96.0.0/12" has been set</code></p>
<p>The CIDR <code>10.96.0.0/12</code> covers the range from <code>10.96.0.1</code> to <code>10.111.255.254</code>, i.e. everything from 10.96.x.x through 10.111.x.x. There is nothing wrong with the IPs in the screenshot.</p>
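<p>You can verify that range yourself, for example with Python's <code>ipaddress</code> module:</p>
<pre><code>>>> import ipaddress
>>> net = ipaddress.ip_network("10.96.0.0/12")
>>> net[1], net[-2]
(IPv4Address('10.96.0.1'), IPv4Address('10.111.255.254'))
</code></pre>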
| gohm'c |
<p>I deploy my application to an AWS EKS cluster with 3 nodes. When I run describe, it shows me this message: <code>(Total limits may be over 100 percent, i.e., overcommitted.)</code>. But based on the full output, it doesn't look like many resources are being used. Why do I see this message in the output?</p>
<pre><code>$ kubectl describe node ip-192-168-54-184.ap-southeast-2.compute.internal
Name: ip-192-168-54-184.ap-southeast-2.compute.internal
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t3.medium
beta.kubernetes.io/os=linux
eks.amazonaws.com/capacityType=ON_DEMAND
eks.amazonaws.com/nodegroup=scalegroup
eks.amazonaws.com/nodegroup-image=ami-0ecaff41b4f38a650
failure-domain.beta.kubernetes.io/region=ap-southeast-2
failure-domain.beta.kubernetes.io/zone=ap-southeast-2b
kubernetes.io/arch=amd64
kubernetes.io/hostname=ip-192-168-54-184.ap-southeast-2.compute.internal
kubernetes.io/os=linux
node.kubernetes.io/instance-type=t3.medium
topology.kubernetes.io/region=ap-southeast-2
topology.kubernetes.io/zone=ap-southeast-2b
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 04 Mar 2021 22:27:50 +1100
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ip-192-168-54-184.ap-southeast-2.compute.internal
AcquireTime: <unset>
RenewTime: Fri, 05 Mar 2021 09:13:16 +1100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 05 Mar 2021 09:11:33 +1100 Thu, 04 Mar 2021 22:27:50 +1100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 05 Mar 2021 09:11:33 +1100 Thu, 04 Mar 2021 22:27:50 +1100 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 05 Mar 2021 09:11:33 +1100 Thu, 04 Mar 2021 22:27:50 +1100 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 05 Mar 2021 09:11:33 +1100 Thu, 04 Mar 2021 22:28:10 +1100 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.54.184
ExternalIP: 13.211.200.109
Hostname: ip-192-168-54-184.ap-southeast-2.compute.internal
InternalDNS: ip-192-168-54-184.ap-southeast-2.compute.internal
ExternalDNS: ec2-13-211-200-109.ap-southeast-2.compute.amazonaws.com
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 2
ephemeral-storage: 20959212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3970504Ki
pods: 17
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 1930m
ephemeral-storage: 18242267924
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3415496Ki
pods: 17
System Info:
Machine ID: ec246b12e91dc516024822fbcdac4408
System UUID: ec246b12-e91d-c516-0248-22fbcdac4408
Boot ID: 5c6a3d95-c82c-4051-bc90-6e732b0b5be2
Kernel Version: 5.4.91-41.139.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.6
Kubelet Version: v1.19.6-eks-49a6c0
Kube-Proxy Version: v1.19.6-eks-49a6c0
ProviderID: aws:///ap-southeast-2b/i-03c0417efb85b8e6c
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
cert-manager cert-manager-cainjector-9747d56-qwhjw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10h
kube-system aws-node-m296t 10m (0%) 0 (0%) 0 (0%) 0 (0%) 10h
kube-system coredns-67997b9dbd-cgjdj 100m (5%) 0 (0%) 70Mi (2%) 170Mi (5%) 10h
kube-system kube-proxy-dc5fh 100m (5%) 0 (0%) 0 (0%) 0 (0%) 10h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 210m (10%) 0 (0%)
memory 70Mi (2%) 170Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events: <none>
</code></pre>
| Joey Yi Zhao | <p>Let's quickly analyze the <a href="https://github.com/kubernetes/kubectl/blob/master/pkg/describe/describe.go" rel="nofollow noreferrer">source code</a> of the <code>kubectl describe</code> command, in particular the <a href="https://github.com/kubernetes/kubectl/blob/36e660864e725d0b37dd382588d47ced1cfc4c25/pkg/describe/describe.go#L3795" rel="nofollow noreferrer">describeNodeResource</a> function.</p>
<p>Inside the <code>describeNodeResource(...)</code> function we see ( <a href="https://github.com/kubernetes/kubectl/blob/36e660864e725d0b37dd382588d47ced1cfc4c25/pkg/describe/describe.go#L3816" rel="nofollow noreferrer">this line</a> ):</p>
<pre><code>w.Write(LEVEL_0, "Allocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n")
</code></pre>
<p>There is no condition to check when this message should be printed, it is just an informational message that is printed every time.</p>
| matt_j |
<p>I'm trying to install the SAP HANA Express docker image on a Kubernetes node in Google Cloud Platform as per the guide <a href="https://developers.sap.com/tutorials/hxe-k8s-advanced-analytics.html#7f5c99da-d511-479b-8745-caebfe996164" rel="nofollow noreferrer">https://developers.sap.com/tutorials/hxe-k8s-advanced-analytics.html#7f5c99da-d511-479b-8745-caebfe996164</a>; however, during execution of step 7 "Deploy your containers and connect to them" I'm not getting the expected result.</p>
<p>I'm executing the command <code>kubectl create -f hxe.yaml</code> and here is the yaml file I'm using:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
creationTimestamp: 2018-01-18T19:14:38Z
name: hxe-pass
data:
password.json: |+
{"master_password" : "HXEHana1"}
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: persistent-vol-hxe
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 150Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data/hxe_pv"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: hxe-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hxe
labels:
name: hxe
spec:
selector:
matchLabels:
run: hxe
app: hxe
role: master
tier: backend
replicas: 1
template:
metadata:
labels:
run: hxe
app: hxe
role: master
tier: backend
spec:
initContainers:
- name: install
image: busybox
command: [ 'sh', '-c', 'chown 12000:79 /hana/mounts' ]
volumeMounts:
- name: hxe-data
mountPath: /hana/mounts
volumes:
- name: hxe-data
persistentVolumeClaim:
claimName: hxe-pvc
- name: hxe-config
configMap:
name: hxe-pass
imagePullSecrets:
- name: docker-secret
containers:
- name: hxe-container
image: "store/saplabs/hanaexpress:2.00.045.00.20200121.1"
ports:
- containerPort: 39013
name: port1
- containerPort: 39015
name: port2
- containerPort: 39017
name: port3
- containerPort: 8090
name: port4
- containerPort: 39041
name: port5
- containerPort: 59013
name: port6
args: [ "--agree-to-sap-license", "--dont-check-system", "--passwords-url", "file:///hana/hxeconfig/password.json" ]
volumeMounts:
- name: hxe-data
mountPath: /hana/mounts
- name: hxe-config
mountPath: /hana/hxeconfig
- name: sqlpad-container
image: "sqlpad/sqlpad"
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: hxe-connect
labels:
app: hxe
spec:
type: LoadBalancer
ports:
- port: 39013
targetPort: 39013
name: port1
- port: 39015
targetPort: 39015
name: port2
- port: 39017
targetPort: 39017
name: port3
- port: 39041
targetPort: 39041
name: port5
selector:
app: hxe
---
apiVersion: v1
kind: Service
metadata:
name: sqlpad
labels:
app: hxe
spec:
type: LoadBalancer
ports:
- port: 3000
targetPort: 3000
protocol: TCP
name: sqlpad
selector:
app: hxe
</code></pre>
<p>I'm also using the last version of HANA Express Edition docker image: <code>store/saplabs/hanaexpress:2.00.045.00.20200121.1</code> that you can see available here: <a href="https://hub.docker.com/_/sap-hana-express-edition/plans/f2dc436a-d851-4c22-a2ba-9de07db7a9ac?tab=instructions" rel="nofollow noreferrer">https://hub.docker.com/_/sap-hana-express-edition/plans/f2dc436a-d851-4c22-a2ba-9de07db7a9ac?tab=instructions</a></p>
<p>The error I'm getting is the following:</p>
<p><a href="https://i.stack.imgur.com/kAx3k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kAx3k.png" alt="Error: failed to start container "install": Error response from daemon: error while creating mount source path '/data/hxe_pv': mkdir /data: read-only file system" /></a></p>
<p>Any thought on what could be wrong?</p>
<p>Best regards and happy new year for everybody.</p>
| andres_chacon | <p>Thanks to Mahboob's suggestion I can now start the pods (partially) and the issue is no longer popping up at the "busybox" container starting stage. The problem was that I was using a Container-Optimized image for the node pool and the required one is Ubuntu. If you are facing a similar issue, double-check the image flavor you choose when creating the node pool.</p>
<p>However, I now have a different issue: the pods are starting (both the hanaxs one and the one for sqlpad), but one of them, the sqlpad container, crashes at some point after starting and the pod gets stuck in the CrashLoopBackOff state. As you can see in the picture below, the pods are in CrashLoopBackOff state with only 1/2 started, and then suddenly both are running.</p>
<p><a href="https://i.stack.imgur.com/aTq3O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aTq3O.png" alt="kubectl get pods command" /></a></p>
<p>I'm not hitting the right spot to solve this problem since I'm a newcomer to the kubernetes and docker world. Hope some of you can bring some light to me.</p>
<p>Best regards.</p>
| andres_chacon |
<p>I am trying to use workload identity for my kubernetes cluster. I have created the service account on a new namespace. My issue is that I am not able to specify the name space when I am trying to add the service account name on the pod deployment YML.</p>
<p>Following is my pod spect file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-scheduler
spec:
replicas: 1
selector:
matchLabels:
app: test-scheduler
template:
metadata:
labels:
app: test-scheduler
spec:
serviceAccountName: test-na/test-k8-sa
nodeSelector:
iam.gke.io/gke-metadata-server-enabled: "true"
containers:
- name: test-scheduler
image: gcr.io/PROJECT_ID/IMAGE:TAG
ports:
- name: scheduler-port
containerPort: 8002
protocol: TCP
env:
- name: NAMESPACE
value: test-scheduler
- name: CONTAINER_NAME
value: test-scheduler
---
apiVersion: v1
kind: Service
metadata:
name: test-scheduler
spec:
selector:
app: test-scheduler
ports:
- port: 8002
protocol: TCP
targetPort: scheduler-port
</code></pre>
<p>When I deploy this code using github actions I get this error:</p>
<pre><code>The Deployment "test-scheduler" is invalid: spec.template.spec.serviceAccountName: Invalid value: "test-na/test-k8-sa": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.',
</code></pre>
<p>When I remove the namespace in a file like this:</p>
<pre><code>serviceAccountName: test-k8-sa
</code></pre>
<p>It searches for the service account on default name space and fails.</p>
<p>My question here is what is the right way to specify the custom namespace with the service account in kubernetes?</p>
<p>I can start using the default but I am inclined to keep the namespace. I saw some reference to service account file but I don't really understand how to use them.</p>
<p>By the way, I am using this guide <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#gcloud_3" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#gcloud_3</a></p>
| shobhit | <p><code>...I have created the service account on a new namespace. My issue is that I am not able to specify the name space when I am trying to add the service account name on the pod deployment YML.</code></p>
<p>To assign the created service account to your deployment, you can create the deployment in the same namespace as the service account:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-scheduler
namespace: test-na # <-- add this line with the namespace where the service account resides
spec:
...
template:
...
spec:
serviceAccountName: test-k8-sa
...
</code></pre>
| gohm'c |
<p>I have a K3s (v1.20.4+k3s1) cluster with 3 nodes, each with two interfaces. The default interface has a public IP, the second one a 10.190.1.0 address. I installed K3s with and without the --flannel-backend=none option and then deployed flannel via "kubectl apply -f <a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="noreferrer">https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</a>", previously binding the kube-flannel container to the internal interface via the "--iface=" arg. In this setup the kube-flannel pods get the node-ip of the internal interface, but I can't reach the pods on the other nodes via ICMP. If I deploy flannel without the --iface arg, the kube-flannel pods get an address from the 10.42.0.0 network. Then I can reach the pods of the other hosts, but the traffic will be routed through the public interfaces, which I want to avoid. Does anyone have a tip for me?</p>
| oss648 | <p>The problem was resolved in the comments section but for better visibility I decided to provide an answer.</p>
<p>As we can see in the <a href="https://rancher.com/docs/k3s/latest/en/installation/network-options/#:%7E:text=By%20default%2C%20K3s%20will%20run,VXLAN%20as%20the%20default%20backend." rel="noreferrer">K3s documentation</a>, K3s uses flannel as the CNI by default:</p>
<blockquote>
<p>By default, K3s will run with flannel as the CNI, using VXLAN as the default backend. To change the CNI, refer to the section on configuring a custom CNI.</p>
</blockquote>
<p>By default, flannel selects the first interface on a host (look at the <a href="https://github.com/flannel-io/flannel/blob/master/Documentation/troubleshooting.md#vagrant" rel="noreferrer">flannel documentation</a>), but we can override this behavior with the <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#agent-networking" rel="noreferrer">--flannel-iface</a> flag.<br />
Additionally we can explicitly set IP address to advertise for node using the <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#agent-networking" rel="noreferrer">--node-ip</a> flag.</p>
<hr />
<p>I've created a simple example to illustrate how it works.</p>
<p>On my host machine I have two network interfaces (<code>ens4</code> and <code>ens5</code>):</p>
<pre><code>kmaster:~# ip a s | grep -i "UP\|inet"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
inet 10.156.15.197/32 brd 10.156.15.197 scope global dynamic ens4
3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.0.2/32 brd 192.168.0.2 scope global dynamic ens5
</code></pre>
<p>Without setting the <code>--flannel-iface</code> and <code>--node-ip</code> flags, flannel will select the first interface (<code>ens4: 10.156.15.197</code>):</p>
<pre><code>kmaster:~# curl -sfL https://get.k3s.io | sh -
[INFO] Finding release for channel stable
[INFO] Using v1.20.4+k3s1 as release
...
[INFO] systemd: Starting k3s
kmaster:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP
kmaster Ready control-plane,master 97s v1.20.4+k3s1 10.156.15.197
</code></pre>
<p>But as I mentioned before we are able to override default flannel interface with the <code>--flannel-iface</code> flag:</p>
<pre><code>kmaster:~# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=192.168.0.2 --flannel-iface=ens5" sh -
[INFO] Finding release for channel stable
[INFO] Using v1.20.4+k3s1 as release
...
[INFO] systemd: Starting k3s
kmaster:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP
kmaster Ready control-plane,master 64s v1.20.4+k3s1 192.168.0.2
</code></pre>
| matt_j |
<p>I am working with logs in my system.<br />
I want to use a log sidecar to collect the business container's logs.</p>
<p>My business container writes its logs to <strong>STDOUT</strong>.</p>
<p>So I want to redirect this STDOUT to a file on the pod's volume, because all containers in a pod can share the same volume, so my sidecar can collect the logs from that volume.</p>
<p>How should I configure this?<br />
I mean, maybe I should write some configuration in my k8s yaml so k8s will automatically redirect the container's <strong>STDOUT</strong> to the pod's volume?</p>
| elon_musk | <p>Appending <code>> /<your_path_to_volume_inside_pod>/file.log 2>&1</code> to your <code>command</code> would redirect <code>STDOUT</code> and <code>STDERR</code> to a file (note the order: with <code>2>&1</code> placed before the <code>></code> redirection, STDERR would still go to the container's original stdout instead of the file).</p>
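<p>A minimal sketch of how this can be wired up with a shared <code>emptyDir</code> volume (the image names, paths and start command below are placeholders for your own):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}
  containers:
    - name: business
      image: my-app:latest
      command: ["sh", "-c", "./start-app > /var/log/app/app.log 2>&1"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
    - name: log-sidecar
      image: busybox
      command: ["sh", "-c", "tail -n+1 -f /var/log/app/app.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
</code></pre>
<p>Keep in mind that redirecting the output this way means the logs no longer show up in <code>kubectl logs</code> for the business container; if you want both, pipe through <code>tee</code> instead of using <code>></code>.</p>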
| SCcagg5 |
<p>I created the statefulset below on microk8s:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql13
spec:
selector:
matchLabels:
app: postgresql13
serviceName: postgresql13
replicas: 1
template:
metadata:
labels:
app: postgresql13
spec:
containers:
- name: postgresql13
image: postgres:13
imagePullPolicy: Always
ports:
- containerPort: 5432
name: sql-tcp
volumeMounts:
- name: postgresql13
mountPath: /data
env:
- name: POSTGRES_PASSWORD
value: testpassword
- name: PGDATA
value: /data/pgdata
volumeClaimTemplates:
- metadata:
name: postgresql13
spec:
storageClassName: "microk8s-hostpath"
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Ki
</code></pre>
<p>In the <code>volumeClaimTemplates</code> I gave it only 1Ki (this is one KB, right?)
But the DB started normally, and when I run <code>kubectl exec postgresql13-0 -- df -h</code> on the pod I get this</p>
<pre><code>Filesystem Size Used Avail Use% Mounted on
overlay 73G 11G 59G 15% /
tmpfs 64M 0 64M 0% /dev
/dev/mapper/ubuntu--vg-ubuntu--lv 73G 11G 59G 15% /data
shm 64M 16K 64M 1% /dev/shm
tmpfs 7.7G 12K 7.7G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
</code></pre>
<p>Isn't it supposed to not use more than what the PVC has?
I intentionally set the storage class <code>AllowVolumeExpansion: False</code>.</p>
<p>what am i missing here ?</p>
| frisky5 | <p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#allow-volume-expansion" rel="nofollow noreferrer">allowVolumeExpansion</a> and the requested storage size do not apply to <code>hostPath</code> volumes. The actual available size will be the size of the host filesystem where the host path resides.</p>
| gohm'c |
<h2>UPDATE</h2>
<p>Regarding space, I realized I can mount the root path '/' into the container and use this piece of code to get my stats:</p>
<pre><code>import shutil
total, used, free = shutil.disk_usage("/")
print("Total: %d GiB" % (total // (2**30)))
print("Used: %d GiB" % (used // (2**30)))
print("Free: %d GiB" % (free // (2**30)))
</code></pre>
<p>Still Looking for a way to do it through Kubernetes itself, though.</p>
<h2>Original Question</h2>
<p>I'm building a service that part of its function is to monitor system resources for my kubernetes cluster (not specific pods - the entire machine the node runs on)
And I realized that <code>kubectl top node</code> is a good way to get that information (excluding storage).</p>
<p>Is there a way to get this information using the kubernetes python package?</p>
<p>I've tried to find a solution in the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md" rel="nofollow noreferrer">package documentation</a> and realized that action is not there. (maybe under a different name? I couldn't find it)<br />
Also found <a href="https://github.com/kubernetes-client/python/issues/435" rel="nofollow noreferrer">this issue</a> on Github which partially solves my problem but I'm still looking for something simpler.</p>
<p><em><strong>My question: How can I check for system resources like Memory, storage, and CPU usage for my kubernetes node through the Kubernetes Python package?</strong></em></p>
| Oren_C | <p>Resource usage metrics, such as pod or node CPU and memory usage, are available in Kubernetes through the Metrics API.<br />
You can access the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md" rel="noreferrer">Metrics API</a> using <code>kubectl get --raw</code> command, e.g.:</p>
<pre><code>kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
</code></pre>
<p>More examples can be found in the <a href="https://github.com/feiskyer/kubernetes-handbook/blob/master/en/addons/metrics.md#metrics-api" rel="noreferrer">Metrics API documentation</a>.</p>
<hr />
<p>Using <a href="https://github.com/kubernetes-client/python/#kubernetes-python-client" rel="noreferrer">Kubernetes Python Client</a> you should be able to check memory and cpu usage of Kubernetes nodes, just like with the <code>kubectl top nodes</code> command:<br />
<strong>NOTE:</strong> Memory is in <code>kilobytes</code> but can easily be converted to <code>megabytes</code>.</p>
<pre><code>#!/usr/bin/python
# Script name: usage_info.py
from kubernetes import client, config
config.load_kube_config()
api = client.CustomObjectsApi()
k8s_nodes = api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")
for stats in k8s_nodes['items']:
print("Node Name: %s\tCPU: %s\tMemory: %s" % (stats['metadata']['name'], stats['usage']['cpu'], stats['usage']['memory']))
</code></pre>
<p>And sample output:</p>
<pre><code>$ ./usage_info.py
Node Name: node01 CPU: 82845225n Memory: 707408Ki
Node Name: node02 CPU: 99717207n Memory: 613892Ki
Node Name: node03 CPU: 74841362n Memory: 625316Ki
</code></pre>
<p>In terms of storage usage, I think it should be checked in a different way as this information isn't available at the <code>/apis/metrics.k8s.io/v1beta1/nodes</code> endpoint.</p>
| matt_j |
<p>I have a <code>data-config.json</code> that is used by my ASP.NET Core application.</p>
<p>The app was built into an image and the goal is to create a Kubernetes environment (using Minikube and a myapp.yaml to create and deploy the Minikube and the pods) and copy the <code>data-config.json</code> from a specific place on my local machine to a directory in the Node (the place I want is <code>~/app/publish/data-config.json</code>, in other words, in the root directory of the node).</p>
<p>I read a lot of the documentation and I think a ConfigMap can be useful in this case. I already implemented a Volume too. But I don't think writing the json content inside the ConfigMap configuration is the best way to do that; I want to depend only on the <code>data-config.json</code> file and the YAML.</p>
<p>In the <code>docker-compose.yml</code> file, to test in Docker Desktop, it works and the code is showed above:</p>
<pre><code> dataService:
image: webapp
build:
context: ../..
dockerfile: webapp
container_name: webapp-container
ports:
- "9000:8080"
volumes:
- "../data-service/data-config.json:/app/publish/data-config.json"
</code></pre>
<p>And it worked. Now I need to translate or find a way to copy this file and save it in the <code>/app/publish/</code> directory of my node.</p>
| Hugo Mata | <p>I solved this question by creating a ConfigMap that maps the <code>data-config.json</code> from my local machine directory to the container pod. The example shows the implementation of the YAML file used by Minikube to create and start the cluster:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: data-deployment
labels:
app: data-app
spec:
replicas: 1
selector:
matchLabels:
app: data-app
template:
metadata:
labels:
app: data-app
spec:
imagePullSecrets:
- name: regcred
containers:
- name: data-app
image: validation-image:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
env:
- name: DataSettings_ConfigPath
value: /app/publish/data-config.json
volumeMounts:
- name: data-config-dir
mountPath: /app/publish/data-config.json
subPath: data-config.json
restartPolicy: Always
hostname: data-service
volumes:
- name: data-config-dir
configMap:
name: data-configmap
items:
- key: data-config.json
path: data-config.json
</code></pre>
<p>PS: you must run the command below in terminal to create the ConfigMap for data-config.json file:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl create configmap data-configmap --from-file ../data-service/data-config.json
</code></pre>
| Hugo Mata |
<p>I need to get an ALB name or id to attach WAF rules to it.
The ALB is created by Kubernetes and not used anywhere in Terraform.
The official data resource only supports name and arn, with no filtering.</p>
<pre><code>data "aws_lb" "test" {
name = ...
arn = ...
}
</code></pre>
<p>Is there a way to get ALB somehow or attach WAF rules to it?</p>
| Adam | <p>I'm currently facing the same issue, the name of the ALB doesn't appear to be something that you can set whilst you're deploying the Helm chart and there doesn't appear to be a way of getting the name once the chart has been deployed.</p>
<p>The only workaround I can think of is to describe the ingress resource and then do a trim of some sort on the ingress address using Terraform (ignoring everything after the 4th dash).</p>
<p>It's not a great workaround but is the only one that I've come up with to get this all working through Terraform. Do let me know if you find a better solution for this.</p>
<p>EDIT: It appears that there is already an open issue for this on GitHub: <a href="https://github.com/hashicorp/terraform-provider-aws/issues/12265" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-aws/issues/12265</a> There is a solution posted a bit further down in the thread which is similar to what I had originally suggested - using regex to get the name of the load balancer from the ingress resource.</p>
| Syngate |
<p>I have created a persistent volume claim where I will store some ml model weights as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: models-storage
spec:
accessModes:
- ReadWriteOnce
storageClassName: model-storage-bucket
resources:
requests:
storage: 8Gi
</code></pre>
<p>However, this configuration will provision a disk on Compute Engine, and it is a bit cumbersome to copy stuff there and to upload/update any data. It would be so much more convenient if I could create a <code>PersistentVolume</code> abstracting a Google Cloud Storage bucket. However, I couldn't find a way to do this anywhere, including the Google documentation. I am baffled because I would expect this to be a very common use case. Does anyone know how I can do that?
I was expecting to find something along the lines of</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: test-volume
spec:
storageBucketPersistentDisk:
pdName: gs://my-gs-bucket
</code></pre>
| DarioB | <p>To mount a cloud storage bucket you need to install the <a href="https://ofek.dev/csi-gcs/getting_started/" rel="nofollow noreferrer">Google Cloud Storage driver</a> (<strong>NOT</strong> the persistent disk or Filestore drivers) on your cluster, create the StorageClass and then provision the bucket-backed storage either dynamically or statically, just as you would with the persistent disk or Filestore CSI drivers. Check out the link for detailed steps.</p>
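<p>As a rough sketch only of the static variant (the driver name <code>gcs.csi.ofek.dev</code> and the exact fields/secrets are assumptions here; take them from the csi-gcs documentation linked above), it ends up looking something like this rather than an imaginary <code>storageBucketPersistentDisk</code> field:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcs-bucket-pv
spec:
  capacity:
    storage: 8Gi               # informational only for object storage
  accessModes:
    - ReadWriteMany
  csi:
    driver: gcs.csi.ofek.dev   # assumption: csi-gcs driver name
    volumeHandle: my-gs-bucket # assumption: the bucket name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gcs-bucket-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: gcs-bucket-pv
  resources:
    requests:
      storage: 8Gi
</code></pre>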
| gohm'c |
<p>Having an issue with a Kubernetes cronjob: when it runs, it says that /usr/share/nginx/html is not a directory ("no such file or directory"), yet it definitely is; it's baked into the image, and if I load the image up straight in Docker the folder is definitely there.</p>
<p>Here is the yaml:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: php-cron
spec:
jobTemplate:
spec:
backoffLimit: 2
activeDeadlineSeconds: 1800
completions: 2
parallelism: 2
template:
spec:
containers:
- name: php-cron-video
image: my-image-here
command:
- "cd /usr/share/nginx/html"
- "php bin/console processvideo"
volumeMounts:
- mountPath: /usr/share/nginx/html/uploads
name: uploads-volume
- mountPath: /usr/share/nginx/html/private_uploads
name: private-uploads-volume
restartPolicy: Never
volumes:
- name: uploads-volume
hostPath:
path: /data/website/uploads
type: DirectoryOrCreate
- name: private-uploads-volume
hostPath:
path: /data/website/private_uploads
type: DirectoryOrCreate
schedule: "* * * * *"
docker run -it --rm my-image-here bash
</code></pre>
<p>Loads up straight into the /usr/share/nginx/html folder</p>
<p>What's going on here? The same image works fine as well as a normal deployment.</p>
| noname | <p>Assuming your image truly has <code>/usr/share/nginx/html/bin</code> baked in, try changing the <code>command</code> to:</p>
<pre><code>...
command: ["sh","-c","cd /usr/share/nginx/html && php bin/console processvideo"]
...
</code></pre>
| gohm'c |
<p>I am encountering problems when using <code>nodeSelector</code> in my Kubernetes manifest. I have a nodegroup in EKS with the label <code>eks.amazonaws.com/nodegroup=dev-nodegroup</code>. This node has a name with the corresponding ip, as usual in AWS. If I set the <code>nodeName</code> in the manifest, everything works and the pod is deployed in the corresponding node but when I do:</p>
<pre><code>nodeSelector:
eks.amazonaws.com/nodegroup: dev-nodegroup
</code></pre>
<p>in my manifest, at the same indentation level as the <code>containers</code>, I get a <code>FailedScheduling</code> event:</p>
<pre><code> Warning FailedScheduling 3m31s (x649 over 11h) default-scheduler 0/1 nodes are available: 1 node(s) had no available disk.
</code></pre>
<p>Am I doing something wrong? I would also like to add the <code>zone</code> label to the node selector but it yields the same problem.</p>
<p>What does 'had no available disk' mean? I have checked my node with <code>df -h</code> and there is enough free disk space. I have seen other questions where the output is that the node is unreachable or has some taint; mine doesn't have any.</p>
<p>Any help is greatly appreciated.</p>
<p><strong>EDIT</strong></p>
<p>I have a volume mounted in the pod like this:</p>
<pre><code>volumes:
- name: <VOLUME_NAME>
awsElasticBlockStore:
volumeID: <EBS_ID>
fsType: ext4
</code></pre>
<p>Since EBS are deployed only in one <code>zone</code> I would need to set the <code>zone</code> selector as well.</p>
<p>Also I have this storageClass (just noticed it):</p>
<pre><code>Name: gp2
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
</code></pre>
<p><strong>EDIT2</strong></p>
<p>My cluster has only one nodegroup with one node, in case this helps, too.</p>
| Gonzalo Donoso | <p><code>Yes, otherwise it would not deploy the pod when I set the nodeName instead</code></p>
<p>An EBS volume can only be mounted on a node once. The second time you run a pod trying to mount the <strong>same volume on the same node</strong> you will get this error. In your case, you should delete the pod that currently has the volume mounted, since you only have 1 node (given your error: <code>default-scheduler 0/1 nodes are available: 1 node(s) had no available disk.</code>), before you run another pod that mounts the same volume again.</p>
| gohm'c |
<p>I am fairly new to the system. Fortunately Google Cloud Platform has a one-click setup for master-slave mariadb replication, which I opted for (<a href="https://console.cloud.google.com/marketplace/details/google/mariadb?q=mariadb" rel="nofollow noreferrer">https://console.cloud.google.com/marketplace/details/google/mariadb?q=mariadb</a>).<br> After that I could log in as root and create a user. I ran the following sql command.</p>
<pre><code>create user 'user'@'localhost' identified by 'password';
</code></pre>
<p>After successfully creating the user, if I am inside either the master or slave pod I can log in to the local instance, but I cannot log in from outside those pods or from one pod to another pod in the mariadb instances. For logging in I used the following command</p>
<pre><code>mysql -u user -h mariadb-1-mariadb.mariadb.svc.cluster.local -p
</code></pre>
<p>where mariadb-1-mariadb = service and mariadb=namespace<br><br>
I keep on getting error.<br>
Error 1045(2800): Access denied for the user <br>
Any help is greatly appreciated. I think you might want to read the yaml file on gcp if you are to think about port blocking but I think it shouldn't be that issue.</p>
| bedrockelectro | <p>I had an issue when creating the user. When I tried to log in as that user, it somehow picked up the container IP I was trying to log in from. All I needed to do was make the user not localhost-only but global with '%', and that solved the issue.</p>
<pre><code>create user 'user'@'%' identified by 'password';
</code></pre>
| bedrockelectro |
<p>I have a simple Service that connects to a port from a container inside a pod.<br />
All pretty straight forward.</p>
<p>This was working too, but out of nowhere the endpoint is no longer created for port 18080.<br />
So I began to investigate and looked at <a href="https://stackoverflow.com/questions/63556471/kubernetes-apply-service-but-endpoints-is-none">this question</a> but nothing that helped there.<br />
The container is up, no errors/events, all green.<br />
I can also call the request with the pods ip:18080 from an internal container, so the endpoint should be reachable for the service.</p>
<p>I can't see errors in:</p>
<pre><code>journalctl -u snap.microk8s.daemon-*
</code></pre>
<p>I am using microk8s v1.20.</p>
<p>Where else can I debug this situation?<br />
I am out of tools.</p>
<p>Service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: aedi-service
spec:
selector:
app: server
ports:
- name: aedi-host-ws #-port
port: 51056
protocol: TCP
targetPort: host-ws-port
- name: aedi-http
port: 18080
protocol: TCP
targetPort: fcs-http
</code></pre>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
labels:
app: server
spec:
replicas: 1
selector:
matchLabels:
app: server
template:
metadata:
labels:
app: server
srv: os-port-mapping
name: dns-service
spec:
hostname: fcs
containers:
- name: fcs
image: '{{$fcsImage}}'
imagePullPolicy: {{$pullPolicy}}
ports:
- containerPort: 18080
</code></pre>
<p>Service Description:</p>
<pre><code>Name: aedi-service
Namespace: fcs-only
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: fcs-only
meta.helm.sh/release-namespace: fcs-only
Selector: app=server
Type: ClusterIP
IP Families: <none>
IP: 10.152.183.247
IPs: 10.152.183.247
Port: aedi-host-ws 51056/TCP
TargetPort: host-ws-port/TCP
Endpoints: 10.1.116.70:51056
Port: aedi-http 18080/TCP
TargetPort: fcs-http/TCP
Endpoints:
Session Affinity: None
Events: <none>
</code></pre>
<p>Pod Info:</p>
<pre><code>NAME READY STATUS RESTARTS AGE LABELS
server-deployment-76b5789754-q48xl 6/6 Running 0 23m app=server,name=dns-service,pod-template-hash=76b5789754,srv=os-port-mapping
</code></pre>
<p>kubectl get svc aedi-service -o wide:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
aedi-service ClusterIP 10.152.183.247 <none> 443/TCP,1884/TCP,51052/TCP,51051/TCP,51053/TCP,51056/TCP,18080/TCP,51055/TCP 34m app=server
</code></pre>
| madmax | <p>Your service spec refers to a port named "fcs-http" but it was not declared in the deployment. Try:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
...
ports:
- containerPort: 18080
name: fcs-http # <-- add the name here
...
</code></pre>
| gohm'c |
<p>We have an app that uses UDP broadcast messages to form a "cluster" of all instances running in the same subnet.</p>
<p>We can successfully run this app in our (pretty std) local K8s installation by using <code>hostNetwork:true</code> for pods. This works because all K8s nodes are in the same subnet and broadcasting is possible. (a minor note: the K8s setup uses flannel networking plugin)</p>
<p>Now we want to move this app to the managed K8s service @ AWS. But our initial attempts have failed. The 2 daemons running in 2 different pods didn't see each other. We thought that was most likely due to the auto-generated EC2 worker node instances for the AWS K8s service residing on different subnets. Then we created 2 completely new EC2 instances in the same subnet (and the same availability-zone) and tried running the app directly on them (not as part of K8s), but that also failed. They could not communicate via broadcast messages even though the 2 EC2 instances were on the same subnet/availability-zone.</p>
<p>Hence, the following questions:</p>
<ul>
<li><p>Our preliminary search shows that AWS EC2 does probably not support broadcasting/multicasting, but still wanted to ask if there is a way to enable it? (on AWS or other cloud provider)?</p>
</li>
<li><p>We had used <code>hostNetwork:true</code> because we thought it would be much harder, if not impossible, to get broadcasting working with K8s pod-networking. But it seems some companies offer K8s network plugins that support this. Does anybody have experience with (or recommendation for) any of them? Would they work on AWS for example, considering that AWS doesn't support it on EC2 level?</p>
</li>
<li><p>Would much appreciate any pointers as to how to approach this and whether we have any options at all..</p>
</li>
</ul>
<p>Thanks</p>
| murtiko | <p>Conceptually, you need to create an overlay network on top of the native VPC networking, like <a href="https://aws.amazon.com/articles/overlay-multicast-in-amazon-virtual-private-cloud/" rel="nofollow noreferrer">this</a>. There's a <a href="https://github.com/k8snetworkplumbingwg/multus-cni" rel="nofollow noreferrer">CNI</a> (Multus) that supports multicast, and here's the AWS <a href="https://aws.amazon.com/blogs/containers/amazon-eks-now-supports-multus-cni/" rel="nofollow noreferrer">blog</a> about it.</p>
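<p>A very rough sketch of what the Multus approach looks like; the CNI plugin type, master interface and subnet below are placeholders/assumptions, and the linked AWS blog walks through the actual EKS setup:</p>
<pre><code># Secondary network definition handled by Multus
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: multicast-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ipvlan",
    "master": "eth1",
    "mode": "l2",
    "ipam": { "type": "host-local", "subnet": "192.168.100.0/24" }
  }'
---
# Attach the secondary interface to a pod via an annotation
apiVersion: v1
kind: Pod
metadata:
  name: cluster-member
  annotations:
    k8s.v1.cni.cncf.io/networks: multicast-net
spec:
  containers:
    - name: app
      image: my-app:latest
</code></pre>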
| gohm'c |
<p>Along with the container image in kubernetes, I would like to update the sidecar image as well.</p>
<p>What will be the <code>kubectl</code> command for this process?</p>
| gaurav sharma | <p>Assuming you have a deployment spec that looks like this:</p>
<pre><code>...
kind: Deployment
metadata:
name: mydeployment
...
spec:
...
template:
...
spec:
...
containers:
- name: application
image: nginx:1.14.0
...
- name: sidecar
image: busybox:3.15.0
...
</code></pre>
<p><code>kubectl set image deployment mydeployment application=nginx:1.16.0 sidecar=busybox:3.18.0</code></p>
| gohm'c |
<p>I have the ingress setup for Kubernetes as the following</p>
<pre><code> apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: dev-ingress
#namespace: dev
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
ingress.gcp.kubernetes.io/pre-shared-cert: "self-signed"
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.allow-http: "true"
spec:
rules:
- host: my.host.com
http:
paths:
- backend:
serviceName: webui-svc
servicePort: 80
path: /webui(/|$)(.*)
</code></pre>
<p>I am running an Angular app deployment under the webui-svc. The angular app is containerized using docker and I have included a Nginx configuration in the docker container like the following</p>
<pre><code> # Expires map
map $sent_http_content_type $expires {
default off;
text/html epoch;
text/css max;
application/json max;
application/javascript max;
~image/ max;
}
server {
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
expires $expires;
gzip on;
}
</code></pre>
<p>When I request <a href="http://my.host.com/webui/" rel="nofollow noreferrer">http://my.host.com/webui/</a> the internal calls are getting redirected without the webui prefix, e.g.:</p>
<p>When I request <a href="http://my.host.com/webui/" rel="nofollow noreferrer">http://my.host.com/webui/</a>, several calls are made to get main.js, vendor.js etc. These are all requested via <a href="http://my.host.com/runtime.js" rel="nofollow noreferrer">http://my.host.com/runtime.js</a> so they fail; it would succeed if they were redirected like <a href="http://my.host.com/webui/runtime.js" rel="nofollow noreferrer">http://my.host.com/webui/runtime.js</a></p>
<p>Is there a configuration in the angular application or the ingress that I am missing. Any help is very much appreciated, Thanks in advance</p>
| Saumadip Mazumder | <p>Your rewrite target is stripping off the <code>/webui/</code>. Consider your path:</p>
<p><code>path: /webui(/|$)(.*)</code></p>
<p>Now check the <code>rewrite-target</code>:</p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code>.</p>
<p>If you remove this annotation (and you can remove the <code>use-regex</code> annotation since it's now not needed), it will pass the <code>/webui/</code> path onto your backend.</p>
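<p>For example, keeping everything else in the manifest the same, the rule could look roughly like this (no regex or rewrite needed, since the backend now receives the <code>/webui/...</code> path as-is):</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: "self-signed"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "true"
spec:
  rules:
  - host: my.host.com
    http:
      paths:
      - path: /webui
        backend:
          serviceName: webui-svc
          servicePort: 80
</code></pre>
<p>You will likely also need to build/serve the Angular app with a base href of <code>/webui/</code> so that assets like <code>runtime.js</code> are requested under <code>/webui/</code>.</p>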
| Tyzbit |
<p>I want to delete a single pod in Kubernetes permanently, but it keeps getting recreated.</p>
<p>I have tried many commands but none of them help.</p>
<pre><code>1. kubectl delete pod <pod-name>
</code></pre>
<p>2nd</p>
<pre><code>kubectl get deployments
kubectl delete deployments <deployments- name>
</code></pre>
<pre><code>kubectl get rs --all-namespaces
kubectl delete rs your_app_name
</code></pre>
<p>but None of that works</p>
| NewbieCoder | <p><code>my replica count is 0</code></p>
<p><code>...it will successfully delete the pod but then after it will restart</code></p>
<p>Try:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: ...
spec:
restartPolicy: Never # <-- add this
containers:
- name: ...
</code></pre>
<p>If the pod still restart, post output of <code>kubectl describe pod <pod name> --namespace <name></code> to your question.</p>
| gohm'c |
<p>I have a backup job running, scheduled to run every 24 hours. I have the concurrency policy set to "Forbid." I am testing my backup, and I create jobs manually for testing, but these tests are not forbidding concurrent runs. I use:</p>
<pre><code>kubectl create job --from=cronjob/my-backup manual-backup-(timestamp)
</code></pre>
<p>... and when I run them twice in close succession, I find that both begin the work.</p>
<p>Does the concurrency policy only apply to jobs created by the Cron job scheduler? Does it ignore manually-created jobs? If it is ignoring those, are there other ways to manually run the job such that the Cron job scheduler knows they are there?</p>
| macetw | <p><code>...Does the concurrency policy only apply to jobs created by the Cron job scheduler?</code></p>
<p><code>concurrencyPolicy</code> applies to <code>CronJob</code> as it influences how <code>CronJob</code> start job. It is part of <code>CronJob</code> spec and not the <code>Job</code> spec.</p>
<p><code>...Does it ignore manually-created jobs?</code></p>
<p>Yes.</p>
<p><code>...ways to manually run the job such that the Cron job scheduler knows they are there?</code></p>
<p>Beware that when <code>concurrencyPolicy</code> is set to <code>Forbid</code> and the time comes for the CronJob to run a job, if it detects that a job belonging to this <code>CronJob</code> is still running, it will count the current attempt as <strong>missed</strong>. It is better to temporarily set the <code>CronJob</code>'s <code>spec.suspend</code> to true if you manually start a job based on the <code>CronJob</code> and its run will span past the next scheduled time.</p>
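<p>For example (using the CronJob name from your question):</p>
<pre><code>kubectl patch cronjob my-backup -p '{"spec":{"suspend":true}}'
kubectl create job --from=cronjob/my-backup manual-backup-$(date +%s)
# ...once the manual job has finished...
kubectl patch cronjob my-backup -p '{"spec":{"suspend":false}}'
</code></pre>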
| gohm'c |
<p>I have a pod that had a <code>Pod Disruption Budget</code> that says at least one has to be running. While it generally works very well it leads to a peculiar problem.</p>
<p>I have this pod sometimes in a failed state (due to some development) so I have two pods, scheduled for two different nodes, both in a <code>CrashLoopBackOff</code> state.</p>
<p>Now if I want to run a <code>drain</code> or k8s version upgrade, what happens is that pod cannot ever be evicted since it knows that there should be at least one running, which will never happen.</p>
<p>So k8s does not evict a pod due to <code>Pod Disruption Budget</code> even if the pod is not running. Is there a way to do something with this? I think ideally k8s should treat failed pods as candidates for eviction regardless of the budget (as deleting a failing pod cannot "break" anything anyway)</p>
| Ilya Chernomordik | <blockquote>
<p>...if I want to run a drain or k8s version upgrade, what happens is that pod cannot ever be evicted since it knows that there should be at least one running...</p>
</blockquote>
<p><code>kubectl drain --disable-eviction <node></code> will delete pods that are protected by a PDB. Since you are fully aware of the downtime, you can also simply delete the PDB in question <em>before</em> draining the node.</p>
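<p>A rough sketch of both options (names are placeholders):</p>
<pre><code># option 1: remove the PDB first, then drain normally
kubectl get pdb
kubectl delete pdb <pdb-name>
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data

# option 2: keep the PDB and bypass the eviction API entirely
kubectl drain <node> --disable-eviction --ignore-daemonsets --delete-emptydir-data
</code></pre>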
| gohm'c |
<p>After deploying Prometheus-operator according to the documentation, I find that <code>kubectl top nodes</code> cannot run properly.</p>
<pre><code>$ kubectl get apiService v1beta1.metrics.k8s.io
v1beta1.metrics.k8s.io monitoring/prometheus-adapter False (FailedDiscoveryCheck) 44m
$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1"
Error from server (ServiceUnavailable): the server is currently unable to handle the request
</code></pre>
<blockquote>
<p>prometheus-adapter.yaml</p>
</blockquote>
<pre><code>...
- args:
- --cert-dir=/var/run/serving-cert
- --config=/etc/adapter/config.yaml
- --logtostderr=true
- --metrics-relist-interval=1m
- --prometheus-url=http://prometheus-k8s.monitoring.svc.cluster.local:9090/prometheus
- --secure-port=6443
...
</code></pre>
<p>While looking into the problem, I found a solution (<a href="https://github.com/banzaicloud/banzai-charts/issues/1060" rel="nofollow noreferrer">#1060</a>): adding <code>hostNetwork: true</code> to the configuration file.</p>
<p>When I thought the solution was successful, I found that <code>kubectl top nodes</code> still does not work.</p>
<pre><code>$ kubectl get apiService v1beta1.metrics.k8s.io
v1beta1.metrics.k8s.io monitoring/prometheus-adapter True 64m
$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1"
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
</code></pre>
<p>View logs of Prometheus-adapter</p>
<pre><code>E0812 10:03:02.469561 1 provider.go:265] failed querying node metrics: unable to fetch node CPU metrics: unable to execute query: Get "http://prometheus-k8s.monitoring.svc.cluster.local:9090/prometheus/api/v1/query?query=sum+by+%28node%29+%28%0A++1+-+irate%28%0A++++node_cpu_seconds_total%7Bmode%3D%22idle%22%7D%5B60s%5D%0A++%29%0A++%2A+on%28namespace%2C+pod%29+group_left%28node%29+%28%0A++++node_namespace_pod%3Akube_pod_info%3A%7Bnode%3D~%22node02.whisper-tech.net%7Cnode03.whisper-tech.net%22%7D%0A++%29%0A%29%0Aor+sum+by+%28node%29+%28%0A++1+-+irate%28%0A++++windows_cpu_time_total%7Bmode%3D%22idle%22%2C+job%3D%22windows-exporter%22%2Cnode%3D~%22node02.whisper-tech.net%7Cnode03.whisper-tech.net%22%7D%5B4m%5D%0A++%29%0A%29%0A&time=1628762582.467": dial tcp: lookup prometheus-k8s.monitoring.svc.cluster.local on 100.100.2.136:53: no such host
</code></pre>
<p>The cause of the problem was that <code>hostNetwork: true</code> was added to the <code>Prometheus-Adapter</code>, which prevented the pod from accessing <code>Prometheus-K8s</code> in the cluster through <code>coreDNS</code>.</p>
<p>One idea I've come up with is to have <code>Kubernetes nodes</code> access the inner part of the cluster through <code>coreDNS</code></p>
<p>Is there a better way to solve the current problem? What should I do?</p>
| Xsky | <p>Your Pods are running with <code>hostNetwork</code>, so you should explicitly set its DNS policy "ClusterFirstWithHostNet" as described in the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer">Pod's DNS Policy</a> documentation:</p>
<blockquote>
<p>"ClusterFirstWithHostNet": For Pods running with hostNetwork, you should explicitly set its DNS policy "ClusterFirstWithHostNet".</p>
</blockquote>
<p>I've created a simple example to illustrate how it works.</p>
<hr />
<p>First, I created the <code>app-1</code> Pod with <code>hostNetwork: true</code>:</p>
<pre><code>$ cat app-1.yml
kind: Pod
apiVersion: v1
metadata:
name: app-1
spec:
hostNetwork: true
containers:
- name: dnsutils
image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
command:
- sleep
- "3600"
$ kubectl apply -f app-1.yml
pod/app-1 created
</code></pre>
<p>We can test that the <code>app-1</code> cannot resolve e.g. <code>kubernetes.default.svc</code>:</p>
<pre><code>$ kubectl exec -it app-1 -- sh
/ # nslookup kubernetes.default.svc
Server: 169.254.169.254
Address: 169.254.169.254#53
** server can't find kubernetes.default.svc: NXDOMAIN
</code></pre>
<p>Let's add the <code>dnsPolicy: ClusterFirstWithHostNet</code> to the <code>app-1</code> Pod and recreate it:</p>
<pre><code>$ cat app-1.yml
kind: Pod
apiVersion: v1
metadata:
name: app-1
spec:
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: dnsutils
image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
command:
- sleep
- "3600"
$ kubectl delete pod app-1 && kubectl apply -f app-1.yml
pod "app-1" deleted
pod/app-1 created
</code></pre>
<p>Finally, we can check if the <code>app-1</code> Pod is able to resolve <code>kubernetes.default.svc</code>:</p>
<pre><code>$ kubectl exec -it app-1 -- sh
/ # nslookup kubernetes.default.svc
Server: 10.8.0.10
Address: 10.8.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.8.0.1
</code></pre>
<p>As you can see in the example above, everything works as expected with the <code>ClusterFirstWithHostNet</code> dnsPolicy.</p>
<p>For more information, see the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a> documentation.</p>
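<p>In your case the same field has to go into the pod template of the prometheus-adapter Deployment itself. A minimal sketch, assuming the Deployment is called <code>prometheus-adapter</code> and lives in the <code>monitoring</code> namespace:</p>
<pre><code>kubectl -n monitoring patch deployment prometheus-adapter \
  --type merge \
  -p '{"spec":{"template":{"spec":{"dnsPolicy":"ClusterFirstWithHostNet"}}}}'
</code></pre>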
| matt_j |
<p>I have a TLS Secret. And it looks like the following one...</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: tls-ingress-secret
namespace: ingress-namespace
type: kubernetes.io/tls
data:
tls.key: |
-----BEGIN PRIVATE KEY-----
MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQCtwUrZ6zS+GdAw
ldIUxIXpnajvAZun1mf8DD0nWJRBonzbBIZhLyHqQyPvz4B+ZfZ/ql/vucpLEPnq
V3HqJraydt7kw/MBCS6a8GRObFx/faIWolbF5FjVgJexAxydeE35A7+hJUdElA7e
jOVPzafz53oJvyCdtdRTVwbH6EA+aJGZ0eTmzRobLVdqmqCejN4soDeRZQcMXYrG
uW+rTy3dyRCbMGV33GzYYJk2qBNFz+DqZbp1TyFcOQKBgQDW3IvXES4hfgtmj8NK
0BKdX7gUyANdZooJ1tXoMjVmcFbdvoUprED3hpiI1WTYiZ3dqKD7QrHGsBW/yHZB
UfFFxSj+vKotgxBhe06o2SDkXCnWfuQSJDZEgL/TuI9Qb/w1QeDxTZG4KCPiBrPD
MiXRtvV7qdyWoPjUolWfWyef4K5NVo34TF4DHseY1QMoI8dTmB0nnZiDfZA6B+t0
jgrnP8RpqaAOH8UjRwC+QMCfuq0SejUWocSobc/7K+7HJlMRwi6FuPXb7omyut+5
34pCkfAj8Lwtleweh/PbSDnX9g==
-----END PRIVATE KEY-----
tls.crt: |
-----BEGIN CERTIFICATE-----
MIIEDDCCAvSgAwIBAgIUDr8pM7eB+UPyMD0sY0yR5XmWrVQwDQYJKoZIhvcNAQEL
BQAwgY8xCzAJBgNVBAYTAlJVMQ8wDQYDVQQIDAZSdXNzaWExDzANBgNVBAcMBk1v
c2NvdzEmMCQGA1UECgwdS2lyaWxsIEtsaW11c2hpbnMgQ29ycG9yYXRpb24xHDAa
BgNVBAsME09yZ2FuaXphdGlvbmFsIFVuaXQxGDAWBgNVBAMMD3d3dy5zdG9yZXJ1
LmNvbTAeFw0yMjA3MjgxMTAyMThaFw0yMzA1MjQxMTAyMThaMIGPMQswCQYDVQQG
PkBW2sS7dMxNLLeHyZ3st1SJfmWZhya1LsPvo1ilU3+d8rD5JjlC/cQ7EAF9DDXR
i3/9zNzx3R6MMgNqkzQ89dDjHH+FZ2R0VkBKp35MYVg=
-----END CERTIFICATE-----
</code></pre>
<p>So the question is "is it possible to retrieve it as an env vars like: "tls.cert" and "tls.key", so I would be able to access it in my application...</p>
<p>What I want to receive from that is...</p>
<pre class="lang-golang prettyprint-override"><code>
SSlCertFile := os.Getenv("tls.cert") // cert file with payload.
SslCertKey := os.Getenv("tls.key") // cert file key.
</code></pre>
| CraZyCoDer | <p>Example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
commands: ["ash","-c","sleep 3600"]
envFrom:
- secretRef:
name: tls-ingress-secret
</code></pre>
<p>After you create the pod, try <code>kubectl exec -it busybox -- env</code></p>
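<p>If you would rather avoid dots in the variable names, a possible alternative is to map the keys explicitly with <code>secretKeyRef</code> (the variable names below are just a suggestion; the Pod must run in the same namespace as the Secret):</p>
<pre><code>    env:
    - name: TLS_CRT
      valueFrom:
        secretKeyRef:
          name: tls-ingress-secret
          key: tls.crt
    - name: TLS_KEY
      valueFrom:
        secretKeyRef:
          name: tls-ingress-secret
          key: tls.key
</code></pre>
<p>In the Go code you would then read <code>os.Getenv("TLS_CRT")</code> and <code>os.Getenv("TLS_KEY")</code>.</p>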
| gohm'c |
<p>I am using Kustomize to manage my Kubernetes project with a StatefulSet that deploys a PostgreSQL cluster with three pods. I am working on Vagrant/VirtualBox so no dynamic provisioning of PV exists. For this reason, I have my own <code>pv.yaml</code> containing the manifest to deploy these 3 PVs.</p>
<p>Then I have a <code>kustomization.yaml</code> file like this:</p>
<pre><code>namespace: ibm-cfdb
bases:
- ../../bases
resources:
- pv.yaml
</code></pre>
<p>the folder <code>../../bases</code> contains the file to deploy the StatefulSet. When I run:
<code>kubectl apply -k kustomize/</code>, everything is correctly deployed. The PVs are created before the StatefulSet, whose <code>volumeClaimTemplates</code> declare the Claims for these PVs.</p>
<p>The problem is that when I try to remove the deployment with the command:
<code>kubectl delete -k kustomize/</code>, the removal of the PVs is also executed (it seems I don't have control over the order). I suppose these PVs cannot be deleted because the Claims still use them, and then the StatefulSet removal gets stuck.</p>
<p>What is the best approach to manage PV static provisioning with Kustomize?</p>
| Salvatore D'angelo | <p>You encountered an interesting problem regarding StatefulSet and PVC removal. There is a <a href="https://github.com/kubernetes/kubernetes/issues/55045" rel="nofollow noreferrer">discussion</a> about whether PVCs created by the StatefulSet should be deleted when deleting the corresponding StatefulSet. We recently received <a href="https://github.com/kubernetes/kubernetes/issues/55045#issuecomment-824937931" rel="nofollow noreferrer">information</a> that the feature to autodelete the PVCs created by StatefulSet will probably be available in the <a href="https://github.com/kubernetes/kubernetes/issues/55045#issuecomment-884298382" rel="nofollow noreferrer">1.23 release</a>. According to the <a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/1847-autoremove-statefulset-pvcs#changes-required" rel="nofollow noreferrer">feature documentation</a>, this will allow us to specify if the VolumeClaimTemplate PVCs will be deleted after deleting their StatefulSet.
I suspect that with this feature it'll be easy to delete your StatefulSet along with PVC and PV.</p>
<p>For now, you can consider moving the file with the PV to another directory and manage it separately.
However, I will propose another solution which is kind of a workaround but you may be interested.</p>
<p>Basically, we can use the <code>-o</code> flag with the <code>kustomize build</code> command. This creates one file per resource, which gives us more control over resources creation.</p>
<hr />
<p>I will give you an example to illustrate how this can work.</p>
<p>Suppose I have a similar environment to you:</p>
<pre><code>$ tree
.
βββ base
β βββ kustomization.yaml
β βββ statefulset.yaml
βββ overlays
βββ dev
βββ kustomization.yaml
βββ pv.yaml
$ cat overlays/dev/kustomization.yaml
bases:
- ../../base
resources:
- pv.yaml
</code></pre>
<p>Now let's create a directory where our manifest files generated by <code>kustomize</code> will be stored:</p>
<pre><code>$ mkdir generated_manifests
</code></pre>
<p>Then we can check if the command <code>kustomize build overlays/dev -o generated_manifests</code> works as expected. First we'll apply the generated manifests (it'll create the <code>web</code> StatefulSet and <code>pv0003</code> PersistentVolume):</p>
<pre><code>$ kustomize build overlays/dev -o generated_manifests && kubectl apply -Rf generated_manifests/
statefulset.apps/web created
persistentvolume/pv0003 created
</code></pre>
<p>As you can see, the appropriate manifest files have been created in the <code>generated_manifests</code> directory:</p>
<pre><code>$ ls generated_manifests/
apps_v1_statefulset_web.yaml v1_persistentvolume_pv0003.yaml
</code></pre>
<p>Finally, we can try to delete only the <code>web</code> StatefulSet:</p>
<pre><code>$ kustomize build overlays/dev -o generated_manifests && kubectl delete -f generated_manifests/apps_v1_statefulset_web.yaml
statefulset.apps "web" deleted
</code></pre>
<hr />
<p>I would also like to mention that <code>kustomize</code> has a feature like "ignore pv.yaml" but it will also be used when creating resources, not just when removing. This is known as a delete patch and a good example can be found <a href="https://stackoverflow.com/a/66074466/14801225">here</a>.</p>
| matt_j |
<p>I have a Kubernetes file with arguments that I am propagating through the <code>args</code>. Everything works OK!
Instead of the <code>args</code>, can I read the configuration from some <code>.ini</code> or properties file, or from a ConfigMap? I cannot find an example of using values from a properties file or ConfigMap instead of the <code>args</code> properties.
Also, what would be the proposed best approach for how these values should be read, and how should I modify my <code>deployment YAML</code>?
Thanks!</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ads
spec:
replicas: 1
template:
metadata:
name: ads
spec:
containers:
- name: ads
image: repo/ads:test1
#args:
args: ["-web_server", "10.1.1.1",
"-web_name", "WEB_Test1",
"-server_name", "ads"]
selector:
matchLabels:
app: ads
</code></pre>
| vel | <p>Sure you can, here's the official K8s <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">example</a> of how to store properties in a ConfigMap and how to access them in your container. If this document doesn't meet your requirement, do update your question with the reason and your exact expectation.</p>
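<p>A minimal sketch of how that could look for your Deployment - the ConfigMap name and keys below are made up, adjust them to your setup:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ads-config
data:
  web_server: "10.1.1.1"
  web_name: "WEB_Test1"
  server_name: "ads"
---
# container section of the Deployment
      containers:
      - name: ads
        image: repo/ads:test1
        env:
        - name: WEB_SERVER
          valueFrom:
            configMapKeyRef:
              name: ads-config
              key: web_server
        - name: WEB_NAME
          valueFrom:
            configMapKeyRef:
              name: ads-config
              key: web_name
        - name: SERVER_NAME
          valueFrom:
            configMapKeyRef:
              name: ads-config
              key: server_name
        # $(VAR) references in args are expanded by Kubernetes before the container starts
        args: ["-web_server", "$(WEB_SERVER)",
               "-web_name", "$(WEB_NAME)",
               "-server_name", "$(SERVER_NAME)"]
</code></pre>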
| gohm'c |
<p>I was doing the CKA and stumbled upon a question I couldn't figure out. I don't remember all the details, but it goes something like this:</p>
<p>Get the <strong>top 1 node/pod by CPU</strong> consumption and place it in a file at {path}.</p>
<pre><code>kubectl top nodes/pod --sort-by cpu <-- this orders by ascending. So you have to hardcode the last node/pod.
</code></pre>
| gary rizzo | <p>If you need to extract out the name of the top pod and save it in the file you can do this:</p>
<blockquote>
<p>Let us say you have 3 pods:</p>
</blockquote>
<pre class="lang-sh prettyprint-override"><code>$ kubectl top pod --sort-by cpu
NAME CPU(cores) MEMORY(bytes)
project-depl-595bbd56db-lb6vb   8m           180Mi
auth-depl-64cccc484f-dn7w5      4m           203Mi
nats-depl-6c4b4dfb7c-tjpgv      2m           4Mi
</code></pre>
<p>You can do this:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl top pod --sort-by cpu | head -2 | tail -1 | awk {'print $1'}
project-depl-595bbd56db-lb6vb  #### < == returns name of the top pod by CPU
## you can redirect it to any file
$ kubectl top pod --sort-by cpu | head -2 | tail -1 | awk {'print $1'} > ./file_name
$ cat ./file_name
project-depl-595bbd56db-lb6vb #### < == returns name of the top pod by CPU
</code></pre>
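<p>The same pipe works for nodes, and you can redirect it straight to the requested file path:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl top nodes --sort-by cpu | head -2 | tail -1 | awk '{print $1}' > {path}
</code></pre>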
| Karan Kumar |
<p>I need the pods to know the total number of running pods for a deployment. I know that there is a Downward API for passing information about pods to the pods themselves in Kubernetes.</p>
<p>Is it possible to even do this?</p>
| Alecu | <p>As you mentioned, the Downward API provides you with ways to pass pod/container fields to running containers. The number of running pods in a deployment is neither a pod field nor a container field, so this method won't work for you.</p>
<p>You can pass the number of running pods in a deployment to another pod by using an arbitrary environment variable via configmaps or mounted volumes.</p>
<p>Without knowing what exactly you're trying to achieve, here's a working example (I'm taking advantage of bash variable expansion, which allows me to assign value on the fly without having to actually create configmaps nor volumes). I'm assuming the pod counter is NOT part of the deployment you want to observe:</p>
<pre><code>cat > podcounter.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: podcounter
labels:
app: podcounter
spec:
replicas: 1
selector:
matchLabels:
app: podcounter
template:
metadata:
labels:
app: podcounter
spec:
containers:
- name: podcounter
image: busybox:1.28
command: ["sh", "-c", "env"]
env:
- name: NUM_PODS
            value: "$(kubectl get deploy -o wide | grep YOUR_DEPLOYMENT | awk '{print $4}')"  # quoted so the YAML value stays a string
EOF
</code></pre>
<p>*$4 is the AVAILABLE column of <code>kubectl get deploy</code>, i.e. the count of available pods in that deployment.</p>
<p>** This method populates the environment variable when the manifest is generated (the command substitution runs in your shell, not in the cluster) and it won't be updated further down the container's lifecycle. You'd need a CronJob or similar to refresh it.</p>
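<p>A possibly less brittle variant of the same idea is to ask the API server for the field directly instead of parsing table output (same caveat - it is a point-in-time snapshot):</p>
<pre><code>kubectl get deploy YOUR_DEPLOYMENT -o jsonpath='{.status.availableReplicas}'
</code></pre>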
| piscesgeek |
<p>How do I create a Kubernetes Job configuration spec to run a Perl script? Once the script completes execution, the pod created by the Job should go to completion.</p>
| Kishore Kumar | <p>As we can find in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Kubernetes Jobs documentation</a> - when a specified number of successful completions is reached, the Job is complete:</p>
<blockquote>
<p>A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete.</p>
</blockquote>
<p>I guess you would like to use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#parallel-jobs" rel="nofollow noreferrer">non-parallel Job</a>:</p>
<blockquote>
<ul>
<li>normally, only one Pod is started, unless the Pod fails.</li>
<li>the Job is complete as soon as its Pod terminates successfully.</li>
</ul>
</blockquote>
<p>A container in a Pod may fail for a number of reasons, such as because the process in it (e.g. a perl script) exited with a non-zero exit code. This will happen, for example, when the perl script fails:<br />
<strong>NOTE:</strong> I typed <code>prin</code> instead of <code>print</code>.</p>
<pre><code>$ cat test-perl.pl
#!/usr/bin/perl
use warnings;
use strict;
prin("Hello World\n");
$ ./test-perl.pl
Undefined subroutine &main::prin called at ./test-perl.pl line 5.
$ echo $?
255 ### non-zero exit code
</code></pre>
<p>I've created an example to illustrate how you can create a Job that runs a perl script and completes successfully.</p>
<hr />
<p>First, I created a simple perl script:</p>
<pre><code>$ cat perl-script.pl
#!/usr/bin/perl
use warnings;
use strict;
my @i = (1..9);
for(@i){
print("$_: Hello, World!\n");
$ ./perl-script.pl
1: Hello, World!
2: Hello, World!
3: Hello, World!
4: Hello, World!
5: Hello, World!
6: Hello, World!
7: Hello, World!
8: Hello, World!
9: Hello, World!
$ echo $?
0
</code></pre>
<p>Then I created a Docker image with the above script and pushed it to my <a href="https://hub.docker.com/" rel="nofollow noreferrer">DockerHub</a> repository:<br />
<strong>NOTE:</strong> I used the <a href="https://hub.docker.com/_/perl" rel="nofollow noreferrer">perl</a> image.</p>
<pre><code>$ ls
Dockerfile perl-script.pl
$ cat Dockerfile
FROM perl:5.20
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "perl", "./perl-script.pl" ]
$ docker build . --tag mattjcontainerregistry/perl-script
Sending build context to Docker daemon 3.072kB
...
Successfully tagged mattjcontainerregistry/perl-script:latest
$ docker push mattjcontainerregistry/perl-script:latest
The push refers to repository [docker.io/mattjcontainerregistry/perl-script]
...
latest: digest: sha256:2f8789af7f61cfb021337810963a9a19f133d78e9ad77159fbc1d425cfb1d7db size: 3237
</code></pre>
<p>Finally, I created a <code>perl-script</code> Job which runs the perl script I created earlier:</p>
<pre><code>$ cat perl-job.yml
apiVersion: batch/v1
kind: Job
metadata:
name: perl-script
spec:
template:
spec:
containers:
- name: perl-script
image: mattjcontainerregistry/perl-script:latest
restartPolicy: Never
$ kubectl apply -f perl-job.yml
job.batch/perl-script created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
perl-script-gspzz 0/1 Completed 0 10s
$ kubectl get job
NAME COMPLETIONS DURATION AGE
perl-script 1/1 1s 18s
$ kubectl logs -f perl-script-gspzz
1: Hello, World!
2: Hello, World!
3: Hello, World!
4: Hello, World!
5: Hello, World!
6: Hello, World!
7: Hello, World!
8: Hello, World!
9: Hello, World!
</code></pre>
<p>As you can see, the <code>perl-script</code> Job is complete, so it works as expected.</p>
| matt_j |
<p>I am running a 2-node K8s cluster on OVH Bare Metal Servers. I've set up <strong>MetalLB</strong> and <strong>Nginx-Ingress</strong>. The 2 servers both have public IPs and are not in the same network segment. I've used one of the IPs as the entrypoint for the LB. For the deployments, I created 3 nginx containers & services to test the forwarding.
When I use host-based routing, the endpoints are reachable via the internet, but when I use path-based forwarding, only the / path is reachable. For the rest, I get the default backend.
My host-based Ingress resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-2
spec:
ingressClassName: nginx
rules:
- host: nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-main
port:
number: 80
- host: blue.nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-blue
port:
number: 80
- host: green.nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-green
port:
number: 80
</code></pre>
<p>The path based Ingress resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-3
spec:
ingressClassName: nginx
rules:
- host: nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx
port:
number: 80
- path: /blue
pathType: Prefix
backend:
service:
name: nginx-deploy-blue
port:
number: 80
- path: /green
pathType: Prefix
backend:
service:
name: nginx-deploy-green
port:
number: 80
</code></pre>
<p>The endpoints are all reachable in both cases</p>
<pre><code># kubectl describe ing ingress-resource-2
Name: ingress-resource-2
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
nginx.example.com
/ nginx:80 (192.168.107.4:80)
blue.nginx.example.com
/ nginx-deploy-blue:80 (192.168.164.212:80)
green.nginx.example.com
/ nginx-deploy-green:80 (192.168.164.213:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 13m nginx-ingress-controller Configuration for default/ingress-resource-2 was added or updated
</code></pre>
<pre><code># kubectl describe ing ingress-resource-3
Name: ingress-resource-3
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
nginx.example.com
/ nginx:80 (192.168.107.4:80)
/blue nginx-deploy-blue:80 (192.168.164.212:80)
/green nginx-deploy-green:80 (192.168.164.213:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 109s nginx-ingress-controller Configuration for default/ingress-resource-3 was added or updated
</code></pre>
<p>Getting the Nginx-Ingress logs:</p>
<pre><code># kubectl -n nginx-ingress logs pod/nginx-ingress-6947fb84d4-m9gkk
W0803 17:00:48.516628 1 flags.go:273] Ignoring unhandled arguments: []
I0803 17:00:48.516688 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.0 PlusFlag=false
I0803 17:00:48.516692 1 flags.go:191] Commit=979db22d8065b22fedb410c9b9c5875cf0a6dc66 Date=2022-07-12T08:51:24Z DirtyState=false Arch=linux/amd64 Go=go1.18.3
I0803 17:00:48.527699 1 main.go:210] Kubernetes version: 1.24.3
I0803 17:00:48.531079 1 main.go:326] Using nginx version: nginx/1.23.0
2022/08/03 17:00:48 [notice] 26#26: using the "epoll" event method
2022/08/03 17:00:48 [notice] 26#26: nginx/1.23.0
2022/08/03 17:00:48 [notice] 26#26: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/08/03 17:00:48 [notice] 26#26: OS: Linux 5.15.0-41-generic
2022/08/03 17:00:48 [notice] 26#26: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/08/03 17:00:48 [notice] 26#26: start worker processes
2022/08/03 17:00:48 [notice] 26#26: start worker process 27
2022/08/03 17:00:48 [notice] 26#26: start worker process 28
2022/08/03 17:00:48 [notice] 26#26: start worker process 29
2022/08/03 17:00:48 [notice] 26#26: start worker process 30
2022/08/03 17:00:48 [notice] 26#26: start worker process 31
2022/08/03 17:00:48 [notice] 26#26: start worker process 32
2022/08/03 17:00:48 [notice] 26#26: start worker process 33
2022/08/03 17:00:48 [notice] 26#26: start worker process 34
I0803 17:00:48.543403 1 listener.go:54] Starting Prometheus listener on: :9113/metrics
2022/08/03 17:00:48 [notice] 26#26: start worker process 35
2022/08/03 17:00:48 [notice] 26#26: start worker process 37
I0803 17:00:48.543712 1 leaderelection.go:248] attempting to acquire leader lease nginx-ingress/nginx-ingress-leader-election...
2022/08/03 17:00:48 [notice] 26#26: start worker process 38
...
2022/08/03 17:00:48 [notice] 26#26: start worker process 86
I0803 17:00:48.645253 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.645512 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646550 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646629 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.646810 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646969 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.647259 1 event.go:285] Event(v1.ObjectReference{Kind:"Secret", Namespace:"nginx-ingress", Name:"default-server-secret", UID:"d8271053-2785-408f-b87b-88b9bb9fc488", APIVersion:"v1", ResourceVersion:"1612716", FieldPath:""}): type: 'Normal' reason: 'Updated' the special Secret nginx-ingress/default-server-secret was updated
2022/08/03 17:00:48 [notice] 26#26: signal 1 (SIGHUP) received from 88, reconfiguring
2022/08/03 17:00:48 [notice] 26#26: reconfiguring
2022/08/03 17:00:48 [notice] 26#26: using the "epoll" event method
2022/08/03 17:00:48 [notice] 26#26: start worker processes
2022/08/03 17:00:48 [notice] 26#26: start worker process 89
2022/08/03 17:00:48 [notice] 26#26: start worker process 90
...
2022/08/03 17:00:48 [notice] 26#26: start worker process 136
2022/08/03 17:00:48 [notice] 27#27: gracefully shutting down
2022/08/03 17:00:48 [notice] 27#27: exiting
2022/08/03 17:00:48 [notice] 35#35: gracefully shutting down
2022/08/03 17:00:48 [notice] 31#31: exiting
2022/08/03 17:00:48 [notice] 38#38: gracefully shutting down
2022/08/03 17:00:48 [notice] 32#32: exiting
2022/08/03 17:00:48 [notice] 30#30: exiting
2022/08/03 17:00:48 [notice] 40#40: gracefully shutting down
2022/08/03 17:00:48 [notice] 35#35: exiting
2022/08/03 17:00:48 [notice] 45#45: gracefully shutting down
2022/08/03 17:00:48 [notice] 40#40: exiting
2022/08/03 17:00:48 [notice] 48#48: gracefully shutting down
2022/08/03 17:00:48 [notice] 47#47: exiting
2022/08/03 17:00:48 [notice] 57#57: gracefully shutting down
2022/08/03 17:00:48 [notice] 52#52: exiting
2022/08/03 17:00:48 [notice] 55#55: gracefully shutting down
2022/08/03 17:00:48 [notice] 55#55: exiting
2022/08/03 17:00:48 [notice] 51#51: gracefully shutting down
2022/08/03 17:00:48 [notice] 51#51: exiting
2022/08/03 17:00:48 [notice] 31#31: exit
2022/08/03 17:00:48 [notice] 34#34: gracefully shutting down
2022/08/03 17:00:48 [notice] 34#34: exiting
2022/08/03 17:00:48 [notice] 41#41: exiting
2022/08/03 17:00:48 [notice] 49#49: gracefully shutting down
....
2022/08/03 17:00:48 [notice] 49#49: exiting
2022/08/03 17:00:48 [notice] 57#57: exit
.....
2022/08/03 17:00:48 [notice] 43#43: exit
2022/08/03 17:00:48 [notice] 58#58: gracefully shutting down
2022/08/03 17:00:48 [notice] 38#38: exiting
2022/08/03 17:00:48 [notice] 53#53: gracefully shutting down
2022/08/03 17:00:48 [notice] 48#48: exiting
2022/08/03 17:00:48 [notice] 59#59: gracefully shutting down
2022/08/03 17:00:48 [notice] 58#58: exiting
2022/08/03 17:00:48 [notice] 62#62: gracefully shutting down
2022/08/03 17:00:48 [notice] 60#60: gracefully shutting down
2022/08/03 17:00:48 [notice] 53#53: exiting
2022/08/03 17:00:48 [notice] 61#61: gracefully shutting down
2022/08/03 17:00:48 [notice] 63#63: gracefully shutting down
2022/08/03 17:00:48 [notice] 64#64: gracefully shutting down
2022/08/03 17:00:48 [notice] 59#59: exiting
2022/08/03 17:00:48 [notice] 65#65: gracefully shutting down
2022/08/03 17:00:48 [notice] 62#62: exiting
2022/08/03 17:00:48 [notice] 60#60: exiting
2022/08/03 17:00:48 [notice] 66#66: gracefully shutting down
2022/08/03 17:00:48 [notice] 67#67: gracefully shutting down
2022/08/03 17:00:48 [notice] 63#63: exiting
2022/08/03 17:00:48 [notice] 68#68: gracefully shutting down
2022/08/03 17:00:48 [notice] 64#64: exiting
2022/08/03 17:00:48 [notice] 61#61: exiting
2022/08/03 17:00:48 [notice] 69#69: gracefully shutting down
2022/08/03 17:00:48 [notice] 65#65: exiting
2022/08/03 17:00:48 [notice] 66#66: exiting
2022/08/03 17:00:48 [notice] 71#71: gracefully shutting down
2022/08/03 17:00:48 [notice] 70#70: gracefully shutting down
2022/08/03 17:00:48 [notice] 67#67: exiting
...
2022/08/03 17:00:48 [notice] 65#65: exit
2022/08/03 17:00:48 [notice] 73#73: gracefully shutting down
...
2022/08/03 17:00:48 [notice] 74#74: exiting
2022/08/03 17:00:48 [notice] 83#83: gracefully shutting down
2022/08/03 17:00:48 [notice] 72#72: exiting
2022/08/03 17:00:48 [notice] 77#77: gracefully shutting down
2022/08/03 17:00:48 [notice] 77#77: exiting
2022/08/03 17:00:48 [notice] 77#77: exit
I0803 17:00:48.780547 1 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"nginx-ingress", Name:"nginx-config", UID:"961b1b89-3765-4eb8-9f5f-cfd8212012a8", APIVersion:"v1", ResourceVersion:"1612730", FieldPath:""}): type: 'Normal' reason: 'Updated' Configuration from nginx-ingress/nginx-config was updated
I0803 17:00:48.780573 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.780585 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 72
2022/08/03 17:00:48 [notice] 26#26: worker process 72 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 30
2022/08/03 17:00:48 [notice] 26#26: worker process 30 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 35 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 77 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 73
2022/08/03 17:00:48 [notice] 26#26: worker process 73 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 37
2022/08/03 17:00:48 [notice] 26#26: worker process 29 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 32 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 37 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 38 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 41 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 47 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 49 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 63 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 64 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 75 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 47
2022/08/03 17:00:48 [notice] 26#26: worker process 34 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 43 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 48 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 53 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 54 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 59 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 61 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 66 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 55
2022/08/03 17:00:48 [notice] 26#26: worker process 50 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 55 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 83
2022/08/03 17:00:48 [notice] 26#26: worker process 28 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 31 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 42 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 51 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 52 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 56 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 62 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 68 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 71 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 83 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 33
2022/08/03 17:00:48 [notice] 26#26: worker process 33 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 58
2022/08/03 17:00:48 [notice] 26#26: worker process 58 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 57
2022/08/03 17:00:48 [notice] 26#26: worker process 27 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 57 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 40
2022/08/03 17:00:48 [notice] 26#26: worker process 40 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 45 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 60 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 65 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 67 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 69 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 70 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 74 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 86 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
</code></pre>
<p>I'm not sure what the issue is, and I can't figure out why it's working when I use different hosts, and not working when I try to use different paths.</p>
<p>I thought it could be resource limits, but I only have the requests, no limits. There is already a default IngressClass</p>
<p>I installed the ingress controller via manifests following the steps <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">here</a></p>
<p>Update: To add the deployments running in the cluster.</p>
<pre><code># nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx
spec:
replicas: 1
selector:
matchLabels:
run: nginx-main
template:
metadata:
labels:
run: nginx-main
spec:
containers:
- image: nginx
name: nginx
</code></pre>
<pre><code># nginx-deploy-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx-deploy-green
spec:
replicas: 1
selector:
matchLabels:
run: nginx-green
template:
metadata:
labels:
run: nginx-green
spec:
volumes:
- name: webdata
emptyDir: {}
initContainers:
- name: web-content
image: busybox
volumeMounts:
- name: webdata
mountPath: "/webdata"
command: ["/bin/sh", "-c", 'echo "<h1>I am <font color=green>GREEN</font></h1>" > /webdata/index.html']
containers:
- image: nginx
name: nginx
volumeMounts:
- name: webdata
mountPath: "/usr/share/nginx/html"
</code></pre>
<pre><code># nginx-deploy-blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx-deploy-blue
spec:
replicas: 1
selector:
matchLabels:
run: nginx-blue
template:
metadata:
labels:
run: nginx-blue
spec:
volumes:
- name: webdata
emptyDir: {}
initContainers:
- name: web-content
image: busybox
volumeMounts:
- name: webdata
mountPath: "/webdata"
command: ["/bin/sh", "-c", 'echo "<h1>I am <font color=blue>BLUE</font></h1>" > /webdata/index.html']
containers:
- image: nginx
name: nginx
volumeMounts:
- name: webdata
mountPath: "/usr/share/nginx/html"
</code></pre>
| igithiu | <p>Based on the comments from zer0, try:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-3
annotations:
nginx.ingress.kubernetes.io/rewrite-target: / # <-- add
spec:
ingressClassName: nginx
...
</code></pre>
<p>A page with a different font color should be returned when you browse to <code>http://nginx.example.com/blue</code> or <code>/green</code>.</p>
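<p>One caveat: the log line <code>Starting NGINX Ingress Controller Version=2.3.0</code> in the question looks like the NGINX Inc. controller rather than the community <code>ingress-nginx</code> project, and that controller uses its own annotation family instead of <code>nginx.ingress.kubernetes.io/rewrite-target</code>. If that is the controller in use, the equivalent rewrite would look roughly like this (a sketch - double-check the exact syntax against the nginx.org documentation):</p>
<pre><code>metadata:
  annotations:
    nginx.org/rewrites: "serviceName=nginx-deploy-blue rewrite=/;serviceName=nginx-deploy-green rewrite=/"
</code></pre>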
| gohm'c |
<p>I'm trying to create a simple microservice, where a JQuery app in one Docker container uses this code to get a JSON object from another (analytics) app that runs in a different container:</p>
<pre><code><script type="text/javascript">
$(document).ready(function(){
$('#get-info-btn').click(function(){
$.get("http://localhost:8084/productinfo",
function(data, status){
$.each(data, function(i, obj) {
//some code
});
});
});
});
</script>
</code></pre>
<p>The other app uses this for the <code>Deployment</code> containerPort.</p>
<pre><code> ports:
- containerPort: 8082
</code></pre>
<p>and these for the <code>Service</code> ports.</p>
<pre><code> type: ClusterIP
ports:
- targetPort: 8082
port: 8084
</code></pre>
<p>The 'analytics' app is a golang program that listens on 8082.</p>
<pre><code>func main() {
http.HandleFunc("/productinfo", getInfoJSON)
log.Fatal(http.ListenAndServe(":8082", nil))
}
</code></pre>
<p>When running this on Minikube, I encountered issues with CORS, which was resolved by using this in the golang code when returning a JSON object as a response:</p>
<pre><code>w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
</code></pre>
<p>All this worked fine on Minikube (though in Minikube I was using <code>localhost:8082</code>). The first app would send a GET request to <code>http://localhost:8084/productinfo</code> and the second app would return a JSON object.</p>
<p>But when I tried it on a cloud Kubernetes setup by accessing the first app via <code><IP>:<port></code>, when I open the browser console, I keep getting the error <code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8084/productinfo</code>.</p>
<p><strong>Question:</strong>
Why is it working on Minikube but not on the cloud Kubernetes worker nodes? Is using <code>localhost</code> the right way to access another container? How can I get this to work? How do people who implement microservices use their GET and POST requests across containers? All the microservice examples I found are built for simple demos on Minikube, so it's difficult to get a handle on this nuance.</p>
| Nav | <p>@P.... is absolutely right, I just want to provide some more details about <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services</a> and communication between containers in the same Pod.</p>
<h4>DNS for Services</h4>
<p>As we can find in the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">documentation</a>, Kubernetes Services are assigned a DNS A (or AAAA) record, for a name of the form <code><serviceName>.<namespaceName>.svc.<cluster-domain></code>. This resolves to the cluster IP of the Service.</p>
<blockquote>
<p>"Normal" (not headless) Services are assigned a DNS A or AAAA record, depending on the IP family of the service, for a name of the form my-svc.my-namespace.svc.cluster-domain.example. This resolves to the cluster IP of the Service.</p>
</blockquote>
<p>Let's break down the form <code><serviceName>.<namespaceName>.svc.<cluster-domain></code> into individual parts:</p>
<ul>
<li><p><code><serviceName></code> - The name of the Service you want to connect to.</p>
</li>
<li><p><code><namespaceName></code> - The name of the Namespace in which the Service to which you want to connect resides.</p>
</li>
<li><p><code>svc</code> - This should not be changed - <code>svc</code> stands for Service.</p>
</li>
<li><p><code><cluster-domain></code> - cluster domain, by default it's <code>cluster.local</code>.</p>
</li>
</ul>
<p>We can use <code><serviceName></code> to access a Service in the same Namespace, however we can also use <code><serviceName>.<namespaceName></code> or <code><serviceName>.<namespaceName>.svc</code> or FQDN <code><serviceName>.<namespaceName>.svc.<cluster-domain></code>.</p>
<p>If the Service is in a different Namespace, a single <code><serviceName></code> is not enough and we need to use <code><serviceName>.<namespaceName></code> (we can also use: <code><serviceName>.<namespaceName>.svc</code> or <code><serviceName>.<namespaceName>.svc.<cluster-domain></code>).</p>
<p>In the following example, <code>app-1</code> and <code>app-2</code> are in the same Namespace and <code>app-2</code> is exposed with ClusterIP on port <code>8084</code> (as in your case):</p>
<pre><code>$ kubectl run app-1 --image=nginx
pod/app-1 created
$ kubectl run app-2 --image=nginx
pod/app-2 created
$ kubectl expose pod app-2 --target-port=80 --port=8084
service/app-2 exposed
$ kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/app-1 1/1 Running 0 45s
pod/app-2 1/1 Running 0 41s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/app-2 ClusterIP 10.8.12.83 <none> 8084/TCP 36s
</code></pre>
<p><strong>NOTE:</strong> The <code>app-2</code> is in the same Namespace as <code>app-1</code>, so we can use <code><serviceName></code> to access it from <code>app-1</code>, you can also notice that we got the FQDN for <code>app-2</code> (<code>app-2.default.svc.cluster.local</code>):</p>
<pre><code>$ kubectl exec -it app-1 -- bash
root@app-1:/# nslookup app-2
Server: 10.8.0.10
Address: 10.8.0.10#53
Name: app-2.default.svc.cluster.local
Address: 10.8.12.83
</code></pre>
<p><strong>NOTE:</strong> We need to provide the port number because <code>app-2</code> is listening on <code>8084</code>:</p>
<pre><code>root@app-1:/# curl app-2.default.svc.cluster.local:8084
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>Let's create <code>app-3</code> in a different Namespace and see how to connect to it from <code>app-1</code>:</p>
<pre><code>$ kubectl create ns test-namespace
namespace/test-namespace created
$ kubectl run app-3 --image=nginx -n test-namespace
pod/app-3 created
$ kubectl expose pod app-3 --target-port=80 --port=8084 -n test-namespace
service/app-3 exposed
</code></pre>
<p><strong>NOTE:</strong> Using <code>app-3</code> (<code><serviceName></code>) is not enough, we also need to provide the name of the Namespace in which <code>app-3</code> resides (<code><serviceName>.<namespaceName></code>):</p>
<pre><code># nslookup app-3
Server: 10.8.0.10
Address: 10.8.0.10#53
** server can't find app-3: NXDOMAIN
# nslookup app-3.test-namespace
Server: 10.8.0.10
Address: 10.8.0.10#53
Name: app-3.test-namespace.svc.cluster.local
Address: 10.8.12.250
# curl app-3.test-namespace.svc.cluster.local:8084
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<h4>Communication Between Containers in the Same Pod</h4>
<p>We can use <code>localhost</code> to communicate with other containers, but <strong>only</strong> within the same Pod (Multi-container pods).</p>
<p>I've created a simple multi-container Pod with two containers: <code>nginx-container</code> and <code>alpine-container</code>:</p>
<pre><code>$ cat multi-container-app.yml
apiVersion: v1
kind: Pod
metadata:
name: multi-container-app
spec:
containers:
- image: nginx
name: nginx-container
- image: alpine
name: alpine-container
command: ["sleep", "3600"]
$ kubectl apply -f multi-container-app.yml
pod/multi-container-app created
</code></pre>
<p>We can connect to the <code>alpine-container</code> container and check if we can access the nginx web server located in the <code>nginx-container</code> with <code>localhost</code>:</p>
<pre><code>$ kubectl exec -it multi-container-app -c alpine-container -- sh
/ # netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 :::80 :::* LISTEN -
/ # curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>More information on communication between containers in the same Pod can be found <a href="https://stackoverflow.com/questions/67061603/how-to-communicate-between-containers-in-same-pod-in-kubernetes">here</a>.</p>
| matt_j |
<p>The goal is to monitor the flowable project deployed on Kubernetes using Prometheus/Grafana</p>
<p>Install kube-prometheus-stack using helm charts:</p>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
</code></pre>
<p>Its successfully deployed and we are able to start monitoring the other resources inside our Kubernetes cluster using Prometheus/Grafana</p>
<p>Next, Flowable is running as a pod; I want to get the Flowable pod metrics into Prometheus and build a dashboard.</p>
<p>Any suggestions on how to achieve the monitoring for a flowable application running as a pod inside kubernetes</p>
| Vikram | <p>Flowable (as a Spring Boot application) uses Micrometer that will provide metrics in prometheus format as soon as you add the <code>micrometer-registry-prometheus</code> dependency. Endpoint is then <code>actuator/prometheus</code>.</p>
<p>Creating your own Prometheus metric is actually not that difficult. You can create a bean implementing <code>FlowableEventListener</code> and <code>MeterBinder</code> and then listen to the FlowableEngineEventType <code>PROCESS_COMPLETED</code> to increase a Micrometer <code>Counter</code> every time a process gets completed.</p>
<p>Register your counter to the <code>MeterRegistry</code> in the bindTo() method and the metric should be available over the prometheus endpoint. No need for a dedicated exporter pod.</p>
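<p>With kube-prometheus-stack, the scraping itself is then usually wired up with a ServiceMonitor pointing at that endpoint. A rough sketch, assuming the Flowable pods are exposed by a Service labelled <code>app: flowable</code> with a port named <code>http</code>, and that your Helm release is named <code>prometheus</code> (the chart's default ServiceMonitor selector matches on that release label):</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flowable
  labels:
    release: prometheus        # must match the chart's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: flowable            # labels of the Flowable Service (assumption)
  endpoints:
  - port: http                 # name of the Service port (assumption)
    path: /actuator/prometheus
    interval: 30s
</code></pre>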
| Roger Villars |
<p>I have a custom container image for PostgreSQL and am trying to run it as a stateful Kubernetes application.</p>
<p>The image knows 2 Volumes which are mounted into</p>
<ol>
<li><code>/opt/db/data/postgres/data</code> (the <code>$PGDATA</code> directory of my postgres intallation)</li>
<li><code>/opt/db/backup</code></li>
</ol>
<p>the backup Volume contains a deeper folder structure which is defined in the Dockerfile</p>
<h3>Dockerfile</h3>
<p>(excerpts)</p>
<pre><code>...
...
# Environment variables required for this build (do NOT change)
# -------------------------------------------------------------
...
ENV PGDATA=/opt/db/data/postgres/data
ENV PGBASE=/opt/db/postgres
...
ENV PGBACK=/opt/db/backup/postgres/backups
ENV PGARCH=/opt/db/backup/postgres/archives
# Set up user and directories
# ---------------------------
RUN mkdir -p $PGBASE $PGBIN $PGDATA $PGBACK $PGARCH && \
useradd -d /home/postgres -m -s /bin/bash --no-log-init --uid 1001 --user-group postgres && \
chown -R postgres:postgres $PGBASE $PGDATA $PGBACK $PGARCH && \
chmod a+xr $PGBASE
# set up user env
# ---------------
USER postgres
...
...
# bindings
# --------
VOLUME ["$PGDATA", "$DBBASE/backup"]
...
...
# Define default command to start Database.
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres", "-D", "/opt/db/data/postgres/data"]
</code></pre>
<p>When I run this as a container or single pod in kubernetes without any <code>volumeMounts</code> all is good and the folder structure looks like it should</p>
<pre><code>find /opt/db/backup -ls
2246982 4 drwxr-xr-x 3 root root 4096 Feb 18 09:00 /opt/db/backup
2246985 4 drwxr-xr-x 4 root root 4096 Feb 18 09:00 /opt/db/backup/postgres
2246987 4 drwxr-xr-x 2 postgres postgres 4096 Feb 11 14:59 /opt/db/backup/postgres/backups
2246986 4 drwxr-xr-x 2 postgres postgres 4096 Feb 11 14:59 /opt/db/backup/postgres/archives
</code></pre>
<p>However, once I run this based on the StatefulSet below (which mounts two Volumes into the pod @ <code>/opt/db/data/postgres/data</code> & <code>/opt/db/backup</code>, which includes a folder structure that goes deeper than the mount point, as listed above), this is not being carried out as intended:</p>
<pre><code>find /opt/db/backup βls
2 4 drwxr-xr-x 3 postgres postgres 4096 Feb 17 16:40 /opt/db/backup
11 16 drwx------ 2 postgres postgres 16384 Feb 17 16:40 /opt/db/backup/lost+found
</code></pre>
<p><code>/opt/db/backup/postgres/backups</code> & <code>/opt/db/backup/postgres/archives</code>, inherent in the image, are gone.</p>
<p>Can anybody point me to where to start looking for a solution in this?</p>
<h3>Statefulset</h3>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: postgres-stateful
name: postgres-stateful
labels:
app.kubernetes.io/name: postgres
app.kubernetes.io/component: database
app.kubernetes.io/part-of: postgres
app: postgres
spec:
serviceName: "postgres"
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: postgres
app.kubernetes.io/component: database
app.kubernetes.io/part-of: postgres
app: postgres
template:
metadata:
labels:
app: postgres
app.kubernetes.io/name: postgres
app.kubernetes.io/component: database
app.kubernetes.io/part-of: postgres
spec:
serviceAccountName: default
initContainers: # Give `postgres` user (id 1001) permissions to mounted volumes
- name: take-volume-mounts-ownership
image: dev-local.dev.dlz.net/postgresql:14.2-deb11
securityContext:
readOnlyRootFilesystem: true
env:
- name: "PGDATA"
value: "/opt/db/data/postgres/data"
command: [ "/bin/sh" ]
args: ["-c", "chown -R 1001:1001 /opt/db/data/postgres /opt/db/backup /tmp" ]
volumeMounts:
- name: pv-data
mountPath: /opt/db/data/postgres
- name: pv-backup
mountPath: /opt/db/backup # /opt/db/backup/postgres
- name: emptydir-tmp
mountPath: /tmp
containers:
- name: postgres
image: dev-local.dev.dlz.net/postgresql:14.2-deb11
imagePullPolicy: Always
readinessProbe:
exec:
command: ["pg_isready", "-q"]
periodSeconds: 10
initialDelaySeconds: 7
timeoutSeconds: 2
livenessProbe:
exec:
command: ["psql", "-q", "-c", "SELECT 1 WHERE 1=0"]
periodSeconds: 15
initialDelaySeconds: 20
timeoutSeconds: 2
env:
- name: "PGDATA"
value: "/opt/db/data/postgres/data"
envFrom:
- configMapRef:
name: postgres-configuration
ports:
- containerPort: 5432
name: postgresdb
resources:
requests:
memory: "256Mi"
cpu: "50m"
limits:
memory: "1Gi"
cpu: "1"
securityContext:
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1001
allowPrivilegeEscalation: false
volumeMounts:
- name: pv-data
mountPath: /opt/db/data/postgres/data # /var/lib/postgresql/data
- name: pv-backup
mountPath: /opt/db/backup # /opt/db/backup/postgres
- name: emptydir-tmp
mountPath: /tmp
volumes:
- name: pv-data
persistentVolumeClaim:
claimName: pgdata-ina-pvc
- name: pv-backup
persistentVolumeClaim:
claimName: pgbackup-ina-pvc
- name: emptydir-tmp
emptyDir: {}
</code></pre>
| vrms | <p>The directories which you had created in the Dockerfile are overlaid (hidden) when you mount a persistent volume to the same path. You can re-construct the directory structure in your "take-volume-mounts-ownership" init container:</p>
<pre><code>...
initContainers:
- name: take-volume-mounts-ownership
...
env:
- name: PGDATA
value: /opt/db/data/postgres/data
- name: PGBASE
value: /opt/db/postgres
- name: PGBACK
value: /opt/db/backup/postgres/backups
- name: PGARCH
value: /opt/db/backup/postgres/archives
...
args: ["-c", "mkdir -p $PGBASE $PGBIN $PGDATA $PGBACK $PGARCH && chown -R 1001:1001 /opt/db/data/postgres /opt/db/backup /tmp" ]
...
</code></pre>
| gohm'c |
<p>I have created the following persistent volume in GoogleCloud by deploying the following yml file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: staging-filestore
labels:
filestore: standard
spec:
capacity:
storage: 1T
accessModes:
- ReadWriteMany
mountOptions:
- lookupcache=positive #Disables caching to make all writes be sync to the server
nfs:
path: /staging
server: 10.64.16.130
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: staging-filestore-pvc
spec:
selector:
matchLabels:
filestore: standard
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1T
</code></pre>
<p>The volume is created successfully and mounted in the service below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: demo
labels:
app: demo
vendor: rs
spec:
selector:
app: demo
vendor: rs
ports:
- port: 4010
name: internal
targetPort: 4010
- port: 80
name: external
targetPort: 4010
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
labels:
app: demo
vendor: rs
spec:
selector:
matchLabels:
app: demo
replicas: 1
revisionHistoryLimit: 2
template:
metadata:
labels:
app: demo
vendor: rs
spec:
imagePullSecrets:
- name: dockerhub-secret
containers:
- name: demo
image: rs/demo:latest
imagePullPolicy: Always
resources:
requests:
memory: "1Gi"
limits:
memory: "2Gi"
volumeMounts:
- mountPath: /dumps/heap-dump
name: dumpstorage
readinessProbe:
httpGet:
path: /_health
port: http
initialDelaySeconds: 5
periodSeconds: 5
failureThreshold: 5
livenessProbe:
httpGet:
path: /_health
port: http
initialDelaySeconds: 25
periodSeconds: 5
failureThreshold: 10
envFrom:
- configMapRef:
name: rs-config
env:
- name: JAVA_TOOL_OPTIONS
value: " -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=\"/dumps/demo-heap-dump\""
ports:
- containerPort: 4010
name: http
volumes:
- name: dumpstorage
persistentVolumeClaim:
            claimName: staging-filestore-pvc
readOnly: false
</code></pre>
<p>I want the heap dump that is generated when the application hits an out-of-memory error to be stored on the staging-filestore-pvc volume. To test this, I limited the heap size to 16 MB with the option <code>-Xmx16m</code>. The application crashes, causing the pod to crash as well. I can see in the logs that the heap dump is created:</p>
<pre><code>2022-08-08 15:34:55.767 CESTDumping heap to /dumps/demo-heap-dump ...
2022-08-08 15:34:55.990 CESTHeap dump file created [26757864 bytes in 0.223 secs]
</code></pre>
<p>Then I deploy the service yml file again with the <code>-Xmx1G</code> option, enter the pod with <code>kubectl exec -it demo-77f84d9957-vxrsz -- sh</code> and go to the /dumps directory, but there is no demo-heap-dump file.
<a href="https://i.stack.imgur.com/0X3U1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0X3U1.png" alt="enter image description here" /></a></p>
<p>I am struggling to understand why the file is not present although no error is thrown and the logs say that the file is created. Thank you for the help.</p>
<p>Regards,
Rando.</p>
| Rando Shtishi | <p>The JVM writes the dump to <code>/dumps/demo-heap-dump</code>, but only <code>/dumps/heap-dump</code> is backed by the persistent volume, so the file lands on the pod's ephemeral filesystem and is gone after a restart. Point <code>HeapDumpPath</code> inside the mounted directory:</p>
<pre><code>...
env:
- name: JAVA_TOOL_OPTIONS
value: " -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=\"/dumps/heap-dump\"" # <-- update the path
...
</code></pre>
| gohm'c |
<p>Kustomize directory structure</p>
<pre><code>βββ base
βΒ Β βββ deployment.yaml
βΒ Β βββ kustomization.yaml
βββ overlays
βββ prod
βββ kustomization.yaml
βββ namespace-a
βΒ Β βββ deployment-a1
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ patch.yaml
βΒ Β βββ deployment-a2
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ patch.yaml
βΒ Β βββ kustomization.yaml
βΒ Β βββ namespace.yaml
βββ namespace-b
βΒ Β βββ deployment-b1
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ patch.yaml
βΒ Β βββ deployment-b2
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ patch.yaml
βΒ Β βββ kustomization.yaml
βΒ Β βββ namespace.yaml
βββ namespace-c
</code></pre>
<p>As you can see above, I have a <code>prod</code> environment with <code>namespace-a</code>, <code>namespace-b</code> and a few more.
To create deployments for all of them, I can simply run the command below:</p>
<pre><code> > kustomize overlays/prod
</code></pre>
<p>Which works flawlessly, both namespaces are created along with other deployment files for all deployments.</p>
<p>To create a deployment for only namespace-a:</p>
<pre><code> > kustomize overlays/prod/namespace-a
</code></pre>
<p>That also works. :)</p>
<p>But that's not where the story ends for me, at least.</p>
<p>I would like to keep the current functionality and be able to deploy <code>deployment-a1, deployment-a2 ...</code></p>
<pre><code> > kustomize overlays/prod/namespace-a/deployment-a1
</code></pre>
<p>If I put the namespace.yaml inside the <code>deployment-a1</code> folder and add it to <code>kustomization.yaml</code>,
then the above command works, but the previous two fail with an error because now we have two namespace files with the same name.</p>
<p>I have 2 queries.</p>
<ol>
<li>Can this directory structure be improved?</li>
<li>How can I create a namespace with a single deployment without breaking the other functionality?</li>
</ol>
<p>Full code can be seen <a href="https://github.com/deepak-gc/kustomize-namespace-issue" rel="nofollow noreferrer">here</a></p>
| Arian | <p>In your particular case, in the most ideal scenario, all the required namespaces should already be created before running the <code>kustomize</code> command.
However, I know that you would like to create namespaces dynamically as needed.</p>
<p>Using a Bash script as some kind of wrapper can definitely help with this approach, but I'm not sure if you want to use this.</p>
<p>Below, I'll show you how this can work, and you can choose if it's right for you.</p>
<hr />
<p>First, I created a <code>kustomize-wrapper</code> script that requires two arguments:</p>
<ol>
<li>The name of the Namespace you want to use.</li>
<li>Path to the directory containing the <code>kustomization.yaml</code> file.</li>
</ol>
<p><strong>kustomize-wrapper.sh</strong></p>
<pre><code>$ cat kustomize-wrapper.sh
#!/bin/bash
if [ -z "$1" ] || [ -z "$2" ]; then
echo "Pass required arguments !"
echo "Usage: $0 NAMESPACE KUSTOMIZE_PATH"
exit 1
else
NAMESPACE=$1
KUSTOMIZE_PATH=$2
fi
echo "Creating namespace"
sed -i "s/name:.*/name: ${NAMESPACE}/" namespace.yaml
kubectl apply -f namespace.yaml
echo "Setting namespace: ${NAMESPACE} in the kustomization.yaml file"
sed -i "s/namespace:.*/namespace: ${NAMESPACE}/" base/kustomization.yaml
echo "Deploying resources in the ${NAMESPACE}"
kustomize build ${KUSTOMIZE_PATH} | kubectl apply -f -
</code></pre>
<p>As you can see, this script creates a namespace using the <code>namespace.yaml</code> file as the template. It then sets the same namespace in the <code>base/kustomization.yaml</code> file and finally runs the <code>kustomize</code> command with the path you provided as the second argument.</p>
<p><strong>namespace.yaml</strong></p>
<pre><code>$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name:
</code></pre>
<p><strong>base/kustomization.yaml</strong></p>
<pre><code>$ cat base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace:
resources:
- deployment.yaml
</code></pre>
<p><strong>Directory structure</strong></p>
<pre><code>$ tree
.
βββ base
β βββ deployment.yaml
β βββ kustomization.yaml
βββ kustomize-wrapper.sh
βββ namespace.yaml
βββ overlays
βββ prod
βββ deployment-a1
β βββ kustomization.yaml
β βββ patch.yaml
βββ deployment-a2
β βββ kustomization.yaml
β βββ patch.yaml
βββ kustomization.yaml
</code></pre>
<p>We can check if it works as expected.</p>
<p>Creating the <code>namespace-a</code> Namespace along with <code>app-deployment-a1</code> and <code>app-deployment-a2</code> Deployments:</p>
<pre><code>$ ./kustomize-wrapper.sh namespace-a overlays/prod
Creating namespace
namespace/namespace-a created
Setting namespace: namespace-a in the kustomization.yaml file
deployment.apps/app-deployment-a1 created
deployment.apps/app-deployment-a2 created
</code></pre>
<p>Creating only the <code>namespace-a</code> Namespace and <code>app-deployment-a1</code> Deployment:</p>
<pre><code>$ ./kustomize-wrapper.sh namespace-a overlays/prod/deployment-a1
Creating namespace
namespace/namespace-a created
Setting namespace: namespace-a in the kustomization.yaml file
deployment.apps/app-deployment-a1 created
</code></pre>
| matt_j |
<p>How can I make a flask session work across multiple instances of containers using Kubernetes? Or does Kubernetes always maintain the same container for a given session?</p>
| Nishant Aklecha | <p>Default Flask sessions are stored client-side (in the browser) as a cookie and cryptographically signed to prevent tampering. Every request to your Flask application is accompanied by this cookie. Therefore, if all running containers have the same app (at least the same secret key used for signing), then they should all have access to the same session data.</p>
<p>Note:</p>
<ul>
<li>This is cryptographically signed, but it is not encrypted, so don't store sensitive information in the session</li>
<li>Flask-Session can be installed for server-side session support</li>
</ul>
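<p>On the Kubernetes side, a minimal sketch (the secret, deployment and variable names below are assumptions, and the Flask app is assumed to read its <code>SECRET_KEY</code> from the environment) is to inject the same signing key into every replica, so a cookie signed by one pod validates on any other:</p>
<pre><code># create one shared signing key and expose it to all replicas of the deployment
kubectl create secret generic flask-secret --from-literal=SECRET_KEY='change-me'
kubectl set env deployment/flask-app --from=secret/flask-secret
</code></pre>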
| Brandon Fuerst |
<p>My goal is to attach a dynamic volume claim to a deployment on EKS cluster, I am using this tutorial
<a href="https://aws.amazon.com/blogs/storage/persistent-storage-for-kubernetes/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/storage/persistent-storage-for-kubernetes/</a></p>
<p>My manifests are storage class, volume claim and a deployment : ( Same as from tutorial)</p>
<pre class="lang-yaml prettyprint-override"><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: efs-sc
provisioner: efs.csi.aws.com
parameters:
provisioningMode: efs-ap
fileSystemId: fs-XXXX
directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: efs-claim-1
spec:
accessModes:
- ReadWriteMany
storageClassName: efs-sc
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
name: debug-app
spec:
replicas: 1
selector:
matchLabels:
component: debug-app
template:
metadata:
labels:
app: debug-app
component: debug-app
spec:
containers:
- image: ubuntu:20.04
imagePullPolicy: IfNotPresent
name: debug-app
command: ["/bin/sh","-c"]
args:
- sleep 3650d
#command: ["/bin/sh","-c", "sleep 365d"]
ports:
- containerPort: 8000
resources:
limits:
cpu: 2
memory: 4Gi
requests:
cpu: 10m
memory: 16Mi
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: efs-claim-1
</code></pre>
<p>The storage class is created correctly:</p>
<pre class="lang-bash prettyprint-override"><code>NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
efs-sc efs.csi.aws.com Delete Immediate false 5m17s
</code></pre>
<p>However, the persistent volume claim is stuck in <code>Pending</code> state with the following description:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 2m18s (x17 over 6m18s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
</code></pre>
<p>It's clear the EFS storage class can't provision the persistent volume. I went back and tried with a newly created EFS file system, following
<a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html</a></p>
<p>But the same issue persisted.</p>
<p>Any possible fixes here, please?
Thanks</p>
<p>PS: Just mentioning that I'm not using a Fargate profile.</p>
| Reda E. | <blockquote>
<p>I went back and tried with newly created efs storage following <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html</a></p>
</blockquote>
<p>But you didn't install the <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html#efs-install-driver" rel="nofollow noreferrer">driver</a>. Hence you get <code>...waiting for a volume to be created, either by external provisioner "efs.csi.aws.com"</code>.</p>
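<p>For reference, a minimal sketch of installing the driver with Helm (the IAM/IRSA setup for the controller service account still has to be done as described in the linked guide):</p>
<pre><code>helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver -n kube-system
# once the controller and node pods are running, the PVC should get provisioned
kubectl get pods -n kube-system | grep efs-csi
</code></pre>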
| gohm'c |
<p>I have installed redis in a K8S cluster via Helm in namespace redis1, using ports 6379 and 26379.</p>
<p>And I installed another redis in the same K8S cluster via Helm in namespace redis2, using ports 6380 and 26380.</p>
<p>redis1 works but redis2 gives an error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned redis2/redis-redis-ha-server-0 to worker3
Normal Pulled 30m kubelet Container image "redis:v5.0.6-alpine" already present on machine
Normal Created 30m kubelet Created container config-init
Normal Started 30m kubelet Started container config-init
Normal Pulled 29m kubelet Container image "redis:v5.0.6-alpine" already present on machine
Normal Created 29m kubelet Created container redis
Normal Started 29m kubelet Started container redis
Normal Killing 28m (x2 over 29m) kubelet Container sentinel failed liveness probe, will be restarted
Normal Pulled 28m (x3 over 29m) kubelet Container image "redis:v5.0.6-alpine" already present on machine
Normal Created 28m (x3 over 29m) kubelet Created container sentinel
Normal Started 28m (x3 over 29m) kubelet Started container sentinel
Warning Unhealthy 14m (x25 over 29m) kubelet Liveness probe failed: dial tcp xx.xxx.x.xxx:26380: connect: connection refused
Warning BackOff 4m56s (x85 over 25m) kubelet Back-off restarting failed container
</code></pre>
<p>I had previously installed rabbitmq the same way in the same cluster and it works, so I hoped I could use the same method with redis.</p>
<p>Please advise what should be done.</p>
| Siriphong Wanjai | <p>As this issue was resolved in the comments section by @David Maze, I decided to provide a Community Wiki answer just for better visibility to other community members.</p>
<p>Services in Kubernetes allow applications to receive traffic and can be exposed in different ways as there are different types of Kubernetes services (see: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">Overview of Kubernetes Services</a>). In case of the default <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-clusterip" rel="nofollow noreferrer">ClusterIP</a> type, it exposes the Service on an internal IP (each Service also has its own IP address) in the cluster and makes the Service only reachable from within the cluster. Each Service has its own IP address, so it's okay if they listen on the same port (but each on their own IP address).</p>
<hr />
<p>Below is a simple example to illustrate that it's possible to have two (or more) Services listening on the same port ( <code>80</code> port).</p>
<p>I've created two Deployments (<code>app1</code> and <code>app2</code>) and exposed it with <code>ClusterIP</code> Services using the same port number:</p>
<pre><code>$ kubectl create deploy app-1 --image=nginx
deployment.apps/app-1 created
$ kubectl create deploy app-2 --image=nginx
deployment.apps/app-2 created
$ kubectl expose deploy app-1 --port=80
service/app-1 exposed
$ kubectl expose deploy app-2 --port=80
service/app-2 exposed
$ kubectl get pod,svc
NAME READY STATUS RESTARTS
pod/app-1-5d9ccdb595-x5s55 1/1 Running 0
pod/app-2-7747dcb588-trj8d 1/1 Running 0
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
service/app-1 ClusterIP 10.8.12.54 <none> 80/TCP
service/app-2 ClusterIP 10.8.11.181 <none> 80/TCP
</code></pre>
<p>Finally, we can check if it works as expected:</p>
<pre><code>$ kubectl run test --image=nginx
pod/test created
$ kubectl exec -it test -- bash
root@test:/# curl 10.8.12.54:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
root@test:/# curl 10.8.11.181:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
| matt_j |
<p>Autopilot makes all decisions about the nodes but, why are all nodes being created with <code>cloud.google.com/gke-boot-disk=pd-standard</code>? Is it possible to all nodes be created with ssd disk? If so, how can it be done?</p>
| Bruno Tomé | <p>Currently, Autopilot-managed nodes do not use SSD as the boot device, and local SSD is not supported either. This behavior cannot be changed.</p>
| gohm'c |
<p>I have Google cloud composer running in 2 GCP projects. I have updated composer environment variable in both. One composer restarted fine within few minutes. I have problem in another & it shows below error as shown in images.</p>
<p><strong>Update operation failed. Couldn't start composer-agent, a GKE job that updates kubernetes resources. Please check if your GKE cluster exists and is healthy.</strong></p>
<p><a href="https://i.stack.imgur.com/vMytz.png" rel="nofollow noreferrer">This is the error what I see when I enter the composer</a></p>
<p><a href="https://i.stack.imgur.com/JYTJP.png" rel="nofollow noreferrer">This is the environment overview</a></p>
<p><a href="https://i.stack.imgur.com/RMWSq.png" rel="nofollow noreferrer">GKE cluster notification</a></p>
<p><a href="https://i.stack.imgur.com/mDyes.png" rel="nofollow noreferrer">GKE pods overview</a></p>
<p>I am trying to find out how to resolve the problem but I haven't found any satisfying answers. My colleagues suspect a firewall & org-policy issue, but I haven't changed any of those.</p>
<p>Can someone let me know what caused this problem, given that Cloud Composer is managed by Google, and how to resolve this issue now?</p>
| John | <p>Since Cloud Composer is a managed resource, when the GKE cluster that serves the environment for your Composer is unhealthy you should contact Google Cloud Support. That GKE cluster should work just fine and you should not even need to know about its existence.</p>
<p>Also check whether you are hitting any limits or quotas in your project.</p>
<p>When nothing helps, recreating the Cloud Composer environment is always a good idea.</p>
| Wojtek Smolak |
<p>I am currently deploying <a href="https://docs.openfaas.com/" rel="nofollow noreferrer">openfaas</a> on my local virtual machine's <a href="https://kubernetes.io/" rel="nofollow noreferrer">kubernetes</a> cluster. I found that the time zone of the container started after publishing the function is inconsistent with the host machine. How should I solve this problem?</p>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-node-1 ~]# date
# Host time
2021εΉ΄ 06ζ 09ζ₯ ζζδΈ 11:24:40 CST
[root@k8s-node-1 ~]# docker exec -it 5410c0b41f7a date
# Container time
Wed Jun 9 03:24:40 UTC 2021
</code></pre>
| silentao | <p>As <strong>@coderanger</strong> pointed out in the comments section, the timezone difference is not related to <code>OpenFaaS</code>.<br />
It depends on the image you are using; most images use the <code>UTC</code> timezone.
Normally this shouldn't be a problem, but in some special cases you may want to change this timezone.</p>
<p>As described in this <a href="https://bobcares.com/blog/change-time-in-docker-container/" rel="nofollow noreferrer">article</a>, you can use the <code>TZ</code> environment variable to set the timezone of a container (there are also other ways to change the timezone).</p>
<p>If you have your own <code>Dockerfile</code>, you can use the <a href="https://docs.docker.com/engine/reference/builder/#env" rel="nofollow noreferrer">ENV</a> instruction to set this variable:<br />
<strong>NOTE:</strong> The <code>tzdata</code> package has to be installed in the container for setting the <code>TZ</code> variable.</p>
<pre><code>$ cat Dockerfile
FROM nginx:latest
RUN apt-get update && apt-get install -y tzdata
ENV TZ="Europe/Warsaw"
$ docker build -t mattjcontainerregistry/web-app-1 .
$ docker push mattjcontainerregistry/web-app-1
$ kubectl run time-test --image=mattjcontainerregistry/web-app-1
pod/time-test created
$ kubectl exec -it time-test -- bash
root@time-test:/# date
Wed Jun 9 17:22:03 CEST 2021
root@time-test:/# echo $TZ
Europe/Warsaw
</code></pre>
| matt_j |
<p>I started to use Kubernetes to understand concepts like pods, objects and so on. I started to learn about Persistent Volumes and Persistent Volume Claims. From my understanding, if I save data from a mysql pod to a persistent volume, the data is kept even if I delete the mysql pod; the data is saved on the volume. But I don't think it works in my case...</p>
<p>I have a Spring Boot pod from which I save data in the mysql pod. The data is saved and I can retrieve it, but when I restart my pods, or delete or replace them, that saved data is lost, so I think I messed up something. Can you give me a hint, please? Thanks...</p>
<p>Bellow are my Kubernetes files:</p>
<ul>
<li>Mysql pod:</li>
</ul>
<hr />
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels: #must match Service and DeploymentLabels
app: mysql
spec:
containers:
- image: mysql:5.7
args:
- "--ignore-db-dir=lost+found"
name: mysql #name of the db
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret #name of the secret obj
key: password #which value from inside the secret to take
- name: MYSQL_ROOT_USER
valueFrom:
secretKeyRef:
name: db-secret
key: username
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: db-config
key: name
ports:
- containerPort: 3306
name: mysql
volumeMounts: #mount volume obtained from PVC
- name: mysql-persistent-storage
mountPath: /var/lib/mysql #mounting in the container will be here
volumes:
- name: mysql-persistent-storage #obtaining volume from PVC
persistentVolumeClaim:
claimName: mysql-pv-claim # can use the same claim in different pods
</code></pre>
<hr />
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql #DNS name
labels:
app: mysql
spec:
ports:
- port: 3306
targetPort: 3306
selector: #mysql pod should contain same label
app: mysql
clusterIP: None # we use DNS
</code></pre>
<p>Persistent Volume and Persistent Volume Claim files:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim #name of our pvc
labels:
app: mysql
spec:
volumeName: host-pv #claim that volume created with this name
accessModes:
- ReadWriteOnce
storageClassName: standard
resources:
requests:
storage: 1Gi
</code></pre>
<hr />
<pre><code>apiVersion: v1 #version of our PV
kind: PersistentVolume #kind of obj we gonna create
metadata:
name: host-pv # name of our PV
spec: #spec of our PV
capacity: #size
storage: 4Gi
volumeMode: Filesystem #storage Type, File and Blcok
storageClassName: standard
accessModes:
  - ReadWriteOnce # can be used by multiple pods, but only from a single node
# - ReadOnlyMany # on multiple nodes
 # - WriteOnlyMany # only for multiple nodes, not for the hostPath type
hostPath: #which type of pv
path: "/mnt/data"
type: DirectoryOrCreate
persistentVolumeReclaimPolicy: Retain
</code></pre>
<p>My Spring book K8 file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: book-service
spec:
selector:
app: book-example
ports:
- protocol: 'TCP'
port: 8080
targetPort: 8080
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: book-deployment
spec:
replicas: 1
selector:
matchLabels:
app: book-example
template:
metadata:
labels:
app: book-example
spec:
containers:
- name: book-container
image: cinevacineva/kubernetes_book_pv:latest
imagePullPolicy: Always
# ports:
# - containerPort: 8080
env:
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: db-config
key: host
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: db-config
key: name
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-user
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-user
key: password
  # & minikube -p minikube docker-env | Invoke-Expression links docker images we create with minikube, so we no longer need to push them
</code></pre>
| MyProblems | <p><code>...if i save data from mysql pod to a persistent volume, the data is saved no matter if i delete the mysql pod, the data is saved on the volume, but i don't think it works in my case...</code></p>
<p>Your previous data will not be available when the pod switches nodes. To use <code>hostPath</code> you don't really need a PVC/PV. Try:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
...
spec:
...
template:
...
spec:
...
nodeSelector: # <-- make sure your pod runs on the same node
<node label>: <value unique to the mysql node>
volumes: # <-- mount the data path on the node, no pvc/pv required.
- name: mysql-persistent-storage
hostPath:
path: /mnt/data
type: DirectoryOrCreate
containers:
- name: mysql
...
volumeMounts: # <-- let mysql write to it
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
</code></pre>
| gohm'c |
<p>We have a medium-sized Kubernetes cluster. Imagine a situation where approximately 70 pods are connecting to one socket server. It works fine most of the time; however, from time to time one or two pods just fail to resolve k8s DNS, and it times out with the following error:</p>
<pre><code>Error: dial tcp: lookup thishost.production.svc.cluster.local on 10.32.0.10:53: read udp 100.65.63.202:36638->100.64.209.61:53: i/o timeout at
</code></pre>
<p>What we noticed is that this is not the only service that fails intermittently; other services experience this from time to time. We used to ignore it, since it was very random and rare, but in the above case it is very noticeable. The only solution is to actually kill the faulty pod (restarting doesn't help).</p>
<p>Has anyone experienced this? Do you have any tips on how to debug it/ fix?</p>
<p>It almost feels as if it's beyond our expertise and is fully related to the internals of the DNS resolver.</p>
<p>Kubernetes version: 1.23.4
Container Network: cilium</p>
| user3677173 | <p>This issue is most probably related to the CNI.
I would suggest following this link to debug the issue:
<a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a></p>
<p>To be able to help you, we need more information:</p>
<ol>
<li><p>is this cluster on-premise or cloud?</p>
</li>
<li><p>what are you using for CNI?</p>
</li>
<li><p>how many nodes are running and are they all in the same subnet? If yes, do they have other interfaces?</p>
</li>
<li><p>share the below command result.</p>
<p>kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o wide</p>
</li>
<li><p>when you restart the pod to solve the issue temporarily, does it stay on the same node or does it change?</p>
</li>
</ol>
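<p>In the meantime, the first checks from the linked debugging guide can be run against the cluster, for example:</p>
<pre><code># run the dnsutils pod from the official DNS debugging guide
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
# and look for errors or timeouts in the CoreDNS logs
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
</code></pre>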
| mohalahmad |
<p>I'm following this tutorial <a href="https://docs.aws.amazon.com/enclaves/latest/user/kubernetes.html" rel="nofollow noreferrer">AWS Enclaves with Amazon EKS </a>. Unfortunately, I run into <code>exceeded max wait time for StackCreateComplete waiter</code> error after around 35 minutes and I don't know why...</p>
<p>It seems to stuck when it tries to create the managed nodegroup. Here is the output of the last few lines in the terminal:</p>
<pre class="lang-bash prettyprint-override"><code>2023-03-18 23:24:20 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:25:33 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:27:26 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:28:55 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:30:30 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:32:15 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:33:09 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:34:36 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:36:17 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:37:13 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:38:31 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:38:53 [βΉ] waiting for CloudFormation stack "eksctl-ne-cluster-nodegroup-managed-ng-1"
2023-03-18 23:38:53 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2023-03-18 23:38:53 [βΉ] to cleanup resources, run 'eksctl delete cluster --region=eu-central-1 --name=ne-cluster'
2023-03-18 23:38:53 [β] exceeded max wait time for StackCreateComplete waiter
Error: failed to create cluster "ne-cluster"
</code></pre>
<p>This is my launch template config:</p>
<pre><code>{
"ImageId": "ami-0499632f10efc5a62",
"InstanceType": "m5.xlarge",
"TagSpecifications": [{
"ResourceType": "instance",
"Tags": [{
"Key":"Name",
"Value":"webserver"
}]
}],
"UserData":"TUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PU1ZQk9VTkRBUlk9PSIKCi0tPT1NWUJPVU5EQVJZPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgoKIyEvYmluL2Jhc2ggLWUKcmVhZG9ubHkgTkVfQUxMT0NBVE9SX1NQRUNfUEFUSD0iL2V0Yy9uaXRyb19lbmNsYXZlcy9hbGxvY2F0b3IueWFtbCIKIyBOb2RlIHJlc291cmNlcyB0aGF0IHdpbGwgYmUgYWxsb2NhdGVkIGZvciBOaXRybyBFbmNsYXZlcwpyZWFkb25seSBDUFVfQ09VTlQ9MgpyZWFkb25seSBNRU1PUllfTUlCPTc2OAoKIyBUaGlzIHN0ZXAgYmVsb3cgaXMgbmVlZGVkIHRvIGluc3RhbGwgbml0cm8tZW5jbGF2ZXMtYWxsb2NhdG9yIHNlcnZpY2UuCmFtYXpvbi1saW51eC1leHRyYXMgaW5zdGFsbCBhd3Mtbml0cm8tZW5jbGF2ZXMtY2xpIC15CgojIFVwZGF0ZSBlbmNsYXZlJ3MgYWxsb2NhdG9yIHNwZWNpZmljYXRpb246IGFsbG9jYXRvci55YW1sCnNlZCAtaSAicy9jcHVfY291bnQ6LiovY3B1X2NvdW50OiAkQ1BVX0NPVU5UL2ciICRORV9BTExPQ0FUT1JfU1BFQ19QQVRICnNlZCAtaSAicy9tZW1vcnlfbWliOi4qL21lbW9yeV9taWI6ICRNRU1PUllfTUlCL2ciICRORV9BTExPQ0FUT1JfU1BFQ19QQVRICiMgUmVzdGFydCB0aGUgbml0cm8tZW5jbGF2ZXMtYWxsb2NhdG9yIHNlcnZpY2UgdG8gdGFrZSBjaGFuZ2VzIGVmZmVjdC4Kc3lzdGVtY3RsIHJlc3RhcnQgbml0cm8tZW5jbGF2ZXMtYWxsb2NhdG9yLnNlcnZpY2UKZWNobyAiTkUgdXNlciBkYXRhIHNjcmlwdCBoYXMgZmluaXNoZWQgc3VjY2Vzc2Z1bGx5LiIKLS09PU1ZQk9VTkRBUlk9PQ=="
}
</code></pre>
<p>and I can run the following command to create the launch template successfully:</p>
<pre><code>aws ec2 create-launch-template \
--launch-template-name TemplateForEnclaveServer \
--version-description WebVersion1 \
--tag-specifications 'ResourceType=launch-template,Tags=[{Key=purpose,Value=production}]' \
--launch-template-data file://lt_nitro_config.json
</code></pre>
<p>In step two the cluster creation fails. The configuration is the following:</p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: ne-cluster
region: eu-central-1
managedNodeGroups:
- name: managed-ng-1
launchTemplate:
id: lt-04c4d2c58e20db555
version: "1" # optional (uses the default launch template version if unspecified)
minSize: 1
desiredCapacity: 1
</code></pre>
<p>The I run the command:</p>
<pre><code>eksctl create cluster -f cluster_nitro_config.yaml
</code></pre>
<p>The cluster creation works fine, but the managed nodegroup fails with the following output:</p>
<pre><code>
2023-03-18 23:38:53 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2023-03-18 23:38:53 [βΉ] to cleanup resources, run 'eksctl delete cluster --region=eu-central-1 --name=ne-cluster'
2023-03-18 23:38:53 [β] exceeded max wait time for StackCreateComplete waiter
Error: failed to create cluster "ne-cluster"
</code></pre>
<p>The console output is:</p>
<pre><code>Resource handler returned message: "[Issue(Code=NodeCreationFailure, Message=Instances failed to join the kubernetes cluster, ResourceIds=[i-00bf11cb814138f64])] (Service: null, Status Code: 0, Request ID: null)" (RequestToken: 5772ff82-596e-3e57-eb8f-c7ae277f0df2, HandlerErrorCode: GeneralServiceException)
</code></pre>
<p>I have no idea what this means.
Thanks in advance!</p>
| Denis | <p>Remove the <code>ImageId</code> from your LT and try again. When a managed node group's launch template specifies its own <code>ImageId</code>, EKS does not inject its bootstrap user data into the instances, so they never join the cluster (which is the <code>NodeCreationFailure</code> you see); without an <code>ImageId</code>, EKS uses its own optimized AMI and bootstraps it automatically.</p>
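<p>A sketch of one way to do that, reusing the template and cluster names from the question: delete the <code>ImageId</code> line from <code>lt_nitro_config.json</code>, publish it as a new launch template version, point the nodegroup config at that version and recreate the cluster:</p>
<pre><code># after removing "ImageId" from lt_nitro_config.json
aws ec2 create-launch-template-version \
    --launch-template-id lt-04c4d2c58e20db555 \
    --version-description NoCustomAMI \
    --launch-template-data file://lt_nitro_config.json

# clean up the half-created cluster, update "version" in cluster_nitro_config.yaml, then retry
eksctl delete cluster --region=eu-central-1 --name=ne-cluster
eksctl create cluster -f cluster_nitro_config.yaml
</code></pre>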
| gohm'c |
<p>I have a small instance of influxdb running in my kubernetes cluster.<br>
The data of that instance is stored in a persistent storage.<br>
But I also want to run the backup command from influx at a scheduled interval.<br></p>
<pre><code>influxd backup -portable /backuppath
</code></pre>
<p>What I do now is exec into the pod and run it manually.<br>
Is there a way that I can do this automatically?</p>
| Geert | <p>You can consider running a CronJob with <a href="https://bitnami.com/stack/kubectl/containers" rel="nofollow noreferrer">bitnami kubectl</a> which will execute the backup command. This is the same as exec-ing into the pod and running it manually, except now you automate it with a CronJob.</p>
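<p>A minimal sketch, created imperatively (the deployment name, schedule and paths are assumptions, and the job's service account needs RBAC permission for <code>pods/exec</code> on the influxdb pod):</p>
<pre><code># nightly backup at 02:00, executed by exec-ing into the influxdb pod
kubectl create cronjob influxdb-backup \
    --image=bitnami/kubectl:latest \
    --schedule="0 2 * * *" \
    -- kubectl exec deploy/influxdb -- influxd backup -portable /backuppath
</code></pre>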
| gohm'c |
<p>What would be the best solution to stream AKS Container logs and cluster level logs to Azure Eventhub?</p>
| Mehrdad Abdolghafari | <p>I'm experiencing the same issue: the application logs cannot be forwarded directly to Event Hub. The workaround is to create a rule that forwards the logs to a storage account and another rule that picks them up from the storage blob and streams them in real time to Event Hub.
Creating a job to forward logs to storage account:
<a href="https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azure-event-hubs#add-action" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azure-event-hubs#add-action</a>
Stream logs from storage account to event hub:
<a href="https://learn.microsoft.com/en-us/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub</a></p>
| Fedor Gorin |
<p>The web service works just fine on my computer, but after it's deployed it doesn't work anymore and throws the error <code>413 Request Entity Too Large</code>. I added this code to the <code>startup.cs</code> file:</p>
<pre><code>services.Configure<FormOptions>(x =>
{
x.ValueLengthLimit = int.MaxValue;
x.MultipartBodyLengthLimit = int.MaxValue;
x.MemoryBufferThreshold = int.MaxValue;
x.MultipartBoundaryLengthLimit = 209715200;
});
</code></pre>
<p>but it didn't help at all. So after some research I added <code>nginx.ingress.kubernetes.io/proxy-body-size: "100m"</code> to <code>deployment.yaml</code>, but that didn't help either.</p>
| Thameur Saadi | <p>You can do this:
Log in to the server:</p>
<blockquote>
<p>sudo nano /etc/nginx/nginx.conf</p>
</blockquote>
<p>Set:</p>
<blockquote>
<p>client_max_body_size 2M;</p>
</blockquote>
<p>This works for me.</p>
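<p>If the 413 is actually returned by the NGINX ingress controller rather than a standalone NGINX, the equivalent setting is the <code>proxy-body-size</code> annotation, and it has to be placed on the Ingress resource (not on the Deployment), for example:</p>
<pre><code># replace <ingress-name> with the Ingress in front of the web service
kubectl annotate ingress <ingress-name> nginx.ingress.kubernetes.io/proxy-body-size=100m --overwrite
</code></pre>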
| Manh NV |
<p>We have a use case where we run pods with hostNetwork set to true, and these pods will be deployed in a dedicated node pool where maxPodRange is set to 32. To avoid IP wastage, we are trying to understand whether there is a way to override this maxPodRange constraint so that the kube-scheduler will not restrict us to 32 pods during deployment. Please let me know if you have come across any solution or workaround. Thanks in advance!</p>
| Jay | <p>For an Autopilot cluster 32 is the max and this restriction cannot be bypassed. You can go up to 256 pods per node on a non-Autopilot cluster. Check out the limits <a href="https://cloud.google.com/kubernetes-engine/quotas#limits_per_cluster" rel="nofollow noreferrer">here</a> and how to configure it <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr#setting_the_maximum_number_of_pods_in_a_new_node_pool_for_an_existing_cluster" rel="nofollow noreferrer">here</a>.</p>
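<p>On a Standard (non-Autopilot) cluster, a sketch of raising the limit when creating the dedicated node pool (names are placeholders; the value is fixed at node-pool creation time):</p>
<pre><code>gcloud container node-pools create host-network-pool \
    --cluster=my-cluster \
    --max-pods-per-node=110
</code></pre>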
| gohm'c |
<p>I have a jenkins service deployed in EKS v 1.16 using helm chart. The PV and PVC had been accidentally deleted so I have recreated the PV and PVC as follows:</p>
<p>Pv.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-vol
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: ext4
volumeID: aws://us-east-2b/vol-xxxxxxxx
capacity:
storage: 120Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: jenkins-ci
namespace: ci
persistentVolumeReclaimPolicy: Retain
storageClassName: gp2
volumeMode: Filesystem
status:
phase: Bound
</code></pre>
<p>PVC.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-ci
namespace: ci
spec:
storageClassName: gp2
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 120Gi
volumeMode: Filesystem
volumeName: jenkins-vol
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 120Gi
phase: Bound
</code></pre>
<p>kubectl describe sc gp2</p>
<pre><code>Name: gp2
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2","namespace":""},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
</code></pre>
<p>The issue I'm facing is that the pod does not run when it's scheduled on a node in a different availability zone than the EBS volume. How can I fix this?</p>
| DevopsinAfrica | <p>Add the following labels to the PersistentVolume:</p>
<pre><code> labels:
    failure-domain.beta.kubernetes.io/region: us-east-2
failure-domain.beta.kubernetes.io/zone: us-east-2b
</code></pre>
<p>example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.beta.kubernetes.io/gid: "1000"
labels:
    failure-domain.beta.kubernetes.io/region: us-east-2
failure-domain.beta.kubernetes.io/zone: us-east-2b
name: test-pv-1
spec:
accessModes:
- ReadWriteOnce
csi:
driver: ebs.csi.aws.com
fsType: xfs
volumeHandle: vol-0d075fdaa123cd0e
capacity:
storage: 100Gi
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
</code></pre>
<p>With the above labels the pod will automatically run in the same AZ where the volume is.</p>
| GZU5 |
<p>I have a requirement to rewrite all URLs to lowercase.</p>
<p>E.g. <code>test.com/CHILD</code> to <code>test.com/child</code></p>
<p>The frontend application is deployed with Docker on Azure Kubernetes Service. Ingress is handled by the NGINX ingress controller.</p>
| Arup Nayak | <p>You can rewrite URLs using Lua as described in the <a href="https://www.rewriteguide.com/nginx-enforce-lower-case-urls/" rel="nofollow noreferrer">Enforce Lower Case URLs (NGINX)</a> article.</p>
<p>All we need to do is add the following configuration block to nginx:</p>
<pre><code>location ~ [A-Z] {
rewrite_by_lua_block {
ngx.redirect(string.lower(ngx.var.uri), 301);
}
}
</code></pre>
<p>I will show you how it works.</p>
<hr />
<p>First, I created an Ingress resource with the previously mentioned configuration:</p>
<pre><code>$ cat test-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/server-snippet: |
location ~ [A-Z] {
rewrite_by_lua_block {
ngx.redirect(string.lower(ngx.var.uri), 301);
}
}
spec:
rules:
- http:
paths:
- path: /app-1
pathType: Prefix
backend:
service:
name: app-1
port:
number: 80
$ kubectl apply -f test-ingress.yaml
ingress.networking.k8s.io/test-ingress created
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress <none> * <PUBLIC_IP> 80 58s
</code></pre>
<p>Then I created a sample <code>app-1</code> Pod and exposed it on port <code>80</code>:</p>
<pre><code>$ kubectl run app-1 --image=nginx
pod/app-1 created
$ kubectl expose pod app-1 --port=80
service/app-1 exposed
</code></pre>
<p>Finally, we can test if rewrite works as expected:</p>
<pre><code>$ curl -I <PUBLIC_IP>/APP-1
HTTP/1.1 301 Moved Permanently
Date: Wed, 06 Oct 2021 13:53:56 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: /app-1
$ curl -L <PUBLIC_IP>/APP-1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>Additionally, in the <code>ingress-nginx-controller</code> logs, we can see the following log entries:</p>
<pre><code>10.128.15.213 - - [06/Oct/2021:13:54:34 +0000] "GET /APP-1 HTTP/1.1" 301 162 "-" "curl/7.64.0" 83 0.000 [-] [] - - - - c4720e38c06137424f7b951e06c3762b
10.128.15.213 - - [06/Oct/2021:13:54:34 +0000] "GET /app-1 HTTP/1.1" 200 615 "-" "curl/7.64.0" 83 0.001 [default-app-1-80] [] 10.4.1.13:80 615 0.001 200 f96b5664765035de8832abebefcabccf
</code></pre>
| matt_j |
<p>This is an easy-to-run version of the code I wrote to do port-forwarding via client-go. The pod name, namespace, and port are hardcoded; you can change them to match what you have running.</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"flag"
"net/http"
"os"
"path/filepath"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/portforward"
"k8s.io/client-go/transport/spdy"
)
func main() {
stopCh := make(<-chan struct{})
readyCh := make(chan struct{})
var kubeconfig *string
if home := "/home/gianarb"; home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// create the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
reqURL := clientset.RESTClient().Post().
Resource("pods").
Namespace("default").
Name("test").
SubResource("portforward").URL()
transport, upgrader, err := spdy.RoundTripperFor(config)
if err != nil {
panic(err)
}
dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport}, http.MethodPost, reqURL)
fw, err := portforward.New(dialer, []string{"9999:9999"}, stopCh, readyCh, os.Stdout, os.Stdout)
if err != nil {
panic(err)
}
if err := fw.ForwardPorts(); err != nil {
panic(err)
}
}
</code></pre>
<p>Version golang 1.13:</p>
<pre><code> k8s.io/api v0.0.0-20190409021203-6e4e0e4f393b
k8s.io/apimachinery v0.0.0-20190404173353-6a84e37a896d
k8s.io/cli-runtime v0.0.0-20190409023024-d644b00f3b79
k8s.io/client-go v11.0.0+incompatible
</code></pre>
<p>The error I get is </p>
<blockquote>
<p>error upgrading connection: </p>
</blockquote>
<p>but there is nothing after the <code>:</code>.
Do you have any experience with this topic?
Thanks</p>
| GianArb | <pre><code>clientset.CoreV1().RESTClient().Post().
Resource("pods").
Namespace("default").
Name("test").
SubResource("portforward").URL()
</code></pre>
<p>works for me and gives a URL with <code>.../api/v1/namespaces/...</code>. The bare <code>clientset.RESTClient()</code> has no group/version configured, so the URL it builds is missing the <code>/api/v1</code> prefix and the SPDY upgrade request fails.</p>
| hsin |
<p>I had a Jenkins values.yaml file written prior to v1.19, and I need some help to change it to be v1.19 compliant.</p>
<p>In the old <code>Values.yaml</code> below, I tried adding <code>http path:/</code>. Should the <code>pathType</code> be <code>ImplementationSpecific</code>?</p>
<p>Only <code>defaultBackend</code> works for some reason; I'm not sure what I'm doing wrong with <code>path</code> and <code>pathType</code>.</p>
<pre><code>ingress:
enabled: true
# Override for the default paths that map requests to the backend
paths:
# - backend:
# serviceName: ssl-redirect
# servicePort: use-annotation
- backend:
serviceName: >-
{{ template "jenkins.fullname" . }}
# Don't use string here, use only integer value!
servicePort: 8080
# For Kubernetes v1.14+, use 'networking.k8s.io/v1'
apiVersion: "networking.k8s.io/v1"
labels: {}
annotations:
kubernetes.io/ingress.global-static-ip-name: jenkins-sandbox-blah
networking.gke.io/managed-certificates: jenkins-sandbox-blah
kubernetes.io/ingress.allow-http: "true"
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# Set this path to jenkinsUriPrefix above or use annotations to rewrite path
# path: "/jenkins"
# configures the hostname e.g. jenkins.example.com
hostName: jenkins.sandbox.io
</code></pre>
| kgunjikar | <p>There are several changes to the definition of Ingress resources between <code>v1.18</code> and <code>v1.19</code>.</p>
<p>In <code>v1.18</code>, we defined paths like this (see: <a href="https://v1-18.docs.kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">A minimal Ingress resource example</a>):</p>
<pre><code> paths:
- path: /testpath
pathType: Prefix
backend:
serviceName: test
servicePort: 80
</code></pre>
<p>In version <code>1.19</code> it was changed to: (see: <a href="https://v1-19.docs.kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">A minimal Ingress resource example</a>):</p>
<pre><code> paths:
- path: /testpath
pathType: Prefix
backend:
service:
name: test
port:
number: 80
</code></pre>
<p>In your example, you can slightly modified the <code>values.yaml</code> and try again:<br />
<strong>NOTE:</strong> You may need to change the port number and <a href="https://v1-19.docs.kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">pathType</a> depending on your configuration. Additionally, I've added the <code>kubernetes.io/ingress.class: nginx</code> annotation because I'm using <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a> and I didn't configure the hostname.</p>
<pre><code>$ cat values.yaml
controller:
ingress:
enabled: true
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: >-
{{ template "jenkins.fullname" . }}
port:
number: 8080
apiVersion: "networking.k8s.io/v1"
annotations:
kubernetes.io/ingress.global-static-ip-name: jenkins-sandbox-blah
networking.gke.io/managed-certificates: jenkins-sandbox-blah
kubernetes.io/ingress.allow-http: "true"
kubernetes.io/ingress.class: nginx
# configures the hostname e.g. jenkins.example.com
# hostName: jenkins.sandbox.io
</code></pre>
| matt_j |
<p>Openshift provides a default "node-tuning-operator" for tuning the system.</p>
<p>We can create our custom profiles using custom resource (CR).</p>
<p>But, the custom profiles are not being loaded/activated by the operator.</p>
<p>Instead of activating my custom profile, it is activating default profiles provided by openshift.</p>
<p>I am still working on figuring out the correct profile configuration.</p>
<p>What may be the reason that the tuned operator is not activating my custom profiles?</p>
| Pankaj Yadav | <p>Documentation for tuned operator can be found at <a href="https://docs.openshift.com/container-platform/4.7/scalability_and_performance/using-node-tuning-operator.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.7/scalability_and_performance/using-node-tuning-operator.html</a>.</p>
<p><strong>Generic Information about Tuned Operator:</strong></p>
<ul>
<li>Namespace/Project : openshift-cluster-node-tuning-operator</li>
<li>Operator : cluster-node-tuning-operator</li>
<li>DaemonSet : tuned</li>
<li>CRD : tuneds.tuned.openshift.io</li>
<li>CR : Tuned/default & Tuned/rendered</li>
</ul>
<p>The documentation says that we can create our own custom resources of <strong>kind=Tuned</strong> apart from default resources provided by openshift named "<strong>Tuned/default & Tuned/rendered</strong>".</p>
<p>These resources provide default profiles named <strong>"openshift", "openshift-node" and "openshift-control-plane"</strong>.</p>
<p>More information can be seen using below command:</p>
<pre><code>oc get Tuned/default -n openshift-cluster-node-tuning-operator -o yaml
</code></pre>
<p>Now, we can create our own custom profile as part of custom resource to tune our own settings.</p>
<p>The trick here is that the configuration in custom resource yaml file regarding custom profile should be correct. If it is correct, tuned operator will load the profile and activate it. If it is incorrect, then tuned operator will NOT activate it and it will ignore any future correct configuration also.</p>
<p>This is a bug in tuned operator which is addressed as part of <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1919970" rel="nofollow noreferrer">https://bugzilla.redhat.com/show_bug.cgi?id=1919970</a>.</p>
<p><strong>Fix:</strong> Upgrade openshift cluster version to 4.7 and above.</p>
<p><strong>Workaround:</strong> Delete the tuned pod so that operator will create new pod. Once new pod is created, it will activate correct profile. (Hoping configuration in your CR.yaml was corrected).</p>
<p><strong>Important Commands:</strong></p>
<ul>
<li>To find out the pod on which the tuned operator itself is running:</li>
</ul>
<blockquote>
<p>oc get pod -n openshift-cluster-node-tuning-operator -o wide</p>
</blockquote>
<ul>
<li>To check logs of the operator pod: (Actual pod name can be found from
above command)</li>
</ul>
<blockquote>
<p>oc logs pod/cluster-node-tuning-operator-6644cd48bb-z2qxn -n
openshift-cluster-node-tuning-operator</p>
</blockquote>
<ul>
<li>To check which Custom Resources of kind=Tuned are present:</li>
</ul>
<blockquote>
<p>oc get Tuned -n openshift-cluster-node-tuning-operator</p>
</blockquote>
<ul>
<li>To describe and check default profiles:</li>
</ul>
<blockquote>
<p>oc get Tuned/default -n openshift-cluster-node-tuning-operator -o yaml</p>
</blockquote>
<ul>
<li>To find out all tuned pods and nodes on which they are running in the
cluster:</li>
</ul>
<blockquote>
<p>oc get pod -n openshift-cluster-node-tuning-operator -o wide</p>
</blockquote>
<ul>
<li>To check logs of a particular tuned pod: (Actual pod name can be
found from above command)</li>
</ul>
<blockquote>
<p>oc logs tuned-h8xgh -n openshift-cluster-node-tuning-operator -f</p>
</blockquote>
<ul>
<li>To login into tuned pod and manually confirm tuning is applied or
not: (Actual pod name can be found from previous-to-above command)</li>
</ul>
<blockquote>
<p>oc exec -it tuned-h8xgh -n openshift-cluster-node-tuning-operator --bash</p>
</blockquote>
<ul>
<li><p>You can execute below commands after login into tuned pod using above
command to verify tuning settings:</p>
<blockquote>
<pre><code>bash-4.4# cat /etc/tuned/infra-nodes/tuned.conf
[main]
summary=Optimize systems running OpenShift Infra nodes
[sysctl]
fs.inotify.max_user_watches = 1048576
vm.swappiness = 1
</code></pre>
</blockquote>
<blockquote>
<pre><code>bash-4.4# tuned-adm recommend
Cannot talk to Tuned daemon via DBus. Is Tuned daemon running?
infra-nodes
bash-4.4#
</code></pre>
</blockquote>
</li>
</ul>
<blockquote>
<pre><code>bash-4.4# tuned-adm active
Cannot talk to Tuned daemon via DBus. Is Tuned daemon running?
Current active profile: openshift-control-plane
bash-4.4#
</code></pre>
</blockquote>
<p><strong>Note:</strong> The above sample code exactly depicts the issue asked in this question. If you notice, active profile is "openshift-control-plane" whereas recommended/loaded one is "infra-nodes". This is due to the existing bug as mentioned previously. Once you delete tuned pod (tuned-h8xgh), operator will recover and activate correct profile.</p>
<p><strong>Sample issue in custom profile configuration:</strong>
If profile priorities are same as default profiles, then operator will give warning something similar as below:</p>
<pre><code>W0722 04:24:25.490704 1 profilecalculator.go:480] profiles openshift-control-plane/infra-node have the same priority 30, please use a different priority for your custom profiles!
</code></pre>
| Pankaj Yadav |
<p>I'm working on attaching Amazon EFS (NFS) storage to a Kubernetes pod in EKS using Terraform.</p>
<p>Everything runs without an error and is created:</p>
<ul>
<li>Pod victoriametrics</li>
<li>Storage Classes</li>
<li>Persistent Volumes</li>
<li>Persistent Volume Claims</li>
</ul>
<p>However, the volume <code>victoriametrics-data</code> doesn't attach to the pod; at least, I can't see it in the pod's shell.
Could someone be so kind as to help me understand where I'm wrong, please?</p>
<p>I have cut some unimportant code from the question to keep it short.</p>
<pre class="lang-golang prettyprint-override"><code>resource "kubernetes_deployment" "victoriametrics" {
...
spec {
container {
image = var.image
name = var.name
...
volume_mount {
mount_path = "/data"
mount_propagation = "None"
name = "victoriametrics-data"
read_only = false
}
}
volume {
name = "victoriametrics-data"
}
}
}
...
}
</code></pre>
<pre class="lang-golang prettyprint-override"><code>resource "kubernetes_csi_driver" "efs" {
metadata {
name = "${local.cluster_name}-${local.namespace}"
annotations = {
name = "For store data of ${local.namespace}."
}
}
spec {
attach_required = true
pod_info_on_mount = true
volume_lifecycle_modes = ["Persistent"]
}
}
</code></pre>
<pre class="lang-golang prettyprint-override"><code>resource "kubernetes_storage_class" "efs" {
metadata {
name = "efs-sc"
}
storage_provisioner = kubernetes_csi_driver.efs.id
reclaim_policy = "Retain"
mount_options = ["file_mode=0700", "dir_mode=0777", "mfsymlinks", "uid=1000", "gid=1000", "nobrl", "cache=none"]
}
</code></pre>
<pre class="lang-golang prettyprint-override"><code>resource "kubernetes_persistent_volume" "victoriametrics" {
metadata {
name = "${local.cluster_name}-${local.namespace}"
}
spec {
storage_class_name = "efs-sc"
persistent_volume_reclaim_policy = "Retain"
volume_mode = "Filesystem"
access_modes = ["ReadWriteMany"]
capacity = {
storage = var.size_of_persistent_volume_claim
}
persistent_volume_source {
nfs {
path = "/"
server = local.eks_iput_target
}
}
}
}
</code></pre>
<pre class="lang-golang prettyprint-override"><code>resource "kubernetes_persistent_volume_claim" "victoriametrics" {
metadata {
name = local.name_persistent_volume_claim
namespace = local.namespace
}
spec {
access_modes = ["ReadWriteMany"]
storage_class_name = "efs-sc"
resources {
requests = {
storage = var.size_of_persistent_volume_claim
}
}
volume_name = kubernetes_persistent_volume.victoriametrics.metadata.0.name
}
}
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: victoriametrics
namespace: victoriametrics
labels:
k8s-app: victoriametrics
purpose: victoriametrics
annotations:
deployment.kubernetes.io/revision: '1'
name: >-
VictoriaMetrics - The High Performance Open Source Time Series Database &
Monitoring Solution.
spec:
replicas: 1
selector:
matchLabels:
k8s-app: victoriametrics
purpose: victoriametrics
template:
metadata:
name: victoriametrics
creationTimestamp: null
labels:
k8s-app: victoriametrics
purpose: victoriametrics
annotations:
name: >-
VictoriaMetrics - The High Performance Open Source Time Series
Database & Monitoring Solution.
spec:
containers:
- name: victoriametrics
image: 714154805721.dkr.ecr.us-east-1.amazonaws.com/victoriametrics:v1.68.0
ports:
- containerPort: 8428
protocol: TCP
- containerPort: 2003
protocol: TCP
- containerPort: 2003
protocol: UDP
volumeMounts:
- mountPath: /data
name: victoriametrics-data
- mountPath: /var/log
name: varlog
env:
- name: Name
value: victoriametrics
resources:
limits:
cpu: '1'
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
volumes:
- name: victoriametrics-data
emptyDir: {}
- name: varlog
emptyDir: {}
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
automountServiceAccountToken: true
shareProcessNamespace: false
securityContext: {}
schedulerName: default-scheduler
tolerations:
- key: k8s-app
operator: Equal
value: victoriametrics
effect: NoSchedule
enableServiceLinks: true
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
minReadySeconds: 15
revisionHistoryLimit: 10
progressDeadlineSeconds: 300
</code></pre>
| Rostyslav Malenko | <p>You need to use the persistent volume claim that you have created instead of <code>emptyDir</code> in your deployment:</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: victoriametrics
...
volumes:
- name: victoriametrics-data
persistentVolumeClaim:
claimName: <value of local.name_persistent_volume_claim>
</code></pre>
| gohm'c |
<p>We're using EKS to host our cluster and we're just getting our external-facing services up and running.</p>
<p>We've got external IPs for our services using the LoadBalancer type.</p>
<p>If we have a UI that needs the external IP address of a server, how can we assign this to the UI service without having to add it manually?</p>
<p>Is there a way to get the EXTERNAL-IP of one service into the config map of another?</p>
| Dan Stein | <p><code>...got external ips for our services using LoadBalancer type...</code></p>
<p>Your UI can refer to the "external facing services" using the service DNS name. For example, if the "external facing services" run in a namespace called "external", the UI can find the service with <code><name of external facing service>.external</code>; the FQDN will be <code><name of external facing service>.external.svc.cluster.local</code>.</p>
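<p>A minimal sketch (service, namespace and key names are assumptions): put that DNS name into the config map the UI already consumes, so no external IP has to be copied around:</p>
<pre><code># the UI deployment is assumed to read API_BASE_URL from its environment
kubectl create configmap ui-config --from-literal=API_BASE_URL=http://api-gateway.external:8080
kubectl set env deployment/ui --from=configmap/ui-config
</code></pre>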
| gohm'c |
<p>I am trying to deploy an ingress controller in a GKE Kubernetes cluster where RBAC is enabled, but I am getting the error below.</p>
<p><a href="https://i.stack.imgur.com/2pw4n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2pw4n.png" alt="W"></a></p>
<p>This is the command I ran ...</p>
<p>helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.publishService.enabled=true </p>
<p>It gave me the error below:<br>
Error: validation failed: [serviceaccounts "nginx-ingress" not found, serviceaccounts "nginx-ingress-backend" not found, clusterroles.rbac.authorization.k8s.io "nginx-ingress" not found, clusterrolebindings.rbac.authorization.k8s.io "nginx-ingress" not found, roles.rbac.authorization.k8s.io "nginx-ingress" not found, rolebindings.rbac.authorization.k8s.io "nginx-ingress" not found, services "nginx-ingress-controller" not found, services "nginx-ingress-default-backend" not found, deployments.apps "nginx-ingress-controller" not found, deployments.apps "nginx-ingress-default-backend" not found]</p>
<p>I am following this link : <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">https://cloud.google.com/community/tutorials/nginx-ingress-gke</a></p>
<p>Could you please share your thoughts to debug this issue and also to fix. Thanks in advance.</p>
| args | <p>It is a known issue in Helm 2.16.4: <a href="https://github.com/helm/helm/issues/7797" rel="nofollow noreferrer">https://github.com/helm/helm/issues/7797</a></p>
<p>You can upgrade Helm to 2.16.5 to solve the problem.</p>
| qinjunjerry |
<p>I'm trying to set up my metrics-server for HPA but I'm encountering some issues.</p>
<p>This is my metrics-server.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
</code></pre>
<p>I've tried adding <code>- --kubelet-insecure-tls</code> to my args as you can see and that didn't help, also tried increasing the</p>
<pre><code>initialDelaySeconds: 20
periodSeconds: 10
</code></pre>
<p>to 300 and 20 respectively and that didn't work either.</p>
<p>Here is the describe of the pod:</p>
<pre><code>PS E:\OceniFilm> kubectl -n kube-system describe pod metrics-server
Name: metrics-server-7f6fdd8fc5-6msrp
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: docker-desktop/192.168.65.4
Start Time: Sat, 14 May 2022 12:14:12 +0200
Labels: k8s-app=metrics-server
pod-template-hash=7f6fdd8fc5
Annotations: <none>
Status: Running
IP: 10.1.1.152
IPs:
IP: 10.1.1.152
Controlled By: ReplicaSet/metrics-server-7f6fdd8fc5
Containers:
metrics-server:
Container ID: docker://21d8129133f3fac78fd9df3b97b41f455ca11d816a5b4484db3dedf5e2d31e6c
Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
Image ID: docker-pullable://k8s.gcr.io/metrics-server/metrics-server@sha256:5ddc6458eb95f5c70bd13fdab90cbd7d6ad1066e5b528ad1dcb28b76c5fb2f00
Port: 4443/TCP
Host Port: 0/TCP
Args:
--cert-dir=/tmp
--secure-port=4443
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--kubelet-use-node-status-port
--metric-resolution=15s
--kubelet-insecure-tls
--kubelet-preferred-address-types=InternalIP
State: Running
Started: Sat, 14 May 2022 12:14:13 +0200
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 200Mi
Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xhb6s (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-xhb6s:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 60s default-scheduler Successfully assigned kube-system/metrics-server-7f6fdd8fc5-6msrp to docker-desktop
Normal Pulled 60s kubelet Container image "k8s.gcr.io/metrics-server/metrics-server:v0.6.1" already present on machine
Normal Created 60s kubelet Created container metrics-server
Normal Started 60s kubelet Started container metrics-server
Warning Unhealthy 1s (x4 over 31s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
</code></pre>
<p>My kubectl version:</p>
<pre><code>PS E:\OceniFilm> kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"windows/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:32:32Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.24) and server (1.22) exceeds the supported minor version skew of +/-1
</code></pre>
<p>And this is where I got my yaml file, which I downloaded (latest) and edited <a href="https://github.com/kubernetes-sigs/metrics-server/releases/tag/metrics-server-helm-chart-3.8.2" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/metrics-server/releases/tag/metrics-server-helm-chart-3.8.2</a></p>
| curiousQ | <p>Using version 4.5 worked as expected by using this yaml:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.4.5
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
periodSeconds: 10
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
</code></pre>
<p>No idea why this version works, but the latest doesn't</p>
| curiousQ |
<p>We recently deployed Apache Ranger version 2.0.0 on Kubernetes.
We wanted to configure Readiness and Liveness probes for Apache Ranger service which is running inside pods.</p>
<p>Is there any health endpoint for Apache Ranger which we can use?</p>
| chitender kumar | <p>There is no API endpoint to check the service status as of now; however, you can use any API to check that the service is responding, for example something like the one below:</p>
<pre><code>curl -s -o /dev/null -w "%{http_code}" -u admin:admin -H "Content-Type: application/json" -X GET http://`hostname -f`:6080/service/tags/tags
</code></pre>
<p>If the above call returns the following value, Ranger is live and serving traffic:</p>
<blockquote>
<p>200</p>
</blockquote>
<p>If it returns the following, Ranger is down:</p>
<blockquote>
<p>000</p>
</blockquote>
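<p>If you want to turn that check into Kubernetes probes, a rough sketch could look like the following (port, credentials, API path and timings are assumptions for illustration, and it assumes <code>curl</code> is available in the Ranger container image; this is not an official Ranger health check):</p>
<pre><code>readinessProbe:
  exec:
    command:
      - sh
      - -c
      - test "$(curl -s -o /dev/null -w '%{http_code}' -u admin:admin http://localhost:6080/service/tags/tags)" = "200"
  initialDelaySeconds: 90
  periodSeconds: 30
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - test "$(curl -s -o /dev/null -w '%{http_code}' -u admin:admin http://localhost:6080/service/tags/tags)" = "200"
  initialDelaySeconds: 180
  periodSeconds: 60
</code></pre>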
| rikamamanus |
<p>I'm studying kubernetes and testing some example.</p>
<p>I got a problem with applying external metrics to hpa.</p>
<p>I made an external metric with the Prometheus adapter.</p>
<p>so I can get external metrics using</p>
<pre><code>kubectl get --raw /apis/external.metrics.k8s.io/v1beta1/
</code></pre>
<p>command.</p>
<p>result is below.</p>
<pre><code>{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"external.metrics.k8s.io/v1beta1","resources":[{"name":"redis_keys","singularName":"","namespaced":true,"kind":"ExternalMetricValueList","verbs":["get"]}]}
</code></pre>
<p>and i can get metrics value using</p>
<pre><code>kubectl get --raw /apis/external.metrics.k8s.io/v1beta1/namespaces/default/redis_keys
</code></pre>
<p>command.</p>
<p>result is below.</p>
<pre><code>{"kind":"ExternalMetricValueList","apiVersion":"external.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/external.metrics.k8s.io/v1beta1/namespaces/default/redis_keys"},"items":[{"metricName":"redis_keys","metricLabels":{},"timestamp":"2020-10-28T08:39:09Z","value":"23"}]}
</code></pre>
<p>and i applied the metric to hpa.</p>
<p>below is hpa configuration.</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: taskqueue-autoscaler
spec:
scaleTargetRef:
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
name: taskqueue-consumer
minReplicas: 1
maxReplicas: 10
metrics:
- type: External
external:
metricName: redis_keys
targetAverageValue: 20
</code></pre>
<p>After making the HPA,</p>
<p>I tested this command:</p>
<pre><code>kubectl get hpa
</code></pre>
<p>and result is weird.</p>
<pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
taskqueue-autoscaler Deployment/taskqueue-consumer 11500m/20 (avg) 1 10 2 63m
</code></pre>
<p>I think its value (11500m) is wrong because the result of the query is 23.</p>
<p>Where should I look in this case?</p>
| choogwang | <p>Actually it's right, but it's also complicated because it gets into a couple of different things with the HPA resource that are not immediately obvious. So I will tackle explaining this one thing at a time.</p>
<p>First, the units of measurement. The Metrics API will try to return whole units when possible, but does return milli-units as well, which can result in the above. Really, what you are seeing is 11.5 converted to 11500m, but the two are the same. Check out this link on the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#appendix-quantities" rel="noreferrer">"Quantities" of HPA metrics</a> that covers it a bit more.</p>
<p>Next, you are seeing two replicas at the moment with a value from the Metrics API of 23. Since you have set the metric to be the AverageValue of the External Metric, it is dividing the value of the metric by the number of replicas in the cluster to result in 11.5 or 11500m when you view the HPA resource. This explains why you are seeing only 2 replicas, while the value of the metric is "above" your threshold. Check out this link on <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="noreferrer">Autoscaling with Multiple & Custom metrics, specifically the section about "Object" metrics</a>. And lower down the page they slip in this line regarding External Metrics, confirming why you are seeing the above.</p>
<blockquote>
<p>External metrics support both the Value and AverageValue target types,
which function exactly the same as when you use the Object type.</p>
</blockquote>
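<p>As a small illustration (a sketch based on the HPA spec above, not a required change): if you want the HPA to compare the raw external metric (23) against the threshold instead of dividing it by the replica count, use <code>targetValue</code> instead of <code>targetAverageValue</code>:</p>
<pre><code>  metrics:
  - type: External
    external:
      metricName: redis_keys
      # Value semantics: compare the metric as-is, do not divide by replicas
      targetValue: 20
</code></pre>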
<p>Hope this helps and a few tweaks should make it line up better with your expectations. Good luck out there!</p>
| Jack Barger |
<p>Coming from classic Java application development and being new to "all this cloud stuff", I have a (potentially naive) basic question about scalability e.g. in Kubernetes.</p>
<p>Let's assume I've written an application for the JVM (Java or Kotlin) that scales well locally across CPUs / CPU cores for compute-intensive work that is managed via job queues, where different threads (or processes) take their work items from the queues.</p>
<p>Now, I want to scale the same application beyond the local machine, in the cloud. In my thinking the basic way of working stays the same: There are job queues, and now "cloud nodes" instead of local processes get their work from the queues, requiring some network communication.</p>
<p>As on that high level work organization is basically the same, I was hoping that there would be some JVM framework whose API / interfaces I would simply need to implement, and that framework has then different backends / "engines" that are capable of orchestrating a job queuing system either locally via processes, or in the cloud involving network communication, so my application would "magically" just scale indefinitely depending on now many cloud resources I throw at it (in the end, I want dynamic horizontal auto-scaling in Kubernetes).</p>
<p>Interestingly, I failed to find such a framework, although I was pretty sure that it would be a basic requirement for anyone who wants to bring local applications to the cloud for better scalability. I found e.g. <a href="https://github.com/jobrunr/jobrunr" rel="nofollow noreferrer">JobRunr</a>, which seems to come very close, but unfortunately so far lacks the capability to dynamically ramp up Kubernetes nodes based on load.</p>
<p>Are there other frameworks someone can recommend?</p>
| sschuberth | <p>Scale your work items out as Kubernetes Jobs and let <a href="https://keda.sh/docs/2.4/concepts/scaling-jobs/" rel="nofollow noreferrer">KEDA</a> drive the scaling from your queue length.</p>
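<p>A minimal sketch of what that could look like with a Redis-backed queue (all names, the image and the queue details are assumptions for illustration):</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: queue-worker
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: registry.example.com/jvm-worker:latest  # your JVM worker image
        restartPolicy: Never
  pollingInterval: 30      # seconds between queue checks
  maxReplicaCount: 20      # upper bound for parallel Jobs
  triggers:
    - type: redis
      metadata:
        address: redis.default.svc.cluster.local:6379
        listName: job-queue   # the queue your workers consume from
        listLength: "5"       # target items per Job
</code></pre>
<p>KEDA then creates Jobs in proportion to the queue length, which together with the cluster autoscaler gives the horizontal scale-out described in the question.</p>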
| gohm'c |
<p>I am <a href="https://minikube.sigs.k8s.io/docs/handbook/mount/" rel="nofollow noreferrer">mounting a filesystem</a> on minikube:</p>
<pre><code>minikube mount /var/files/:/usr/share/ -p multinode-demo
</code></pre>
<p>But I found two complications:</p>
<ul>
<li>My cluster has two nodes. The pods in the first node are able to access the host files at <code>/var/files/</code>, but the pods in the second node are not. What can be the reason for that?</li>
<li>I have to mount the directory before the pods have been created. If I <code>apply</code> my deployment first, and then do the <code>mount</code>, the pods never get the filesystem. Is Kubernetes not able to apply the mounting later, over an existing deployment that required it?</li>
</ul>
| user1156544 | <p>As mentioned in the comments section, I believe your problem is related to the following GitHub issues: <a href="https://github.com/kubernetes/minikube/issues/12165#issuecomment-895104495" rel="nofollow noreferrer">Storage provisioner broken for multinode mode</a> and <a href="https://github.com/kubernetes/minikube/issues/11765#issuecomment-868965821" rel="nofollow noreferrer">hostPath permissions wrong on multi node</a>.</p>
<p>In my opinion, you might be interested in using NFS mounts instead, and I'll briefly describe this approach to illustrate how it works.</p>
<hr />
<p>First we need to install the NFS Server and create the NFS export directory on our host:<br />
<strong>NOTE:</strong> I'm using Debian 10 and your commands may be different depending on your Linux distribution.</p>
<pre><code>$ sudo apt install nfs-kernel-server -y
$ sudo mkdir -p /mnt/nfs_share && sudo chown -R nobody:nogroup /mnt/nfs_share/
</code></pre>
<p>Then, grant permissions for accessing the NFS server and export the NFS share directory:</p>
<pre><code>$ cat /etc/exports
/mnt/nfs_share *(rw,sync,no_subtree_check,no_root_squash,insecure)
$ sudo exportfs -a && sudo systemctl restart nfs-kernel-server
</code></pre>
<p>We can use the <code>exportfs -v</code> command to display the current export list:</p>
<pre><code>$ sudo exportfs -v
/mnt/nfs_share <world>(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,rw,insecure,no_root_squash,no_all_squash)
</code></pre>
<p>Now it's time to create a minikube cluster:</p>
<pre><code>$ minikube start --nodes 2
$ kubectl get nodes
NAME STATUS ROLES VERSION
minikube Ready control-plane,master v1.23.1
minikube-m02 Ready <none> v1.23.1
</code></pre>
<p>Please note, we're going to use the <code>standard</code> <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a>:</p>
<pre><code>$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
standard (default) k8s.io/minikube-hostpath Delete Immediate false
</code></pre>
<p>Additionally, we need to find the Minikube gateway address:</p>
<pre><code>$ minikube ssh
docker@minikube:~$ ip r | grep default
default via 192.168.49.1 dev eth0
</code></pre>
<p>Let's create <code>PersistentVolume</code> and <code>PersistentVolumeClaim</code> which will use the NFS share:<br />
<strong>NOTE:</strong> The address <code>192.168.49.1</code> is the Minikube gateway.</p>
<pre><code>$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: standard
nfs:
server: 192.168.49.1
path: "/mnt/nfs_share"
$ kubectl apply -f pv.yaml
persistentvolume/nfs-volume created
$ cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nfs-claim
namespace: default
spec:
storageClassName: standard
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
$ kubectl apply -f pvc.yaml
persistentvolumeclaim/nfs-claim created
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs-volume 1Gi RWX Retain Bound default/nfs-claim standard 71s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/nfs-claim Bound nfs-volume 1Gi RWX standard 56s
</code></pre>
<p>Now we can use the NFS PersistentVolume - to test if it works properly I've created <code>app-1</code> and <code>app-2</code> Deployments:<br />
<strong>NOTE:</strong> The <code>app-1</code> will be deployed on different node than <code>app-2</code> (I've specified <code>nodeName</code> in the PodSpec).</p>
<pre><code>$ cat app-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app-1
name: app-1
spec:
replicas: 1
selector:
matchLabels:
app: app-1
template:
metadata:
labels:
app: app-1
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- name: app-share
mountPath: /mnt/app-share
nodeName: minikube
volumes:
- name: app-share
persistentVolumeClaim:
claimName: nfs-claim
$ kubectl apply -f app-1.yaml
deployment.apps/app-1 created
$ cat app-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app-2
name: app-2
spec:
replicas: 1
selector:
matchLabels:
app: app-2
template:
metadata:
labels:
app: app-2
spec:
nodeName: minikube
containers:
- image: nginx
name: nginx
volumeMounts:
- name: app-share
mountPath: /mnt/app-share
nodeName: minikube-m02
volumes:
- name: app-share
persistentVolumeClaim:
claimName: nfs-claim
$ kubectl apply -f app-2.yaml
deployment.apps/app-2 created
$ kubectl get deploy,pods -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/app-1 1/1 1 1 24s nginx nginx app=app-1
deployment.apps/app-2 1/1 1 1 21s nginx nginx app=app-2
NAME READY STATUS RESTARTS AGE NODE
pod/app-1-7874b8d7b6-p9cb6 1/1 Running 0 23s minikube
pod/app-2-fddd84869-fjkrw 1/1 Running 0 21s minikube-m02
</code></pre>
<p>To verify that our NFS share works as expected, we can create a file in the <code>app-1</code> and then check that we can see that file in the <code>app-2</code> and on the host:</p>
<p><em>app-1:</em></p>
<pre><code>$ kubectl exec -it app-1-7874b8d7b6-p9cb6 -- bash
root@app-1-7874b8d7b6-p9cb6:/# df -h | grep "app-share"
192.168.49.1:/mnt/nfs_share 9.7G 7.0G 2.2G 77% /mnt/app-share
root@app-1-7874b8d7b6-p9cb6:/# touch /mnt/app-share/app-1 && echo "Hello from the app-1" > /mnt/app-share/app-1
root@app-1-7874b8d7b6-p9cb6:/# exit
exit
</code></pre>
<p><em>app-2:</em></p>
<pre><code>$ kubectl exec -it app-2-fddd84869-fjkrw -- ls /mnt/app-share
app-1
$ kubectl exec -it app-2-fddd84869-fjkrw -- cat /mnt/app-share/app-1
Hello from the app-1
</code></pre>
<p><em>host:</em></p>
<pre><code>$ ls /mnt/nfs_share/
app-1
$ cat /mnt/nfs_share/app-1
Hello from the app-1
</code></pre>
| matt_j |
<p>I installed pods and services with <code>deployment.yml</code>,<code>service.yml</code> and <code>ingress.yml</code> like this below.</p>
<p><code>deployment.yml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: apa000dep
spec:
selector:
matchLabels:
app: apa000kube
replicas : 3
template:
metadata:
labels:
app: apa000kube
spec:
containers:
- name: apa000ex91
image: httpd
ports:
- containerPort: 80
</code></pre>
<p><code>service.yml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: apa000ser
spec:
type: NodePort
ports:
- port: 8099
targetPort: 80
protocol: TCP
nodePort: 30080
selector:
app: apa000kube
</code></pre>
<p>Both work well, and I can access the pod directly at <code>localhost:30080</code>.</p>
<p>Then I installed the ingress.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
spec:
rules:
- host: demo.localdev.me
http:
paths:
- pathType: Prefix
path: /*
backend:
service:
name: apa000ser
port:
number: 8099
</code></pre>
<p>The hostname is set correctly</p>
<pre><code>$kubectl get ingress
NAME CLASS HOSTS ADDRESS
PORTS AGE
example-ingress <none> demo.localdev.me 80 95m
</code></pre>
<p>However when I access <code>http://demo.localdev.me</code> it returns 404 error.</p>
<p><code>http://localhost:30080/</code> returns <code><span>it works!</span></code></p>
<p>So, I guess some ingress setting is wrong. Where should I check?</p>
<hr />
<p>Even if I stop <code>example-ingress</code>, the result is the same.</p>
<p>So it means:</p>
<pre><code>nginx-ingress -> ingress -> service -> pods.
</code></pre>
<p>Is every request fetched by the first <code>nginx-ingress</code> and not sent to the <code>ingress</code>?</p>
| whitebear | <p>Two things that caught my eye:</p>
<ol>
<li><p>The Ingress controller you are using <del>seems to</del> might require an <code>ingressClassName</code> defined in the Ingress resource, so that the ingress controller takes this resource (and its configs) into account.</p>
<p>More about <code>Ingress class</code>:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class" rel="nofollow noreferrer">Ingress class</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#default-ingress-class" rel="nofollow noreferrer">Default IngressClass</a></li>
</ul>
</li>
<li><p>You should change the <code>path: /*</code> to <code>path: /</code>.</p>
<ul>
<li><p><code>path: /*</code> makes the Ingress controller create a location block à la</p>
<pre><code>location /* {
...
}
</code></pre>
<p><code>/*</code> is interpreted as a normal prefix in this case and has nothing to do with regex (probably not what you assumed).</p>
<p>In your case: the <em>404</em> comes rather from the ingress itself (since request URI <code>/</code> not found - must be '/*' instead). In order for a request to be proxied to the httpd server at all, the request must be as follows: "http://demo.localdev.me/*", to which the httpd would again respond with 404 (since the resource '/*' also doesn't exist on httpd by default).</p>
</li>
<li><p>Whereas <code>path: /</code> does the following:</p>
<pre><code>location / {
...
}
</code></pre>
<p>The <code>location /</code> block is a special case that matches any URI that starts with a slash (/), which includes all URIs. (This is also the default location block that Nginx uses if no other location block matches the request's URI.)</p>
</li>
</ul>
<p>More about nginx <code>location</code>:</p>
<ul>
<li><a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#location" rel="nofollow noreferrer">nginx doc</a></li>
<li><a href="https://stackoverflow.com/questions/59846238/guide-on-how-to-use-regex-in-nginx-location-block-section">Guide on how to use regex in Nginx location block section?</a></li>
</ul>
</li>
</ol>
<p><strong>Final result:</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
spec:
ingressClassName: nginx # add
rules:
- host: demo.localdev.me
http:
paths:
- pathType: Prefix
path: / # change
backend:
service:
name: apa000ser
port:
number: 8099
</code></pre>
| Kenan GΓΌler |
<p>I have server-to-server calls and I use gRPC (with .NET Core 5). It's working and tested locally.</p>
<p>After that, I have moved all the services to Kubernetes Pod (Docker Desktop) and also tested through the flow (with swagger post-call) and it's working there too.</p>
<p>Now for monitoring, I added Istio and added the label "<em>istio-injection=enabled</em>" to my namespace,
restarted all my pods, and now each pod has 2 containers inside it.</p>
<p>I tested the basic services (again with Swagger) and they work. When it comes to testing the gRPC call, the call fails from the caller side saying:</p>
<p><strong>Grpc.Core.RpcException: Status(StatusCode="Unavailable", Detail="upstream connect error or disconnect/reset before headers. reset reason: protocol error")</strong></p>
<p>I checked the logs on the gRPC server side and it has no clue about this call; the service is just running. So I am thinking the error is coming from the caller side, which is not able to make a call to the gRPC server.</p>
<p><a href="https://i.stack.imgur.com/Jgv0j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jgv0j.png" alt="enter image description here" /></a></p>
<p>The error detail:</p>
<pre><code>Grpc.Core.RpcException: Status(StatusCode="Unavailable", Detail="upstream connect error or disconnect/reset before headers. reset reason: protocol error")
at Basket.API.GrpcServices.DiscountGrpcService.GetDiscount(String productName) in /src/Services/Basket/Basket.API/GrpcServices/DiscountGrpcService.cs:line 21
at Basket.API.Controllers.BasketController.UpdateBasket(ShoppingCart basket) in /src/Services/Basket/Basket.API/Controllers/BasketController.cs:line 47 at lambda_method7(Closure , Object )
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.AwaitableObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Obje
</code></pre>
<p><em><strong>Again, I removed Istio and tested, and it started working again (without changing anything). I added Istio back and it started failing again. All other services are working with Istio but not this call (this is the only gRPC call I have).</strong></em></p>
| Brijesh Shah | <p>I found a solution at <a href="https://istiobyexample.dev/grpc/" rel="nofollow noreferrer">https://istiobyexample.dev/grpc/</a> where it describes the missing item.</p>
<p>Istio recommends using name and version tags as labels, but more importantly, when working with gRPC the Service that exposes the gRPC port needs to have that port named <code>grpc</code>.</p>
<p>I added that, restarted the service, and it started working as expected.
Again, it's not something I resolved myself. All credit goes to the link <a href="https://istiobyexample.dev/grpc/" rel="nofollow noreferrer">https://istiobyexample.dev/grpc/</a> and the image posted below.</p>
<p><a href="https://i.stack.imgur.com/NDvCp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NDvCp.png" alt="enter image description here" /></a></p>
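<p>For reference, a hedged sketch of the relevant part of such a Service manifest (names and ports are made up); the important bit is the <code>grpc</code> port name, which Istio uses for protocol selection:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: discountgrpc        # hypothetical name
spec:
  selector:
    app: discountgrpc
  ports:
    - name: grpc            # Istio treats traffic on this port as gRPC/HTTP2
      port: 8080
      targetPort: 8080
      protocol: TCP
</code></pre>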
| Brijesh Shah |
<p>I want to run a command which finds the pods behind a given service name and identifies whether each pod's status is "Ready".</p>
<p>I tried some commands, but they do not work and do not search by service name.</p>
<pre><code>kubectl get svc | grep my-service | --output="jsonpath={.status.containerStatuses[*].ready}" | cut -d' ' -f2
</code></pre>
<p>I tried to use the loop also, but the script does not give the desired output.</p>
<p>Can you please help me figure out the exact command?</p>
| sagar verma | <p>If I understood you correctly, you want to find out whether a specific <code>Pod</code> connected to a specific <code>Endpoint</code> is in <code>"Ready"</code> status.</p>
<p>Using <code>JSONPath</code> you can display all <code>Pods</code> in a specific <code>namespace</code> with their status:<br></p>
<pre><code>$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
</code></pre>
<p>If you are looking for the status of <code>Pods</code> connected to a specific <code>Endpoint</code>, you can use the script below:</p>
<pre><code>#!/bin/bash
endpointName="web-1"
for podName in $(kubectl get endpoints $endpointName -o=jsonpath={.subsets[*].addresses[*].targetRef.name}); do
if [ ! -z $podName ]; then
kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}' | grep $podName
fi
done
for podName in $(kubectl get endpoints $endpointName -o=jsonpath={.subsets[*].notReadyAddresses[*].targetRef.name}); do
if [ ! -z $podName ]; then
kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}' | grep $podName
fi
done
</code></pre>
<p>Please note that you need to change the <code>Endpoint</code> name to suit your needs; in the above example I use the name <code>web-1</code>.</p>
<p>If this response doesn't answer your question, please specify your exact purpose.</p>
| matt_j |
<p>I use haproxy as a load balancer pod, the requests received by the pod is from a NLB . The request received by the hsproxy pod is sent to a nginx webserver pod which serves traffic . This configuration works on both http and https . My idea is to have a redirect for web-dev.xxxx.com.The ssl certificate is at the NLB</p>
<pre><code>{
apiVersion: "v1",
kind: "ConfigMap",
metadata: {
name: "haproxy-config",
namespace: "xxxx",
},
data: {
"haproxy.cfg":
"# This configuration use acl's to distinguish between url's passwd and then route
# them to the right backend servers. For the backend servers to handle it correctly, you
# need to setup virtual hosting there as well, on whatever you use, tomcat, nginx, apache, etc.
# For this to work with SSL, put pound before HAproxy and use a configuration file similar to
# https://gist.github.com/1984822 to get it working
global
log stdout format raw local0
maxconn 4096
stats socket /var/run/haproxy.sock mode 660 level admin
pidfile /var/run/haproxy.pid
defaults
log global
mode http
option httplog
option dontlognull
option forwardfor except 127.0.0.1
retries 3
option redispatch
maxconn 2000
timeout connect 5000
timeout client 50000
timeout server 50000
# status page.
listen stats
bind :8000
mode http
stats enable
stats hide-version
stats uri /stats
frontend http-in
bind *:80 accept-proxy
# http-request set-header X-Client-IP %[src]
# Capturing specific request headers
capture request header x-wap-msisdn len 64
capture request header x-wap-imsi len 64
capture request header Host len 64
capture request header User-Agent len 64
#### Setup virtual host routing
# haproxy-dev.xxxx.com
acl is_haproxy_stats hdr_end(host) -i haproxy-dev.xxxx.com
use_backend haproxy-stats if is_haproxy_stats
# ACL for api-dev.xxxx.com
acl is_api hdr_end(host) -i api-dev.xxxx.com
http-request set-header X-Forwarded-Proto https if is_api
use_backend api if is_api
# ACL for he.web-dev.xxxx.com
acl is_he_web hdr_beg(host) -i he.web-dev.xxxx.com
# ACL for he-dev.xxxx.com
acl is_he hdr_beg(host) -i he-dev.xxxx.com
# ACL for path begins with /projects
acl is_products_uri path -i -m beg /products
# ACL redirect for he.web-dev.xxxx.com/projects
http-request redirect location https://web-dev.xxxx.com/products/?msisdn=%[req.hdr(x-wap-msisdn)] code 301 if is_he_web is_products_uri
# ACL redirect for he-dev.xxxx.com/products
http-request redirect location https://web-dev.xxxx.com/products/?msisdn=%[req.hdr(x-wap-msisdn)] code 301 if is_he is_products_uri
# ACL redirect for he-dev.xxxx.com
http-request redirect location https://web-dev.xxxx.com?msisdn=%[req.hdr(x-wap-msisdn)] code 301 if is_he
# ACL redirect for he.web-dev.xxxx.com
http-request redirect location https://web-dev.xxxx.com?msisdn=%[req.hdr(x-wap-msisdn)] code 301 if is_he_web
# ACL for web-dev.xxxx.com
acl is_web hdr_beg(host) -i web-dev.xxxx.com
redirect scheme https if { hdr(Host) -i web-dev.xxxx.com } !{ ssl_fc }
use_backend web if is_web
default_backend api
frontend web-dev.xxxx.com-https
bind *:9000 accept-proxy
# HSTS
http-request set-header X-Forwarded-For %[src]
http-request set-header X-Forwarded-Proto https
default_backend web
backend haproxy-stats
balance roundrobin
option redispatch
option httpchk GET /stats HTTP/1.1
option httpclose
option forwardfor
server haproxy haproxy-stats.x:8000 check inter 10s
backend api
balance roundrobin
option redispatch
option httpchk GET /ping/rails?haproxy HTTP/1.0\\r\\nUser-agent:\\ HAProxy
option httpclose
option forwardfor
server foo-rails foo-rails.xxxx:80 check inter 10s
backend web
balance roundrobin
option redispatch
cookie SERVERID insert nocache indirect
option httpchk GET /nginx_status HTTP/1.0
option httpclose
option forwardfor
http-response set-header X-XSS-Protection 1
http-response set-header X-Frame-Options DENY
http-response set-header X-Content-Type-Options nosniff
http-response set-header Strict-Transport-Security max-age=31536000;includeSubDomains;preload
server foo foo.xxxx:80 check inter 10s
",
}
}
</code></pre>
<p>The</p>
| Arun Karoth | <p>Your problem seems to be here.</p>
<pre><code>redirect scheme https if { hdr(Host) -i web-dev.xxxx.com } !{ ssl_fc }
</code></pre>
<p>Traffic is coming in to HAProxy on port 80 so <code>ssl_fc</code> will never match.</p>
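<p>Since TLS is terminated at the NLB, one possible fix (a sketch only; adjust it to your frontends) is to issue the redirect unconditionally on the plain-HTTP frontend instead of testing <code>ssl_fc</code>:</p>
<pre><code>frontend http-in
    bind *:80 accept-proxy
    # TLS is terminated at the NLB, so redirect every plain-HTTP request for this host
    http-request redirect scheme https code 301 if { hdr(host) -i web-dev.xxxx.com }
</code></pre>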
| dcorbett |
<p>What options are available to me if I want to restrict the usage of Priority Classes, given that I'm running a managed Kubernetes service (AKS)?</p>
<p>The use case here is that, as cluster admin, I want to restrict the usage of these so that developers are not using the ones that are supposed to be reserved for critical components.</p>
<p>Multi-tenant cluster, semi 0-trust policy. This is something that could be "abused".</p>
<p>Is this something I can achieve with resource quotas even though I'm running Azure Kubernetes Service?
<a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default</a></p>
| jbnasngs | <p>A cloud-managed cluster does not allow you to customize the api-server. In this case, you can use <a href="https://kubernetes.io/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/#policies-and-constraints" rel="nofollow noreferrer">OPA Gatekeeper</a> or <a href="https://kyverno.io/policies/other/allowed_pod_priorities/allowed_pod_priorities/" rel="nofollow noreferrer">Kyverno</a> to write admission rules that reject unwanted priority class settings.</p>
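<p>A rough sketch of what such a rule could look like with Kyverno (the class names are placeholders; see the linked policy for the maintained version):</p>
<pre><code>apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: allowed-pod-priorities
spec:
  validationFailureAction: enforce
  background: true
  rules:
    - name: validate-pod-priority
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The priorityClassName used is not in the allowed list."
        pattern:
          spec:
            # if priorityClassName is set, it must be one of these
            =(priorityClassName): "tenant-low | tenant-high"
</code></pre>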
| gohm'c |
<p>I'm using kustomize to pipe a manifest to kubectl on a new k8s cluster (v1.17.2). This includes CRDs, but other objects are unable to find them. For example:</p>
<pre><code>unable to recognize "STDIN": no matches for kind "Certificate" in version "cert-manager.io/v1alpha2"
unable to recognize "STDIN": no matches for kind "IngressRoute" in version "traefik.containo.us/v1alpha1"
</code></pre>
<p>The CRDs are defined in the <code>resources</code> section of my kubectl, they show in the output which I'm piping to kubectl, and I'm sure this approach of putting everything in one file worked last time I did it.</p>
<p>If I apply the CRDs first, then apply the main manifest separately, it all goes through without a problem. Can I do them all at the same time? If so, what am I doing wrong; if not, why did it work before?</p>
<p>Can anyone point me at where the problem may lie?</p>
<p>Sample CRD definition:</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressroutetcps.traefik.containo.us
spec:
group: traefik.containo.us
names:
kind: IngressRouteTCP
plural: ingressroutetcps
singular: ingressroutetcp
scope: Namespaced
version: v1alpha1
</code></pre>
| Sausage O' Doom | <p>I came across your question when working on an issue with trying to bring up Traefik with Kustomize on Kubernetes... My issue was resolved by ensuring the namespace was accurate in my kustomization.yml file. In my case, I had to change it to match what was in the other yml files in my deployment. Not sure if you eventually figured it out but I figured I would respond in case that was it...</p>
| Magbas |
<p>I am facing the following issue related to specifying a namespace quota.</p>
<ol>
<li>namespace quota specified is not getting created via helm.
My file namspacequota.yaml is as shown below</li>
</ol>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: namespacequota
namespace: {{ .Release.Namespace }}
spec:
hard:
requests.cpu: "3"
requests.memory: 10Gi
limits.cpu: "6"
limits.memory: 12Gi
</code></pre>
<p>Below command used for installation</p>
<pre><code>helm install privachart3 . -n test-1
</code></pre>
<p>However the resourcequota is not getting created.</p>
<pre><code>kubectl get resourcequota -n test-1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NAME CREATED AT
gke-resource-quotas 2021-01-20T06:14:16Z
</code></pre>
<ol start="2">
<li>I can define the resource-quota using the below kubectl command.</li>
</ol>
<p><code>kubectl apply -f namespacequota.yaml --namespace=test-1</code></p>
<p>The only change required in the file above is commenting of line number-5 that consist of release-name.</p>
<pre><code>kubectl get resourcequota -n test-1
NAME CREATED AT
gke-resource-quotas 2021-01-20T06:14:16Z
namespacequota 2021-01-23T07:30:27Z
</code></pre>
<p>However, in this case, when I am trying to install the chart, the PVC is created, but the pod is not getting created.</p>
<p>The capacity is not an issue, as I am just trying to create a single maria-db pod using a "Deployment".</p>
<p>Command used for install given below</p>
<pre><code>helm install chart3 . -n test-1
</code></pre>
<p>Output observed given below</p>
<pre><code>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NAME: chart3
LAST DEPLOYED: Sat Jan 23 08:38:50 2021
NAMESPACE: test-1
STATUS: deployed
REVISION: 1
TEST SUITE: None
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
</code></pre>
| George | <p>I got the answer from another Git forum.
When a namespace quota is set, we need to explicitly set the pod's resources.
In my case I just needed to specify the resource requests and limits under the container image.</p>
<pre><code> - image: wordpress:4.8-apache
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
</code></pre>
<p>After that, I am now able to observe the pods as well:</p>
<pre><code>[george@dis ]$ kubectl get resourcequota -n geo-test
NAME AGE REQUEST LIMIT
gke-resource-quotas 31h count/ingresses.extensions: 0/100, count/ingresses.networking.k8s.io: 0/100, count/jobs.batch: 0/5k, pods: 2/1500, services: 2/500
namespace-quota 7s requests.cpu: 500m/1, requests.memory: 128Mi/1Gi limits.cpu: 1/3, limits.memory: 256Mi/3Gi
[george@dis ]$
.
[george@dis ]$ kubectl get pod -n geo-test
NAME READY STATUS RESTARTS AGE
wordpress-7687695f98-w7m5b 1/1 Running 0 32s
wordpress-mysql-7ff55f869d-2w6zs 1/1 Running 0 32s
[george@dis ]$
</code></pre>
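<p>A related option (a sketch; the values are examples): add a <code>LimitRange</code> to the namespace so that pods which do not declare resources get defaults injected, instead of being rejected by the quota:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits
  namespace: geo-test
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 250m
        memory: 64Mi
      default:
        cpu: 500m
        memory: 128Mi
</code></pre>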
| George |
<p>I am using a service account with a role assigned to it using OIDC. I opened shell in the pod and checked current role,</p>
<p><a href="https://i.stack.imgur.com/57fNu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/57fNu.png" alt="enter image description here" /></a></p>
<p>but my service is doing the same thing but it is using node role,</p>
<p><a href="https://i.stack.imgur.com/UMFhZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UMFhZ.png" alt="enter image description here" /></a></p>
<h2>Versions of Java SDK</h2>
<ul>
<li>aws-java-sdk-core:1.11.505</li>
</ul>
| PSKP | <blockquote>
<p>The containers in your pods must use an AWS SDK version that supports
assuming an IAM role via an OIDC web identity token file.</p>
</blockquote>
<p>Check that you meet the minimum requirement: for <a href="https://github.com/boto/boto3" rel="nofollow noreferrer">boto3</a> it is 1.9.220, and for <a href="https://github.com/boto/botocore" rel="nofollow noreferrer">botocore</a> it is 1.12.200.</p>
| gohm'c |
<p>I am trying to create a volume in EKS cluster, the background of the volume is basically it is a netapp volume on netapp server and it is also mounted to an EC2 instance which is in same VPC as EKS cluster. This volume is populated with files that are created by a different application.</p>
<p>Flink app that I am deploying to EKS cluster needs read only access to the aforementioned volume. As per kubernetes documentation I can create a PersistentVolume which I can access in the pods of flink cluster.</p>
<p>To create a PV, Using Netapp volume path like {netappaccount}/{instanceid}/{volumeid} or Using the path on EC2 which is already mounted would be better approach ?</p>
<p>If I can use Ec2 how can I create a PV, can I use {Ec2 ipaddress}/{mountpath} ?</p>
<p>Can I use NFS plugin like below ? or could you please suggest best practice ?</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
accessModes:
- ReadOnlyOnce
nfs:
path: /tmp
server: EC2Box Ip address
persistentVolumeReclaimPolicy: Retain
</code></pre>
| VSK | <p>Your best option is Trident.
Cloud Manager can automate the deployment around it.</p>
<p>Is your NetApp Cloud Volumes ONTAP?
You can reach out via the Cloud Manager chat for further information.</p>
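<p>As a rough sketch of what consuming the NetApp volume through Trident looks like once the CSI driver and a backend are configured (the backend type and class name here are assumptions):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-nas
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
allowVolumeExpansion: true
</code></pre>
<p>The Flink pods would then request a PVC against this class (read-only access if supported by the backend) instead of hard-coding the EC2 mount path.</p>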
| Aviv Degani |
<p>I installed a test cluster using Minikube. Also I've installed Prometheus, Grafana & Loki using helm chart. I want to output two metrics, but I don't know how. First metric is half done, but for some reason, it is not output if you put the mount point "/", and I need the metric itself with it, which is needed:</p>
<ol>
<li><p>Percentage of free disk space - mount point "/", exclude tmpfs</p>
<pre><code> node_filesystem_avail_bytes{fstype!='tmpfs'} / node_filesystem_size_bytes{fstype!='tmpfs'} * 100
</code></pre>
</li>
<li><p>second metric: Number of API servers running. I don't know how to get it out.</p>
</li>
</ol>
| Iceforest | <p>I solved the problem on my own.</p>
<p>For the first metric, I did not change the query, since there is no such mountpoint "/".</p>
<p>For the second metric (number of API servers running):</p>
<pre><code>count(kube_pod_info{pod=~".*apiserver.*",namespace=".."})
</code></pre>
| Iceforest |
<p>I am new to Kubernetes and maybe my question is stupid, but I don't know how to fix it.</p>
<p>I created an nginx-ingress deployment with a NodePort service using this guide: <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/</a></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
replicas: 2
selector:
matchLabels:
app: nginx-ingress
template:
metadata:
labels:
app: nginx-ingress
spec:
serviceAccountName: nginx-ingress
containers:
- image: nginx/nginx-ingress:1.9.1
imagePullPolicy: Always
name: nginx-ingress
ports:
- name: http
containerPort: 80
- name: readiness-port
containerPort: 8081
readinessProbe:
httpGet:
path: /nginx-ready
port: readiness-port
periodSeconds: 1
securityContext:
allowPrivilegeEscalation: true
runAsUser: 101 #nginx
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
- -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
</code></pre>
<p>And service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
nodePort: 30369
selector:
app: nginx-ingress
</code></pre>
<p>And create deployments with service:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
namespace: landing
name: landing
labels:
app:landing
spec:
replicas: 1
selector:
matchLabels:
app: landing
template:
metadata:
labels:
app: landing
namespace: landing
spec:
containers:
- name: landing
image: private-registry/landing
imagePullPolicy: Always
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: landing
namespace: landing
spec:
selector:
app: landing
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>Then I add ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: landing
namespace: landing
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
ingressClassName: nginx
rules:
- host: landing.site
http:
paths:
- path: /
backend:
serviceName: landing
servicePort: 80
</code></pre>
<p>But in nginx-ingress POD, I see default upstream backend,</p>
<pre><code>upstream default-landing-landing.site-landing-80 {
zone default-landing-landing.site-landing-80 256k;
random two least_conn;
server 127.0.0.1:8181 max_fails=1 fail_timeout=10s max_conns=0;
}
</code></pre>
<p>What am I doing wrong?</p>
| Dmitriy Krinitsyn | <p>Well, I was so stupid :) I had another Ingress in the default backend with the same host. I deleted it and everything works fine.</p>
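<p>For anyone hitting the same thing, a quick way to spot a conflicting host is to list all Ingresses across namespaces (a sketch; adjust the host):</p>
<pre><code>kubectl get ingress --all-namespaces -o wide | grep landing.site
</code></pre>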
| Dmitriy Krinitsyn |
<p>When my yaml is something like this :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
</code></pre>
<p>Where is the nginx image coming from? For example, in the GKE Kubernetes world, if I was going to reference an image from a registry, it would normally be something like this:</p>
<pre><code>image: gsr.io.foo/nginx
</code></pre>
<p>but in this case it's just an image name:</p>
<pre><code>image: nginx
</code></pre>
<p>So I'm just trying to understand where the image is supposed to come from when deployed on a K8s cluster. It seems to pull down OK, but I want to know how I can figure out which registry it uses.</p>
| Rubans | <p>It's coming from docker hub (<a href="https://hub.docker.com/" rel="nofollow noreferrer">https://hub.docker.com/</a>) when only image name is specified in the manifest file. Example:</p>
<pre><code>...
Containers:
- name: nginx
image: nginx
...
</code></pre>
<p>For nginx, it's coming from the official nginx repository (<a href="https://hub.docker.com/_/nginx" rel="nofollow noreferrer">https://hub.docker.com/_/nginx</a>).</p>
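<p>In other words, the short form is expanded to a fully qualified reference; the following two lines point at the same image (the tag defaults to <code>latest</code> when omitted):</p>
<pre><code>image: nginx
image: docker.io/library/nginx:latest
</code></pre>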
| Pulak Kanti Bhowmick |
<p>I am new to Kubernetes, and trying to get apache airflow working using helm charts. After almost a week of struggling, I am nowhere - even to get the one provided in the apache airflow documentation working. I use Pop OS 20.04 and microk8s.</p>
<p>When I run these commands:</p>
<pre><code>kubectl create namespace airflow
helm repo add apache-airflow https://airflow.apache.org
helm install airflow apache-airflow/airflow --namespace airflow
</code></pre>
<p>The helm installation times out after five minutes.</p>
<pre><code>kubectl get pods -n airflow
</code></pre>
<p>shows this list:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
airflow-postgresql-0 0/1 Pending 0 4m8s
airflow-redis-0 0/1 Pending 0 4m8s
airflow-worker-0 0/2 Pending 0 4m8s
airflow-scheduler-565d8587fd-vm8h7 0/2 Init:0/1 0 4m8s
airflow-triggerer-7f4477dcb6-nlhg8 0/1 Init:0/1 0 4m8s
airflow-webserver-684c5d94d9-qhhv2 0/1 Init:0/1 0 4m8s
airflow-run-airflow-migrations-rzm59 1/1 Running 0 4m8s
airflow-statsd-84f4f9898-sltw9 1/1 Running 0 4m8s
airflow-flower-7c87f95f46-qqqqx 0/1 Running 4 4m8s
</code></pre>
<p>Then when I run the below command:</p>
<pre><code>kubectl describe pod airflow-postgresql-0 -n airflow
</code></pre>
<p>I get the below (trimmed up to the events):</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 58s (x2 over 58s) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
</code></pre>
<p>Then I deleted the namespace using the following commands</p>
<pre><code>kubectl delete ns airflow
</code></pre>
<p>At this point, the termination of the pods gets stuck. Then I bring up the proxy in another terminal:</p>
<pre><code>kubectl proxy
</code></pre>
<p>Then issue the following command to force deleting the namespace and all it's pods and resources:</p>
<pre><code>kubectl get ns airflow -o json | jq '.spec.finalizers=[]' | curl -X PUT http://localhost:8001/api/v1/namespaces/airflow/finalize -H "Content-Type: application/json" --data @-
</code></pre>
<p>Then I deleted the PVC's using the following command:</p>
<pre><code>kubectl delete pvc --force --grace-period=0 --all -n airflow
</code></pre>
<p>You get stuck again, so I had to issue another command to force this deletion:</p>
<pre><code>kubectl patch pvc data-airflow-postgresql-0 -p '{"metadata":{"finalizers":null}}' -n airflow
</code></pre>
<p>The PVC's gets terminated at this point and these two commands return nothing:</p>
<pre><code>kubectl get pvc -n airflow
kubectl get all -n airflow
</code></pre>
<p>Then I restarted the machine and executed the helm install again (using first and last commands in the first section of this question), but the same result.</p>
<p>I executed the following command then (using the suggestions I found here):</p>
<pre><code>kubectl describe pvc -n airflow
</code></pre>
<p>I got the following output (I am posting the event portion of PostgreSQL):</p>
<pre><code>Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2m58s (x42 over 13m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
</code></pre>
<p>So my assumption is that I need to provide a storage class as part of the values.yaml.</p>
<p>Is my understanding right? How (and with what values) do I provide it in the values.yaml?</p>
| Rhonald | <p>If you installed with helm, you can uninstall with <code>helm delete airflow -n airflow</code>.</p>
<p>Here's a way to install airflow for <strong>testing</strong> purposes using default values:</p>
<p>Generate the manifest: <code>helm template airflow apache-airflow/airflow -n airflow > airflow.yaml</code></p>
<p>Open "airflow.yaml" with your favorite editor and replace all "volumeClaimTemplates" with <code>emptyDir</code> volumes. Example:</p>
<p><a href="https://i.stack.imgur.com/klKsT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/klKsT.png" alt="enter image description here" /></a></p>
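<p>In text form, the edit looks roughly like this (a sketch; the exact volume names and sizes differ per component in the generated manifest):</p>
<pre><code># before (in the generated StatefulSet)
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 8Gi

# after (replace with an ephemeral volume under spec.template.spec)
      volumes:
        - name: data
          emptyDir: {}
</code></pre>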
<p>Create the namespace and install:</p>
<pre><code>kubectl create namespace airflow
kubectl apply -f airflow.yaml --namespace airflow
</code></pre>
<p><a href="https://i.stack.imgur.com/VqyEd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VqyEd.png" alt="enter image description here" /></a></p>
<p>You can <a href="https://stackoverflow.com/a/70163535/14704799">copy</a> files out from the pods if needed.</p>
<p>To delete <code>kubectl delete -f airflow.yaml --namespace airflow</code>.</p>
| gohm'c |
<p><code>kubectl patch --help</code> gives an example where you can patch a specific element with a specific operation:</p>
<pre><code>kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new
image"}]'
</code></pre>
<p>However, there's no enumeration of possible <code>op</code> values. What operations are available?</p>
| roim | <p>Kubectl patch uses <a href="http://jsonpatch.com" rel="noreferrer">json patch</a> under the hood.
The possible <code>op</code> values defined by JSON Patch (RFC 6902) are <code>add</code>, <code>remove</code>, <code>replace</code>, <code>move</code>, <code>copy</code>, and <code>test</code>.</p>
<p>Example:</p>
<pre><code>[
{ "op": "replace", "path": "/baz", "value": "boo" },
{ "op": "add", "path": "/hello", "value": ["world"] },
{ "op": "remove", "path": "/foo" }
]
</code></pre>
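<p>For completeness, the same operations can be issued directly with <code>kubectl patch</code> (the resource and label names here are placeholders):</p>
<pre><code>kubectl patch pod valid-pod --type='json' \
  -p='[{"op": "add", "path": "/metadata/labels/env", "value": "dev"}]'

kubectl patch pod valid-pod --type='json' \
  -p='[{"op": "remove", "path": "/metadata/labels/env"}]'
</code></pre>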
| Pulak Kanti Bhowmick |