Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I've set up single-node Kubernetes according to the [official tutorial][1]. </p>
<p>In addition to the official documentation, I've done the following to set up the single-node cluster:</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>Disabled eviction limit:</p>
<pre><code>cat << EOF >> /var/lib/kubelet/config.yaml
evictionHard:
  imagefs.available: 1%
  memory.available: 100Mi
  nodefs.available: 1%
  nodefs.inodesFree: 1%
EOF
systemctl daemon-reload
systemctl restart kubelet
</code></pre>
<p>And set systemd driver for Docker:</p>
<pre><code>cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
systemctl daemon-reload
systemctl restart docker
</code></pre>
<p>How can I temporarily stop the Kubernetes cluster (including all its services, pods, etc.)? I've issued <code>systemctl stop kubelet</code> but I still see some Kubernetes stuff among the processes:</p>
<pre><code>$ ps -elf | grep kube
4 S root 6032 5914 1 80 0 - 2653148 - Feb17 ? 00:35:10 etcd --advertise-client-urls=https://192.168.1.111:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.1.111:2380 --initial-cluster=ubuntu=https://192.168.1.111:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.1.111:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.1.111:2380 --name=ubuntu --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
4 S root 7536 7495 0 80 0 - 35026 - Feb17 ? 00:01:04 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=ubuntu
4 S root 9868 9839 0 80 0 - 34463 - Feb17 ? 00:00:59 /usr/bin/kube-controllers
4 S root 48394 48375 2 80 0 - 36076 - 13:41 ? 00:01:09 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
4 S root 48461 48436 3 80 0 - 52484 - 13:41 ? 00:01:53 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true
4 S root 52675 52586 7 80 0 - 123895 - 14:00 ? 00:02:01 kube-apiserver --advertise-address=192.168.1.111 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
</code></pre>
| Wakan Tanka | <p>If you really want to stop everything that is run by kubernetes/docker, for whatever reason, you can just stop both kubelet and docker.</p>
<p>Run these commands on the node where you want to stop kubernetes/docker:</p>
<pre><code>systemctl stop kubelet
systemctl stop docker
</code></pre>
<p>I strongly recommend draining the node first, but if you just want to kill everything without any caution, that would be one way to stop kubernetes and the running containers on the node :)</p>
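<p>For reference, draining could look something like this (a sketch; the node name is a placeholder, and you may need extra flags such as <code>--delete-emptydir-data</code> depending on your workloads):</p>
<pre><code># evict regular pods from the node before stopping kubelet/docker
kubectl drain <node-name> --ignore-daemonsets
</code></pre>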
<p>Once you want to start everything again, just start docker and the kubelet again, or simply reboot the machine.</p>
<p>cheers</p>
| Henry |
<p>I am running this command in my Mac terminal; I want to submit my test Spark job to one of our k8s clusters:</p>
<pre><code>ID_TOKEN=`kubectl config view --minify -o jsonpath='{.users[0].user.auth-provider.config.id-token}'`
./bin/spark-submit \
--master k8s://https://c2.us-south.containers.cloud.ibm.com:30326 \
--deploy-mode cluster \
--name Hello \
--class scala.example.Hello \
--conf spark.kubernetes.namespace=isap \
--conf spark.executor.instances=3 \
--conf spark.kubernetes.container.image.pullPolicy=Always \
--conf spark.kubernetes.container.image.pullSecrets=default-us-icr-io \
--conf spark.kubernetes.container.image=us.icr.io/cedp-isap/spark-for-apps:2.4.1 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.authenticate.driver.caCertFile=/usr/local/opt/spark/ca.crt \
--conf spark.kubernetes.authenticate.submission.oauthToken=$ID_TOKEN \
local:///opt/spark/jars/interimetl_2.11-1.0.jar
</code></pre>
<p>And I already created the service account "spark", as well as a ClusterRole and ClusterRoleBinding YAML like this:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: isap
  name: pod-mgr
rules:
- apiGroups: ["rbac.authorization.k8s.io", ""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create", "delete"]
</code></pre>
<p>and </p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-mgr-spark
  namespace: isap
subjects:
- kind: ServiceAccount
  name: spark
  namespace: isap
roleRef:
  kind: ClusterRole
  name: pod-mgr
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>But when I run the above spark-submit command, I see a log like this:</p>
<pre><code>20/06/15 02:45:02 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: hello-1592203500709-driver
namespace: isap
labels: spark-app-selector -> spark-0c7f50ab2d21427aac9cf2381cb4bb64, spark-role -> driver
pod uid: 375674d2-784a-4b32-980d-953488c8a8b2
creation time: 2020-06-15T06:45:02Z
service account name: default
volumes: kubernetes-credentials, spark-local-dir-1, spark-conf-volume, default-token-p8pgf
node name: N/A
start time: N/A
container images: N/A
phase: Pending
status: []
</code></pre>
<p>You will notice it is still using the service account "default", rather than "spark".
And the executor pod cannot be created in my k8s cluster. Also, no logs are displayed in the created driver pod.</p>
<p>Could anyone help take a look at what I missed here? Thanks!</p>
| yin xu | <p>I am not sure if you have already figured out the issue. Hope my input is still useful.</p>
<p>There are two places where requests are checked against RBAC.</p>
<p>First, when you execute spark-submit, it calls the k8s web API to create the driver pod. Later, the driver pod calls the k8s API to create the executor pods.</p>
<p>I saw you already created the spark service account, role and rolebinding. You also use them for your driver pod. That is fine, but the problem is that you haven't assigned the user when creating the driver pod, so k8s believes you are still using "system:anonymous".</p>
<p>You can assign the "spark" SA to it via the "spark.kubernetes.authenticate.submission.*" configs; one example is below:</p>
<pre><code> spark-submit ^
--master k8s://xxx ^
--name chen-pi ^
--deploy-mode cluster ^
--driver-memory 8g ^
--executor-memory 16g ^
--executor-cores 2 ^
--conf spark.kubernetes.container.image=gcr.io/spark-operator/spark-py:v3.1.1 ^
--conf spark.kubernetes.file.upload.path=/opt/spark ^
--conf spark.kubernetes.namespace=default^
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark^
--conf spark.kubernetes.authenticate.caCertFile=./local/ca.crt ^
--conf spark.kubernetes.authenticate.oauthTokenFile=./local/token ^
--conf spark.kubernetes.authenticate.submission.oauthTokenFile=./local/token ^
./spark-examples/python/skeleton/skeleton.py
</code></pre>
| chen lin |
<p>I've got a basic architecture set up in Kubernetes: a Laravel container for my application-level code, and a MySQL container for my database. I'm looking to implement a code-compiling API service (as a separate container) that accepts user-generated code; I then run a Docker container to compile the code and return the output to the user. </p>
<p>There are some pretty raw implementations online, but most of them use Docker as a way of compiling user-generated code in an isolated environment (as you should), while the application itself is not hosted using containers or a container management system.</p>
<p>The question is: how can I spin up Docker containers to handle a task and then return the output to my Laravel API container before shutting the container down? </p>
<p>Apparently, running a docker container inside a docker container is not best practice.</p>
<p>The process:</p>
<ol>
<li>User sends a post request to Laravel API container</li>
<li>Laravel API container will take the request and run a docker container to compile code</li>
<li>Temporary docker container will return compiled output to Laravel API container before
shutting down.</li>
<li>Laravel API container will return compiled response to end user.</li>
</ol>
<p>I'm running my app in a Kubernetes cluster, and a Docker/Kubernetes solution is needed. I rather not have to run raw commands of starting a Docker container in my application level code but have a more higher level solution. </p>
| thatguyjono | <p>You can use the Kubernetes Job resource to perform this kind of task. </p>
<p>Job objects can be spawned to run a process and can be set to be automatically cleaned up afterwards. A Job in Kubernetes is a supervisor for pods carrying out a batch process, that is, a process that runs for a certain time to completion. You are able to run multiple pod instances inside one Job (in parallel or sequentially). </p>
<p>Check this <a href="http://kubernetesbyexample.com/jobs/" rel="nofollow noreferrer">page</a> for more details about the jobs. </p>
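<p>A minimal sketch of what such a Job could look like (the image name, command and label below are placeholders, not taken from your setup):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  generateName: compile-job-        # a fresh name per compile request
  labels:
    job-type: compile               # label used later to clean finished jobs up
spec:
  ttlSecondsAfterFinished: 300      # let Kubernetes delete the Job after it finishes
  backoffLimit: 0                   # do not retry user code on failure
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: compiler
        image: compiler-image:latest          # hypothetical image with your toolchain
        command: ["sh", "-c", "gcc /input/main.c -o /tmp/a.out"]   # illustrative compile step
        resources:
          limits:
            cpu: "500m"
            memory: "256Mi"
</code></pre>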
<p>So basically your flow should look like this: </p>
<ol>
<li>User sends request for Laravel API container </li>
<li>Laravel API container needs to interact with the API server in order to create the job.</li>
<li>Pod inside the job will compile code and after compilation, will send a request to the Laravel API pod to deliver the compiled binary.</li>
</ol>
<p>The delivery of the binary should be coded by the user</p>
<ol start="4">
<li>Laravel API container will return compiled response to user. </li>
</ol>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster" rel="nofollow noreferrer">This documentation link</a> shows how to connect to the API, especially the section <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">Accessing the API from a Pod</a></p>
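<p>As a rough sketch of that section applied to this case (the paths below are the standard in-pod service account locations; <code>compile-job.json</code> is a hypothetical file holding the Job manifest serialized as JSON):</p>
<pre><code># inside the Laravel API pod
APISERVER=https://kubernetes.default.svc
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

# create a Job by POSTing its manifest to the batch API
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST "$APISERVER/apis/batch/v1/namespaces/$NAMESPACE/jobs" \
  -d @compile-job.json
</code></pre>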
| acid_fuji |
<p>I am learning kubernetes and minikube, and I am following this tutorial:</p>
<p><a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/</a></p>
<p>But I am running into a problem: I am not able to reach the exposed service. Here are the steps I take:</p>
<pre><code>minikube start
</code></pre>
<p>The cluster info returns</p>
<pre><code>Kubernetes control plane is running at https://127.0.0.1:50121
CoreDNS is running at https://127.0.0.1:50121/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
</code></pre>
<p>Then I am creating a deployment</p>
<pre><code>kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
</code></pre>
<p>and exposing it as a service</p>
<pre><code>kubectl expose deployment hello-minikube1 --type=NodePort --port=8080
</code></pre>
<p>When I list the services, I don't have a URL:</p>
<pre><code>minikube service list
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>NAMESPACE</th>
<th>NAME</th>
<th>TARGET PORT</th>
<th>URL</th>
</tr>
</thead>
<tbody>
<tr>
<td>default</td>
<td>hello-minikube1</td>
<td>8080</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>and when I try to get the URL, I am not getting it; it seems to be empty:</p>
<pre><code>minikube service hello-minikube1 --url
</code></pre>
<p>This is the response (first line is empty):</p>
<pre><code>🏃 Starting tunnel for service hello-minikube2.
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
</code></pre>
<p>Why am I not getting the URL, and why can't I connect to it? What did I miss?</p>
<p>Thanks!</p>
| John P | <p>Please use the <code>minikube ip</code> command to get the IP of minikube and then use the NodePort number with it.</p>
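<p>A sketch of what that could look like (the IP and port values below are just illustrative examples):</p>
<pre><code>$ minikube ip
192.168.49.2      # example output; yours may differ

$ kubectl get svc hello-minikube1 -o jsonpath='{.spec.ports[0].nodePort}'
31234             # example NodePort

$ curl http://192.168.49.2:31234
</code></pre>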
<p>Also, refer to the link below:</p>
<p><a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#:%7E:text=minikube%20tunnel%20runs%20as%20a,on%20the%20host%20operating%20system" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/#:~:text=minikube%20tunnel%20runs%20as%20a,on%20the%20host%20operating%20system</a>.</p>
| Dhiraj Bansal |
<p>I have two GKE clusters in the same Google Cloud project, but using the same PV/PVC YAMLs one cluster can successfully mount the Filestore instance and the other cluster fails. The failed GKE cluster events look like:</p>
<pre><code>Event : Pod [podname] Unable to attach or mount volumes: unmounted volumes=[nfs-pv], unattached volumes=[nfs-pv]: timed out waiting for the condition FailedMount
Event : Pod [podname] MountVolume.SetUp failed for volume "nfs-pv" : mount failed: exit status 1
</code></pre>
<p>The kubelet logs for the failed mount:</p>
<pre><code>pod_workers.go:191] Error syncing pod [guid] ("[podname](guid)"), skipping: unmounted volumes=[nfs-pv], unattached volumes=[nfs-pv]: timed out waiting for the condition
kubelet.go:1622] Unable to attach or mount volumes for pod "podname(guid)": unmounted volumes=[nfs-pv], unattached volumes=[nfs-pv]: timed out waiting for the condition; skipping pod"
mount_linux.go:150] Mount failed: exit status 1
Output: Running scope as unit: run-r1fb543aa9a9246e0be396dd93bb424f6.scope
Mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/c61546e6-9769-4e16-bd0b-c73f904272aa/volumes/kubernetes.io~nfs/nfs-pv --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs 192.168.99.2:/mount /var/lib/kubelet/pods/c61546e6-9769-4e16-bd0b-c73f904272aa/volumes/kubernetes.io~nfs/nfs-pv
Output: mount.nfs: Connection timed out
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs 192.168.99.2:/mount /var/lib/kubelet/pods/c61546e6-9769-4e16-bd0b-c73f904272aa/volumes/kubernetes.io~nfs/nfs-pv]
nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/nfs/c61546e6-9769-4e16-bd0b-c73f904272aa-nfs-pv podName:c61546e6-9769-4e16-bd0b-c73f904272aa nodeName:}" failed. No retries permitted until 2021-09-11 10:01:44.725959505 +0000 UTC m=+820955.435941160 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"nfs-pv\" (UniqueName: \"kubernetes.io/nfs/c61546e6-9769-4e16-bd0b-c73f904272aa-nfs-pv\") pod \"podname\" (UID: \"c61546e6-9769-4e16-bd0b-c73f904272aa\") : mount failed: exit status 1\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/c61546e6-9769-4e16-bd0b-c73f904272aa/volumes/kubernetes.io~nfs/nfs-pv --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs 192.168.99.2:/mount /var/lib/kubelet/pods/c61546e6-9769-4e16-bd0b-c73f904272aa/volumes/kubernetes.io~nfs/nfs-pv\nOutput: Running scope as unit: run-r1fb543aa9a9246e0be396dd93bb424f6.scope\nMount failed: mount failed: exit status 32\nMounting command: chroot\nMounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs 192.168.99.2:/mount /var/lib/kubelet/pods/c61546e6-9769-4e16-bd0b-c73f904272aa/volumes/kubernetes.io~nfs/nfs-pv]\nOutput: mount.nfs: Connection timed out\n"
</code></pre>
<p>Perhaps one clue is that the two clusters are in separate regions and separate subnets? Why can Filestore connect to one cluster but not the other?</p>
| joey t | <p>After many, many hours of debugging I was able to find an answer. Based on: <a href="https://groups.google.com/g/google-cloud-filestore-discuss/c/wKTT6hEzk08" rel="nofollow noreferrer">https://groups.google.com/g/google-cloud-filestore-discuss/c/wKTT6hEzk08</a></p>
<p>The failing GKE cluster was in a subnet outside of RFC1918, and therefore was not accepted as a Filestore client. Once we changed the subnet to a valid RFC1918 range, both GKE clusters were able to successfully mount the Filestore instance.</p>
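<p>For reference, one way to check the subnet range (a sketch; the subnet and region names are placeholders) and compare it against the RFC1918 blocks 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16:</p>
<pre><code># print the primary CIDR of the subnet the cluster's nodes live in
gcloud compute networks subnets describe my-gke-subnet \
  --region=us-central1 \
  --format="value(ipCidrRange)"
</code></pre>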
<p>This was extremely frustrating, given the RFC1918 requirement is not clear in the documentation or troubleshooting - and in fact other Google Cloud services worked fine with the invalid RFC1918 subnet.</p>
| joey t |
<p>I am building a microservice application. I am unable to send a request <strong>to one of the services</strong> using Postman:</p>
<p>The endpoint I am sending a <code>POST</code> request to using Postman:</p>
<pre><code>http://cultor.dev/api/project
</code></pre>
<blockquote>
<p>Error : "project-srv" does not have any active Endpoint (ingress-nginx returns 503 error)</p>
</blockquote>
<h3>Note</h3>
<p>All the other microservices are running fine which use exact same config.</p>
<blockquote>
<p><strong>ingress-nginx config:</strong></p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: cultor.dev
      http:
        paths:
          - path: /api/project/?(.*)
            backend:
              serviceName: project-srv
              servicePort: 3000
          - path: /api/profile/?(.*)
            backend:
              serviceName: profile-srv
              servicePort: 3000
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
</code></pre>
<blockquote>
<p><strong>ClusterIP services:</strong></p>
</blockquote>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-srv ClusterIP 10.245.52.208 <none> 3000/TCP 40m
client-srv ClusterIP 10.245.199.94 <none> 3000/TCP 39m
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 24d
nats-srv ClusterIP 10.245.1.58 <none> 4222/TCP,8222/TCP 39m
profile-srv ClusterIP 10.245.208.174 <none> 3000/TCP 39m
project-srv ClusterIP 10.245.131.56 <none> 3000/TCP 39m
</code></pre>
<h3>LOGS</h3>
<blockquote>
<p><strong>inress-nginx:</strong></p>
</blockquote>
<pre><code>45.248.29.8 - - [02/Oct/2020:15:16:52 +0000] "POST /api/project/507f1f77bcf86cd799439011 HTTP/1.1" 503 197 "-" "PostmanRuntime/7.26.5" 591 0.000 [default-project-srv-3000] [] - - - - e1ae0615f49091786d56cab2bb9c94c6
W1002 15:17:59.712320 8 controller.go:916] Service "default/project-srv" does not have any active Endpoint.
I1002 15:17:59.814364 8 main.go:115] successfully validated configuration, accepting ingress ingress-service in namespace default
W1002 15:17:59.827616 8 controller.go:916] Service "default/project-srv" does not have any active Endpoint.
</code></pre>
| Karan Kumar | <p>It was an error with the labels, as suggested by @Kamol Hasan.</p>
<p>The pod selector label in the 'Deployment' config was not matching the selector in its 'Service' config.</p>
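<p>For illustration, a minimal sketch of what the matching labels need to look like (names and image below are placeholders, not taken from the actual manifests):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-depl
spec:
  selector:
    matchLabels:
      app: project            # must match the pod template labels below
  template:
    metadata:
      labels:
        app: project          # ...and the Service selector
    spec:
      containers:
        - name: project
          image: registry.example.com/project:latest   # placeholder image
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: project-srv
spec:
  selector:
    app: project              # must match the pod template labels above
  ports:
    - port: 3000
      targetPort: 3000
</code></pre>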
| Karan Kumar |
<p>I am trying to render flower, which lies in a Kubernetes pod, in my other web-interface pod. Is there any way I can address the service from outside the cluster, since the rendering happens on my computer and not on the pod?</p>
<p>I tried NodePort but it's not working. I used something like this but it cannot reach the server. I am quite new to kubernetes and would appreciate any help.</p>
<p>'''http://celery-flowe-service:5555'''</p>
| Codein | <p>You can connect to the service from a different pod in the same cluster using the service DNS in Kubernetes. Kubernetes internally creates a DNS record for every service you create.</p>
<p>The internal DNS inside cluster will be</p>
<pre><code><service-name>.<namespace-name>.svc.cluster.local
</code></pre>
<p>Example, if your service name is test-service and is deployed in namespace default, then</p>
<pre><code>test-service.default.svc.cluster.local
</code></pre>
<p>Using the above DNS name, you can access the service, and the pod it points to, from within the cluster.</p>
<p>If you want to connect to it from outside the cluster, you can create an external load balancer: set the Service type to <code>LoadBalancer</code>, and using this you can access it from outside the cluster.</p>
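<p>A minimal sketch of such a Service (assuming the flower pods carry the label <code>app: celery-flower</code>; adjust the selector to your deployment's labels):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: celery-flower-service
spec:
  type: LoadBalancer        # gives the service an external IP on supported platforms
  selector:
    app: celery-flower      # placeholder label; must match your flower pods
  ports:
    - port: 5555
      targetPort: 5555
</code></pre>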
<p>If you go with a more advanced Kubernetes deployment, you can use Ingress/Traefik controllers to expose your services in Kubernetes.</p>
| Sadhvik Chirunomula |
<p>I've deployed a k8s cluster locally with kind. The Firebase emulator runs in a pod inside the cluster and has a ClusterIP Service assigned. When I send a request to the <em><strong>kind-firebase.yaml</strong></em> pod from the <em><strong>service.yaml</strong></em> pod, the request fails because a connection cannot be established.</p>
<p><strong>the error</strong>:</p>
<pre><code>failed to establish a connection:
Post \"http://firebase-service:9099/identitytoolkit.googleapis.com/v1/projects/demo-test
</code></pre>
<p><strong>CONFIGS:</strong></p>
<ul>
<li><em><strong>firebase.json:</strong></em></li>
</ul>
<pre class="lang-json prettyprint-override"><code>{
"emulators": {
"auth": {
"port": 9099,
"host": "0.0.0.0"
},
"ui": {
"enabled": true,
"host": "0.0.0.0",
"port": 4000
}
}
}
</code></pre>
<ul>
<li><em><strong>kind-firebase.yaml:</strong></em></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: firebase-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: firebase-depl
namespace: firebase-system
spec:
selector:
matchLabels:
app: firebase-emulator
replicas: 1
template:
metadata:
labels:
app: firebase-emulator
spec:
containers:
- name: firebase-emulator
image: fb-emulator
resources:
limits:
cpu: "1000m" # Up to 1 full core
requests:
cpu: "1000m" # Use 1 full core
imagePullPolicy: IfNotPresent
ports:
- name: auth
containerPort: 9099
- name: emulator-ui
containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
name: firebase-service
namespace: firebase-system
spec:
type: ClusterIP
selector:
app: firebase-emulator
ports:
- name: auth
port: 9099
targetPort: auth
- name: emulator-ui
port: 4000
targetPort: emulator-ui
</code></pre>
<ul>
<li><em><strong>service.yaml:</strong></em></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: auth-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
namespace: auth-system
spec:
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
terminationGracePeriodSeconds: 60
volumes:
- name: google-cloud-key
secret:
secretName: firebase-sacc
containers:
# auth-api container configuration
- name: auth-api
image: auth-api-image
volumeMounts:
- name: google-cloud-key
mountPath: /var/secrets/google
readOnly: true
ports:
- name: auth-api
containerPort: 3000
- name: auth-api-debug
containerPort: 8080
readinessProbe: # readiness probes mark the service available to accept traffic.
httpGet:
path: /debug/readiness
port: 8080
initialDelaySeconds: 15
periodSeconds: 15
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 2
livenessProbe: # liveness probes mark the service alive or dead (to be restarted).
httpGet:
path: /debug/liveness
port: 8080
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 2
env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBERNETES_PODNAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: KUBERNETES_NAMESPACE_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: KUBERNETES_NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/sacc.json
- name: FIREBASE_AUTH_EMULATOR_HOST
value: firebase-service:9099
- name: GCLOUD_PROJECT
value: demo-test
---
apiVersion: v1
kind: Service
metadata:
name: auth-service
namespace: auth-system
spec:
type: ClusterIP
selector:
app: auth
ports:
- name: auth-api
port: 3000
targetPort: auth-api
- name: auth-api-debug
port: 8080
targetPort: auth-api-debug
</code></pre>
<ul>
<li>in the file you'll see these env variables:</li>
</ul>
<pre><code>- name: FIREBASE_AUTH_EMULATOR_HOST
  value: firebase-service:9099
- name: GCLOUD_PROJECT
  value: demo-test
</code></pre>
<p>By using them, the Firebase SDK used inside the app represented by <em><strong>service.yaml</strong></em> is configured to use the Firebase emulator and not the one in the cloud.</p>
<p><strong>Screenshots with the cluster:</strong></p>
<ol>
<li>Here we can see the namespaces available.</li>
</ol>
<ul>
<li>In the auth-system the <em><strong>service.yaml</strong></em> pod will be present.</li>
<li>In the firebase-system the <em><strong>kind-firebase.yaml</strong></em> pod will be present.</li>
</ul>
<p><a href="https://i.stack.imgur.com/s5dcS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s5dcS.png" alt="enter image description here" /></a></p>
<ol start="2">
<li><em><strong>service.yaml</strong></em> pod</li>
</ol>
<p><a href="https://i.stack.imgur.com/zPwcD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zPwcD.png" alt="enter image description here" /></a></p>
<ol start="3">
<li><em><strong>kind-firebase.yaml</strong></em> pod</li>
</ol>
<p><a href="https://i.stack.imgur.com/x7cWw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x7cWw.png" alt="enter image description here" /></a></p>
<ol start="4">
<li>Here we can see the logs inside <em><strong>service.yaml</strong></em> pod when I send a request to the <em><strong>kind-firebase.yaml</strong></em> pod... the error:</li>
</ol>
<pre><code>failed to establish a connection:
Post \"http://firebase-service:9099/identitytoolkit.googleapis.com/v1/projects/demo-test
</code></pre>
<p><a href="https://i.stack.imgur.com/j2nck.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j2nck.png" alt="enter image description here" /></a></p>
<p>Thx for any help!</p>
| Robert Mihai | <p>After connecting to the firebase pod and checking DNS Resolution, the service name must be:</p>
<pre><code>firebase-service.firebase-system.svc.cluster.local:9099
</code></pre>
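<p>One way to verify this resolution (a sketch, using a throwaway busybox pod; the image tag is just an example) is:</p>
<pre><code>kubectl run -n auth-system dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup firebase-service.firebase-system.svc.cluster.local
</code></pre>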
<p>So the env variable from <em><strong>service.yaml</strong></em> must be:</p>
<pre class="lang-yaml prettyprint-override"><code>
- name: FIREBASE_AUTH_EMULATOR_HOST
value: firebase-service.firebase-system.svc.cluster.local:9099
</code></pre>
<p>Everything works fine now.</p>
| Robert Mihai |
<p>I want to create a Kubernetes CronJob that deletes resources (Namespace, ClusterRole, ClusterRoleBinding) that may be left over (initially, the criteria will be "has label=Something" and "is older than 30 minutes"). Each namespace contains resources for a test run.</p>
<p>I created the CronJob, a ServiceAccount, a ClusterRole, a ClusterRoleBinding, and assigned the service account to the pod of the cronjob.</p>
<p>The cronjob uses an image that contains kubectl, and some script to select the correct resources.</p>
<p>My first draft looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app
namespace: default
labels:
app: my-app
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-app
namespace: default
labels:
app: my-app
spec:
concurrencyPolicy: Forbid
schedule: "*/1 * * * *"
jobTemplate:
# job spec
spec:
template:
# pod spec
spec:
serviceAccountName: my-app
restartPolicy: Never
containers:
- name: my-app
image: image-with-kubectl
env:
- name: MINIMUM_AGE_MINUTES
value: '2'
command: [sh, -c]
args:
# final script is more complex than this
- |
kubectl get namespaces
kubectl get clusterroles
kubectl get clusterrolebindings
kubectl delete Namespace,ClusterRole,ClusterRoleBinding --all-namespaces --selector=bla=true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-app
labels:
app: my-app
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: my-app
subjects:
- kind: ServiceAccount
name: my-app
namespace: default
apiGroup: ""
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: my-app
labels:
app: my-app
rules:
- apiGroups: [""]
resources:
- namespaces
- clusterroles
- clusterrolebindings
verbs: [list, delete]
</code></pre>
<p>The cronjob is able to list and delete namespaces, but not cluster roles or cluster role bindings. What am I missing?</p>
<p>(Actually, I'm testing this with a Job first, before moving to a CronJob):</p>
<pre><code>NAME STATUS AGE
cattle-system Active 16d
default Active 16d
fleet-system Active 16d
gitlab-runner Active 7d6h
ingress-nginx Active 16d
kube-node-lease Active 16d
kube-public Active 16d
kube-system Active 16d
security-scan Active 16d
Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:default:my-app" cannot list resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:default:my-app" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:default:my-app" cannot list resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:default:my-app" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope`
</code></pre>
| jleeothon | <p>You need to change your ClusterRole like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-app
  labels:
    app: my-app
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs: [list, delete]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterroles
      - clusterrolebindings
    verbs: [list, delete]
</code></pre>
<p>The resources are now in the right apiGroup.</p>
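<p>If helpful, you can verify the result from the service account's point of view (a sketch using <code>kubectl auth can-i</code>; the names are taken from the manifests above):</p>
<pre><code>kubectl auth can-i list clusterroles --as=system:serviceaccount:default:my-app
kubectl auth can-i delete clusterrolebindings --as=system:serviceaccount:default:my-app
</code></pre>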
| leachim742 |
<p>I want to get my nginx-ingress metrics added to Prometheus so I can see my app logs shown in Loki. However, I have not had much success. I have been following the guide <a href="https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/04-setup-prometheus-stack/README.md#step-3---creating-the-nginx-backend-services" rel="nofollow noreferrer">here</a>, but I want to do this for nginx-ingress instead; the guide is for the Ambassador ingress.</p>
<p>Nginx-Ingress installed using the following command:</p>
<pre><code>NGINX_CHART_VERSION="4.0.6";
helm upgrade ingress-nginx ingress-nginx/ingress-nginx --version "$NGINX_CHART_VERSION" --namespace ingress-nginx -f "03-setup-ingress-controller/assets/manifests/nginx-values-v${NGINX_CHART_VERSION}.yaml" --set controller.metrics.enabled=true --set-string controller.podAnnotations."prometheus\.io/scrape"="true" --set-string controller.podAnnotations."prometheus\.io/port"="10254"
</code></pre>
<p>So I have my nginx-ingress metrics exposed in my k8 cluster as you can see:</p>
<pre><code>kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.245.57.3 <REDACTED> 80:31150/TCP,443:31740/TCP 9d
ingress-nginx-controller-admission ClusterIP 10.245.186.61 <none> 443/TCP 9d
ingress-nginx-controller-metrics ClusterIP 10.245.240.243 <none> 10254/TCP 20h
</code></pre>
<p>I did a Helm upgrade with a values file that references my ingress-nginx-controller-metrics service:</p>
<pre><code> HELM_CHART_VERSION="27.2.1";
helm upgrade kube-prom-stack prometheus-community/kube-prometheus-stack \
--version "${HELM_CHART_VERSION}" --namespace monitoring \
-f "04-setup-prometheus-stack/assets/manifests/prom-stack-values-v${HELM_CHART_VERSION}.yaml"
</code></pre>
<p>and I updated the values file to contain the following (customized from the <a href="https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/04-setup-prometheus-stack/assets/manifests/prom-stack-values-v17.1.3.yaml" rel="nofollow noreferrer">git repo</a> config):</p>
<pre><code>## Starter Kit components service monitors
#
additionalServiceMonitors:
  - name: "ingress-nginx-monitor"
    selector:
      matchLabels:
        service: "ingress-nginx-controller"
    namespaceSelector:
      matchNames:
        - ingress-nginx
    endpoints:
      - port: "ingress-nginx-controller-metrics"
        path: /metrics
        scheme: http
</code></pre>
<p>However, when I check the Prometheus Targets I don't see the nginx ingress there.</p>
| Katlock | <p>additionalServiceMonitors for the kube-prom-stack should be:</p>
<pre class="lang-yaml prettyprint-override"><code> additionalServiceMonitors:
- name: "ingress-nginx-monitor"
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
namespaceSelector:
matchNames:
- ingress-nginx
endpoints:
- port: "metrics"
</code></pre>
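<p>One quick check (a sketch) is to confirm that the metrics Service actually carries the label the ServiceMonitor selects on:</p>
<pre><code>kubectl get svc -n ingress-nginx ingress-nginx-controller-metrics --show-labels
</code></pre>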
| leachim742 |
<p>I think I am misunderstanding Kubernetes CronJobs. On the CKAD exam there was a question to have a CronJob run every minute, but it should start after an arbitrary amount of time. I don't see any properties for CronJobs or Jobs to have them start after a specific time. Should that be part of the cron string or am I completely misunderstanding?</p>
| humiliatedpenguin | <p>You could do something like</p>
<p><em>@reboot sleep 60 && script.sh</em></p>
<p>though you don't mention boot time specifically. You can also add sleep to the crontab.</p>
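<p>For example, a crontab entry along those lines (the 30-second delay and the script path are just placeholders) could be:</p>
<pre><code># run every minute, but delay each run by 30 seconds
* * * * * sleep 30 && /path/to/script.sh
</code></pre>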
<p>Another way is to create a systemd service (<strong>note:</strong> on systems with systemd installed)</p>
<pre><code>[Unit]
Description=cronjob with delay
After=(some criteria)
[Service]
Type=oneshot
ExecStart=/pathto/script.sh
[Install]
WantedBy=
</code></pre>
| stemixrf |
<p>What I am after is to have 2 applications running in a pod, each of them in its own container. Application A is a simple Spring Boot application which makes HTTP requests to the other application, which is deployed on Kubernetes. The purpose of Application B (the proxy) is to intercept that HTTP request and add an Authorization token to its header. Application B is mitmdump with a Python script. The issue I am having is that when deployed on Kubernetes, the proxy does not seem to intercept any traffic at all (I tried to reproduce this issue on my local machine and didn't find any troubles, so I guess the issue lies somewhere within the networking inside the pod). Can someone have a look into it and guide me on how to solve it?</p>
<p><a href="https://i.stack.imgur.com/CpQ8k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CpQ8k.png" alt="enter image description here"></a></p>
<p>Here's the deployment and service file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  namespace: myown
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
        - name: application-a
          image: registry.gitlab.com/application-a
          resources:
            requests:
              memory: "230Mi"
              cpu: "100m"
            limits:
              memory: "460Mi"
              cpu: "200m"
          imagePullPolicy: Always
          ports:
            - containerPort: 8090
          env:
            - name: "HTTP_PROXY"
              value: "http://localhost:1030"
        - name:
          image: registry.gitlab.com/application-b-proxy
          resources:
            requests:
              memory: "230Mi"
              cpu: "100m"
            limits:
              memory: "460Mi"
              cpu: "200m"
          imagePullPolicy: Always
          ports:
            - containerPort: 1080
---
kind: Service
apiVersion: v1
metadata:
  name: proxy-svc
  namespace: myown
spec:
  ports:
    - nodePort: 31000
      port: 8090
      protocol: TCP
      targetPort: 8090
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
</code></pre>
<p>And here's how I build the Docker image of mitmproxy/mitmdump:</p>
<pre><code>FROM mitmproxy/mitmproxy:latest
ADD get_token.py .
WORKDIR ~/mit_docker
COPY get_token.py .
EXPOSE 1080:1080
ENTRYPOINT ["mitmdump","--listen-port", "1030", "-s","get_token.py"]
</code></pre>
<p><strong>EDIT</strong></p>
<p>I created two dummy docker images in order to have this scenario recreated locally.</p>
<p><strong>APPLICATION A</strong> - a Spring Boot application with a job that makes an HTTP GET request every minute to a specified but irrelevant address; the address should be accessible. The response should be 302 FOUND. Every time an HTTP request is made, a message appears in the logs of the application.</p>
<p><strong>APPLICATION B</strong> - a proxy application which is supposed to proxy the docker container with application A. Every request is logged.</p>
<ol>
<li><p>Make sure your docker proxy config is set to listen to <a href="http://localhost:8080" rel="nofollow noreferrer">http://localhost:8080</a> - <a href="https://docs.docker.com/network/proxy/" rel="nofollow noreferrer">you can check how to do so here</a></p></li>
<li><p>Open a terminal and run this command:</p></li>
</ol>
<pre><code> docker run -p 8080:8080 -ti registry.gitlab.com/dyrekcja117/proxyexample:application-b-proxy
</code></pre>
<ol start="3">
<li>Open another terminal and run this command:</li>
</ol>
<pre><code> docker run --network="host" registry.gitlab.com/dyrekcja117/proxyexample:application-a
</code></pre>
<ol start="4">
<li>Go into the shell with the container of application A in 3rd terminal:</li>
</ol>
<pre><code> docker exec -ti <name of docker container> sh
</code></pre>
<p>and try to make curl to whatever address you want. </p>
<p>And the issue I am struggling with is that when I make a curl request from inside the container with Application A, it is intercepted by my proxy and can be seen in the logs. But whenever Application A itself makes the same request, it is not intercepted. The same thing happens on Kubernetes.</p>
| uiguyf ufdiutd | <p>Let's first wrap up the facts we discover over our troubleshooting discussion in the comments:</p>
<ul>
<li>Your need is that APP-A receives a HTTP request and a token needs to be added inflight by PROXY before sending the request to your datastorage.</li>
<li>Every container in a Pod shares the network namespace, including the IP address and network ports. Containers <em>inside a Pod</em> can communicate with one another using <code>localhost</code>, source <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#networking" rel="nofollow noreferrer">here</a>.</li>
<li>You were able to log in to container <code>application-a</code> and send a <code>curl</code> request to container <code>application-b-proxy</code> on port <code>1030</code>, proving the above statement.</li>
<li>The problem is that your proxy is not intercepting the request as expected.</li>
<li>You mention that you were able to make it work on localhost, but on localhost the proxy has more power than inside a container.</li>
<li>Since I don't have access neither to your <code>app-a</code> code nor the mitmproxy <code>token.py</code> I will give you a general example how to redirect traffic from <code>container-a</code> to <code>container-b</code></li>
<li>In order to make it work, I'll use <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass" rel="nofollow noreferrer">NGINX Proxy Pass</a>: it simply proxies the request to <code>container-b</code>.</li>
</ul>
<hr>
<p><strong>Reproduction:</strong></p>
<ul>
<li><p>I'll use a nginx server as <code>container-a</code>.</p></li>
<li><p>I'll build it with this <code>Dockerfile</code>:</p></li>
</ul>
<pre><code>FROM nginx:1.17.3
RUN rm /etc/nginx/conf.d/default.conf
COPY frontend.conf /etc/nginx/conf.d
</code></pre>
<ul>
<li>I'll add this configuration file <code>frontend.conf</code>:</li>
</ul>
<pre><code>server {
listen 80;
location / {
proxy_pass http://127.0.0.1:8080;
}
}
</code></pre>
<p>It instructs that traffic should be sent to <code>container-b</code>, which is listening on <code>port 8080</code> inside the same pod.</p>
<ul>
<li>I'll build this image as <code>nginxproxy</code> in my local repo: </li>
</ul>
<pre><code>$ docker build -t nginxproxy .
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginxproxy latest 7c203a72c650 4 minutes ago 126MB
</code></pre>
<ul>
<li>Now the <code>full.yaml</code> deployment:</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: proxy-deployment
labels:
app: application-a
spec:
replicas: 1
selector:
matchLabels:
app: application-a
template:
metadata:
labels:
app: application-a
spec:
containers:
- name: container-a
image: nginxproxy:latest
ports:
- containerPort: 80
imagePullPolicy: Never
- name: container-b
image: echo8080:latest
ports:
- containerPort: 8080
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: proxy-svc
spec:
ports:
- nodePort: 31000
port: 80
protocol: TCP
targetPort: 80
selector:
app: application-a
sessionAffinity: None
type: NodePort
</code></pre>
<p><strong>NOTE:</strong> I set <code>imagePullPolicy</code> as <code>Never</code> because I'm using my local docker image cache.</p>
<p>I'll list the changes I made to help you link it to your current environment:</p>
<ul>
<li><code>container-a</code> is doing the work of your <code>application-a</code> and I'm serving <code>nginx</code> on <code>port 80</code> where you are using <code>port 8090</code></li>
<li><p><code>container-b</code> is receiving the request, as your <code>application-b-proxy</code>. The image I'm using was based on <code>mendhak/http-https-echo</code>, normally it listens on <code>port 80</code>, I've made a custom image just changing to listen on <code>port 8080</code> and named it <code>echo8080</code>.</p></li>
<li><p>First I created an <code>nginx</code> pod and exposed it alone to show you it's running (since it's empty in content, it will return <code>bad gateway</code>, but you can see the output is from nginx):</p></li>
</ul>
<pre><code>$ kubectl apply -f nginx.yaml
pod/nginx created
service/nginx-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 64s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc NodePort 10.103.178.109 <none> 80:31491/TCP 66s
$ curl http://192.168.39.51:31491
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.3</center>
</body>
</html>
</code></pre>
<ul>
<li>I deleted the <code>nginx</code> pod and created an <code>echo-app</code> pod and exposed it to show you the response it gives when directly curled from outside:</li>
</ul>
<pre><code>$ kubectl apply -f echo.yaml
pod/echo created
service/echo-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echo 1/1 Running 0 118s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc NodePort 10.102.168.235 <none> 8080:32116/TCP 2m
$ curl http://192.168.39.51:32116
{
"path": "/",
"headers": {
"host": "192.168.39.51:32116",
"user-agent": "curl/7.52.1",
},
"method": "GET",
"hostname": "192.168.39.51",
"ip": "::ffff:172.17.0.1",
"protocol": "http",
"os": {
"hostname": "echo"
},
</code></pre>
<ul>
<li>Now I'll apply the <code>full.yaml</code>:</li>
</ul>
<pre><code>$ kubectl apply -f full.yaml
deployment.apps/proxy-deployment created
service/proxy-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
proxy-deployment-9fc4ff64b-qbljn 2/2 Running 0 1s
$ k get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
proxy-svc NodePort 10.103.238.103 <none> 80:31000/TCP 31s
</code></pre>
<ul>
<li>Now the Proof of concept, from outside the cluster, I'll send a curl to my node IP <code>192.168.39.51</code> in port <code>31000</code> which is sending the request to <code>port 80</code> on the pod (handled by <code>nginx</code>):</li>
</ul>
<pre><code>$ curl http://192.168.39.51:31000
{
"path": "/",
"headers": {
"host": "127.0.0.1:8080",
"user-agent": "curl/7.52.1",
},
"method": "GET",
"hostname": "127.0.0.1",
"ip": "::ffff:127.0.0.1",
"protocol": "http",
"os": {
"hostname": "proxy-deployment-9fc4ff64b-qbljn"
},
</code></pre>
<ul>
<li>As you can see, the response has all the parameters of the pod, indicating it was sent from <code>127.0.0.1</code> instead of a public IP, showing that the NGINX is proxying the request to <code>container-b</code>.</li>
</ul>
<hr>
<p><strong>Considerations:</strong></p>
<ul>
<li>This example was created to show you how the communication works inside kubernetes.</li>
<li>You will have to check how your <code>application-a</code> is handling the requests and edit it to send the traffic to your proxy.</li>
<li>Here are a few links with tutorials and explanation that could help you port your application to kubernetes environment:
<ul>
<li><a href="https://gist.github.com/soheilhy/8b94347ff8336d971ad0" rel="nofollow noreferrer">Virtual Hosts on nginx</a></li>
<li><a href="https://www.magalix.com/blog/implemeting-a-reverse-proxy-server-in-kubernetes-using-the-sidecar-pattern" rel="nofollow noreferrer">Implementing a Reverse proxy Server in Kubernetes Using the Sidecar Pattern</a></li>
<li><a href="https://www.nginx.com/blog/validating-oauth-2-0-access-tokens-nginx/" rel="nofollow noreferrer">Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus</a></li>
<li><a href="https://developer.okta.com/blog/2018/08/28/nginx-auth-request" rel="nofollow noreferrer">Use nginx to Add Authentication to Any Application</a></li>
<li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">Connecting a Front End to a Back End Using a Service</a></li>
<li><a href="https://cloud.google.com/community/tutorials/transparent-proxy-and-filtering-on-k8s" rel="nofollow noreferrer">Transparent Proxy and Filtering on K8s</a></li>
</ul></li>
</ul>
<p>I Hope to help you with this example.</p>
| Will R.O.F. |
<p>I am running a k3s cluster on some Raspberry Pi 4s in my local network. I have a DNS server (dnsmasq) on the master node. I want the pods of my cluster to use that DNS server, via coredns. However, when I ping an address from within a pod, I always go via the Google DNS servers and bypass my local DNS rules.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
data:
Corefile: |
.:53 {
errors
health
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
hosts /etc/coredns/NodeHosts {
reload 1s
fallthrough
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
</code></pre>
<p>This is my coredns config. As you can see, there is the <code>forward . /etc/resolv.conf</code> directive.</p>
<p>My /etc/resolv.conf:</p>
<pre><code>domain home
nameserver 127.0.0.1
</code></pre>
<p>Any suggestions ?</p>
| samsja | <p>Thanks guys, I changed my coredns config to:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
metadata:
annotations:
name: coredns
namespace: kube-system
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
hosts /etc/coredns/NodeHosts {
reload 1s
fallthrough
}
prometheus :9153
forward . <master node ip>
cache 30
loop
reload
loadbalance
}
NodeHosts: |
<master node ip> master
<slave node ip> slave
</code></pre>
<p>And it worked!</p>
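<p>For completeness, a sketch of how one could apply the edited ConfigMap and restart CoreDNS so it picks up the change (the file name is a placeholder, and the deployment name <code>coredns</code> is the k3s default):</p>
<pre><code>kubectl -n kube-system apply -f coredns-configmap.yaml
kubectl -n kube-system rollout restart deployment coredns
</code></pre>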
| samsja |
<p>How can I add a custom <code>config.json</code> and add a custom style <code>/styles/custom.json</code> to tileserver-gl using Kubernetes? Here is what I have so far for my kubernetes manifest file. Tileserver-gl is up and running fine but I do not see my custom theme that I defined. It does not look like my custom config.json file is being applied. Is using <code>configMap</code> and <code>volumeMounts</code> the right approach?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: map-tile-server
  namespace: test
  labels:
    app: map-tile-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: map-tile-server
  template:
    metadata:
      namespace: test
      labels:
        app: map-tile-server
    spec:
      containers:
        - name: map-tile-server
          image: klokantech/tileserver-gl:v2.6.0
          ports:
            - containerPort: 8080
              name: http
          volumeMounts:
            - name: "map-tile-server-config"
              mountPath: "/config.json"
            - name: "map-tile-server-style-config"
              mountPath: "/styles/custom.json"
          resources:
            limits:
              memory: "256Mi"
              cpu: "1"
      volumes:
        - name: "map-tile-server-config"
          configMap:
            name: "map-tile-server-config"
        - name: "map-tile-server-style-config"
          configMap:
            name: "map-tile-server-style-config"
---
apiVersion: v1
kind: ConfigMap
data:
config.json: "{ \"options\": { \"paths\": { \"root\": \"\", \"fonts\": \"\", \"styles\": \"styles\", \"mbtiles\": \"data\" }, \"serveStaticMaps\": true, \"formatQuality\": { \"jpeg\": 90, \"webp\": 90 }, \"maxSize\": 8192, \"pbfAlias\": \"pbf\" }, \"styles\": { \"custom\": { \"style\": \"custom.json\", \"tilejson\": { \"bounds\": [-180, -90, 180, 90] } } }, \"data\": { \"v3\": { \"mbtiles\": \"zurich.mbtiles\" } } }"
metadata:
name: map-tile-server-config
namespace: test
---
apiVersion: v1
kind: ConfigMap
data:
custom.json: "{ \"version\": 8, \"name\": \"Custom\", \"metadata\": { \"mapbox:autocomposite\": false, \"mapbox:type\": \"template\", \"maputnik:renderer\": \"mbgljs\", \"openmaptiles:version\": \"3.x\", \"openmaptiles:mapbox:owner\": \"openmaptiles\", \"openmaptiles:mapbox:source:url\": \"mapbox://openmaptiles.4qljc88t\" }, \"sources\": {\"openmaptiles\": {\"type\": \"vector\", \"url\": \"mbtiles://{v3}\"}}, \"sprite\": \"\", \"glyphs\": \"{fontstack}/{range}.pbf\", \"layers\": [ {\"id\": \"background\",\"type\": \"background\",\"layout\": {\"visibility\": \"visible\"},\"paint\": {\"background-color\": \"rgba(49, 52, 56, 1)\"} }, {\"id\": \"landuse-residential\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"landuse\",\"filter\": [ \"all\", [\"==\", \"$type\", \"Polygon\"], [\"in\", \"class\", \"residential\", \"suburb\", \"neighbourhood\"]],\"layout\": {\"visibility\": \"none\"},\"paint\": {\"fill-color\": \"hsl(47, 13%, 86%)\", \"fill-opacity\": 0.7} }, {\"id\": \"landcover_grass\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"landcover\",\"filter\": [\"==\", \"class\", \"grass\"],\"layout\": {\"visibility\": \"none\"},\"paint\": {\"fill-color\": \"hsl(82, 46%, 72%)\", \"fill-opacity\": 0.45} }, {\"id\": \"landcover_wood\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"landcover\",\"filter\": [\"==\", \"class\", \"wood\"],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"fill-color\": \"hsl(82, 46%, 72%)\", \"fill-opacity\": {\"base\": 1, \"stops\": [[8, 0.6], [22, 1]]}} }, {\"id\": \"water\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"water\",\"filter\": [\"all\", [\"==\", \"$type\", \"Polygon\"], [\"!=\", \"intermittent\", 1]],\"layout\": {\"visibility\": \"visible\"},\"paint\": {\"fill-color\": \"rgba(34, 35, 40, 1)\"} }, {\"id\": \"water_intermittent\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"water\",\"filter\": [\"all\", [\"==\", \"$type\", \"Polygon\"], [\"==\", \"intermittent\", 1]],\"layout\": {\"visibility\": \"none\"},\"paint\": {\"fill-color\": \"hsl(205, 56%, 73%)\", \"fill-opacity\": 0.7} }, {\"id\": \"landcover-ice-shelf\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"landcover\",\"filter\": [\"==\", \"subclass\", \"ice_shelf\"],\"layout\": {\"visibility\": \"none\"},\"paint\": {\"fill-color\": \"hsl(47, 26%, 88%)\", \"fill-opacity\": 0.8} }, {\"id\": \"landcover-glacier\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"landcover\",\"filter\": [\"==\", \"subclass\", \"glacier\"],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"fill-color\": \"hsl(47, 22%, 94%)\", \"fill-opacity\": {\"base\": 1, \"stops\": [[0, 1], [8, 0.5]]}} }, {\"id\": \"landcover_sand\",\"type\": \"fill\",\"metadata\": {},\"source\": \"openmaptiles\",\"source-layer\": \"landcover\",\"filter\": [\"all\", [\"in\", \"class\", \"sand\"]],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"fill-antialias\": false, \"fill-color\": \"rgba(232, 214, 38, 1)\", \"fill-opacity\": 0.3} }, {\"id\": \"landuse\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"landuse\",\"filter\": [\"==\", \"class\", \"agriculture\"],\"layout\": {\"visibility\": \"none\"},\"paint\": {\"fill-color\": \"#eae0d0\"} }, {\"id\": \"landuse_overlay_national_park\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"landcover\",\"filter\": [\"==\", \"class\", \"national_park\"],\"layout\": {\"visibility\": \"none\"},\"paint\": { 
\"fill-color\": \"#E1EBB0\", \"fill-opacity\": {\"base\": 1, \"stops\": [[5, 0], [9, 0.75]]}} }, {\"id\": \"waterway-tunnel\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"waterway\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"tunnel\"]],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(205, 56%, 73%)\", \"line-dasharray\": [3, 3], \"line-gap-width\": {\"stops\": [[12, 0], [20, 6]]}, \"line-opacity\": 1, \"line-width\": {\"base\": 1.4, \"stops\": [[8, 1], [20, 2]]}} }, {\"id\": \"waterway\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"waterway\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"!in\", \"brunnel\", \"tunnel\", \"bridge\"], [\"!=\", \"intermittent\", 1]],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(205, 56%, 73%)\", \"line-opacity\": 1, \"line-width\": {\"base\": 1.4, \"stops\": [[8, 1], [20, 8]]}} }, {\"id\": \"waterway_intermittent\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"waterway\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"!in\", \"brunnel\", \"tunnel\", \"bridge\"], [\"==\", \"intermittent\", 1]],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(205, 56%, 73%)\", \"line-opacity\": 1, \"line-width\": {\"base\": 1.4, \"stops\": [[8, 1], [20, 8]]}, \"line-dasharray\": [2, 1]} }, {\"id\": \"tunnel_railway_transit\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"minzoom\": 0,\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"tunnel\"], [\"==\", \"class\", \"transit\"]],\"layout\": { \"line-cap\": \"butt\", \"line-join\": \"miter\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(34, 12%, 66%)\", \"line-dasharray\": [3, 3], \"line-opacity\": {\"base\": 1, \"stops\": [[11, 0], [16, 1]]}} }, {\"id\": \"building\",\"type\": \"fill\",\"source\": \"openmaptiles\",\"source-layer\": \"building\",\"layout\": {\"visibility\": \"none\"},\"paint\": { \"fill-antialias\": true, \"fill-color\": \"rgba(222, 211, 190, 1)\", \"fill-opacity\": {\"base\": 1, \"stops\": [[13, 0], [15, 1]]}, \"fill-outline-color\": { \"stops\": [[15, \"rgba(212, 177, 146, 0)\"],[16, \"rgba(212, 177, 146, 0.5)\"] ] }} }, {\"id\": \"housenumber\",\"type\": \"symbol\",\"source\": \"openmaptiles\",\"source-layer\": \"housenumber\",\"minzoom\": 17,\"filter\": [\"==\", \"$type\", \"Point\"],\"layout\": { \"text-field\": \"{housenumber}\", \"text-font\": [\"Noto Sans Regular\"], \"text-size\": 10, \"visibility\": \"none\"},\"paint\": {\"text-color\": \"rgba(212, 177, 146, 1)\"} }, {\"id\": \"road_area_pier\",\"type\": \"fill\",\"metadata\": {},\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [\"all\", [\"==\", \"$type\", \"Polygon\"], [\"==\", \"class\", \"pier\"]],\"layout\": {\"visibility\": \"none\"},\"paint\": {\"fill-color\": \"hsl(47, 26%, 88%)\", \"fill-antialias\": true} }, {\"id\": \"road_pier\",\"type\": \"line\",\"metadata\": {},\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [\"all\", [\"==\", \"$type\", \"LineString\"], [\"in\", \"class\", \"pier\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(47, 26%, 88%)\", \"line-width\": {\"base\": 1.2, \"stops\": [[15, 1], [17, 4]]}} }, {\"id\": \"road_bridge_area\",\"type\": \"fill\",\"source\": 
\"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"Polygon\"], [\"in\", \"brunnel\", \"bridge\"]],\"layout\": {\"visibility\": \"none\"},\"paint\": {\"fill-color\": \"hsl(47, 26%, 88%)\", \"fill-opacity\": 0.5} }, {\"id\": \"road_path\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"in\", \"class\", \"path\", \"track\"]],\"layout\": { \"line-cap\": \"square\", \"line-join\": \"bevel\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(0, 0%, 97%)\", \"line-dasharray\": [1, 1], \"line-width\": {\"base\": 1.55, \"stops\": [[4, 0.25], [20, 10]]}} }, {\"id\": \"road_minor\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"minzoom\": 13,\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"in\", \"class\", \"minor\", \"service\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(0, 0%, 97%)\", \"line-width\": {\"base\": 1.55, \"stops\": [[4, 0.25], [20, 30]]}} }, {\"id\": \"tunnel_minor\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"tunnel\"], [\"==\", \"class\", \"minor_road\"]],\"layout\": { \"line-cap\": \"butt\", \"line-join\": \"miter\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#efefef\", \"line-dasharray\": [0.36, 0.18], \"line-width\": {\"base\": 1.55, \"stops\": [[4, 0.25], [20, 30]]}} }, {\"id\": \"tunnel_major\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"tunnel\"], [\"in\", \"class\", \"primary\", \"secondary\", \"tertiary\", \"trunk\"]],\"layout\": { \"line-cap\": \"butt\", \"line-join\": \"miter\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#fff\", \"line-dasharray\": [0.28, 0.14], \"line-width\": {\"base\": 1.4, \"stops\": [[6, 0.5], [20, 30]]}} }, {\"id\": \"aeroway-area\",\"type\": \"fill\",\"metadata\": {\"mapbox:group\": \"1444849345966.4436\"},\"source\": \"openmaptiles\",\"source-layer\": \"aeroway\",\"minzoom\": 4,\"filter\": [ \"all\", [\"==\", \"$type\", \"Polygon\"], [\"in\", \"class\", \"runway\", \"taxiway\"]],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"fill-color\": \"rgba(255, 255, 255, 1)\", \"fill-opacity\": {\"base\": 1, \"stops\": [[13, 0], [14, 1]]}} }, {\"id\": \"aeroway-taxiway\",\"type\": \"line\",\"metadata\": {\"mapbox:group\": \"1444849345966.4436\"},\"source\": \"openmaptiles\",\"source-layer\": \"aeroway\",\"minzoom\": 12,\"filter\": [ \"all\", [\"in\", \"class\", \"taxiway\"], [\"==\", \"$type\", \"LineString\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"rgba(255, 255, 255, 1)\", \"line-opacity\": 1, \"line-width\": {\"base\": 1.5, \"stops\": [[12, 1], [17, 10]]}} }, {\"id\": \"aeroway-runway\",\"type\": \"line\",\"metadata\": {\"mapbox:group\": \"1444849345966.4436\"},\"source\": \"openmaptiles\",\"source-layer\": \"aeroway\",\"minzoom\": 4,\"filter\": [ \"all\", [\"in\", \"class\", \"runway\"], [\"==\", \"$type\", \"LineString\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"rgba(255, 255, 255, 1)\", \"line-opacity\": 1, 
\"line-width\": {\"base\": 1.5, \"stops\": [[11, 4], [17, 50]]}} }, {\"id\": \"road_trunk_primary\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"in\", \"class\", \"trunk\", \"primary\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#fff\", \"line-width\": {\"base\": 1.4, \"stops\": [[6, 0.5], [20, 30]]}} }, {\"id\": \"road_secondary_tertiary\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"in\", \"class\", \"secondary\", \"tertiary\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#fff\", \"line-width\": {\"base\": 1.4, \"stops\": [[6, 0.5], [20, 20]]}} }, {\"id\": \"road_major_motorway\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"class\", \"motorway\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(0, 0%, 100%)\", \"line-offset\": 0, \"line-width\": {\"base\": 1.4, \"stops\": [[8, 1], [16, 10]]}} }, {\"id\": \"railway-transit\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"class\", \"transit\"], [\"!=\", \"brunnel\", \"tunnel\"]],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(34, 12%, 66%)\", \"line-opacity\": {\"base\": 1, \"stops\": [[11, 0], [16, 1]]}} }, {\"id\": \"railway\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [\"==\", \"class\", \"rail\"],\"layout\": {\"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(34, 12%, 66%)\", \"line-opacity\": {\"base\": 1, \"stops\": [[11, 0], [16, 1]]}} }, {\"id\": \"waterway-bridge-case\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"waterway\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"bridge\"]],\"layout\": { \"line-cap\": \"butt\", \"line-join\": \"miter\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#bbbbbb\", \"line-gap-width\": {\"base\": 1.55, \"stops\": [[4, 0.25], [20, 30]]}, \"line-width\": {\"base\": 1.6, \"stops\": [[12, 0.5], [20, 10]]}} }, {\"id\": \"waterway-bridge\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"waterway\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"bridge\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(205, 56%, 73%)\", \"line-width\": {\"base\": 1.55, \"stops\": [[4, 0.25], [20, 30]]}} }, {\"id\": \"bridge_minor case\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"bridge\"], [\"==\", \"class\", \"minor_road\"]],\"layout\": { \"line-cap\": \"butt\", \"line-join\": \"miter\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#dedede\", \"line-gap-width\": {\"base\": 1.55, \"stops\": [[4, 0.25], [20, 30]]}, \"line-width\": {\"base\": 1.6, \"stops\": [[12, 0.5], [20, 10]]}} }, {\"id\": \"bridge_major case\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": 
\"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"bridge\"], [\"in\", \"class\", \"primary\", \"secondary\", \"tertiary\", \"trunk\"]],\"layout\": { \"line-cap\": \"butt\", \"line-join\": \"miter\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#dedede\", \"line-gap-width\": {\"base\": 1.55, \"stops\": [[4, 0.25], [20, 30]]}, \"line-width\": {\"base\": 1.6, \"stops\": [[12, 0.5], [20, 10]]}} }, {\"id\": \"bridge_minor\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"bridge\"], [\"==\", \"class\", \"minor_road\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#efefef\", \"line-width\": {\"base\": 1.55, \"stops\": [[4, 0.25], [20, 30]]}} }, {\"id\": \"bridge_major\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation\",\"filter\": [ \"all\", [\"==\", \"$type\", \"LineString\"], [\"==\", \"brunnel\", \"bridge\"], [\"in\", \"class\", \"primary\", \"secondary\", \"tertiary\", \"trunk\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"#fff\", \"line-width\": {\"base\": 1.4, \"stops\": [[6, 0.5], [20, 30]]}} }, {\"id\": \"admin_sub\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"boundary\",\"filter\": [\"in\", \"admin_level\", 4, 6, 8],\"layout\": {\"visibility\": \"none\"},\"paint\": {\"line-color\": \"hsla(0, 0%, 60%, 0.5)\", \"line-dasharray\": [2, 1]} }, {\"id\": \"admin_country\",\"type\": \"line\",\"source\": \"openmaptiles\",\"source-layer\": \"boundary\",\"filter\": [ \"all\", [\"<=\", \"admin_level\", 2], [\"==\", \"$type\", \"LineString\"]],\"layout\": { \"line-cap\": \"round\", \"line-join\": \"round\", \"visibility\": \"none\"},\"paint\": { \"line-color\": \"hsl(0, 0%, 60%)\", \"line-width\": {\"base\": 1.3, \"stops\": [[3, 0.5], [22, 15]]}} }, {\"id\": \"poi_label\",\"type\": \"symbol\",\"source\": \"openmaptiles\",\"source-layer\": \"poi\",\"minzoom\": 14,\"filter\": [\"all\", [\"==\", \"$type\", \"Point\"], [\"==\", \"rank\", 1]],\"layout\": { \"icon-size\": 1, \"text-anchor\": \"top\", \"text-field\": \"{name:latin}\\n{name:nonlatin}\", \"text-font\": [\"Noto Sans Regular\"], \"text-max-width\": 8, \"text-offset\": [0, 0.5], \"text-size\": 11, \"visibility\": \"none\"},\"paint\": { \"text-color\": \"#666\", \"text-halo-blur\": 1, \"text-halo-color\": \"rgba(255,255,255,0.75)\", \"text-halo-width\": 1} }, {\"id\": \"airport-label\",\"type\": \"symbol\",\"source\": \"openmaptiles\",\"source-layer\": \"aerodrome_label\",\"minzoom\": 10,\"filter\": [\"all\", [\"has\", \"iata\"]],\"layout\": { \"icon-size\": 1, \"text-anchor\": \"top\", \"text-field\": \"{name:latin}\\n{name:nonlatin}\", \"text-font\": [\"Noto Sans Regular\"], \"text-max-width\": 8, \"text-offset\": [0, 0.5], \"text-size\": 11, \"visibility\": \"none\"},\"paint\": { \"text-color\": \"#666\", \"text-halo-blur\": 1, \"text-halo-color\": \"rgba(255,255,255,0.75)\", \"text-halo-width\": 1} }, {\"id\": \"road_major_label\",\"type\": \"symbol\",\"source\": \"openmaptiles\",\"source-layer\": \"transportation_name\",\"filter\": [\"==\", \"$type\", \"LineString\"],\"layout\": { \"symbol-placement\": \"line\", \"text-field\": \"{name:latin} {name:nonlatin}\", \"text-font\": [\"Noto Sans Regular\"], \"text-letter-spacing\": 0.1, 
\"text-rotation-alignment\": \"map\", \"text-size\": {\"base\": 1.4, \"stops\": [[10, 8], [20, 14]]}, \"text-transform\": \"uppercase\", \"visibility\": \"none\"},\"paint\": { \"text-color\": \"#000\", \"text-halo-color\": \"hsl(0, 0%, 100%)\", \"text-halo-width\": 2} }, {\"id\": \"place_label_other\",\"type\": \"symbol\",\"source\": \"openmaptiles\",\"source-layer\": \"place\",\"minzoom\": 8,\"filter\": [ \"all\", [\"==\", \"$type\", \"Point\"], [\"!in\", \"class\", \"city\", \"state\", \"country\", \"continent\"]],\"layout\": { \"text-anchor\": \"center\", \"text-field\": \"{name:latin}\\n{name:nonlatin}\", \"text-font\": [\"Noto Sans Regular\"], \"text-max-width\": 6, \"text-size\": {\"stops\": [[6, 10], [12, 14]]}, \"visibility\": \"none\"},\"paint\": { \"text-color\": \"hsl(0, 0%, 25%)\", \"text-halo-blur\": 0, \"text-halo-color\": \"hsl(0, 0%, 100%)\", \"text-halo-width\": 2} }, {\"id\": \"place_label_city\",\"type\": \"symbol\",\"source\": \"openmaptiles\",\"source-layer\": \"place\",\"maxzoom\": 16,\"filter\": [\"all\", [\"==\", \"$type\", \"Point\"], [\"==\", \"class\", \"city\"]],\"layout\": { \"text-field\": \"{name:latin}\\n{name:nonlatin}\", \"text-font\": [\"Noto Sans Regular\"], \"text-max-width\": 10, \"text-size\": {\"stops\": [[3, 12], [8, 16]]}, \"visibility\": \"none\"},\"paint\": { \"text-color\": \"hsl(0, 0%, 0%)\", \"text-halo-blur\": 0, \"text-halo-color\": \"hsla(0, 0%, 100%, 0.75)\", \"text-halo-width\": 2} }, {\"id\": \"country_label-other\",\"type\": \"symbol\",\"source\": \"openmaptiles\",\"source-layer\": \"place\",\"maxzoom\": 12,\"filter\": [ \"all\", [\"==\", \"$type\", \"Point\"], [\"==\", \"class\", \"country\"], [\"!has\", \"iso_a2\"]],\"layout\": { \"text-field\": \"{name:latin}\", \"text-font\": [\"Noto Sans Regular\"], \"text-max-width\": 10, \"text-size\": {\"stops\": [[3, 12], [8, 22]]}, \"visibility\": \"none\"},\"paint\": { \"text-color\": \"hsl(0, 0%, 13%)\", \"text-halo-blur\": 0, \"text-halo-color\": \"rgba(255,255,255,0.75)\", \"text-halo-width\": 2} }, {\"id\": \"country_label\",\"type\": \"symbol\",\"source\": \"openmaptiles\",\"source-layer\": \"place\",\"maxzoom\": 12,\"filter\": [ \"all\", [\"==\", \"$type\", \"Point\"], [\"==\", \"class\", \"country\"], [\"has\", \"iso_a2\"]],\"layout\": { \"text-field\": \"{name:latin}\", \"text-font\": [\"Noto Sans Bold\"], \"text-max-width\": 10, \"text-size\": {\"stops\": [[3, 12], [8, 22]]}, \"visibility\": \"none\"},\"paint\": { \"text-color\": \"hsl(0, 0%, 13%)\", \"text-halo-blur\": 0, \"text-halo-color\": \"rgba(255,255,255,0.75)\", \"text-halo-width\": 2} } ], \"id\": \"basic\"}"
metadata:
name: map-tile-server-style-config
namespace: test
</code></pre>
| stevetronix | <p>I've looked a little deeper into your environment and deployed it on my local cluster, here are my findings:</p>
<ul>
<li>Look where the files end up being mounted when I deploy your yaml as is in your question:</li>
</ul>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
map-tile-server-5c86b677df-wswwz 1/1 Running 0 2s
$ kubectl exec -it map-tile-server-5c86b677df-wswwz -- /bin/bash
root@map-tile-server-5c86b677df-wswwz:/# ls -l
drwxrwxrwx 3 root root 4096 Apr 10 12:16 config.json
drwxr-xr-x 3 root root 4096 Apr 10 12:16 styles
root@map-tile-server-5c86b677df-wswwz:/config.json# ls -l
lrwxrwxrwx 1 root root 18 Apr 10 12:16 config.json -> ..data/config.json
root@map-tile-server-5c86b677df-wswwz:/styles# ls -l
drwxrwxrwx 3 root root 4096 Apr 10 12:16 custom.json
root@map-tile-server-5c86b677df-wswwz:/styles# cd custom.json/
root@map-tile-server-5c86b677df-wswwz:/styles/custom.json# ls -l
lrwxrwxrwx 1 root root 18 Apr 10 12:16 custom.json -> ..data/custom.json
</code></pre>
<ul>
<li><p>It is creating folders with the file names and inserting your configmaps inside each one.</p></li>
<li><p>In your question you mention you want these files to be placed in <code>/config.json</code> and <code>/styles/custom.json</code>, so I'd like to point out two things:</p>
<ul>
<li><p>You can't mount <code>config.json</code> directly in <code>/</code>; the container will not run because you would "overwrite" the <code>/</code> folder. So your config.json needs to stay inside some folder, and the best practice is to mount it directly in the <code>/data</code> dir.</p></li>
<li><p>Although there is a symlink directing the <code>/styles/custom.json/custom.json</code> file to <code>..data/custom.json</code>, it is worth warning you that:</p></li>
</ul>
<p><strong>if you mention the path <code>/styles/custom.json</code> in your <code>config.json</code> it will not find the file there</strong>.</p></li>
</ul>
<p>Here you can see an example provided in Kubernetes Documentation on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-specific-path-in-the-volume" rel="nofollow noreferrer">How to Mount ConfigMaps as Files</a></p>
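<p>As a side note, if the application really does require a single file at an exact path such as <code>/styles/custom.json</code>, a <code>subPath</code> mount can project just one ConfigMap key to that path instead of a whole directory. This is only a minimal sketch based on your volume names, and keep in mind that <code>subPath</code> mounts do not receive automatic updates when the ConfigMap changes:</p>
<pre><code> volumeMounts:
 - name: map-tile-server-style-config
   mountPath: /styles/custom.json   # exact file path inside the container
   subPath: custom.json             # key inside the ConfigMap
</code></pre>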
<ul>
<li>These are the changes I made to your yaml to move the <code>custom.json</code> file to the intended path:</li>
</ul>
<pre><code> volumeMounts:
- name: map-tile-server-config
mountPath: /config.json
- name: map-tile-server-style-config
mountPath: /styles
volumes:
- name: map-tile-server-config
configMap:
name: map-tile-server-config
- name: map-tile-server-style-config
configMap:
name: map-tile-server-style-config
</code></pre>
<ul>
<li>Now let's test it:</li>
</ul>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
map-tile-server-5cd7694b74-s6g6g 1/1 Running 0 8s
$ kubectl exec -it map-tile-server-5cd7694b74-s6g6g -- /bin/bash
root@map-tile-server-5cd7694b74-s6g6g:/# ls
bin boot config.json data dev etc home lib lib64 media mnt opt proc root run sbin srv styles sys tmp usr var
root@map-tile-server-5cd7694b74-s6g6g:/config.json# ls -l
lrwxrwxrwx 1 root root 18 Apr 10 12:27 config.json -> ..data/config.json
root@map-tile-server-5cd7694b74-s6g6g:/styles# ls -l
lrwxrwxrwx 1 root root 18 Apr 10 12:27 custom.json -> ..data/custom.json
</code></pre>
<p>Now the files are in the intended location.</p>
<ul>
<li>My suggestion, to ease your management later, would be to mount for example to:</li>
</ul>
<pre><code> volumeMounts:
- name: map-tile-server-config
mountPath: /data/config
- name: map-tile-server-style-config
mountPath: /data/styles
</code></pre>
<ul>
<li>This would place everything inside the <code>/data</code> folder:</li>
</ul>
<pre><code>owilliam@minikube-usc1a:~/CaseFiles/configmap-json$ k exec -it map-tile-server-6b5fc64fd6-6g2wb -- /bin/bash
root@map-tile-server-6b5fc64fd6-6g2wb:/data# ls
config styles zurich_switzerland.mbtiles
root@map-tile-server-6b5fc64fd6-6g2wb:/data# ls -l
total 23684
drwxrwxrwx 3 root root 4096 Apr 10 13:22 config
drwxrwxrwx 3 root root 4096 Apr 10 13:22 styles
-rw-r--r-- 1 root root 24244224 Apr 10 13:22 zurich_switzerland.mbtiles
root@map-tile-server-6b5fc64fd6-6g2wb:/data# ls config
config.json
root@map-tile-server-6b5fc64fd6-6g2wb:/data# ls styles
</code></pre>
<p>If you have any questions, let me know in the comments.</p>
| Will R.O.F. |
<p>Let's say I have an image <code>foo</code> with tag <code>v1</code>.
So I deployed it on Kubernetes by <code>foo:v1</code>.</p>
<p>However, for some reason(e.g. monoversion in monorepo), I pushed the exact same image to container registry with tag <code>v2</code>.
And I changed k8s manifest to <code>foo:v2</code>.</p>
<p>In this situation, I want to update the pod only when the <code>image digest</code> of <code>v1</code> and that of <code>v2</code> differ. So in the case of foo, the digests are the same, and the container running <code>foo:v1</code> should keep running.</p>
<p>Is this possible? If so, how?</p>
<p>Thanks</p>
| jjangga | <p>There is no way to update the image <code>tag</code> without restarting the pod.
The only way to make this work is to use the image <code>digest</code> explicitly instead of tags. </p>
<p>So now the image spec would look like this: </p>
<pre><code>spec:
image: foo@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566
</code></pre>
<p>This way your image does not depend on tags. Digests can be found either on <code>dockerhub</code> or by running the command <code>docker images --digests <image-name></code></p>
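<p>For completeness, here is a sketch of how you might look up a digest before updating the manifest; <code>foo:v1</code> and the pod name are placeholders taken from this example:</p>
<pre><code># digest of an image that has already been pulled locally
docker inspect --format='{{index .RepoDigests 0}}' foo:v1

# digest actually running inside a pod
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].imageID}'
</code></pre>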
| acid_fuji |
<p>I'm doing research on how to run a Spring Batch job on RedHat OpenShift as a Kubernetes Scheduled Job.
The steps I have done so far:</p>
<p>1) Created a sample Spring Batch app that reads a .csv file that does simple processing and puts some data into in-memory h2 DB. The job launcher is called upon as a REST endpoint (<strong>/load</strong>). The source code can be found <a href="https://github.com/samme4life/spring-batch-example" rel="nofollow noreferrer">here</a>. Please see the README file for the endpoint info.</p>
<p>2) Created the Docker Image and pushed into <a href="https://hub.docker.com/repository/docker/samme4life/spring-batch/tags?page=1" rel="nofollow noreferrer">DockerHub</a></p>
<p>3) Deployed using that image to my OpenShift Online cluster as an <a href="http://spring-batch-example-my-first-project.4b63.pro-ap-southeast-2.openshiftapps.com/health" rel="nofollow noreferrer">app</a></p>
<p>What I want to do is,</p>
<p>Run a Kubernetes Cron Job from OpenShift to call <strong>/load</strong> REST endpoint which launches the SpringBatch job periodically</p>
<p>Can someone please guide me here on how can I achieve this?</p>
<p>Thank you</p>
<p>Samme</p>
| Samme | <p>The easiest way would be to curl your <code>/load</code> REST endpoint.
Here's a way to do that: </p>
<p><strong>The Pod definition that I used as replacement for you application (for testing purposes):</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: mendhak/http-https-echo
</code></pre>
<p>I used this image because it sends various HTTP request properties back to the client.</p>
<p><strong>Create a service for pod</strong>: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
selector:
    app: myapp
ports:
- protocol: TCP
port: 80 #Port that service is available on
targetPort: 80 #Port that app listens on
</code></pre>
<p><strong>Create a CronJob:</strong></p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: curljob
spec:
jobTemplate:
metadata:
name: curljob
spec:
template:
metadata:
spec:
containers:
- command:
- curl
- http://myapp-service:80/load
image: curlimages/curl
imagePullPolicy: Always
name: curljobt
restartPolicy: OnFailure
schedule: '*/1 * * * *'
</code></pre>
<p><strong>Alternatively you can create it with a single command:</strong> </p>
<pre><code>kubectl create cronjob --image curlimages/curl curljob -oyaml --schedule "*/1 * * * *" -- curl http://myapp-service:80/load
</code></pre>
<p>Here <code>"*/1 * * * *"</code> specifies how often this <code>CronJob</code> runs. I've set it up to run every minute.
You can see more about how to setup cron job <a href="http://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs" rel="nofollow noreferrer">here</a> and <a href="https://medium.com/swlh/the-power-of-kubernetes-cron-jobs-d7f550958de8" rel="nofollow noreferrer">here</a></p>
<p><strong>Here is the result of the kubectl logs from one of the job's pods:</strong> </p>
<pre><code>{
"path": "/load",
"headers": {
"host": "myapp-service",
"user-agent": "curl/7.68.0-DEV",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "myapp-service",
"ip": "::ffff:192.168.197.19",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "myapp-pod"
</code></pre>
<p>As you can see, the application receives a <code>GET</code> request with the path <code>/load</code>. </p>
<p>Let me know if that helps. </p>
| acid_fuji |
<p>I am new to Kubernetes. I was going through some tutorials related to Kubernetes deployment. I am seeing two different commands which look like they do similar things.</p>
<ol>
<li><p>The below command is from google code lab (URL: <a href="https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7" rel="noreferrer">https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7</a> )</p>
<p><code>$ kubectl create service loadbalancer hello-java --tcp=8080:8080</code></p>
</li>
<li><p>Another command is being seen in a different place along with the Kubernetes site (<a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="noreferrer">https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/</a>)</p>
</li>
</ol>
<p><code>$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service</code></p>
<br/>
Now, as per my understanding, both commands create services from deployments with a load balancer and expose them to the outside world.
<p>I don't think there will be two separate commands for the same task. There should be some difference that I am not able to understand.</p>
<p>Would anyone please clarify this to me?</p>
| Monaj | <p>There are cases where the <code>expose</code> command is not sufficient & your only practical option is to use <code>create service</code>.</p>
<p>Overall there are 4 different types of Kubernetes services; for some it really doesn't matter whether you use expose or create, while for others it matters very much.</p>
<p>The types of Kubernetes services are:</p>
<ul>
<li>ClusterIP</li>
<li>NodePort</li>
<li>LoadBalancer</li>
<li>ExternalName</li>
</ul>
<p>So for example in the case of the <strong>NodePort</strong> type service let's say we wanted to set a node port with value <strong>31888</strong> :</p>
<ul>
<li><p><strong>Example 1:</strong>
In the following command there is no argument for the node port value; the expose command assigns one automatically:</p>
<p><code>kubectl expose deployment demo --name=demo --type=NodePort --port=8080 --target-port=80</code></p>
</li>
</ul>
<p>The only way to set the node port value is to edit the service after it has been created, using the edit command: <code>kubectl edit service demo</code></p>
<ul>
<li><p><strong>Example 2:</strong>
In this example the <code>create service nodeport</code> command is dedicated to creating the NodePort type and has arguments that let us control the node port value:</p>
<p><code>kubectl create service nodeport demo --tcp=8080:80 --node-port=31888</code></p>
</li>
</ul>
<p>In Example 2 the node port value is set on the command line, so there is no need to manually edit the value as in the case of Example 1.</p>
<p><strong>Important</strong> :</p>
<p>The <code>create service [service-name]</code> command does not have an option to set the service's selector, so the service won't automatically connect to existing pods.</p>
<p>To set the selector labels to target specific pods you will need to follow up the <code>create service [service-name]</code> with the <code>set selector</code> command:</p>
<p><code>kubectl set selector service [NAME] [key1]=[value1]</code></p>
<p>So for the Example 2 case above, if you want the service to work with a deployment whose pods are labeled <code>myapp: hello</code> then this is the follow-up command needed:</p>
<pre><code>kubectl set selector service demo myapp=hello
</code></pre>
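<p>After setting the selector, a quick way to confirm the service actually picked up the pods is to check its endpoints (shown here with the same <code>demo</code> example names):</p>
<pre><code># endpoints should list the pod IPs once the selector matches
kubectl get endpoints demo
# compare with the pods carrying that label
kubectl get pods -l myapp=hello -o wide
</code></pre>
<p>If the endpoints list is empty, the selector and the pod labels do not match.</p>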
| Yariv |
<p>I'm approaching k8s volumes and best practices and I've noticed that when reading documentation it seems that you always need to use <strong>StatefulSet</strong> resource if you want to implement persistency in your cluster:</p>
<blockquote>
<p>"StatefulSet is the workload API object used to manage stateful
applications."</p>
</blockquote>
<p>I've implemented some tutorials, <strong>some of them use StatefulSet, some others don't</strong>.</p>
<p>In fact, let's say I want to persist some data: I can have my stateless Pods (even MySql server pods!) use a PersistentVolumeClaim which persists the state. If I stop and rerun the cluster, I can resume the state from the Volume with no need for a StatefulSet.</p>
<p>I attach here an example GitHub repo in which there is a stateful app with MySql and no StatefulSet at all:
<a href="https://github.com/shri-kanth/kuberenetes-demo-manifests" rel="nofollow noreferrer">https://github.com/shri-kanth/kuberenetes-demo-manifests</a></p>
<p>So <strong>do I really need to use a StatefulSet</strong> resource for databases in k8s? Or are there some specific cases it could be a necessary practice?</p>
| Alessandro Argentieri | <p>PVCs are not the only reason to use StatefulSets over Deployments.
As the Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#using-statefulsets" rel="nofollow noreferrer">manual</a> states:</p>
<p>StatefulSets are valuable for applications that require one or more of the following:</p>
<ul>
<li>Stable, unique network identifiers.</li>
<li>Stable, persistent storage.</li>
<li>Ordered, graceful deployment and scaling.</li>
<li>Ordered, automated rolling updates.</li>
</ul>
<p>You can read more about database considerations when deployed on Kubernetes here <a href="https://cloud.google.com/blog/products/databases/to-run-or-not-to-run-a-database-on-kubernetes-what-to-consider" rel="nofollow noreferrer">To run or not to run a database on Kubernetes</a></p>
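<p>For illustration only, here is a minimal sketch of a StatefulSet (image, credentials and sizes are placeholders) showing the two things a plain Deployment with a single PVC cannot give you: stable pod names (<code>mysql-0</code>, <code>mysql-1</code>) and one dedicated PVC per replica via <code>volumeClaimTemplates</code>:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql          # headless Service providing the stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: example      # placeholder only
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:       # one PVC per replica: data-mysql-0, data-mysql-1
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
</code></pre>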
| Dragan Bocevski |
<p>After <a href="https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/prometheus/README.md" rel="nofollow noreferrer">getting Prometheus up for a gke cluster</a>, I ran the step to add an external ip address for Grafana:</p>
<pre><code>kubectl patch svc "prometheus-1-grafana" --namespace "cluster-1" \
-p '{"spec": {"type": "LoadBalancer"}}'
</code></pre>
<p>but now I no longer want Grafana to be available via an external IP.</p>
<p>I've tried running with <code>-p '{"spec": {"type": "ClusterIP"}}'</code> but I just get the error:</p>
<pre><code>The Service "prometheus-1-prometheus" is invalid:
spec.ports[0].nodePort: Forbidden: may not be used when
`type` is 'ClusterIP'
</code></pre>
<p>How do I do the above <code>kubectl patch svc</code> command to remove the external ip?</p>
| Silfheed | <p>When you change the service to <code>LoadBalancer</code>, a <code>NodePort</code> is assigned to the service.</p>
<p>In order to return to <code>ClusterIP</code> you need to also remove the <code>NodePort</code>.</p>
<ul>
<li>Using <code>kubectl patch</code> we will set the <code>NodePort</code> to <code>NULL</code>, here is the command:</li>
</ul>
<pre><code>kubectl patch svc "prometheus-1-grafana" --namespace "cluster-1" --type="merge" \
-p '{"spec":{"ports":[{"nodePort":null,"port":<PORT_NUMBER>}],"type":"ClusterIP"}}'
</code></pre>
<p><strong>Note:</strong> Kubernetes will not allow you to set the <code>nodePort</code> to <code>null</code> alone, because the <code>Port</code> field is required; make sure to check and set the correct port. I'm using an HTTP server as an example.</p>
<ul>
<li>Optionally, you can create a <code>patch.yaml</code>:</li>
</ul>
<pre><code>spec:
ports:
- port: <PORT_NUMBER>
protocol: TCP
targetPort: <TARGET_PORT_NUMBER>
type: ClusterIP
</code></pre>
<p>and Apply it:</p>
<pre><code>kubectl patch svc "prometheus-1-grafana" --namespace "cluster-1" \
--type="merge" --patch "$(cat patch.yaml)"
</code></pre>
<hr>
<p><strong>Reproduction:</strong></p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc ClusterIP 10.0.13.9 <none> 80/TCP 65m
$ kubectl patch svc "echo-svc" -p '{"spec": {"type": "LoadBalancer"}}'
service/echo-svc patched
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc LoadBalancer 10.0.13.9 <pending> 80:32021/TCP 65m
$ kubectl patch svc "echo-svc" --type="merge" -p '{"spec":{"ports":[{"nodePort":null,"port":80}],"type":"ClusterIP"}}'
service/echo-svc patched
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc ClusterIP 10.0.13.9 <none> 80/TCP 66m
$ kubectl patch svc "echo-svc" -p '{"spec": {"type": "LoadBalancer"}}'
service/echo-svc patched
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc LoadBalancer 10.0.13.9 35.223.145.193 80:30394/TCP 66m
$ cat patch.yaml
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8080
type: ClusterIP
$ kubectl patch svc "echo-svc" --type="merge" --patch "$(cat patch.yaml)"
service/echo-svc patched
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc ClusterIP 10.0.13.9 <none> 80/TCP 66m
</code></pre>
<p><strong>References:</strong></p>
<ul>
<li><a href="http://jsonpatch.com/" rel="nofollow noreferrer">Json Patch</a></li>
<li><a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="nofollow noreferrer">Update API Object with Kubectl Patch</a></li>
</ul>
| Will R.O.F. |
<p>We have a Kubernetes cluster running on GKE, using its own VPC created for this with a subnet of <code>10.184.0.0/20</code>. This cluster has a workload that has been assigned an external load balancer for public access, along with an internal cluster IP for internal communication. The subnet of the services is <code>10.0.0.0/20</code>.</p>
<p>There is a google cloud Classic VPN setup on the same VPC to be able to access the private network.</p>
<p>We have another system hosted on-premise that is connecting via the above VPN using a tunnel. The on-premise network can ping the Nodes in the VPC via their private IPs on the subnet <code>10.184.0.0/20</code>, but is unable to ping / telnet to the cluster IP which is on the subnet <code>10.0.0.0/20</code>.</p>
<p>Is this possible to achieve?</p>
| zulugraffi | <p>This is indeed possible. Since your tunnel is already up and you can ping your nodes, my guess is that you are unable to reach the pod and service ranges from your on-prem application, meaning that you are only advertising the main 10.184.0.0/20 CIDR but not the secondaries, am I right?</p>
<p>You can easily check that by running a <a href="https://cloud.google.com/network-intelligence-center/docs/connectivity-tests/how-to/running-connectivity-tests" rel="nofollow noreferrer">connectivity test</a>, it will simulate traffic between source-destination (in this case source is an IP from your on-prem network and the destination should be your Service IP) taking into consideration several products (firewall rules, VPC peering, routes, VPN tunnels, etc) and will let you know if there is something wrong/missing in your environment.</p>
<p>If you are missing those ranges in your VPN configuration, you will need to <a href="https://cloud.google.com/network-connectivity/docs/vpn/how-to/creating-static-vpns" rel="nofollow noreferrer">re-create it</a> and be sure to add the secondary ranges in the traffic selectors (or use a wide 0.0.0.0/0 CIDR).</p>
<p>Finally, remember that you need to expose your applications using <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps" rel="nofollow noreferrer">services</a> (Cluster IP, NodePort, Load Balancer) and test again from your on-premises network.</p>
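<p>As an illustration of that last point, here is a sketch of an internal LoadBalancer Service that would make the workload reachable from the on-premises network over the VPN; the service name, selector label and ports are placeholders, and older GKE versions use the <code>cloud.google.com/load-balancer-type: "Internal"</code> annotation instead:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-internal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # internal TCP/UDP load balancer
spec:
  type: LoadBalancer
  selector:
    app: my-app        # placeholder label for your workload's pods
  ports:
  - port: 80
    targetPort: 8080
</code></pre>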
| alejandrooc |
<p>Mongock looks very promising. We want to use it inside a kubernetes service that has multiple replicas that run in parallel.</p>
<p>We are hoping that when our service is deployed, the first replica will acquire the mongockLock and all of its ChangeLogs/ChangeSets will be completed before the other replicas attempt to run them.</p>
<p>We have a single instance of mongodb running in our kubernetes environment, and we want the mongock ChangeLogs/ChangeSets to execute only once.</p>
<p>Will the mongockLock guarantee that only one replica will run the ChangeLogs/ChangeSets to completion?</p>
<p>Or do I need to enable transactions (or some other configuration)?</p>
| paul-pauses-to-wonder | <p>I am going to provide the short answer first and then the long one. I suggest you to read the long one too in order to understand it properly.</p>
<h2>Short answer</h2>
<p>By default, Mongock guarantees that the ChangeLogs/changeSets will be run only by one pod at a time. The one owning the lock.</p>
<h2>Long answer</h2>
<p>What really happens behind the scenes (if it's not configured otherwise) is that when a pod takes the lock, the others will try to acquire it too, but they can't, so they are forced to wait for a while (configurable, 4 minutes by default) as many times as the lock retry is configured (3 times by default). After this, if a pod is still not able to acquire the lock and there are still pending changes to apply, Mongock will throw a MongockException, which should mean the JVM startup fails (which is what happens by default in Spring).</p>
<p>This is fine in Kubernetes, because it ensures it will restart the pods.
So now, assuming the pods start again and changeLogs/changeSets are already applied, the pods start successfully because they don't even need to acquire the lock as there aren't pending changes to apply.</p>
<h2>Potential problem with MongoDB without transaction support and Frameworks like Spring</h2>
<p>Now, assuming the lock and the mutual exclusion is clear, I'd like to point out a potential issue that needs to be mitigated by the the changeLog/changeSet design.</p>
<p>This issue applies if you are in an environment such as Kubernetes, which has a pod initialisation time, your migration takes longer than that initialisation time, and the Mongock process is executed before the pod becomes ready/healthy (which is a condition for it). This last condition is highly desirable as it ensures the application runs with the right version of the data.</p>
<p>In this situation imagine the Pod starts the Mongock process. After the Kubernetes initialisation time, the process is still not finished, but Kubernetes stops the JVM abruptly. This means that some changeSets were successfully executed, some others not even started (no problem, they will be processed in the next attempt), but one changeSet was partially executed and marked as not done. This is the potential issue. The next time Mongock runs, it will see the changeSet as pending and it will execute it from the beginning. If you haven't designed your changeLogs/changeSets accordingly, you may experience some unexpected results because some part of the data process covered by that changeSet has already taken place and it will happen again.</p>
<p>This, somehow needs to be mitigated. Either with the help of mechanisms like transactions, with a changeLog/changeSet design that takes this into account or both.</p>
<p>Mongock currently provides transactions with “all or nothing”, but it doesn’t really help much as it will retry every time from scratch and will probably end up in an infinite loop. The next version 5 will provide transactions per ChangeLogs and changeSets, which together with good organisation, is the right solution for this.</p>
<p>Meanwhile this issue can be addressed by following <a href="https://github.com/cloudyrock/mongock/blob/master/community/README_V3.md#mongo-transaction-limitations" rel="noreferrer">this design suggestions</a>.</p>
| Mongock team |
<p>I tried creating a node group inside the EKS cluster. After creating the node group, under add-ons, the CoreDNS option displays as degraded. I tried all the possibilities found on Google but am unable to resolve this. Can someone help with this?</p>
| Naveen Kumar | <p>If you are using AWS EKS with Fargate, then you have to add the labels while creating the coredns Fargate profile; this way coredns will be able to find nodes for its deployment and start working.
Here are the steps:</p>
<ol>
<li><p>Let's get the pods available in the kube-system namespace.</p>
<p>kubectl get pods -n kube-system</p>
</li>
</ol>
<p>We can see that the coredns pods are stuck in a pending state.
<a href="https://i.stack.imgur.com/LjEGO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LjEGO.png" alt="Stuck in Pending state" /></a></p>
<ol start="2">
<li><p>Let's check why they are stuck in a pending state (because there are no nodes available on the AWS EKS cluster to deploy coredns pods).</p>
<p>kubectl describe pods [pods_name] -n kube-system</p>
</li>
</ol>
<p><a href="https://i.stack.imgur.com/1MDVK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1MDVK.png" alt="Issue found" /></a></p>
<p>And if we scroll a bit up, we will be able to find the labels section of the particular pod.</p>
<p><a href="https://i.stack.imgur.com/G6ppN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G6ppN.png" alt="Get the labels information" /></a></p>
<ol start="3">
<li>Now we will create a new Fargate profile and include the Label "<strong>k8s-app</strong>=<strong>kube-dns</strong>", so that the coredns Fargate profile can identify the particular Pods to deploy.
<a href="https://i.stack.imgur.com/aZE4k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aZE4k.png" alt="Add the Labels to the coredns fargate profile" /></a></li>
</ol>
<p><a href="https://i.stack.imgur.com/cJawe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cJawe.png" alt="coredns fargate profile is now added" /></a></p>
<ol start="4">
<li><p>Now we will patch the coredns deployment using the following command:</p>
<p>kubectl patch deployment coredns <br />
-n kube-system <br />
--type json <br />
-p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'</p>
</li>
</ol>
<p><a href="https://i.stack.imgur.com/vU3Cx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vU3Cx.png" alt="Coredns patched successfully" /></a></p>
<p>And we can see that our coredns pods have started and are running successfully.
<a href="https://i.stack.imgur.com/oZlef.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oZlef.png" alt="Coredns pods working perfectly" /></a></p>
<ol start="5">
<li><p>We can also re-create the existing pods using the following command:</p>
<p>kubectl rollout restart -n kube-system deployment coredns
<a href="https://i.stack.imgur.com/A76oc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A76oc.png" alt="coredns restarted" /></a></p>
</li>
</ol>
<p>And our CoreDNS add-on, which was in a degraded state before, is now healthy and Active.
<a href="https://i.stack.imgur.com/yxYsg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yxYsg.png" alt="Coredns addon is active now" /></a></p>
<p>You are now ready to rock with k8s on AWS. Best of Luck.</p>
| Desh Deepak Dhobi |
<p>I am super new to Kubernetes. I have inherited a side project - really an in-progress POC - from another developer who recently left the team. He did a demo from a VM that we still have access to before he abruptly left. After he left we were able to go through his demo and things were working. One of the team members restarted the VM and now things are broken. I've been assigned to figure things out. I've been able to bring all the components back up aside from the Kubernetes part, which all stack traces point to as being the issue at the moment.</p>
<p>As mentioned I am new to Kubernetes, so I lack the vocabulary to do proper searches online.</p>
<p>I have run a few commands and pasted their output below. If I understand correctly, the issue is with the k8s deployment not running:
kubectl get all</p>
<pre><code>NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                         AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
service/app-service-5x7z NodePort 10.96.215.11 <none> 3000:32155/TCP,3001:32762/TCP,27017:30770/TCP 3d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app-deployment-5x7z 0/1 1 0 3d
NAME DESIRED CURRENT READY AGE
replicaset.apps/app-deployment-5x7z 1 1 0 3d
</code></pre>
<p>I'm guessing that the issue is with the fact that the READY state is <strong>0/1</strong></p>
<p>Can someone please guide me as to how I can bring this guy back up? Also, I see a lot of heavy documentation online; is there a place with a shallow bank where I can wade into the world of Kubernetes? I'm very excited about this opportunity, but it just hasn't been a smooth start.</p>
| Handsome Wayfarer | <p>Writing down my two cents on your solution:</p>
<ul>
<li>After you did the <code>kubeadm reset</code> and <code>kubeadm init</code> your cluster was empty.</li>
</ul>
<p><strong>1st Problem:</strong></p>
<blockquote>
<p>I applied those changes, but now when i run <code>kubectl get all</code> i only get the first line "service/kubernetes" and i no longer get anything regarding <code>app-service-5x7z</code>
any chance you could give me a hint as to how to accomplish that [getting back the deployment]?</p>
</blockquote>
<p><strong>Solution:</strong></p>
<ul>
<li>You need to find the yaml files responsible for deploying the application; the deployment is probably called <code>app-deployment</code> or something similar.</li>
</ul>
<hr>
<p><strong>2nd Issue:</strong></p>
<blockquote>
<p>Here's my current situation. Since I have no idea what this guy used, i built the docker images, and i updated the service and deployment yaml files to use them. I then ran <code>kubectl apply -f <yaml_folder></code> which succeeds, but when i run <code>kubectl get pods --watch</code> i see the following: <a href="https://justpaste.it/3p9r1" rel="nofollow noreferrer">justpaste.it/3p9r1</a> any suggestions how i could debug and get to the root cause? my understanding is that it's not able to pull the docker image. but since i just created it and it's located on the same machine (not in a registry), not sure what the problem is.</p>
</blockquote>
<p><strong>Solution:</strong></p>
<p>From <a href="https://kubernetes.io/docs/concepts/containers/images/#pr%C3%A9-pulled-images" rel="nofollow noreferrer">PrePulledImages</a> Documentation:</p>
<blockquote>
<p>By default, the kubelet will try to pull each image from the specified registry. However, if the <code>imagePullPolicy</code> property of the container is set to <code>IfNotPresent</code> or <code>Never</code>, then a local image is used (preferentially or exclusively, respectively).
All pods will have read access to any pre-pulled images.</p>
</blockquote>
<p>If a docker image exists only in the local image cache of the node, you have to set <code>imagePullPolicy: Never</code> in the deployment file. Note that the image must be present in every node's local cache to ensure availability.</p>
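<p>As a sketch of where that field goes in the Deployment's pod template (the container name and image here are placeholders for whatever you built locally):</p>
<pre><code>    spec:
      containers:
      - name: app                              # hypothetical container name
        image: my-locally-built-image:latest   # image present only in the node's local cache
        imagePullPolicy: Never                 # never contact a registry, use the local image
</code></pre>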
<p>It's good also to create a <a href="https://docs.docker.com/docker-hub/repos/#private-repositories" rel="nofollow noreferrer">Docker Hub Private Repository</a> to ensure availability and integrity.</p>
| Will R.O.F. |
<p>I'm using kubernetes/ingress-nginx.
The task is to extract the CN field from the client's certificate using nginx ingress. I was searching for a while and found a solution:</p>
<pre><code> map $ssl_client_s_dn $ssl_client_s_dn_cn {
default "";
~CN=(?<CN>[^/,\"]+) $CN;
}
</code></pre>
<p>But I can't adapt this code for nginx ingress. This is what I currently have and it doesn't work:</p>
<pre><code>nginx.ingress.kubernetes.io/http-snippets: |
map $ssl_client_s_dn $ssl_client_s_dn_cn {
default "";
~CN=(?<CN>[^/,\"]+) $CN;
}
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Remote-User $ssl_client_s_dn_cn;
</code></pre>
<p>Perhaps someone has faced this and knows how to adjust it properly, as I'm out of ideas.<br>
If you know a more elegant way to do it, please share your knowledge here.<br>
Thanks in advance.</p>
| DavidGreen55 | <p>I found a solution, hope it may help someone:</p>
<pre><code> nginx.ingress.kubernetes.io/http-snippet: |
map $ssl_client_s_dn $ssl_client_s_dn_cn {
default "";
~CN=(?<CN>[^/,\"]+) $CN;
};
nginx.ingress.kubernetes.io/location-snippet: |
proxy_set_header REMOTE-USER $ssl_client_s_dn_cn;
</code></pre>
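<p>Note that <code>http-snippet</code> is also available as a key in the ingress-nginx controller's ConfigMap, so if the per-Ingress annotation is not picked up by your controller version, the same <code>map</code> block can be applied cluster-wide. This is only a sketch; the ConfigMap name and namespace depend on how ingress-nginx was installed:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your installation
  namespace: ingress-nginx
data:
  http-snippet: |
    map $ssl_client_s_dn $ssl_client_s_dn_cn {
      default "";
      ~CN=(?<CN>[^/,\"]+) $CN;
    }
</code></pre>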
| DavidGreen55 |
<p>When I run the following code locally, on my laptop (using Python 3.10 and pandas 1.3.5), it takes approximately 0.031s (ballparking it):</p>
<pre><code>profile_data = (
profiles_df[data_cols]
.loc[profile_ids]
.rename(columns=new_cols)
.to_dict("records")
)
</code></pre>
<p>where data_cols and new_cols are two lists of strings, and profiles_df is a dataframe with mostly string data.
However, when I run in it in a pod, using the same python and pandas versions, I get it run in 0.1s approx. The pod has still ample secondary memory (a few GBs) and never reaches its limit, nor does it reach the CPU limits (1 out of 1.5)</p>
<ol>
<li>Is there a way to optimize the above code?</li>
<li>What could be causing this difference in performance?</li>
</ol>
| An old man in the sea. | <p><code>df.rename()</code> introduces some overhead, but you can skip that step by constructing the result directly with the new column names:</p>
<pre class="lang-py prettyprint-override"><code>profile_data = [{new_cols[col]: profiles_df.loc[ix, col]
for col in new_cols}
for ix in profile_ids]
</code></pre>
<p>I do not know the answer to your second question.</p>
| Arne |
<p>I have two pods, namely payroll and mysql, labelled as <code>name=payroll</code> and <code>name=mysql</code>. There's another pod named internal with the label <code>name=internal</code>. I am trying to allow egress traffic from internal to the other two pods while allowing all ingress traffic. My <code>NetworkPolicy</code> looks like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: internal-policy
spec:
podSelector:
matchLabels:
name: internal
policyTypes:
- Ingress
- Egress
ingress:
- {}
egress:
- to:
- podSelector:
matchExpressions:
- {key: name, operator: In, values: [payroll, mysql]}
ports:
- protocol: TCP
port: 8080
- protocol: TCP
port: 3306
</code></pre>
<p>This does not match the two pods payroll and mysql. What am I doing wrong?</p>
<p>The following works:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: internal-policy
spec:
podSelector:
matchLabels:
name: internal
policyTypes:
- Ingress
- Egress
ingress:
- {}
egress:
- to:
- podSelector:
matchLabels:
name: payroll
- podSelector:
matchLabels:
name: mysql
ports:
- protocol: TCP
port: 8080
- protocol: TCP
port: 3306
</code></pre>
<p>What is the best way to write a <code>NetworkPolicy</code> and why is the first one incorrect?</p>
<p>I am also wondering why the <code>to</code> field is an array while the <code>podSelector</code> inside it is also an array. I mean, they are the same, right? Multiple <code>podSelector</code> entries or multiple <code>to</code> fields; using either one works.</p>
| Riyafa Abdul Hameed | <blockquote>
<p>This does not match the two pods payroll and mysql. What am I doing wrong?</p>
</blockquote>
<ul>
<li>I've reproduced your scenario with pod-to-service and pod-to-pod environments, and in both cases <strong>both yamls worked</strong> well, after fixing the indentation on line 19, where both <code>podSelector</code> entries should be at the same level, as follows:</li>
</ul>
<pre><code> - to:
- podSelector:
matchLabels:
name: payroll
- podSelector:
matchLabels:
name: mysql
</code></pre>
<hr>
<blockquote>
<p>What is the best way to write a <code>NetworkPolicy</code>?</p>
</blockquote>
<ul>
<li>The best one depends on the scenario; it's good practice to create one NetworkPolicy for each rule. I'd say the first yaml is the best one if you intend to allow ports <code>8080</code> and <code>3306</code> towards BOTH pods; otherwise it would be better to create two rules to avoid leaving unnecessary ports open, as sketched below.</li>
</ul>
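<p>For example, splitting the first yaml into two dedicated policies could look like this (a sketch reusing the same labels and ports):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-to-payroll
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-to-mysql
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
</code></pre>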
<hr>
<blockquote>
<p>I also am wondering why the <code>to</code> field is an array while the <code>podSelector</code> is also an array inside it? I mean they are the same right? Multiple <code>podSelector</code> or multiple <code>to</code> fields. Using one of them works.</p>
</blockquote>
<ul>
<li><p>From <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#networkpolicyspec-v1-networking-k8s-io" rel="nofollow noreferrer">NetworkPolicySpec v1 networking API Ref:</a></p>
<blockquote>
<p><code>egress</code> <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#networkpolicyegressrule-v1-networking-k8s-io" rel="nofollow noreferrer">NetworkPolicyEgressRule array</a>:
List of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod, <strong>OR</strong> if the traffic <strong>matches at least one</strong> egress rule <strong>across all of the NetworkPolicy objects</strong> whose podSelector matches the pod.</p>
</blockquote></li>
<li><p>Also keep in mind that this list also includes the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#networkpolicyport-v1-networking-k8s-io" rel="nofollow noreferrer">Ports</a> Array as well.</p></li>
</ul>
<hr>
<blockquote>
<p>Why is the first one incorrect?</p>
</blockquote>
<ul>
<li>Both rules are basically the same, only written in different formats. I'd say you should check if there is any other rule in effect for the same labels.</li>
<li>I'd suggest you to create a test cluster and try applying the step-by-step example I'll leave below.</li>
</ul>
<hr>
<p><strong>Reproduction:</strong></p>
<ul>
<li>This example is very similar to your case. I'm using <code>nginx</code> images because they are easy to test with, and I changed the ports to <code>80</code> in the <code>NetworkPolicy</code>. I'm calling your first yaml <code>internal-original.yaml</code> and the second one you posted <code>second-internal.yaml</code>:</li>
</ul>
<pre><code>$ cat internal-original.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: internal-original
spec:
podSelector:
matchLabels:
name: internal
policyTypes:
- Ingress
- Egress
ingress:
- {}
egress:
- to:
- podSelector:
matchExpressions:
- {key: name, operator: In, values: [payroll, mysql]}
ports:
- protocol: TCP
port: 80
$ cat second-internal.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: internal-policy
spec:
podSelector:
matchLabels:
name: internal
policyTypes:
- Ingress
- Egress
ingress:
- {}
egress:
- to:
- podSelector:
matchLabels:
name: payroll
- podSelector:
matchLabels:
name: mysql
ports:
- protocol: TCP
port: 80
</code></pre>
<ul>
<li>Now we create the pods with the labels and expose the services:</li>
</ul>
<pre><code>$ kubectl run mysql --generator=run-pod/v1 --labels="name=mysql" --image=nginx
pod/mysql created
$ kubectl run internal --generator=run-pod/v1 --labels="name=internal" --image=nginx
pod/internal created
$ kubectl run payroll --generator=run-pod/v1 --labels="name=payroll" --image=nginx
pod/payroll created
$ kubectl run other --generator=run-pod/v1 --labels="name=other" --image=nginx
pod/other created
$ kubectl expose pod mysql --port=80
service/mysql exposed
$ kubectl expose pod payroll --port=80
service/payroll exposed
$ kubectl expose pod other --port=80
service/other exposed
</code></pre>
<ul>
<li>Now, before applying the <code>networkpolicy</code>, I'll log into the <code>internal</code> pod to download <code>wget</code>, because after that outside access will be blocked:</li>
</ul>
<pre><code>$ kubectl exec internal -it -- /bin/bash
root@internal:/# apt update
root@internal:/# apt install wget -y
root@internal:/# exit
</code></pre>
<ul>
<li>Since your rule is blocking access to DNS, I'll list the IPs and test with them:</li>
</ul>
<pre><code>$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP
internal 1/1 Running 0 62s 10.244.0.192
mysql 1/1 Running 0 74s 10.244.0.141
other 1/1 Running 0 36s 10.244.0.216
payroll 1/1 Running 0 48s 10.244.0.17
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.101.209.87 <none> 80/TCP 23s
other ClusterIP 10.103.39.7 <none> 80/TCP 9s
payroll ClusterIP 10.109.102.5 <none> 80/TCP 14s
</code></pre>
<ul>
<li>Now let's test the access with the first yaml:</li>
</ul>
<pre><code>$ kubectl get networkpolicy
No resources found in default namespace.
$ kubectl apply -f internal-original.yaml
networkpolicy.networking.k8s.io/internal-original created
$ kubectl exec internal -it -- /bin/bash
root@internal:/# wget --spider --timeout=1 http://10.101.209.87
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:17:55-- http://10.101.209.87/
Connecting to 10.101.209.87:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.109.102.5
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:18:04-- http://10.109.102.5/
Connecting to 10.109.102.5:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.103.39.7
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:18:08-- http://10.103.39.7/
Connecting to 10.103.39.7:80... failed: Connection timed out.
</code></pre>
<ul>
<li>Now let's test the access with the second yaml:</li>
</ul>
<pre><code>$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
internal-original name=internal 96s
$ kubectl delete networkpolicy internal-original
networkpolicy.networking.k8s.io "internal-original" deleted
$ kubectl apply -f second-internal.yaml
networkpolicy.networking.k8s.io/internal-policy created
$ kubectl exec internal -it -- /bin/bash
root@internal:/# wget --spider --timeout=1 http://10.101.209.87
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:24-- http://10.101.209.87/
Connecting to 10.101.209.87:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.109.102.5
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:30-- http://10.109.102.5/
Connecting to 10.109.102.5:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.103.39.7
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:35-- http://10.103.39.7/
Connecting to 10.103.39.7:80... failed: Connection timed out.
</code></pre>
<ul>
<li>As you can see, the connections to the services with the matching labels were OK and the connection to the pod that has a different label failed.</li>
</ul>
<p><strong>Note:</strong> If you wish to allow pods to resolve DNS, you can follow this guide: <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/11-deny-egress-traffic-from-an-application.md#allowing-dns-traffic" rel="nofollow noreferrer">Allow DNS Egress Traffic</a></p>
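<p>For reference, a minimal sketch of such a DNS egress rule, assuming the cluster's DNS pods carry the usual <code>k8s-app: kube-dns</code> label in <code>kube-system</code>:</p>
<pre><code>  egress:
  - to:
    - namespaceSelector: {}      # any namespace...
      podSelector:
        matchLabels:
          k8s-app: kube-dns      # ...as long as the pods are the cluster DNS
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
</code></pre>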
<p>If you have any questions, let me know in the comments.</p>
| Will R.O.F. |
<p>Is there anyone who can explain what the 'optimize-utilization' setting for the GKE autoscaler specifically does differently from the standard autoscaling? It claims to be more aggressive in downscaling, but does that mean that it doesn't look at the pod disruption budget, does it have a different limit for max resource usage (50% for the standard way), or does it have a 1-minute limit before scaling down instead of the normal 10 minutes? It is all very vague to me and I want to know the consequences before turning it on.</p>
| meerlol | <p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">From Cluster Autoscaler</a> Documentation:</p>
<blockquote>
<p><code>optimize-utilization</code>: Prioritize optimizing utilization over keeping spare resources in the cluster. When enabled, Cluster Autoscaler will scale down the cluster more aggressively: it can remove more nodes, and remove nodes faster. This profile has been optimized for use with batch workloads that are not sensitive to start-up latency. We do not currently recommend using this profile with serving workloads.</p>
</blockquote>
<ul>
<li>In <a href="https://cloud.google.com/sdk/docs/release-notes#kubernetes_engine_12" rel="nofollow noreferrer">January 28th 2020</a> the Autoscaling Profiles were promoted to beta:</li>
</ul>
<blockquote>
<p><strong>Promoted Autoscaling Profiles to beta</strong>. Use with gcloud beta container clusters create or gcloud container clusters update: --autoscaling-profile=balanced (default) or --autoscaling-profile=optimize-utilization.</p>
</blockquote>
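<p>For reference, enabling the profile on an existing cluster would look something like this (cluster name and zone are placeholders; depending on your gcloud version the <code>beta</code> command group may be required):</p>
<pre><code>gcloud beta container clusters update my-cluster \
    --zone us-central1-a \
    --autoscaling-profile optimize-utilization
</code></pre>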
<ul>
<li>These were the only references I could find on the subject.</li>
</ul>
<blockquote>
<p><strong>At beta</strong>, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. The <strong>average beta phase lasts about six months.</strong></p>
</blockquote>
<ul>
<li><p>Being recently promoted to beta probably means that it is still being assessed and fine-tuned before being fully released and properly documented.</p>
</li>
<li><p>The official suggestion to use this profile <strong>only</strong> for batch workloads (jobs) and not for serving workloads reinforces the point that it is not yet ready for every production environment.</p>
</li>
<li><p>I suggest you follow the recommendations provided, and if you are looking to apply it to serving workloads, I'd wait until it's promoted to General Availability.</p>
</li>
</ul>
<p>More references:</p>
<ul>
<li><a href="https://cloud.google.com/composer/docs/concepts/beta-support" rel="nofollow noreferrer">https://cloud.google.com/composer/docs/concepts/beta-support</a></li>
<li><a href="https://cloud.google.com/products#product-launch-stages" rel="nofollow noreferrer">https://cloud.google.com/products#product-launch-stages</a></li>
</ul>
| Will R.O.F. |
<p>I was running my kafka-connect on two ec2 machines. So irrespective of the number of tasks, these two machines would always stay up running tasks; hence, the machines were underused.
Recently I migrated kafka-connect to kubernetes and achieved good cpu/memory efficiency. </p>
<p><strong>But the problem arises when downscaling of kubernetes happens</strong>. <strong>Downscaling of pods does not happen gracefully</strong>. </p>
<p><strong>Eg.</strong> Suppose there are 2 pods p1 and p2.
p1 is running 3 tasks t1,t2,t3
p2 is running 2 tasks t4,t5
(here task t5 is task for source connector that brings data from postgres to kafka)</p>
<p>When any pod vanishes during downscaling, tasks running on it are rebalanced on other pods.
Suppose pod p2 vanishes.</p>
<p>After task rebalancing new state of cluster is:-
P1 is running 5 tasks t1,t2,t3,t4_new,t5_new</p>
<p>But logs for my source connector says that some other task(presumably task running on older pod t5) is still running and accessing postgres db data.</p>
<p>How can I make sure that whenever a pod is scaled down, it happens gracefully, in the sense that all tasks running on the pod are stopped first?</p>
| Tanuj Gupta | <p>It could be that the default <code>grace period</code> is not enough for you a
aplication to finish its tasks after recevied <code>SIGTERM</code> singal. </p>
<p><code>SIGTERM</code> signal is sent the the main process in the container and once the signal is recevied container should start a graceful shutdown of the running application and exit. </p>
<p>There is a very good explanation/flow described in <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">kubernetes official documentation</a> about <code>Termination of Pods</code>. </p>
<p>You could try to extend the <code>terminationGracePeriodSeconds</code> in your deployment to see if that helps (The default is 30): </p>
<pre><code>spec:
replicas:
template:
spec:
containers:
- name: test
image: ...
terminationGracePeriodSeconds: 60
</code></pre>
<p>The other way is to use a <code>preStop</code> hook. The <code>preStop</code> hook is executed immediately before a container is terminated.
When a container needs to be terminated, the Kubelet runs the pre-stop hook first and only then sends <code>SIGTERM</code> to the process. This can be used to initiate a graceful shutdown of the container. </p>
<p>It can be also used to perform some other operations before shutdown without having to implement those in the app itself. </p>
<p>This is a simple example of how it works (an <code>HTTP GET</code> request will be sent to the <code>/shutdown</code> path on the container's port 80): </p>
<pre><code>lifecycle:
preStop:
httpGet:
port: 80
      path: /shutdown
</code></pre>
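<p>If your application does not expose an HTTP shutdown endpoint, an <code>exec</code>-based hook is an alternative. Below is a minimal sketch that simply delays termination for a few seconds; the command is only an illustrative placeholder, not something Kafka Connect provides out of the box:</p>
<pre><code>lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10"]
</code></pre>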
<p>Here is also a link to the <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">kubernetes documentation</a> about <code>Container hooks</code>. Let me know if this was helpful. </p>
| acid_fuji |
<p>While using Kubernetes v1.16.8 both the ResourceQuota and LimitRanger are enabled by default and I did not have to add them in my admission plugin in kube-apiserver.
In my case, I use the following LimitRanger </p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: LimitRange
metadata:
name: mem-limit-range
namespace: test
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container
</code></pre>
<p>and it adds the default limit for memory usage in a new Pod without specified limits, as expected.<br>
The Pod's definition is as simple as possible:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-pod-ctr
image: redis
</code></pre>
<p><strong>When I get the created pod described it has acquired the value for limit from the LimitRanger.</strong>
Everything is fine!</p>
<p>The problem occurs when i try to enforce a resourcequota for the namespace.
The ResourceQuota looks like this:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: mem-cpu-demo
spec:
hard:
limits.cpu: "2"
limits.memory: 2Gi
</code></pre>
<p>When <strong>I delete and recreate the pod</strong> it will not be created.
The resourcequota will result in the following error:</p>
<p><em>Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu</em></p>
<p>In other words, <strong>the resourcequota is applied before LimitRanger so it does not let me create pods without a specified limit.</strong></p>
<p>Is there a way to enforce LimitRanger first and then the ResourceQuota?
How do you apply them to your namespaces? </p>
<p>I would like to have developers that do not specify limits in the pod definition to be able to acquire the defaults while enforcing the resource quota as well.</p>
| taggelos | <p><strong>TL;DR</strong>:</p>
<blockquote>
<p>Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: <strong>must specify limits.cpu</strong></p>
</blockquote>
<ul>
<li><p>You didn't set a default limit for CPU, according to <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">ResourceQuota Docs</a>:</p>
<blockquote>
<p><strong>If quota is enabled</strong> in a namespace for compute resources like cpu and memory, users <strong>must specify</strong> requests or limits for those values; <strong>otherwise</strong>, the quota system <strong>may reject pod creation</strong>.</p>
</blockquote></li>
<li><p>This is why the pod is not being created. Add a <code>cpu-limit.yaml</code>:</p></li>
</ul>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: cpu-limit-range
namespace: test
spec:
limits:
- default:
cpu: 1
defaultRequest:
cpu: 0.5
type: Container
</code></pre>
<ul>
<li><p>The LimitRanger admission controller injects the defaults at pod admission time (when the pod is created), and yes, it injects the default values prior to the ResourceQuota validation.</p></li>
<li><p>Another minor issue I found is that not all your yamls contain the <code>namespace: test</code> line under metadata; that's important to assign the resources to the right namespace. I fixed it in the example below.</p></li>
</ul>
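<p>As a side note, instead of keeping two separate LimitRange objects you could combine the memory and CPU defaults into a single one; a minimal sketch using the same namespace and values as above:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
      memory: 512Mi
    defaultRequest:
      cpu: 0.5
      memory: 256Mi
    type: Container
</code></pre>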
<p><strong>Reproduction:</strong></p>
<ul>
<li>Created namespace, applied first the mem-limit and quota, as you mentioned:</li>
</ul>
<pre><code>$ kubectl create namespace test
namespace/test created
$ cat mem-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
name: mem-limit-range
namespace: test
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: mem-cpu-demo
namespace: test
spec:
hard:
limits.cpu: "2"
limits.memory: 2Gi
$ kubectl apply -f mem-limit.yaml
limitrange/mem-limit-range created
$ kubectl apply -f quota.yaml
resourcequota/mem-cpu-demo created
$ kubectl describe resourcequota -n test
Name: mem-cpu-demo
Namespace: test
Resource Used Hard
-------- ---- ----
limits.cpu 0 2
limits.memory 0 2Gi
$ kubectl describe limits -n test
Name: mem-limit-range
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container memory - - 256Mi 512Mi -
</code></pre>
<ul>
<li>Now if I try to create the pod:</li>
</ul>
<pre><code>$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pod
namespace: test
spec:
containers:
- name: test-pod-ctr
image: redis
$ kubectl apply -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
</code></pre>
<ul>
<li>Same error you faced, because there is no default limits for CPU set. We'll create and apply it:</li>
</ul>
<pre><code>$ cat cpu-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
name: cpu-limit-range
namespace: test
spec:
limits:
- default:
cpu: 1
defaultRequest:
cpu: 0.5
type: Container
$ kubectl apply -f cpu-limit.yaml
limitrange/cpu-limit-range created
$ kubectl describe limits cpu-limit-range -n test
Name: cpu-limit-range
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 500m 1 -
</code></pre>
<ul>
<li>Now with the cpu limitRange in action, let's create the pod and inspect it:</li>
</ul>
<pre><code>$ kubectl apply -f pod.yaml
pod/test-pod created
$ kubectl describe pod test-pod -n test
Name: test-pod
Namespace: test
Status: Running
...{{Suppressed output}}...
Limits:
cpu: 1
memory: 512Mi
Requests:
cpu: 500m
memory: 256Mi
</code></pre>
<ul>
<li>Our pod was created with the enforced limitRange.</li>
</ul>
<p>If you have any question let me know in the comments.</p>
| Will R.O.F. |
<p><a href="https://stackoverflow.com/questions/38242062/how-to-get-kubernetes-cluster-name-from-k8s-api">How to get Kubernetes cluster name from K8s API</a> mentions that </p>
<pre><code>curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
</code></pre>
<p>(from within the cluster), or </p>
<pre><code>kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
</code></pre>
<p>(from outside the cluster), can be used to retrieve the cluster name. That works.</p>
<p>Is there a way to perform the same programmatically using the <code>k8s client-go</code> library? Maybe using the RESTClient()? I've tried but kept getting <code>the server could not find the requested resource</code>.</p>
<p><strong>UPDATE</strong></p>
<p>What I'm trying to do is to get the <code>cluster-name</code> from an app that runs either in a local computer or within a k8s cluster. the k8s <code>client-go</code> allows to initialise the <code>clientset</code> via <a href="https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration" rel="nofollow noreferrer">in cluster</a> or <a href="https://github.com/kubernetes/client-go/tree/master/examples/out-of-cluster-client-configuration" rel="nofollow noreferrer">out of cluster</a> authentication.</p>
<p>With the two commands mentioned at the top that is achievable. I was wondering if there was a way from the <code>client-go</code> library to achieve the same, instead of having to do <code>kubectl</code> or <code>curl</code> depending on where the service is run from.</p>
| supercalifragilistichespirali | <p>The data that you're looking for (name of the cluster) is available at GCP level. The name itself is a resource within GKE, not Kubernetes. This means that this specific information is not available using the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a>.
So in order to get this data, you can use the <a href="https://github.com/googleapis/google-cloud-go" rel="nofollow noreferrer">Google Cloud Client Libraries for Go</a>, designed to interact with GCP.</p>
<p>As a starting point, you can consult this <a href="https://github.com/googleapis/google-api-go-client/blob/master/GettingStarted.md" rel="nofollow noreferrer">document</a>. </p>
<p>First you have to download the <code>container</code> package: </p>
<pre><code>➜ go get google.golang.org/api/container/v1
</code></pre>
<p>Before you launch your code you will have to authenticate in order to fetch the data:
Google has a very <a href="https://cloud.google.com/docs/authentication/production" rel="nofollow noreferrer">good document</a> on how to achieve that. </p>
<p>Basically you have to <a href="https://console.cloud.google.com/apis/credentials/serviceaccountkey?_ga=2.197131116.1778639707.1582885931-1426982194.1570097934" rel="nofollow noreferrer">generate</a> a <code>ServiceAccount</code> key and pass it in the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable: </p>
<pre><code>➜ export GOOGLE_APPLICATION_CREDENTIALS=sakey.json
</code></pre>
<p>Regarding the information that you want, you can fetch the cluster information (including name) following <a href="https://github.com/GoogleCloudPlatform/golang-samples/blob/master/container/listclusters/listclusters.go" rel="nofollow noreferrer">this example</a>.</p>
<p>Once you do this you can launch your application like this: </p>
<pre><code>➜ go run main.go -project <google_project_name> -zone us-central1-a
</code></pre>
<p>And the result would be information about your cluster: </p>
<pre><code>Cluster "tom" (RUNNING) master_version: v1.14.10-gke.17 -> Pool "default-pool" (RUNNING) machineType=n1-standard-2 node_version=v1.14.10-gke.17 autoscaling=false%
</code></pre>
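<p>If all you need is the cluster name and you are authenticated with gcloud, a one-liner is also an option (project and zone below are placeholders):</p>
<pre><code>gcloud container clusters list --project <google_project_name> --zone us-central1-a --format="value(name)"
</code></pre>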
<p>Also it is worth mentioning that if you run this command: </p>
<pre><code>curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
</code></pre>
<p>You are also interacting with the GCP APIs and can go unauthenticated as long as it's run within a GCE machine/GKE cluster, which provides automatic authentication.</p>
<p>You can read more about it in Google's <a href="https://cloud.google.com/compute/docs/storing-retrieving-metadata" rel="nofollow noreferrer">Storing and retrieving instance metadata</a> document.</p>
<p>Finally, one great advantage of doing this with the Cloud Client Libraries, is that it can be launched externally (as long as it's authenticated) or internally within pods in a deployment.</p>
<p>Let me know if it helps. </p>
| acid_fuji |
<p>I've deployed <code>metrics-server</code> in my K8s cluster (ver. 1.15)<br>
I gather this is a standard way to perform simple mem utilization checks</p>
<p>I have a POD that contains multiple processes (wrapped with <code>dumb-init</code> for process reaping purposes)</p>
<p>I want to know the exact current memory usage of my POD.</p>
<p>The output <code>kube-capacity --util --pods</code>:</p>
<pre><code>NODE NAMESPACE POD CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
sj-k8s1 kube-system kube-apiserver-sj-k8s1 250m (6%) 0m (0%) 77m (1%) 0Mi (0%) 0Mi (0%) 207Mi (2%)
...
sj-k8s3 salt-provisioning salt-master-7dcf7cfb6c-l8tth 0m (0%) 0m (0%) 220m (5%) 1536Mi (19%) 3072Mi (39%) 1580Mi (20%)
</code></pre>
<p>Shows that salt-master POD uses currently ~1.6Gi and kubeapi uses ~200Mi</p>
<p>However performing on <code>sj-k8s3</code>, command <code>ps aux | awk '$12 ~ /salt-master/ {sum += $6} END {print sum}'</code> (sum of RSS from PS output):</p>
<pre><code>2051208
</code></pre>
<p>Which is ~2Gi, the output of <code>/sys/fs/cgroup/memory/memory.stats</code>:</p>
<pre><code>cache 173740032
rss 1523937280
rss_huge 0
shmem 331776
mapped_file 53248
dirty 4096
writeback 0
pgpgin 34692690
pgpgout 34278218
pgfault 212566509
pgmajfault 6
inactive_anon 331776
active_anon 1523916800
inactive_file 155201536
active_file 18206720
unevictable 0
hierarchical_memory_limit 2147483648
total_cache 173740032
total_rss 1523937280
total_rss_huge 0
total_shmem 331776
total_mapped_file 53248
total_dirty 4096
total_writeback 0
total_pgpgin 34692690
total_pgpgout 34278218
total_pgfault 212566509
total_pgmajfault 6
total_inactive_anon 331776
total_active_anon 1523916800
total_inactive_file 155201536
total_active_file 18206720
total_unevictable 0
</code></pre>
<p>This POD actually contains two docker containers, so actual sum of RSS is:</p>
<pre><code>2296688
</code></pre>
<p>which is even bigger: 2.3Gi </p>
<p>On apiserver Node, performing just <code>ps aux</code> reveals that process RSS is: <code>447948</code>
The output of <code>/sys/fs/cgroup/memory/memory.stats</code>:</p>
<pre><code>cache 78499840
rss 391188480
rss_huge 12582912
shmem 0
mapped_file 69423104
dirty 0
writeback 0
pgpgin 111883
pgpgout 1812
pgfault 100603
pgmajfault 624
inactive_anon 0
active_anon 215531520
inactive_file 253870080
active_file 270336
unevictable 0
hierarchical_memory_limit 8361357312
total_cache 78499840
total_rss 391188480
total_rss_huge 12582912
total_shmem 0
total_mapped_file 69423104
total_dirty 0
total_writeback 0
total_pgpgin 111883
total_pgpgout 1812
total_pgfault 100603
total_pgmajfault 624
total_inactive_anon 0
total_active_anon 215531520
total_inactive_file 253870080
total_active_file 270336
total_unevictable 0
</code></pre>
<p>Could someone explain why the reported POD memory utilization differs from simple <code>ps</code> by almost 40% (for apiserver process by 100) ?</p>
<p>EDIT: I've updated the memory reported values to include output of <code>/sys/fs/cgroup/memory/memory.stat</code> which seems to +- correspond to POD utilization reported by <code>kube-capacity</code><br>
As suggested in first comment: does it mean that the difference is the shared memory only (reported by PS, but not by POD metrics/cgroup)?<br>
The difference is pretty big</p>
| lakier | <p>The <code>ps</code> does not reflect the actual amount of memory used by the application but only the memory reserved for it. It can be very misleading if pages are shared by several processes or by using some dynamically linked libraries. </p>
<p><a href="http://virtualthreads.blogspot.com/2006/02/understanding-memory-usage-on-linux.html" rel="nofollow noreferrer">Understanding memory usage on Linux</a> is a very good article describing how memory usage in Linux works and what ps is actually reporting. </p>
<blockquote>
<p><strong>Why <code>ps</code> is "wrong"</strong></p>
<p>Depending on how you look at it, <code>ps</code> is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up <strong>if it were the only process running</strong>. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by <code>ps</code> are almost definitely <em>wrong</em>.</p>
</blockquote>
<p>That is why <code>ps</code> should not be used for some detailed data for memory consumption. </p>
<p>An alternative to <code>ps</code> would be <code>smem</code>.
It reports physical memory usage, taking shared memory pages into account. Unshared memory is reported as the <code>USS</code> (Unique Set Size), so you can use <code>USS</code> when you want to ignore shared memory. </p>
<p>The unshared memory (<code>USS</code>) plus the process's proportion of shared memory is reported as the <code>PSS</code> (Proportional Set Size). Basically it adds <code>USS</code> to a proportion of the shared memory, divided by the number of processes sharing that memory. </p>
<p>On the other hand, <code>RSS</code> (Resident Set Size) is the amount of shared memory plus unshared memory used by each process. If any processes share memory, summing their RSS will over-report the amount of memory that is actually used. </p>
<p>Linux uses a resource management technique called <code>copy-on-write</code> to efficiently implement a <code>duplicate</code> or <code>copy</code> operation. So when you have a parent and a child process, they both will show the same RSS. With <code>copy-on-write</code> Linux ensures that both processes are really using the same memory. </p>
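<p>As an illustration, something along these lines could be run on the node to compare USS/PSS/RSS for the salt-master processes (it assumes <code>smem</code> is installed on the node; the process filter is just an example):</p>
<pre><code># -P filters processes by name (regex), -k prints human-readable units
smem -k -P salt-master
</code></pre>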
| acid_fuji |
<p>I Installed K8S with Helm Charts on EKS but the Loadbalancer EXTERNAL IP is in pending state , I see that EKS does support the service Type : LoadBalancer now.</p>
<p>Is it something I will have to check at the network outgoing traffic level ? Please share your experience if any.</p>
<p>Tx,</p>
| Rajesh Thakur | <p>The <code>Loadbalancer</code> usually takes some seconds or a few minutes to provision you an IP.</p>
<p>If after 5 minutes the IP isn't provisioned:
- run <code>kubectl get svc <SVC_NAME> -o yaml</code> and if there is any different annotation set.</p>
<ul>
<li><p>By default services with <code>Type:LoadBalancer</code> are provisioned with Classic Load Balancers automatically. Learn more <strong><a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html" rel="nofollow noreferrer">here</a></strong>. </p></li>
<li><p>If you wish to use Network load Balancers you have to use the annotation:</p></li>
</ul>
<pre><code>service.beta.kubernetes.io/aws-load-balancer-type: nlb
</code></pre>
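<p>As a sketch, a Service using that annotation could look like this (names and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
</code></pre>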
<ul>
<li><p>The process is really automatic, you don't have to check for network traffic.</p></li>
<li><p>You can check if there is any issue with the Helm Chart you are deploying by manually creating a service with loadbalancer type and check if it gets provisioned:</p></li>
</ul>
<pre><code>$ kubectl run --generator=run-pod/v1 nginx --image=nginx --port=80
pod/nginx created
$ kubectl get pod nginx
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 34s
$ kubectl expose pod nginx --type=LoadBalancer
service/nginx exposed
$ kubectl get svc nginx -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.1.63.178 <pending> 80:32522/TCP 7s
nginx LoadBalancer 10.1.63.178 35.238.146.136 80:32522/TCP 42s
</code></pre>
<ul>
<li>In this example the LoadBalancer took <code>42s</code> to be provisioned. This way you can verify if the issue is on the Helm Chart or something else.</li>
</ul>
| Will R.O.F. |
<p>I am trying to add resource requests and limits to my deployment on Kubernetes Engine since one of the pods in my deployment is continuously getting evicted with the error message <code>The node was low on resource: memory. Container model-run was using 1904944Ki, which exceeds its request of 0.</code> I assume that the issue could be resolved by adding resource requests. </p>
<p>When I try to add resource requests and deploy, the deployment is successful, but when I go back and view detailed information about the Pod with the command
<code>kubectl get pod default-pod-name --output=yaml --namespace=default</code>
it still says the pod has a request of cpu: 100m, without any mention of the memory that I have allotted. I am guessing that the cpu request of 100m was a default. Please let me know how I can set the requests and limits; the code I am using to deploy is as follows:</p>
<pre><code>kubectl run model-run --image-pull-policy=Always --overrides='
{
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "gcr.io/some-project/news/model-run:development",
"imagePullPolicy": "Always",
"resouces": {
"requests": [
{
"memory": "2048Mi",
"cpu": "500m"
}
],
"limits": [
{
"memory": "2500Mi",
"cpu": "750m"
}
]
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath":"/path/collection/keys"
}
],
"env":[
{
"name":"GOOGLE_APPLICATION_CREDENTIALS",
"value":"/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}
' --image=gcr.io/some-project/news/model-run:development
</code></pre>
<p>Any solution will be appreciated</p>
| Ranga Vittal | <blockquote>
<p>The node was low on resource: memory. Container model-run was using 1904944Ki, which exceeds its request of 0.</p>
</blockquote>
<p>At first the message seems like there is a lack of resource in the <code>node</code> itself but the second part makes me believe you are correct in trying to raise the request limit for the container.</p>
<p>Just keep in mind that if you still face errors after this change, you might need to add more powerful node-pools to your cluster.</p>
<p>I went through your command, there is a few issues I'd like to highlight:</p>
<ul>
<li><code>kubectl run</code> was <a href="https://github.com/kubernetes/kubernetes/pull/68132" rel="nofollow noreferrer">deprecated</a> in 1.12 for all resources except pods, and it is retired in version 1.18.</li>
<li><code>apiVersion": "apps/v1beta1</code> is <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="nofollow noreferrer">deprecated</a>, and starting with v1.16 it is no longer supported, so I replaced it with <code>apps/v1</code>. </li>
<li>In <code>spec.template.spec.containers</code> it's written <code>"resouces"</code> instead of <strong><code>"resources"</code></strong></li>
<li>after fixing that, the next issue is that <code>requests</code> and <code>limits</code> are written as <code>arrays</code>, but they need to be <code>maps</code> (plain key/value objects), otherwise you get this error:</li>
</ul>
<pre><code>kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Resources: v1.ResourceRequirements.Limits: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|"limits":[{"cpu":"75|..., bigger context ...|Always","name":"model-run","resources":{"limits":[{"cpu":"750m","memory":"2500Mi"}],"requests":[{"cp|...
</code></pre>
<ul>
<li>Here is the fixed format of your command:</li>
</ul>
<pre><code>kubectl run model-run --image-pull-policy=Always --overrides='{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "nginx",
"imagePullPolicy": "Always",
"resources": {
"requests": {
"memory": "2048Mi",
"cpu": "500m"
},
"limits": {
"memory": "2500Mi",
"cpu": "750m"
}
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath": "/path/collection/keys"
}
],
"env": [
{
"name": "GOOGLE_APPLICATION_CREDENTIALS",
"value": "/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}' --image=gcr.io/some-project/news/model-run:development
</code></pre>
<ul>
<li>Now after aplying it on my Kubernetes Engine Cluster <code>v1.15.11-gke.13</code> , here is the output of <code>kubectl get pod X -o yaml</code>:</li>
</ul>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
model-run-7bd8d79c7d-brmrw 1/1 Running 0 17s
$ kubectl get pod model-run-7bd8d79c7d-brmrw -o yaml
apiVersion: v1
kind: Pod
metadata:
labels:
app: model-run
pod-template-hash: 7bd8d79c7d
run: model-run
name: model-run-7bd8d79c7d-brmrw
namespace: default
spec:
containers:
- env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /path/collection/keys/key.json
image: nginx
imagePullPolicy: Always
name: model-run
resources:
limits:
cpu: 750m
memory: 2500Mi
requests:
cpu: 500m
memory: 2Gi
volumeMounts:
- mountPath: /path/collection/keys
name: credentials
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-tjn5t
readOnly: true
nodeName: gke-cluster-115-default-pool-abca4833-4jtx
restartPolicy: Always
volumes:
- name: credentials
secret:
defaultMode: 420
secretName: credentials
</code></pre>
<ul>
<li>You can see that the resources limits and requests were set.</li>
</ul>
<p>If you still have any question let me know in the comments!</p>
| Will R.O.F. |
<p>I'm trying to deploy a simple python app to Google Container Engine:</p>
<p>I have created a cluster and then ran <code>kubectl create -f deployment.yaml</code>.
This created a deployment pod on my cluster. After that I created a service with: <code>kubectl create -f deployment.yaml</code></p>
<blockquote>
<p>Here's my Yaml configurations:</p>
<p><strong>pod.yaml</strong>:</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-app
spec:
containers:
- name: test-ctr
image: arycloud/flask-svc
ports:
- containerPort: 5000
</code></pre>
<blockquote>
<p>Here's my Dockerfile:</p>
</blockquote>
<pre><code>FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./app.py
</code></pre>
<blockquote>
<p><strong>deployment.yaml:</strong></p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: test-app
name: test-app
spec:
replicas: 1
template:
metadata:
labels:
app: test-app
name: test-app
spec:
containers:
- name: test-app
image: arycloud/flask-svc
resources:
requests:
cpu: "100m"
imagePullPolicy: Always
ports:
- containerPort: 8080
</code></pre>
<blockquote>
<p><strong>service.yaml:</strong></p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 32000
selector:
app: test-app
</code></pre>
<blockquote>
<p><strong>Ingress</strong></p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
</code></pre>
<p>It creates a LoadBalancer and provides an external IP; when I open the IP it returns a <code>Connection Refused</code> error.</p>
<p>What's going wrong?</p>
<p>Help me, please!</p>
<p>Thank You,
Abdul</p>
| Abdul Rehman | <p>First make sure your ingress controller in running, to check that <code>kubectl get pods -n ingress-nginx</code> if you dont find any pods running you need to deploy the kubernetes ingress you can do that by <code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml</code>.</p>
<p>If you have installed ingress controller correctly, then just apply the yaml below, you need to have <strong>selector</strong> in your deployment so that the deployment can manage the replicas, apart from that you dont need to expose a node port as you are going to access your app through load balancer.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: test-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-app
name: test-app
spec:
selector:
matchLabels:
app: test-app
replicas: 1
template:
metadata:
labels:
app: test-app
spec:
containers:
- name: test-app
image: arycloud/flask-svc
imagePullPolicy: Always
ports:
- containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
</code></pre>
| Talha Latif |
<p>I have a subnet mask for my subnet set to 10.0.0.0/9. When setting up kubernetes, google asks for a master ip range for kubernetes. I set this to 10.0.0.0/28 but I have no idea if this is correct or how these two things are related? Is there any info on that?</p>
<p>Also, did I do that right? I assume the kubernetes has to be using the ips of the subnet.</p>
<p>thanks,
Dean</p>
| Dean Hiller | <p>"Master IP Range" is only relevant in GKE when you enable <strong>Private Network</strong>.</p>
<p>When creating a private cluster, the <code>Master IP Range</code> has the following information message:</p>
<blockquote>
<p>Master IP range is a private RFC 1918 range for the master's VPC. The master range must not overlap with any subnet in your cluster's VPC. The master and your cluster use VPC peering to communicate privately.
This setting is permanent.</p>
</blockquote>
<ul>
<li>Since 10.0.0.0/28 is a range inside 10.0.0.0/9, it will not effectively isolate the cluster.
I created a VPC subnet with 10.0.0.0/9 and tried to create the cluster with Master IP Range 10.0.0.0/28; look at the message I get while creating it:</li>
</ul>
<p><a href="https://i.stack.imgur.com/D1wxx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D1wxx.png" alt="enter image description here"></a></p>
<p>If you look at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">Creating a Private GKE Cluster</a> you can find many configuration examples for different access types.</p>
<p>Example:
If your subnet is 10.0.0.0/9 you must use a Master IP Range outside of that range.</p>
<ul>
<li>Since 10.0.0.0/9 ends at 10.127.255.255, you can set the master range to any /28 inside 10.128.0.0/9, 172.16.0.0/12 or 192.168.0.0/16, as long as it does not overlap any other VPC or subnet in your project.</li>
</ul>
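<p>As an illustration, a private cluster with a non-overlapping master range could be created along these lines (cluster name, zone and CIDR are placeholders):</p>
<pre><code>gcloud container clusters create my-private-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.16/28
</code></pre>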
<p>Here you can learn more about <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#inside-cluster" rel="nofollow noreferrer">GKE Networking</a>.</p>
<p>If you have any doubts let me know in the comments.</p>
| Will R.O.F. |
<p>Aws sdk getting instance credential from eks instead of ec2</p>
<p>I'm using spring cloud aws to send messages to an SNS topic, and locally the <a href="https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html" rel="nofollow noreferrer">credential chain</a> works fine with a .aws/credentials file.</p>
<p>For cloud deployment, we are using IAM roles for service accounts. In the SDK <a href="https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html" rel="nofollow noreferrer">doc</a>, the credential chain assumes this role if no other is found.</p>
<p>This would be the easy way, but it doesn't happen: when spring is up it somehow takes the role that is assigned to the EKS node, which in theory it shouldn't even use; this is not correct and causes a permission error when I use SNS.</p>
<pre><code>software.amazon.awssdk.services.sns.model.AuthorizationErrorException: User: arn:aws:sts::*******:assumed-role/eksctl-*******-eks-qa-nodegroup-spo-NodeInstanceRole-*******/i-******* is not authorized to perform: SNS:ListTopics
</code></pre>
<p>I've tried several ways to get it right, including</p>
<pre><code>@Bean
@Primary
public AmazonSNS amazonSns() {
return AmazonSNSClientBuilder.standard()
.withCredentials(new InstanceProfileCredentialsProvider())
.build();
}
cloud:
aws:
credentials:
use-default-aws-credentials-chain: true
</code></pre>
<p>and a few others.</p>
<p>I isolated the error, and the sdk v1 is responsible. I uploaded a code version with the pure sdk v2 without modifying anything in the environment, and it worked as it should, using the credential chain and getting the correct role.</p>
<p>I already checked this <a href="https://stackoverflow.com/a/70028461/13041893">answer</a>, and the version used by spring is 1.11.951 and with the pure sdk I used 1.12.142 . The minimum version by the <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html" rel="nofollow noreferrer">doc</a> is 1.11.704</p>
<p>Using the pure sdk v2 is a bit laborious and unnecessary if spring already provides this implementation; it will be used by default in <a href="https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available" rel="nofollow noreferrer">spring cloud aws V3.0</a>.</p>
<p>My gradle.build</p>
<pre><code> plugins {
id 'org.springframework.boot' version '2.6.2'
id 'io.spring.dependency-management' version '1.0.11.RELEASE'
id 'java'
}
group = 'com.multilaser.worker'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '17'
ext {
set('springCloudVersionAws', "2.3.2")
}
repositories {
mavenCentral()
mavenLocal()
}
configurations {
compileOnly {
extendsFrom annotationProcessor
}
}
dependencies {
implementation 'org.springframework.boot:spring-boot-starter'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
implementation 'io.awspring.cloud:spring-cloud-starter-aws'
implementation 'io.awspring.cloud:spring-cloud-starter-aws-messaging'
implementation 'io.awspring.cloud:spring-cloud-aws-autoconfigure'
compileOnly 'org.projectlombok:lombok'
annotationProcessor 'org.projectlombok:lombok'
implementation 'org.springframework.boot:spring-boot-starter-actuator'
annotationProcessor 'org.mapstruct:mapstruct-processor:1.4.2.Final'
}
dependencyManagement {
imports {
mavenBom "io.awspring.cloud:spring-cloud-aws-dependencies:${springCloudVersionAws}"
}
}
</code></pre>
<p>depoy.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ----
namespace: ---
labels:
app: ---
spec:
replicas: 1
selector:
matchLabels:
app: ---
template:
metadata:
labels:
app: ---
spec:
serviceAccountName: ----
containers:
- name: ---
image: ---
imagePullPolicy: Always
resources:
requests:
memory: "256Mi"
cpu: "80m"
limits:
memory: "800Mi"
cpu: "500m"
readinessProbe:
failureThreshold: 3
httpGet:
path: /actuator/health
port: 8080
httpHeaders:
- name: X-Custom-Header
value: ReadinessProbe
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
livenessProbe:
failureThreshold: 3
httpGet:
path: /actuator/health
port: 8080
httpHeaders:
- name: X-Custom-Header
value: LivenessProbe
initialDelaySeconds: 35
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 10
envFrom:
- configMapRef:
name: ---
- secretRef:
name: ---
ports:
- containerPort: 8080
- containerPort: 5005
</code></pre>
| Bryan Motta | <p>Adding the lib aws-java-sdk-sts to the project solved the problem</p>
<pre><code>implementation group: 'com.amazonaws', name: 'aws-java-sdk-sts', version: '1.11.951'
</code></pre>
| Bryan Motta |
<p>I am using <code>kubectl apply -f pv.yaml</code> on this basic setup:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-demo
spec:
storageClassName: "normal"
capacity:
storage: 1Gi
persistentVolumeReclaimPolicy: Delete
accessModes:
- ReadWriteOnce
hostPath:
path: /home/demo/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-demo
spec:
storageClassName: "normal"
resources:
requests:
storage: 200Mi
accessModes:
- ReadWriteOnce
---
apiVersion: v1
kind: Pod
metadata:
name: nginx-demo
labels:
name: nginx-demo
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: pv-demo
volumes:
- name: pv-demo
persistentVolumeClaim:
claimName: pvc-demo
</code></pre>
<p>Now I wanted to delete everything so I used: <code>kubectl delete -f pv.yaml</code>
However, the volume still persists on the node at /home/demo and has to be removed manually.</p>
<p>So I tried to patch and remove protection before deletion: </p>
<pre><code>kubectl patch pv pv-demo -p '{"metadata":{"finalizers":null}}'
</code></pre>
<p>But the mount still persists on the node.
I tried to edit and null Finalizers manually, although it said 'edited'; <code>kubectl get pv</code> shows Finalizers unmodified.</p>
<p>I don't understand what's going on. Why is none of the above working? I want the mount folder /home/demo on the node to be deleted as well when I delete the resources.</p>
| x300n | <p>This is expected behavior when using <code>hostPath</code> as it does not support deletion as to other volume types. I tested this with <code>kubeadm</code> and <code>gke</code> clusters and the mounted directory and files remain intact after removal the <code>pv</code> and <code>pvc</code>. </p>
<p>Taken from the manual about reclaim policies:</p>
<blockquote>
<p>Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD,<br>
Azure Disk, and Cinder volumes support deletion.</p>
</blockquote>
<p>While <code>recycle</code> is mentioned in the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycle" rel="nofollow noreferrer">documentation</a> as deprecated <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.5.md#deprecations" rel="nofollow noreferrer">since version 1.5</a>, it still works and can clean up your files, but it won't delete your mounted directory. It is not ideal, but that is the closest workaround. </p>
<p>IMPORTANT:
To successfully use recycle you cannot delete the PV itself. If you delete the PVC, the controller manager creates a recycler pod that cleans up the volume, and the volume then becomes available for binding to the next PVC. </p>
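<p>For reference, switching the reclaim policy in the PV from the question would look roughly like this (only the <code>persistentVolumeReclaimPolicy</code> field changes):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: "normal"
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Recycle
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/demo/
</code></pre>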
<p>When looking at the controller-manager logs you can see that the <code>host_path deleter</code> rejects deletion of the <code>/home/demo/</code> directory and supports only deletion of directories matching <code>/tmp/.+</code>. However, after testing this, the <code>tmp</code> directory is also not being deleted. </p>
<pre><code>'Warning' reason: 'VolumeFailedDelete' host_path deleter only supports /tmp/.+ but received provided /home/demo/
</code></pre>
| acid_fuji |
<p>I have a React app which I build and dockerize to be hosted on a nginx server.</p>
<pre><code>FROM nginx:latest
COPY build /usr/share/nginx/html
</code></pre>
<p>I then create a simple deployment and service for it (which I very much doubt are the problem). Next I create an nginx ingress.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: my-app-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: <MY_NGINX_CONTROLLER_EXTERNAL_IP>.xip.io
http:
paths:
- backend:
serviceName: my-app-service
servicePort: 80
path: /myapp
</code></pre>
<p>But when I call <code><MY_NGINX_CONTROLLER_EXTERNAL_IP>.xip.io/myapp</code> I get a 404.</p>
<p>When I change <code>path: /myapp</code> to <code>path: /</code> I can reach my app with no problems on <code><MY_NGINX_CONTROLLER_EXTERNAL_IP>.xip.io</code></p>
<p>I have tried numerous variations of <code>rewrite-target</code> and <code>path</code> like <code>/$1</code> and <code>/myapp/(.*)</code> but it makes no difference.</p>
<p>I believe the problem lies with using the <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotation to route to a React app. Because using that annotation for any other service on my cluster works just fine.</p>
<p>Other users seem to have similar problems: <a href="https://github.com/kubernetes/ingress-nginx/issues/3770#" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/3770#</a>. But I was unable to find a solution.</p>
<p>A workaround would be to change the <code>basename</code> and <code>homepage</code> attributes within my react app and then update all routes. Then I would not need <code>rewrite-target</code> at all. But making it work with <code>rewrite-target</code> would be much cleaner, especially considering that the path/subdirectory will have to change regularly for my app.</p>
| 2um9YJ6haZ7wKP4m | <p>I got it working now. With two additional steps "rewrite-target" annotation is not required at all and can be removed from the ingress resource.</p>
<p>Step 1: Change the base URL within the react app. See how to do this <a href="https://skryvets.com/blog/2018/09/20/an-elegant-solution-of-deploying-react-app-into-a-subdirectory/" rel="nofollow noreferrer">here</a>.</p>
<p>Step 2: Copy the react build into the corresponding subdirectory of <code>/usr/share/nginx/html</code> in the Dockerfile:</p>
<pre><code>FROM nginx:latest
COPY build /usr/share/nginx/html/myapp
</code></pre>
| 2um9YJ6haZ7wKP4m |
<p>The "nginx.ingress.kubernetes.io/rewrite-target" annotation on my ingress resource does not seem to be doing anything. Everything works just fine when I change <code>path: /helloworld</code> to <code>path: /</code>. I have tried putting the annotation's value in double quotes and changing the order of annotations to no effect. What am I missing? It seems like this should be fairly straightforward.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: helloworld-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: <NGINX_CONTROLLER_EXTERNAL_IP>.xip.io
http:
paths:
- backend:
serviceName: helloworld-svc
servicePort: 80
path: /helloworld
</code></pre>
| 2um9YJ6haZ7wKP4m | <p>I solved my problem. It was not related to the "rewrite-target" annotation itself. I was trying to point it to a Nginx server serving a React.js app. I needed to set the base URL within the React app and copy the build to the corresponding subdirectory of <code>/usr/share/nginx/html</code> in the Dockerfile. After taking those steps there was no need for using "rewrite-target" anymore. I posted a <a href="https://stackoverflow.com/questions/65546735/rewrite-target-failing-when-routing-to-react-js-app-on-nginx-server/65560923#65560923">separate question</a> detailing the setup with React and Nginx and answered it too.</p>
| 2um9YJ6haZ7wKP4m |
<p>Is it possible to get into a chart which has been pulled from the bitnami or stable repo? And what are the requirements if I want to write my own chart.yml and deploy it into a kubernetes pod, and what would be the command?</p>
<p>If I type the command helm install bitnami/tomcat, helm deploys a service, right? So in the background there has to be a chart.yml which supports this execution, so is it possible to edit that chart.yml?</p>
<p>Please help me out!</p>
| hemanth43 | <p>We can't modify public repositories from other companies for obvious
reasons.</p>
<p>But <strong>you can download, modify and apply it!</strong></p>
<p>Using your bitnami/tomcat as example.</p>
<ul>
<li>On <strong>Helm 2</strong> you can use <strong>fetch</strong>:</li>
</ul>
<pre><code>$ helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
$ helm fetch bitnami/tomcat
❯ ls
tomcat-6.2.4.tgz
</code></pre>
<ul>
<li>If you are running <strong>Helm 3</strong> the <em>fetch</em> was replaced by <strong>pull</strong>:</li>
</ul>
<pre><code>$ helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
$ helm pull bitnami/tomcat
$ ls
tomcat-6.2.4.tgz
</code></pre>
<ul>
<li>It will download a <strong>tgz</strong> of the chart, just unpack it, modify what you want carefully and then you can apply it locally pointing to the folder where it was unpacked:</li>
</ul>
<pre><code>$ tar -xvzf tomcat-6.2.4.tgz
tomcat/Chart.yaml
tomcat/values.yaml
tomcat/templates/NOTES.txt
tomcat/templates/_helpers.tpl
tomcat/templates/deployment.yaml
tomcat/templates/ingress.yaml
tomcat/templates/pvc.yaml
tomcat/templates/secrets.yaml
tomcat/templates/svc.yaml
tomcat/.helmignore
tomcat/README.md
tomcat/ci/values-with-ingress-and-initcontainers.yaml
$ ls
tomcat tomcat-6.2.4.tgz
$ cd tomcat
$ ls
Chart.yaml ci README.md templates values.yaml
$ head Chart.yaml
apiVersion: v1
appVersion: 9.0.31
description: Chart for Apache Tomcat
home: http://tomcat.apache.org
icon: https://bitnami.com/assets/stacks/tomcat/img/tomcat-stack-110x117.png
keywords:
- tomcat
- java
- http
- web
$ helm install . --generate-name
NAME: chart-1583237097
LAST DEPLOYED: Tue Mar 3 13:04:58 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
...
$ helm3 list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
chart-1583237097 default 1 2020-03-03 13:04:58.617410239 +0100 CET deployed tomcat-6.2.4 9.0.31
</code></pre>
<ul>
<li><p>I didn't change anything, but as you can see the chart is open for you to modify as you like.</p></li>
<li><p>You can even create a private repository for your custom charts, learn more here: <a href="https://helm.sh/docs/topics/chart_repository/" rel="noreferrer">The Chart Repository Guide</a></p></li>
</ul>
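<p>If all you want to change are configuration values (rather than the templates themselves), you can also keep the upstream chart untouched and pass your own values at install time. A sketch with Helm 3 (Helm 2 uses <code>helm inspect values</code> instead of <code>helm show values</code>):</p>
<pre><code>$ helm show values bitnami/tomcat > my-values.yaml
$ # edit my-values.yaml as needed, then:
$ helm install my-tomcat bitnami/tomcat -f my-values.yaml
</code></pre>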
| Will R.O.F. |
<p>Hi I saw <a href="https://itnext.io/running-kubectl-commands-from-within-a-pod-b303e8176088" rel="noreferrer">this documentation</a> where kubectl can run inside a pod in the default pod.
Is it possible to run kubectl inside a Job resource in a specified namespace?
Did not see any documentation or examples for the same.. </p>
<p>When I tried adding serviceAccounts to the container i got the error:</p>
<pre><code>Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
</code></pre>
<p>This was when I tried sshing into the container and running kubectl.</p>
<p>Editing the question.....</p>
<p>As I mentioned earlier, based on the documentation I had added the service Accounts, Below is the yaml: </p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: internal-kubectl
namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: modify-pods
namespace: my-namespace
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: modify-pods-to-sa
namespace: my-namespace
subjects:
- kind: ServiceAccount
name: internal-kubectl
roleRef:
kind: Role
name: modify-pods
apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
name: testing-stuff
namespace: my-namespace
spec:
template:
metadata:
name: testing-stuff
spec:
serviceAccountName: internal-kubectl
containers:
- name: tester
image: bitnami/kubectl
command:
- "bin/bash"
- "-c"
- "kubectl get pods"
restartPolicy: Never
</code></pre>
<p>On running the job, I get the error:</p>
<pre><code>Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
</code></pre>
| Vipin Menon | <blockquote>
<p>Is it possible to run kubectl inside a Job resource in a specified namespace? Did not see any documentation or examples for the same..</p>
</blockquote>
<p>A <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">Job</a> creates one or more Pods</strong> and ensures that a specified number of them successfully terminate. It means the permission aspect is the same as in a normal pod, meaning that <strong>yes, it is possible to run kubectl inside a job resource.</strong></p>
<p><strong>TL;DR:</strong></p>
<ul>
<li>Your yaml file is <strong>correct</strong>, maybe there were something else in your cluster, I recommend deleting and recreating these resources and try again.</li>
<li>Also check the version of your Kubernetes installation and job image's kubectl version, if they are more than 1 minor-version apart, <a href="https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl" rel="noreferrer">you may have unexpected incompatibilities</a></li>
</ul>
<hr>
<p><strong>Security Considerations:</strong></p>
<ul>
<li>Your job role's scope is the best practice according to <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions" rel="noreferrer">documentation</a> (specific role, to specific user on specific namespace).</li>
<li>If you use a <code>ClusterRoleBinding</code> with the <code>cluster-admin</code> role it will work, but it's over permissioned, and not recommended since it's giving full admin control over the entire cluster.</li>
</ul>
<hr>
<p><strong>Test Environment:</strong></p>
<ul>
<li>I deployed your config on a kubernetes 1.17.3 and run the job with <code>bitnami/kubectl</code> and <code>bitnami/kubectl:1:17.3</code>. It worked on both cases.</li>
<li>In order to avoid incompatibility, use the <code>kubectl</code> with matching version with your server.</li>
</ul>
<hr>
<p><strong>Reproduction:</strong></p>
<pre><code>$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: testing-stuff
namespace: my-namespace
spec:
template:
metadata:
name: testing-stuff
spec:
serviceAccountName: internal-kubectl
containers:
- name: tester
image: bitnami/kubectl:1.17.3
command:
- "bin/bash"
- "-c"
- "kubectl get pods -n my-namespace"
restartPolicy: Never
$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: internal-kubectl
namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: modify-pods
namespace: my-namespace
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: modify-pods-to-sa
namespace: my-namespace
subjects:
- kind: ServiceAccount
name: internal-kubectl
roleRef:
kind: Role
name: modify-pods
apiGroup: rbac.authorization.k8s.io
</code></pre>
<ul>
<li>I created two pods just to add output to the log of <code>get pods</code>.</li>
</ul>
<pre><code>$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created
</code></pre>
<ul>
<li>Then I apply the <code>job</code>, <code>ServiceAccount</code>, <code>Role</code> and <code>RoleBinding</code></li>
</ul>
<pre><code>$ kubectl get pods -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 88s
testing-stuff-ddpvf 0/1 Completed 0 13s
ubuntu 0/1 Completed 3 63s
</code></pre>
<ul>
<li>Now let's check the testing-stuff pod log to see if it logged the command output:</li>
</ul>
<pre><code>$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 76s
testing-stuff-ddpvf 1/1 Running 0 1s
ubuntu 1/1 Running 3 51s
</code></pre>
<p>As you can see, it has succeeded running the job with the custom <code>ServiceAccount</code>.</p>
<p>Let me know if you have further questions about this case.</p>
| Will R.O.F. |
<p>I have one deployment which has two containers.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ol000dep
spec:
selector:
matchLabels:
app: ol000kube
replicas : 2
template:
metadata:
labels:
app: ol000kube
spec:
containers:
- name: django
image: django:local
ports:
- containerPort: 8011
containers:
- name: nginx
image: nginx:local
ports:
- containerPort: 80
</code></pre>
<p>This makes two replicated pods, and to my understanding, each pod has two containers.</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
ol000dep-bc96bfc98-r9lrj 1/1 Running 0 21m
ol000dep-bc96bfc98-t6flw 1/1 Running 0 21m
</code></pre>
<p>Now I can log in:</p>
<pre><code>kubectl exec -it ol000dep-bc96bfc98-r9lrj /bin/bash
</code></pre>
<p>Then, I wonder,</p>
<p>Is this login in a Pod not a Container??</p>
<p>If so, how can I select the container I want to log into?</p>
| whitebear | <p>Interesting. Even though the yaml you provided is incorrect (<code>.spec.containers</code> map key must be <strong>unique</strong>), k8s seems to be fine with that.</p>
<p>In your case the pod is started with <strong>only</strong> the second container (name: nginx) in it though.</p>
<blockquote>
<p>Is this login in a Pod not a Container??</p>
</blockquote>
<p><strong>Container.</strong></p>
<p>So, with <code>kubectl exec -it ol000dep-bc96bfc98-r9lrj /bin/bash</code>, you login/exec into the nginx container.</p>
<p>After correcting the yaml, two containers would be started in the pod and you can log into the desired container via its name (e.g. <code>name: django</code>) using the <code>-c / --container</code> parameter.</p>
<pre class="lang-yaml prettyprint-override"><code>...
containers:
- name: django
image: django:local
ports:
- containerPort: 8011
- name: nginx
image: nginx:local
ports:
- containerPort: 80
</code></pre>
<p>login:</p>
<pre><code>kubectl exec -it POD_NAME -c CONTAINER_NAME -- /bin/bash
</code></pre>
<p>Note that if you do not specify the name of the container (by omitting <code>-c CONTAINER_NAME</code>), you will log into the first defined container by default (in your case <code>django</code>).</p>
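<p>If you are unsure which container names a pod actually has, you can list them with <code>jsonpath</code> (a sketch; with the uncorrected yaml it prints only <code>nginx</code>, after the fix it prints both names):</p>
<pre><code>kubectl get pod ol000dep-bc96bfc98-r9lrj -o jsonpath='{.spec.containers[*].name}'
</code></pre>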
| Kenan Güler |
<p>I have two applications running in K8. <code>APP A</code> has write access to a data store and <code>APP B</code> has read access.</p>
<p><code>APP A</code> needs to be able to change <code>APP B</code>'s running deployment.</p>
<p>How we currently do this is manually by kicking off a process in <code>APP A</code> which adds a new DB in the data store (say db <code>bob</code>). Then we do:</p>
<pre><code>kubectl edit deploy A
</code></pre>
<p>And change an environment variable to <code>bob</code>. This starts a rolling restart of all the pods of <code>APP B</code>. We would like to automate this process.</p>
<p>Is there anyway to get <code>APP A</code> to change the deployment config of <code>APP B</code> in k8? </p>
| Filipe Teixeira | <p>Firstly answering your main question:</p>
<blockquote>
<p>Is there anyway to get a service to change the deployment config of another service in k8?</p>
</blockquote>
<p>From my understanding you are calling them Service A and Service B because of their purpose in real life, but to facilitate understanding I suggested an edit to call them <code>APP A</code> and <code>APP B</code>, because:</p>
<blockquote>
<p>In Kubernetes, a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#service-resource" rel="nofollow noreferrer">Service</a> is an <strong>abstraction which defines a logical set of Pods</strong> and a policy by which to access them (sometimes this pattern is called a micro-service).</p>
</blockquote>
<p>So if in your question you meant:</p>
<p>"Is there anyway to get <code>APP A</code> to change the deployment config of <code>APP B</code> in k8?"</p>
<p>Then <strong>yes</strong>, you can give a pod admin privileges to manage other components of the cluster and use the <strong><code>kubectl set env</code></strong> command to change/add environment variables.</p>
<p>In order to achieve this, you will need:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions" rel="nofollow noreferrer">A Service Account</a> with needed permissions in the namespace.
<ul>
<li>NOTE: In my example below, since I don't know whether you are working with multiple namespaces, I'm using a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">ClusterRole</a>, granting <code>cluster-admin</code> to a specific ServiceAccount. If you use only one namespace for these apps, consider a Role instead (a namespace-scoped sketch is shown right after this list).</li>
</ul></li>
<li>A <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">ClusterRoleBinding</a> binding the permissions of the service account to a role of the Cluster. </li>
<li>The <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux" rel="nofollow noreferrer">Kubectl client</a> inside the <code>APP A</code> pod (added manually or by modifying the Docker image).</li>
</ul>
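<p>For reference, a namespace-scoped, least-privilege alternative could look roughly like this (a sketch, assuming a single <code>default</code> namespace; the role name <code>deployment-manager</code> is my own choice, and <code>kubectl set env</code> essentially needs to read and patch deployments):</p>
<pre><code>$ kubectl create role deployment-manager --verb=get,list,patch --resource=deployments -n default
$ kubectl create rolebinding deployment-manager-binding --role=deployment-manager --serviceaccount=default:k8s-role -n default
</code></pre>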
<p><strong>Steps to Reproduce:</strong></p>
<ul>
<li>Create a deployment that will use the cluster-admin privileges; I'm naming it <code>manager-deploy.yaml</code>:</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: manager-deploy
labels:
app: manager
spec:
replicas: 1
selector:
matchLabels:
app: manager
template:
metadata:
labels:
app: manager
spec:
serviceAccountName: k8s-role
containers:
- name: manager
image: gcr.io/google-samples/node-hello:1.0
</code></pre>
<ul>
<li>Create a deployment with an environment variable, mocking your <code>APP B</code>; I'm naming it <code>deploy-env.yaml</code>:</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: env-deploy
labels:
app: env-replace
spec:
replicas: 1
selector:
matchLabels:
app: env-replace
template:
metadata:
labels:
app: env-replace
spec:
serviceAccountName: k8s-role
containers:
- name: env-replace
image: gcr.io/google-samples/node-hello:1.0
env:
- name: DATASTORE_NAME
value: "john"
</code></pre>
<ul>
<li>Create a <code>ServiceAccount</code> and a <code>ClusterRoleBinding</code> with <code>cluster-admin</code> privileges; I'm naming it <code>service-account-for-pod.yaml</code> (notice it's referenced in <code>manager-deploy.yaml</code>):</li>
</ul>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: k8s-role
subjects:
- kind: ServiceAccount
name: k8s-role
namespace: default
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8s-role
</code></pre>
<ul>
<li>Apply the <code>service-account-for-pod.yaml</code>, <code>deploy-env.yaml</code> and <code>manager-deploy.yaml</code>, and list the current environment variables from the <code>deploy-env</code> pod:</li>
</ul>
<pre><code>$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
</code></pre>
<ul>
<li>Shell into the <code>manager pod</code>, download the <code>kubectl</code> binary and run <code>kubectl set env deployment/deployment_name VAR_NAME=VALUE</code>:</li>
</ul>
<pre><code>$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
</code></pre>
<ul>
<li>Verify the <code>env var value</code> on the pod (notice that the pod is recreated when the deployment is modified):</li>
</ul>
<pre><code>$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
</code></pre>
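<p>Since <code>kubectl set env</code> triggers a rolling restart, <code>APP A</code> can also wait for the rollout to finish before moving on, for example:</p>
<pre><code>$ kubectl rollout status deployment/env-deploy
deployment "env-deploy" successfully rolled out
</code></pre>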
<p>Let me know in the comments if you have any doubt on how to apply this solution to your environment.</p>
| Will R.O.F. |
<p>Document followed
<a href="https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html</a></p>
<p>I am able to set up the dashboard and access it using the link <a href="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login</a></p>
<p>The issue with this is that "EVERY USER HAS TO FOLLOW THE SAME PROCEDURE TO ACCESS THE DASHBOARD"</p>
<p>I was wondering if there is some way we can access the dashboard via a domain name, so everyone is able to access it without much pre-setup required.</p>
| codeaprendiz | <p>We have two approaches to expose the Dashboard: <code>NodePort</code> and <code>LoadBalancer</code>.</p>
<p>I'll demonstrate both cases and some of their pros and cons.</p>
<hr>
<h2><code>type: NodePort</code></h2>
<p>This way your dashboard will be available at <code>https://<MasterIP>:<Port></code>.</p>
<ul>
<li>I'll start from a Dashboard that is already deployed and running as <code>ClusterIP</code> (just like yours).</li>
</ul>
<pre><code>$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.0.11.223 <none> 80/TCP 11m
</code></pre>
<ul>
<li>We patch the service to change the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">ServiceType</a>:</li>
</ul>
<pre><code>$ kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
service/kubernetes-dashboard patched
</code></pre>
<p><strong>Note:</strong> You can also apply it in YAML format, changing the field <code>type: ClusterIP</code> to <code>type: NodePort</code>; instead I wanted to show a direct approach with <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#patch" rel="nofollow noreferrer"><code>kubectl patch</code></a> using JSON format to patch the same field.</p>
<ul>
<li>Now let's list to see the new port:</li>
</ul>
<pre><code>$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.0.11.223 <none> 443:31681/TCP 13m
</code></pre>
<p><strong>Note:</strong> Before accessing it from outside the cluster, you must configure the security group of the nodes to allow incoming traffic through the exposed port (see here for <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#creating_a_service_of_type_nodeport" rel="nofollow noreferrer">GKE</a>).
Below is my example creating the rule on Google Cloud, but the same concept applies to <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/" rel="nofollow noreferrer">EKS</a>.</p>
<pre><code>$ gcloud compute firewall-rules create test-node-port --allow tcp:31681
Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/owilliam/global/firewalls/test-node-port].
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
test-node-port default INGRESS 1000 tcp:31681 False
$ kubectl get nodes --output wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
gke-cluster-1-pool-1-4776b3eb-16t7 Ready <none> 18d v1.15.8-gke.3 10.128.0.13 35.238.162.157
</code></pre>
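<p>On EKS the equivalent would be adding an inbound rule to the worker nodes' security group, for example (a sketch; <code>sg-0123456789abcdef0</code> is a placeholder for your node security group ID, and you may want a narrower CIDR than <code>0.0.0.0/0</code>):</p>
<pre><code>$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 31681 --cidr 0.0.0.0/0
</code></pre>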
<ul>
<li>And I'll access it using <code>https://35.238.162.157:31681</code>:</li>
</ul>
<p><a href="https://i.stack.imgur.com/UDz0d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UDz0d.png" alt="enter image description here"></a></p>
<hr>
<h2><code>type: LoadBalancer</code></h2>
<p>This way your dashboard will be available at <code>https://IP</code>.</p>
<ul>
<li><p>Using <code>LoadBalancer</code> your cloud provider automates the firewall rule and port forwarding assigning an IP for it. (you may be charged extra depending on your plan).</p></li>
<li><p>Same as before: I deleted the service, recreated it as <code>ClusterIP</code> and then patched it:</p></li>
</ul>
<pre><code>$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.0.2.196 <none> 443/TCP 15s
$ kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec": {"type": "LoadBalancer"}}'
service/kubernetes-dashboard patched
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard LoadBalancer 10.0.2.196 <pending> 443:30870/TCP 58s
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard LoadBalancer 10.0.2.196 35.232.133.138 443:30870/TCP 11m
</code></pre>
<p><strong>Note:</strong> When you apply it, the EXTERNAL-IP will be in the <code><pending></code> state; after a few minutes a public IP should be assigned, as you can see above.</p>
<ul>
<li>You can access it using <code>https://35.232.133.138</code>:</li>
</ul>
<p><a href="https://i.stack.imgur.com/yWBfv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yWBfv.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Security Considerations:</strong></p>
<ul>
<li><p>When exposed, your connection to the Dashboard always goes through HTTPS. You may get a warning about the autogenerated certificate every time you enter, unless you replace it with a trusted one. You can find more <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/certificate-management.md" rel="nofollow noreferrer">here</a></p></li>
<li><p>Since the Dashboard is not meant to be widely exposed, I'd suggest keeping the access through the public IP (or the generated DNS name in the case of AWS, i.e: *****.us-west-2.elb.amazonaws.com).</p></li>
<li><p>If you really want to integrate it with your main domain name, I'd suggest putting it behind another layer of authentication on your website.</p></li>
<li><p>New users will still need the access token, but no one will have to go through the whole setup process to expose the Dashboard; you only have to pass them the IP/DNS address and the token to access it (one way to retrieve the token is shown right after this list).</p></li>
<li><p>This token has cluster-admin access, so keep it as safe as you'd keep a root password.</p></li>
</ul>
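<p>For reference, assuming you created the <code>eks-admin</code> service account from the AWS tutorial you followed, the token can be retrieved with:</p>
<pre><code>$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
</code></pre>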
<p>If you have any doubts, let me know!</p>
| Will R.O.F. |
<p>I'm looking at the Kubernetes logs we see in our log aggregation tool (Splunk, but not important), and I'm trying to visualise image pull speeds over the day. The message that appears in the logs is something like:</p>
<p>Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.10" in 1.100152244s</p>
<p>The value 1.100152244s is a string, so it obviously doesn't work for visualisation. The string itself can appear in multiple formats:</p>
<p>1.100152244s
4m4.004131504s
64.10064ms</p>
<p>Silly question but what units do we see here after the period? Are these nanoseconds?</p>
<p>Many thanks in advance</p>
| Michael | <p><strong>Disclaimer</strong>: At the time of writing I am not an expert on Kubernetes internals nor do I have any knowledge about Go. (However, I am familiar with the general functioning of K8s components and also with high/low level programming languages...) After about 10 min of research I've gained satisfactory answers:</p>
<p><strong>Question</strong></p>
<blockquote>
<p>what units do we see here after the period? Are these nanoseconds?</p>
</blockquote>
<p><strong>Short answer</strong></p>
<p>The short answer: no, they are not nanoseconds. The value is a Go <code>Duration</code> rendered as a string, so <code>1.100152244s</code> means 1.100152244 seconds (the fractional part simply has nanosecond precision). The formatting logic lies in <a href="https://github.com/golang/go/blob/master/src/time/time.go#L644" rel="nofollow noreferrer">time.go</a>.</p>
<p><strong>Long answer</strong></p>
<p>The Kubernetes event regarding "<em>Successfully pulled image <image-name> in <time></em>" is generated by the <code>kubelet</code> component. <em>Kubelet</em> is basically responsible for managing (the life cycle of) pods and their containers. It generates <a href="https://github.com/kubernetes/kubernetes/blob/release-1.24/pkg/kubelet/events/event.go#L40" rel="nofollow noreferrer">events</a> for various stages of the container's execution, including the <code>image pulling</code>. Thus the event you mentioned is generated by the <em>kubelet</em> during the image pulling stage of container execution. (At least I knew that part already).</p>
<p>You seem to have a kubelet <= release-1.24 in use. The corresponding event is generated <a href="https://github.com/kubernetes/kubernetes/blob/release-1.24/pkg/kubelet/images/image_manager.go#L156" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-golang prettyprint-override"><code>..., fmt.Sprintf("Successfully pulled image %q in %v", container.Image, time.Since(startTime)), ...
</code></pre>
<p>The duration string comes from the <code>time.Since(startTime)</code> (<code>Duration</code>) statement. The <code>fmt.Sprintf</code> calls the <a href="https://github.com/golang/go/blob/master/src/time/time.go#L644" rel="nofollow noreferrer"><code>func (d Duration) String() string</code></a> method on that Duration value, which appears to produce a string output according to the following criteria:</p>
<ul>
<li><p>If the duration is less than one second, it is displayed with smaller units such as "12.345µs" or "678ns".</p>
</li>
<li><p>If the duration is between one second and one minute, it is displayed in seconds, such as "5s" or "42.123456s".</p>
</li>
<li><p>If the duration is between one minute and one hour, it is displayed in minutes and seconds, such as "3m45s" or "59m59s".</p>
</li>
<li><p>If the duration is more than one hour, it is displayed in hours, minutes, and seconds, such as "1h42m" or "12h34m56s".</p>
</li>
</ul>
<p>Maybe you can develop suitable patterns in Splunk to transform this format into the desired time unit for visualization (using if-else conditions may also help, e.g. value includes "h", "m" and "s"? then use the regex <code>(\d+)h(\d+)m(\d+\.?\d*)s</code> to extract values from e.g. "2h10m10.100152244s").</p>
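<p>Outside of Splunk, the same "if the value contains h, m and s" idea looks like this in a shell (just a sketch for the composite case; the other formats would need their own branches):</p>
<pre class="lang-bash prettyprint-override"><code>$ d="2h10m10.100152244s"
$ [[ $d =~ ^([0-9]+)h([0-9]+)m([0-9.]+)s$ ]] && \
    awk -v h="${BASH_REMATCH[1]}" -v m="${BASH_REMATCH[2]}" -v s="${BASH_REMATCH[3]}" \
        'BEGIN { printf "%.9f\n", h*3600 + m*60 + s }'
7810.100152244
</code></pre>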
<p><strong>Side note</strong></p>
<p>Newer versions of kubelet (> release-1.24) seem to use a slightly different <a href="https://github.com/kubernetes/kubernetes/blob/release-1.25/pkg/kubelet/images/image_manager.go" rel="nofollow noreferrer">log</a></p>
<pre class="lang-golang prettyprint-override"><code>..., fmt.Sprintf("Successfully pulled image %q in %v (%v including waiting)", container.Image, imagePullResult.pullDuration, time.Since(startTime)), ...
</code></pre>
<p>e.g.</p>
<blockquote>
<p>Successfully pulled image "xyz" in 8.831719579s (8.831722421s including waiting)</p>
</blockquote>
| Kenan Güler |
<p>I have Sentry running in my cluster and i want to expose it on subpath using nginx ingress but it seems that it only works on root path, i tried several ways but it didn't work. Is there any configuration i can perform in order to make it work on a subpath because i've seen some examples using these two variables in the sentry.conf.py file:</p>
<p><strong>SENTRY_URL_PREFIX = '/sentry'<br>
FORCE_SCRIPT_NAME = '/sentry'</strong></p>
<p>But i don't know if it works</p>
<p>Here is the ingress resource for sentry :</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: "sentry-ingress"
namespace: "tools"
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
labels:
app: sentry-ingress
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: "sentry"
servicePort: 9000
</code></pre>
| touati ahmed | <p>Your ingress is not configured properly.</p>
<p>In <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">nginx ingress docs</a> you can read:</p>
<blockquote>
<p><strong>IMPORTANT NOTES</strong>:
If the <code>use-regex</code> OR <code>rewrite-target</code> annotation is used on any Ingress for a given host, then the case insensitive regular expression <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#location" rel="nofollow noreferrer">location modifier</a> will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.</p>
</blockquote>
<p>This means that when you use the rewrite-target annotation, the <code>path</code> field value is treated as a regexp and not as a prefix. This is why <code>path: /</code> is matched literally and only with <code>/</code>.</p>
<p>So if you want to use rewrite-target you should do it as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: sentry-ingress
namespace: tools
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
labels:
app: sentry-ingress
spec:
rules:
- http:
paths:
- path: /sentry/(.*)
backend:
serviceName: sentry
servicePort: 9000
</code></pre>
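<p>Once applied, a quick way to verify the rewrite (replace <code><INGRESS_ADDRESS></code> with your ingress controller's external IP or hostname) is:</p>
<pre><code>curl -i http://<INGRESS_ADDRESS>/sentry/
</code></pre>
<p>A request to <code>/sentry/something</code> will reach the <code>sentry</code> service as <code>/something</code>, because of the capture group in the path and the <code>/$1</code> rewrite target.</p>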
| acid_fuji |
<p>We use <code>kubectl set image</code> to rollout a new version <code>2.0.0</code> of an existing application. We then use <code>kubectl rollout status</code> to wait for the new pod to become ready so that we can run some basic tests.</p>
<p>The problem is, <code>kubectl rollout status</code> returns (implying the new v2 pod is ready) but when we use <code>kubectl exec</code> we ALWAYS land in the old v1 pod.</p>
<pre class="lang-bash prettyprint-override"><code>$ date
Mon 13 Feb 2023 02:33:50 PM CET
$ k set image deploy/myapp myapp=myapp:2.0.0 && k rollout status deploy/myapp
deployment.apps/myapp image updated
Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination...
deployment "myapp" successfully rolled out
</code></pre>
<p>Here, we assume the new version is running. Let's check:</p>
<pre class="lang-bash prettyprint-override"><code>$ k exec deploy/myapp -- show_version
1.0.0
</code></pre>
<p>Nope, it's still the old version.<br />
Check the deployment:</p>
<pre class="lang-bash prettyprint-override"><code>$ k get deploy/myapp
NAME READY UP-TO-DATE AVAILABLE AGE
myapp 1/1 1 1 273d
</code></pre>
<p>Looks ready (K9S shows 1 pod "Terminating" and 1 pod ready).<br />
Check again:</p>
<pre class="lang-bash prettyprint-override"><code>$date
Mon 13 Feb 2023 02:34:00 PM CET
$ k exec deploy/myapp -- show_version
1.0.0
</code></pre>
<p>Nope, check the pods:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pod | grep myapp-
myapp-79454d746f-zw5kg 1/1 Running 0 31s
myapp-6c484f86d4-2zsk5 1/1 Terminating 0 3m5s
</code></pre>
<p>So our pod is running, we just can't exec into it - it always "picks" the terminating pod:</p>
<pre class="lang-bash prettyprint-override"><code>$date
Mon 13 Feb 2023 02:34:10 PM CET
$ k exec deploy/myapp -- show_version
1.0.0
</code></pre>
<p>Wait 20-30s:</p>
<pre class="lang-bash prettyprint-override"><code>$ date
Mon 13 Feb 2023 02:34:25 PM CET
$ k exec deploy/myapp -- show_version
2.0.0
</code></pre>
<p>Finally we have <code>exec</code> on the correct pod.</p>
<p>Why/how can we wait for the old pod to terminate?
OR
How can we ensure we exec into the correct pod for testing?</p>
| Marc | <h2>Update</h2>
<blockquote>
<p>Even better would be to get the new_pod id and exec directly into that.</p>
</blockquote>
<p>Also possible, yes. Try this:</p>
<pre><code>k rollout status deploy/myapp >/dev/null && \
k get po -l app=myapp | grep Running | awk '{print $1}' | xargs -I{} kubectl exec {} -- show_version
</code></pre>
<blockquote>
<p>I would love to know what controls that 30s time.</p>
</blockquote>
<p>This can be configured using the <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">terminationGracePeriodSeconds</a> field in the pod's spec. The value defaults to, you guessed it right, 30s. If you're not concerned about data loss (due to the immediate shutdown), it can be set to 0. After that you can directly exec into the new pod:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
terminationGracePeriodSeconds: 0
</code></pre>
<pre><code>k rollout status deploy/myapp >/dev/null && k exec deploy/myapp -- show_version
</code></pre>
<hr />
<p>While being "Terminated" the old pod is still in <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">phase <em>Running</em></a>, and the <code>kubectl exec deploy/myapp</code> seems to use the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer">first <em>Running</em> pod of the deployment </a>.</p>
<p>I would suggest:</p>
<ol>
<li>Retrieve and store the name of the currently running pod in a temp variable prior to deployment (assuming the pod has the label <code>app=myapp</code>)</li>
</ol>
<pre><code>$ old_pod=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')
</code></pre>
<ol start="2">
<li>Deploy</li>
</ol>
<pre><code>$ k apply -f Deployment.yaml
</code></pre>
<ol start="3">
<li>Wait <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-" rel="nofollow noreferrer">until the rollout is done</a></li>
</ol>
<pre><code>$ k rollout status deploy/myapp
</code></pre>
<ol start="4">
<li>Wait <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer">until the <code>old_pod</code> is deleted</a></li>
</ol>
<pre><code>$ k wait --for=delete pod/$old_pod --timeout -1s
</code></pre>
<ol start="5">
<li>Check the new pod</li>
</ol>
<pre><code>$ k exec deploy/myapp -- show_version
</code></pre>
| Kenan Güler |
<p>I have a running k8s cluster created using kops. The autoscaling policy terminated the master machine and recreated a new one; since then, every time I try to run a kubectl command it returns "The connection to the server refused, did you specify the right host or port". I tried to ssh into the master machine but did not find any of the k8s services, so I think the autoscale policy did not configure the master node correctly. What should I do in this situation?
Update: I also found this log in the syslog file:</p>
<pre><code>E: Package 'ebtables' has no installation candidate
Jun 25 12:03:33 ip-172-20-35-193 nodeup[7160]: I0625 12:03:33.389286 7160 executor.go:145] No progress made, sleeping before retrying 2 failed task(s)
</code></pre>
| Ahmed Ali | <p>The issue was that kops was unable to install ebtables and conntrack, so I installed them manually:</p>
<pre><code>sudo apt-get -o Acquire::Check-Valid-Until=false update
sudo apt-get install -y ebtables --allow-unauthenticated
sudo apt-get install --yes conntrack
</code></pre>
<p>and everything is running fine now.</p>
| Ahmed Ali |
<p>Hey, developers. There is a static URL of a target that I want to monitor via the Prometheus operator. For some reason, I don't know the label name of the service or namespace. I found that the Probe CRD may help me to get metrics from a static target, but there are no docs or examples showing how to write a Probe YAML. Can anyone help me with a Probe example? The example IP could be 0.0.0.0:8080.</p>
<p>PS: I tried to use an Endpoint to point to the static target; unfortunately it can only point to an IP address, not a domain name.</p>
| Samuel Three | <p>Solved this: just set the prober to the same address as the target. TBH, I cannot understand why the prober should be set the same as the target, but it works.</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
name: probe-sample
namespace: monitoring
spec:
interval: 5s
jobName: probe-sample
scrapeTimeout: 2s
prober:
url: example.com:8080
scheme: http
targets:
staticConfig:
static:
- example.com:8080
</code></pre>
| Samuel Three |
<p>I set up my HorizontalPodAutoscaler as described here <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling</a> to scale according to the number of unacked messages in my Pub/Sub subscription. My desire is that the pods scale if there is more than 1 unacknowledged message. When I run <code>k describe hpa</code> I get:</p>
<pre><code>Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"foobar-gke-prod","namespace":"defau...
CreationTimestamp: Mon, 25 May 2020 18:01:33 -0700
Reference: Deployment/foobar-gke-prod
Metrics: ( current / target )
"pubsub.googleapis.com|subscription|num_undelivered_messages" (target average value): 200m / 1
Min replicas: 3
Max replicas: 9
Deployment pods: 5 current / 5 desired
</code></pre>
<p>The metrics data returned is confusing me. When I ran that command the number of unacknowledged messages was around 4 according to the console metrics. So I don't understand: what does <code>200m</code> mean? Why wouldn't it say 4?</p>
<p>Here is my configuration for the HPA</p>
<pre><code># Template from https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: foobar-gke-prod
spec:
minReplicas: 3
maxReplicas: 9
metrics:
- external:
metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
metricSelector:
matchLabels:
resource.labels.subscription_id: prod_foobar_subscription
targetAverageValue: "1"
type: External
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: foobar-gke-prod
</code></pre>
| Daniel Kobe | <p><strong>Reference Example:</strong></p>
<pre><code>Name: pubsub
...
Metrics: ( current / target )
"pubsub.googleapis.com|subscription|num_undelivered_messages" (target average value): 2250m / 2
Min replicas: 1
Max replicas: 4
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 4
ScalingLimited True TooManyReplicas the desired replica count is more than the maximum replica count
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 7s horizontal-pod-autoscaler New size: 4; reason: external metric pubsub.googleapis.com|subscription|num_undelivered_messages(&LabelSelector{MatchLabels:map[string]string{resource.labels.subscription_id: echo-read,},MatchExpressions:[],}) above target
</code></pre>
<ul>
<li>The Metrics section gives the last value of the metric observed by the HPA. <strong>Fractional values are represented as milli-units</strong>. For example, in the output above there are 4 replicas of the application and the current number of unacknowledged messages in the Pub/Sub subscription is 9. So <strong>the average number of messages per replica is 9/4 = 2.25, or 2250m</strong>.</li>
</ul>
<hr>
<blockquote>
<p>The metrics data returned is confusing me. When I ran that command the number of unacked knowledge messages was around 4 according to the console metrics. So I don't understand what does 200m mean? Why wouldn't it say 4?</p>
</blockquote>
<ul>
<li>In your case, <code>200m/1</code> means that the average number of undelivered messages <strong>per running replica</strong> was 0.2 (20% of the target) at the time the HPA took the measurement.</li>
</ul>
<p><strong>Considerations:</strong></p>
<ul>
<li>Make sure you take the readings from the metrics console and the HPA at roughly the same time, to avoid discrepancies caused by scaling events happening between reads.</li>
<li><p>A reading of 4 messages across 5 pods would result in a load of 800m (4/5 = 0.8), but at that point the HPA could already be running another scale-up event.</p></li>
<li><p>I encourage you to take a reading of the metrics console and the HPA at the same time and verify again.</p></li>
</ul>
<p>If you still think the results don't match, post the updated <code>hpa</code> describe output here and we can take another look.</p>
<hr>
<p><strong>EDIT:</strong></p>
<blockquote>
<p>Is there anyway to make the metric not be an average across pods? I.e. if there are 5 unacked messages the metrics data would read 5000m?</p>
</blockquote>
<p>From Kubernetes API Reference <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#externalmetricsource-v2beta1-autoscaling" rel="nofollow noreferrer">ExternalMetricSource v2beta1 Autoscaling</a>:</p>
<ul>
<li><p><strong><code>targetAverageValue</code></strong> is the target per-pod value of global metric (as a quantity).</p></li>
<li><p><strong><code>targetValue</code></strong> is the target value of the metric (as a quantity). </p></li>
</ul>
<p>Note that <code>targetAverageValue</code> and <code>targetValue</code> are mutually exclusive.</p>
<p>So if you want the total instead of the average, just swap them on your HPA.</p>
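<p>A sketch of that change, using the metrics block from your HPA (the target of <code>"5"</code> here is just an example value):</p>
<pre><code>  metrics:
  - external:
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
      metricSelector:
        matchLabels:
          resource.labels.subscription_id: prod_foobar_subscription
      targetValue: "5"
    type: External
</code></pre>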
| Will R.O.F. |
<p>I noticed that in a deployment file there are two fields for containers, <code>initContainers</code> and <code>containers</code>, which looks confusing to me; I searched the internet but couldn't figure it out. Could anyone please tell me the difference between <code>initContainers</code> and <code>containers</code> and how we use them together?</p>
<p>For example</p>
<pre><code> containers:
- name: php
image: php:7-fpm
volumeMounts:
- name: dir
mountPath: /dir
initContainers:
- name: install
image: busybox
volumeMounts:
- name: dir
mountPath: /dir
command:
- wget
- "-O"
- "/dir/index.php"
- https://raw.githubusercontent.com/videofalls/demo/master/index.php
</code></pre>
<p>Any help is really appreciated, thanks in advance!!</p>
| Bablu Ahmed | <p>About <a href="https://kubernetes.io/docs/concepts/containers/" rel="noreferrer">Containers:</a></p>
<blockquote>
<p>Containers are a technology for packaging the (compiled) code for an application along with the dependencies it needs at run time. Each container that you run is repeatable; the standardization from having dependencies included means that you get the same behavior wherever you run it.</p>
</blockquote>
<p>About <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">InitContainer</a>:</p>
<blockquote>
<p>Init containers are exactly like regular containers, except:</p>
<ul>
<li>Init containers always run to completion before the container execution.</li>
<li>Each initContainer <strong>must complete successfully before the next one starts</strong>.</li>
<li>If a Pod’s init container fails, Kubernetes <strong>repeatedly restarts the Pod</strong> until the init container succeeds. However, if the Pod has a <code>restartPolicy</code> of Never, Kubernetes does not restart the Pod.</li>
</ul>
</blockquote>
<p>Summarizing:
<code>Containers</code> host your dockerized applications, while <code>initContainers</code> run tasks that must complete <strong>before</strong> the main Container starts.</p>
<hr />
<p>One simple example is the code you provided:</p>
<ul>
<li>You created a container with a PHP server, but you want the content of <code>index.php</code> to always be up to date, without having to change the pod manifest itself.</li>
<li>So you added a <code>initContainer</code> to fetch the updated <code>index.php</code> and add to the container.</li>
<li>I've fixed your yaml with the <code>volume</code> parameters to add the <code>emptyDir</code> that will hold the downloaded file and changing the <code>mountPath</code> to the default html folder <code>/var/www/html</code>:</li>
</ul>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: php-updated
spec:
containers:
- name: php
image: php:7-fpm
volumeMounts:
- name: dir
mountPath: /var/www/html/
initContainers:
- name: install
image: busybox
volumeMounts:
- name: dir
mountPath: /var/www/html/
command:
- wget
- "-O"
- "/var/www/html/index.php"
- https://raw.githubusercontent.com/videofalls/demo/master/index.php
volumes:
- name: dir
emptyDir: {}
</code></pre>
<p><strong>POC:</strong></p>
<pre><code>$ kubectl apply -f php.yaml
pod/php-updated created
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
php-updated 1/1 Running 0 3s
$ kubectl exec -it php-updated -- /bin/bash
root@php-updated:/var/www/html# cat index.php
<?php
echo 'Demo Test';
</code></pre>
<ul>
<li>As you can see, the <code>initContainer</code> ran before the main container, downloaded the file and placed it on the mounted volume that is shared with the PHP server <code>Container</code>.</li>
</ul>
<p><strong>NOTE:</strong> The above webserver is not fully functional because the full <code>php-fpm</code> deployment is a little more complex, and it's not the core of this question, so I'll leave this tutorial for it: <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/php-fpm-nginx-kubernetes.html" rel="noreferrer">PHP-FPM, Nginx, Kubernetes, and Docker</a></p>
<p>One could argue that <code>index.php</code> is not a critical file for Pod initialization and could be replaced during pod execution using <code>Command</code>, so I'll leave here an answer I gave for persistently changing <code>resolv.conf</code> before pod initialization, even after a pod restart: <a href="https://stackoverflow.com/questions/60508061/dnsconfig-is-skipped-in-gke">DNS Config is Skipped in GKE</a>.</p>
<hr />
<p>Another great usage of <code>initContainer</code> is to make a pod wait for another resource in the cluster to be ready before initializing.</p>
<ul>
<li>Here is a pod with an <code>initContainer</code> called <code>init-mydb</code> that waits and watches for a service called <code>mydb</code> to exist (be resolvable via DNS) before allowing the container <code>myapp-container</code> to start. Imagine <code>myapp-container</code> is an app that requires the database connection before execution; otherwise it would fail repeatedly.</li>
</ul>
<p><strong>Reproduction:</strong></p>
<ul>
<li>here is the manifest <code>my-app.yaml</code>:</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: my-app
name: my-app
spec:
replicas: 2
selector:
matchLabels:
run: my-app
template:
metadata:
labels:
run: my-app
spec:
restartPolicy: Always
containers:
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: init-mydb
image: busybox:1.28
command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
</code></pre>
<ul>
<li>Now let's apply it and see the status of the deployment:</li>
</ul>
<pre><code>$ kubectl apply -f my-app.yaml
deployment.apps/my-app created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 4s
my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 4s
</code></pre>
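<p>While the pods sit in this state, you can watch what the init container is doing with <code>kubectl logs</code> and the <code>-c</code> flag (pod name from the output above, container name from the manifest):</p>
<pre><code>$ kubectl logs my-app-6b4fb4958f-44ds7 -c init-mydb
</code></pre>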
<ul>
<li>The pods are held in <code>Init:0/1</code> status, waiting for the init container to complete.</li>
<li>Now let's create the service that the <code>initContainer</code> is waiting for before completing its task:</li>
</ul>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mydb
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9377
</code></pre>
<ul>
<li>We will apply it and monitor the changes in the pods:</li>
</ul>
<pre><code>$ kubectl apply -f mydb-svc.yaml
service/mydb created
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 91s
my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 91s
my-app-6b4fb4958f-s7wmr 0/1 PodInitializing 0 93s
my-app-6b4fb4958f-44ds7 0/1 PodInitializing 0 94s
my-app-6b4fb4958f-s7wmr 1/1 Running 0 94s
my-app-6b4fb4958f-44ds7 1/1 Running 0 95s
^C
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-app-6b4fb4958f-44ds7 1/1 Running 0 99s
pod/my-app-6b4fb4958f-s7wmr 1/1 Running 0 99s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mydb ClusterIP 10.100.106.67 <none> 80/TCP 14s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-app 2/2 2 2 99s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-app-6b4fb4958f 2 2 2 99s
</code></pre>
<hr />
<ul>
<li><p>Finally I'll leave you a few more examples on how to use <code>InitContainers</code>:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#examples" rel="noreferrer">Kubernetes.io InitContainer Examples</a></li>
<li><a href="https://www.exoscale.com/syslog/configuration-management-kubernetes-spring-boot/" rel="noreferrer">A Spring-boot Use Case with Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="noreferrer">Kubernetes.io Configure a Pod Initialization</a></li>
<li><a href="https://www.magalix.com/blog/kubernetes-patterns-the-init-container-pattern" rel="noreferrer">The InitContainer Pattern</a></li>
</ul>
</li>
</ul>
<p>If you have any questions let me know in the comments!</p>
| Will R.O.F. |
<p>So far our on-prem Kubernetes cluster has been working fine. Lately we are seeing that jobs are failing with the below error. I checked and there are no space issues on the Kube master or the worker nodes. There is plenty of space available under "/var/lib" as well as under persistent volume claims.</p>
<pre><code>Version:
Client Version: v1.17.2
Server Version: v1.17.2
Host OS:
Centos 7.7
CNI:
Weave
</code></pre>
<p>Error:</p>
<blockquote>
<p>The node was low on resource: ephemeral-storage.Container main was
using 5056Ki, which exceeds its request of 0. Container wait was using
12Ki, which exceeds its request of 0.</p>
</blockquote>
<p>any pointers will be helpful.</p>
<p>Thanks,
CS </p>
| user1739504 | <p>The main reason why this could be happening is that pod logs or <code>emptyDir</code> usage are filling up your ephemeral storage.</p>
<blockquote>
<p>Docker takes a conservative approach to cleaning up unused objects (often referred to as “garbage collection”), such as images, containers, volumes, and networks: these objects are generally not removed unless you explicitly ask Docker to do so. This can cause Docker to use extra disk space.</p>
</blockquote>
<p>You can use the Docker function called <code>prune</code>. This will clean unused objects from the system. If you wish to clean up multiple object types at once you can use <code>docker system prune</code>. Check here for more about <a href="https://docs.docker.com/config/pruning/" rel="nofollow noreferrer">pruning</a>.</p>
<p>There is also another tool called <code>Garbage collector</code>. It's a Docker registry tool that removes unused/abandoned/orphaned blobs. Check here for <a href="https://docs.docker.com/registry/garbage-collection/" rel="nofollow noreferrer">more</a> about it.</p>
<blockquote>
<p>In the context of the Docker registry, garbage collection is the process of removing blobs from the filesystem when they are no longer referenced by a manifest. Blobs can include both layers and manifests.</p>
</blockquote>
<p>If this doesn't help, you can try to <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">configure the logging driver</a> and set its limits:</p>
<pre><code>{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3",
"labels": "production_status",
"env": "os,customer"
}
}
</code></pre>
<p>There is also another option if <code>emptyDir</code> has been used. With <code>emptyDir</code> you allow the
container to write any amount of data to its node's filesystem. You can set <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#requests-and-limits-setting-for-local-ephemeral-storage" rel="nofollow noreferrer">requests and limits</a> for local ephemeral storage like this:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
containers:
- name: test
image: test-image
resources:
requests:
ephemeral-storage: "1Gi"
limits:
ephemeral-storage: "1Gi"
- name: test
image: test-image2
resources:
requests:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "2Gi"
</code></pre>
<p>You can also list the running containers using <code>docker ps</code>, then inspect a container yourself and locate its filesystem.</p>
<p>The container's log file should be found at this location:</p>
<pre><code>/var/lib/docker/containers/<container-id>/<container-id>-json.log
</code></pre>
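<p>To find out which containers are taking the most space with their logs, you can run something like this on the node (a sketch):</p>
<pre><code>du -sh /var/lib/docker/containers/*/ | sort -h | tail -5
</code></pre>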
<p>Let me know if that helps. </p>
| acid_fuji |
<p>I am a bit confused about Calico IPs:</p>
<p>If I add Calico to a Kubernetes cluster using</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
</code></pre>
<p>The CALICO_IPV4POOL_CIDR is 192.168.0.0/16
So IP Range is 192.168.0.0 to 192.168.255.255</p>
<p>Now I have initiated the cluster using : </p>
<pre><code>kubeadm init --pod-network-cidr=20.96.0.0/12 --apiserver-advertise-address=192.168.56.30
</code></pre>
<p>So now pods will have IP addresses (from the pod network CIDR) between 20.96.0.0 and 20.111.255.255.</p>
<p>What are these two different IPs. My Pods are getting IP addresses 20.96.205.192 and so on.</p>
| Ankit Bansal | <ul>
<li>The <code>CALICO_IPV4POOL_CIDR</code> is <code>#commented</code> by default, look at these lines in <a href="https://docs.projectcalico.org/v3.14/manifests/calico.yaml" rel="noreferrer">calico.yaml</a>:</li>
</ul>
<pre><code># The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
</code></pre>
<p><strong>For all practical purposes</strong>, unless manually modified before deployment, <strong>those lines are ignored during deployment</strong>.</p>
<ul>
<li>Another important line in the yaml itself is:</li>
</ul>
<blockquote>
<p># Pod CIDR auto-detection on kubeadm needs access to config maps.</p>
</blockquote>
<p>This confirms that the CIDR is obtained from the cluster, not from <code>calico.yaml</code>.</p>
<hr />
<blockquote>
<p>What are these two different IPs? My Pods are getting IP addresses 20.96.205.192 and so on.</p>
</blockquote>
<ul>
<li><p>Kubeadm supports many <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="noreferrer">Pod network add-ons</a>; Calico is one of them. Calico, on the other hand, is supported by many kinds of deployment; kubeadm is just one of them.</p>
</li>
<li><p>Kubeadm <code>--pod-network-cidr</code> in your deployment is the correct way to define the pod network CIDR, this is why the range <code>20.96.0.0/12</code> is effectively used.</p>
</li>
<li><p><code>CALICO_IPV4POOL_CIDR</code> is required for other kinds of deployment that do not specify the CIDR pool reservation for pod networks (you can verify which CIDR was actually picked up with the commands right after this list).</p>
</li>
</ul>
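<p>A quick way to double-check which CIDR is actually in use (a sketch; <code>default-ipv4-ippool</code> is the name Calico normally gives to the auto-created pool):</p>
<pre><code># CIDR passed to kubeadm (taken from the kube-controller-manager flags)
kubectl cluster-info dump | grep -m 1 cluster-cidr
# CIDR of the IP pool Calico actually created
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o jsonpath='{.spec.cidr}'
</code></pre>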
<hr />
<p><strong>Note:</strong></p>
<ul>
<li>The range <code>20.96.0.0/12</code> is not a <a href="https://en.wikipedia.org/wiki/Private_network" rel="noreferrer">Private Network</a> range, and it can cause problems if a client with a Public IP from that range tries to access your service.</li>
<li>The <a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="noreferrer">classless</a> reserved IP ranges for Private Networks are:
<ul>
<li>10.0.0.0/8 (16.777.216 addresses)</li>
<li>172.16.0.0/12 (1.048.576 addresses)</li>
<li>192.168.0.0/16 (65.536 addresses)</li>
</ul>
</li>
<li>You can use any subnet size inside these ranges for your POD CIDR Network, just make sure it doesn't overlaps with any subnet in your network.</li>
</ul>
<p><strong>Additional References:</strong></p>
<ul>
<li><a href="https://docs.projectcalico.org/getting-started/kubernetes/quickstart#create-a-single-host-kubernetes-cluster" rel="noreferrer">Calico - Create Single Host Kubernetes Cluster with Kubeadm</a></li>
<li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tabs-pod-install-0" rel="noreferrer">Kubeadm Calico Installation</a></li>
<li><a href="https://www.rfc-editor.org/rfc/rfc1918#section-3" rel="noreferrer">IETF RFC1918 - Private Address Space</a></li>
</ul>
| Will R.O.F. |
<p>I have a couple of questions</p>
<ol>
<li><p>When we make changes to ingress resource, are there any cases where we have to delete the resource and re-create it again or is <code>kubectl apply -f <file_name></code> sufficient? </p></li>
<li><p>When I add the host attribute without <code>www i.e. (my-domain.in)</code>, I am not able to access my application but with <code>www i.e. (www.my-domain.in)</code> it works, what's the difference?</p></li>
</ol>
<p>Below is my ingress resource</p>
<p>When I have the host set to <code>my-domain.in</code>, I am unable to access my application, but when I set the host to <code>www.my-domain.in</code> I can access the application.</p>
<p>My domain is with a different provider and I have added a CNAME record (www) pointing to the DNS name of my ALB.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: eks-learning-ingress
namespace: production
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/certificate-arn: arn:aws:a982529496:cerd878ef678df
labels:
app: eks-learning-ingress
spec:
rules:
- host: my-domain.in **does not work**
http:
paths:
- path: /*
backend:
serviceName: eks-learning-service
servicePort: 80
</code></pre>
| opensource-developer | <ul>
<li>First answering your question 1:
<blockquote>
<p>When we make changes to ingress resource, are there any cases where we have to delete the resource and re-create it again or is kubectl apply -f sufficient?</p>
</blockquote></li>
</ul>
<p><strong>In theory, yes, <code>kubectl apply</code> is the correct way</strong>; it will show either <code>ingress unchanged</code> or <code>ingress configured</code>.</p>
<p>Another valid, <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#updating-an-ingress" rel="nofollow noreferrer">documented</a> option is <code>kubectl edit ingress INGRESS_NAME</code>, which saves and applies at the end of the edit if the output is valid.</p>
<p><em>I said theory because bugs happen, so we can't fully rule them out, but a bug is the worst-case scenario.</em></p>
<ul>
<li>Now the blurrier question 2:
<blockquote>
<p>When I add the host attribute without www i.e. (my-domain.in), I am not able to access my application but with www i.e. (www.my-domain.in) it works, what's the difference?</p>
</blockquote></li>
</ul>
<p>To troubleshoot it we need to isolate the processes; like in a chain, we have to find which link is broken. One by one:</p>
<p><strong><em>Endpoint > Domain Provider> Cloud Provider > Ingress > Service > Pod.</em></strong></p>
<ol>
<li>DNS Resolution (Domain Provider)</li>
<li>DNS Resolution (Cloud Provider)</li>
<li>Kubernetes Ingress (Ingress > Service > Pod)</li>
</ol>
<hr>
<h2>DNS Resolution</h2>
<hr>
<p><strong>Domain Provider:</strong></p>
<p>To the Internet, who answers for <code>my-domain.in</code> is your Domain Provider.</p>
<ul>
<li>What are the rules for <code>my-domain.in</code> and its subdomains (like <code>www.my-domain.in</code> or <code>admin.my-domain.in</code>)?
You said <em>"domain is on a different provider and I have added CNAME (www) pointing to DNS name of my ALB."</em>
<ul>
<li>Are <code>my-domain.in</code> and <code>www.my-domain.in</code> both being resolved to the ALB address?</li>
<li>How does it handle URL subdomains? How is the request passed on to your Cloud?</li>
</ul></li>
</ul>
<p><strong>Cloud Provider:</strong></p>
<p>Ok, the cloud provider is receiving the request correctly and distinctly.</p>
<ul>
<li>Does your ALB have generic or specific rules for subdomains or path requests?</li>
<li>Test with another host, a different VM with a web server.</li>
<li><a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html" rel="nofollow noreferrer">Check ALB Troubleshooting Page</a></li>
</ul>
<h2>Kubernetes Ingress</h2>
<p>Usually we would start the troubleshooting from this part, but since you mentioned it works with <code>www.my-domain.in</code>, we can presume that your service, deployment and even ingress structure are working correctly.</p>
<p>You can check the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress" rel="nofollow noreferrer">Types of Ingress Docs</a> to get a few examples of how it should work.</p>
<p><strong>Bottom Line:</strong> I believe your DNS has a record for <code>www.my-domain.in</code>, but the root (apex) domain has no record pointing to your cloud provider; a CNAME for <code>www</code> does not cover the apex domain, which usually needs an A or ALIAS record. That's why it only works when you enable the ingress for <code>www</code>.</p>
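<p>A quick way to confirm this from your machine:</p>
<pre><code>dig +short www.my-domain.in   # should return your ALB's DNS name / IPs
dig +short my-domain.in       # if this returns nothing (or not the ALB), the apex record is missing
</code></pre>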
| Will R.O.F. |
<p>I'm trying to deploy my simple web application using Kubernetes.
I completed making the Kubernetes cluster and successfully exposed my react app using ingress.
But it seems that the URL of the backend service, received from the manifest file's "env" field, does not work.</p>
<p>Following is the manifest file of react application.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: recofashion-client
labels:
app: recofashion-client
spec:
type: NodePort
selector:
app: recofashion-client
ports:
- name: http
port: 80
targetPort: 3000
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: recofashion-client
labels:
name: recofashion-client
spec:
replicas: 2
selector:
matchLabels:
app: recofashion-client
template:
metadata:
labels:
app: recofashion-client
spec:
containers:
- name: web
image: recofashion/client-runtime
imagePullPolicy: Always
ports:
- containerPort: 3000
env:
- name: NODE_ENV
value: production
- name: REACT_APP_API_V1_ENDPOINT
value: http://recofashion-api/api/v1
</code></pre>
<p>And I think there is no problem in the k8s DNS itself. I tried to send a request using curl from my "recofashion-client" pod, and it seems to work as I intended.</p>
<pre><code>curl http://recofashion-api/api/v1/user/me
{"timestamp":"2020-02-03T06:55:20.748+0000","status":403,"error":"Forbidden","message":"Access Denied","path":"/api/v1/user/me"}
</code></pre>
<p>But when I try to send the request from the browser, it fails like this:</p>
<p><img src="https://i.stack.imgur.com/PEIJ2.png" alt="image"></p>
<p>And I'm receiving the external environment variables in react app from k8s like this:</p>
<pre><code>const response = await getWithAuth(`${process.env.REACT_APP_API_V1_ENDPOINT}/user/me`)
</code></pre>
<p>So what's the problem??? I searched the internet a lot, but I couldn't find any appropriate answer...</p>
<p>++ The manifest file of the ingress:</p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: recofashion-client
servicePort: 80
</code></pre>
| geunyoung | <p>Based on all the information provided, I managed to reproduce your scenario using GKE.</p>
<p><strong>TL;DR:</strong></p>
<ul>
<li>Yes, the correct service-type for your <strong>api-service</strong> is <code>ClusterIP</code></li>
<li>The correct service-type for your <strong>client app</strong> (which needs outside access) is <code>LoadBalancer</code>, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#access_your_service_2" rel="nofollow noreferrer">doc here</a>.</li>
<li><p>Your <strong>ENV</strong> <code>REACT_APP_API_V1_ENDPOINT</code> must point to <strong>Api Service</strong> address, not to deploy or pod address. (i.e: <code>value: http://recofashion-api-svc</code>)</p></li>
<li><p>You cannot use cluster DNS externally.</p></li>
</ul>
<h2><strong>Reproduction</strong></h2>
<p>Since I don't have your react app, I'm using an echo-app to simulate the two parts of the communication. This way I'm manually reproducing what your application would do by itself.</p>
<ul>
<li>First between <code>internet</code> and <code>recofashion-client</code></li>
<li>Second between <code>recofashion-client</code> and <code>recofashion-api</code>.</li>
</ul>
<p><strong><code>recofashion-client</code> - FrontEnd</strong>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: recofashion-client
labels:
name: recofashion-client
spec:
replicas: 2
selector:
matchLabels:
app: recofashion-client
template:
metadata:
labels:
app: recofashion-client
spec:
containers:
- name: web
image: mendhak/http-https-echo
ports:
- name: http
containerPort: 80
env:
- name: NODE_ENV
value: production
- name: REACT_APP_API_V1_ENDPOINT
value: http://recofashion-api-svc
---
apiVersion: v1
kind: Service
metadata:
name: recofashion-cli-svc
labels:
app: recofashion-client
spec:
type: LoadBalancer
selector:
app: recofashion-client
ports:
- name: http
port: 3000
targetPort: 80
</code></pre>
<p><strong><code>recofashion-api</code> - backend API</strong>:</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: recofashion-api
labels:
name: recofashion-api
spec:
replicas: 2
selector:
matchLabels:
app: recofashion-api
template:
metadata:
labels:
app: recofashion-api
spec:
containers:
- name: api-web
image: mendhak/http-https-echo
imagePullPolicy: Always
ports:
- containerPort: 80
env:
- name: NODE_ENV
value: production
---
kind: Service
apiVersion: v1
metadata:
name: recofashion-api-svc
labels:
app: recofashion-api
spec:
selector:
app: recofashion-api
ports:
- name: http
port: 80
targetPort: 80
</code></pre>
<p>Note: <strong>I kept your Ingress intact.</strong></p>
<p><strong>Now to the terminal:</strong></p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-cluster-1-default-pool-e0523823-06jt Ready <none> 2d v1.15.7-gke.23
gke-cluster-1-default-pool-e0523823-vklh Ready <none> 2d v1.15.7-gke.23
$ kubectl apply -f recofashion-full.yaml
deployment.apps/recofashion-client created
service/recofashion-cli-svc created
deployment.apps/recofashion-api created
service/recofashion-api-svc created
ingress.extensions/reco-ingress created
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/recofashion-api-784b4d9897-9256q 1/1 Running 0 12m
pod/recofashion-api-784b4d9897-ljkfs 1/1 Running 0 12m
pod/recofashion-client-75579c8499-wd5vj 1/1 Running 0 12m
pod/recofashion-client-75579c8499-x766s 1/1 Running 0 12m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2d
service/recofashion-api-svc ClusterIP 10.0.4.73 <none> 80/TCP 12m
service/recofashion-cli-svc LoadBalancer 10.0.3.133 35.239.58.188 3000:31814/TCP 12m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/recofashion-api 2/2 2 2 13m
deployment.apps/recofashion-client 2/2 2 2 13m
NAME DESIRED CURRENT READY AGE
replicaset.apps/recofashion-api-784b4d9897 2 2 2 13m
replicaset.apps/recofashion-client-75579c8499 2 2 2 13m
$ curl http://35.239.58.188:3000
{
"path": "/",
"headers": {
"host": "35.239.58.188:3000",
"user-agent": "curl/7.66.0",
},
"method": "GET",
"body": "",
"hostname": "35.239.58.188",
"ip": "::ffff:10.8.1.1",
"protocol": "http",
"os": {
"hostname": "recofashion-client-75579c8499-x766s"
}
}
</code></pre>
<p><a href="https://i.stack.imgur.com/ZDm3a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZDm3a.png" alt="Client Access from browser from outside network"></a></p>
<p>So far no problems, we are able to <code>curl</code> from outside to the <code>recofashion-client</code>.</p>
<p><strong>Now let's connect inside <code>recofashion-client</code> and try to connect to <code>recofashion-api</code> using the ENV created during deploy:</strong></p>
<pre><code>❯ kubectl exec recofashion-client-75579c8499-x766s -it sh
/app # apk update && apk add curl
OK: 10 MiB in 20 packages
/app # env
REACT_APP_API_V1_ENDPOINT=http://recofashion-api-svc
NODE_ENV=production
/app # curl $REACT_APP_API_V1_ENDPOINT
{
"path": "/",
"headers": {
"host": "recofashion-api-svc",
"user-agent": "curl/7.61.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"hostname": "recofashion-api-svc",
"ip": "::ffff:10.8.1.21",
"protocol": "http",
"os": {
"hostname": "recofashion-api-784b4d9897-9256q"
}
}
/app # nslookup recofashion-api-svc
Name: recofashion-api-svc
Address 1: 10.0.4.73 recofashion-api-svc.default.svc.cluster.local
</code></pre>
<p>When we use the <code>api-service</code> name in the ENV <code>value</code>, it resolves the DNS because the service is responsible for directing the load to the pods.</p>
<ul>
<li>Follow these steps and you can be sure that your K8s configuration won't be an issue.</li>
</ul>
<p><strong>Edit:</strong></p>
<ul>
<li>If your React app is outside the cluster, the best way to access the backend API is to create a service that reaches the backend pods and address your requests to the exposed IP and port of that service, just like we did.</li>
<li>Now, if you want this to work with external DNS names for the whole cluster, take a look at external DNS projects like this one: <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/nginx-ingress.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/nginx-ingress.md</a></li>
</ul>
| Will R.O.F. |
<p>We're using GitLab for CI/CD. I'll include the script which we're using:</p>
<pre><code>services:
- docker:19.03.11-dind
workflow:
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage"|| ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
when: always
- if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage"|| ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
when: never
stages:
- build
- Publish
- deploy
cache:
paths:
- .m2/repository
- target
build_jar:
image: maven:3.8.3-jdk-11
stage: build
script:
- mvn clean install package -DskipTests=true
artifacts:
paths:
- target/*.jar
docker_build_dev:
stage: Publish
image: docker:19.03.11
services:
- docker:19.03.11-dind
variables:
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build --build-arg environment_name=development -t $IMAGE_TAG .
- docker push $IMAGE_TAG
only:
- /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
- developer
docker_build_stage:
stage: Publish
image: docker:19.03.11
services:
- docker:19.03.11-dind
variables:
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build --build-arg environment_name=stage -t $IMAGE_TAG .
- docker push $IMAGE_TAG
only:
- stage
deploy_dev:
stage: deploy
image: stellacenter/aws-helm-kubectl
before_script:
- aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
- aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
- aws configure set region ${DEV_AWS_DEFAULT_REGION}
script:
- sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
- mkdir -p $HOME/.kube
- cp $KUBE_CONFIG_DEV $HOME/.kube/config
- chown $(id -u):$(id -g) $HOME/.kube/config
- export KUBECONFIG=$HOME/.kube/config
- kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_DEV}
only:
- /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
- developer
deploy_stage:
stage: deploy
image: stellacenter/aws-helm-kubectl
before_script:
- aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
- aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
- aws configure set region ${DEV_AWS_DEFAULT_REGION}
script:
- sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
- mkdir -p $HOME/.kube
- cp $KUBE_CONFIG_STAGE $HOME/.kube/config
- chown $(id -u):$(id -g) $HOME/.kube/config
- export KUBECONFIG=$HOME/.kube/config
- kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_STAGE}
only:
- stage
</code></pre>
<p>As the script shows, we merged the pipeline so that the stage and development deployments don't clash. Previously, we had a separate Dockerfile for each environment. Now I want to merge the Dockerfile as well. I merged it, but the Dockerfile is not picking up the environment, and we are getting clashes in Kubernetes. I don't know much about Kubernetes. I've enclosed the merged Dockerfile below.</p>
<pre><code>FROM maven:3.8.3-jdk-11 AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install package -DskipTests=true
FROM openjdk:11
ARG environment_name
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/patient-service-*.jar /app/patient-service.jar
ENV PORT 8094
ENV env_var_name=$environment_name
EXPOSE $PORT
ENTRYPOINT ["java","-Dspring.profiles.active= $env_var_name","-jar","/app/patient-service.jar"]
</code></pre>
<p>Previously, the last line we used was:</p>
<pre><code>ENTRYPOINT ["java","-Dspring.profiles.active=development","-jar","/app/patient-service.jar"]
</code></pre>
<p>At that time it was working fine and I wasn't facing any issue on Kubernetes. I just added an environment variable so the image can pick up whether it is development or stage; you can check my script after the docker build step. Only after adding the variable did we start facing the clashes. Please help me sort this out. Thanks in advance.</p>
| Cyril I | <p>As already mentioned in <a href="https://stackoverflow.com/a/37904830/13097422">the question</a> in the comment section, you need to use the shell form, because the exec form won't do variable substitution directly.</p>
<pre><code>ENTRYPOINT ["java","-Dspring.profiles.active= $env_var_name","-jar","/app/patient-service.jar"]
</code></pre>
<p>needs to be</p>
<pre><code>ENTRYPOINT [ "sh", "-c", "java","-Dspring.profiles.active=$env_var_name","-jar","/app/patient-service.jar" ]
</code></pre>
<p>Relevant documentation from Docker Docs: <a href="https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example" rel="nofollow noreferrer">Shell form ENTRYPOINT example</a></p>
<blockquote>
<p>Unlike the <em>shell</em> form, the <em>exec</em> form does not invoke a command shell. This means that normal shell processing does not happen. For example, <code>ENTRYPOINT [ "echo", "$HOME" ]</code> will not do variable substitution on <code>$HOME</code>. If you want shell processing then either use the <em>shell</em> form or execute a shell directly, for example: <code>ENTRYPOINT [ "sh", "-c", "echo $HOME" ]</code>. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.</p>
</blockquote>
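<p>Note that with <code>sh -c</code> the whole java invocation has to be passed as a single string, which is why it is written as one argument above. One way to sanity-check the baked-in value before deploying is a quick local build and run; this is only a rough sketch, and the image tag <code>patient-service:check</code> is a placeholder:</p>
<pre><code># build with the stage profile baked in via the build arg
docker build --build-arg environment_name=stage -t patient-service:check .
# print the variable that the shell-form ENTRYPOINT will expand at runtime
docker run --rm --entrypoint sh patient-service:check -c 'echo "$env_var_name"'
</code></pre>
<p>If the second command prints <code>stage</code>, the same substitution will happen when the pod starts.</p>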
| lyzlisa |
<p>My k8s env:</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01 Ready master 46h v1.18.0 172.18.90.100 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://19.3.8
k8s-node01 Ready <none> 46h v1.18.0 172.18.90.111 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://19.3.8
</code></pre>
<p>kube-system:</p>
<pre><code>kubectl get pod -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66bff467f8-9dg27 1/1 Running 0 16h 10.244.1.62 k8s-node01 <none> <none>
coredns-66bff467f8-blgch 1/1 Running 0 16h 10.244.0.5 k8s-master01 <none> <none>
etcd-k8s-master01 1/1 Running 0 46h 172.19.90.189 k8s-master01 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 0 46h 172.19.90.189 k8s-master01 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 0 46h 172.19.90.189 k8s-master01 <none> <none>
kube-flannel-ds-amd64-scgkt 1/1 Running 0 17h 172.19.90.194 k8s-node01 <none> <none>
kube-flannel-ds-amd64-z6fk9 1/1 Running 0 44h 172.19.90.189 k8s-master01 <none> <none>
kube-proxy-8pbmz 1/1 Running 0 16h 172.19.90.194 k8s-node01 <none> <none>
kube-proxy-sgpds 1/1 Running 0 16h 172.19.90.189 k8s-master01 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 0 46h 172.19.90.189 k8s-master01 <none> <none>
</code></pre>
<p>My Deployment and Service:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hostnames
spec:
selector:
matchLabels:
app: hostnames
replicas: 3
template:
metadata:
labels:
app: hostnames
spec:
containers:
- name: hostnames
image: k8s.gcr.io/serve_hostname
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9376
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: hostnames
spec:
selector:
app: hostnames
ports:
- name: default
protocol: TCP
port: 80
targetPort: 9376
</code></pre>
<p>My svc info:</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames ClusterIP 10.106.24.115 <none> 80/TCP 42m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 46h
</code></pre>
<p>The problem:</p>
<p>When I curl 10.106.24.115 on k8s-master01, the response is very slow, taking about a minute, but I get a response right away on k8s-node01.</p>
<p>I edited my svc and changed ClusterIP to NodePort:</p>
<pre><code>kubectl edit svc hostnames
spec:
clusterIP: 10.106.24.115
ports:
- name: default
port: 80
protocol: TCP
targetPort: 9376
nodePort: 30888
selector:
app: hostnames
sessionAffinity: None
type: NodePort
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames NodePort 10.106.24.115 <none> 80:30888/TCP 64m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 46h
</code></pre>
<p>Now, I curl each node with nodeIp:30888. It works well and responds right away. Why is there such a high delay when I access the service through the ClusterIP from the other node? I also have another k8s cluster, and it has no such problem. I also get the same delayed response using curl 127.0.0.1:30555 on k8s-master01. So weird!</p>
<p>There are no errors in my kube-controller-manager:</p>
<pre><code>'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-mbh4k
I0330 09:11:20.953439 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df14e2c6-faf1-4f6a-8b97-8d519b390c73", APIVersion:"apps/v1", ResourceVersion:"986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7pd8r
I0330 09:11:36.488237 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-dns", UID:"f42d9cbc-c757-48f0-96a4-d15f75082a88", APIVersion:"v1", ResourceVersion:"250956", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I0330 09:11:44.753349 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df14e2c6-faf1-4f6a-8b97-8d519b390c73", APIVersion:"apps/v1", ResourceVersion:"250936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-z7fps
I0330 09:12:46.690043 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-flannel-ds-amd64", UID:"12cda6e4-fd07-4328-887d-6dd9ca8a86d7", APIVersion:"apps/v1", ResourceVersion:"251183", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-flannel-ds-amd64-scgkt
I0330 09:19:35.915568 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df14e2c6-faf1-4f6a-8b97-8d519b390c73", APIVersion:"apps/v1", ResourceVersion:"251982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-9dg27
I0330 09:19:42.808373 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-dns", UID:"f42d9cbc-c757-48f0-96a4-d15f75082a88", APIVersion:"v1", ResourceVersion:"252221", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I0330 09:19:52.606633 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"df14e2c6-faf1-4f6a-8b97-8d519b390c73", APIVersion:"apps/v1", ResourceVersion:"252222", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-blgch
I0330 09:20:36.488412 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"33fa53f5-2240-4020-9b1f-14025bb3ab0b", APIVersion:"apps/v1", ResourceVersion:"252365", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-sgpds
I0330 09:20:46.686463 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"33fa53f5-2240-4020-9b1f-14025bb3ab0b", APIVersion:"apps/v1", ResourceVersion:"252416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-8pbmz
I0330 09:24:31.015395 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hostnames", UID:"b54625e7-6f84-400a-9048-acd4a9207d86", APIVersion:"apps/v1", ResourceVersion:"252991", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hostnames-68b5ff98ff to 3
I0330 09:24:31.020097 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"5b4bba3e-e15e-45a6-b33e-055cdb1beca4", APIVersion:"apps/v1", ResourceVersion:"252992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-gzvxb
I0330 09:24:31.024513 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"5b4bba3e-e15e-45a6-b33e-055cdb1beca4", APIVersion:"apps/v1", ResourceVersion:"252992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-kl29m
I0330 09:24:31.024538 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"5b4bba3e-e15e-45a6-b33e-055cdb1beca4", APIVersion:"apps/v1", ResourceVersion:"252992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-czrqx
I0331 00:56:33.245614 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hostnames", UID:"10e9b06c-9e0c-4303-aff9-9ec03f5c5919", APIVersion:"apps/v1", ResourceVersion:"381792", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hostnames-68b5ff98ff to 3
I0331 00:56:33.251743 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"aaa4d5ac-b7f4-4bcb-b6ea-959ecee00e0e", APIVersion:"apps/v1", ResourceVersion:"381793", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-7z4bb
I0331 00:56:33.256083 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"aaa4d5ac-b7f4-4bcb-b6ea-959ecee00e0e", APIVersion:"apps/v1", ResourceVersion:"381793", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-2zwxf
I0331 00:56:33.256171 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hostnames-68b5ff98ff", UID:"aaa4d5ac-b7f4-4bcb-b6ea-959ecee00e0e", APIVersion:"apps/v1", ResourceVersion:"381793", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hostnames-68b5ff98ff-x289b
</code></pre>
<p>The output of describe ep kube-dns:</p>
<pre><code>kubectl describe ep kube-dns --namespace=kube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2020-03-31T04:27:42Z
Subsets:
Addresses: 10.244.0.2,10.244.0.3
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
dns-tcp 53 TCP
metrics 9153 TCP
dns 53 UDP
Events: <none>
</code></pre>
| Jerold Tsao | <p>Based on the information that you provided, there are a couple of things that can be checked or done:</p>
<p>Your <code>kube-controller-manager</code> reports an error with endpoints: </p>
<pre><code>Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
</code></pre>
<p>Going further, you may also notice that your <code>kube-dns</code> endpoints do not match your <code>core-dns</code> IP addresses. </p>
<p>This could be caused by a previous <code>kubeadm</code> installation that was not entirely cleaned up and did not remove the <code>cni</code> and <code>flannel</code> interfaces. </p>
<p>I would check for any virtual NICs created by flannel during the previous installation. You can list them using the <code>ip link</code> command and then delete them: </p>
<pre><code>ip link delete cni0
ip link delete flannel.1
</code></pre>
<p>Alternatively use <code>brctl</code> command (<code>brctl delbr cni0</code>) </p>
<p>Please also note that you reported initializing the cluster with <code>10.244.0.0/16</code>, but I can see that your system pods are running with a different one (except the coreDNS pods, which have the correct one). All the system pods should have the same pod subnet that you specified using the <code>--pod-network-cidr</code> flag, and your pod network must not overlap with any of the host networks. Since your system pods show the same subnet as the host, this may also be the reason for the problem. </p>
<p>The second thing is to check <code>iptables-save</code> on both the master and the worker. You reported that you don't experience latency when using <code>NodePort</code>. I would assume that is because with <code>NodePort</code> you are bypassing the flannel networking and going straight to the pod running on the worker (I can see that you have only one). This also indicates an issue with the CNI. </p>
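<p>A rough sketch of how to inspect both points, to be run on the master and the worker (the service IP is the one from your <code>kubectl get svc</code> output):</p>
<pre><code># look for leftover cni/flannel interfaces from a previous installation
ip -o link show | grep -E 'cni0|flannel'
# dump the NAT rules kube-proxy programmed for the hostnames service
iptables-save | grep 10.106.24.115
</code></pre>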
| acid_fuji |
<p>I have an application running with the Kubernetes orchestrator. I want to implement a Calico network policy <strong>on the basis of CIDR</strong> so that I can control the pods' traffic (incoming and outgoing). I am looking for the prerequisite installation (any plugin) and what changes (Calico yaml file or manifest file) are required to achieve this.</p>
<p>Some explanation of the steps that need to be followed would be appreciated.</p>
| solveit | <p>By default, as explained <a href="https://rancher.com/docs/k3s/latest/en/installation/network-options/" rel="nofollow noreferrer">here</a>, K3s runs with the flannel CNI, using VXLAN as the default backend.</p>
<p>To change the CNI you need to run <code>K3s</code> with <code>--flannel-backend=none</code>. For more information please visit the <a href="https://rancher.com/docs/k3s/latest/en/installation/network-options/#custom-cni" rel="nofollow noreferrer">custom-CNI</a> section of the docs.</p>
<p>Please note that besides Calico you can also run the <a href="https://docs.projectcalico.org/getting-started/kubernetes/flannel/flannel" rel="nofollow noreferrer">canal CNI</a>, which is in fact flannel with Calico network policies available.</p>
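<p>For the policy itself, Calico enforces the standard Kubernetes <code>NetworkPolicy</code> API, which supports CIDR-based rules through <code>ipBlock</code>. Below is a minimal sketch; the namespace, the <code>app: my-app</code> label and the CIDR ranges are placeholders you would replace with your own values:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cidr-based-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app              # placeholder label selecting the pods to control
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/16       # example: allow incoming traffic from this range
            except:
              - 10.0.5.0/24         # example: except this subnet
  egress:
    - to:
        - ipBlock:
            cidr: 192.168.0.0/16    # example: allow outgoing traffic to this range
</code></pre>
<p>Calico also ships its own <code>NetworkPolicy</code> and <code>GlobalNetworkPolicy</code> CRDs with richer CIDR matching, but the standard API above is usually enough to start with.</p>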
| acid_fuji |
<p>Can environment variables passed to containers be composed from environment variables that already exist? Something like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
env:
- name: URL
value: $(HOST):$(PORT)
</code></pre>
| Marko Galesic | <p>Helm with its <a href="https://helm.sh/docs/chart_template_guide/variables/" rel="nofollow noreferrer">variables</a> seems like a better way of handling that kind of use case.
In the example below you have a deployment snippet with values and variables:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
containers:
- name: {{ .Chart.Name }}
image: "image/thomas:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: URL
value: {{ .Values.host }}:{{ .Values.port }}
</code></pre>
<p>And here is one of the ways of deploying it with some custom variables:</p>
<pre class="lang-sh prettyprint-override"><code>helm upgrade --install myChart . \
--set image.tag=v2.5.4 \
--set host=example.com \
--set-string port=12345 \
</code></pre>
<p>Helm also allows you to use template functions. You can use <code>default</code>, which falls back to a default value if one isn't provided. There is also <code>required</code>, which prints a message and aborts the chart installation if you don't specify the value, and an <code>include</code> function that allows you to bring in another template and pass its result to other template functions.</p>
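<p>As a small sketch of those functions, reusing the value names from the snippet above:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
  - name: URL
    # fail the install if host is missing; fall back to 8080 if port is not set
    value: {{ required "host is required" .Values.host }}:{{ .Values.port | default 8080 }}
</code></pre>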
| acid_fuji |
<p>In the yaml file, can we use something like the example below to pull the image? Or is there a better way?</p>
<p>I want to pull an image, but the tag may vary depending on the release.</p>
<p>The configmap is like:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: configmap
namespace: rel
data:
# Configuration values can be set as key-value properties
RELEASE_ID: 1.1.1
</code></pre>
<p>The pod is like:</p>
<pre><code> imagePullPolicy: Always
image: "sid_z:$(RELEASE_LEVEL)"
env:
- name: RELEASE_LEVEL
valueFrom:
configMapKeyRef:
name: configmap
key: RELEASE_ID
</code></pre>
<p>Right now it gives me an error: invalid reference format, Error: InvalidImageName</p>
| user2511126 | <ul>
<li><p>You can't use Environment variables, because they are made available inside the container, as mentioned in the other answer.</p>
</li>
<li><p>Using Linux shell variables would not be processed either, because it is the server that processes the yaml, not the terminal itself.</p>
</li>
<li><p>My suggestion is <strong>to use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-image-em-" rel="nofollow noreferrer">Kubectl set image</a></strong>:</p>
</li>
</ul>
<blockquote>
<p><code>kubectl set image</code> Update existing container image(s) of resources like:</p>
<p>pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), replicaset (rs)</p>
</blockquote>
<ul>
<li>You can even see the processed yaml locally, before hitting the server with the parameter <code>--local</code>, more on that on the example bellow.</li>
</ul>
<hr />
<p><strong>Examples:</strong></p>
<ul>
<li>Print, in yaml format, the result of updating the image of the container named <code>container-1</code> from a local file, without hitting the server or changing the file:</li>
</ul>
<pre><code>$ cat hello.yaml
apiVersion: v1
kind: Pod
metadata:
name: hello
spec:
containers:
- name: container-1
image: gcr.io/google-samples/hello-app:1.0
ports:
- name: http
containerPort: 8080
$ kubectl set image -f hello.yaml container-1=gcr.io/google-samples/hello-app:2.0 --local -o yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: hello
spec:
containers:
- image: gcr.io/google-samples/hello-app:2.0
name: container-1
ports:
- containerPort: 8080
name: http
resources: {}
status: {}
</code></pre>
<ul>
<li>You can pipe the output to directly create the pod with the changed image:</li>
</ul>
<pre><code>$ kubectl set image -f hello.yaml container-1=gcr.io/google-samples/hello-app:2.0 --local -o yaml | kubectl apply -f -
pod/hello created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello 1/1 Running 0 22s
$ kubectl describe pod hello | grep Image:
Image: gcr.io/google-samples/hello-app:2.0
</code></pre>
<ul>
<li>Changing the image of <code>container-1</code> on the deployed pod <code>hello</code>:</li>
</ul>
<pre><code>$ kubectl set image pod/hello container-1=gcr.io/google-samples/hello-app:1.0
pod/hello image updated
$ kubectl describe pod hello | grep Image:
Image: gcr.io/google-samples/hello-app:1.0
</code></pre>
<ul>
<li>Update images of all containers from deployment <code>hello-2</code>:</li>
</ul>
<pre><code>$ kubectl apply -f hello-2-deploy.yaml
deployment.apps/hello-2 created
$ kubectl get deployment hello-2
NAME READY UP-TO-DATE AVAILABLE AGE
hello-2 3/3 3 3 6s
$ kubectl describe deploy hello-2 | grep Image:
Image: gcr.io/google-samples/hello-app:2.0
$ kubectl set image deployment hello-2 *=nginx:latest
deployment.apps/hello-2 image updated
$ kubectl describe deploy hello-2 | grep Image:
Image: nginx:latest
</code></pre>
<ul>
<li>All updates were made without modifying the original yaml file.</li>
</ul>
<p>If you have any questions let me know in the comments.</p>
| Will R.O.F. |
<p>I'm new to Kubernetes. I successfully created a deployment with 2 replicas of my Angular frontend application, but when I expose it with a service and try to access the service with 'minikube service service-name', the browser can't show me the application. </p>
<p>This is my docker file</p>
<pre><code>FROM registry.gitlab.informatica.aci.it/ccsc/images/nodejs/10_15
LABEL maintainer="[email protected]" name="assistenza-fo" version="v1.0.0" license=""
WORKDIR /usr/src/app
ARG PRODUCTION_MODE="false"
ENV NODE_ENV='development'
ENV HTTP_PORT=4200
COPY package*.json ./
RUN if [ "${PRODUCTION_MODE}" = "true" ] || [ "${PRODUCTION_MODE}" = "1" ]; then \
echo "Build di produzione"; \
npm ci --production ; \
else \
echo "Build di sviluppo"; \
npm ci ; \
fi
RUN npm audit fix
RUN npm install -g @angular/cli
COPY dockerize /usr/local/bin
RUN chmod +x /usr/local/bin/dockerize
COPY . .
EXPOSE 4200
CMD ng serve --host 0.0.0.0
</code></pre>
<p>pod description</p>
<pre><code>Name: assistenza-fo-674f85c547-bzf8g
Namespace: default
Priority: 0
Node: minikube/172.17.0.2
Start Time: Sun, 19 Apr 2020 12:41:06 +0200
Labels: pod-template-hash=674f85c547
run=assistenza-fo
Annotations: <none>
Status: Running
IP: 172.18.0.6
Controlled By: ReplicaSet/assistenza-fo-674f85c547
Containers:
assistenza-fo:
Container ID: docker://ef2bfb66d22dea56b2dc0e49e875376bf1edff369274015445806451582703a0
Image: registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo:latest
Image ID: docker-pullable://registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo@sha256:8d02a3e69d6798c1ac88815ef785e05aba6e394eb21f806bbc25fb761cca5a98
Port: 4200/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 19 Apr 2020 12:41:08 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zdrwg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-zdrwg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zdrwg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>my deployment description</p>
<pre><code>Name: assistenza-fo
Namespace: default
CreationTimestamp: Sun, 19 Apr 2020 12:41:06 +0200
Labels: run=assistenza-fo
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=assistenza-fo
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=assistenza-fo
Containers:
assistenza-fo:
Image: registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo:latest
Port: 4200/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: assistenza-fo-674f85c547 (2/2 replicas created)
Events: <none>
</code></pre>
<p>and my service description</p>
<pre><code>Name: assistenza-fo
Namespace: default
Labels: run=assistenza-fo
Annotations: <none>
Selector: run=assistenza-fo
Type: LoadBalancer
IP: 10.97.3.206
Port: <unset> 4200/TCP
TargetPort: 4200/TCP
NodePort: <unset> 30375/TCP
Endpoints: 172.18.0.6:4200,172.18.0.7:4200
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>When I run the command</p>
<pre><code>minikube service assistenza-fo
</code></pre>
<p>I get the following output: </p>
<pre><code>|-----------|---------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------|-------------|-------------------------|
| default | assistenza-fo | 4200 | http://172.17.0.2:30375 |
|-----------|---------------|-------------|-------------------------|
* Opening service default/assistenza-fo in default browser...
</code></pre>
<p>but Chrome prints "unable to reach the site" due to a timeout.</p>
<p>Thank you</p>
<p><strong>EDIT</strong></p>
<p>I created the service again, this time as a NodePort service. It is still not working. This is the service description:</p>
<pre><code>Name: assistenza-fo
Namespace: default
Labels: run=assistenza-fo
Annotations: <none>
Selector: run=assistenza-fo
Type: NodePort
IP: 10.107.46.43
Port: <unset> 4200/TCP
TargetPort: 4200/TCP
NodePort: <unset> 30649/TCP
Endpoints: 172.18.0.7:4200,172.18.0.8:4200
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
| Yanosh | <p>I was able to reproduce your issue.</p>
<p>It's actually a bug on latest version of Minikube for Windows running Docker Driver: <code>--driver=docker</code></p>
<ul>
<li>You can see it here: <a href="https://github.com/kubernetes/minikube/issues/7644" rel="noreferrer">Issue - minikube service not working with Docker driver on Windows 10 Pro #7644</a></li>
<li>it was patched with the merge: <a href="https://github.com/kubernetes/minikube/pull/7739" rel="noreferrer">Pull - docker driver: Add Service & Tunnel features to windows</a></li>
<li>it is available now on <a href="https://github.com/kubernetes/minikube/releases/tag/v1.10.0-beta.0" rel="noreferrer">Minikube v1.10.0-beta.0</a></li>
</ul>
<p>In order to make it work, download the beta version from the website:</p>
<ul>
<li><p><a href="https://github.com/kubernetes/minikube/releases/download/v1.10.0-beta.0/minikube-windows-amd64.exe" rel="noreferrer">https://github.com/kubernetes/minikube/releases/download/v1.10.0-beta.0/minikube-windows-amd64.exe</a></p></li>
<li><p>move it to your working folder and rename it to <code>minikube.exe</code></p></li>
</ul>
<pre><code>C:\Kubernetes>rename minikube-windows-amd64.exe minikube.exe
C:\Kubernetes>dir
22/04/2020 21:10 <DIR> .
22/04/2020 21:10 <DIR> ..
22/04/2020 21:04 55.480.832 minikube.exe
22/04/2020 20:05 489 nginx.yaml
2 File(s) 55.481.321 bytes
</code></pre>
<ul>
<li>If you haven't yet, <code>stop</code> and <code>uninstall</code> the older version, then start Minikube with the new binary:</li>
</ul>
<pre><code>C:\Kubernetes>minikube.exe start --driver=docker
* minikube v1.10.0-beta.0 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
- kubeadm.pod-network-cidr=10.244.0.0/16
* Enabled addons: dashboard, default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
C:\Kubernetes>kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-76df748b9-t6q59 1/1 Running 1 78m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 85m
service/nginx-svc NodePort 10.100.212.15 <none> 80:31027/TCP 78m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 78m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-76df748b9 1 1 1 78m
</code></pre>
<ul>
<li>Minikube is now running version v1.10.0-beta.0, so you can run the service as intended (note that the terminal will be unavailable while the command runs because it will be tunneling the connection):</li>
</ul>
<p><a href="https://i.stack.imgur.com/wlbDG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wlbDG.png" alt="enter image description here"></a></p>
<ul>
<li>The browser will open automatically and your service will be available:</li>
</ul>
<p><a href="https://i.stack.imgur.com/Zx0fc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Zx0fc.png" alt="enter image description here"></a></p>
<p>If you have any doubts let me know in the comments.</p>
| Will R.O.F. |
<p>I installed rancher into an existing kubernetes cluster following this <a href="https://rancher.com/docs/rancher/v2.x/en/installation/install-rancher-on-k8s/" rel="noreferrer">tutorial</a>, using these commands:</p>
<pre><code>helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace rancher
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl get pods --namespace cert-manager
helm install rancher rancher-latest/rancher \
--namespace rancher \
--set hostname=rancher.blabla.com
kubectl -n rancher rollout status deploy/rancher # wait
kubectl -n rancher get deploy rancher
</code></pre>
<p>I used the uninstall method from <a href="//rancher.com/docs/rancher/v2.x/en/faq/removing-rancher/#:%7E:text=From%20the%20Global%20view%20in,Click%20Delete." rel="noreferrer">this page</a>:</p>
<pre><code>./system-tools_linux-amd64 remove -c ~/.kube/config -n rancher
</code></pre>
<p>But it shows an error:</p>
<pre><code>Are you sure you want to remove Rancher Management Plane in Namespace [rancher] [y/n]: y
INFO[0001] Removing Rancher management plane in namespace: [rancher]
INFO[0001] Getting connection configuration
INFO[0001] Removing Cattle deployment
INFO[0002] Removed Cattle deployment succuessfully
INFO[0002] Removing ClusterRoleBindings
INFO[0003] Successfully removed ClusterRoleBindings
INFO[0003] Removing ClusterRoles
INFO[0003] deleting cluster role [cluster-owner]..
INFO[0003] deleting cluster role [create-ns]..
INFO[0003] deleting cluster role [project-owner]..
INFO[0003] deleting cluster role [project-owner-promoted]..
INFO[0004] Successfully removed ClusterRoles
INFO[0004] Removing Cattle Annotations, Finalizers and Labels
INFO[0004] Checking API resource [bindings]
INFO[0004] Checking API resource [componentstatuses]
INFO[0004] Checking API resource [configmaps]
WARN[0005] Can't build dynamic client for [configmaps]: the server could not find the requested resource
INFO[0005] Checking API resource [endpoints]
WARN[0005] Can't build dynamic client for [endpoints]: the server could not find the requested resource
INFO[0005] Checking API resource [events]
WARN[0005] Can't build dynamic client for [events]: the server could not find the requested resource
INFO[0005] Checking API resource [limitranges]
WARN[0005] Can't build dynamic client for [limitranges]: the server could not find the requested resource
INFO[0005] Checking API resource [namespaces]
WARN[0005] Can't build dynamic client for [namespaces]: the server could not find the requested resource
INFO[0005] Checking API resource [namespaces/finalize]
INFO[0005] Checking API resource [namespaces/status]
INFO[0005] Checking API resource [nodes]
WARN[0006] Can't build dynamic client for [nodes]: the server could not find the requested resource
INFO[0006] Checking API resource [nodes/proxy]
INFO[0006] Checking API resource [nodes/status]
INFO[0006] Checking API resource [persistentvolumeclaims]
WARN[0006] Can't build dynamic client for [persistentvolumeclaims]: the server could not find the requested resource
INFO[0006] Checking API resource [persistentvolumeclaims/status]
INFO[0006] Checking API resource [persistentvolumes]
WARN[0006] Can't build dynamic client for [persistentvolumes]: the server could not find the requested resource
INFO[0006] Checking API resource [persistentvolumes/status]
INFO[0006] Checking API resource [pods]
WARN[0006] Can't build dynamic client for [pods]: the server could not find the requested resource
INFO[0006] Checking API resource [pods/attach]
INFO[0006] Checking API resource [pods/binding]
INFO[0006] Checking API resource [pods/eviction]
INFO[0006] Checking API resource [pods/exec]
INFO[0006] Checking API resource [pods/log]
INFO[0006] Checking API resource [pods/portforward]
INFO[0006] Checking API resource [pods/proxy]
INFO[0006] Checking API resource [pods/status]
INFO[0006] Checking API resource [podtemplates]
WARN[0006] Can't build dynamic client for [podtemplates]: the server could not find the requested resource
INFO[0006] Checking API resource [replicationcontrollers]
WARN[0007] Can't build dynamic client for [replicationcontrollers]: the server could not find the requested resource
INFO[0007] Checking API resource [replicationcontrollers/scale]
INFO[0007] Checking API resource [replicationcontrollers/status]
INFO[0007] Checking API resource [resourcequotas]
WARN[0007] Can't build dynamic client for [resourcequotas]: the server could not find the requested resource
INFO[0007] Checking API resource [resourcequotas/status]
INFO[0007] Checking API resource [secrets]
WARN[0007] Can't build dynamic client for [secrets]: the server could not find the requested resource
INFO[0007] Checking API resource [serviceaccounts]
WARN[0007] Can't build dynamic client for [serviceaccounts]: the server could not find the requested resource
INFO[0007] Checking API resource [serviceaccounts/token]
INFO[0007] Checking API resource [services]
WARN[0008] Can't build dynamic client for [services]: the server could not find the requested resource
INFO[0008] Checking API resource [services/proxy]
INFO[0008] Checking API resource [services/status]
INFO[0008] Checking API resource [apiservices]
WARN[0008] Can't build dynamic client for [apiservices]: the server could not find the requested resource
INFO[0008] Checking API resource [apiservices/status]
INFO[0008] Checking API resource [controllerrevisions]
WARN[0008] Can't build dynamic client for [controllerrevisions]: the server could not find the requested resource
INFO[0008] Checking API resource [daemonsets]
WARN[0009] Can't build dynamic client for [daemonsets]: the server could not find the requested resource
INFO[0009] Checking API resource [daemonsets/status]
INFO[0009] Checking API resource [deployments]
WARN[0009] Can't build dynamic client for [deployments]: the server could not find the requested resource
INFO[0009] Checking API resource [deployments/scale]
INFO[0009] Checking API resource [deployments/status]
INFO[0009] Checking API resource [replicasets]
WARN[0009] Can't build dynamic client for [replicasets]: the server could not find the requested resource
INFO[0009] Checking API resource [replicasets/scale]
INFO[0009] Checking API resource [replicasets/status]
INFO[0009] Checking API resource [statefulsets]
WARN[0009] Can't build dynamic client for [statefulsets]: the server could not find the requested resource
INFO[0009] Checking API resource [statefulsets/scale]
INFO[0009] Checking API resource [statefulsets/status]
INFO[0009] Checking API resource [events]
WARN[0010] Can't build dynamic client for [events]: the server could not find the requested resource
INFO[0010] Checking API resource [tokenreviews]
INFO[0010] Checking API resource [localsubjectaccessreviews]
INFO[0010] Checking API resource [selfsubjectaccessreviews]
INFO[0010] Checking API resource [selfsubjectrulesreviews]
INFO[0010] Checking API resource [subjectaccessreviews]
INFO[0010] Checking API resource [horizontalpodautoscalers]
WARN[0011] Can't build dynamic client for [horizontalpodautoscalers]: the server could not find the requested resource
INFO[0011] Checking API resource [horizontalpodautoscalers/status]
INFO[0011] Checking API resource [jobs]
WARN[0011] Can't build dynamic client for [jobs]: the server could not find the requested resource
INFO[0011] Checking API resource [jobs/status]
INFO[0011] Checking API resource [certificatesigningrequests]
WARN[0011] Can't build dynamic client for [certificatesigningrequests]: the server could not find the requested resource
INFO[0011] Checking API resource [certificatesigningrequests/approval]
INFO[0011] Checking API resource [certificatesigningrequests/status]
INFO[0012] Checking API resource [ingressclasses]
WARN[0012] Can't build dynamic client for [ingressclasses]: the server could not find the requested resource
INFO[0012] Checking API resource [ingresses]
WARN[0012] Can't build dynamic client for [ingresses]: the server could not find the requested resource
INFO[0012] Checking API resource [ingresses/status]
INFO[0012] Checking API resource [networkpolicies]
WARN[0012] Can't build dynamic client for [networkpolicies]: the server could not find the requested resource
INFO[0013] Checking API resource [ingresses]
WARN[0013] Can't build dynamic client for [ingresses]: the server could not find the requested resource
INFO[0013] Checking API resource [ingresses/status]
INFO[0013] Checking API resource [poddisruptionbudgets]
WARN[0013] Can't build dynamic client for [poddisruptionbudgets]: the server could not find the requested resource
INFO[0013] Checking API resource [poddisruptionbudgets/status]
INFO[0013] Checking API resource [podsecuritypolicies]
WARN[0013] Can't build dynamic client for [podsecuritypolicies]: the server could not find the requested resource
INFO[0014] Checking API resource [clusterrolebindings]
WARN[0014] Can't build dynamic client for [clusterrolebindings]: the server could not find the requested resource
INFO[0014] Checking API resource [clusterroles]
WARN[0014] Can't build dynamic client for [clusterroles]: the server could not find the requested resource
INFO[0014] Checking API resource [rolebindings]
WARN[0014] Can't build dynamic client for [rolebindings]: the server could not find the requested resource
INFO[0014] Checking API resource [roles]
WARN[0014] Can't build dynamic client for [roles]: the server could not find the requested resource
INFO[0015] Checking API resource [csidrivers]
WARN[0015] Can't build dynamic client for [csidrivers]: the server could not find the requested resource
INFO[0015] Checking API resource [csinodes]
WARN[0015] Can't build dynamic client for [csinodes]: the server could not find the requested resource
INFO[0015] Checking API resource [storageclasses]
WARN[0015] Can't build dynamic client for [storageclasses]: the server could not find the requested resource
INFO[0015] Checking API resource [volumeattachments]
WARN[0016] Can't build dynamic client for [volumeattachments]: the server could not find the requested resource
INFO[0016] Checking API resource [volumeattachments/status]
INFO[0016] Checking API resource [mutatingwebhookconfigurations]
WARN[0016] Can't build dynamic client for [mutatingwebhookconfigurations]: the server could not find the requested resource
INFO[0016] Checking API resource [validatingwebhookconfigurations]
WARN[0016] Can't build dynamic client for [validatingwebhookconfigurations]: the server could not find the requested resource
INFO[0016] Checking API resource [customresourcedefinitions]
WARN[0017] Can't build dynamic client for [customresourcedefinitions]: the server could not find the requested resource
INFO[0017] Checking API resource [customresourcedefinitions/status]
INFO[0017] Checking API resource [priorityclasses]
WARN[0017] Can't build dynamic client for [priorityclasses]: the server could not find the requested resource
INFO[0017] Checking API resource [leases]
WARN[0017] Can't build dynamic client for [leases]: the server could not find the requested resource
INFO[0018] Checking API resource [runtimeclasses]
WARN[0018] Can't build dynamic client for [runtimeclasses]: the server could not find the requested resource
INFO[0018] Checking API resource [endpointslices]
WARN[0018] Can't build dynamic client for [endpointslices]: the server could not find the requested resource
INFO[0018] Checking API resource [flowschemas]
WARN[0019] Can't build dynamic client for [flowschemas]: the server could not find the requested resource
INFO[0019] Checking API resource [flowschemas/status]
INFO[0019] Checking API resource [prioritylevelconfigurations]
WARN[0019] Can't build dynamic client for [prioritylevelconfigurations]: the server could not find the requested resource
INFO[0019] Checking API resource [prioritylevelconfigurations/status]
INFO[0019] Checking API resource [perconaxtradbclusterbackups]
WARN[0019] Can't build dynamic client for [perconaxtradbclusterbackups]: the server could not find the requested resource
INFO[0019] Checking API resource [perconaxtradbclusterbackups/status]
INFO[0019] Checking API resource [perconaxtradbclusterrestores]
WARN[0020] Can't build dynamic client for [perconaxtradbclusterrestores]: the server could not find the requested resource
INFO[0020] Checking API resource [perconaxtradbclusterrestores/status]
INFO[0020] Checking API resource [perconaxtradbclusters]
WARN[0020] Can't build dynamic client for [perconaxtradbclusters]: the server could not find the requested resource
INFO[0020] Checking API resource [perconaxtradbclusters/status]
INFO[0020] Checking API resource [challenges]
WARN[0020] Can't build dynamic client for [challenges]: the server could not find the requested resource
INFO[0020] Checking API resource [challenges/status]
INFO[0020] Checking API resource [orders]
WARN[0020] Can't build dynamic client for [orders]: the server could not find the requested resource
INFO[0020] Checking API resource [orders/status]
INFO[0021] Checking API resource [clusterrepos]
WARN[0021] Can't build dynamic client for [clusterrepos]: the server could not find the requested resource
INFO[0021] Checking API resource [clusterrepos/status]
INFO[0021] Checking API resource [apps]
WARN[0021] Can't build dynamic client for [apps]: the server could not find the requested resource
INFO[0021] Checking API resource [apps/status]
INFO[0021] Checking API resource [operations]
WARN[0021] Can't build dynamic client for [operations]: the server could not find the requested resource
INFO[0021] Checking API resource [operations/status]
INFO[0022] Checking API resource [clusterissuers]
WARN[0022] Can't build dynamic client for [clusterissuers]: the server could not find the requested resource
INFO[0022] Checking API resource [clusterissuers/status]
INFO[0022] Checking API resource [certificaterequests]
WARN[0022] Can't build dynamic client for [certificaterequests]: the server could not find the requested resource
INFO[0022] Checking API resource [certificaterequests/status]
INFO[0022] Checking API resource [certificates]
WARN[0022] Can't build dynamic client for [certificates]: the server could not find the requested resource
INFO[0022] Checking API resource [certificates/status]
INFO[0022] Checking API resource [issuers]
WARN[0022] Can't build dynamic client for [issuers]: the server could not find the requested resource
INFO[0022] Checking API resource [issuers/status]
INFO[0023] Checking API resource [gitjobs]
WARN[0023] Can't build dynamic client for [gitjobs]: the server could not find the requested resource
INFO[0023] Checking API resource [gitjobs/status]
INFO[0023] Checking API resource [prometheusrules]
WARN[0023] Can't build dynamic client for [prometheusrules]: the server could not find the requested resource
INFO[0023] Checking API resource [thanosrulers]
WARN[0024] Can't build dynamic client for [thanosrulers]: the server could not find the requested resource
INFO[0024] Checking API resource [alertmanagers]
WARN[0024] Can't build dynamic client for [alertmanagers]: the server could not find the requested resource
INFO[0024] Checking API resource [podmonitors]
WARN[0024] Can't build dynamic client for [podmonitors]: the server could not find the requested resource
INFO[0024] Checking API resource [probes]
WARN[0024] Can't build dynamic client for [probes]: the server could not find the requested resource
INFO[0024] Checking API resource [servicemonitors]
WARN[0024] Can't build dynamic client for [servicemonitors]: the server could not find the requested resource
INFO[0024] Checking API resource [prometheuses]
WARN[0025] Can't build dynamic client for [prometheuses]: the server could not find the requested resource
INFO[0025] Checking API resource [projects]
WARN[0025] Can't build dynamic client for [projects]: the server could not find the requested resource
INFO[0025] Checking API resource [projects/status]
INFO[0025] Checking API resource [roletemplates]
WARN[0025] Can't build dynamic client for [roletemplates]: the server could not find the requested resource
INFO[0025] Checking API resource [roletemplates/status]
INFO[0025] Checking API resource [clusters]
WARN[0025] Can't build dynamic client for [clusters]: the server could not find the requested resource
INFO[0025] Checking API resource [clusters/status]
INFO[0025] Checking API resource [roletemplatebindings]
WARN[0026] Can't build dynamic client for [roletemplatebindings]: the server could not find the requested resource
INFO[0026] Checking API resource [roletemplatebindings/status]
INFO[0026] Checking API resource [clusters]
WARN[0026] Can't build dynamic client for [clusters]: the server could not find the requested resource
INFO[0026] Checking API resource [clusters/status]
INFO[0026] Checking API resource [gitrepos]
WARN[0026] Can't build dynamic client for [gitrepos]: the server could not find the requested resource
INFO[0026] Checking API resource [gitrepos/status]
INFO[0026] Checking API resource [bundles]
WARN[0026] Can't build dynamic client for [bundles]: the server could not find the requested resource
INFO[0026] Checking API resource [bundles/status]
INFO[0026] Checking API resource [clusterregistrations]
WARN[0027] Can't build dynamic client for [clusterregistrations]: the server could not find the requested resource
INFO[0027] Checking API resource [clusterregistrations/status]
INFO[0027] Checking API resource [clusterregistrationtokens]
WARN[0027] Can't build dynamic client for [clusterregistrationtokens]: the server could not find the requested resource
INFO[0027] Checking API resource [clusterregistrationtokens/status]
INFO[0027] Checking API resource [bundledeployments]
WARN[0027] Can't build dynamic client for [bundledeployments]: the server could not find the requested resource
INFO[0027] Checking API resource [bundledeployments/status]
INFO[0027] Checking API resource [gitreporestrictions]
WARN[0027] Can't build dynamic client for [gitreporestrictions]: the server could not find the requested resource
INFO[0027] Checking API resource [gitreporestrictions/status]
INFO[0027] Checking API resource [contents]
WARN[0028] Can't build dynamic client for [contents]: the server could not find the requested resource
INFO[0028] Checking API resource [clustergroups]
WARN[0028] Can't build dynamic client for [clustergroups]: the server could not find the requested resource
INFO[0028] Checking API resource [clustergroups/status]
INFO[0028] Checking API resource [bundlenamespacemappings]
WARN[0028] Can't build dynamic client for [bundlenamespacemappings]: the server could not find the requested resource
INFO[0028] Checking API resource [bundlenamespacemappings/status]
INFO[0028] Checking API resource [clusters]
WARN[0028] Can't build dynamic client for [clusters]: the server could not find the requested resource
INFO[0028] Checking API resource [clusters/status]
INFO[0028] Checking API resource [roles]
WARN[0029] Can't build dynamic client for [roles]: the server could not find the requested resource
INFO[0029] Checking API resource [roles/status]
INFO[0029] Checking API resource [roles/scale]
INFO[0029] Checking API resource [replicasettemplates]
WARN[0029] Can't build dynamic client for [replicasettemplates]: the server could not find the requested resource
INFO[0029] Checking API resource [replicasettemplates/status]
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x148dedc]
goroutine 1 [running]:
github.com/rancher/system-tools/vendor/github.com/urfave/cli.HandleAction.func1(0xc000f3da30)
/go/src/github.com/rancher/system-tools/vendor/github.com/urfave/cli/app.go:472 +0x278
panic(0x1681e40, 0x2adb0e0)
/usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/rancher/system-tools/remove.getGroupAPIResourceList(0xc000198140, 0x0, 0x0, 0x0, 0x0, 0xc000e47580, 0x17, 0xc000ea7ac0, 0x1, 0x1, ...)
/go/src/github.com/rancher/system-tools/remove/remove.go:425 +0x9c
github.com/rancher/system-tools/remove.removeCattleAnnotationsFinalizersLabels(0xc000198140, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/remove/remove.go:476 +0x1b2
github.com/rancher/system-tools/remove.DoRemoveRancher.func4(0x0, 0x0)
/go/src/github.com/rancher/system-tools/remove/remove.go:93 +0x2a
github.com/rancher/system-tools/utils.RetryWithCount(0xc000f3d400, 0x3, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/utils/utils.go:53 +0x61
github.com/rancher/system-tools/remove.DoRemoveRancher(0xc000198140, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/remove/remove.go:92 +0x373
reflect.Value.call(0x15fa8e0, 0x1995388, 0x13, 0x18df3cc, 0x4, 0xc0007a59d0, 0x1, 0x1, 0xc00067c080, 0xc000af43b0, ...)
/usr/local/go/src/reflect/value.go:447 +0x449
reflect.Value.Call(0x15fa8e0, 0x1995388, 0x13, 0xc0007a59d0, 0x1, 0x1, 0x5, 0x4, 0xc000686b40)
/usr/local/go/src/reflect/value.go:308 +0xa4
github.com/rancher/system-tools/vendor/github.com/urfave/cli.HandleAction(0x15fa8e0, 0x1995388, 0xc000198140, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/vendor/github.com/urfave/cli/app.go:481 +0x1fb
github.com/rancher/system-tools/vendor/github.com/urfave/cli.Command.Run(0x18e1d91, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1911f67, 0x2a, 0x0, ...)
/go/src/github.com/rancher/system-tools/vendor/github.com/urfave/cli/command.go:186 +0x8f6
github.com/rancher/system-tools/vendor/github.com/urfave/cli.(*App).Run(0xc0003a6d80, 0xc0000381e0, 0x6, 0x6, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/vendor/github.com/urfave/cli/app.go:235 +0x52e
main.main()
/go/src/github.com/rancher/system-tools/main.go:84 +0x5c3
</code></pre>
<p>How to uninstall it manually?</p>
| Kokizzu | <p>Remove the <code>cattle-system</code> namespace, where Rancher installs all of its components:
<code>kubectl delete ns cattle-system</code></p>
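<p>If the namespace gets stuck in <code>Terminating</code> because of leftover finalizers, a rough sketch of forcing it through (assuming <code>jq</code> is available on your workstation):</p>
<pre><code># only needed if "kubectl delete ns cattle-system" hangs in Terminating
kubectl get ns cattle-system -o json \
  | jq '.spec.finalizers=[]' \
  | kubectl replace --raw "/api/v1/namespaces/cattle-system/finalize" -f -
</code></pre>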
| Sachin Shaji |
<p>I'm facing a problem to configure an ingress for traefik.</p>
<p>The design is simple :</p>
<p>I want to be able to reach ports 8888 and 8080 from the host via a CI/CD flow with argocd and a simple docker application, all of it embedded in a cluster created with k3d. I thought that the easiest way to do that was to execute something like this:</p>
<pre><code>k3d cluster create -p 8888:8888@loadbalancer -p 8080:80@loadbalancer
</code></pre>
<p>I installed everything I need (argocd cli, kubectl...) and defined a "naive" ingress for the application and for argocd.</p>
<p>For argocd :</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-ingress
namespace: argocd
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: argocd.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 80
</code></pre>
<p>For the application :</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: wil-app-ingress
namespace: dev
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: localhost
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wil-app-svc
port:
number: 8888
</code></pre>
<p>For argocd, it seems to work perfectly fine : I'm able to reach the ui, connect to it etc.</p>
<p>But for the app, I can do nothing.</p>
<p>Indeed, if I try to curl to localhost:8888 I have this response :</p>
<blockquote>
<p>empty reply from server</p>
</blockquote>
<p>When I'm trying to know how the ingresses are defined, I have this :</p>
<pre class="lang-bash prettyprint-override"><code>john@doe:~$ kubectl describe ing wil-app-ingress -n dev
Name: wil-app-ingress
Labels: <none>
Namespace: dev
Address: 172.18.0.2
Ingress Class: traefik
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
localhost
/ wil-app-svc:8888 (10.42.0.17:8888)
Annotations: ingress.kubernetes.io/ssl-redirect: false
Events: <none>
john@doe:~$ kubectl describe ing argocd-ingress -n argocd
Name: argocd-ingress
Labels: <none>
Namespace: argocd
Address: 172.18.0.2
Ingress Class: traefik
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
argocd.local
/ argocd-server:80 (10.42.0.16:8080)
Annotations: ingress.kubernetes.io/ssl-redirect: false
Events: <none>
</code></pre>
<p>Synthetically :</p>
<pre class="lang-bash prettyprint-override"><code>john@doe:~$ kubectl get ing --all-namespaces
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
argocd argocd-ingress traefik argocd.local 172.18.0.2 80 16m
dev wil-app-ingress traefik localhost 172.18.0.2 80 14m
</code></pre>
<p>It seems that traefik points to port 80 for both ingresses. If I delete the ingress for argocd and curl localhost:8080, I'm able to reach the app! It is as if traefik redirected all the traffic to the same port (here, 80 and 8080 on the host).</p>
<p>I'm a noob in kubernetes and I can't figure out why this happens. Sorry if I use the wrong term for this or that notion, I'm a beginner and it's quite complicated.</p>
<p>Can someone explain to me why I have this problem? I think it may be related to traefik and its default behaviour, but I was not able to find anything clear about this. Thanks!</p>
| Batche | <p>Well, there is a solution. It does not answer my initial question (how to get the application on host port 8888 via an ingress), but it makes it possible to reach the app and argocd without trouble.</p>
<p>To do so, I exposed my service following this : <a href="https://k3d.io/v5.0.1/usage/exposing_services/#2-via-nodeport" rel="nofollow noreferrer">https://k3d.io/v5.0.1/usage/exposing_services/#2-via-nodeport</a></p>
<p>It's really simple :</p>
<pre class="lang-bash prettyprint-override"><code>k3d cluster create p3-iot -p "8080:80@loadbalancer" -p "8888:30080@agent:0" --agents 2
</code></pre>
<p>Then, we have to create a service of type NodePort:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
labels:
app: wil-app
name: wil-app-svc
spec:
ports:
- name: 8888-8888
nodePort: 30080
port: 8888
protocol: TCP
targetPort: 8888
selector:
app: wil-app
type: NodePort
</code></pre>
<p>This way, we do not have to create an ingress for the application: k3d exposes the service directly through the nodePort of the NodePort service. I don't think it's a best practice, but it works. Another solution is to use nginx as an ingress controller, which is worth considering because the way to configure nginx for such a project is straightforward.</p>
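<p>As a quick check (a sketch, assuming the cluster was created with the port mapping above, so host port 8888 forwards to nodePort 30080):</p>
<pre class="lang-bash prettyprint-override"><code>curl http://localhost:8888
</code></pre>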
<p>The rest is unchanged: we can reach the argocd UI and reach the application.</p>
<p>If someone can explain why my previous way of achieving this did not work, I would be glad to hear it.</p>
| Batche |
<p>I am using GKE cluster with master version <code>1.15.9-gke.24</code> and linkerd2 as proxy for my <code>gRPC</code> services. </p>
<p>From my cluster I saw calico node vertical autoscaler pod is in <code>CrashLoopBackOff</code> state. From log I see following</p>
<pre><code>$ kubectl logs -f calico-node-vertical-autoscaler-7767597775-rd68v -n kube-system
I0503 10:36:55.586271 1 autoscaler.go:46] Scaling namespace: kube-system, target: daemonset/calico-node
E0503 10:36:55.720025 1 autoscaler.go:49] unknown target kind: Tap
</code></pre>
<p>According to <a href="https://github.com/linkerd/linkerd2/issues/3388#issuecomment-608169755" rel="nofollow noreferrer">this</a> I need to update from <code>k8s.gcr.io/cpvpa-amd64:v0.8.1</code> to <code>k8s.gcr.io/cpvpa-amd64:v0.8.2</code>. I edited the deployment and replaced the version, but it seems GKE resets the image version back to <code>v0.8.1</code>. How can I modify the version without upgrading the cluster?</p>
<p>Additional information:</p>
<pre><code> $ linkerd version
Client version: stable-2.7.1
Server version: stable-2.7.1
</code></pre>
| hoque | <blockquote>
<p>I edited the deployment and replace the version. But it seems gke reset the image version to v0.8.1. How can I modify the version without uprading cluster?</p>
</blockquote>
<p>When you tried to edit the manifest to upgrade the cpvpa image to 0.8.2 it got reverted to 0.8.1, since GKE is a managed cluster; this is intended behaviour.</p>
<ul>
<li><p>Any changes made to a <code>kube-system</code> object will be automatically reverted; this happens because the addon-manager performs the necessary actions to preserve its state.</p></li>
<li><p>Direct manipulation of these addons through the apiserver is discouraged because the addon-manager will bring them back to their original state.</p></li>
</ul>
<p><strong>Upgrading the cluster Version once the release of 0.8.2 is available on the <a href="https://cloud.google.com/kubernetes-engine/docs/release-notes" rel="nofollow noreferrer">GKE Release Notes</a> page is the only recommended way to get it.</strong></p>
<ul>
<li>As a workaround I suggest you try setting the <code>priorityClass</code> and <code>priorityClassName</code> configurations in Linkerd configuration as mentioned in the <a href="https://github.com/linkerd/linkerd2/issues/3388#issuecomment-553486915" rel="nofollow noreferrer">Github Issue</a> as the solution while 0.8.2 is not available.</li>
</ul>
<p>If you need further help let me know in the comments!</p>
| Will R.O.F. |
<p>Hope you are all well,</p>
<p>I am currently trying to roll out the <a href="https://github.com/ansible/awx-operator" rel="noreferrer">awx-operator</a> onto a Kubernetes cluster and I am running into a few issues reaching the service from outside of the cluster.</p>
<p>Currently I have the following services set up:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx NodePort 10.102.30.6 <none> 8080:32155/TCP 110m
awx-operator NodePort 10.110.147.152 <none> 80:31867/TCP 125m
awx-operator-metrics ClusterIP 10.105.190.155 <none> 8383/TCP,8686/TCP 3h17m
awx-postgres ClusterIP None <none> 5432/TCP 3h16m
awx-service ClusterIP 10.102.86.14 <none> 80/TCP 121m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
</code></pre>
<p>I did set up a <code>NodePort</code> which is called <code>awx-operator</code>. I did attempt to create an ingress to the application. You can see that below:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: awx-ingress
spec:
rules:
- host: awx.mycompany.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: awx
port:
number: 80
</code></pre>
<p>When I create the ingress, and then run <code>kubectl describe ingress</code>, I get the following output:</p>
<pre><code>Name: awx-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
awx.mycompany.com
/ awx:80 (10.244.1.8:8080)
Annotations: <none>
Events: <none>
</code></pre>
<p>Now I am not too sure whether the <code>default-http-backend:80</code> error is a red herring, as I have seen it in a number of places and people don't seem too worried about it, but please correct me if I am wrong.</p>
<p>Please let me know whether there is anything else I can do to troubleshoot this, and I will get back to you as soon as I can.</p>
| jacklikethings | <p>You are right and the blank address is the issue here. In traditional <em>cloud</em> environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster.</p>
<p><em>Bare-metal</em> environments on the other hand lack this option, requiring from you a slightly different setup to offer the same kind of access to external consumers:</p>
<p><img src="https://kubernetes.github.io/ingress-nginx/images/baremetal/baremetal_overview.jpg" alt="Bare-metal environment" /></p>
<p>This means you have to do some additional gymnastics to make the ingress work. And you have basically two main options here (all well described <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#bare-metal-considerations" rel="nofollow noreferrer">here</a>):</p>
<ul>
<li>A pure software solution: <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb" rel="nofollow noreferrer">MetalLB</a></li>
<li>Over the <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service" rel="nofollow noreferrer">NodePort</a> service.</li>
</ul>
<p>What is happening here is that you basically create a Service of type <code>NodePort</code> with a selector that matches your ingress controller pod, and it then routes the traffic according to your ingress object:</p>
<pre class="lang-yaml prettyprint-override"><code># Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
labels:
helm.sh/chart: ingress-nginx-3.30.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.46.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
</code></pre>
<p>The full nginx deployment that contains that service can be found <a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal" rel="nofollow noreferrer">here</a>.</p>
<p>If you wish to skip the ingress, you may just use the <code>NodePort</code> service <code>awx</code> and reach it directly.</p>
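<p>For example (a sketch, assuming NODE_IP is the address of any cluster node and 32155 is the NodePort shown for your <code>awx</code> service above):</p>
<pre><code>curl http://NODE_IP:32155
</code></pre>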
| acid_fuji |
<p>Just getting started with Kubernetes. I cannot seem to connect pods running on different nodes to communicate with each other. </p>
<p>I set up a Kubernetes Cluster with Calico networking on three AWS EC2 instances (one master, two workers <a href="https://docs.projectcalico.org/reference/public-cloud/aws#routing-traffic-within-a-single-vpc-subnet" rel="nofollow noreferrer">all with src/dest check disabled as described by the Calico website</a>). Each instance is using the same Security Group with all TCP/UDP/ICMP ports open for 10.0.0.0/8 and 192.168.0.0/16 to make sure there are no blocked ports inside my cluster.</p>
<p>using a vanilla repo install </p>
<pre><code>~$ sudo apt-get install -y docker.io kubelet kubeadm kubectl
~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
</code></pre>
<p>and basic Calico install</p>
<pre><code>~$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
<p>joined two worker nodes to the cluster</p>
<pre><code> sudo kubeadm join <Master IP>:6443 --token <Token> --discovery-token-ca-cert-hash sha256:<cert hash>
</code></pre>
<p>Once up and running, I created three replicas for testing:</p>
<pre><code>~$ kubectl run pingtest --image=busybox --replicas=3 -- sleep infinity
</code></pre>
<p>two on the first node and one on the second node</p>
<pre><code>~$ kubectl get pod -l run=pingtest -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pingtest-7689dd958f-9mfgl 1/1 Running 0 15m 192.168.218.65 ip-10-78-31-198 <none> <none>
pingtest-7689dd958f-l288v 1/1 Running 0 15m 192.168.218.66 ip-10-78-31-198 <none> <none>
pingtest-7689dd958f-z2l97 1/1 Running 0 15m 192.168.237.65 ip-10-78-11-83 <none> <none>
</code></pre>
<p>log into a shell on the first pod</p>
<pre><code>~$ kubectl exec -ti pingtest-7689dd958f-9mfgl /bin/sh
</code></pre>
<p>When I ping pods on the same node everything works</p>
<pre><code>/ # ping 192.168.218.66 -c 2
PING 192.168.218.66 (192.168.218.66): 56 data bytes
64 bytes from 192.168.218.66: seq=0 ttl=63 time=0.105 ms
64 bytes from 192.168.218.66: seq=1 ttl=63 time=0.078 ms
</code></pre>
<p>but when I ping a pod on another node, no response </p>
<pre><code>/ # ping 192.168.237.65 -c 2
PING 192.168.237.65 (192.168.237.65): 56 data bytes
--- 192.168.237.65 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
</code></pre>
<p>What am I missing? What is preventing communication between the pods on different nodes?</p>
| Tim de Vries | <p>I figured out the issue. It was with the AWS configuration and some extra work you have to do in that environment. </p>
<ol>
<li>The three AWS EC2 instances must all have src/dest check <strong>disabled</strong> <a href="https://i.stack.imgur.com/NMPXN.png" rel="nofollow noreferrer">as described by the Calico website</a>.</li>
<li>For the Security Group covering your AWS instances, you must add a <strong>Custom Protocol</strong> rule (not Custom TCP or Custom UDP), select 4 (IP in IP) in the protocol column and choose the subnets covering your instances (e.g. 10.0.0.0/8, 192.168.0.0/16); a CLI sketch follows this list. Then you can use the <code>curl</code> command to address your Pods/Service IPs:
<a href="https://i.stack.imgur.com/NMPXN.png" rel="nofollow noreferrer">AWS Security Group Settings</a></li>
</ol>
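<p>A sketch of adding the same rule via the AWS CLI (the security group ID below is a placeholder; protocol number 4 is IP-in-IP, which Calico uses for its IPIP overlay):</p>
<pre><code>aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=4,IpRanges=[{CidrIp=10.0.0.0/8},{CidrIp=192.168.0.0/16}]'
</code></pre>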
| Tim de Vries |
<p>I would like to read state of K8s using µK8s, but I don't want to have rights to modify anything. How to achieve this?</p>
<p>The following will give me full access:</p>
<pre><code>$ microk8s.kubectl
Insufficient permissions to access MicroK8s. You can either try again with sudo or add the user digital to the 'microk8s' group:

    sudo usermod -a -G microk8s digital
    sudo chown -f -R digital ~/.kube

The new group will be available on the user's next login.
</code></pre>
| digital_infinity | <blockquote>
<p>on Unix/Linux we can just set appropriate file/directory access
permission - just <code>rx</code>, decrease shell limits (like max memory/open
file descriptors), decrease process priority (<code>nice -19</code>). We are
looking for similar solution for K8S</p>
</blockquote>
<p>This kind of solution in Kubernetes is handled via <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> (Role-based access control). RBAC prevents unauthorized users from viewing or modifying the cluster state. Because the API server exposes a REST interface, users perform actions by sending HTTP requests to the server. Users authenticate themselves by including credentials in the request (an authentication token, username and password, or a client certificate).</p>
<p>As with any REST client you get <code>GET</code>, <code>POST</code>, <code>PUT</code>, <code>DELETE</code> etc. These are sent to specific URL paths that represent specific REST API resources (Pods, Services, Deployments and so on).</p>
<p>RBAC auth is configured with two groups:</p>
<ul>
<li>Roles and ClusterRoles - this specify which actions/verbs can be performed</li>
<li>RoleBinding and ClusterRoleBindings - this bind the above roles to a user, group or service account.</li>
</ul>
<p>As you might already have figured out, a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#clusterrole-example" rel="nofollow noreferrer">ClusterRole</a> is the one you are looking for. This will allow you to restrict a specific user or group across the cluster.
In the example below we are creating a <code>ClusterRole</code> that can only list pods. The namespace is omitted since ClusterRoles are not namespaced.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-viewer
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list"]
</code></pre>
<p>This permission has to be bound then via <code>ClusterRoleBinding</code> :</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to list pods in any namespace.
kind: ClusterRoleBinding
metadata:
name: list-pods-global
subjects:
- kind: Group
name: manager # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: pod-viewer
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Because you don't have enough permissions on your own, you have to reach out to the appropriate person who manages them to create a user for you that is bound to the <code>view</code> ClusterRole. The view role should already be predefined in the cluster (<code>kubectl get clusterrole view</code>).</p>
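<p>As an illustration only, a binding to the built-in <code>view</code> ClusterRole could look like the sketch below (using the user name <code>digital</code> from your error message as an assumed example; the exact subject kind depends on how your users are authenticated):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: digital-read-only
subjects:
- kind: User
  name: digital        # case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view           # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
</code></pre>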
<p>If you wish to read more <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Kubernetes docs</a> explains well its whole concept of authorization.</p>
| acid_fuji |
<p>When I try to <code>kubeadm reset -f</code>, it report the etcd server can not be removed, you must remove it manually.</p>
<pre><code>failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints. Please manually remove this etcd member using etcdctl
</code></pre>
| ccd | <p>Is this a control-plane (master) node?</p>
<p>If not: simply running <code>kubectl delete node <node_id></code> should suffice (see reference below). This will update etcd and take care of the rest of cleanup. You'll still have to diagnose what caused the node to fail to reset in the first place if you're hoping to re-add it... but that's a separate problem. See discussion e.g., <a href="https://github.com/kubernetes/kubeadm/issues/1300#issuecomment-548569829" rel="nofollow noreferrer">here</a> on a related issue:</p>
<blockquote>
<p>If the node is hard failed and you cannot call kubeadm reset on it, it requires manual steps. you'd have to:</p>
<ol>
<li><p>Remove the control-plane IP from the kubeadm-config CM ClusterStatus</p>
</li>
<li><p>Remove the etcd member using etcdctl</p>
</li>
<li><p>Delete the Node object using kubectl (if you don't want the Node around anymore)</p>
</li>
</ol>
<p>1 and 2 apply only to control-plane nodes.</p>
</blockquote>
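<p>For reference, a rough sketch of the etcdctl step (step 2 above) for a kubeadm stacked-etcd control plane; pod names and certificate paths below are the kubeadm defaults and may differ in your setup:</p>
<pre><code># list the members from a healthy control-plane node and note the ID of the dead one
kubectl -n kube-system exec etcd-<healthy-master-name> -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list

# remove the failed member by its ID
kubectl -n kube-system exec etcd-<healthy-master-name> -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member remove <member-id>
</code></pre>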
<p>Hope this helps — if you <em>are</em> dealing with a master node, I'd be happy to include examples of what commands to run.</p>
| Jesse Stuart |
<p>I'm creating a web application that comprises a <em>React</em> <strong>frontend</strong> and a <em>node.js</em> (express) <strong>server</strong>. The frontend makes an internal api call to the express server and the express server then makes an external api call to gather some data. The frontend and the server are in different containers within the same Kubernetes pod.</p>
<p>The <strong>frontend service</strong> is an <code>nginx:1.14.0-alpine</code> image. The static files are built (<code>npm build</code>) in a CI pipeline and the <code>build</code> directory is copied to the image during <code>docker build</code>. The <code>package.json</code> contains a proxy key, <code>"proxy": "http://localhost:8080"</code>, that routes traffic from the app to <code>localhost:8080</code> - which is the port that the express server is listening on for an internal api call. I think the <code>proxy</code> key will have no bearing once the files are packaged into static files and served up onto an <code>nginx</code> image?</p>
<p>When running locally, i.e. running <code>npm start</code> instead of <code>npm build</code>, this all works. The express server picks up the api requests sent out by the frontend on port <code>8080</code>.</p>
<p>The <strong>express server</strong> is a simple service that adds authentication to the api call that the frontend makes, that is all. But the authentication relies on secrets as environment variables, making them incompatible with React. The server is started by running <code>node server.js</code>; locally the server service successfully listens (<code>app.listen(8080)</code>) to the api calls from the React frontend, adds some authentication to the request, then makes the request to the external api and passes the response back to the frontend once it is received.</p>
<p>In production, in a Kubernetes pod, things aren't so simple. The traffic from the <strong>React frontend</strong> proxying through the <strong>node server</strong> needs to be handled by kubernetes now, and I haven't been able to figure it out.</p>
<p>It may be important to note that there are no circumstances in which the <strong>frontend</strong> will make any external api calls directly, they will all go through the <strong>server</strong>.</p>
<p><code>React frontend Dockerfile</code></p>
<pre><code>FROM nginx:1.14.0-alpine
# Copy static files
COPY client/build/ /usr/share/nginx/html/
# The rest has been redacted for brevity but is just copying of favicons etc.
</code></pre>
<p><code>Express Node Server</code></p>
<pre><code>FROM node:10.16.2-alpine
# Create app directory
WORKDIR /app
# Install app dependencies
COPY server/package*.json .
RUN npm install
EXPOSE 8080
CMD [ "node", "server.js" ]
</code></pre>
<p><code>Kubernetes Manifest</code> - Redacted for brevity</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
containers:
- name: frontend
image: frontend-image:1.0.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
volumeMounts:
- mountPath: /etc/nginx/conf.d/default.conf
name: config-dir
subPath: my-config.conf
- name: server
image: server-image:1.0.0
imagePullPolicy: IfNotPresent
volumes:
- name: config-tmpl
configMap:
name: app-config
defaultMode: 0744
- name: my-config-directory
emptyDir: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: my-namespace
data:
my-conf.conf: |-
server {
listen 80;
server_name _;
location api/ {
proxy_pass http://127.0.0.1:8080/;
}
.....
</code></pre>
| Grant | <p>In Kubernetes, a pod shares the same network interface with all containers inside it, so for the frontend container <strong>localhost:8080</strong> is the backend, and for the backend container <strong>localhost:80</strong> is the frontend.
As with any containerized application, you should ensure that it listens on interfaces other than 127.0.0.1 if you want traffic from outside.</p>
<p>Migrating an application from a single server - where every application talks over <strong>127.0.0.1</strong> - to a pod is intended to be as simple as on a dedicated machine.</p>
<p>Your <strong>nginx.conf</strong> looks a little bit strange; it should be <code>location /api/ {</code>.</p>
<p>Here is functional example:</p>
<p><strong>nginx.conf</strong></p>
<pre><code>server {
server_name localhost;
listen 0.0.0.0:80;
error_page 500 502 503 504 /50x.html;
location / {
root html;
}
location /api/ {
proxy_pass http://127.0.0.1:8080/;
}
}
</code></pre>
<p>Create a <strong>ConfigMap</strong> from this file:</p>
<pre><code>kubectl create configmap nginx --from-file=nginx.conf
</code></pre>
<p><strong>app.js</strong></p>
<pre><code>const express = require('express')
const app = express()
const port = 8080
app.get('/', (req, res) => res.send('Hello from Express!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM alpine
RUN apk add nodejs npm && mkdir -p /app
COPY . /app
WORKDIR /app
RUN npm install express --save
EXPOSE 8080
CMD node app.js
</code></pre>
<p>You can build this image or use the one I've made <strong>hectorvido/express</strong>.</p>
<p>Then, create the <strong>pod</strong> YAML definition:</p>
<p><strong>pod.yml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: front-back
labels:
app: front-back
spec:
containers:
- name: front
image: nginx:alpine
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d/
ports:
- containerPort: 80
- name: back
image: hectorvido/express
ports:
- containerPort: 8080
volumes:
- name: nginx-conf
configMap:
name: nginx
</code></pre>
<p>Put on the cluster:</p>
<pre><code>kubectl create -f pod.yml
</code></pre>
<p>Get the IP:</p>
<pre><code>kubectl get pods -o wide
</code></pre>
<p>I tested with Minikube, so if the pod IP was <em>172.17.0.7</em> I have to do:</p>
<pre class="lang-sh prettyprint-override"><code>minikube ssh
curl -L 172.17.0.7/api
</code></pre>
<p>If you had an ingress in front, it would still work. I enabled the nginx ingress controller on minikube, so we need to create a service and an ingress:</p>
<p><strong>service</strong></p>
<pre class="lang-sh prettyprint-override"><code>kubectl expose pod front-back --port 80
</code></pre>
<p><strong>ingress.yml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: front-back
spec:
rules:
- host: fb.192-168-39-163.nip.io # minikube ip
http:
paths:
- path: /
backend:
serviceName: front-back
servicePort: 80
</code></pre>
<p>The test still works:</p>
<pre class="lang-sh prettyprint-override"><code>curl -vL http://fb.192-168-39-163.nip.io/api/
</code></pre>
| Hector Vido |
<p>I have been playing around with minikube and after a set of operations, the output of <code>kubectl get pod -w</code> is like this-</p>
<pre><code>
nginx 1/1 Running 2 10m
nginx 1/1 Running 3 10m
nginx 0/1 Completed 2 10m
nginx 0/1 CrashLoopBackOff 2 11m
nginx 1/1 Running 3 11m
nginx 1/1 Running 3 12m
</code></pre>
<p>I don't understand the count shown at lines 3 and 4. What does the restart count convey exactly?</p>
| Shadja Chaudhari | <h3>About the <code>CrashLoopBackOff</code> Status:</h3>
<p>A <code>CrashloopBackOff</code> means that you have a pod starting, crashing, starting again, and then crashing again.</p>
<p>Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, and is reset after ten minutes of successful execution.</p>
<p><code>CrashLoopBackOff</code> events occur for different reasons, most of the cases related to the following:
</p>
<ul>
<li>The application inside the container keeps crashing</li>
<li>Some parameter of the pod or container has been configured incorrectly</li>
<li>An error was made during the deployment</li>
</ul>
<p>Whenever you face a <code>CrashLoopBackOff</code> do a <code>kubectl describe</code> to investigate:</p>
<p><code>kubectl describe pod POD_NAME --namespace NAMESPACE_NAME</code> </p>
<pre><code>user@minikube:~$ kubectl describe pod ubuntu-5d4bb4fd84-8gl67 --namespace default
Name: ubuntu-5d4bb4fd84-8gl67
Namespace: default
Priority: 0
Node: minikube/192.168.39.216
Start Time: Thu, 09 Jan 2020 09:51:03 +0000
Labels: app=ubuntu
pod-template-hash=5d4bb4fd84
Status: Running
Controlled By: ReplicaSet/ubuntu-5d4bb4fd84
Containers:
ubuntu:
Container ID: docker://c4c0295e1e050b5e395fc7b368a8170f863159879821dd2562bc2938d17fc6fc
Image: ubuntu
Image ID: docker-pullable://ubuntu@sha256:250cc6f3f3ffc5cdaa9d8f4946ac79821aafb4d3afc93928f0de9336eba21aa4
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 09 Jan 2020 09:54:37 +0000
Finished: Thu, 09 Jan 2020 09:54:37 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 09 Jan 2020 09:53:05 +0000
Finished: Thu, 09 Jan 2020 09:53:05 +0000
Ready: False
Restart Count: 5
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxst (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-xxxst:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xxxst
Optional: false
QoS Class: BestEffort
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m16s default-scheduler Successfully assigned default/ubuntu-5d4bb4fd84-8gl67 to minikube
Normal Created 5m59s (x4 over 6m52s) kubelet, minikube Created container ubuntu
Normal Started 5m58s (x4 over 6m52s) kubelet, minikube Started container ubuntu
Normal Pulling 5m17s (x5 over 7m5s) kubelet, minikube Pulling image "ubuntu"
Normal Pulled 5m15s (x5 over 6m52s) kubelet, minikube Successfully pulled image "ubuntu"
Warning BackOff 2m2s (x24 over 6m43s) kubelet, minikube Back-off restarting failed container
</code></pre>
<p>The <code>Events</code> section will provide you with a detailed explanation of what happened.</p>
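<p>It can also help to look at the logs of the previous, crashed container instance (same placeholders as above):</p>
<pre><code>kubectl logs POD_NAME --previous --namespace NAMESPACE_NAME
</code></pre>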
| Will R.O.F. |
<p><strong>My set up</strong></p>
<p>I have a one-physical-node K8s cluster where I tainted the master node so it can also act as a worker. The node runs CentOS 7 with a total of 512 GB of memory. I am limiting my experiments to a one-node cluster; once a solution is found, I will test it on my small-scale k8s cluster where master and worker services are on separate nodes.</p>
<p><strong>What I am trying to do</strong></p>
<p>I want to monitor pod level resource utilization (CPU and Memory). I am launching a pod which consumed memory at rate of 1GBPS; in around 100seconds, memory utilization reaches 100GB and at that point application reaches steady state. From that point, it keep running until killed with a trigger.</p>
<p><strong>Where I am right now with this</strong></p>
<p>After launching the k8s metrics server, I am able to do <code>kubectl top pods</code> and it shows per-pod CPU and memory utilization. However these utilization numbers are not updated frequently. I tried to measure how long k8s takes to update this telemetry, and the sampling interval appears to be close to 1 minute or 60 seconds.</p>
<p>I tried looking into <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/</a> to figure out the various sampling intervals. There are a few parameters which could impact the telemetry sampling rate, but they are set to ~20s (defaults) at most. I am not changing any Kubelet settings.</p>
<p><strong>My Question</strong></p>
<p>Why does it take around a minute for <code>kubectl top pods</code> to update resource utilization numbers? How can I reduce this interval and have more frequent updates?</p>
| ankit patel | <ul>
<li><p><strong>Why does it take around a minute for <code>kubectl top pods</code> to update resource utilization numbers?</strong></p>
<p>It's because of the metrics server's default resolution, which is set to 60s.</p>
</li>
<li><p><strong>How can I reduce this interval and have frequent updates?</strong></p>
<p>You can change the resolution with the <code>--metric-resolution=<duration></code> flag.
It is however not recommended to set values below 15s, as this is the resolution of metrics calculated by the Kubelet. A sketch of patching this in place is shown after this list.</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
containers:
- command:
- /metrics-server
- --metric-resolution=15s
</code></pre>
<p>Reference: <a href="https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-often-metrics-are-scraped" rel="nofollow noreferrer">How often metrics are scraped</a></p>
</li>
</ul>
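<p>A possible way to apply the flag without editing the manifest by hand is a JSON patch against the metrics-server Deployment (a sketch, assuming the default deployment name and namespace and that the container already has an <code>args</code> list):</p>
<pre><code>kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--metric-resolution=15s"}]'
</code></pre>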
| acid_fuji |
<p>I have a simple working NetworkPolicy looking like this</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: monitoring-network-policy-prometheus-jbn
namespace: monitoring
spec:
podSelector:
matchLabels:
app: prometheus
policyTypes:
- Egress
egress:
- to:
ports:
- port: 61678
</code></pre>
<p>But now I want to restrict this a bit more. Instead of allowing egress to all destinations on port 61678 from all pods with label <code>app: prometheus</code> I want to allow only traffic to pods with label <code>k8s-app: aws-node</code></p>
<p>So I change the policy to:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: monitoring-network-policy-prometheus-jbn
namespace: monitoring
spec:
podSelector:
matchLabels:
app: prometheus
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
k8s-app: aws-node
</code></pre>
<p>According to <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a> a policy that looks like this</p>
<pre><code> ...
ingress:
- from:
- namespaceSelector:
matchLabels:
user: alice
- podSelector:
matchLabels:
role: client
...
</code></pre>
<p>is described as <code> allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.</code></p>
<p>So I would think that this would match a pod with the label <code>k8s-app: aws-node</code>, which is located in the <code>kube-system</code> namespace, on any port. But when I try to connect to a pod with that label I get a timeout.</p>
<p>Here is the pod I am connecting to</p>
<pre><code> kubectl get pods -n kube-system -l k8s-app=aws-node
NAME READY STATUS RESTARTS AGE
aws-node-ngmnd 1/1 Running 0 46h
</code></pre>
<p>I am using AWS EKS with Calio network plugin.</p>
<p>What am I missing here?</p>
| Johnathan | <p>This is happening because you omitted the <code>namespaceSelector</code> in your manifest, and by default, when <code>namespaceSelector</code> is not present, the system will select the Pods matching <code>PodSelector</code> in the policy's own namespace.</p>
<p>See here:</p>
<blockquote>
<p><strong>podSelector</strong><br />
<em>This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects
all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer
as a whole selects the Pods matching PodSelector in the Namespaces
selected by NamespaceSelector. <strong>Otherwise it selects the Pods matching
PodSelector in the policy's own Namespace</strong>.</em></p>
</blockquote>
<p>What can you do to solve it? You could set an empty namespace selector, as per the documentation:</p>
<blockquote>
<p><strong>namespaceSelector</strong><br />
<em>Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; <strong>if present but empty, it selects
all namespaces</strong>. If PodSelector is also set, then the
NetworkPolicyPeer as a whole selects the Pods matching PodSelector in
the Namespaces selected by NamespaceSelector. Otherwise it selects all
Pods in the Namespaces selected by NamespaceSelector.</em></p>
<p>Reference <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#networkpolicypeer-v1-networking-k8s-io" rel="noreferrer">NetworkPolicyPeer</a></p>
</blockquote>
<p>I reproduced this issue; the documentation is correct but a bit misleading about which part should in fact be empty. The empty braces should be placed after <code>matchLabels</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: monitoring-network-policy-prometheus-jbn
namespace: monitoring
spec:
podSelector:
matchLabels:
app: prometheus
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
k8s-app: aws-node
namespaceSelector:
matchLabels: {}
</code></pre>
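<p>If you want to narrow this down to the <code>kube-system</code> namespace instead of all namespaces, one option (a sketch, assuming your cluster is recent enough that namespaces automatically carry the <code>kubernetes.io/metadata.name</code> label, or that you label the namespace yourself) is:</p>
<pre class="lang-yaml prettyprint-override"><code>  egress:
  - to:
    - podSelector:
        matchLabels:
          k8s-app: aws-node
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
</code></pre>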
<p>To answer your concern about whether Calico might be causing some issues: well, in fact it is, but it is supposed to. For network policies to take effect you need to run a network plugin that enforces them.</p>
| acid_fuji |
<p>I would like to find out if openiddict (<a href="https://github.com/openiddict/openiddict-core" rel="nofollow noreferrer">https://github.com/openiddict/openiddict-core</a>) can provide
functionality equivalent to Amazon Cognito.
I plan to use the ABP OpenIddict Module (<a href="https://docs.abp.io/en/abp/6.0/Modules/OpenIddict" rel="nofollow noreferrer">https://docs.abp.io/en/abp/6.0/Modules/OpenIddict</a>),
which provides advanced authentication features like single sign-on, single log-out, and API access control. This module persists applications, scopes, and other OpenIddict-related objects to the database.</p>
<p>In this video at the time mark: Amazon EKS SaaS deep dive: A multi-tenant EKS SaaS solution
<a href="https://youtu.be/tXVLjWjEEwo?t=1250" rel="nofollow noreferrer">https://youtu.be/tXVLjWjEEwo?t=1250</a>
You can see the onboarding experience when a SaaS tenant selects to provision its infrastructure and application using EKS Kubernetes.</p>
<p>Amazon Cognito creates User pool, App ID and Custom claims for the tenant.
Can OpenIddict have equivalent functionality?</p>
<p>I would like to rebuild AWS SaaS provisioning with DigitalOcean kubernetes and abp.io framework.</p>
<p>Thank you.</p>
| Rad | <p>Short answer:
Yes, you could implement a similar solution.</p>
<p>Long answer:
Abp is an opinionated framework with a lot of best practices pre-implemented for you. You need to understand how ABP does things in the first place to understand how to extend it. For self-service onboarding you have to create your own registration process, which in turn would create a user and tenant.</p>
<p>Also, openiddict is for all intents and purposes already implemented. Creating a self-service onboarding would probably only touch the Account Module, the Tenant Management Module and the Identity Module.</p>
<p>Read the doc on the abp site:</p>
<pre><code>[https://docs.abp.io/en/abp/latest/Modules/Account][1]
https://docs.abp.io/en/abp/latest/Modules/Account
https://docs.abp.io/en/abp/latest/Modules/Tenant-Management
https://docs.abp.io/en/abp/latest/Modules/Identity
| Ralph Lavaud |
<p>I'd like to use <code>kubectl wait</code> to block in a script until a job or pod I launched has either completed or until it has failed. I wanted to do something like <code>kubectl wait --for=condition=*</code> but I can't find good documentation on what the different condition options are. <code>kubectl wait --for=condition=completed</code> doesn't seem to work for jobs in an error state.</p>
| software_intern | <p>Note that currently <code>kubectl wait</code> is marked as an experimental command on the kuberenetes documentation page, so it's likely a feature in development and not finished.<br />
Examples from <code>kubectl wait --help</code> and <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait</a> :</p>
<pre><code>Examples:
# Wait for the pod "busybox1" to contain the status condition of type "Ready".
kubectl wait --for=condition=Ready pod/busybox1
# Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command.
kubectl delete pod/busybox1
kubectl wait --for=delete pod/busybox1 --timeout=60s
</code></pre>
<p>The documentation also states the only options for the <code>--for</code> parameter are currently <code>delete</code> and <code>condition=condition-name</code></p>
<hr />
<p>It appears the <code>condition</code> field in</p>
<pre><code>kubectl wait --for condition=condition-name pod pod-name
</code></pre>
<p>refers to entries in <code>pod.status.conditions</code>.<br />
So we can launch a pod and look at the fields in <code>pod.status.conditions</code> for the kind of conditions we'd like to check.<br />
As an example I had a look at the yaml for one of my pods using <code>kubectl get pod pod-name -o yaml</code> and found the following conditions</p>
<pre class="lang-yaml prettyprint-override"><code>status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-07-25T21:14:32Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2020-07-25T21:14:41Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2020-07-25T21:14:41Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2020-07-25T21:14:32Z"
status: "True"
type: PodScheduled
</code></pre>
<p>So we can create pods and block until any of these conditions are met by using:</p>
<pre><code>kubectl wait --for condition=Initialized pod my-pod-name
kubectl wait --for condition=Ready pod my-pod-name
kubectl wait --for condition=ContainersReady pod my-pod-name
kubectl wait --for condition=PodScheduled pod my-pod-name
</code></pre>
<hr />
<p>Similarly for jobs:<br />
I created a job, waited for it to complete, then looked at it's yaml and found the following</p>
<pre class="lang-yaml prettyprint-override"><code>status:
completionTime: "2020-07-28T17:24:09Z"
conditions:
- lastProbeTime: "2020-07-28T17:24:09Z"
lastTransitionTime: "2020-07-28T17:24:09Z"
status: "True"
type: Complete
startTime: "2020-07-28T17:24:06Z"
succeeded: 1
</code></pre>
<p>Therefore we can use the conditions in <code>job.status.conditions</code>, such as</p>
<pre><code>kubectl wait --for condition=Complete job job-name
</code></pre>
<p>And we can of course add timeouts if we want to handle cases when the job fails to complete</p>
<pre><code>kubectl wait --for condition=Complete job job-name --timeout 60s
</code></pre>
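<p>A failed Job gets a condition of type <code>Failed</code> instead of <code>Complete</code>, so one way to block until the job has either succeeded or failed is to wait for both conditions in parallel and continue when the first wait returns (a rough bash sketch; exact behaviour of condition-name matching can vary between kubectl versions):</p>
<pre><code>kubectl wait --for condition=Complete job/job-name --timeout 300s &
kubectl wait --for condition=Failed job/job-name --timeout 300s &
# wait -n (bash 4.3+) returns as soon as the first background wait finishes
wait -n
</code></pre>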
| Mark McElroy |
<p>I'm new to Kubernetes and am setting up a raspberry pi cluster at home and I want to put Kubernetes on it. I have 4 new raspberry pi 4s with 8gb each and one old raspberry pi 2+. There are several hardware differences between the two models but I think technically I can make the old pi2 the master node to manage the pi4s. I'm wondering if this is a good idea. I don't really understand what the consequences of this would be. For instance if I was to expand my little cluster to 8 or 16 pi4s in the future, would my one pi2 be overloaded in managing the workers? I'm really trying to understand the consequences of putting lower grade hardware in control of higher grade hardware in the master/worker node relationship in Kubernetes.</p>
<p>There are three main goals that I have for this hardware. I want to recreate an entire development environment, so some VMs that would host a testing environment, a staging environment, a dev environment, and then a small production environment for hosting some toy website, and then I want to be able to host some services for myself in Kubernetes like a nas storage service, a local github repo, an externally facing plex media server, stuff like that. I think I can use k8s to containerize all of this, but how will having a pi2 as master limit me? Thanks in advance.</p>
| neogeek23 | <p>Normally, kubernetes <strong>masters</strong> use more resources because they run a lot of things and checks by default, mainly because of <strong>etcd</strong> and the <strong>apiserver</strong>. Etcd is the database that stores everything that happens in Kubernetes, and the <strong>apiserver</strong> receives all api requests from inside and outside the cluster, checks permissions, certificates and so on.</p>
<p>But this is not always true; sometimes you can have a <strong>node</strong> with a lot of heavy applications consuming much more resources than the <strong>masters</strong>.</p>
<p>There are always the recommended specs, the specific specs for our business logic and enterprise applications, and the "what we have" specs.</p>
<p>Because you can move pods between different machines, you can remove some weight from your master, no problem.</p>
| Hector Vido |
<p>I want to use ceph rbd with kubernetes.</p>
<p>I have a Kubernetes 1.9.2 and Ceph 12.2.5 cluster, and on my k8s nodes I have installed the ceph-common package.</p>
<pre><code>[root@docker09 manifest]# ceph auth get-key client.admin|base64
QVFEcmxmcGFmZXlZQ2hBQVFJWkExR0pXcS9RcXV4QmgvV3ZFWkE9PQ==
[root@docker09 manifest]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret
data:
key: QVFEcmxmcGFmZXlZQ2hBQVFJWkExR0pXcS9RcXV4QmgvV3ZFWkE9PQ==
kubectl create -f ceph-secret.yaml
</code></pre>
<p>Then:</p>
<pre><code>[root@docker09 manifest]# cat ceph-pv.yaml |grep -v "#"
apiVersion: v1
kind: PersistentVolume
metadata:
name: ceph-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
rbd:
monitors:
- 10.211.121.61:6789
- 10.211.121.62:6789
- 10.211.121.63:6789
pool: rbd
image: ceph-image
user: admin
secretRef:
name: ceph-secret
fsType: ext4
readOnly: false
persistentVolumeReclaimPolicy: Recycle
[root@docker09 manifest]# rbd info ceph-image
rbd image 'ceph-image':
size 2048 MB in 512 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.341d374b0dc51
format: 2
features: layering
flags:
create_timestamp: Fri Jun 15 15:58:04 2018
[root@docker09 manifest]# cat task-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ceph-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
[root@docker09 manifest]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/ceph-pv 2Gi RWO Recycle Bound default/ceph-claim 54m
pv/host 10Gi RWO Retain Bound default/hostv 24d
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/ceph-claim Bound ceph-pv 2Gi RWO 53m
pvc/hostv Bound host 10Gi RWO 24d
</code></pre>
<p>I create a pod use this pvc .</p>
<pre><code>[root@docker09 manifest]# cat ceph-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: ceph-pod2
spec:
containers:
- name: ceph-busybox
image: busybox
command: ["sleep", "60000"]
volumeMounts:
- name: ceph-vol1
mountPath: /usr/share/busybox
readOnly: false
volumes:
- name: ceph-vol1
persistentVolumeClaim:
claimName: ceph-claim
[root@docker09 manifest]# kubectl get pod ceph-pod2 -o wide
NAME READY STATUS RESTARTS AGE IP NODE
ceph-pod2 0/1 ContainerCreating 0 14m <none> docker10
</code></pre>
<p>The pod is still in <code>ContainerCreating</code> status.</p>
<pre><code>[root@docker09 manifest]# kubectl describe pod ceph-pod2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned ceph-pod2 to docker10
Normal SuccessfulMountVolume 15m kubelet, docker10 MountVolume.SetUp succeeded for volume "default-token-85rc7"
Warning FailedMount 1m (x6 over 12m) kubelet, docker10 Unable to mount volumes for pod "ceph-pod2_default(56af9345-7073-11e8-aeb6-1c98ec29cbec)": timeout expired waiting for volumes to attach/mount for pod "default"/"ceph-pod2". list of unattached/unmounted volumes=[ceph-vol1]
</code></pre>
<p>I don't know why this is happening and I need your help... Best regards.</p>
| Damien | <p>There's no need to reinvent the wheel here. There's already a project called Rook, which deploys Ceph on Kubernetes and is super easy to run.</p>
<p><a href="https://rook.io/" rel="nofollow noreferrer">https://rook.io/</a></p>
| shaolin |
<p>Following the example on <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">kubernetes.io</a> I'm trying to connect to an external IP from within the cluster (and i need some port proxy, so not ExternalName service). However it is not working. This is the response I'm expecting</p>
<pre class="lang-bash prettyprint-override"><code>ubuntu:/opt$ curl http://216.58.208.110:80
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
</code></pre>
<p>if I use the following config</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: my-service-1
labels:
kubernetes.io/service-name: my-service
addressType: IPv4
ports:
- name: http
appProtocol: http
protocol: TCP
port: 80
endpoints:
- addresses:
- "216.58.208.110"
---
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 8888
targetPort: 80
</code></pre>
<p>I expect the following command to get the same result:</p>
<pre class="lang-bash prettyprint-override"><code>minikube kubectl -- run -it --rm --restart=Never curl --image=curlimages/curl curl -- my-service:8888
</code></pre>
<p>But I get nothing.
If I start a debian image with</p>
<pre class="lang-bash prettyprint-override"><code>minikube kubectl -- run -it --rm --restart=Never debian --image=debian:latest
</code></pre>
<p>then</p>
<pre class="lang-bash prettyprint-override"><code>apt update && apt install dnsutils curl -y && nslookup my-service && curl my-service:8888
</code></pre>
<p>gives</p>
<pre class="lang-bash prettyprint-override"><code>Server: 10.96.0.10
Address: 10.96.0.10#53
Name: my-service.default.svc.cluster.local
Address: 10.111.116.160
curl: (28) Failed to connect to my-service port 8888: Connection timed out
</code></pre>
<p>Am I missing something? Or is it not supposed to work this way?</p>
| blanNL | <p>After some trial and error it seems that if <code>ports[0].name = http</code> is set for the <code>endpointslice</code>, it stops working.</p>
<p>It also stops working when the <code>service</code>'s <code>spec.ports[0].targetPort</code> is set to <code>80</code> or <code>http</code>.</p>
<p>(It does work when <code>ports[0].name = ''</code>.)</p>
<p>Further investigation shows that it works if:</p>
<p>for <code>service</code></p>
<pre class="lang-yaml prettyprint-override"><code>spec:
ports:
- port: 8888
name: http
targetPort: http
</code></pre>
<p>for <code>endpointslice</code></p>
<pre class="lang-yaml prettyprint-override"><code>ports:
- port: 80
name: http
</code></pre>
<p>I guess if you want to name them, both the <code>service</code> and the <code>endpointslice</code> have to have corresponding <code>.name</code> values.</p>
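<p>To double-check that the manual endpoints were actually picked up with the expected port names, comparing the two objects can help:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl describe service my-service
kubectl get endpointslice -l kubernetes.io/service-name=my-service -o yaml
</code></pre>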
| blanNL |
<p>I want to set up a EKS cluster, enabling other IAM users to connect and tinker with the cluster. To do so, <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">AWS recommends patching a config map</a>, which I did. Now I want to enable the same “feature” using terraform.</p>
<p>I use Terraform's EKS provider and read <a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest#general-notes" rel="nofollow noreferrer">in the documentation</a>, in the section "Due to the plethora of tooling a...", that authentication is basically up to me.</p>
<p>Now I use the <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs" rel="nofollow noreferrer">Terraform Kubernetes provider</a> to update this config map:</p>
<pre><code>resource "kubernetes_config_map" "aws_auth" {
depends_on = [module.eks.cluster_id]
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = THATS_MY_UPDATED_CONFIG
}
</code></pre>
<p>But do not succeed and get the following error:</p>
<pre><code>2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: 2022/01/07 15:49:55 [DEBUG] Kubernetes API Response Details:
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: ---[ RESPONSE ]--------------------------------------
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: HTTP/2.0 409 Conflict
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Content-Length: 206
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Audit-Id: 15....
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Cache-Control: no-cache, private
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Content-Type: application/json
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: Date: Fri, 07 Jan 2022 14:49:55 GMT
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: X-Kubernetes-Pf-Flowschema-Uid: f43...
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: X-Kubernetes-Pf-Prioritylevel-Uid: 0054...
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5:
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: {
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "kind": "Status",
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "apiVersion": "v1",
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "metadata": {},
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "status": "Failure",
2022-01-07T15:49:55.732+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "message": "configmaps \"aws-auth\" already exists",
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "reason": "AlreadyExists",
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "details": {
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "name": "aws-auth",
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "kind": "configmaps"
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: },
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: "code": 409
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: }
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5:
2022-01-07T15:49:55.733+0100 [DEBUG] provider.terraform-provider-kubernetes_v2.7.1_x5: -----------------------------------------------------
2022-01-07T15:49:55.775+0100 [ERROR] vertex "module.main.module.eks.kubernetes_config_map.aws_auth" error: configmaps "aws-auth" already exists
╷
│ Error: configmaps "aws-auth" already exists
│
│ with module.main.module.eks.kubernetes_config_map.aws_auth,
│ on ../../modules/eks/eks-iam-map-users.tf line 44, in resource "kubernetes_config_map" "aws_auth":
│ 44: resource "kubernetes_config_map" "aws_auth" {
│
╵
</code></pre>
<p>It seems this is a controversial problem that everyone using EKS and Terraform should run into – so I ask myself how to solve it. The <a href="https://github.com/terraform-aws-modules/terraform-aws-eks/issues/852" rel="nofollow noreferrer">related issue</a>, <a href="https://stackoverflow.com/questions/51127339/terraform-kubernetes-provider-with-eks-fails-on-configmap">I</a>, is close .... I'm somewhat lost, does anyone have an idea?</p>
<p>I use the following versions:</p>
<pre><code>terraform {
required_providers {
# https://registry.terraform.io/providers/hashicorp/aws/latest
aws = {
source = "hashicorp/aws"
version = "~> 3.70"
}
# https://registry.terraform.io/providers/hashicorp/kubernetes/latest
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.7.1"
}
required_version = ">= 1.1.2"
}
...
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "18.0.3"
...
</code></pre>
| lony | <p>I use 17.24.0 and have no idea what is new with 18.0.3.</p>
<p>In my case, I followed this example:
<a href="https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v17.24.0/examples/complete/main.tf" rel="nofollow noreferrer">https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v17.24.0/examples/complete/main.tf</a></p>
<p>My main.tf</p>
<pre><code>locals {
eks_map_roles = []
eks_map_users = []
}
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
module "eks" {
source = "..."
...
eks_map_roles = local.eks_map_roles
eks_map_users = local.eks_map_users
...
}
</code></pre>
<p>To add another user, you can follow this docs: <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/</a></p>
<p>I think you should add the role there (and don't forget to remove the IAM path from the role ARN).</p>
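<p>For illustration only, a filled-in <code>eks_map_roles</code> entry in that setup might look like the sketch below. The account ID, role name and group are placeholders, and the structure follows the <code>map_roles</code> format of the terraform-aws-eks module; note that the IAM path is stripped from the ARN:</p>
<pre><code>locals {
  eks_map_roles = [
    {
      # placeholder ARN; the IAM path must be removed here
      rolearn  = "arn:aws:iam::111122223333:role/eks-admin"
      username = "eks-admin"
      groups   = ["system:masters"]
    }
  ]
}
</code></pre>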
| Franxi Hidro |
<p>I'm playing around with GitOps and ArgoCD in Redhat Openshift. My goal is to switch a worker node to an infra node.</p>
<p>I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)</p>
<p>In order to make the node an infra node, I want to add an "infra" label and remove the "worker" label. Before, the object looks like this (irrelevant labels omitted):</p>
<pre><code>apiVersion: v1
kind: Node
metadata:
labels:
node-role.kubernetes.io/infra: ""
name: node6.example.com
spec: {}
</code></pre>
<p>After applying a YAML File, it's supposed to look like that: </p>
<pre><code>apiVersion: v1
kind: Node
metadata:
labels:
node-role.kubernetes.io/worker: ""
name: node6.example.com
spec: {}
</code></pre>
<p>If I put the latter config in a file and do <code>kubectl apply -f</code>, the node has both the infra and worker labels. So adding a label or changing the value of a label is easy, but is there a way to remove a label from an object's metadata by applying a YAML file?</p>
| tvitt | <p>You can delete the label with:</p>
<pre><code>kubectl label node node6.example.com node-role.kubernetes.io/infra-
</code></pre>
<p>Then you can run <code>kubectl apply</code> again with the new label, and you will be up and running.</p>
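<p>If you want to avoid the separate <code>kubectl label ... infra-</code> step, a minimal sketch is to send a merge patch that sets the label to <code>null</code>, which deletes it; the node name below is taken from the question:</p>
<pre><code>kubectl patch node node6.example.com --type merge \
  -p '{"metadata":{"labels":{"node-role.kubernetes.io/infra":null}}}'
</code></pre>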
| Petr Kotas |
<p>I am using on-prem k8s v1.19 and Istio 1.8.0. I got stuck getting them to run together properly when I inject the Istio mesh into the <code>hub-dev</code> namespace where our microservices are running. Vault is running in the <code>dev</code> namespace.</p>
<p>The first issue I had is that the Vault agent and the Istio sidecar somehow do not run properly together, and the application is not able to init, as shown below. I tried to use the annotations below to make the Vault agent init first, but that did not solve the issue.</p>
<ul>
<li>vault.hashicorp.com/agent-init-first: true</li>
<li>vault.hashicorp.com/agent-inject: true</li>
</ul>
<p>Here are the outputs of pod status and describe</p>
<pre><code>$ kubectl get pods -n hub-dev
NAME READY STATUS RESTARTS AGE
oneapihub-mp-dev-59f7685455-5kmft 0/3 Init:0/2 0 19
$ kubectl describe pod oneapihub-mp-dev-59f7685455-5kmft -n hub-dev
Init Containers:
vault-agent-init:
Container ID:
State: Running
Started: Fri, 15 Jan 2021 13:54:30 +0300
Ready: False
istio-validation:
Container ID:
Image: reg-dhc.app.corpintra.net/i3-mirror/docker.io_istio_proxyv2:1.8.0
State: Waiting
Reason: PodInitializing
Ready: False
Containers:
oneapihub-mp:
Container ID:
State: Waiting
Reason: PodInitializing
Ready: False
istio-proxy:
Container ID:
State: Waiting
Reason: PodInitializing
Ready: False
Normal Pulled 16m kubelet, xx-kube-node07 Container image "docker.io_vault:1.5.2" already present on machine
Normal Created 16m kubelet, xx-kube-node07 Created container vault-agent-init
Normal Started 16m kubelet, xx-kube-node07 Started container vault-agent-init
</code></pre>
<p>When I tried the annotation below, it fixed the above issue, but this time when the pod starts it cannot find the <code>/vault/secrets</code> path, even though the folder does exist inside the pod and the secrets can be read afterwards, as I verified in the logs of the proxy and the application.</p>
<pre><code> - vault.hashicorp.com/agent-pre-populate: "false"
</code></pre>
<p>Here are the logs of the app, even though the folder exists:</p>
<pre><code>$ kubectl get pods -n hub-dev
oneapihub-mp-dev-78449b8cf6-qbqhn 3/3 Running 0 9m31s
$ kubectl logs -f oneapihub-mp-dev-78449b8cf6-qbqhn -n hub-dev -c oneapihub-mp
> [email protected] start:docker /usr/src/app
> node app.js
{"message""devMessage":"SECRET_READ_ERROR","data":"","exception":"ENOENT: no such file or directory, open '/vault/secrets/database'","stack":"Error: ENOENT: no such file or directory, open '/vault/secrets/database'->
/ $ cd /vault/secrets
/vault/secrets $ ls
database jenkins
/vault/secrets $
</code></pre>
<p>Here I see a PUT error which might be related to Vault itself, but then I am confused about how Vault is still able to inject the secrets.</p>
<pre><code> $ kubectl logs -f oneapihub-mp-dev-78449b8cf6-qbqhn -n hub-dev -c vault-agent
2021-01-15T11:21:13.477Z [ERROR] auth.handler: error authenticating: error="Put "http://vault.dev.svc:8200/v1/auth/kubernetes/login": dial tcp 10.254.30.115:8200: connect: connection refused" backoff=2.464775515
==> Vault agent started! Log data will stream in below:
==> Vault agent configuration:
Cgo: disabled
Log Level: info
Version: Vault v1.5.2
Version Sha: 685fdfa60d607bca069c09d2d52b6958a7a2febd
2021-01-15T11:21:15.942Z [INFO] auth.handler: authenticating
2021-01-15T11:21:15.966Z [INFO] auth.handler: authentication successful, sending token to sinks
2021-01-15T11:21:15.966Z [INFO] sink.file: token written: path=/home/vault/.vault-token
</code></pre>
<p>And lastly, when I checked the istio-proxy logs I can see that the GET and PUT requests return 200.</p>
<pre><code>$ kubectl logs -f oneapihub-mp-dev-78449b8cf6-h8s8j -n hub-dev -c istio-proxy
021-01-15T11:35:04.352221Z warning envoy filter mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
[2021-01-15T11:35:05.557Z] "PUT /v1/auth/kubernetes/login HTTP/1.1" 200 - "-" 1294 717 8 8 "-" "Go-http-client/1.1" "a082698b-d1f7-4aa5-9db5-01d86d5093ef" "vault.dev.svc:8200" "10.6.24.55:8200" outbound|8200||vault.dev.svc.cluster.local 10.6.19.226:55974 10.254.30.115:8200 10.6.19.226:60478 - default
2021-01-15T11:35:05.724833Z info Envoy proxy is ready
[2021-010.6.19.226:41888 - default
[2021-01-15T11:35:05.596Z] "GET /v1/secret/data/oneapihub-marketplace/database HTTP/1.1" 200 - "-" 0 400 0 0 "-" "Go-http-client/1.1" "d7d10c1f-c445-44d1-b0e3-bb9ae7bbc2f0" "vault.dev.svc:8200" "10.6.24.55:8200" outbound|8200||vault.dev.svc.cluster.local 10.6.19.226:55974 10.254.30.115:8200 10.6.19.226:41900 - default
[2021-01-15T11:35:05.591Z] "PUT /v1/auth/token/renew-self HTTP/1.1" 200 - "-" 15 717 8 8 "-" "Go-http-client/1.1" "56705e5c-c966-4bc8-8187-7ca5bb2b4abe" "vault.dev.svc:8200" "10.6.24.55:8200" outbound|8200||vault.dev.svc.cluster.local 10.6.19.226:37388 10.254.30.115:8200 10.6.19.226:41890 - default
[2021-01-15T11:35:05.602Z] "GET /v1/secret/data/oneapihub-marketplace/jenkins HTTP/1.1" 200 - "-" 0 284 0 0 "-" "Go-http-client/1.1" "1b6d8601-18df-4f32-8722-162aa785c476" "vault.dev.svc:8200" "10.6.24.55:8200" outbound|8200||vault.dev.svc.cluster.local 10.6.19.226:55974 10.254.30.115:8200 10.6.19.226:41902 - default
</code></pre>
| semural | <p>Adding the annotations below worked for me:</p>
<pre><code> template:
metadata:
annotations:
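        # assumption: excluding Vault's port 8200 from the sidecar's outbound capture
        # lets the vault-agent-init container reach Vault before the Envoy proxy is up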
traffic.sidecar.istio.io/excludeOutboundPorts: "8200"
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
</code></pre>
| semural |
<p>In a normal Kubernetes CI/CD process, the following steps occur:</p>
<ul>
<li>clone code from git</li>
<li>build and push docker image</li>
<li>update kubernetes deployment with updated code</li>
</ul>
<p>According to gitops definition</p>
<p><em>GitOps is a new approach to Continuous Deployment that leverages Git as a single source of truth for declarative infrastructure and applications, providing both revision and change control. With GitOps, a system is run by submitting pull requests (and subsequent merges) to achieve the desired state of the system represented in a Git repository</em></p>
<p>My understanding of GitOps is that when you update anything in Git (as this is the source of truth), the desired state of Kubernetes changes accordingly and the latest code is deployed.</p>
<p><strong>End Result of traditional CI/CD without gitops</strong>: new code is deployed as kubernetes deployment</p>
<p><strong>End Result of gitops</strong>: new code is deployed as kubernetes deployment</p>
<p>I am unable to figure out the difference. Sorry if this sounds like an odd question, but I am new to GitOps and still exploring it.</p>
<p>Thanks in advance for your response.</p>
| hanzala | <p>GitOps is essentially extending CI/CD principles beyond application code to infrastructure code. You can think of it as using Git as the single source of truth, combined with, for example, Terraform (provisioning), Ansible (configuration management) and Kubernetes (orchestration), in order to keep Git a 1:1 reflection of your infrastructure. It's nothing new, so don't be put off by the fancy term.</p>
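<p>As an illustration only (not something the answer above prescribes), the practical difference shows up in how a deployment is triggered: instead of a CI job pushing changes with <code>kubectl apply</code>, a GitOps agent such as Argo CD or Flux watches a Git repository and pulls the cluster toward it. A minimal Argo CD <code>Application</code> sketch, with placeholder repository and paths:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-manifests.git   # placeholder repo
    targetRevision: main
    path: k8s/
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}   # the controller keeps the cluster in sync with what is in Git
</code></pre>
<p>The end result is the same (new code running as a Deployment), but with GitOps the cluster state is reconciled from Git, so reverting a commit reverts the deployment.</p>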
| runlevel3 |
<p>I would like to set the <code>name</code> field in a <code>Namespace</code> resource and also replace the <code>namespace</code> field in a <code>Deployment</code> resource with the same value, for example <code>my-namespace</code>.</p>
<p>Here is <code>kustomization.json</code>:</p>
<pre><code>namespace: <NAMESPACE>
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
- op: replace
path: /metadata/name
value: <NAMESPACE>
target:
kind: Namespace
name: system
version: v1
resources:
- manager.yaml
</code></pre>
<p>and <code>manager.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
labels:
control-plane: controller-manager
name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
selector:
matchLabels:
control-plane: controller-manager
replicas: 1
template:
metadata:
labels:
control-plane: controller-manager
spec:
containers:
- command:
- /manager
args:
- --enable-leader-election
image: controller:latest
name: manager
</code></pre>
<p>I tried using <code>kustomize edit set namespace my-namespace && kustomize build</code>, but it only changes the <code>namespace</code> field in the <code>Deployment</code> object.</p>
<p>Is there a way to change both fields without using <code>sed</code>, in 'pure' <code>kustomize</code>, and without having to manually change the value in <code>kustomization.json</code>?</p>
| Fabrice Jammes | <p><strong>Is there a way to change both fields without using <code>sed</code>, in 'pure' <code>kustomize</code>, and without having to manually change the value in <code>kustomization.json</code>?</strong></p>
<p>I managed to achieve something similar with the following configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>depyloment.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.14.2
name: nginx
ports:
- containerPort: 80
</code></pre>
<p>And here is the output of the command that you used:</p>
<pre class="lang-sh prettyprint-override"><code>➜ kustomize kustomize edit set namespace my-namespace7 && kustomize build .
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: my-namespace7
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: my-namespace7
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.14.2
name: nginx
ports:
- containerPort: 80
</code></pre>
<p>What is happening here is that once you set the <code>namespace</code> globally in <code>kustomization.yaml</code>, it is applied to all your targets, which looks to me like an easier way to achieve what you want.</p>
<p>I cannot test your config without the <code>manager_patch.yaml</code> content. If you wish to go further with your approach, you will have to update the question with that file's content.</p>
| acid_fuji |
<p>I know this is a topic that comes up every once in a while. I did read many of the existing posts and answers, but wasn't able to figure it out.</p>
<p>What I need is nginx-ingress to redirect from <code>www.foo.bar</code> to just <code>foo.bar</code>. I set up a test environment on Minikube by installing the nginx ingress the minikube way: <code>minikube addons enable ingress</code> and put the following ingress manifest in place:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.allow-http: "false"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($host = 'www.foo.bar') {
return 301 https://foo.bar;
}
nginx.ingress.kubernetes.io/proxy-body-size: 600m
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.org/client-max-body-size: 600m
name: foo-ingress
spec:
rules:
- host: api.foo.bar
http:
paths:
- backend:
service:
name: identity
port:
number: 80
path: /identity
pathType: ImplementationSpecific
- backend:
service:
name: catalog
port:
number: 80
path: /catalog
pathType: ImplementationSpecific
- backend:
service:
name: marketing
port:
number: 80
path: /marketing
pathType: ImplementationSpecific
- backend:
service:
name: marketplace
port:
number: 80
path: /marketplace
pathType: ImplementationSpecific
- backend:
service:
name: notification
port:
number: 80
path: /notification
pathType: ImplementationSpecific
- backend:
service:
name: provider
port:
number: 80
path: /provider
pathType: ImplementationSpecific
- host: www.foo.bar
http:
paths:
- backend:
service:
name: frontend
port:
number: 80
path: /
pathType: ImplementationSpecific
- host: foo.bar
http:
paths:
- backend:
service:
name: frontend
port:
number: 80
path: /
pathType: ImplementationSpecific
- host: blog.foo.bar
http:
paths:
- backend:
service:
name: wordpress
port:
number: 80
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- foo.bar
- api.foo.bar
- blog.foo.bar
secretName: foo-cert
</code></pre>
<p>And it works:</p>
<pre><code>➜ ~ curl -k -v -o /dev/null https://www.foo.bar
...
> GET / HTTP/2
> Host: www.foo.bar
> user-agent: curl/7.68.0
> accept: */*
>
< HTTP/2 301
< date: Sat, 27 Mar 2021 16:37:50 GMT
< content-type: text/html
< content-length: 162
< location: https://foo.bar
</code></pre>
<p>Now, let me get to the problem: I have a production cluster on GCP (GKE) on which I installed nginx-ingress via Helm:</p>
<pre><code>helm -n nginx-ingress ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
nginx-ingress nginx-ingress 2 2021-03-27 15:05:41.229254628 +0000 UTC deployed nginx-ingress-0.8.1 1.10.1
</code></pre>
<p>The Ingress manifest I have is (I don't see a relevant difference to the one above):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.allow-http: "false"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($host = 'www.foo.bar') {
return 301 https://foo.bar;
}
nginx.ingress.kubernetes.io/proxy-body-size: 600m
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.org/client-max-body-size: 600m
name: foo-ingress
spec:
rules:
- host: api.foo.bar
http:
paths:
- backend:
service:
name: identity
port:
number: 80
path: /identity
pathType: ImplementationSpecific
- backend:
service:
name: catalog
port:
number: 80
path: /catalog
pathType: ImplementationSpecific
- backend:
service:
name: marketing
port:
number: 80
path: /marketing
pathType: ImplementationSpecific
- backend:
service:
name: marketplace
port:
number: 80
path: /marketplace
pathType: ImplementationSpecific
- backend:
service:
name: notification
port:
number: 80
path: /notification
pathType: ImplementationSpecific
- backend:
service:
name: provider
port:
number: 80
path: /provider
pathType: ImplementationSpecific
- host: foo.bar
http:
paths:
- backend:
service:
name: frontend
port:
number: 80
path: /
pathType: ImplementationSpecific
- host: www.foo.bar
http:
paths:
- backend:
service:
name: frontend
port:
number: 80
path: /
pathType: ImplementationSpecific
- host: blog.foo.bar
http:
paths:
- backend:
service:
name: wordpress
port:
number: 80
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- foo.bar
- api.foo.bar
- www.foo.bar
- blog.foo.bar
secretName: foo-cert
</code></pre>
<p>You can see that the redirect is not working:</p>
<pre><code>➜ ~ curl -v -o /dev/null https://www.foo.bar
> GET / HTTP/1.1
> Host: www.foo.bar
> User-Agent: curl/7.68.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.19.8
< Date: Sat, 27 Mar 2021 16:42:18 GMT
< Content-Type: text/html
< Content-Length: 671097
< Connection: keep-alive
< Last-Modified: Tue, 09 Mar 2021 11:51:47 GMT
< ETag: "60476153-a3d79"
< Expires: Thu, 01 Jan 1970 00:00:01 GMT
< Cache-Control: no-cache
< Accept-Ranges: bytes
</code></pre>
<p>I noticed that prod and Minikube run different images, but I don't really know what the difference is, if there is any.</p>
<p>Prod:</p>
<pre><code>➜ ~ kubectl -n nginx-ingress get pod nginx-ingress-nginx-ingress-68f5bc7654-kffnm -o json | jq ".spec.containers[0].image"
"nginx/nginx-ingress:1.10.1"
</code></pre>
<p>Minikube:</p>
<pre><code>➜ ~ kubectl -n kube-system get pod ingress-nginx-controller-65cf89dc4f-xkdtn -o json | jq ".spec.containers[0].image"
"us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f"
</code></pre>
<p>Any ideas what's going on?</p>
| user3235738 | <p>The reason this is happening is that you've deployed a different nginx ingress controller, the one from <code>nginxinc</code>, into your GKE cluster. This is a very common mistake, especially with Helm, if you don't check exactly which chart you are deploying.</p>
<p>So <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">kubernetes/ingress-nginx</a>, which you used in Minikube, is maintained by the Kubernetes open source community, while <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">nginxinc/kubernetes-ingress</a> is maintained by NGINX, Inc. You will often see the former referred to as the community ingress controller and the latter as NGINX's.</p>
<p>You can find the differences <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md#the-key-differences" rel="nofollow noreferrer">here</a>, but the main one causing your problem is that the two controllers use different annotation prefixes. For example:</p>
<ul>
<li>Community ingress controller uses: <code>nginx.ingress.kubernetes.io/server-snippet</code></li>
<li>NGINX's uses: <code>nginx.org/server-snippets</code></li>
</ul>
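<p>Concretely for your redirect: the <code>configuration-snippet</code> annotation from the community controller is not understood by the NGINX Inc. controller. If you keep that controller, a rough, untested sketch of the same redirect using the <code>nginx.org/server-snippets</code> annotation mentioned above would be:</p>
<pre><code>metadata:
  annotations:
    nginx.org/server-snippets: |
      if ($host = 'www.foo.bar') {
        return 301 https://foo.bar$request_uri;
      }
</code></pre>
<p>Otherwise, installing the community <code>ingress-nginx</code> chart (as on your Minikube) lets you keep the existing annotations unchanged.</p>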
| acid_fuji |
<p><strong>Not able to connect to internet from inside the pod</strong></p>
<p>My system spec: I have created a Kubernetes cluster using 2 systems, one acting as the master and the other as a worker node.</p>
<p>Operating System : NAME="Red Hat Enterprise Linux" VERSION="8.3 (Ootpa)" ID="rhel".</p>
<p>I installed the Kubernetes cluster using the following link (<a href="https://dzone.com/articles/kubernetes-installation-in-redhat-centos" rel="nofollow noreferrer">https://dzone.com/articles/kubernetes-installation-in-redhat-centos</a>).</p>
<p>I have tried both the Calico and the Flannel pod networks; with both, the same issue happens: I am not able to connect to the internet from inside the pod.</p>
<p>See the below image for further details</p>
<p><a href="https://i.stack.imgur.com/92PvW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/92PvW.png" alt="enter image description here" /></a></p>
<p>you can see that all the pods are up and running.</p>
<p>My CoreDNS pod is also up and running, and the corresponding service is up as well; check the image below.</p>
<p><a href="https://i.stack.imgur.com/473JE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/473JE.png" alt="enter image description here" /></a></p>
<p><strong>Debugging</strong></p>
<p>For debugging I tried using this link (<a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a>)</p>
<p><strong>Whenever I do an nslookup it shows the error (;; connection timed out; no servers could be reached, command terminated with exit code 1).</strong></p>
<p>please have a look at the below image</p>
<p><a href="https://i.stack.imgur.com/HaDdU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HaDdU.png" alt="enter image description here" /></a></p>
<p>Can anyone please tell me where exactly the problem lies? Why is it that from inside the pod I am not able to connect to the internet?</p>
<p>Any help would be appreciated. Thank you.</p>
| Umesh | <p>There are a couple of possibilities for this kind of issue:</p>
<ul>
<li><p>It could be that this is not an issue with CoreDNS itself but rather a Kubernetes networking problem where traffic to ClusterIPs is not directed correctly to Pods. It could be that kube-proxy is responsible for that (see the quick check after this list).</p>
<p>Here's a <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Kubernetes guide</a> about troubleshooting services.</p>
</li>
<li><p>Another issue, very common on RHEL/CentOS distributions, is that the <code>nftables</code> backend is not compatible with Kubernetes. <code>nftables</code> is available as a modern replacement for the kernel’s <code>iptables</code> subsystem.</p>
<p>The workaround for this is to use Calico, since from v3.8.1+ it is possible for that CNI to run on hosts which use iptables in NFT mode. Setting the <code>FELIX_IPTABLESBACKEND=NFT</code> option will tell Calico to use the nftables backend. For now, this needs to be set explicitly.</p>
</li>
<li><p>Lastly, it is very possible that your Pod network overlaps with the host networks. Reference: <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">Installing a Pod network add-on</a></p>
</li>
</ul>
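<p>A quick check for the kube-proxy possibility from the first point above (the label and namespace assume a standard kubeadm install):</p>
<pre><code>kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50
</code></pre>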
| acid_fuji |
<p>I'm trying to test and implement Traefik's <em>https</em> redirect feature in my kubernetes cluster per Traefik's documentation: <a href="https://docs.traefik.io/middlewares/overview/" rel="nofollow noreferrer">https://docs.traefik.io/middlewares/overview/</a>. Here's the definition of the <code>Middleware</code> and <code>IngressRoute</code>:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: ingressroutetls
namespace: default
spec:
entryPoints:
- web
- websecure
routes:
- match: Host(`your.domain.name`) && Host(`www.your.domain.name`)
kind: Rule
services:
- name: traefik-dashboard
port: 8080
middlewares:
- name: redirectscheme
tls:
secretName: cloud-tls
</code></pre>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: redirectscheme
spec:
redirectScheme:
scheme: https
</code></pre>
<p>However, <strong>https</strong>://your.domain.name works and <strong>http</strong>://your.domain.name gives me a <em>404 page not found</em>.
Does anyone know what I have misconfigured?</p>
| Deepak Garg | <blockquote>
<p>This worked for me:</p>
</blockquote>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: a3-ing
namespace: default
spec:
entryPoints:
- websecure
routes:
- match: Host(`example.com`)
kind: Rule
services:
- name: whoami
port: 80
tls:
certResolver: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: a3-ing-red
namespace: default
spec:
entryPoints:
- web
routes:
- match: Host(`example.com`)
middlewares:
- name: test-redirectscheme
kind: Rule
services:
- name: whoami
port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: a3-ing-www
namespace: default
spec:
entryPoints:
- websecure
routes:
- match: Host(`www.example.com`)
kind: Rule
services:
- name: whoami
port: 80
tls:
certResolver: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: a3-ing-www-red
namespace: default
spec:
entryPoints:
- web
routes:
- match: Host(`www.example.com`)
kind: Rule
middlewares:
- name: test-redirectscheme
services:
- name: whoami
port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: test-redirectscheme
namespace: default
spec:
redirectScheme:
scheme: https
</code></pre>
| Julián Aguirre |
<p>Our worker service is a long-running service. When a scale-in or a deployment happens, we expect the pods to finish their existing work (staying alive for up to 1 week) and then exit.</p>
<p>What I have tried is creating a deployment with 10 pods, setting <code>terminationGracePeriodSeconds = 604800</code>, and then scaling down to 1 instance; that works well.</p>
<p>The <strong>question</strong> here is that our service will have <strong>hundreds</strong> of pods, which means in the worst case hundreds of pods will be in Terminating status, run for 7 days, and then exit. Is this workable in the Kubernetes world, or are there any potential issues?
Thank you for any comments.</p>
| Gu111gu | <p>Google starts and destroys over <a href="https://cloud.google.com/containers" rel="nofollow noreferrer">7 billion pods</a> per week. This is the purpose they were made for. As long as you are persisting the necessary data to disk, Kubernetes will replicate the state of the pod exactly as you have configured. A <a href="https://research.google/pubs/pub44843/" rel="nofollow noreferrer">paper</a> has also been published on how this scale may be achieved.</p>
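<p>There is no hard limit on how many pods may sit in Terminating at once, but each one keeps running (and consuming its requested resources) until it exits, so this is mostly a capacity-planning question. A minimal sketch of the relevant pod spec, reusing the value from the question; the image name is a placeholder and the SIGTERM handling is an assumption about your worker:</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 604800   # up to 7 days to drain in-flight work
  containers:
  - name: worker
    image: example/worker:1.0             # placeholder image
    # the process must catch SIGTERM, stop accepting new work,
    # finish what it already has, and then exit on its own
</code></pre>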
| justahuman |
<p>Is it possible with the Nginx ingress controller for Kubernetes to have an ingress rule that routes to different services based on whether a query string exists? For example:</p>
<p>/foo/bar -> route to serviceA</p>
<p>/foo/bar?x=10 -> route to serviceB</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: xxxx.com
http:
paths:
- path: /foo/bar(/|$)(.*)
pathType: Prefix
backend:
service:
name: serviceA
port:
number: 8001
- path: /foo/bar(/|$)(.*)\?
pathType: Prefix
backend:
service:
name: serviceB
port:
number: 8002
</code></pre>
| Michael Kobaly | <p>I managed to find a working solution for what you described with two ingress objects. With the example that you provided ingress won't be able to direct you towards <code>service-b</code> since nginx does not match query string at all. This is very well explained <a href="https://stackoverflow.com/questions/15713934/how-to-match-question-mark-as-regexp-on-nginx-conf-location">here</a>.</p>
<p>Ingress selects the proper backend based on the path. So I have prepared a separate path for the second backend and added a conditional redirect to it from the first path: when a request reaches the <code>/tmp</code> path it uses the <code>service-b</code> backend, and the <code>tmp</code> part is trimmed from the request.</p>
<p>So here's the ingress that matches <code>/foo/bar</code> for the <code>backend-a</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($args ~ .+){
rewrite ^ http://xxxx.com/foo/bar/tmp permanent;
}
spec:
rules:
- host: xxxx.com
http:
paths:
- path: /foo/bar
pathType: Prefix
backend:
serviceName: service-a
servicePort: 80
</code></pre>
<p>And here is the ingress that matches <code>/foo/bar?</code> and whatever comes after for the <code>backend-b</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress-rewrite
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /foo/bar$1
spec:
rules:
- host: xxxx.com
http:
paths:
- path: /foo/bar/tmp(.*)
backend:
serviceName: service-b
servicePort: 80
</code></pre>
<p>Please note that leftovers from previous configurations can prevent this solution from working well. Cleaning up, redeploying, and restarting the ingress controller should help in that situation.</p>
<p>Here are some tests to prove the case. First I have added the <code>xxxx.com</code> to <code>/etc/hosts</code>:</p>
<pre class="lang-sh prettyprint-override"><code>➜ ~ cat /etc/hosts
127.0.0.1 localhost
192.168.59.2 xxxx.com
</code></pre>
<p><strong>- Here we are testing the first path, <code>/foo/bar</code>:</strong></p>
<pre class="lang-sh prettyprint-override"><code>➜ ~ curl -L -v http://xxxx.com/foo/bar
* Trying 192.168.59.2...
* TCP_NODELAY set
* Connected to xxxx.com (192.168.59.2) port 80 (#0)
> GET /foo/bar HTTP/1.1 <----- See path here!
> Host: xxxx.com
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Tue, 13 Apr 2021 12:30:00 GMT
< Content-Type: application/json; charset=utf-8
< Content-Length: 644
< Connection: keep-alive
< X-Powered-By: Express
< ETag: W/"284-P+J4oZl3lklvyqdp6FEGTPVw/VM"
<
{
"path": "/foo/bar",
"headers": {
"host": "xxxx.com",
"x-request-id": "1f7890a47ca1b27d2dfccff912d5d23d",
"x-real-ip": "192.168.59.1",
"x-forwarded-for": "192.168.59.1",
"x-forwarded-host": "xxxx.com",
"x-forwarded-port": "80",
"x-forwarded-proto": "http",
"x-scheme": "http",
"user-agent": "curl/7.52.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "xxxx.com",
"ip": "192.168.59.1",
"ips": [
"192.168.59.1"
],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "service-a" <------ Pod hostname that response came from.
</code></pre>
<p><strong>- And here we are testing the second path, <code>/foo/bar?x=10</code>, with a query string:</strong></p>
<pre class="lang-yaml prettyprint-override"><code>➜ ~ curl -L -v http://xxxx.com/foo/bar\?x\=10
* Trying 192.168.59.2...
* TCP_NODELAY set
* Connected to xxxx.com (192.168.59.2) port 80 (#0)
> GET /foo/bar?x=10 HTTP/1.1 <--------- The requested path!
> Host: xxxx.com
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Tue, 13 Apr 2021 12:31:58 GMT
< Content-Type: text/html
< Content-Length: 162
< Connection: keep-alive
< Location: http://xxxx.com/foo/bar/tmp?x=10
<
* Ignoring the response-body
* Curl_http_done: called premature == 0
* Connection #0 to host xxxx.com left intact
* Issue another request to this URL: 'http://xxxx.com/foo/bar/tmp?x=10'
* Found bundle for host xxxx.com: 0x55d6673218a0 [can pipeline]
* Re-using existing connection! (#0) with host xxxx.com
* Connected to xxxx.com (192.168.59.2) port 80 (#0)
> GET /foo/bar/tmp?x=10 HTTP/1.1
> Host: xxxx.com
> User-Agent: curl/7.52.1
> Accept: */*
>
{
"path": "/foo/bar",
"headers": {
"host": "xxxx.com",
"x-request-id": "96a949a407dae653f739db01fefce7bf",
"x-real-ip": "192.168.59.1",
"x-forwarded-for": "192.168.59.1",
"x-forwarded-host": "xxxx.com",
"x-forwarded-port": "80",
"x-forwarded-proto": "http",
"x-scheme": "http",
"user-agent": "curl/7.52.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "xxxx.com",
"ip": "192.168.59.1",
"ips": [
"192.168.59.1"
],
"protocol": "http",
"query": {
"x": "10"
},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "service-b" <-----Service-b host name!
},
"connection": {}
</code></pre>
<p>For the responses I've used the <code>mendhak/http-https-echo</code> image:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: service-b
labels:
app: echo2
spec:
containers:
- name: service-b #<-------- service-b host name
image: mendhak/http-https-echo
ports:
- containerPort: 80
</code></pre>
| acid_fuji |
<p>I deployed the Kubernetes dashboard UI using this command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>start proxy:</p>
<pre><code>kubectl proxy --address='172.19.104.231' --port=8001 --accept-hosts='^*$'
</code></pre>
<p>access ui:</p>
<pre><code>curl http://172.19.104.231:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
http://kubernetes.example.com/api/v1/namespaces/kube-system/services/kube-ui/#/dashboard/
</code></pre>
<p>the log output:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# curl http://172.19.104.231:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "services \"kubernetes-dashboard\" not found",
"reason": "NotFound",
"details": {
"name": "kubernetes-dashboard",
"kind": "services"
},
"code": 404}
</code></pre>
<p>How can I fix this problem? Checking the pod status:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get pod --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-7d75c474bb-b2lwd 0/1 Pending 0 34h
</code></pre>
| Dolphin | <p>If you use K8S dashboard v2.0.0-betax,</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
</code></pre>
<p>Then use this to access the dashboard:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
</code></pre>
<p>If you use K8S dashboard v1.10.1,</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>Then use this to access the dashboard:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</code></pre>
<p>I also faced the same problem, but then I realized that dashboard v2.0.0-betax and v1.10.1 use different namespaces. The latest version uses the <code>kubernetes-dashboard</code> namespace, but the older one uses <code>kube-system</code>.</p>
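<p>If you are not sure which variant is installed, you can check which namespace the dashboard Service actually lives in, for example:</p>
<pre><code>kubectl get svc --all-namespaces | grep kubernetes-dashboard
</code></pre>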
| maximalyono |
<p>Trying to exec into a container with the following command</p>
<p><code>kubectl exec -it my-pod my-container1 -- bash</code></p>
<p>Gives error:</p>
<blockquote>
<p>OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "my-container1": executable file not found in $PATH: unknown
command terminated with exit code 126</p>
</blockquote>
<p>The pod <em>my-pod</em> has two containers, and <em>my-container1</em> uses an Alpine image with bash installed.
When trying to get a shell into the container, it is not able to find bash.</p>
<p>Kubectl client version: v1.17.0</p>
| RMNull | <p>Adding -c before the container name worked.</p>
<p><code>kubectl exec -it my-pod -c my-container1 -- bash</code></p>
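<p>If the container names inside the pod are not known up front, they can be listed with, for example:</p>
<pre><code>kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'
</code></pre>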
| RMNull |
<p>I have a 3-node Kubernetes cluster installed using kubeadm. However, when pods are scheduled on one particular worker node, volume mounting fails and the pods sometimes stay in Pending state altogether. Can someone please suggest how to debug this?</p>
<p>Update: error log of the worker node.</p>
<pre><code>Jun 24 19:14:19 kub-minion-1 kubelet[3268]: E0624 19:14:19.481535 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:21 kub-minion-1 kubelet[3268]: E0624 19:14:21.489485 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:22 kub-minion-1 kubelet[3268]: E0624 19:14:22.446385 3268 pod_workers.go:191] Error syncing pod 871b0139-85b7-46c0-8616-24eb87e2d38d ("calico-kube-controllers-76d4774d89-88djk_kube-system(871b0139-85b7-46c0-8616-24eb87e2d38d)"), skipping: failed to "StartContainer" for "calico-kube-controllers" with ImagePullBackOff: "Back-off pulling image \"calico/kube-controllers:v3.14.1\""
Jun 24 19:14:23 kub-minion-1 kubelet[3268]: E0624 19:14:23.483911 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:25 kub-minion-1 kubelet[3268]: E0624 19:14:25.494279 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:27 kub-minion-1 kubelet[3268]: E0624 19:14:27.519243 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:29 kub-minion-1 kubelet[3268]: E0624 19:14:29.505746 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:31 kub-minion-1 kubelet[3268]: E0624 19:14:31.555006 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:33 kub-minion-1 kubelet[3268]: E0624 19:14:33.470044 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:33 kub-minion-1 kubelet[3268]: E0624 19:14:33.609573 3268 configmap.go:200] Couldn't get configMap default/izac-cp-kafka-connect-jmx-configmap: object "default"/"izac-cp-kafka-connect-jmx-configmap" not registered
Jun 24 19:14:33 kub-minion-1 kubelet[3268]: E0624 19:14:33.609639 3268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/82f16d16-2363-4167-a7a8-6af0ce96ba95-jmx-config podName:82f16d16-2363-4167-a7a8-6af0ce96ba95 nodeName:}" failed. No retries permitted until 2020-06-24 19:16:35.609618005 +0530 IST m=+571503.228144805 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"jmx-config\" (UniqueName: \"kubernetes.io/configmap/82f16d16-2363-4167-a7a8-6af0ce96ba95-jmx-config\") pod \"izac-cp-kafka-connect-8c6d86c4d-r5r4s\" (UID: \"82f16d16-2363-4167-a7a8-6af0ce96ba95\") : object \"default\"/\"izac-cp-kafka-connect-jmx-configmap\" not registered"
Jun 24 19:14:33 kub-minion-1 kubelet[3268]: E0624 19:14:33.609853 3268 secret.go:195] Couldn't get secret default/nginx-ingress-backend-token-d6drl: object "default"/"nginx-ingress-backend-token-d6drl" not registered
Jun 24 19:14:33 kub-minion-1 kubelet[3268]: E0624 19:14:33.609892 3268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ac02ceb4-726d-4209-a97b-d3f4249d646a-nginx-ingress-backend-token-d6drl podName:ac02ceb4-726d-4209-a97b-d3f4249d646a nodeName:}" failed. No retries permitted until 2020-06-24 19:16:35.609874166 +0530 IST m=+571503.228400966 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"nginx-ingress-backend-token-d6drl\" (UniqueName: \"kubernetes.io/secret/ac02ceb4-726d-4209-a97b-d3f4249d646a-nginx-ingress-backend-token-d6drl\") pod \"nginx-ingress-default-backend-7c868597f4-f8rtt\" (UID: \"ac02ceb4-726d-4209-a97b-d3f4249d646a\") : object \"default\"/\"nginx-ingress-backend-token-d6drl\" not registered"
Jun 24 19:14:33 kub-minion-1 kubelet[3268]: E0624 19:14:33.610039 3268 configmap.go:200] Couldn't get configMap default/izac-cp-schema-registry-jmx-configmap: object "default"/"izac-cp-schema-registry-jmx-configmap" not registered
Jun 24 19:14:33 kub-minion-1 kubelet[3268]: E0624 19:14:33.610072 3268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/7bf54920-7557-4149-90da-3e8f261fc18c-jmx-config podName:7bf54920-7557-4149-90da-3e8f261fc18c nodeName:}" failed. No retries permitted until 2020-06-24 19:16:35.610056758 +0530 IST m=+571503.228583538 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"jmx-config\" (UniqueName: \"kubernetes.io/configmap/7bf54920-7557-4149-90da-3e8f261fc18c-jmx-config\") pod \"izac-cp-schema-registry-6cbf6d694b-6lwl6\" (UID: \"7bf54920-7557-4149-90da-3e8f261fc18c\") : object \"default\"/\"izac-cp-schema-registry-jmx-configmap\" not registered"
Jun 24 19:14:33 kub-minion-1 kubelet[3268]: E0624 19:14:33.610198 3268 secret.go:195] Couldn't get secret default/izac-jobserver-token-qdfps: object "default"/"izac-jobserver-token-qdfps" not registered
Jun 24 19:14:36 kub-minion-1 kubelet[3268]: E0624 19:14:36.155302 3268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/a6125528-30b9-4da5-9449-ff32f17a43f7-izac-kafkaadmin-token-d5vjn podName:a6125528-30b9-4da5-9449-ff32f17a43f7 nodeName:}" failed. No retries permitted until 2020-06-24 19:16:38.155279619 +0530 IST m=+571505.773806419 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"izac-kafkaadmin-token-d5vjn\" (UniqueName: \"kubernetes.io/secret/a6125528-30b9-4da5-9449-ff32f17a43f7-izac-kafkaadmin-token-d5vjn\") pod \"izac-kafkaadmin-7db7645d4-n2nrk\" (UID: \"a6125528-30b9-4da5-9449-ff32f17a43f7\") : object \"default\"/\"izac-kafkaadmin-token-d5vjn\" not registered"
Jun 24 19:14:36 kub-minion-1 kubelet[3268]: E0624 19:14:36.155353 3268 secret.go:195] Couldn't get secret default/izac-eventview-token-hx56g: object "default"/"izac-eventview-token-hx56g" not registered
Jun 24 19:14:36 kub-minion-1 kubelet[3268]: E0624 19:14:36.155386 3268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/f7aa6503-0424-406e-9f27-517c3046f38a-izac-eventview-token-hx56g podName:f7aa6503-0424-406e-9f27-517c3046f38a nodeName:}" failed. No retries permitted until 2020-06-24 19:16:38.155370912 +0530 IST m=+571505.773897711 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"izac-eventview-token-hx56g\" (UniqueName: \"kubernetes.io/secret/f7aa6503-0424-406e-9f27-517c3046f38a-izac-eventview-token-hx56g\") pod \"izac-eventview-bbc689b4-8wzfd\" (UID: \"f7aa6503-0424-406e-9f27-517c3046f38a\") : object \"default\"/\"izac-eventview-token-hx56g\" not registered"
Jun 24 19:14:37 kub-minion-1 kubelet[3268]: E0624 19:14:37.569743 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:39 kub-minion-1 kubelet[3268]: E0624 19:14:39.549682 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:41 kub-minion-1 kubelet[3268]: E0624 19:14:41.461248 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:43 kub-minion-1 kubelet[3268]: E0624 19:14:43.562688 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:45 kub-minion-1 kubelet[3268]: E0624 19:14:45.469403 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:47 kub-minion-1 kubelet[3268]: E0624 19:14:47.469884 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:48 kub-minion-1 kubelet[3268]: E0624 19:14:48.469826 3268 pod_workers.go:191] Error syncing pod 871b0139-85b7-46c0-8616-24eb87e2d38d ("calico-kube-controllers-76d4774d89-88djk_kube-system(871b0139-85b7-46c0-8616-24eb87e2d38d)"), skipping: failed to "StartContainer" for "calico-kube-controllers" with ImagePullBackOff: "Back-off pulling image \"calico/kube-controllers:v3.14.1\""
Jun 24 19:14:49 kub-minion-1 kubelet[3268]: E0624 19:14:49.466371 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:51 kub-minion-1 kubelet[3268]: E0624 19:14:51.559734 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:53 kub-minion-1 kubelet[3268]: E0624 19:14:53.584893 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:55 kub-minion-1 kubelet[3268]: E0624 19:14:55.445726 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:57 kub-minion-1 kubelet[3268]: E0624 19:14:57.609778 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:14:59 kub-minion-1 kubelet[3268]: E0624 19:14:59.498661 3268 kubelet_volumes.go:154] orphaned pod "093642bc-5ef9-4c41-b9f4-032a6866b400" found, but volume paths are still present on disk : There were a total of 38 errors similar to this. Turn up verbosity to see them.
Jun 24 19:15:00 kub-minion-1 kubelet[3268]: E0624 19:15:00.445806 3268 pod_workers.go:191] Error syncing pod 871b0139-85b7-46c0-8616-24eb87e2d38d ("calico-kube-controllers-76d4774d89-88djk_kube-system(871b0139-85b7-46c0-8616-24eb87e2d38d)"), skipping: failed to "StartContainer" for "calico-kube-controllers" with ImagePullBackOff: "Back-off pulling image \"calico/kube-controllers:v3.14.1\""
</code></pre>
<p>Update 2: <code>kubectl describe pod</code> gives:</p>
<pre><code> Normal Scheduled 25m default-scheduler Successfully assigned default/izac-eventview-78775467d6-6bvxq to kub-minion-1
Warning FailedMount 2m1s (x16 over 24m) kubelet, kub-minion-1 MountVolume.SetUp failed for volume "izac-eventview-token-hrzcc" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/6fd35008-2adf-4a35-a88a-28c2991d40ac/volumes/kubernetes.io~secret/izac-eventview-token-hrzcc --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/6fd35008-2adf-4a35-a88a-28c2991d40ac/volumes/kubernetes.io~secret/izac-eventview-token-hrzcc
Output: Failed to start transient scope unit: Connection timed out
</code></pre>
| Anish Sarangi | <p>The issue was that the StorageOS pod was holding on to a certain OS-specific service, which was preventing mounting. I had to forcibly shut down the kubelet service and delete the StorageOS operator. Then I had to restart the entire cluster after deleting the pods on the worker node.</p>
| Anish Sarangi |
<p>I have created a private repository in Azure following this example: <a href="https://jessicadeen.com/how-to-create-a-public-helm-repo-using-azure-storage/" rel="nofollow noreferrer">https://jessicadeen.com/how-to-create-a-public-helm-repo-using-azure-storage/</a>. Everything worked fine: I was able to push my chart, add the repository to my cluster and even deploy it. The next day I tried to upgrade my newly pushed chart, which unfortunately isn't working anymore; it always fails with the generic error <code>Error: failed to download "azikiel/calendar" (hint: running "helm repo update" may help)</code>,
which is weird because I can see the new version of the chart after a repo update:</p>
<pre><code>❯ helm search repo calendar
NAME CHART VERSION APP VERSION DESCRIPTION
azikiel/calendar 0.2.1 1.16.0 A Helm chart for Kubernetes
</code></pre>
<p>I remember that the first time, while deploying the first version of my chart, I hadn't set up a SAS token or any other authentication method to pull it. I suspect, though, that an environment variable was involved which is missing in my new terminal session.</p>
<p>Please guide me on what is wrong in my setup.</p>
| salsa_moreca | <p>The problem here was in the script that I created to push to Azure: it was broken. It was pushing the <code>index.yaml</code> file, so my repo knew about new versions of my chart, but it was failing to push the chart itself to the remote storage.
As per Vitalii's suggestion, I was able to see that Helm simply could not find that version of the chart.</p>
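<p>For reference, a rough sketch of what the corrected push steps look like (chart, storage account and container names are placeholders, and authentication flags for <code>az</code> are omitted):</p>
<pre><code>helm package ./calendar                          # produces calendar-0.2.1.tgz
helm repo index . --url https://<account>.blob.core.windows.net/helm
# uploading the chart archive itself was the step my script was missing
az storage blob upload --account-name <account> --container-name helm \
  --file calendar-0.2.1.tgz --name calendar-0.2.1.tgz
az storage blob upload --account-name <account> --container-name helm \
  --file index.yaml --name index.yaml
</code></pre>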
| salsa_moreca |
<p>I have a self managed Kubernetes cluster consisting of one master node and 3 worker nodes. I use the Cluster Network Interface <a href="https://github.com/flannel-io/flannel#flannel" rel="nofollow noreferrer">flannel</a> within the cluster.</p>
<p>On all my machines I can see the following kind of kernel messages:</p>
<pre><code>Apr 12 04:22:24 worker-7 kernel: [278523.379954] iptables[6260]: segfault at 88 ip 00007f9e69fefe47 sp 00007ffee4dff356 error 4 in libnftnl.so.11.3.0[7f9e69feb000+16000]
Apr 12 04:22:24 worker-7 kernel: [278523.380094] Code: bf 88 00 00 00 48 8b 2f 48 39 df 74 13 4c 89 ee 41 ff d4 85 c0 78 0b 48 89 ef 48 8b 6d 00 eb e8 31 c0 5a 5b 5d 41 5c 41 5d c3 <48> 8b 87 88 00 00 00 48 81 c7 78 00 00 00 48 39 f8 74 0b 85 f6 74
Apr 12 05:59:10 worker-7 kernel: [284329.182667] iptables[13978]: segfault at 88 ip 00007fb799fafe47 sp 00007fff22419b36 error 4 in libnftnl.so.11.3.0[7fb799fab000+16000]
Apr 12 05:59:10 worker-7 kernel: [284329.182774] Code: bf 88 00 00 00 48 8b 2f 48 39 df 74 13 4c 89 ee 41 ff d4 85 c0 78 0b 48 89 ef 48 8b 6d 00 eb e8 31 c0 5a 5b 5d 41 5c 41 5d c3 <48> 8b 87 88 00 00 00 48 81 c7 98 00 00 00 48 39 f8 74 0b 85 f6 74
Apr 12 08:29:25 worker-7 kernel: [293343.999073] iptables[16041]: segfault at 88 ip 00007fa40c7f7e47 sp 00007ffe04ba9886 error 4 in libnftnl.so.11.3.0[7fa40c7f3000+16000]
Apr 12 08:29:25 worker-7 kernel: [293343.999165] Code: bf 88 00 00 00 48 8b 2f 48 39 df 74 13 4c 89 ee 41 ff d4 85 c0 78 0b 48 89 ef 48 8b 6d 00 eb e8 31 c0 5a 5b 5d 41 5c 41 5d c3 <48> 8b 87 88 00 00 00 48 81 c7 98 00 00 00 48 39 f8 74 0b 85 f6 74
</code></pre>
<p>I narrowed it down to the kube-flannel-ds pods as the origin of these messages. I see log messages like this one:</p>
<pre><code>Failed to ensure iptables rules: Error checking rule existence: failed to check rule existence: running [/sbin/iptables -t filter -C FORWARD -s 10.244.0.0/16 -j ACCEPT --wait]: exit status -1:
Failed to ensure iptables rules: Error checking rule existence: failed to check rule existence: running [/sbin/iptables -t nat -C POS TROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully --wait]: exit status -1:
</code></pre>
<p>Can someone explain what these kinds of messages indicate? Could this be a hardware problem? Does it make sense to switch from Flannel to another Kubernetes container network interface (CNI), e.g. <a href="https://docs.projectcalico.org/about/about-calico" rel="nofollow noreferrer">Calico</a>?</p>
| Ralph | <p>As already mentioned in the comments, Debian Buster uses an nftables backend instead of iptables:</p>
<blockquote>
<p><strong>NOTE: iptables is being replaced by <a href="https://wiki.debian.org/nftables" rel="nofollow noreferrer">nftables</a> starting with Debian Buster</strong> - reference <a href="https://wiki.debian.org/iptables" rel="nofollow noreferrer">here</a></p>
</blockquote>
<p>Unfortunately, nftables is <a href="https://v1-17.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#ensure-iptables-tooling-does-not-use-the-nftables-backend" rel="nofollow noreferrer">not compatible</a> with Kubernetes at this moment.</p>
<p>In Linux, nftables is available as a modern replacement for the kernel’s iptables subsystem. The <code>iptables</code> tooling can act as a compatibility layer, behaving like iptables but actually configuring nftables. This nftables backend is not compatible with the current kubeadm packages: it causes duplicated firewall rules and breaks <code>kube-proxy</code>. You could try switching to the legacy option as described <a href="https://serverfault.com/questions/1025176/debian-10-how-can-i-disable-nftables-and-continue-to-use-iptables-only">here</a>, but I'm not sure about this solution as I don't have a way to test it on your OS. I solved a similar case on Debian as described <a href="https://stackoverflow.com/questions/61978030/kubernetes-pods-not-accessible-within-the-cluster/62031947#62031947">here</a>.</p>
<p>An alternative would be to switch to Calico, which <a href="https://docs.projectcalico.org/reference/felix/configuration" rel="nofollow noreferrer">supports an nftables backend</a> via <code>FELIX_IPTABLESBACKEND</code>. This parameter controls which variant of the iptables binary Felix uses. Set this to <code>Auto</code> for auto-detection of the backend. If a specific backend is needed, use <code>NFT</code> for hosts using the nftables backend or <code>Legacy</code> for others. [Default: <code>Auto</code>].</p>
<p>When installing Calico with containerd, please also have a look at <a href="https://stackoverflow.com/questions/59744000/kubernetes-1-17-containerd-1-2-0-with-calico-cni-node-not-joining-to-master">this</a> case.</p>
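<p>A minimal sketch of setting that option, assuming Calico is deployed as the usual <code>calico-node</code> DaemonSet in <code>kube-system</code>:</p>
<pre><code># Auto detects the backend; use NFT to force the nftables mode explicitly
kubectl -n kube-system set env daemonset/calico-node FELIX_IPTABLESBACKEND=Auto
</code></pre>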
| acid_fuji |
<p>I have a simple application composed of an Express.js backend API and a React.js frontend client.</p>
<p>I created a single container image with the frontend and backend.
Application repo: <a href="https://github.com/vitorvr/list-users-kubernetes" rel="nofollow noreferrer">https://github.com/vitorvr/list-users-kubernetes</a></p>
<p>Dockerfile:</p>
<pre><code>FROM node:13
WORKDIR /usr/app/listusers
COPY . .
RUN yarn
RUN yarn client-install
RUN yarn client-build
EXPOSE 8080
CMD ["node", "server.js"]
</code></pre>
<p>server.js</p>
<pre><code>const express = require('express');
const cors = require('cors');
const path = require('path');
const app = express();
const ip = process.env.IP || '0.0.0.0';
const port = process.env.PORT || 8080;
app.use(express.json());
app.use(cors());
app.use(express.static(path.join(__dirname, 'public')));
app.get('/users', (req, res) => {
res.json([
{ name: 'Jhon', id: 1 },
{ name: 'Ashe', id: 2 }
]);
});
app.listen(port, ip, () =>
console.log(`Server is running at http://${ip}:${port}`)
);
</code></pre>
<p>React call:</p>
<pre><code>const api = axios.create({
baseURL: 'http://0.0.0.0:8080'
});
useEffect(() => {
async function loadUsers() {
const response = await api.get('/users');
if (response.data) {
setUsers(response.data);
}
}
loadUsers();
}, []);
</code></pre>
<p>To deploy and run this image in Minikube I use the following commands:</p>
<pre><code>kubectl run list-users-kubernetes --image=list-users-kubernetes:1.0 --image-pull-policy=Never
kubectl expose pod list-users-kubernetes --type=LoadBalancer --port=8080
minikube service list-users-kubernetes
</code></pre>
<p>The issue occurs when the frontend tries to access localhost:</p>
<p><a href="https://i.stack.imgur.com/jKTaq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jKTaq.png" alt="enter image description here"></a></p>
<p>I don't know where I need to fix this: whether I have to fix something in React, change some settings in Kubernetes, or whether this is even the best practice for deploying small applications as a single container image on Kubernetes.</p>
<p>Thanks in advance.</p>
| vitorvr | <p>Your Kubernetes node, assuming it is running as a virtual machine on your local development machine, has an IP address assigned to it. Similarly, an IP address is assigned to the pod where the "list-users-kubernetes" service is running. You can view the pod's IP address by running <code>kubectl get pod list-users-kubernetes</code>; to view more information, add <code>-o wide</code> at the end of the command, e.g. <code>kubectl get pod list-users-kubernetes -o wide</code>.</p>
<p>Alternatively, you can do port forwarding to your localhost using <code>kubectl port-forward pod/POD_NAME POD_PORT:LOCAL_PORT</code>. Example below:</p>
<p><code>kubectl port-forward pod/list-users-kubernetes 8080:8080</code>
Note: You should run this as a background service or in a different tab in your terminal, as the port forwarding would be available as long as the command is running. </p>
<p>I would recommend the second approach, as the pod's IP can change across deployments, while mapping it to localhost allows you to run your app without making code changes. </p>
<p><a href="https://kubectl.docs.kubernetes.io/pages/container_debugging/port_forward_to_pods.html" rel="nofollow noreferrer">Link to port forwarding documentation</a></p>
| arshit arora |
<p>I have the following service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hedgehog
labels:
run: hedgehog
spec:
ports:
- port: 3000
protocol: TCP
name: restful
- port: 8982
protocol: TCP
name: websocket
selector:
run: hedgehog
externalIPs:
- 1.2.4.120
</code></pre>
<p>In which I have specified an externalIP.
I'm also seeing this IP under EXTERNAL-IP when running <code>kubectl get services</code>.
However, when I do <code>curl http://1.2.4.120:3000</code> I get a timeout, even though the app should respond: the jar running inside the container in the deployment does respond to localhost:3000 requests when run locally.</p>
| zendevil.eth | <p>First of all you have to understand you cannot place any random address in your <code>ExternalIP</code> field. Those addresses are not managed by Kubernetes and are the responsibility of the cluster administrator or you. External IP addresses specified with <code>externalIPs</code> are different than the external IP address assigned to a service of type <code>LoadBalancer</code> by a cloud provider.</p>
<p>I <a href="https://dnschecker.org/ip-whois-lookup.php?query=1.2.4.120" rel="nofollow noreferrer">checked the address</a> that you mentioned in the question and it does not look like it belongs to you. That is why I suspect that you placed a random one there.</p>
<p>The same address appears in this <a href="https://medium.com/swlh/kubernetes-external-ip-service-type-5e5e9ad62fcd" rel="nofollow noreferrer">article</a> about <code>ExternalIP</code>. As you can see there, the addresses in that case are the IP addresses of the nodes that Kubernetes runs on.
This is a potential issue in your case.</p>
<p>Another thing to verify is whether your application is listening on <code>localhost</code> or <code>0.0.0.0</code>. If it is really localhost, that might be another problem for you. You can change where the server process listens by binding to <code>0.0.0.0</code>, which means “listen on all interfaces”.</p>
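<p>For example, if the jar happens to be a Spring Boot application (that is an assumption on my part; adjust for whatever framework you actually use), the bind address can be overridden at startup:</p>
<pre><code># assumption: Spring Boot jar; other frameworks have their own bind-address setting
java -jar app.jar --server.address=0.0.0.0 --server.port=3000
</code></pre>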
<p>Lastly please verify that your selector/ports of the services are correct and that you have at least one <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-have-any-endpoints" rel="nofollow noreferrer">endpoint</a> that backs your service.</p>
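<p>A quick way to check this, using the service name from your manifest:</p>
<pre><code>kubectl get endpoints hedgehog
kubectl describe service hedgehog
</code></pre>
<p>If the endpoints list is empty, the selector does not match any running pods, so the service has nothing to forward traffic to.</p>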
| acid_fuji |
<p>I hope you're doing okay.</p>
<p>I'm trying to deploy a CDAP image that I have in GitLab to AKS using Argo CD.</p>
<p>The deployment works in my local Kubernetes cluster with the rook-ceph storage class, but with the managed premium storage class in AKS it seems that something is wrong with permissions.</p>
<p>here is my storage class :</p>
<pre><code>#The default value for fileMode and dirMode is 0777 for Kubernetes #version 1.13.0 and above, you can modify as per your need
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azurefile-zrs
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=0
- gid=0
- mfsymlinks
- cache=strict
parameters:
skuName: Standard_LRS
</code></pre>
<p>here is my statefulset:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ .Release.Name }}-sts
labels:
app: {{ .Release.Name }}
spec:
revisionHistoryLimit: 2
replicas: {{ .Values.replicas }}
updateStrategy:
type: RollingUpdate
serviceName: {{ .Release.Name }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
imagePullSecrets:
- name: regcred-secret-argo
volumes:
- name: nginx-proxy-config
configMap:
name: {{ .Release.Name }}-nginx-conf
containers:
- name: nginx
image: nginx:1.17
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- containerPort: 8080
volumeMounts:
- name: nginx-proxy-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
- name: cdap-sandbox
image: {{ .Values.containerrepo }}:{{ .Values.containertag }}
imagePullPolicy: Always
resources:
limits:
cpu: 1000m
memory: 8Gi
requests:
cpu: 500m
memory: 6000Mi
readinessProbe:
httpGet:
path: /
port: 11011
initialDelaySeconds: 30
periodSeconds: 20
volumeMounts:
- name: {{ .Release.Name }}-data
mountPath: /opt/cdap/sandbox/data
ports:
- containerPort: 11011
- containerPort: 11015
volumeClaimTemplates:
- metadata:
name: {{ .Release.Name }}-data
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: "300Gi"
</code></pre>
<p>the problem is that the CDAP pod can't change permissions on a directory<br />
here are the logs:</p>
<pre><code>Fri Oct 22 13:48:08 UTC 2021 Starting CDAP Sandbox ...LOGBACK: No context given for io.cdap.cdap.logging.framework.local.LocalLogAppender[io.cdap.cdap.logging.framework.local.LocalLogAppender]
55
log4j:WARN No appenders could be found for logger (DataNucleus.General).
54
log4j:WARN Please initialize the log4j system properly.
53
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
52
2021-10-22 13:48:56,030 - ERROR [main:i.c.c.StandaloneMain@446] - Failed to start Standalone CDAP
51
Failed to start Standalone CDAP
50
com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Error applying authorization policy on hive configuration: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
49
48
at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1015)
47
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
46
at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
45
at com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
44
at io.cdap.cdap.StandaloneMain.startUp(StandaloneMain.java:300)
43
at io.cdap.cdap.StandaloneMain.doMain(StandaloneMain.java:436)
42
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
41
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
40
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
39
at java.lang.reflect.Method.invoke(Method.java:498)
38
at io.cdap.cdap.StandaloneMain.main(StandaloneMain.java:418)
37
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Error applying authorization policy on hive configuration: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
36
35
at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1015)
34
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
33
at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
32
at com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
31
at io.cdap.cdap.explore.executor.ExploreExecutorService.startUp(ExploreExecutorService.java:99)
30
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
29
at java.lang.Thread.run(Thread.java:748)
28
Caused by: java.lang.RuntimeException: Error applying authorization policy on hive configuration: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
27
26
at org.apache.hive.service.cli.CLIService.init(CLIService.java:114)
25
at io.cdap.cdap.explore.service.hive.BaseHiveExploreService.startUp(BaseHiveExploreService.java:309)
24
at io.cdap.cdap.explore.service.hive.Hive14ExploreService.startUp(Hive14ExploreService.java:76)
23
... 2 more
22
Caused by: java.lang.RuntimeException: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
21
20
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
19
at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)
18
at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)
17
... 4 more
16
Caused by: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
15
14
at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
13
at org.apache.hadoop.util.Shell.run(Shell.java:869)
12
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
11
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
10
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
9
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:771)
8
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:515)
7
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:555)
6
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:533)
5
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:313)
4
at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:639)
3
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:574)
2
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
1
... 6 more
</code></pre>
<p>I don't know why it can't change permissions.</p>
<p>I would appreciate any help I can get, because I'm stuck on this and have no idea how to fix it other than installing a new provisioner, which I really don't want to do.</p>
<p>Thank you in advance, and have a good day.</p>
| ossama assaghir | <p>After a lot of testing I changed the storage class.
I installed rook-ceph using <a href="https://dev.to/cdennig/using-rook-ceph-with-pvcs-on-azure-kubernetes-service-djc" rel="nofollow noreferrer">this procedure</a>.</p>
<p><strong>Note:</strong> you have to change the image version in cluster.yaml from ceph/ceph:v14.2.4 to ceph/ceph:v16.</p>
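<p>For reference, the relevant excerpt of <code>cluster.yaml</code> looks roughly like this (a sketch based on the Rook example manifests, not on your exact file):</p>
<pre><code># cluster.yaml excerpt: bump the Ceph image as noted above
spec:
  cephVersion:
    image: ceph/ceph:v16
</code></pre>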
| ossama assaghir |
<p>I have tested the app locally using minikube and it works. When I use the same Dockerfile with deployment.yml, the pod goes into an Error state with the reason below.</p>
<p>Error: Cannot find module '/usr/src/app/server.js'</p>
<p>Dockerfile:</p>
<pre><code>FROM node:13-alpine
WORKDIR /api
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>
<p>Deployment.yml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs-app-dep
labels:
app: nodejs-app
spec:
replicas: 1
selector:
matchLabels:
app: nodejs-app
template:
metadata:
labels:
app: nodejs-app
spec:
serviceAccountName: opp-sa
imagePullSecrets:
- name: xxx
containers:
- name: nodejs-app
image: registry.xxxx.net/k8s_app
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
</code></pre>
<p>Assuming it could be a problem with "node_modules", I ran "ls" on the WORKDIR inside the Dockerfile and it does show "node_modules". Does anyone know what else to check to resolve this issue?</p>
| techPM | <p>@willrof, thanks for the detailed write-up. A reply to your response is limited to 30 characters, so I'm posting this as a new answer.</p>
<p>My problem was resolved yesterday. It was with COPY . .</p>
<p>It works perfectly fine locally, but when I tried to deploy onto the cluster with the same Dockerfile, I kept running into the "cannot find module..." issue.</p>
<p>It finally worked when I specified the directory path instead of . . while copying files:</p>
<pre><code>COPY /api /usr/app #copy project basically
WORKDIR /usr/app #set workdir just before npm install
RUN npm install
EXPOSE 3000
</code></pre>
<p>Moving the WORKDIR statement to just before npm install worked in my case. I'm surprised this turned out to be the problem, since it worked locally with COPY . .</p>
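<p>For completeness, a full version of the corrected Dockerfile might look like the sketch below. The base image and start command are taken from the original Dockerfile in the question; treat the rest as an untested reconstruction:</p>
<pre><code>FROM node:13-alpine
# copy the project into the image first
COPY /api /usr/app
# set the workdir just before installing dependencies
WORKDIR /usr/app
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>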
| techPM |
<p>I know the below information is not enough to trace the issue but still, I want some solution.</p>
<p>We have Amazon EKS cluster.</p>
<p>Currently, we are facing the reachability of the Kafka pod issue.</p>
<p><strong>Environment:</strong></p>
<ul>
<li>Total 10 nodes with Availability zone ap-south-1a,1b</li>
<li>I have a three replica of the Kafka cluster (Helm chart installation)</li>
<li>I have a three replica of the zookeeper (Helm chart installation)</li>
<li>Kafka using external advertised listener on port 19092</li>
<li>Kafka has service with an internal network load balancer</li>
<li>I have deployed a test-pod to check reachability of Kafka pod.</li>
<li>we are using Cloud Map based DNS for the advertised listener</li>
</ul>
<p><strong>Working:</strong></p>
<ul>
<li>When I run a telnet command from EC2 like <code>telnet 10.0.1.45 19092</code>, it works as expected. IP <code>10.0.1.45</code> is the load balancer IP.</li>
<li>When I run a telnet command from EC2 like <code>telnet 10.0.1.69 31899</code>, it works as expected. IP <code>10.0.1.69</code> is an actual node's IP and 31899 is the NodePort.</li>
</ul>
<p><strong>Problem:</strong></p>
<ul>
<li>When I run the same command from test-pod, like <code>telnet 10.0.1.45 19092</code>, it sometimes works and sometimes gives an error like <code>telnet: Unable to connect to remote host: Connection timed out</code></li>
</ul>
<p>The issue seems to be something related to kube-proxy. We need help to resolve this.</p>
<p>Can anyone help to guide me?
Can I restart kube-proxy? Does it affect other pods/deployments?</p>
| NIrav Modi | <p>I believe this problem is caused by AWS's NLB TCP-only nature (as mentioned in the comments).</p>
<p>In a nutshell, <a href="https://github.com/kubernetes/kubernetes/issues/66796#issuecomment-409066184" rel="nofollow noreferrer">your pod-to-pod communication fails when hairpin is needed</a>.</p>
<p>To confirm this is the root cause, you can verify that when the telnet works, the Kafka pod and the client pod <strong>are not</strong> on the same EC2 node, and that when they are on the same EC2 instance, the telnet fails.</p>
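<p>A simple way to check this is to compare the node each pod landed on:</p>
<pre><code>kubectl get pods -o wide
# compare the NODE column for the kafka pod and the test/client pod
</code></pre>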
<p>There are (at least) two approaches to tackle this issue:</p>
<ol>
<li><strong>Use K8s internal networking</strong> - Refer to k8s Service's URL</li>
</ol>
<p>Every K8s service has <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services" rel="nofollow noreferrer">its own DNS FQDN</a> for internal usage (meaning it uses the k8s network only, without reaching the LoadBalancer and coming back to k8s again). You can just telnet this instead of the NodePort via the LB.
I.e. let's assume your kafka service is named <code>kafka</code> and lives in the <code>default</code> namespace. Then you can just telnet <code>kafka.default.svc.cluster.local</code> (on the port exposed by the kafka service).</p>
<ol start="2">
<li><strong>Use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">K8s anti-affinity</a></strong> to make sure the client and kafka are never scheduled on the same node (see the sketch at the end of this answer).</li>
</ol>
<p>Oh and as indicated in <a href="https://stackoverflow.com/questions/34937184/kubernetes-cant-connect-to-a-service-ip-from-the-services-pod?rq=1">this answer</a> you might need to make that service <strong>headless</strong>.</p>
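<p>A minimal sketch of option 2. The label value (<code>app: kafka</code>) is a placeholder; the real labels depend on your Helm chart, so adjust accordingly:</p>
<pre><code># in the client pod/deployment template spec (labels are assumptions)
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: kafka          # label of the kafka pods (placeholder)
        topologyKey: kubernetes.io/hostname
</code></pre>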
| castel |
<p>I am trying to create the following:</p>
<ol>
<li><code>Deployment</code> for mongo</li>
<li><code>Deployment</code> for mongo-express</li>
<li><code>ClusterIp</code> for mongo</li>
<li><code>ClusterIp</code> for mongo-express</li>
<li>An Ingress Service to route request to mongo-express</li>
</ol>
<p>I want to be able to go to <code>xyz.com/admin/auth-db-gui</code> and see the mongo-express gui.</p>
<p>I am running this on Linux minikube.</p>
<p>When going to <code>xyz.com/admin/auth-db-gui</code>, I get 503 Service Temporarily Unavailable, however when executing <code>kubectl get pods</code>, I can see 2 pods running.</p>
<p><em>I will setup the mapping for <code>xyz.com</code> in <code>/etc/hosts</code> manually as this is for only dev purpose</em></p>
<p>db.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-db-deployment
spec:
selector:
matchLabels:
app: auth-db
template:
metadata:
labels:
app: auth-db
spec:
containers:
- name: auth-db
image: mongo
---
apiVersion: v1
kind: Service
metadata:
name: auth-db-service
spec:
selector:
app: auth-db
ports:
- name: auth-db
protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>db-gui</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-db-gui-deployment
spec:
selector:
matchLabels:
app: auth-db-gui
template:
metadata:
labels:
app: auth-db-gui
spec:
containers:
- name: auth-db-gui
image: mongo-express
env:
- name: ME_CONFIG_MONGODB_SERVER
value: auth-db-service
---
apiVersion: v1
kind: Service
metadata:
name: auth-db-gui-service
spec:
selector:
app: auth-db-gui
ports:
- name: auth-db-gui
protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>ingress.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: xyz-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: xyz.com
http:
paths:
- path: /admin/auth-db-gui
backend:
serviceName: auth-db-gui-service
servicePort: 8081
</code></pre>
<p><em>Sorry if there is an obvious mistake above.</em></p>
| keemahs | <p>I notice that in <a href="https://serverfault.com/questions/868281/kubernetes-always-gives-503-service-temporarily-unavailable-with-multiple-tls-in">some cases</a> a 503 means that the ports are not configured correctly.</p>
<p>I've tested your ingress, along with the service and the deployment, and after correcting the port mistake in the ingress object it works great:</p>
<pre class="lang-sh prettyprint-override"><code>curl -H "Host: xyz.com" "192.168.49.2/admin/auth-db-gui"
{
"path": "/admin/auth-db-gui",
"headers": {
"host": "xyz.com",
"x-request-id": "ff272df6d729af6c1fb4d5f510de88f4",
"x-real-ip": "192.168.49.1",
"x-forwarded-for": "192.168.49.1",
"x-forwarded-host": "xyz.com",
"x-forwarded-port": "80",
"x-forwarded-proto": "http",
"x-scheme": "http",
"user-agent": "curl/7.52.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "xyz.com",
"ip": "192.168.49.1",
"os": {
"hostname": "auth-db-gui-deployment-555c77cf75-fjbf2"
</code></pre>
<p>For testing purposes I highly recommend <code>mendhak/http-https-echo</code>. I swapped the image in your deployment and fixed the port. You can test the ingress yourself with the below yaml files:</p>
<pre class="lang-yaml prettyprint-override"><code>#ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: xyz-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: xyz.com
http:
paths:
- path: /admin/auth-db-gui
backend:
serviceName: auth-db-gui-service
servicePort: 27017
</code></pre>
<p>Notice how the ports are being set. The <code>servicePort</code> in the ingress corresponds to the service <code>port</code>, which is 27017. I changed the target port of the service since the echo server listens on port 80.</p>
<pre class="lang-yaml prettyprint-override"><code>#sevice.yaml
apiVersion: v1
kind: Service
metadata:
name: auth-db-gui-service
spec:
selector:
app: auth-db-gui
ports:
- name: auth-db-gui
protocol: TCP
port: 27017
targetPort: 80
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>#deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-db-gui-deployment
spec:
selector:
matchLabels:
app: auth-db-gui
template:
metadata:
labels:
app: auth-db-gui
spec:
containers:
- name: auth-db-gui
image: mendhak/http-https-echo
</code></pre>
<p>Please have a look at the docs about <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> for more examples.</p>
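<p>Going back to your original mongo-express setup: the stock mongo-express image listens on 8081, so the gui service's <code>targetPort</code> should be 8081 and the ingress <code>servicePort</code> must match the service <code>port</code>. A sketch of a consistent set of ports (not tested against your cluster):</p>
<pre class="lang-yaml prettyprint-override"><code># service for mongo-express (sketch)
apiVersion: v1
kind: Service
metadata:
  name: auth-db-gui-service
spec:
  selector:
    app: auth-db-gui
  ports:
    - name: auth-db-gui
      protocol: TCP
      port: 8081        # servicePort: 8081 in the ingress must match this
      targetPort: 8081  # mongo-express default listen port
</code></pre>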
| acid_fuji |
<p>I kicked off some Spark job on Kubernetes with quite big volume of data, and job failed that there is no enough space in /var/data/spark-xxx directory.</p>
<p>As the Spark documentation says on <a href="https://github.com/apache/spark/blob/master/docs/running-on-kubernetes.md" rel="nofollow noreferrer">https://github.com/apache/spark/blob/master/docs/running-on-kubernetes.md</a></p>
<blockquote>
<p>Spark uses temporary scratch space to spill data to disk during
shuffles and other operations. When using Kubernetes as the resource
manager the pods will be created with an emptyDir volume mounted for
each directory listed in SPARK_LOCAL_DIRS. If no directories are
explicitly specified then a default directory is created and
configured appropriately</p>
</blockquote>
<p>Seems like <em>/var/data/spark-xx</em> directory is the default one for emptyDir. Thus, I tried to map that emptyDir to Volume (with bigger space) which is already mapped to Driver and Executors Pods.</p>
<p>I mapped it in the properties file and I can see that is mounted in the shell:</p>
<pre><code>spark.kubernetes.driver.volumes.persistentVolumeClaim.checkvolume.mount.path=/checkpoint
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkvolume.mount.readOnly=false
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkvolume.options.claimName=sparkstorage
spark.kubernetes.executor.volumes.persistentVolumeClaim.checkvolume.mount.path=/checkpoint
spark.kubernetes.executor.volumes.persistentVolumeClaim.checkvolume.mount.readOnly=false
spark.kubernetes.executor.volumes.persistentVolumeClaim.checkvolume.options.claimName=sparkstorage
</code></pre>
<p>I am wondering if it's possible to mount emptyDir somehow on my persistent storage, so I can spill more data and avoid job failures?</p>
| Tomasz Krol | <p>I found that Spark 3.0 has addressed this problem and completed the feature. </p>
<blockquote>
<p>Spark supports using volumes to spill data during shuffles and other operations. To use a volume as local storage, the volume's name should start with <code>spark-local-dir-</code>, for example:</p>
</blockquote>
<pre><code>--conf spark.kubernetes.driver.volumes.[VolumeType].spark-local-dir-[VolumeName].mount.path=<mount path>
--conf spark.kubernetes.driver.volumes.[VolumeType].spark-local-dir-[VolumeName].mount.readOnly=false
</code></pre>
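<p>For example, reusing the <code>sparkstorage</code> claim from the question (the volume name <code>spark-local-dir-spill</code> and the mount path are placeholders I picked), the executor side could be configured like this; the driver takes the analogous <code>spark.kubernetes.driver.*</code> keys:</p>
<pre><code>--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-spill.options.claimName=sparkstorage
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-spill.mount.path=/data/spill
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-spill.mount.readOnly=false
</code></pre>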
<p>Reference:</p>
<ul>
<li><a href="https://issues.apache.org/jira/browse/SPARK-28042" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/SPARK-28042</a></li>
<li><a href="https://github.com/apache/spark/pull/24879" rel="nofollow noreferrer">https://github.com/apache/spark/pull/24879</a></li>
</ul>
| Merlin Wang |
<p>I am new to Kubernetes.</p>
<p>How can I make sure that system-critical pods always run and cannot be displaced by other pods? Do I have to set these critical pods to the "system-cluster-critical" or "system-node-critical" priority classes, or do I have to create another priority class with my own value?</p>
<p>I have found 2 options for pod priorities - priority classes and Quality of service. What is the difference between them?</p>
<p>(<a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/</a> <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/</a>)</p>
| exploringunicorn | <p>To mark a Pod as critical, set <strong>priorityClassName</strong> for that Pod to <strong>system-cluster-critical</strong> or <strong>system-node-critical</strong>. <strong>system-node-critical</strong> is the highest available priority, even higher than <strong>system-cluster-critical</strong>.</p>
<pre><code>spec:
priorityClassName: system-cluster-critical
</code></pre>
<p>Refer to the link below:
<a href="https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/</a></p>
| Wasim Ansari |