Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have Minikube installed in a virtual machine.
When I run it using the minikube start --vm-driver=none command, it functions perfectly.</p>
<p>Running <code>minikube status</code>, I got:</p>
<p>host: Running, kubelet: Running, apiserver: Stopped, kubectl: correctly configured (pointing to minikube-vm).</p>
| Martin rudez | <p><strong>Try the following steps to resolve your issue</strong>:</p>
<ul>
<li>Go to <code>C:\Users\&lt;Your User&gt;\.kube\</code> and delete the <code>config</code> file before restarting Minikube.</li>
<li>Update Minikube to the most recent version.</li>
<li>Recreate your Minikube cluster with <code>minikube delete</code> followed by a fresh <code>minikube start</code>.</li>
<li>Run <code>minikube update-context</code>.</li>
</ul>
<blockquote>
<pre><code>minikube stop
minikube delete
minikube start
</code></pre>
</blockquote>
<p>The restart is crucial: it won't start if you issue these commands without restarting first.</p>
<p><strong>For further debugging, the Minikube <a href="https://minikube.sigs.k8s.io/docs/handbook/troubleshooting/" rel="nofollow noreferrer">Troubleshooting</a> guide explains the commands listed below in detail:</strong></p>
<pre><code>$ minikube logs
$ minikube start --alsologtostderr -v=2
$ kubectl config view
</code></pre>
| Sai Chandini Routhu |
<p>We are running a pod in Kubernetes that needs to load a file during runtime. This file has the following properties:</p>
<ul>
<li>It is known at build time</li>
<li>It should be mounted read-only by multiple pods (the same kind)</li>
<li>It might change (externally to the cluster) and needs to be updated</li>
<li>For various reasons (security being the main concern) the file cannot be inside the docker image</li>
<li>It is potentially quite large, theoretically up to 100 MB, but in practice between 200 kB and 10 MB.</li>
</ul>
<p>We have considered various options:</p>
<ul>
<li>Creating a persistent volume, mount the volume in a temporary pod to write (update) the file, unmount the volume, and then mount it in the service with ROX (Read-Only Multiple) claims. This solution means we need downtime during upgrade, and it is hard to automate (due to timings).</li>
<li>Creating multiple secrets using the secrets management of Kubernetes, and then "assemble" the file before loading it in an init-container or something similar.</li>
</ul>
<p>Both of these solutions feel a little bit hacky - is there a better solution out there that we could use to solve this?</p>
| Frederik | <p>You need to use a shared filesystem that supports read/write access from multiple pods (ReadWriteMany).
Here is a link to the CSI drivers that can be used with Kubernetes and provide that access mode:
<a href="https://kubernetes-csi.github.io/docs/drivers.html" rel="nofollow noreferrer">https://kubernetes-csi.github.io/docs/drivers.html</a></p>
<p>Ideally, you need a solution that is not an appliance and can run anywhere, meaning in the cloud or on-prem.</p>
<p>The platforms that could work for you are Ceph, GlusterFS, and Quobyte (<em>Disclaimer: I work for Quobyte</em>).</p>
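<p>For illustration, a minimal sketch of a PersistentVolumeClaim requesting such an access mode, assuming a StorageClass backed by one of those shared-filesystem CSI drivers (the storage class name is a placeholder):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-config-file
spec:
  accessModes:
    - ReadWriteMany             # or ReadOnlyMany for the consuming pods
  storageClassName: shared-fs   # placeholder for a CephFS/GlusterFS/Quobyte-backed class
  resources:
    requests:
      storage: 1Gi
</code></pre>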
| Franklin |
<p>I installed Meshery with Helm.</p>
<p>All pods are ok except the Cilium</p>
<pre><code>meshery-app-mesh-7d8cb57c85-cfk7j 1/1 Running 0 63m
meshery-cilium-69b4f7d6c5-t9w55 0/1 CrashLoopBackOff 16 (3m16s ago) 63m
meshery-consul-67669c97cc-hxvlz 1/1 Running 0 64m
meshery-istio-687c7ff5bc-bn8mq 1/1 Running 0 63m
meshery-kuma-66f947b988-887hm 1/1 Running 0 63m
meshery-linkerd-854975dcb9-rsgtl 1/1 Running 0 63m
meshery-nginx-sm-55675f57b5-xfpgz 1/1 Running 0 63m
</code></pre>
<p>Logs show</p>
<pre><code>level=info msg="Registering workloads with Meshery Server for version v1.14.0-snapshot.4" app=cilium-adapter
panic: interface conversion: error is *fs.PathError, not *errors.Error
goroutine 42 [running]:
github.com/layer5io/meshkit/errors.GetCode({0x278e020?, 0xc0004f7e30?})
</code></pre>
<p>I checked events also</p>
<pre><code>kubectl get events --field-selector involvedObject.name=meshery-cilium-69b4f7d6c5-t9w55
LAST SEEN TYPE REASON OBJECT MESSAGE
17m Normal Pulling pod/meshery-cilium-69b4f7d6c5-t9w55 Pulling image "layer5/meshery-cilium:stable-latest"
2m44s Warning BackOff pod/meshery-cilium-69b4f7d6c5-t9w55 Back-off restarting failed container meshery-cilium in pod meshery-cilium-69b4f7d6c5-t9w55_default(d7ccd0e8-27e5-4f40-89bc-f8a6dc8fa25a)
</code></pre>
<p>I am adding logs from troubled pod
Events:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 4h25m (x55 over 8h) kubelet Pulling image "layer5/meshery-cilium:stable-latest"
Warning BackOff 3h45m (x1340 over 8h) kubelet Back-off restarting failed container meshery-cilium in pod meshery-cilium-69b4f7d6c5-t9w55_default(d7ccd0e8-27e5-4f40-89bc-f8a6dc8fa25a)
Warning FailedMount 21m (x6 over 21m) kubelet MountVolume.SetUp failed for volume "kube-api-access-tcdml" : object "default"/"kube-root-ca.crt" not registered
Warning NetworkNotReady 20m (x19 over 21m) kubelet network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Warning BackOff 72s (x82 over 19m) kubelet Back-off restarting failed container meshery-cilium in pod meshery-cilium-69b4f7d6c5-t9w55_default(d7ccd0e8-27e5-4f40-89bc-f8a6dc8fa25a)
</code></pre>
<p>The Nginx pod shows network problems</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning NodeNotReady 22m node-controller Node is not ready
Warning NetworkNotReady 22m (x18 over 23m) kubelet network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Warning FailedMount 22m (x7 over 23m) kubelet MountVolume.SetUp failed for volume "kube-api-access-lscv2" : object "default"/"kube-root-ca.crt" not registered
</code></pre>
<p>How to fix this issue?</p>
| Richard Rublev | <p>As per this <a href="https://www.blinkops.com/blog/troubleshooting-the-crashloopbackoff-error" rel="nofollow noreferrer">Blog</a> by Patrick Londa, your Cilium pod is getting the <code>CrashLoopBackOff</code> error due to <code>"Back-off restarting failed container"</code>, which means that your container terminated suddenly after Kubernetes started it. <br><br>1. This can occur due to a temporary resource overload caused by a spike in activity. The fix is to <strong>adjust periodSeconds or timeoutSeconds</strong> to give the application a longer window of time to respond. <br><br>2. Sometimes this happens due to insufficient memory resources. You can increase the memory limit by <strong>changing "resources:limits"</strong> in the container's resource manifest, as shown in the blog. <br><br>Verify the above steps and let me know if this fixes the issue.<br></p>
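<p>For illustration, a hedged sketch of where those settings live in the container spec; the probe endpoint, port and limit values below are placeholders, not taken from the Meshery chart:</p>
<pre><code>containers:
  - name: meshery-cilium
    image: layer5/meshery-cilium:stable-latest
    livenessProbe:
      httpGet:
        path: /healthz      # illustrative endpoint
        port: 10000         # illustrative port
      periodSeconds: 20     # probe less frequently
      timeoutSeconds: 5     # allow a slower response
    resources:
      limits:
        memory: "512Mi"     # raise if the container is being OOM-killed
        cpu: "500m"
</code></pre>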
| Hemanth Kumar |
<p>I am trying to learn Kubernetes and installed Minikube on my local machine. I created a sample Python app and pushed its corresponding image to a public Docker registry. I started the app on the cluster with</p>
<pre><code>kubectl apply -f <<my-app.yml>>
</code></pre>
<p>It got started as expected. I stopped Minikube and deleted all the containers and images and restarted my Mac.</p>
<blockquote>
<p>My Questions</p>
</blockquote>
<p>I start my docker desktop and as soon as I run</p>
<pre><code>minikube start
</code></pre>
<p>Minikube goes and pulls the images from the public Docker registry and starts the container. Is there a configuration file that Minikube looks into to start my container that I had deleted from my local machine? I am not able to understand where Minikube is picking up my app's configuration, which was defined in the manifest folder.</p>
<p>I have tried to look for config files and did find a cache folder, but it does not contain any information about my app.</p>
| Shivani | <p>I found this is expected behavior:</p>
<blockquote>
<p><em>minikube stop</em> command should stop the underlying VM or container, but keep user data intact.</p>
</blockquote>
<p>So if I manually delete the already-created resources, they do not start again automatically.</p>
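<p>For example (using the manifest file name from the question), explicitly deleting the resources keeps them from coming back after a restart:</p>
<pre><code>kubectl delete -f my-app.yml
minikube stop
minikube start   # the app is no longer recreated
</code></pre>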
<p>More information :
<a href="https://github.com/kubernetes/minikube/issues/13552" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/13552</a></p>
| Shivani |
<p>This is my first time using Docker, Kubernetes, Ingress and Skaffold. I'm following along with a online course, but I'm stuck.</p>
<p>When I call just the home path <code>/</code> it returns a response, but any other path that I try returns 404.</p>
<p>The first step I took was to add a custom local development URL in <code>/etc/hosts</code>, so in that file I've added <code>127.0.0.1 ticketing.com</code>.</p>
<p>Then I have my ingress file. I set <code>host</code> to match ticketing.com and the path uses a regex to match any path after <code>/api/users</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: ticketing.com
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
</code></pre>
<p>This is the file that has the Deployment and Service YAML for what is going to be an authentication service. In the second part, it looks like <code>metadata.name</code> correctly matches <code>auth-srv</code> and the port is correctly set to <code>3000</code>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: lmsankey/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
</code></pre>
<p>If this helps, here is index.ts with the route.</p>
<pre><code>import express from "express";
import { json } from "body-parser";

const app = express();
app.use(json());

app.get("/api/users/currentuser", (req, res) => {
  res.send("Hi there!");
});

app.listen(3000, () => {
  console.log("Listening on port 3000!!!!!!!!");
});
</code></pre>
<p>I tried following this answer <a href="https://stackoverflow.com/a/52029100/3692615">https://stackoverflow.com/a/52029100/3692615</a> but honestly I'm not sure I understood entirely, or if that's what I need.</p>
<p>Basically it says to add this line to annotations. <code>nginx.ingress.kubernetes.io/rewrite-target: /</code></p>
<p>Maybe someone more experienced can spot whats going on or point me in the right direction?</p>
<p>Thanks.</p>
| Louis Sankey | <p>Using your configuration and going to:</p>
<p><a href="http://ticketing.com/api/users/currentuser" rel="nofollow noreferrer">http://ticketing.com/api/users/currentuser</a></p>
<p>I got the message ("Hi there!"), so it seems to be fine and correct.</p>
<p>Anyway, let me ask you: what if you want to add another route to your Express app?</p>
<p>For example, what if you add these lines to your index.ts?</p>
<pre class="lang-js prettyprint-override"><code>app.get('/health', (req, res) => {
res.send("Hi, i'm fine!");
});
app.get('/super-secret-section', (req, res) => {
res.send("shh! this is a supersecret section");
});
</code></pre>
<p>With your current ingress YAML file, adding these lines to your .ts file means that if you want to reach them through the ingress, you need to modify your ingress.yaml to map these two new resources: that is no good for your aim and overcomplicates the file (if I understood your needs).</p>
<p>Instead, you should consider using the ingress with a prefix (since you want to use pathType: Prefix), adding a prefix for your app (the Service, actually).</p>
<p>Something like this:</p>
<p><code>yourdomain.com/STARTINGPOINTPREFIX/any_path_on_your_app_you_want_add</code></p>
<p>To successfully achieve this goal, you can use a combination of paths and grouping within annotations.</p>
<p>Here is a simple, basic example:</p>
<pre class="lang-yaml prettyprint-override"><code>
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2 # this line uses the capture group
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /cool-app(/|$)(.*) # this intercepts all the paths of your app using the 'cool-app' prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
      host: ticketing.com
</code></pre>
<p>When you apply this ingress YAML, you can reach your app with:</p>
<p><code>http://ticketing.com/cool-app/api/users/currentuser</code></p>
<p>also, if you add the two lines on .ts mentioned above:</p>
<p><code>http://ticketing.com/cool-app/health</code></p>
<p><code>http://ticketing.com/cool-app/super-secret-section</code></p>
<p>In this way, you separate the path of your app from the ingress route.</p>
<p>From my point of view, the key point to understanding ingress is that you should use it to create what I call "entry point slices" for your services.</p>
<p>More details on nginx annotation and rewrite
<a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p>
<p>Hope it helps.</p>
| kiggyttass |
<p>Is there a way to scale/list node pools for a specific project ID using the kubernetes-client java or kubernetes-client-extended libraries?</p>
<p>I tried searching for APIs inside the kubernetes-client java library but didn't find any.</p>
<p>Link : <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a></p>
| Pawan Gorai | <p>If you are using GKE, then refer to this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools" rel="nofollow noreferrer">doc</a> to add or manage node pools, or check these <a href="https://developers.google.com/resources/api-libraries/documentation/container/v1beta1/python/latest/container_v1beta1.projects.zones.clusters.nodePools.html" rel="nofollow noreferrer">instance methods</a> for creating a node pool and autoscaling node pools.</p>
<p>You can also find more client libraries of kubernetes <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Edit 1</strong>: there seems to be no direct or generic library to list/scale node pools, but GKE has an option to <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster" rel="nofollow noreferrer">resize Google Kubernetes Engine (GKE) Standard clusters</a>. You can resize a cluster to increase or decrease the number of nodes in that cluster.</p>
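<p>For example, resizing a specific node pool from the command line looks like this (cluster, pool, zone and project names are placeholders):</p>
<pre><code>gcloud container clusters resize my-cluster \
    --node-pool my-node-pool \
    --num-nodes 3 \
    --zone us-central1-a \
    --project my-project-id
</code></pre>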
<p>You can also use GKE's <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">cluster autoscaler</a> feature that automatically resizes your node pools in response to changing conditions, such as changes in your workloads and resource usage.</p>
<p><strong>Edit 2</strong>: there is no native k8s library available to list or scale node pools.</p>
<p>You can raise your issue <a href="https://github.com/kubernetes-client/java/#support" rel="nofollow noreferrer">here</a></p>
<p>These are the officially available <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries" rel="nofollow noreferrer">kubernetes client libraries</a> and link for <a href="https://github.com/kubernetes-client/java/" rel="nofollow noreferrer">java client Library</a></p>
| Hemanth Kumar |
<p><a href="https://nni.readthedocs.io/en/stable/TrainingService/FrameworkControllerMode.html" rel="nofollow noreferrer">https://nni.readthedocs.io/en/stable/TrainingService/FrameworkControllerMode.html</a></p>
<p>I followed this example to train my model with NNI on a Kubernetes cluster.</p>
<p>I already set up frameworkcontroller(<a href="https://github.com/Microsoft/frameworkcontroller/tree/master/example/run#run-by-kubernetes-statefulset" rel="nofollow noreferrer">https://github.com/Microsoft/frameworkcontroller/tree/master/example/run#run-by-kubernetes-statefulset</a>), k8s-nvidia-plugin and NFS server.</p>
<p>On the command line, I typed <code>nnictl create --config frameworkConfig.yaml</code>.</p>
<p>frameworkConfig.yaml is here:</p>
<pre><code>authorName: default
experimentName: example_mnist
trialConcurrency: 1
maxExecDuration: 10h
maxTrialNum: 100
#choice: local, remote, pai, kubeflow, frameworkcontroller
trainingServicePlatform: frameworkcontroller
searchSpacePath: ~/nni/examples/trials/mnist-tfv1/search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution
  builtinTunerName: TPE
  classArgs:
    #choice: maximize, minimize
    optimize_mode: maximize
assessor:
  builtinAssessorName: Medianstop
  classArgs:
    optimize_mode: maximize
trial:
  codeDir: ~/nni/examples/trials/mnist-tfv1
  taskRoles:
    - name: worker
      taskNum: 1
      command: python3 mnist.py
      gpuNum: 1
      cpuNum: 1
      memoryMB: 8192
      image: msranni/nni:latest
      frameworkAttemptCompletionPolicy:
        minFailedTaskCount: 1
        minSucceededTaskCount: 1
frameworkcontrollerConfig:
  storage: nfs
  nfs:
    server: {your_nfs_server}
    path: {your_nfs_server_exported_path}
</code></pre>
<p>This is the output of <code>kubectl describe pod</code>:</p>
<pre><code>Name: nniexploq1yrw9trialblpky-worker-0
Namespace: default
Priority: 0
Node: mofl-c246-wu4/192.168.0.28
Start Time: Fri, 03 Sep 2021 09:43:07 +0900
Labels: FC_FRAMEWORK_NAME=nniexploq1yrw9trialblpky
FC_TASKROLE_NAME=worker
FC_TASK_INDEX=0
Annotations: FC_CONFIGMAP_NAME: nniexploq1yrw9trialblpky-attempt
FC_CONFIGMAP_UID: 07f61f6b-4073-480e-90a1-cb582b8221cf
FC_FRAMEWORK_ATTEMPT_ID: 0
FC_FRAMEWORK_ATTEMPT_INSTANCE_UID: 0_07f61f6b-4073-480e-90a1-cb582b8221cf
FC_FRAMEWORK_NAME: nniexploq1yrw9trialblpky
FC_FRAMEWORK_NAMESPACE: default
FC_FRAMEWORK_UID: 0ecef503-aaa2-435c-8237-2b6cbb0ff897
FC_POD_NAME: nniexploq1yrw9trialblpky-worker-0
FC_TASKROLE_NAME: worker
FC_TASKROLE_UID: e724f9ec-0c4f-11ec-b7f1-0242ac110006
FC_TASK_ATTEMPT_ID: 0
FC_TASK_INDEX: 0
FC_TASK_UID: e725037b-0c4f-11ec-b7f1-0242ac110006
Status: Pending
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ConfigMap/nniexploq1yrw9trialblpky-attempt
Init Containers:
frameworkbarrier:
Container ID: docker://da951fb0d65c6e42f440c9e950e128dc246cc72ca8b280e8887c80e6931c7847
Image: frameworkcontroller/frameworkbarrier
Image ID: docker-pullable://frameworkcontroller/frameworkbarrier@sha256:4f56b0f70d060ab610bc72d994311432565143cd4bb2613916425f8f3e80c69f
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 03 Sep 2021 09:53:22 +0900
Last State: Terminated
Reason: Error
Message: Framework object from ApiServer: frameworks.frameworkcontroller.microsoft.com "nniexploq1yrw9trialblpky" is forbidden: User "system:serviceaccount:default:default" cannot get resource "frameworks" in API group "frameworkcontroller.microsoft.com" in the namespace "default"
W0903 00:52:56.964433 9 barrier.go:253] Failed to get Framework object from ApiServer: frameworks.frameworkcontroller.microsoft.com "nniexploq1yrw9trialblpky" is forbidden: User "system:serviceaccount:default:default" cannot get resource "frameworks" in API group "frameworkcontroller.microsoft.com" in the namespace "default"
W0903 00:53:06.962820 9 barrier.go:253] Failed to get Framework object from ApiServer: frameworks.frameworkcontroller.microsoft.com "nniexploq1yrw9trialblpky" is forbidden: User "system:serviceaccount:default:default" cannot get resource "frameworks" in API group "frameworkcontroller.microsoft.com" in the namespace "default"
W0903 00:53:16.963508 9 barrier.go:253] Failed to get Framework object from ApiServer: frameworks.frameworkcontroller.microsoft.com "nniexploq1yrw9trialblpky" is forbidden: User "system:serviceaccount:default:default" cannot get resource "frameworks" in API group "frameworkcontroller.microsoft.com" in the namespace "default"
W0903 00:53:16.963990 9 barrier.go:253] Failed to get Framework object from ApiServer: frameworks.frameworkcontroller.microsoft.com "nniexploq1yrw9trialblpky" is forbidden: User "system:serviceaccount:default:default" cannot get resource "frameworks" in API group "frameworkcontroller.microsoft.com" in the namespace "default"
E0903 00:53:16.963998 9 barrier.go:283] BarrierUnknownFailed: frameworks.frameworkcontroller.microsoft.com "nniexploq1yrw9trialblpky" is forbidden: User "system:serviceaccount:default:default" cannot get resource "frameworks" in API group "frameworkcontroller.microsoft.com" in the namespace "default"
E0903 00:53:16.964013 9 barrier.go:470] ExitCode: 1: Exit with unknown failure to tell controller to retry within maxRetryCount.
Exit Code: 1
Started: Fri, 03 Sep 2021 09:43:16 +0900
Finished: Fri, 03 Sep 2021 09:53:16 +0900
Ready: False
Restart Count: 1
Environment:
FC_FRAMEWORK_NAMESPACE: default
FC_FRAMEWORK_NAME: nniexploq1yrw9trialblpky
FC_TASKROLE_NAME: worker
FC_TASK_INDEX: 0
FC_CONFIGMAP_NAME: nniexploq1yrw9trialblpky-attempt
FC_POD_NAME: nniexploq1yrw9trialblpky-worker-0
FC_FRAMEWORK_UID: 0ecef503-aaa2-435c-8237-2b6cbb0ff897
FC_FRAMEWORK_ATTEMPT_ID: 0
FC_FRAMEWORK_ATTEMPT_INSTANCE_UID: 0_07f61f6b-4073-480e-90a1-cb582b8221cf
FC_CONFIGMAP_UID: 07f61f6b-4073-480e-90a1-cb582b8221cf
FC_TASKROLE_UID: e724f9ec-0c4f-11ec-b7f1-0242ac110006
FC_TASK_UID: e725037b-0c4f-11ec-b7f1-0242ac110006
FC_TASK_ATTEMPT_ID: 0
FC_POD_UID: (v1:metadata.uid)
FC_TASK_ATTEMPT_INSTANCE_UID: 0_$(FC_POD_UID)
Mounts:
/mnt/frameworkbarrier from frameworkbarrier-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jgrrf (ro)
Containers:
framework:
Container ID:
Image: msranni/nni:latest
Image ID:
Port: 4000/TCP
Host Port: 0/TCP
Command:
sh
/tmp/mount/nni/LOq1YRw9/BlpKy/run_worker.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 8Gi
nvidia.com/gpu: 1
Requests:
cpu: 1
memory: 8Gi
nvidia.com/gpu: 1
Environment:
FC_FRAMEWORK_NAMESPACE: default
FC_FRAMEWORK_NAME: nniexploq1yrw9trialblpky
FC_TASKROLE_NAME: worker
FC_TASK_INDEX: 0
FC_CONFIGMAP_NAME: nniexploq1yrw9trialblpky-attempt
FC_POD_NAME: nniexploq1yrw9trialblpky-worker-0
FC_FRAMEWORK_UID: 0ecef503-aaa2-435c-8237-2b6cbb0ff897
FC_FRAMEWORK_ATTEMPT_ID: 0
FC_FRAMEWORK_ATTEMPT_INSTANCE_UID: 0_07f61f6b-4073-480e-90a1-cb582b8221cf
FC_CONFIGMAP_UID: 07f61f6b-4073-480e-90a1-cb582b8221cf
FC_TASKROLE_UID: e724f9ec-0c4f-11ec-b7f1-0242ac110006
FC_TASK_UID: e725037b-0c4f-11ec-b7f1-0242ac110006
FC_TASK_ATTEMPT_ID: 0
FC_POD_UID: (v1:metadata.uid)
FC_TASK_ATTEMPT_INSTANCE_UID: 0_$(FC_POD_UID)
Mounts:
/mnt/frameworkbarrier from frameworkbarrier-volume (rw)
/tmp/mount from nni-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jgrrf (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
nni-vol:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: <my nfs server ip>
Path: /another
ReadOnly: false
frameworkbarrier-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-jgrrf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned default/nniexploq1yrw9trialblpky-worker-0 to mofl-c246-wu4
Normal Pulled 12m kubelet Successfully pulled image "frameworkcontroller/frameworkbarrier" in 2.897722411s
Normal Pulling 2m10s (x2 over 12m) kubelet Pulling image "frameworkcontroller/frameworkbarrier"
Normal Pulled 2m8s kubelet Successfully pulled image "frameworkcontroller/frameworkbarrier" in 2.790335708s
Normal Created 2m7s (x2 over 12m) kubelet Created container frameworkbarrier
Normal Started 2m6s (x2 over 12m) kubelet Started container frameworkbarrier
</code></pre>
<p>The Kubernetes NNI pod remains permanently in the "Init" state:</p>
<pre><code>frameworkcontroller-0 1/1 Running 0 5m49s
nniexploq1yrw9trialblpky-worker-0 0/1 Init:0/1 0 42s
</code></pre>
| GiwoongLee | <p><a href="http://github.com/microsoft/frameworkcontroller/issues/64" rel="nofollow noreferrer">github.com/microsoft/frameworkcontroller/issues/64</a></p>
<p>Please refer to this link to solve the problem!</p>
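<p>The pod events show the frameworkbarrier init container failing with an RBAC "forbidden" error: the <code>default</code> service account cannot get the <code>frameworks</code> resource in the <code>frameworkcontroller.microsoft.com</code> API group. The linked issue and the frameworkcontroller examples describe the recommended setup; a minimal sketch of the kind of RBAC grant involved (names are illustrative):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: frameworkbarrier-role
rules:
  - apiGroups: ["frameworkcontroller.microsoft.com"]
    resources: ["frameworks"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: frameworkbarrier-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: frameworkbarrier-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>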
| GiwoongLee |
<p>I have an Airflow environment (v2.4.3) on Kubernetes and I want to sync it with a private git repo so that any changes I make to DAGs in my master branch get automatically picked up by my Airflow environment.</p>
<p>According to <a href="https://airflow.apache.org/docs/helm-chart/stable/manage-dags-files.html#mounting-dags-from-a-private-github-repo-using-git-sync-sidecar" rel="nofollow noreferrer">Airflow documentation</a>, I can use Git-sync sidecar along with an SSH key added to my private git repo and Airflow env to make it work.</p>
<p>However, given that I am constantly creating new private repos and Airflow environments, I am wondering if there is a more simple way of connecting my private git repos to their respective Airflow environment.</p>
<p>If I have a webapp managing my Airflow environments and have access to an OAuth token from GitHub after signing into my account (or any other git service), could I use that to connect an Airflow environment and sync changes to any git repo of my choice under my account?</p>
| jorgeavelar98 | <p>I was able to figure it out.</p>
<p>One can use <a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token" rel="nofollow noreferrer">personal access tokens</a> as passwords (provided by whatever git service the private repo is in) along with the repo's username.</p>
<p>I just stored the personal access token as an Opaque secret in my Airflow K8s cluster and referenced it in my <a href="https://github.com/kubernetes/git-sync/tree/v3.6.5" rel="nofollow noreferrer">git-sync sidecar container YAML</a> definition, which I included in my Airflow deployment YAML.</p>
<pre><code>containers:
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v3.6.5
    args:
      - "-wait=60"
      - "-repo=<repo>"
      - "-branch=master"
      - "-root=/opt/airflow/dags"
      - "-username=<username>"
      - "-password-file=/etc/git-secret/token"
    volumeMounts:
      - name: git-secret
        mountPath: /etc/git-secret
        readOnly: true
      - name: dags-data
        mountPath: /opt/airflow/dags
volumes:
  - name: dags-data
    emptyDir: {}
  - name: git-secret
    secret:
      secretName: github-token
</code></pre>
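<p>For completeness, the <code>github-token</code> secret referenced above can be created with something like this; the key name must match the file name the sidecar reads (<code>token</code>), and the placeholders are yours to fill in:</p>
<pre><code>kubectl create secret generic github-token \
    --from-literal=token=<personal-access-token> \
    -n <airflow-namespace>
</code></pre>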
| jorgeavelar98 |
<p>I'm working on an application that launches K8s Jobs (dockerised batch computer-science applications) and I want to prioritize their launches.</p>
<p>I don't want to use preemption because all jobs have to be done and I want to be sure that the scheduling order is maintained.</p>
<p>When I read this doc: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#non-preempting-priority-class" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#non-preempting-priority-class</a>
It seems that, in non preempting cases, high priority pods can be scheduled after low priority ones if K8S doesn't have the necessary resources at the time.
If the high-priority Jobs are the most resource-demanding, this kind of pod may never be scheduled.</p>
<p>How can I have control over those decisions?</p>
<p>Thanks!</p>
| Saveriu CIANELLI | <p>As you need only non-preempting behaviour, refer to this <a href="https://stackoverflow.com/a/62135156/19230181">SO answer</a> and <a href="https://tutorialwing.com/preemptive-or-non-preemptive-priority-scheduling/" rel="nofollow noreferrer">Doc</a>, which help in understanding the usage of the non-preempting priority class.</p>
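<p>For reference, a non-preempting priority class is declared like this (name and value are illustrative); pods that reference it are placed ahead of lower-priority pods in the scheduling queue but never evict running pods:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never
globalDefault: false
description: "High-priority jobs that must not preempt other workloads."
</code></pre>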
| Hemanth Kumar |
<p>I am using Kubernetes version 1.25 (client and server) and have deployed Airflow using the official Helm charts on the environment. I want the Airflow DAG's KubernetesPodOperator, which runs the spark-submit command, to spawn a driver pod and an executor pod that perform the task. The DAG performs the following steps: <strong>1. take a table from MySQL, 2. dump it to a text file, 3. put that file into a MinIO bucket (similar to AWS S3)</strong>. Currently the driver pod spawns along with the executor pod; the driver pod then eventually fails, as it never reaches a running state, and this causes the executor pod to fail as well. I am authenticating the calls to the Kubernetes API using a Service Account that I pass as configuration.</p>
<p>This is the redacted DAG that I am using. Note that the spark-submit command works perfectly fine on the command line inside a container of the image and generates the expected outcome, so I suspect it is some DAG configuration that I am missing here. Also note that all the jars I refer to here are already part of the image and are referenced from <strong>/opt/spark/connectors/</strong>; I have verified this by doing exec inside the container image.</p>
<pre><code>import logging
import csv
import airflow
from airflow import DAG
from airflow.utils import dates as date
from datetime import timedelta, datetime
from airflow.providers.apache.spark.operators.spark_jdbc import SparkSubmitOperator
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from dateutil.tz import tzlocal
from airflow.kubernetes.volume import Volume
from airflow.kubernetes.volume_mount import VolumeMount
import pendulum
#from airflow.models import Variables

local_tz = pendulum.timezone("Asia/Dubai")

volume_config = {"persistentVolumeClaim": {"claimName": "nfspvc-airflow-executable"}}
air_connectors_volume_config = {"persistentVolumeClaim": {"claimName": "nfspvc-airconnectors"}}

volume_mount = VolumeMount(
    "data-volume",
    mount_path="/air-spark/",
    sub_path=None,
    read_only=False,
)
air_connectors_volume_mount = VolumeMount(
    "air-connectors",
    mount_path="/air-connectors/",
    sub_path=None,
    read_only=False,
)
volume = Volume(
    name="data-volume",
    configs=volume_config
)
air_connectors_volume = Volume(
    name="air-connectors",
    configs=air_connectors_volume_config
)

default_args = {
    'owner': 'panoiqtest',
    'depends_on_past': False,
    'start_date': datetime(2021, 5, 1, tzinfo=local_tz),
    'retries': 1,
    'retry_delay': timedelta(hours=1),
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False
}

dag_daily = DAG(dag_id='operator',
                default_args=default_args,
                catchup=False,
                schedule_interval='0 */1 * * *')

_config = {
    'application': '/opt/spark/connectors/spark-etl-assembly-2.0.jar',
    'num_executors': 2,
    'driver_memory': '5G',
    'executor_memory': '10G',
    'driver_class_path': '/opt/spark/connectors/mysql-connector-java-5.1.49.jar',
    'jars': '/opt/spark/connectors/mysql-connector-java-5.1.49.jar,/opt/spark/connectors/aws-java-sdk-bundle-1.12.374.jar,/opt/spark/connectors/hadoop-aws-3.3.1.jar',
    #'java_class': 'com.spark.ETLHandler'
}

spark_config = {
    "spark.executor.extraClassPath": "/opt/spark/connectors/mysql-connector-java-5.1.49.jar,/opt/spark/connectors/aws-java-sdk-bundle-1.12.374.jar,/opt/spark/connectors/hadoop-aws-3.3.1.jar",
    "spark.driver.extraClassPath": "/opt/spark/connectors/mysql-connector-java-5.1.49.jar,/opt/spark/connectors/aws-java-sdk-bundle-1.12.374.jar,/opt/spark/connectors/hadoop-aws-3.3.1.jar"
}

t2 = BashOperator(
    task_id='bash_example',
    # "scripts" folder is under "/usr/local/airflow/dags"
    bash_command="ls /air-spark/ && pwd",
    dag=dag_daily)


def get_tables(table_file='/csv-directory/success-dag.csv', **kwargs):
    logging.info("#Starting get_tables()#")
    tables_list = []
    with open(table_file) as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        tables_list = [row for row in reader]
    tables_list.pop(0)  # remove header
    return tables_list


def load_table(table_name, application_args, **kwargs):
    k8s_arguments = [
        '--name=datalake-pod',
        '--master=k8s://https://IP:6443',
        '--deploy-mode=cluster',
        # '--driver-cores=4',
        # '--executor-cores=4',
        # '--num-executors=1',
        # '--driver-memory=8192m',
        '--executor-memory=8192m',
        '--conf=spark.kubernetes.authenticate.driver.serviceAccountName=air-airflow-sa',
        '--driver-class-path=/opt/spark/connectors//mysql-connector-java-5.1.49.jar,/opt/spark/connectors/aws-java-sdk-bundle-1.12.374.jar,/opt/spark/connectors/hadoop-aws-3.3.1.jar',
        '--conf=spark.driver.extraJavaOptions=-Divy.cache.dir=/tmp -Divy.home=/tmp',
        '--jars=/opt/spark/connectors/mysql-connector-java-5.1.49.jar,/opt/spark/connectors/aws-java-sdk-bundle-1.12.374.jar,/opt/spark/connectors/hadoop-aws-3.3.1.jar',
        '--conf=spark.kubernetes.namespace=development',
        # '--conf=spark.driver.cores=4',
        # '--conf=spark.executor.cores=4',
        # '--conf=spark.driver.memory=8192m',
        # '--conf=spark.executor.memory=8192m',
        '--conf=spark.kubernetes.container.image=image_name',
        '--conf=spark.kubernetes.container.image.pullSecrets=Secret_name',
        '--conf=spark.kubernetes.container.image.pullPolicy=Always',
        '--conf=spark.dynamicAllocation.enabled=true',
        '--conf=spark.dynamicAllocation.shuffleTracking.enabled=true',
        '--conf=spark.kubernetes.driver.volumes.persistentVolumeClaim.air-connectors.mount.path=/air-connectors/',
        '--conf=spark.kubernetes.driver.volumes.persistentVolumeClaim.air-connectors.mount.readOnly=false',
        '--conf=spark.kubernetes.driver.volumes.persistentVolumeClaim.air-connectors.options.claimName=nfspvc-airconnectors',
        '--conf=spark.kubernetes.file.upload.path=/opt/spark',
        '--class=com.spark.ETLHandler',
        '/opt/spark/connectors/spark-etl-assembly-2.0.jar'
    ]
    all_arguments = k8s_arguments + application_args
    return KubernetesPodOperator(
        dag=dag_daily,
        name="zombie-dry-run",  #spark_submit_for_"+table_name
        # image='image_name',
        image='imagerepo.io:5050/panoiq/tools:sparktester',
        image_pull_policy='Always',
        image_pull_secrets='registry',
        namespace='development',
        cmds=['spark-submit'],
        arguments=all_arguments,
        labels={"foo": "bar"},
        task_id="dry_run_demo",  #spark_submit_for_"+table_name
        # config_file="conf",
        volumes=[volume, air_connectors_volume],
        volume_mounts=[volume_mount, air_connectors_volume_mount],
    )


push_tables_list = PythonOperator(task_id="load_tables_list",
                                  python_callable=get_tables,
                                  dag=dag_daily)
complete = DummyOperator(task_id="complete",
                         dag=dag_daily)

for rec in get_tables():
    table_name = rec[9]
    s3_folder_name = rec[14]
    s3_object_name = rec[13]
    jdbcUrl = rec[4] + rec[8]
    lineagegraph = ",".join(rec[17].split("#"))
    entitlement = rec[10]
    remarks = rec[11]
    username = rec[5]
    password = rec[6]
    s3_folder_format = rec[16]
    select_query = rec[9]
    application_args = [select_query, s3_folder_name, jdbcUrl, lineagegraph, entitlement, remarks, username, password, s3_folder_format, s3_object_name]
    push_tables_list >> load_table(table_name, application_args) >> complete
</code></pre>
<p>Any Help or pointers are appreciated on the issue!! Thanks in advance!!</p>
| SAGE | <p>I was able to fix this issue with the code below. I used the Airflow pod itself as the driver, so it just spawns an executor pod, runs the job, and dies once the job flow is completed.</p>
<p>Below is my Python file for anyone who needs to do this again.</p>
<pre><code>import logging
import csv
import airflow
from airflow import DAG
from airflow.utils import dates as date
from datetime import timedelta, datetime
from airflow.providers.apache.spark.operators.spark_jdbc import SparkSubmitOperator
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
#from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
from dateutil.tz import tzlocal
from airflow.kubernetes.volume import Volume
from airflow.kubernetes.volume_mount import VolumeMount
import pendulum
#from airflow.models import Variables

local_tz = pendulum.timezone("Asia/Dubai")

default_args = {
    'owner': 'test',
    'depends_on_past': False,
    'start_date': datetime(2021, 5, 1, tzinfo=local_tz),
    'retries': 1,
    'retry_delay': timedelta(hours=1),
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False
}

dag_daily = DAG(dag_id='datapipeline',
                default_args=default_args,
                catchup=False,
                schedule_interval='@hourly')

start = DummyOperator(task_id='run_this_first', dag=dag_daily)

_config = {
    'application': '/air-spark/spark-etl-assembly-2.0.jar',
    'num_executors': 2,
    'driver_memory': '5G',
    'executor_memory': '10G',
    'driver_class_path': '/air-connectors/mysql-connector-java-5.1.49.jar',
    'jars': '/air-connectors/mysql-connector-java-5.1.49.jar,/air-connectors/aws-java-sdk-bundle-1.12.374.jar,/air-connectors/hadoop-aws-3.3.1.jar',
    #'java_class': 'com.spark.ETLHandler'
}

spark_config = {
    "spark.executor.extraClassPath": "/air-connectors/mysql-connector-java-5.1.49.jar,/air-connectors/aws-java-sdk-bundle-1.12.374.jar,/air-connectors/hadoop-aws-3.3.1.jar",
    "spark.driver.extraClassPath": "/air-connectors/mysql-connector-java-5.1.49.jar,/air-connectors/aws-java-sdk-bundle-1.12.374.jar,/air-connectors/hadoop-aws-3.3.1.jar"
}

t2 = BashOperator(
    task_id='bash_example',
    # "scripts" folder is under "/usr/local/airflow/dags"
    bash_command="ls /air-spark/ && pwd",
    dag=dag_daily)


def get_tables(table_file='/csv-directory/success-dag.csv', **kwargs):
    logging.info("#Starting get_tables()#")
    tables_list = []
    with open(table_file) as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        tables_list = [row for row in reader]
    tables_list.pop(0)  # remove header
    return tables_list


def load_table(table_name, application_args, **kwargs):
    k8s_arguments = ["--master", "local[*]", "--conf", "spark.executor.extraClassPath=/air-connectors/mysql-connector-java-5.1.49.jar",
                     "--conf", "spark.driver.extraClassPath=/opt/spark/connectors/mysql-connector-java-5.1.49.jar", "--jars",
                     "/opt/spark/connectors/mysql-connector-java-5.1.49.jar,/opt/spark/connectors/ojdbc11-21.7.0.0.jar",
                     "--conf=spark.kubernetes.container.image=imagerepo.io:5050/tools:sparktesterV0.6",
                     "--conf=spark.kubernetes.container.image.pullSecrets=registry",
                     "--num-executors", "5", "--executor-memory", "1G", "--driver-memory", "2G", "--class=com.spark.ETLHandler",
                     "--name", "arrow-spark", "/opt/spark/connectors/spark-etl-assembly-2.0.jar"]
    all_arguments = k8s_arguments + application_args
    # spark =
    return KubernetesPodOperator(
        image="imagerepo.io:5050/tools:sparktesterV0.6",
        service_account_name="air-airflow-worker",
        name="data_pipeline_k8s",
        task_id="data_pipeline_k8s",
        get_logs=True,
        dag=dag_daily,
        namespace="development",
        image_pull_secrets="registry",
        image_pull_policy="Always",
        cmds=["spark-submit"],
        arguments=all_arguments
    )


# spark.set_upstream(start)
push_tables_list = PythonOperator(task_id="load_tables_list", python_callable=get_tables, dag=dag_daily)
complete = DummyOperator(task_id="complete", dag=dag_daily)

for rec in get_tables():
    table_name = rec[9]
    s3_folder_name = rec[14]
    s3_object_name = rec[13]
    jdbcUrl = rec[4] + rec[8]
    lineagegraph = ",".join(rec[17].split("#"))
    entitlement = rec[10]
    remarks = rec[11]
    username = rec[5]
    password = rec[6]
    s3_folder_format = rec[16]
    select_query = rec[9]
    application_args = [select_query, s3_folder_name, jdbcUrl, lineagegraph, entitlement, remarks, username, password, s3_folder_format, s3_object_name]
    push_tables_list >> load_table(table_name, application_args) >> complete
</code></pre>
| SAGE |
<p>I want to reformat the default logging output to json format without code changes</p>
<p>Docker image: jboss/keycloak:16.1.1</p>
<p>The current log structure is the default</p>
<pre><code>15:04:16,056 INFO [org.infinispan.CLUSTER] (thread-5,null) [Context=authenticationSessions] ISPN100002: Starting rebalance with members [], phase READ_OLD_WRITE_ALL, topology id 2
15:04:16,099 INFO [org.infinispan.CLUSTER] (thread-20,) [Context=offlineClientSessions] ISPN100002: Starting rebalance with members [], phase READ_OLD_WRITE_ALL, topology id 2
</code></pre>
<p>I tried to use <code>LOG_CONSOLE_OUTPUT</code> as described here <a href="https://www.keycloak.org/server/logging" rel="nofollow noreferrer">https://www.keycloak.org/server/logging</a> but it's not working.</p>
<p>Any ideas please?</p>
| Areej Mohey | <p>The link you posted is a tutorial for the Quarkus-based distro. However, your docker image is based on Wildfly.</p>
<p><a href="https://www.youtube.com/watch?v=AnHUqu-Vi_E" rel="nofollow noreferrer">Here is a Youtube video</a> which explains how to configure json logging with Wildfly-based distro.</p>
| sventorben |
<pre><code>rules:
  - verbs:
      - get
      - list
    apiGroups:
      - ''
    resources:
      - namespaces
      - pods
      - pods/log
  - verbs:
      - get
      - list
    apiGroups:
      - apps
    resources:
      - deployments
</code></pre>
<p>I want to know the difference between</p>
<p>apiGroups:</p>
<ul>
<li>''</li>
</ul>
<p>and</p>
<p>apiGroups:</p>
<ul>
<li>apps</li>
</ul>
<p>What is the importance of apiGroups in manifests?</p>
| Akash11 | <p>The API group in Kubernetes identifies which group of APIs a rule targets. This is necessary because different API groups can have the same verbs, and because Kubernetes is highly extensible and allows the addition of new APIs whose verbs and resource names can clash with those of other APIs.</p>
<p>In the manifest file, the API group "" (empty string) represents the core Kubernetes API group; it is used for Pods, so for them apiGroups is "". For Deployments, apiGroups is "apps" (historically "extensions" was also used).</p>
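<p>The same split shows up in the <code>apiVersion</code> field of the objects themselves, which is a quick way to see which group an RBAC rule must name (illustrative fragments):</p>
<pre><code># Pod lives in the core group, so apiVersion has no group prefix
apiVersion: v1
kind: Pod
---
# Deployment lives in the "apps" group
apiVersion: apps/v1
kind: Deployment
</code></pre>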
<p>Refer to this <a href="https://kubernetes.io/docs/reference/using-api/#api-groups" rel="nofollow noreferrer">API Group official doc</a></p>
| Hemanth Kumar |
<p>I am running a new relic chart in helm (from this repo -> <a href="https://github.com/newrelic/helm-charts/blob/master/charts/newrelic-logging" rel="nofollow noreferrer">https://github.com/newrelic/helm-charts/blob/master/charts/newrelic-logging</a>, this is my output when running in my cluster:</p>
<p><code>helm list -A -n kube-system</code></p>
<pre><code>NAME NAMESPACE. REVISION. UPDATED.
newrelic-logging kube-system 1 2021-06-23 18:54:54.383769 +0200 CEST
STATUS. CHART. APP VERSION
deployed newrelic-logging-1.4.7 1.4.6
</code></pre>
<p>I am trying to set a specific value here: <a href="https://github.com/newrelic/helm-charts/blob/master/charts/newrelic-logging/values.yaml" rel="nofollow noreferrer">https://github.com/newrelic/helm-charts/blob/master/charts/newrelic-logging/values.yaml</a></p>
<p>To do this I am using <code>helm upgrade</code>. I have tried:</p>
<p><code>helm upgrade newrelic-logging newrelic-logging-1.4.7 -f values.yaml -n kube-system</code>
<code>helm upgrade newrelic-logging-1.4.7 newrelic-logging --set updatedVal=0 -n kube-system</code></p>
<p>However with these commands I am seeing the output:</p>
<p><code>Error: failed to download "newrelic-logging-1.4.7"</code></p>
<p>and</p>
<p><code>Error: failed to download "newrelic-logging"</code></p>
<p>Why, and how do I fix this? I have also run <code>helm repo update</code> and it completes with no error messages.</p>
<p>Unfortunately I don't see how this was initially set up, as the previous employee has left the company, and it's too risky to stop and redeploy right now.</p>
| i'i'i'i'i'i'i'i'i'i | <p>To update the current chart with new values without upgrading the chart version, you can try:</p>
<pre><code>helm upgrade --reuse-values -f values.yaml newrelic-logging kube-system/newrelic-logging
</code></pre>
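<p>If that still fails with "failed to download", it usually means Helm cannot resolve the chart reference locally. A hedged alternative is to point the upgrade at the chart from the New Relic chart repository (verify the repository URL and chart name against your setup):</p>
<pre><code>helm repo add newrelic https://helm-charts.newrelic.com
helm repo update
helm upgrade --reuse-values -f values.yaml -n kube-system newrelic-logging newrelic/newrelic-logging
</code></pre>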
| Abhishek Singh |
<p>I'm using ReactJS for the frontend with an Nginx load balancer, and Laravel for the backend with MongoDB.
In the old architecture design, the code is uploaded to GitHub in separate frontend and backend repos.</p>
<p>We still do not use Docker and Kubernetes, and I want to introduce them in the new architecture design. <strong>I use a private cloud server</strong>, so I'm restricted from deploying on AWS/Azure/GCP/etc.</p>
<p><em>Please share your architecture plan and implementation for a better approach to microservices!</em></p>
| SyedAsadRazaDevops | <p>As per my thinking:</p>
<ol>
<li>first, make a Dockerfile for the React project and one for the Laravel project (a sketch follows this list)</li>
<li>then push the images to a private Docker registry [Docker Hub]</li>
<li>install Docker and k8s on the VM</li>
<li>deploy containers 1=React and 2=Laravel from those images</li>
<li>also deploy 3=Nginx and 4=Mongo containers from default marketplace images</li>
</ol>
<p><strong>Some of my questions:</strong></p>
<ol>
<li>How do I make the connections between the containers?</li>
<li>How do I pull a new image into a container for a new release version?</li>
<li>How do I make replicas for a disaster recovery plan?</li>
<li>How do I monitor errors and performance?</li>
<li>How do I build the pipeline (most important)?</li>
<li>How do I set up dev, staging, and production environments?</li>
</ol>
| SyedAsadRazaDevops |
<p>I have a <code>skaffold.yaml</code>file like below:</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  mk:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: learnertester/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/ticketing-client
</code></pre>
<p>Now, I have installed <code>microk8s</code> on Ubuntu 22.04 and want to use it. I have also defined an alias <code>mk</code> for <code>microk8s kubectl</code>. But when I change the following line:</p>
<pre><code>deploy:
  kubectl:
</code></pre>
<p>To:</p>
<pre><code>deploy:
  mk:
</code></pre>
<p>I get the below error running <code>skaffold dev</code>:</p>
<pre><code>parsing skaffold config: error parsing skaffold configuration file: unable to parse config: yaml: unmarshal errors:
line 55: field mk not found in type v2alpha3.DeployConfig
</code></pre>
<p>How should I fix that?</p>
| best_of_man | <p>As per this <a href="https://microk8s.io/docs/getting-started" rel="nofollow noreferrer">doc</a>, use <code>microk8s kubectl</code> for deploy instead of mk.
Skaffold might not be picking up the alias mk, or the alias has not been set up properly.</p>
<blockquote>
<p>add an alias (append to ~/.bash_aliases) like this:
alias kubectl='microk8s kubectl'</p>
</blockquote>
<p>Then use it as below and give it a try:</p>
<pre><code>deploy:
  microk8s kubectl:
</code></pre>
<p>Refer to this <a href="https://skaffold.dev/docs/references/yaml/" rel="nofollow noreferrer">skafold.yaml doc</a> for more information on yaml and also you need to use the correct dest path (#destination path in the container where the files should be synced to..)</p>
<p>You can also check out <a href="https://cloud.google.com/code" rel="nofollow noreferrer">Cloud Code for IntelliJ and VS Code</a> or the online <a href="https://ide.cloud.google.com/" rel="nofollow noreferrer">Cloud Shell Editor</a> which provides skaffold.yaml editing assistance, including highlighting errors in the file.</p>
| Hemanth Kumar |
<p>Consider that I have 2 containers inside a pod.</p>
<ol>
<li>Configured startup probe on these 2 containers</li>
<li>It will come to READY state only when both probes are successfully executed and satisfied</li>
<li>If one of the probes fails, one container may remain in a not-READY state.</li>
</ol>
<p>I need to have a condition here:
if any one of the containers inside my pod does not come up and reach the READY state after a few probe attempts,
I need to destroy this pod on the node.</p>
<p>Is that possible ?</p>
| sethu ram | <p>Yes, you can do so by a combination of a readiness probe, a liveness probe, and the pod's termination policy; a combined sketch follows the list below.</p>
<ol>
<li><p>Configure a readiness probe for each container in the pod to determine if it’s ready to accept traffic.</p>
</li>
<li><p>Configure a liveness probe for each container to check if it's running and healthy. If the probe fails, the container will be restarted.</p>
</li>
<li><p>Set the pod’s termination policy to ‘Failure’. This means that if any container in the pod fails, the entire pod will be terminated.</p>
</li>
</ol>
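<p>A minimal sketch of the probe part on one of the containers; the endpoint, port and thresholds are illustrative, not taken from your spec. A container whose liveness probe keeps failing is restarted and the pod ends up in CrashLoopBackOff, which you can then act on:</p>
<pre><code>containers:
  - name: app
    image: my-app:latest            # illustrative image
    readinessProbe:
      httpGet:
        path: /ready                # illustrative endpoint
        port: 8080
      periodSeconds: 5
      failureThreshold: 3
    livenessProbe:
      httpGet:
        path: /healthz              # illustrative endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
</code></pre>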
<p>Refer to the <a href="https://www.fairwinds.com/blog/a-guide-to-understanding-kubernetes-liveness-probes-best-practices#:%7E:text=A%20few%20other%20liveness%20probe%20best%20practices:%7E:text=Understanding%20liveness%20probe%20settings" rel="nofollow noreferrer">liveness probe best practices</a> written by Adam Zahorscak for more information.</p>
| Murali Sankarbanda |
<p>I've installed pgAdmin (<a href="https://artifacthub.io/packages/helm/runix/pgadmin4" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/runix/pgadmin4</a>) in my k8s cluster. Using port-forwarding I can access the web interface of pgAdmin, but I want to use <code>kubectl proxy</code> instead of <code>kubectl port-forward</code> because when using port forwarding the connection is not stable enough (see the problems with <code>lost connection to pod</code>). So I hope <code>kubectl proxy</code> is more stable, but my problem is when I run <code>kubectl proxy</code> and try to access pgAdmin, I'm getting the following error in my browser:</p>
<p><code>stream error: stream ID 5; INTERNAL_ERROR</code></p>
<p>I'm using the following URL to access pgAdmin: <code>http://localhost:8001/api/v1/namespaces/default/services/pgadmin-pgadmin4:80/proxy</code>. Since the browser is already being redirected to <code>http://localhost:8001/api/v1/namespaces/default/services/pgadmin-pgadmin4:80/proxy/login?next=%2F</code> (note the last part), I know that pgAdmin is working, but I've no idea how to solve the <code>INTERNAL_ERROR</code> issue. When I check the console that runs the <code>kubectl proxy</code> command, the output is the following:</p>
<pre><code>Starting to serve on 127.0.0.1:8001
E0212 17:56:51.338567 41088 proxy_server.go:147] Error while proxying request: stream error: stream ID 5; INTERNAL_ERROR
</code></pre>
<p>Any idea how to fix this issue? An alternative would be to have a stable port-forwarding, but it seems that there's only the "while-true"-solution to restart the port-forwarding whenever the connection to the pod has been lost.</p>
| Nrgyzer | <p>It seems that a few filters/rules are obstructing your access to pgAdmin from the Kubernetes cluster. This can be resolved by disabling the filters. Use the below command to connect to pgAdmin without any filters.</p>
<pre><code>kubectl proxy --address='0.0.0.0' --disable-filter=true
</code></pre>
| Kiran Kotturi |
<p>Im attempting to incorporate git-sync sidecar container into my Airflow deployment yaml so my private Github repo gets synced to my Airflow Kubernetes env every time I make a change in the repo.</p>
<p>So far, it successfully creates a git-sync container along with our scheduler, worker, and web server pods, each in their respective pod (ex: scheduler pod contains a scheduler container and gitsync container).
</p>
<p>I looked at the git-sync container logs and it looks like it successfully connects with my private repo (using a personal access token) and prints success logs every time I make a change to my repo.</p>
<pre><code>INFO: detected pid 1, running init handler
I0411 20:50:31.009097 12 main.go:401] "level"=0 "msg"="starting up" "pid"=12 "args"=["/git-sync","-wait=60","-repo=https://github.com/jorgeavelar98/AirflowProject.git","-branch=master","-root=/opt/airflow/dags","-username=jorgeavelar98","-password-file=/etc/git-secret/token"]
I0411 20:50:31.029064 12 main.go:950] "level"=0 "msg"="cloning repo" "origin"="https://github.com/jorgeavelar98/AirflowProject.git" "path"="/opt/airflow/dags"
I0411 20:50:31.031728 12 main.go:956] "level"=0 "msg"="git root exists and is not empty (previous crash?), cleaning up" "path"="/opt/airflow/dags"
I0411 20:50:31.894074 12 main.go:760] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03"
I0411 20:50:31.907256 12 main.go:800] "level"=0 "msg"="adding worktree" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "branch"="origin/master"
I0411 20:50:31.911039 12 main.go:860] "level"=0 "msg"="reset worktree to hash" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03"
I0411 20:50:31.911065 12 main.go:865] "level"=0 "msg"="updating submodules"
</code></pre>
<p> </p>
<p><strong>However, despite there being no error logs in my git-sync container logs, I could not find any of the files in the destination directory where my repo is supposed to be synced into (/opt/airflow/dags). Therefore, no DAGs are appearing in the Airflow UI.</strong></p>
<p>This is our scheduler containers/volumes yaml definition for reference. We have something similar for workers and webserver</p>
<pre><code>containers:
  - name: airflow-scheduler
    image: <redacted>
    imagePullPolicy: IfNotPresent
    envFrom:
      - configMapRef:
          name: "AIRFLOW_SERVICE_NAME-env"
    env:
      <redacted>
    resources:
      requests:
        memory: RESOURCE_MEMORY
        cpu: RESOURCE_CPU
    volumeMounts:
      - name: scripts
        mountPath: /home/airflow/scripts
      - name: dags-data
        mountPath: /opt/airflow/dags
        subPath: dags
      - name: dags-data
        mountPath: /opt/airflow/plugins
        subPath: plugins
      - name: variables-pools
        mountPath: /home/airflow/variables-pools/
      - name: airflow-log-config
        mountPath: /opt/airflow/config
    command:
      - "/usr/bin/dumb-init"
      - "--"
    args:
      <redacted>
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v3.6.5
    args:
      - "-wait=60"
      - "-repo=<repo>"
      - "-branch=master"
      - "-root=/opt/airflow/dags"
      - "-username=<redacted>"
      - "-password-file=/etc/git-secret/token"
    volumeMounts:
      - name: git-secret
        mountPath: /etc/git-secret
        readOnly: true
      - name: dags-data
        mountPath: /opt/airflow/dags
volumes:
  - name: scripts
    configMap:
      name: AIRFLOW_SERVICE_NAME-scripts
      defaultMode: 493
  - name: dags-data
    emptyDir: {}
  - name: variables-pools
    configMap:
      name: AIRFLOW_SERVICE_NAME-variables-pools
      defaultMode: 493
  - name: airflow-log-config
    configMap:
      name: airflow-log-configmap
      defaultMode: 493
  - name: git-secret
    secret:
      secretName: github-token
</code></pre>
<p>What can be the issue? I couldn't find much documentation that could help me further investigate. Any help and guidance would be greatly appreciated!</p>
| jorgeavelar98 | <p>Looks like my issue was that my worker, scheduler, and web server container had different dag volume mounts from the ones I defined for my git-sync container.</p>
<p>This is what I had:</p>
<pre><code>containers:
  - name: airflow-scheduler
    image: <redacted>
    imagePullPolicy: IfNotPresent
    envFrom:
      - configMapRef:
          name: "AIRFLOW_SERVICE_NAME-env"
    env:
      <redacted>
    resources:
      requests:
        memory: RESOURCE_MEMORY
        cpu: RESOURCE_CPU
    volumeMounts:
      - name: scripts
        mountPath: /home/airflow/scripts
      - name: dags-data
        mountPath: /opt/airflow/dags
        subPath: dags
      - name: dags-data
        mountPath: /opt/airflow/plugins
        subPath: plugins
      - name: variables-pools
        mountPath: /home/airflow/variables-pools/
      - name: airflow-log-config
        mountPath: /opt/airflow/config
</code></pre>
<p>And the following edits made it work. I removed the dag subpath and plugins volume mount:</p>
<pre><code>containers:
  - name: airflow-scheduler
    image: <redacted>
    imagePullPolicy: IfNotPresent
    envFrom:
      - configMapRef:
          name: "AIRFLOW_SERVICE_NAME-env"
    env:
      <redacted>
    resources:
      requests:
        memory: RESOURCE_MEMORY
        cpu: RESOURCE_CPU
    volumeMounts:
      - name: scripts
        mountPath: /home/airflow/scripts
      - name: dags-data
        mountPath: /opt/airflow/dags
      - name: variables-pools
        mountPath: /home/airflow/variables-pools/
      - name: airflow-log-config
        mountPath: /opt/airflow/config
</code></pre>
| jorgeavelar98 |
<p>I have the below pod.yaml file and I am expecting the pod to keep running, but it completes.
Can anyone please tell me what I am doing wrong?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: base
      image: "alpine:latest"
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo app1:$(date -u) >> /data/test.log; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: emy-claim
</code></pre>
| Naxi | <p>Please update the <code>pod.yaml</code> file with the below information to keep the pod running via the <code>while</code> loop and to append log entries continuously to the <code>test.log</code> file under the <code>/data</code> directory every <code>5 seconds</code>.</p>
<pre><code>args: ["-c", "while true; do echo $(date -u) 'app1' >> /data/test.log; sleep 5; done"]
</code></pre>
<p>Please see the <a href="https://learning-ocean.com/tutorials/kubernetes/kubernetes-side-car-container" rel="nofollow noreferrer">documentation</a> for more details.</p>
| Kiran Kotturi |
<p>I have a cronjob like below</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: foo-bar
namespace: kube-system
spec:
schedule: "*/30 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: foo-cleaner
containers:
- name: cleanup
image: bitnami/kubectl
imagePullPolicy: IfNotPresent
command:
- "bin/bash"
- -c
- command1;
command2;
command3;
- new_command1;
new_command2;
new_command3;
</code></pre>
<p>Sometimes <code>command2</code> fails, throws an error, and the cronjob execution fails. I want to run <code>new_command1</code> even if any command in the previous block fails</p>
| mbxzxz | <p>In the container spec you need to pass the command and args as below:</p>
<pre><code>command: ["/bin/sh","-c"]
args: ["command1 || command2; command3 && command4"]
</code></pre>
<p>The command ["/bin/sh", "-c"] runs a shell and executes the following instructions; the args are then passed as commands to that shell.</p>
<p>In shell scripting, a semicolon separates commands so the next one runs regardless of the previous exit status, && runs the following command only if the previous one succeeds, and the OR operator (||) runs the following command only if the previous one fails.</p>
<p>In the example above, command2 runs only when command1 fails, and command4 runs only when command3 succeeds. Change your YAML accordingly and give it a try.</p>
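<p>Applied to the CronJob from the question, a rough sketch (keeping the placeholder command names) could look like this; here plain ';' separators are used so every later command still runs even when an earlier one fails:</p>
<pre><code>containers:
- name: cleanup
  image: bitnami/kubectl
  imagePullPolicy: IfNotPresent
  command: ["/bin/sh", "-c"]
  # a single shell script: ';' keeps going after a failing command,
  # so new_command1..3 still run even if command2 errors out
  args:
    - "command1; command2; command3; new_command1; new_command2; new_command3"
</code></pre>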
<p>Refer this <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#writing-a-job-spec" rel="nofollow noreferrer">Doc</a> for cron jobs.</p>
| Hemanth Kumar |
<p>I'm trying to get secrets with <code>kubectl</code> with:</p>
<pre><code>kubectl get secrets/postgresql -n my-namespace -o=go-template=='{{index .data "postgresql-password" }}'
</code></pre>
<p>it returns the correct value ( I get the same result with lens )</p>
<p>But when I do:</p>
<pre><code>kubectl get secrets/postgresql -n my-namespace -o=go-template=='{{index .data "postgresql-password" }}' | base64 -d
</code></pre>
<p>I get:</p>
<pre><code>base64: invalid input
</code></pre>
<p>But Lens can decode it easily. What am I doing wrong ?</p>
| Juliatzin | <p>First check command :</p>
<pre><code>kubectl get secrets/postgresql -n my-namespace -o=go-template=='{{index .data "postgresql-password" }}'
</code></pre>
<p>If you see the base64 string, that part is fine. But notice the extra "=" at the very start of the output: the doubled "==" in <code>-o=go-template==</code> makes the second "=" part of the template, so it is printed in front of the encoded value, and that leading "=" is what makes <code>base64 -d</code> fail.</p>
<p>Just drop the extra "=" and run the following:</p>
<pre><code>kubectl get secrets/postgresql -n my-namespace -o=go-template='{{index .data "postgresql-password" }}' | base64 -d
</code></pre>
| Ivan Ponomarev |
<p>I have a domain at Cloudflare and some wildcards for subdomains</p>
<p><a href="https://i.stack.imgur.com/R46FF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R46FF.png" alt="enter image description here" /></a></p>
<p>which both point to the load balancer of an nginx ingress of a Kubernetes cluster (GKE) of the GCP. Now, we have two pods and services running each (echo1 and echo2, which are essentially identical) and when I apply an ingress</p>
<pre><code>kind: Ingress
metadata:
name: echo-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: "echo1.eu3.example.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: echo1
port:
number: 80
- host: "echo2.example.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: echo2
port:
number: 80
</code></pre>
<p>I can reach echo2 under echo2.example.com, but not echo1.eu3.example.com. My question is how I can make the second one reachable as well.</p>
| tobias | <p>I can advise you to run a quick check.</p>
<p>Just set the Proxy status for "echo1.eu3.example.com" to DNS only, then check the access. If that works, install certificates in Kubernetes via cert-manager. We faced this issue a few times and resolved it by flattening the names to three labels, for instance "echo1-eu3.example.com". It seems Cloudflare does not like deeper subdomains :) (its Universal SSL certificate only covers a single wildcard level, so *.example.com is covered but *.eu3.example.com is not). Of course, if someone writes up a solution for working with deep subdomains behind Cloudflare, that would be good to know :)</p>
| Ivan Ponomarev |
<p>I have a problem.
There is a preStop option in the manifest file and an OOMKilled event happened.
The pod was restarted but no heapdump was created.</p>
<pre><code>lifecycle:
  preStop:
    exec:
      command: ["/tmp/preStop.sh"]
</code></pre>
<p>The heapdump works when I manually terminate the pod.</p>
<p>So I wonder: if the pod is restarted, is preStop not supposed to be executed?</p>
<p>I thought when pod is restarted, first send TermSignal to application and execute preStop and terminate and start pod again. Am I wrong?</p>
<p>Thanks
Best Regards.</p>
| user2286858 | <blockquote>
<p>when the pod is restarted, first send TermSignal to the application
and execute preStop and terminate and start pod again. Am I wrong?</p>
</blockquote>
<p>As per the official <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p><code>PreStop</code> hooks are not executed asynchronously from the signal to
stop the Container; the hook must complete its execution before the
<strong>TERM</strong> signal can be sent.</p>
<p>If a <code>PreStop</code> hook hangs during execution, the Pod's phase will be
<strong>Terminating</strong> and remain there until the Pod is killed after its <code>terminationGracePeriodSeconds</code> expires. This grace period applies to
the total time it takes for both the PreStop hook to execute and for
the Container to stop normally.</p>
</blockquote>
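<p>In practice that means the grace period has to be long enough to cover both the heap dump in <code>preStop.sh</code> and the normal shutdown. A minimal sketch (the 180-second value is only an assumption, size it to how long your dump takes):</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 180
  containers:
    - name: app
      image: my-app:latest   # placeholder image
      lifecycle:
        preStop:
          exec:
            command: ["/tmp/preStop.sh"]
</code></pre>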
<p>Hope the above information is useful to you.</p>
| Kiran Kotturi |
<p><em>*Cross-posted from k3d GitHub Discussion: <a href="https://github.com/rancher/k3d/discussions/690" rel="nofollow noreferrer">https://github.com/rancher/k3d/discussions/690</a></em></p>
<p>I am attempting to expose two services over two ports. As an alternative, I'd also love to know how to expose them over the same port and use different routes. I've attempted a few articles and a lot of configurations. Let me know where I'm going wrong with the networking of k3d + k3s / kubernetes + traefik (+ klipper?)...</p>
<p>I posted an example:
<a href="https://github.com/ericis/k3d-networking" rel="nofollow noreferrer">https://github.com/ericis/k3d-networking</a></p>
<h3 id="the-goal-xa62">The goal:</h3>
<ul>
<li>Reach "app-1" on host over port 8080</li>
<li>Reach "app-2" on host over port 8091</li>
</ul>
<h3 id="steps-wa5i">Steps</h3>
<p><em>*See: <a href="https://github.com/ericis/k3d-networking" rel="nofollow noreferrer">files in repo</a></em></p>
<ol>
<li><p>Configure <code>k3d</code> cluster and expose app ports to load balancer</p>
<pre><code>ports:
# map localhost to loadbalancer
- port: 8080:80
nodeFilters:
- loadbalancer
# map localhost to loadbalancer
- port: 8091:80
nodeFilters:
- loadbalancer
</code></pre>
</li>
<li><p>Deploy apps with "deployment.yaml" in Kubernetes and expose container ports</p>
<pre><code>ports:
- containerPort: 80
</code></pre>
</li>
<li><p>Expose services within kubernetes. Here, I've tried two methods.</p>
<ul>
<li><p>Using CLI</p>
<pre><code>$ kubectl create service clusterip app-1 --tcp=8080:80
$ kubectl create service clusterip app-2 --tcp=8091:80
</code></pre>
</li>
<li><p>Using "service.yaml"</p>
<pre><code>spec:
ports:
- protocol: TCP
# expose internally
port: 8080
# map to app
targetPort: 80
selector:
run: app-1
</code></pre>
</li>
</ul>
</li>
<li><p>Expose the services outside of kubernetes using "ingress.yaml"</p>
<pre><code>backend:
service:
name: app-1
port:
# expose from kubernetes
number: 8080
</code></pre>
</li>
</ol>
| Eric Swanson | <p>You either have to use an ingress, or have to open ports on each individual node (k3d runs on docker, so you have to expose the docker ports)</p>
<p><em>Without opening a port <em>during the creation of the k3d cluster</em>, a nodeport service will not expose your app</em></p>
<p><code>k3d cluster create mycluster -p 8080:30080@agent[0]</code></p>
<p>For example, this would open an "outside" port (on your localhost) 8080 and map it to 30080 on the node - then you can use a NodePort service to actually connect the traffic from that port to your app:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: some-service
spec:
ports:
- protocol: TCP
port: 80
targetPort: some-port
nodePort: 30080
selector:
app: pgadmin
type: NodePort
</code></pre>
<p>You can also open ports on the server node like:
<code>k3d cluster create mycluster -p 8080:30080@server[0]</code></p>
<p>Your apps can get scheduled to run on whatever node, and if you force a pod to appear on a specific node (let's say you open a certain port on agent[0] and set up your .yaml files to work with that certain port), for some reason the local-path rancher storage-class just breaks and will not create a persistent volume for your claim. You kinda have to get lucky & have your pod get scheduled where you need it to. (If you find a way to schedule pods on specific nodes without the storage provisioner breaking, let me know.)</p>
<p>You also can map a whole range of ports, like:
<code>k3d cluster create mycluster --servers 1 --agents 1 -p "30000-30100:30000-30100@server[0]"</code>
but be careful with the number of ports you open; if you open too many, k3d will crash.</p>
<p><em>Using a load balancer</em> - it's similar, you just have to open one port & map to to the load balancer.</p>
<p><code>k3d cluster create my-cluster --port 8080:80@loadbalancer</code></p>
<p>You then <em>have</em> to use an ingress, (or the traffic won't reach)</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hello
port:
number: 80
</code></pre>
<p>I also think that ingress will only route http & https traffic, https should be done on the port <code>443</code>, supposedly you can map both port <code>80</code> and port <code>443</code>, but I haven't been able to get that to work (I think that certificates need to be set up as well).</p>
| LemonadeJoe |
<p>I am working on my AKS cluster upgrade which fails because of PodDrainFailure error. I see "Allowed Disruptions" for one of my PDB is zero which can be an issue here.</p>
<p>I checked my deployment settings using "k get deployment <deployment_name>" and I see number of replicas is 1. Changing number of replicas to 2 increases "Allowed Disruptions" to 1 temporarily.</p>
<p>However, the number of replicas under deployment details is reverted back to 1 which in turn changes allowed disruptions back to zero.</p>
<p>I can't figure out why my replica count is being reverted back to 1 even though I am directly editing and saving it in deployment setting. I can see the change is confirmed and new POD is also created, which means my changes are implemented successfully.</p>
| Susheel Bhatt | <p>As mentioned in <a href="https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster?tabs=azure-cli" rel="nofollow noreferrer">the doc</a>:</p>
<blockquote>
<p>Ensure that any PodDisruptionBudgets (PDBs) allow for at least 1 pod
replica to be moved at a time otherwise the drain/evict operation will
fail. If the drain operation fails, the upgrade operation will fail by
design to ensure that the applications are not disrupted. Please
correct what caused the operation to stop (incorrect PDBs, lack of
quota, and so on) and re-try the operation.</p>
</blockquote>
<p>If the replica count is going down automatically, you might have a horizontal pod autoscaler (HPA) scaling the deployment down.</p>
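<p>For example (the namespace name is a placeholder), you can quickly check both resources before retrying the upgrade:</p>
<pre><code># is an HPA managing the deployment's replica count?
kubectl get hpa -n <namespace>
# does the PDB currently allow at least 1 disruption?
kubectl get pdb -n <namespace>
</code></pre>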
<p>Once you have both resources above configured, trigger the upgrade command again.</p>
<p>If you don't want to make any changes to your resources, you can trigger the upgrade command, monitor the nodes being upgraded, and force delete any pod that isn't getting removed with <code>kubectl delete pod -n <ns> <pod-name> --force --grace-period=0</code></p>
| akathimi |
<p>I have a k8s cronjob which exports metrics periodically. There is a k8s secret.yaml from which I execute a script called run.sh.
Within run.sh, I would like to reference another script, but I can't seem to find the right way to access that script or specify it in cronjob.yaml</p>
<p>cronjob.yaml</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: exporter
labels:
app: metrics-exporter
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
metadata:
labels:
app: exporter
spec:
volumes:
- name: db-dir
emptyDir: { }
- name: home-dir
emptyDir: { }
- name: run-sh
secret:
secretName: exporter-run-sh
- name: clusters
emptyDir: { }
containers:
- name: stats-exporter
image: XXXX
imagePullPolicy: IfNotPresent
command:
- /bin/bash
- /home/scripts/run.sh
resources: { }
volumeMounts:
- name: db-dir
mountPath: /.db
- name: home-dir
mountPath: /home
- name: run-sh
mountPath: /home/scripts
- name: clusters
mountPath: /db-clusters
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
securityContext:
capabilities:
drop:
- ALL
privileged: false
runAsUser: 1000
runAsNonRoot: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
terminationGracePeriodSeconds: 30
restartPolicy: OnFailure
</code></pre>
<p>Here's how in secret.yaml, I run script run.sh and refer to another script inside /db-clusters.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: exporter-run-sh
type: Opaque
stringData:
run.sh: |
#!/bin/sh
source $(dirname $0)/db-clusters/cluster1.sh
# further work here
</code></pre>
<p>Here's the <a href="https://i.stack.imgur.com/IqJh5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IqJh5.png" alt="directory structure" /></a></p>
<p>Error Message:</p>
<pre><code>/home/scripts/run.sh: line 57: /home/scripts/db-clusters/cluster1.sh: No such file or directory
</code></pre>
| flowAlong | <p>As per the <a href="https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/" rel="nofollow noreferrer">secret yaml docs</a>, in <code>stringData</code> you need to use “run-sh”, since the secret is consumed through the run-sh volume mount.</p>
<p>Try using <code>run-sh</code> in stringData as below:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: exporter-run-sh
type: Opaque
stringData:
run-sh: |
#!/bin/sh
source $(dirname $0)/db-clusters/cluster1.sh
# further work here
</code></pre>
| Hemanth Kumar |
<p>What is the benefit of using a bastion (EC2) instance in a Kubernetes cluster (EKS)?</p>
| jinglebird | <p>This access option improves the cluster security by preventing all internet access to the control plane. However, disabling access to the public endpoint prevents you from interacting with your cluster remotely, unless you add the IP address of your remote client as an authorized network.</p>
| Siegfred V. |
<p>I have a problem where we essentially discovered a piece of stale configuration in a live environment on one of our deployments (a config map was added as a volume mount). Reading through the docs <a href="https://v3.helm.sh/docs/faq/changes_since_helm2/" rel="nofollow noreferrer">here</a> (search for 'Upgrades where live state has changed') we can see that helm v2 would purge changes that were introduced to a template via external actors, whereas v3 is very clever and will merge externally introduced changes alongside template changes as long as they don't conflict.</p>
<p>So how do we, in helm v3, run an upgrade that purges any manual template changes that may have been introduced?</p>
| Ben Futterleib | <p>Based on the description, the <code>--force</code> flag should do the trick.</p>
<p><code>--force force resource updates through a replacement strategy</code></p>
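<p>For example (release and chart names are placeholders):</p>
<pre><code>helm upgrade my-release ./my-chart --force
</code></pre>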
<p>However, there are some issues with it as mentioned in this <a href="https://github.com/helm/helm/issues/9433" rel="nofollow noreferrer">GitHub issue</a>.</p>
| akathimi |
<p>I want to create two profiles and deploy pods to each minikube profile in parallel, instead of deploying pods to one profile, then switching to another profile and deploying pods again. I am not seeing any way to do this.</p>
| user1870400 | <p>It does not seem feasible to deploy to two profiles simultaneously.
You can track the latest updates on this concern in this <a href="https://github.com/kubernetes/minikube/issues/15548" rel="nofollow noreferrer">GitHub issue</a>.
Refer to this <a href="https://ervikrant06.github.io/kubernetes/Kuberenetes-Minikube-profile/" rel="nofollow noreferrer">doc</a> for how to run multiple minikube clusters on a single machine, the <a href="https://minikube.sigs.k8s.io/docs/commands/profile/" rel="nofollow noreferrer">minikube profile</a> command reference, and this <a href="https://stackoverflow.com/questions/68586788/how-to-create-more-than-one-cluster-in-minikube">SO answer</a> for the commands used to switch between profiles.</p>
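<p>For reference, a short sketch of the profile commands involved (profile names are placeholders):</p>
<pre><code># create two separate clusters, one per profile
minikube start -p profile1
minikube start -p profile2

# list the profiles and switch the active one
minikube profile list
minikube profile profile1
</code></pre>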
| Hemanth Kumar |
<p>I am deploying Elasticsearch 7.10.1 to AWS EKS Fargate but I got below error when running them:</p>
<pre><code>ERROR: [2] bootstrap checks failed
[1]: max number of threads [1024] for user [elasticsearch] is too low, increase to at least [4096]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
</code></pre>
<p>I found solutions for them is <a href="https://stackoverflow.com/questions/40683285/max-file-descriptors-4096-for-elasticsearch-process-is-too-low-increase-to-at">max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]</a> and <a href="https://stackoverflow.com/questions/51445846/elasticsearch-max-virtual-memory-areas-vm-max-map-count-65530-is-too-low-inc">Elasticsearch: Max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]</a>.</p>
<p>But both requires a change on the host machine. I am using EKS Fargate which means I don't have access to the Kubernete cluster host machine. What else should I do to solve this issue?</p>
| Joey Yi Zhao | <p>Your best bet is to set these via privileged init containers within your Elasticsearch pod/deployment/statefulset, for example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: elasticsearch-node
spec:
initContainers:
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
containers:
- name: elasticsearch-node
...
</code></pre>
<p>You could also do this through Daemonsets, although Daemonsets aren't very well suited to one-time tasks (but it's possible to hack around this).
But the init container approach will guarantee that your expected settings are in effect precisely before an Elasticsearch container is launched.</p>
| Dan Simone |
<p>I have installed containerd 1.5.4 by following below steps in CentOS 7.9:</p>
<pre><code>wget -c https://github.com/containerd/containerd/releases/download/v1.5.4/containerd-1.5.4-linux-amd64.tar.gz
tar -zxvf containerd-1.5.4-linux-amd64.tar.gz -C /
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
</code></pre>
<p>I have followed the docs from <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">here</a> and also created the config according to them. But when I try to start containerd:</p>
<pre><code>[root@iZuf62lgwih3vksz3640gnZ sysctl.d]# systemctl start containerd
Failed to start containerd.service: Unit not found.
</code></pre>
<p>What should I do to fix this problem?</p>
| Dolphin | <p>The main issue is that you are only copying binary files, you are not creating any systemd service.</p>
<p>Be careful when using the <code>-C /</code> flag with the <code>tar</code> command. On my CentOS 7 machine, the first two commands:</p>
<pre><code>wget -c https://github.com/containerd/containerd/releases/download/v1.5.4/containerd-1.5.4-linux-amd64.tar.gz
tar -zxvf containerd-1.5.4-linux-amd64.tar.gz -C /
</code></pre>
<p>led to overwriting the <code>/bin</code> directory, which destroyed the OS.</p>
<p>Back to the question, it seems like you are mixing two different sets of instructions for installing the <code>containerd</code> package. The <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">instructions</a> from the official Kubernetes wiki that you mentioned in your question are pretty straightforward and good to follow. Try them:</p>
<p>Step 1. Install the containerd.io package from the official Docker repositories:</p>
<pre><code>yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd.io
</code></pre>
<p>Step 2. Configure containerd:</p>
<pre><code>sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
</code></pre>
<p>Step 3. Restart containerd:</p>
<pre><code>systemctl restart containerd
</code></pre>
| Mikolaj S. |
<p>Can I target a K8s service to a Pod without labels?</p>
<p>That is, I have a K8s Pod created with the following configuration.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nofrills-first-pod
spec:
containers:
- name: nofrills-container
image: nofrills/to-deploy:0.0.1
ports:
- containerPort: 3000
</code></pre>
<p>I would like to expose this pod as a K8s service. Normally, I would do this by creating a Service configuration that looked something like this</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-nofrills-service
spec:
type: NodePort
selector:
## ?????? no labels to target?
ports:
- protocol: TCP
port: 3000
targetPort: 3000
nodePort: 32525
</code></pre>
<p>However, since the pod doesn't have any labels I don't know how to tell the Service which pod to use. I suppose another way of asking this question is "Can a K8s selector target an object without any labels?"</p>
<p>I realize I could (in many scenarios) easily add labels to the Pod -- but I'm specifically interested in the abilities of K8s selectors here.</p>
| Alana Storm | <p>You can define a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">Service without specifying a selector</a> to match Pods. Because this Service has no selector, the corresponding EndpointSlice (and legacy Endpoints) objects are not created automatically.</p>
<p>You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually.</p>
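<p>A rough sketch of how that could look for the Pod in the question (the Pod IP below is a placeholder - take the real one from <code>kubectl get pod nofrills-first-pod -o wide</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: test-nofrills-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 32525
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: test-nofrills-service-1
  labels:
    # this label ties the EndpointSlice to the Service above
    kubernetes.io/service-name: test-nofrills-service
addressType: IPv4
ports:
  - name: ""        # empty, because the Service port is not named
    protocol: TCP
    port: 3000
endpoints:
  - addresses:
      - "10.0.0.10" # placeholder: the Pod's IP
</code></pre>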
| Siegfred V. |
<h1>Problem</h1>
<p>I've got a Kubernetes deployment of 20 replicas that needs to run long celery tasks. We deploy frequently to our cluster (at most ~every 10 mins). I set a long termination period of 3 hours for my pods so the tasks don't get interrupted. The problem is, on every deploy Kubernetes tries to create an additional 20 pods. How do I keep Kubernetes from scheduling additional pods before the existing ones are terminated?</p>
<h1>Release</h1>
<p>My release is a GitHub action that does a helm upgrade and swaps in the new image tag that was just built.</p>
<pre><code>helm upgrade --kube-context SK2 --install
-f ./kube/helm/p1/values_SK2.yaml
-n $REPOSITORY_NAME
--set image.tag=$GIT_COMMIT
$REPOSITORY_NAME
./kube/helm/p1
--atomic --debug --history-max=100 --timeout 6m
</code></pre>
<h1>Attempt 1</h1>
<p>I tried to set a RollingUpdate strategy with the maxUnavilable to 20 and the max surge to 1 so that it would only ever create 1 additional pod on the deployments, but not schedule new pods until all the old pods are terminated.</p>
<pre><code>strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 20
maxSurge: 1
</code></pre>
<h1>Attempt 2</h1>
<p>I've also tried pinning the image to latest instead of updating the image on deploy then changing the deployment's image tag after the helm install.</p>
<pre><code>kubectl --context SK2 -n p1 set image deployment/celery-io-long celery-io-long=stringking/p1:$GIT_COMMIT
</code></pre>
<p>Both of these techniques have the end result of creating a new batch of 20 pods on deployment before the old ones are terminated. What am I doing wrong here?</p>
| MikeSchem | <p>Setting the maxUnavailable value to 20 means that all 20 pods may be unavailable simultaneously during the rolling update, so Kubernetes will immediately spin up new pods from the new deployment.</p>
<p>You should set its value to:</p>
<pre><code>maxUnavailable: 1
maxSurge: 1
</code></pre>
<p>This ensures that only 1 of the 20 existing pods is terminated at a time once the rolling update has started, and that at most 1 additional pod is scheduled before the existing ones are terminated.</p>
| Siegfred V. |
<p>I have an Azure kubernetes cluster created using the following Terraform code</p>
<pre><code># Required Provider
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.0.2"
}
}
required_version = ">= 1.1.0"
}
data "azurerm_client_config" "current" {}
provider "azurerm" {
subscription_id = local.subscription_id
tenant_id = local.tenant_id
client_id = local.client_id
client_secret = local.client_secret
features {}
}
resource "random_pet" "rg-name" {
prefix = var.resource_group_name_prefix
}
resource "azurerm_resource_group" "rg" {
name = random_pet.rg-name.id
location = var.resource_group_location
}
resource "azurerm_virtual_network" "test" {
name = var.virtual_network_name
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = [var.virtual_network_address_prefix]
subnet {
name = var.aks_subnet_name
address_prefix = var.aks_subnet_address_prefix
}
tags = var.tags
}
data "azurerm_subnet" "kubesubnet" {
name = var.aks_subnet_name
virtual_network_name = azurerm_virtual_network.test.name
resource_group_name = azurerm_resource_group.rg.name
depends_on = [azurerm_virtual_network.test]
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = var.aks_name
location = azurerm_resource_group.rg.location
dns_prefix = var.aks_dns_prefix
private_cluster_enabled = var.private_cluster
resource_group_name = azurerm_resource_group.rg.name
http_application_routing_enabled = false
linux_profile {
admin_username = var.vm_user_name
ssh_key {
key_data = file(var.public_ssh_key_path)
}
}
default_node_pool {
name = "agentpool"
node_count = var.aks_agent_count
vm_size = var.aks_agent_vm_size
os_disk_size_gb = var.aks_agent_os_disk_size
vnet_subnet_id = data.azurerm_subnet.kubesubnet.id
}
service_principal {
client_id = local.client_id
client_secret = local.client_secret
}
network_profile {
network_plugin = "azure"
dns_service_ip = var.aks_dns_service_ip
docker_bridge_cidr = var.aks_docker_bridge_cidr
service_cidr = var.aks_service_cidr
load_balancer_sku = "standard"
}
# Enabled the cluster configuration to the Azure kubernets with RBAC
azure_active_directory_role_based_access_control {
managed = var.azure_active_directory_role_based_access_control_managed
admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids
azure_rbac_enabled = var.azure_rbac_enabled
}
timeouts {
create = "20m"
delete = "20m"
}
depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace]
tags = var.tags
}
</code></pre>
<p>It creates the Load Balancer with Public IP as shown below</p>
<p><a href="https://i.stack.imgur.com/jQTPT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jQTPT.png" alt="enter image description here" /></a></p>
<p>However, I don't want a public IP for the load balancer; instead it should have an internal private IP.</p>
<p><a href="https://i.stack.imgur.com/RV9I8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RV9I8.png" alt="enter image description here" /></a></p>
<p>What should I do to have this load balancer use an internal private IP, so that the service is not exposed over the Internet via a public IP?</p>
<p><strong>Note:</strong> As per the <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb#:%7E:text=internal%2Dapp%20%20%20LoadBalancer%20%20%2010.0.248.59%20%20%2010.240.0.7%20%20%20%2080%3A30555/TCP%20%20%202m" rel="nofollow noreferrer">Microsoft documentation</a>, even if you annotate with <strong>annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true"</strong>, external IP is still assigned which I am trying to avoid.</p>
| One Developer | <p>The load balancer that gets created with the AKS cluster (usually called kubernetes) is used for egress (not ingress) traffic and is a public LB, and it cannot be private. This is part of the outbound type configuration.</p>
<p>The "outbound type" of the AKS cluster can be set to "LoadBalancer, UserDefinedRouting or managedNatGateway". if you choose any option other than LB, then you would need to configure your network to route the traffic externally. check <a href="https://learn.microsoft.com/en-us/azure/aks/egress-outboundtype" rel="nofollow noreferrer">this doc</a> for more info.</p>
<p>For ingress traffic, you have the choice to use public or private LB. This is configured in the service resource (of type LoadBalancer) under kubernetes, where you would use the annotation that you mentioned to create a private LB. Public rules will use the same public LB created with the cluster.</p>
<p>You can set the private Ip of the LB using annotations as well:</p>
<pre><code> annotations:
service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
</code></pre>
<p>This is mentioned in the <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb" rel="nofollow noreferrer">same doc</a> that you shared.</p>
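<p>Putting it together, a sketch of an internal LoadBalancer Service using those annotations could look like this (name, selector and IP are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: internal-app
</code></pre>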
| akathimi |
<p>Label values in Kubernetes need to be valid.</p>
<p>See <a href="https://github.com/kubernetes/apimachinery/blob/bfd2aff97e594f6aad77acbe2cbbe190acc93cbc/pkg/util/validation/validation.go#L167" rel="nofollow noreferrer">IsValidLabelValue()</a></p>
<p>For example the input I receive from a rest-API of a provider, which I want to write to a label: <code>Dedicated Server 1U</code>.</p>
<p>Is there a way to generate a valid label via Go from an arbitrary string?</p>
| guettli | <p>you can have a function to do this, for example:</p>
<pre><code>func generateLabel(input string) string {
    input = strings.Replace(input, " ", "-", -1)
    return "api-label=" + input
}
</code></pre>
<ul>
<li>the function replaces the spaces in the received string to "-"</li>
<li>you can change the key to any string you like.</li>
<li>you can also add a regex check to make sure that the generated value complies with the label constraints. (this depends if any special characters are being received from the API)</li>
</ul>
<p>To accept the string even when there are unwanted characters, check the below:</p>
<pre><code>package main

import (
    "fmt"
    "regexp"
    "strings"
)

func generateLabel(input string) string {
    // replace spaces with "-"
    input = strings.Replace(input, " ", "-", -1)
    // drop every character that is not allowed in a label value
    re := regexp.MustCompile("[^a-zA-Z0-9-]")
    input = re.ReplaceAllString(input, "")
    // a label value must start and end with an alphanumeric character,
    // so trim any leading/trailing non-alphanumerics left over from the
    // previous steps (note: values must also be at most 63 characters long)
    re = regexp.MustCompile("^[^a-zA-Z0-9]+|[^a-zA-Z0-9]+$")
    input = re.ReplaceAllString(input, "")
    return "api-label=" + input
}

func main() {
    label := generateLabel("Dedicated Server 1U")
    fmt.Println(label) // Output: "api-label=Dedicated-Server-1U"
    label1 := generateLabel("Dedicated&test")
    fmt.Println(label1) // Output: "api-label=Dedicatedtest"
    label2 := generateLabel("Dedicated,test##&(*!great")
    fmt.Println(label2) // Output: "api-label=Dedicatedtestgreat"
}
</code></pre>
| akathimi |
<p>So I am using the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">nginx ingress</a> and installed it through the helm chart:</p>
<pre><code>helm install --set controller.kind=DaemonSet --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-size-slug"="lb-large" --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-hostname"="some\.url\.com" ingress-nginx ingress-nginx/ingress-nginx
</code></pre>
<p>This automatically created a loadbalancer on digital ocean as well.</p>
<p>As far as I understand so far, referencing <a href="https://kubernetes.github.io/ingress-nginx/user-guide/custom-errors/" rel="nofollow noreferrer">this</a> I have to:</p>
<ol>
<li>Create a docker image which will act as default backend, like <a href="https://github.com/kubernetes/ingress-nginx/tree/main/images/custom-error-pages" rel="nofollow noreferrer">this</a>.</li>
<li>Have to set the following things in the helm chart:
<ul>
<li>defaultBackend.enable => true</li>
<li>defaultBackend.image => The image created in step 1.</li>
<li>controller.config.custom-http-errors => [404, 503, ..] (all the errors i want to be handled by the default backend (<a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md" rel="nofollow noreferrer">config values reference</a>).</li>
</ul>
</li>
<li>Upgrade the helm chart.</li>
</ol>
<p>Would this be the right and also easiest approach or is there a simpler way?</p>
<p>Also would upgrading the helm chart, remove the old loadbalancer and create a new one?</p>
| natschz | <blockquote>
<p>Would this be the right and also easiest approach or is there a simpler way?</p>
</blockquote>
<p>The steps presented in the question, based on the <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-errors/" rel="nofollow noreferrer">official NGINX Ingress Controller wiki</a>, are correct and seem to be the easiest approach.</p>
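<p>As a sketch, the values for step 2 could look roughly like this (double-check the exact key names against your chart version's values.yaml; the default-backend image name and tag are placeholders for the image built in step 1):</p>
<pre><code>controller:
  config:
    custom-http-errors: "404,503"
defaultBackend:
  enabled: true
  image:
    repository: myregistry/custom-error-pages
    tag: "0.1"
</code></pre>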
<blockquote>
<p>Also would upgrading the helm chart, remove the old loadbalancer and create a new one?</p>
</blockquote>
<p>After running <code>helm upgrade</code> the LoadBalancer will stay the same - the IP address won't change.
By running the <code>helm list</code> command you can see that the upgrade took place, by checking the <code>REVISION</code> and <code>UPDATED</code> fields.</p>
<pre><code>NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ingress-nginx default 4 2021-09-06 10:38:14.447456942 +0000 UTC deployed ingress-nginx-3.35.0 0.48.1
</code></pre>
| Mikolaj S. |
<p>I am using Istio VirtualService and am wondering if it's possible to use the header rules to deny requests on specific routes if a header is missing.</p>
<p>per the docs it looks like we can only add or remove headers at virtualservice level? Is that correct?</p>
<p>Or in what way can I blanket deny requests to specific URI paths in the virtualservice without a specific header?</p>
| Papi Abi | <p>VirtualService focuses only on <a href="https://istio.io/v1.1/docs/tasks/policy-enforcement/control-headers/" rel="nofollow noreferrer">rewriting and routing</a> (including headers); it doesn't support denying traffic when a header is missing. However, an <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">EnvoyFilter</a> can deny requests based on a header condition.</p>
<p>Also sharing this <a href="https://discuss.istio.io/t/add-headers-using-virtual-service/2786" rel="nofollow noreferrer">link</a>, where one of the comments uses an EnvoyFilter to manipulate header rules. Please check the sample yaml below to see if it fits your case.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: any-name
spec:
workloadSelector:
labels:
app: your-app
filters:
- listenerMatch:
listenerType: SIDECAR_INBOUND
listenerProtocol: HTTP
filterName: envoy.lua
filterType: HTTP
filterConfig:
inlineCode: |
function envoy_on_request(request_handle)
local header_value = request_handle:headers():get("Your-Header-Name")
if header_value == nil then
request_handle:respond({[":status"] = "403"}, "Header is missing")
end
end
</code></pre>
| Yvan G. |
<p>I am running airflow using postgres.</p>
<p>There was a phenomenon that the web server was slow during operation.</p>
<p>It was a problem caused by data continuing to accumulate in the dag_run and log tables of the db (it became faster after accessing postgres and deleting the data directly).</p>
<p>Are there any airflow options to clean the db periodically?</p>
<p>If there is no such option, we will try to delete the data directly using the dag script.</p>
<p>And I think it's strange that the web server slows down because there is a lot of data. Does the web server get all the data when opening another window?</p>
| user14989010 | <p>You can purge old records by running:</p>
<pre class="lang-bash prettyprint-override"><code>airflow db clean [-h] --clean-before-timestamp CLEAN_BEFORE_TIMESTAMP [--dry-run] [--skip-archive] [-t TABLES] [-v] [-y]
</code></pre>
<p>(<a href="https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#clean" rel="nofollow noreferrer">cli reference</a>)</p>
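<p>For example (the timestamp is a placeholder), targeting the noisy tables from the question:</p>
<pre class="lang-bash prettyprint-override"><code># preview what would be deleted
airflow db clean --clean-before-timestamp "2023-01-01 00:00:00" --dry-run
# actually purge, limited to the dag_run and log tables
airflow db clean --clean-before-timestamp "2023-01-01 00:00:00" -t "dag_run,log" --skip-archive -y
</code></pre>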
<p>It is quite a common setup to include this command in a DAG that runs periodically.</p>
| TJaniF |
<p>We have observed that GKE does not delete the disks even after deleting the cluster, which results in lots of disks for clusters that don't even exist.</p>
<p>In the link below, it is mentioned that GKE will not delete persistent disks, but what is the rationale behind not deleting the disks?</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deleting-a-cluster#overview" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/deleting-a-cluster#overview</a></p>
<p>If we are deleting the cluster it means we don't need the resources associated with it so ideally it should delete disks too which would help in saving the storage cost too.</p>
| saurabh umathe | <p>Persistent disks are located independently from your virtual machine (VM) instances, so you can detach or move persistent disks to keep your data even after you delete your instances. This is the reason persistent disks are not deleted when you delete the cluster: it prevents your data from getting lost during a sudden deletion of a cluster or under other unwanted circumstances.</p>
<p>If you want the disks to be deleted permanently as well, this can be handled by first deleting all the namespaces. When you delete a claim, the corresponding PersistentVolume object and the provisioned Compute Engine persistent disk are also deleted.</p>
<p>Refer to <a href="https://stackoverflow.com/questions/73297596/how-to-delete-data-from-a-persistent-volume-in-gke">SO1</a> and <a href="https://stackoverflow.com/questions/59655519/deleting-all-the-associate-persistent-disks-when-deleting-cluster">SO2</a> for how to delete persistent volume disks.</p>
| Hemanth Kumar |
<p>I have created a Kind cluster with containerd runtime.
Here is my node:</p>
<pre><code>root@dev-001:~# k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
local-cluster-control-plane Ready control-plane,master 7d8h v1.20.2 172.18.0.2 <none> Ubuntu 20.10 5.4.0-81-generic containerd://1.4.0-106-gce4439a8
local-cluster-worker Ready <none> 7d8h v1.20.2 172.18.0.5 <none> Ubuntu 20.10 5.4.0-81-generic containerd://1.4.0-106-gce4439a8
local-cluster-worker2 Ready <none> 7d8h v1.20.2 172.18.0.3 <none> Ubuntu 20.10 5.4.0-81-generic containerd://1.4.0-106-gce4439a8
local-cluster-worker3 Ready <none> 7d8h v1.20.2 172.18.0.4 <none> Ubuntu 20.10 5.4.0-81-generic containerd://1.4.0-106-gce4439a8
</code></pre>
<p>How I can ssh into nodes?</p>
<p>Kind version: 0.11.1 or greater</p>
<p>Runtime: containerd ( not docker )</p>
| deepak | <p>Kind Kuberenetes uses Docker to <strong><a href="https://kind.sigs.k8s.io/" rel="noreferrer">create container(s) which will be Kubernetes node(s)</a></strong>:</p>
<blockquote>
<p><a href="https://sigs.k8s.io/kind" rel="noreferrer">kind</a> is a tool for running local Kubernetes clusters using Docker container “nodes”.</p>
</blockquote>
<p>So basically the layers are: your host -> containers hosted on your host's Docker which are acting as <strong>Kubernetes nodes</strong> -> on the nodes there are container runtimes used for running pods.</p>
<p>In order to SSH into nodes you need to exec into docker containers. Let's do it.</p>
<p>First, we will get list of nodes by running <code>kubectl get nodes -o wide</code>:</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane,master 5m5s v1.21.1 172.18.0.2 <none> Ubuntu 21.04 5.11.0-1017-gcp containerd://1.5.2
kind-worker Ready <none> 4m38s v1.21.1 172.18.0.4 <none> Ubuntu 21.04 5.11.0-1017-gcp containerd://1.5.2
kind-worker2 Ready <none> 4m35s v1.21.1 172.18.0.3 <none> Ubuntu 21.04 5.11.0-1017-gcp containerd://1.5.2
</code></pre>
<p>Let's suppose we want to SSH into <code>kind-worker</code> node.</p>
<p>Now, we will get list of docker containers (<code>docker ps -a</code>) and check if all nodes are here:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ee204ad5fd1 kindest/node:v1.21.1 "/usr/local/bin/entr…" 10 minutes ago Up 8 minutes kind-worker
434f54087e7c kindest/node:v1.21.1 "/usr/local/bin/entr…" 10 minutes ago Up 8 minutes 127.0.0.1:35085->6443/tcp kind-control-plane
2cb2e9465d18 kindest/node:v1.21.1 "/usr/local/bin/entr…" 10 minutes ago Up 8 minutes kind-worker2
</code></pre>
<p>Take a look at the <code>NAMES</code> column - here are nodes names used in Kubernetes.</p>
<p>Now we will use standard <a href="https://docs.docker.com/engine/reference/commandline/exec/" rel="noreferrer"><code>docker exec</code> command</a> to connect to the running container and connect to it's shell - <code>docker exec -it kind-worker sh</code>, then we will run <code>ip a</code> on the container to check if IP address matches the address from the <code>kubectl get nodes</code> command:</p>
<pre><code># ls
bin boot dev etc home kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
...
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
inet 172.18.0.4/16 brd 172.18.255.255 scope global eth0
...
#
</code></pre>
<p>As can see, we successfully connected to the node used by Kind Kubernetes - the IP address <code>172.18.0.4</code> matches the IP address from the <code>kubectl get nodes</code> command.</p>
| Mikolaj S. |
<p>I have an EKS cluster where code runs as a service 24/7 on a node. Now, if I create a Fargate profile on the same cluster, can the pods deployed by the Fargate profile communicate with the service running on the EKS node in the same cluster? The pods make calls to that service for data.</p>
<p>Thanks</p>
| Landerson | <p>As long as you have the same cluster and all nodes/pods/services are in the same VPC, any pod deployed on Fargate will be able to communicate with any other pod/node that is deployed on a node group.</p>
<p>You can use the FQDN of a service to communicate with a pod deployed in another namespace.</p>
<p>The FQDN (fully qualified domain name) of any service is</p>
<p><service-name>.<namespace-name>.svc.cluster.local</p>
<p>You can read more about DNS for pods and services here: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
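<p>For example (service and namespace names are placeholders), a pod running on Fargate could call a service backed by pods on the node group like this:</p>
<pre><code>curl http://data-service.backend.svc.cluster.local:8080/healthz
</code></pre>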
| Braj Kumar |
<p>So we have been trying to get a few applications running on a GKE cluster with autopilot.</p>
<p>However, when we went to test, we got 5XX errors when we were expecting 4XX. This is even stranger because when we receive a 2XX response the message is received as expected.</p>
<p>Upon reviewing the application logs we see that the intended output is the 4XX; however, when the response is sent to the client, it is sent as a 5XX. What could be changing the response? Where does this response come from?</p>
<pre><code><html lang="en">
<head>
<title>Unable to forward your request to a backend - Web Forwarder - Cloud Shell</title>
</head>
<body style="font-family: monospace;">
<h1 style="font-size: 1.5em;">Unable to forward your request to a backend</h1>
<p>Couldn&#39;t connect to a server on port 8080</p>
</body>
</code></pre>
<p>The load balancers follow this template</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-gateway
namespace: namespace
annotations:
networking.gke.io/load-balancer-type: "Internal"
cloud.google.com/neg: '{"ingress": true}'
spec:
type: LoadBalancer
externalTrafficPolicy: Cluster
selector:
app: app-gateway
ports:
- port: 80
targetPort: 80
name: http
</code></pre>
<p>And the ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-gateway
namespace: namespace
annotations:
kubernetes.io/ingress.class: "gce-internal"
spec:
rules:
- host: app.internal
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: app-gateway
port:
number: 80
</code></pre>
<p>Upon request this is the architecture for the system. Names were omitted<a href="https://i.stack.imgur.com/HtNCi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HtNCi.png" alt="enter image description here" /></a></p>
<p>This is a very straightforward approach, just a few workloads behind a internal load balancer, connected to an on-prem Mongo and Rabbit.</p>
<p><strong>Edit</strong> - Some more details</p>
<p>The way I'm doing this is by setting up a port forward from my GCP console to the pod.</p>
<p>When I go to /swagger/index.html and try to test the API it returns 503 errors when a 4XX is expected. However, a 2XX is sent successfully.
<a href="https://i.stack.imgur.com/uYTMJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uYTMJ.png" alt="Response using swagger" /></a>
When I port forward in my own console (using the same command as in GCP console) and do a <code>curl -X -I GET localhost:8080/swagger/index.html</code> I get the correct response.
<a href="https://i.stack.imgur.com/cTzJo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cTzJo.png" alt="enter image description here" /></a></p>
<p>Meaning it's likely something regarding the cloud shell itself.</p>
| Eddoasso | <p>We need to know how your traffic is routed from both sources when testing with curl. I suggest using a Connectivity Test. This way we can identify whether both sources take the same route to reach the destination.</p>
<p>To learn more about this suggestion, you can check this <a href="https://cloud.google.com/network-intelligence-center/docs/connectivity-tests/concepts/overview#analyze-configurations" rel="nofollow noreferrer">link</a>. It shows how traffic might take a different path, which can reveal other useful information about why these error messages occur and give a different angle on how to resolve the issue.</p>
| Yvan G. |
<p>I have set up Minio and Velero backup for my k8s cluster. Everything works fine as I can take backups and I can see them in Minio. I have a PGO operator cluster hippo running with load balancer service. When I restore a backup via Velero, everything seems okay. It creates namespaces and all the deployments and pods in running state.
However I am not able to connect to my database via PGadmin. When I delete the pod it is not recreating it but shows an error of unbound PVC.
This is the output.</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 16m default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
Warning FailedScheduling 16m default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get PV
error: the server doesn't have a resource type "PV"
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101 5Gi RWO Delete Bound postgres-operator/hippo-s3-instanc e2-4bhf-pgdata openebs-hostpath 16m
pvc-2dd12937-a70e-40b4-b1ad-be1c9f7b39ec 5G RWO Delete Bound default/local-hostpath-pvc openebs-hostpath 6d9h
pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b 5Gi RWO Delete Bound postgres-operator/hippo-s3-instanc e2-xvhq-pgdata openebs-hostpath 16m
pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038 5Gi RWO Delete Bound postgres-operator/hippo-instance2- p4ct-pgdata openebs-hostpath 7m32s
pvc-968d9794-e4ba-479c-9138-8fbd85422920 5Gi RWO Delete Bound postgres-operator/hippo-instance2- s6fs-pgdata openebs-hostpath 7m33s
pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad 5Gi RWO Delete Bound postgres-operator/hippo-s3-instanc e2-c4rt-pgdata openebs-hostpath 16m
pvc-d4629dba-b172-47ea-ab01-12a9039be571 5Gi RWO Delete Bound postgres-operator/hippo-instance2- 29gh-pgdata openebs-hostpath 7m32s
pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38 5Gi RWO Delete Bound postgres-operator/hippo-repo2 openebs-hostpath 7m30s
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
hippo-instance2-29gh-pgdata Bound pvc-d4629dba-b172-47ea-ab01-12a9039be571 5Gi RWO openebs-hostpath 7m51s
hippo-instance2-p4ct-pgdata Bound pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038 5Gi RWO openebs-hostpath 7m51s
hippo-instance2-s6fs-pgdata Bound pvc-968d9794-e4ba-479c-9138-8fbd85422920 5Gi RWO openebs-hostpath 7m51s
hippo-repo2 Bound pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38 5Gi RWO openebs-hostpath 7m51s
hippo-s3-instance2-4bhf-pgdata Bound pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101 5Gi RWO openebs-hostpath 16m
hippo-s3-instance2-c4rt-pgdata Bound pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad 5Gi RWO openebs-hostpath 16m
hippo-s3-instance2-xvhq-pgdata Bound pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b 5Gi RWO openebs-hostpath 16m
hippo-s3-repo1 Pending pgo 16m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator NAME READY STATUS RESTARTS AGE
hippo-backup-txk9-rrk4m 0/1 Completed 0 7m43s
hippo-instance2-29gh-0 4/4 Running 0 8m5s
hippo-instance2-p4ct-0 4/4 Running 0 8m5s
hippo-instance2-s6fs-0 4/4 Running 0 8m5s
hippo-repo-host-0 2/2 Running 0 8m5s
hippo-s3-instance2-c4rt-0 3/4 Running 0 16m
hippo-s3-repo-host-0 0/2 Pending 0 16m
pgo-7c867985c-kph6l 1/1 Running 0 16m
pgo-upgrade-69b5dfdc45-6qrs8 1/1 Running 0 16m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl delete pods hippo-s3-repo-host-0 -n postgres-operator
pod "hippo-s3-repo-host-0" deleted
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator NAME READY STATUS RESTARTS AGE
hippo-backup-txk9-rrk4m 0/1 Completed 0 7m57s
hippo-instance2-29gh-0 4/4 Running 0 8m19s
hippo-instance2-p4ct-0 4/4 Running 0 8m19s
hippo-instance2-s6fs-0 4/4 Running 0 8m19s
hippo-repo-host-0 2/2 Running 0 8m19s
hippo-s3-instance2-c4rt-0 3/4 Running 0 17m
hippo-s3-repo-host-0 0/2 Pending 0 2s
pgo-7c867985c-kph6l 1/1 Running 0 17m
pgo-upgrade-69b5dfdc45-6qrs8 1/1 Running 0 17m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
hippo-instance2-29gh-pgdata Bound pvc-d4629dba-b172-47ea-ab01-12a9039be571 5Gi RWO openebs-hostpath 8m45s
hippo-instance2-p4ct-pgdata Bound pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038 5Gi RWO openebs-hostpath 8m45s
hippo-instance2-s6fs-pgdata Bound pvc-968d9794-e4ba-479c-9138-8fbd85422920 5Gi RWO openebs-hostpath 8m45s
hippo-repo2 Bound pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38 5Gi RWO openebs-hostpath 8m45s
hippo-s3-instance2-4bhf-pgdata Bound pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101 5Gi RWO openebs-hostpath 17m
hippo-s3-instance2-c4rt-pgdata Bound pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad 5Gi RWO openebs-hostpath 17m
hippo-s3-instance2-xvhq-pgdata Bound pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b 5Gi RWO openebs-hostpath 17m
hippo-s3-repo1 Pending pgo 17m
</code></pre>
<p><strong>What Do I Want?</strong></p>
<p>I want Velero to restore the full backup, and I should be able to access my databases the same way I could before the restore. It seems like Velero is not able to perform full backups.
Any suggestions would be appreciated.</p>
| tauqeerahmad24 | <p>Velero is a backup and restore solution for Kubernetes clusters and their associated persistent volumes. Velero does not currently support full backup and restore of databases (refer to these <a href="https://velero.io/docs/main/file-system-backup/#limitations" rel="nofollow noreferrer">limitations</a>), but it does support snapshotting and restoring persistent volumes. This means that, while you may not be able to directly restore a full database, you can restore the persistent volumes associated with the database and then use the appropriate tools to restore the data from the snapshots. Additionally, Velero's plugin architecture allows you to extend its capabilities with custom plugins that add custom backup and restore functionality.</p>
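<p>As a rough sketch (names are placeholders, and the file-system-backup flag assumes a recent Velero version), backing up the Postgres namespace including its pod volumes and restoring it could look like this:</p>
<pre><code># back up the namespace, including pod volumes via file-system backup
velero backup create hippo-backup --include-namespaces postgres-operator --default-volumes-to-fs-backup
# restore from that backup
velero restore create --from-backup hippo-backup
</code></pre>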
<p>Refer to this DigitalOcean blog by <a href="https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-velero" rel="nofollow noreferrer">Hanif Jetha and Jamon Camisso</a> for more information on backup and restore.</p>
| Hemanth Kumar |
<p>I have a docker container python app deployed on a kubernetes cluster on Azure (I also tried on a container app). I'm trying to connect this app to Azure key vault to fetch some secrets. I created a managed identity and assigned it to both but the python app always fails to find the managed identity to even attempt connecting to the key vault.</p>
<p>The Managed Identity role assignments:</p>
<p>Key Vault Contributor -> on the key vault</p>
<p>Managed Identity Operator -> Managed Identity</p>
<p>Azure Kubernetes Service Contributor Role,
Azure Kubernetes Service Cluster User Role,
Managed Identity Operator -> on the resource group that includes the cluster</p>
<p>Also on the key vault Access policies I added the Managed Identity and gave it access to all key, secrets, and certs permissions (for now)</p>
<p>Python code:</p>
<pre><code> credential = ManagedIdentityCredential()
vault_client = SecretClient(vault_url=key_vault_uri, credential=credential)
retrieved_secret = vault_client.get_secret(secret_name)
</code></pre>
<p>I keep getting the error:</p>
<pre><code>azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: no azure identity found for request clientID
</code></pre>
<p>So at some point I attempted to add the managed identity clientID in the cluster secrets and load it from there and still got the same error:</p>
<p>Python code:</p>
<pre><code> def get_kube_secret(self, secret_name):
kube_config.load_incluster_config()
v1_secrets = kube_client.CoreV1Api()
string_secret = str(v1_secrets.read_namespaced_secret(secret_name, "redacted_namespace_name").data).replace("'", "\"")
json_secret = json.loads(string_secret)
return json_secret
def decode_base64_string(self, encoded_string):
decoded_secret = base64.b64decode(encoded_string.strip())
decoded_secret = decoded_secret.decode('UTF-8')
return decoded_secret
managed_identity_client_id_secret = self.get_kube_secret('managed-identity-credential')['clientId']
managed_identity_client_id = self.decode_base64_string(managed_identity_client_id_secret)
</code></pre>
<p><strong>Update:</strong></p>
<p>I also attempted to use the secret store CSI driver, but I have a feeling I'm missing a step there. Should the python code be updated to be able to use the secret store CSI driver?</p>
<pre><code># This is a SecretProviderClass using user-assigned identity to access the key vault
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: azure-kvname-user-msi
spec:
provider: azure
parameters:
usePodIdentity: "false"
useVMManagedIdentity: "true" # Set to true for using managed identity
userAssignedIdentityID: "$CLIENT_ID" # Set the clientID of the user-assigned managed identity to use
vmmanagedidentityclientid: "$CLIENT_ID"
keyvaultName: "$KEYVAULT_NAME" # Set to the name of your key vault
cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
objects: ""
tenantId: "$AZURE_TENANT_ID"
</code></pre>
<p>Deployment Yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
namespace: redacted_namespace
labels:
app: backend
spec:
replicas: 1
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: backend
image: redacted_image
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
imagePullPolicy: Always
resources:
# You must specify requests for CPU to autoscale
# based on CPU utilization
requests:
cpu: "250m"
env:
- name: test-secrets
valueFrom:
secretKeyRef:
name: test-secrets
key: test-secrets
volumeMounts:
- name: test-secrets
mountPath: "/mnt/secrets-store"
readOnly: true
volumes:
- name: test-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "azure-kvname-user-msi"
dnsPolicy: ClusterFirst
</code></pre>
<p><strong>Update 16/01/2023</strong></p>
<p>I followed the steps in the answers and the linked docs to the letter, even contacted Azure support and followed it step by step with them on the phone and the result is still the following error:</p>
<p><code>"failed to process mount request" err="failed to get objectType:secret, objectName:MongoUsername, objectVersion:: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://<RedactedVaultName>.vault.azure.net/secrets/<RedactedSecretName>/?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&client_id=<RedactedClientId>&resource=https%3A%2F%2Fvault.azure.net"</code></p>
| Rimon | <p>Using the <a href="https://secrets-store-csi-driver.sigs.k8s.io/topics/set-as-env-var.html" rel="nofollow noreferrer">Secrets Store CSI Driver</a>, you can configure the <code>SecretProviderClass</code> to use a <a href="https://learn.microsoft.com/azure/aks/workload-identity-overview" rel="nofollow noreferrer">workload identity</a> by setting the <code>clientID</code> in the <code>SecretProviderClass</code>. You'll need to use the client ID of your <a href="https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview?source=recommendations#managed-identity-types" rel="nofollow noreferrer">user assigned managed identity</a> and change the <code>usePodIdentity</code> and <code>useVMManagedIdentity</code> setting to <code>false</code>.</p>
<p>With this approach, you don't need to add any additional code in your app to retrieve the secrets. Instead, you can mount a secrets store (using CSI driver) as a volume mount in your pod and have secrets loaded as environment variables which is documented <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/sync-with-k8s-secrets/" rel="nofollow noreferrer">here</a>.</p>
<p>This <a href="https://learn.microsoft.com/azure/aks/csi-secrets-store-identity-access#configure-workload-identity" rel="nofollow noreferrer">doc</a> will walk you through setting it up on Azure, but at a high-level here is what you need to do:</p>
<ol>
<li>Register the <code>EnableWorkloadIdentityPreview</code> feature using Azure CLI</li>
<li>Create an AKS cluster using Azure CLI with the <code>azure-keyvault-secrets-provider</code> add-on enabled and <code>--enable-oidc-issuer</code> and <code>--enable-workload-identity</code> flags set</li>
<li>Create an Azure Key Vault and set your secrets</li>
<li>Create an Azure User Assigned Managed Identity and set an access policy on the key vault for the managed identity's client ID</li>
<li>Connect to the AKS cluster and create a Kubernetes <code>ServiceAccount</code> with annotations and labels that enable this for Azure workload identity</li>
<li>Create an Azure identity federated credential for the managed identity using the AKS cluster's OIDC issuer URL and Kubernetes ServiceAccount as the subject</li>
<li>Create a Kubernetes <code>SecretProviderClass</code> using <code>clientID</code> to use workload identity and add a <code>secretObjects</code> block to enable syncing objects as environment variables using the Kubernetes secret store (see the sketch after this list).</li>
<li>Create a Kubernetes <code>Deployment</code> with a <code>label</code> to use workload identity, the <code>serviceAccountName</code> set to the service account you created above, volume using CSI and the secret provider class you created above, volumeMount, and finally environment variables in your container using <code>valueFrom</code> and <code>secretKeyRef</code> syntax to mount from your secret object store.</li>
</ol>
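<p>For step 7, here is a minimal sketch of such a <code>SecretProviderClass</code> (the key vault name, tenant ID, managed identity client ID and secret name are placeholders you need to replace; the fields follow the linked Azure docs):</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-workload-identity
spec:
  provider: azure
  parameters:
    clientID: "<USER_ASSIGNED_IDENTITY_CLIENT_ID>"  # workload identity instead of usePodIdentity/useVMManagedIdentity
    keyvaultName: "<KEYVAULT_NAME>"
    tenantId: "<AZURE_TENANT_ID>"
    objects: |
      array:
        - |
          objectName: MongoUsername             # name of the secret in the key vault
          objectType: secret
  secretObjects:                                 # sync into a Kubernetes Secret for use with secretKeyRef
  - secretName: test-secrets
    type: Opaque
    data:
    - objectName: MongoUsername
      key: MongoUsername
</code></pre>
<p>Your deployment then mounts this provider class via the CSI volume exactly as in your YAML, and the container can read the value through the synced <code>test-secrets</code> secret with <code>valueFrom</code>/<code>secretKeyRef</code>.</p>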
<p>Hope that helps.</p>
| pauldotyu |
<p>Now here are the details:</p>
<ol>
<li>I'm running minikube on WSL2 Ubuntu app (5.10.16.3-microsoft-standard-WSL2):</li>
</ol>
<pre><code>$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
</code></pre>
<ol start="2">
<li>My minikube VM's driver: <code>--driver=docker</code>.</li>
</ol>
<p>Whenever I run the <code>minikube service</code> command, my web app is automatically opened on my browser at the localhost URL with a random port number different from the port number specified on my service configuration file.</p>
<p>After starting minikube and successfully creating deployment and service with configuration files, when I run:</p>
<pre><code>$ minikube service demo-app-service -n demo-app
</code></pre>
<p>I get the following output:</p>
<pre><code>|-----------------|------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------------|------------------|-------------|---------------------------|
| demo-app | demo-app-service | 80 | http://192.168.49.2:30021 |
|-----------------|------------------|-------------|---------------------------|
🏃 Starting tunnel for service demo-app-service.
|-----------------|------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------------|------------------|-------------|------------------------|
| demo-app | demo-app-service | | http://127.0.0.1:38243 |
|-----------------|------------------|-------------|------------------------|
🎉 Opening service demo-app/demo-app-service in default browser...
❗ Because you are using a Docker driver on linux, the terminal needs to be open to run it.
</code></pre>
<p>First, my local browser opens up automatically and I am able to access my demo app on the localhost URL (<a href="http://127.0.0.1:38243" rel="nofollow noreferrer">http://127.0.0.1:38243</a>), but the port number seems to be randomly assigned as it keeps changing every time I rerun the <code>minikube service</code> command to deploy the same app.</p>
<p>Secondly, my main concern is that the demo app is never reachable at <code>minikube_IP:nodePort</code> (<a href="http://192.168.49.2:30021" rel="nofollow noreferrer">http://192.168.49.2:30021</a>) on my local browser. This is the same <code>nodePort</code> I defined on my service configuration file, and the <code>minikube_IP</code> is the same IP minikube returns when I run</p>
<pre><code>$ minikube ip
</code></pre>
<p>However, when I execute</p>
<pre><code>$ minikube service demo-app-service -n demo-app --url
</code></pre>
<p>the output only provides the localhost URL with a new random port number (<a href="http://127.0.0.1" rel="nofollow noreferrer">http://127.0.0.1</a>:<random_portNumber>).</p>
<p>According to minikube's <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-service-with-tunnel" rel="nofollow noreferrer">official documentation</a>, it was stated that,</p>
<p>"The network is limited if using the Docker driver on Darwin, Windows, or WSL, and the Node IP is not reachable directly."</p>
<p>However, I was hoping this issue may have been resolved by now, or that maybe there is a workaround for it. I have also tried installing VirtualBox as the <code>--driver</code>, but it just doesn't work.</p>
<p>Any help will be greatly appreciated. Thank you.</p>
| Cloudlord | <p>I kept running into errors trying to install <code>google-chrome</code> on WSL2. Eventually, I had to resort to using the port-forwarding cmd:</p>
<pre><code>kubectl port-forward service/demo-app-service 30021:80 -n demo-app
</code></pre>
<p>With this, I can access my app at my pre-defined node port number (http://localhost:30021) which works well for me.</p>
| Cloudlord |
<p>I've just deployed websocket based <code>echo-server</code> on <a href="https://aws.amazon.com/eks/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc&eks-blogs.sort-by=item.additionalFields.createdDate&eks-blogs.sort-order=desc" rel="nofollow noreferrer">AWS EKS</a>. I see it's running stable and okay but when I was searching for implementation details I was finding only articles that were saying something about <code>nginx ingress controller</code> or <code>AWS application loadbalancer</code> and a lot of troubles with them.</p>
<p>Do I miss anything in my current, vanilla config? Do I need the AWS ALB or nginx ingress controller?</p>
<p>Thank you for all the replies.
All the best.</p>
| user7374044 | <blockquote>
<p>Do I miss anything in my current, vanilla config?</p>
</blockquote>
<p>You probably exposed your <code>echo-server</code> app using <a href="https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types/67931156#67931156">service type - <code>ClusterIP</code> or <code>NodePort</code></a> which is fine if you only need to access your app locally in the cluster (<code>ClusterIP</code>) or using your <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">node</a> IP address (<code>NodePort</code>).</p>
<blockquote>
<p>Do I need the AWS ALB or nginx ingress controller?</p>
</blockquote>
<p>They are both different things, but they have a similar goal - to make your websocket app available externally and to <a href="https://stackoverflow.com/questions/45079988/ingress-vs-load-balancer/55161523#55161523">distribute traffic based on defined L7 routing rules</a>. It's a good solution if you have multiple deployments. So you need to decide whether you need some kind of Ingress Controller. If you are planning to deploy your application into production you should consider using one of those solutions, but it may well be fine with service type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>.</p>
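<p>As a rough sketch (assuming your echo-server pods are labelled <code>app: echo-server</code> and listen on port 8080 - adjust both to your deployment), exposing the app externally with a LoadBalancer service looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  type: LoadBalancer      # EKS provisions an AWS load balancer for this service
  selector:
    app: echo-server      # must match the labels on your echo-server pods
  ports:
    - port: 80            # port exposed by the load balancer
      targetPort: 8080    # port the websocket app listens on inside the pod
</code></pre>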
<p><strong>EDIT:</strong></p>
<p>If you are already using service type LoadBalancer your app is already available externally. Ingress controller provides additional configuration possibilities to configure L7 traffic route to your cluster (Ingress Controllers are often using LoadBalancer under the hood). Check this <a href="https://stackoverflow.com/questions/45079988/ingress-vs-load-balancer/45084511#45084511">answer</a> for more details about differences between LoadBalancer and Ingress.</p>
<p>Also check:</p>
<ul>
<li><a href="https://www.nginx.com/blog/aws-alb-vs-nginx-plus/" rel="nofollow noreferrer">Choosing the Right Load Balancer on Amazon: AWS Application Load Balancer vs. NGINX Plus</a></li>
<li><a href="https://blog.getambassador.io/configuring-kubernetes-ingress-on-aws-dont-make-these-mistakes-1a602e430e0a" rel="nofollow noreferrer">Configuring Kubernetes Ingress on AWS? Don’t Make These Mistakes</a></li>
<li><a href="https://websockets.readthedocs.io/en/latest/howto/kubernetes.html" rel="nofollow noreferrer">WebSocket - Deploy to Kubernetes</a></li>
<li><a href="https://stackoverflow.com/questions/45079988/ingress-vs-load-balancer/45084511">LoadBalancer vs Ingress</a></li>
</ul>
| Mikolaj S. |
<p>I have a problem with a Kubernetes service. My service only sends requests to one pod, ignoring the other pods. I don't know why, and I don't know how to debug it. It should distribute requests in a round-robin way. It seems to me that something's wrong with the service, but I don't know how to debug it. The outputs of kubectl describe for the service and pod, along with the endpoints, are below.</p>
<pre><code>
apiVersion: v1
kind: Service
metadata:
name: web-svc
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 30002
selector:
app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
labels:
app: web
spec:
selector:
matchLabels:
app: web
replicas: 3
template:
metadata:
labels:
app: web
spec:
containers:
- name: web-app
image: webimage
ports:
- containerPort: 80
imagePullPolicy: Never
resources:
limits:
cpu: "0.5"
requests:
cpu: "0.5"
Name: web-svc
Namespace: default
Labels: <none>
Annotations: Selector: app=webpod
Type: NodePort
IP: 10.111.23.112
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30002/TCP
Endpoints: 10.244.1.7:80,10.244.1.8:80,10.244.1.9:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 172.18.0.3:6443
Session Affinity: None
Events: <none>
Name: web-depl-5c87b748f-kvtqr
Namespace: default
Priority: 0
Node: kind-worker/172.18.0.2
Start Time: Mon, 04 May 2020 04:20:34 +0000
Labels: app=webpod
pod-template-hash=5c87b748f
Annotations: <none>
Status: Running
IP: 10.244.1.8
IPs:
IP: 10.244.1.8
Controlled By: ReplicaSet/web-depl-5c87b748f
Containers:
web:
Container ID: containerd://8b431d80fd729c8b0d7e16fa898ad860d1a223b3e191367a68e3b65e330fe61a
Image: web
Image ID: sha256:16a4c5d1a652b1accbacc75807abc1d9a05e2be38115dc8a5f369a04a439fad2
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 04 May 2020 04:20:36 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 500m
Requests:
cpu: 500m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c9tgf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-c9tgf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c9tgf
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
=========
Name: iweblens-svc
Namespace: default
Labels: <none>
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2020-05-04T04:20:36Z
Subsets:
Addresses: 10.244.1.7,10.244.1.8,10.244.1.9
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 80 TCP
Events: <none>
</code></pre>
| Zaharlan | <p>Is the client using a persistent/long-lived connection? Because the service endpoint will only distribute the new connections in a round-robin manner as per your setting. Kubernetes doesn't offer any built-in mechanism to load balance long-lived connections. For long-lived connections, you can handle the load balancing on the client side or use a reverse proxy (service mesh/traefik ingress) which can take care of the load balancing responsibility.</p>
| harnoor |
<p>I have Superset installed via helm chart in my Kubernetes environment; I took everything from the official documentation and repository: <a href="https://github.com/apache/superset" rel="nofollow noreferrer">https://github.com/apache/superset</a></p>
<p>I'm trying to achieve a data auto-refresh of the dashboard every 12 hours via the helm chart and not via the UI; I read that this can be done by enabling the Superset cache, so data will be cached for 12 hours and then dynamically refreshed, and everyone that accesses the Superset UI sees the same values.</p>
<p>My problem now is this: I can see the cache configuration in the superset/config.py file:</p>
<pre><code># Default cache for Superset objects
CACHE_CONFIG: CacheConfig = {"CACHE_TYPE": "NullCache"}
# Cache for datasource metadata and query results
DATA_CACHE_CONFIG: CacheConfig = {"CACHE_TYPE": "NullCache"}
# Cache for dashboard filter state (`CACHE_TYPE` defaults to `SimpleCache` when
# running in debug mode unless overridden)
FILTER_STATE_CACHE_CONFIG: CacheConfig = {
"CACHE_DEFAULT_TIMEOUT": int(timedelta(days=90).total_seconds()),
# should the timeout be reset when retrieving a cached value
"REFRESH_TIMEOUT_ON_RETRIEVAL": True,
}
# Cache for explore form data state (`CACHE_TYPE` defaults to `SimpleCache` when
# running in debug mode unless overridden)
EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
"CACHE_DEFAULT_TIMEOUT": int(timedelta(days=7).total_seconds()),
# should the timeout be reset when retrieving a cached value
"REFRESH_TIMEOUT_ON_RETRIEVAL": True,
}
</code></pre>
<p>As per the documentation I'm using the <strong>configOverrides</strong> section of the helm chart to overwrite the default values and enable the cache for config, data, filter and explore, but I can't find any example of how to do it, and everything I try fails in the helm release.</p>
<p>I tried to read the helm chart, but it looks like it takes the whole <strong>configOverrides</strong> section, and I was not able to find where it overwrites those specific values.</p>
<p>Here is an example of what I try to overwrite; for example, enabling some flags works without problem:</p>
<pre><code>configOverrides:
enable_flags: |
FEATURE_FLAGS = {
"DASHBOARD_NATIVE_FILTERS": True,
"ENABLE_TEMPLATE_PROCESSING": True,
"DASHBOARD_CROSS_FILTERS": True,
"DYNAMIC_PLUGINS": True,
"VERSIONED_EXPORT": True,
"DASHBOARD_RBAC": True,
}
</code></pre>
<p>But if I try to overwrite one or more cache values it fails (config.py <a href="https://github.com/apache/superset/blob/master/superset/config.py" rel="nofollow noreferrer">https://github.com/apache/superset/blob/master/superset/config.py</a>). This is one of the different ways I tried to overwrite them, after checking the helm values file, the template and the superset config.py (and checking other articles):</p>
<pre><code>configOverrides:
cache_config: |
CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_cache_'
}
data_cache_config: |
DATA_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_data_'
}
filter_cache_config: |
FILTER_STATE_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_filter_'
}
explore_cache_config: |
EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_explore_'
}
</code></pre>
<p>Any help please? Or a pointer to some good documentation that has examples! PS: the Redis installation I have is the default one created by the helm chart; I didn't change anything on it.</p>
| Carlo 1585 | <p><strong>TL;DR;</strong> your <code>configOverrides</code> should look like this:</p>
<pre><code>configOverrides:
cache_config: |
from datetime import timedelta
from superset.superset_typing import CacheConfig
CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_cache_'
}
DATA_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_data_'
}
FILTER_STATE_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_filter_'
}
EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_explore_'
}
</code></pre>
<h2>Details:</h2>
<p>After running a helm install with your settings, your config file will look a bit like this:</p>
<pre><code>import os
from cachelib.redis import RedisCache
...
CACHE_CONFIG = {
'CACHE_TYPE': 'redis',
'CACHE_DEFAULT_TIMEOUT': 300,
'CACHE_KEY_PREFIX': 'superset_',
'CACHE_REDIS_HOST': env('REDIS_HOST'),
'CACHE_REDIS_PORT': env('REDIS_PORT'),
'CACHE_REDIS_PASSWORD': env('REDIS_PASSWORD'),
'CACHE_REDIS_DB': env('REDIS_DB', 1),
}
DATA_CACHE_CONFIG = CACHE_CONFIG
...
# Overrides
# cache_config
CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_cache_'
}
# data_cache_config
DATA_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_data_'
}
# enable_flags
FEATURE_FLAGS = {
"DASHBOARD_NATIVE_FILTERS": True,
"ENABLE_TEMPLATE_PROCESSING": True,
"DASHBOARD_CROSS_FILTERS": True,
"DYNAMIC_PLUGINS": True,
"VERSIONED_EXPORT": True,
"DASHBOARD_RBAC": True,
}
# explore_cache_config
EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_explore_'
}
# filter_cache_config
FILTER_STATE_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_filter_'
}
</code></pre>
<p>When I looked at the pod logs, there were a lot of errors due to the function <code>timedelta</code> not being defined, here is a sample of the logs I can see:</p>
<pre><code>File "/app/pythonpath/superset_config.py", line 42, in <module>
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
NameError: name 'timedelta' is not defined
</code></pre>
<p>The file in question, <code>/app/pythonpath/superset_config.py</code>, is loaded via an import <a href="https://github.com/apache/superset/blob/master/superset/config.py#L1587" rel="nofollow noreferrer">here</a>, as mentioned in <a href="https://github.com/apache/superset/blob/master/superset/config.py#L19" rel="nofollow noreferrer">the comment at the top of the file</a>.</p>
<p>Notice that you're writing a fresh new <code>.py</code> file, which means that you need to add <code>from datetime import timedelta</code> at the top of the configOverrides section.</p>
<p>However, since the doc in the helm chart states the following warning <code>WARNING: the order is not guaranteed Files can be passed as helm --set-file configOverrides.my-override=my-file.py</code>, and you clearly want to use the function <code>timedelta</code>, we must combine all the override blocks under the same section like this:</p>
<pre><code>configOverrides:
cache_config: |
    from datetime import timedelta
    from superset.superset_typing import CacheConfig
CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_cache_'
}
DATA_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_data_'
}
FILTER_STATE_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_filter_'
}
EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
'CACHE_TYPE': 'RedisCache',
'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
'CACHE_KEY_PREFIX': 'superset_explore_'
}
</code></pre>
<p>Furthermore, you wanted to use the type <code>CacheConfig</code>, so we should also include an import for it at the top.</p>
| Eslam Hossam |
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/</a></p>
<p>To enable the NGINX Ingress controller, run the following command:</p>
<pre><code> minikube addons enable ingress
</code></pre>
<p>How can I enable it without minikube on Windows? Kubernetes is enabled through Docker Desktop, so minikube is not installed.</p>
<h2>UPDATE</h2>
<p>From doc: <a href="https://kubernetes.github.io/ingress-nginx/deploy/#installation-guide" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#installation-guide</a></p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.1/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>Error:</p>
<pre><code>kubectl describe pod ingress-nginx-controller-78cd9ffdfc-lwrwd -n ingress-nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned ingress-nginx/ingress-nginx-controller-78cd9ffdfc-lwrwd to docker-desktop
Warning FailedMount 9m (x10 over 13m) kubelet MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
Warning FailedMount 8m53s (x2 over 11m) kubelet Unable to attach or mount volumes: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert kube-api-access-7wb8v]: timed out waiting for the condition
Normal Pulling 6m57s kubelet Pulling image "registry.k8s.io/ingress-nginx/controller:v1.7.1@sha256:7244b95ea47bddcb8267c1e625fb163fc183ef55448855e3ac52a7b260a60407"
</code></pre>
| eastwater | <p>Based on this <a href="https://kubernetes.github.io/ingress-nginx/deploy/#installation-guide" rel="nofollow noreferrer">documentation</a>, you can either install NGINX Ingress through <code>Helm</code> using the project repository chart, or with <code>kubectl apply</code> using a YAML manifest.</p>
<p>To install it using <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start:%7E:text=%C2%B6-,If%20you%20have%20Helm,-%2C%20you%20can" rel="nofollow noreferrer">Helm</a>:</p>
<pre><code>helm install --namespace kube-system nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
</code></pre>
<p>If you already have helm, you can use the command:</p>
<pre><code>helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
</code></pre>
<p>To install it using <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start:%7E:text=If%20you%20don%27t%20have%20Helm%20or%20if%20you%20prefer%20to%20use%20a%20YAML%20manifest%2C%20you%20can%20run%20the%20following%20command%20instead%3A" rel="nofollow noreferrer">kubectl</a>:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>Unfortunately, the YAML manifest in the command above was generated with <code>helm template</code>, so you will end up with almost the same resources as if you had used Helm to install the controller.</p>
| James S |
<p>Here's my problem. My GKE GCP node IP addresses have access to an on premise network using ipsec/vpn and on premise firewall rules but my pod IP addresses do not. I want my traffic going from pods to use one of the acceptable node source IP addresses. How can I achieve that?</p>
| tumpy | <p>You should read about <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ip-masquerade-agent?hl=en" rel="nofollow noreferrer">IP Masquerading</a> and how to <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent" rel="nofollow noreferrer">edit the IP Masquerade agent</a>.</p>
<p><em>IP masquerading is a form of source network address translation (SNAT) used to perform many-to-one IP address translations. GKE can use IP masquerading to change the source IP addresses of packets sent from Pods. When IP masquerading applies to a packet emitted by a Pod, GKE changes the packet's source address from the Pod IP to the underlying node's IP address. Masquerading a packet's source is useful when a recipient is configured to receive packets only from the cluster's node IP addresses.</em></p>
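<p>As a minimal sketch of the agent's ConfigMap (the CIDRs below are placeholders - the key point is that your on-premises range must <em>not</em> be listed in <code>nonMasqueradeCIDRs</code>, so that traffic towards it is SNATed to the node IP):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.56.0.0/14     # e.g. your pod CIDR, so pod-to-pod traffic keeps pod IPs
    masqLinkLocal: false
    resyncInterval: 60s
</code></pre>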
<p>Please see this post that discusses <a href="https://stackoverflow.com/questions/59207115/egress-traffic-from-gke-pod-through-vpn">Egress traffic from GKE Pod through VPN</a></p>
| James S |
<p>We run a Kubernetes cluster provisioned with kubespray and discovered that each time a faulty node goes down (we had this recently due to a hardware issue), the pods executing on that node get stuck in the Terminating state indefinitely. Even after many hours the pods are not redeployed on healthy nodes, and thus our entire application is malfunctioning and the users are affected for a prolonged period of time.</p>
<p>How is it possible to configure Kubernetes to perform failover in situations like this?</p>
<p>Below is our statefulset manifest.</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: project-stock
name: ps-ra
spec:
selector:
matchLabels:
infrastructure: ps
application: report-api
environment: staging
serviceName: hl-ps-report-api
replicas: 1
template:
metadata:
namespace: project-stock
labels:
infrastructure: ps
application: report-api
environment: staging
spec:
terminationGracePeriodSeconds: 10
containers:
- name: ps-report-api
image: localhost:5000/ps/nodejs-chrome-application:latest
ports:
- containerPort: 3000
protocol: TCP
name: nodejs-rest-api
volumeMounts:
resources:
limits:
cpu: 1000m
memory: 8192Mi
requests:
cpu: 333m
memory: 8192Mi
livenessProbe:
httpGet:
path: /health/
port: 3000
initialDelaySeconds: 180
periodSeconds: 10
failureThreshold: 12
timeoutSeconds: 10
</code></pre>
| roman | <p>Posted community wiki for better visibility. Feel free to expand it.</p>
<hr />
<p>In my opinion, the behaviour on your <code>kubespray</code> cluster (pod staying in <code>Terminating</code> state) is fully intentional. Based on <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p>A Pod is not deleted automatically when a node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#condition" rel="nofollow noreferrer">timeout</a>. Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node.</p>
</blockquote>
<p>The same documentation introduces ways in which a Pod in <code>Terminating</code> state can be removed. Also there are some recommended best practices:</p>
<blockquote>
<p>The only ways in which a Pod in such a state can be removed from the apiserver are as follows:</p>
<ul>
<li>The Node object is deleted (either by you, or by the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node Controller</a>).</li>
<li>The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.</li>
<li>Force deletion of the Pod by the user.</li>
</ul>
</blockquote>
<blockquote>
<p>The recommended best practice is to use the first or second approach. If a Node is confirmed to be dead (e.g. permanently disconnected from the network, powered down, etc), then delete the Node object. If the Node is suffering from a network partition, then try to resolve this or wait for it to resolve. When the partition heals, the kubelet will complete the deletion of the Pod and free up its name in the apiserver. Normally, the system completes the deletion once the Pod is no longer running on a Node, or the Node is deleted by an administrator. You may override this by force deleting the Pod.</p>
</blockquote>
<p>You can implement <a href="https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/" rel="nofollow noreferrer">Graceful Node Shutdown</a> if your node is shutdown in one of the <a href="https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/" rel="nofollow noreferrer">following ways</a>:</p>
<blockquote>
<p>On Linux, your system can shut down in many different situations. For example:</p>
<ul>
<li>A user or script running <code>shutdown -h now</code> or <code>systemctl poweroff</code> or <code>systemctl reboot</code>.</li>
<li>Physically pressing a power button on the machine.</li>
<li>Stopping a VM instance on a cloud provider, e.g. <code>gcloud compute instances stop</code> on GCP.</li>
<li>A Preemptible VM or Spot Instance that your cloud provider can terminate unexpectedly, but with a brief warning.</li>
</ul>
</blockquote>
<p>Keep in mind this feature is supported from version <strong>1.20</strong> (where it is in alpha state) and up (currently, in <strong>1.21</strong>, it is in beta state).</p>
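<p>If you decide to enable it, a minimal kubelet configuration sketch could look like the one below (the durations are example values, and how you deliver this depends on how kubespray manages your kubelet configuration). Note that this only helps with orderly shutdowns - it will not help when a node dies abruptly from a hardware fault:</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true           # required while the feature is alpha/beta
shutdownGracePeriod: 30s               # total time given to pods during a node shutdown
shutdownGracePeriodCriticalPods: 10s   # part of the above reserved for critical pods
</code></pre>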
<p>The other solution, mentioned in the documentation, is to manually delete the node, for example using <code>kubectl delete node <your-node-name></code>:</p>
<blockquote>
<p>If a Node is confirmed to be dead (e.g. permanently disconnected from the network, powered down, etc), then delete the Node object.</p>
</blockquote>
<p>Then the pod will be re-scheduled on another node.</p>
<p>The last workaround is to set <code>TerminationGracePeriodSeconds</code> to <code>0</code>, but this is <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="nofollow noreferrer">strongly discouraged</a>:</p>
<blockquote>
<p>For the above to lead to graceful termination, the Pod <strong>must not</strong> specify a <code>pod.Spec.TerminationGracePeriodSeconds</code> of 0. The practice of setting a <code>pod.Spec.TerminationGracePeriodSeconds</code> of 0 seconds is unsafe and strongly discouraged for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">shuts down gracefully</a> before the kubelet deletes the name from the apiserver.</p>
</blockquote>
| Mikolaj S. |
<p>I have 2 or 3 clusters that I am collecting logs from centrally using Grafana Loki. I want to be able to distinguish the logs from each environment; each environment is its own k8s cluster.
But I still see only the stock labels that are added, and not the ones I am trying to add.</p>
<p>Here's how I tried to add the labels using external_labels:</p>
<pre><code>promtail:
enabled: true
config:
logLevel: info
serverPort: 3100
clients:
- url: http://loki:3100/loki/api/v1/push
external_labels:
cluster: prod
scrape_configs:
- job_name: kubernetes
kubernetes_sd_configs:
- role: pod
label_config:
external_labels:
cluster: prod
</code></pre>
<p>Is this the correct approach, or am I missing something?</p>
| sal | <p>The following configuration worked for me:</p>
<pre><code>promtail:
enabled: true
config:
logLevel: info
serverPort: 3100
clients:
- url: http://loki:3100/loki/api/v1/push
external_labels:
cluster: "prod"
scrape_configs:
- job_name: kubernetes
kubernetes_sd_configs:
- role: pod
</code></pre>
| sal |
<p>What I am going to use:</p>
<ul>
<li>Microk8s</li>
<li>istio addon</li>
<li>Metallb addon</li>
<li>Cert manager (if possible)</li>
</ul>
<hr />
<p>With microk8s, I want to deploy several micro services.</p>
<p>I want to set up istio gateway to check whether rest api requests are using https protocol.</p>
<ul>
<li>Each micro service has its own virtual service.</li>
</ul>
<p>If there is no DNS and I can only use a private IP address (e.g. 192.168.2xx.xxx), what do I have to do first? If this approach is not technically possible, please let me know.</p>
<p>(With DNS, Let's Encrypt would be a solution using cert-manager. Are there any options for a private IP address that work like Let's Encrypt?)</p>
| stella | <blockquote>
<p>Are there any options for a private IP address that works like
letsencrypt?</p>
</blockquote>
<p>If you are using a private IP address and do not have DNS, you cannot use LetsEncrypt to obtain a SSL certificate. Instead, you will need to use a certificate from a Certificate Authority (CA) that can generate certificates for private IPs. To do this, you will need to generate a Certificate Signing Request (CSR) and submit it to the CA. The CA will then generate a certificate that is signed with its private key and send it back to you. You will then install this certificate on your Istio gateway and use it to check whether requests to your microservices are using HTTPS protocol. Additionally, you will need to ensure that each microservice has its own virtual service to make sure that the requests are routed to the correct microservice. Refer to this <a href="https://www.digitalocean.com/community/tutorials/a-comparison-of-let-s-encrypt-commercial-and-private-certificate-authorities-and-self-signed-ssl-certificates" rel="nofollow noreferrer">doc</a> for more information.</p>
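<p>Since cert-manager was mentioned in the question, one possible alternative (a sketch only, not the approach from the linked doc) is to let cert-manager act as your own private CA inside the cluster: store a CA key pair you created as a secret, and issue a certificate for the private IP from it. Clients will only trust it if they import your CA certificate, just like with any private CA.</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: private-ca-issuer
  namespace: istio-system
spec:
  ca:
    secretName: private-ca-keypair     # TLS secret holding your own CA certificate and key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: gateway-cert
  namespace: istio-system
spec:
  secretName: gateway-cert             # secret the Istio gateway will reference
  ipAddresses:
    - 192.168.200.100                  # placeholder private IP
  issuerRef:
    name: private-ca-issuer
    kind: Issuer
</code></pre>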
<blockquote>
<p>To set up istio gateway to check whether rest api requests are using
https protocol.</p>
</blockquote>
<p>To set up an <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#configuring-ingress-using-a-gateway" rel="nofollow noreferrer">Istio gateway</a> to check whether REST API requests are using the HTTPS protocol, you need to configure a gateway and virtual service in Istio. The gateway should be configured to route traffic on the HTTPS port to the port where your REST API is running. The virtual service should be configured to match requests that have the X-Forwarded-Proto header set to https and route them to the correct service. You can also configure Istio to reject requests that don't have the X-Forwarded-Proto header set to https or that have the X-Forwarded-Proto header set to http. Once you have configured the gateway and virtual service, you should be able to test that requests to your REST API are using the HTTPS protocol.</p>
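<p>A minimal sketch of such a gateway and virtual service (the names, TLS secret and backend service/port are placeholders for your setup; with no DNS you can match hosts on <code>"*"</code>):</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: https-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway              # default Istio ingress gateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: gateway-cert     # TLS secret in istio-system (e.g. the one issued above)
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/https-gateway
  http:
  - match:
    - uri:
        prefix: /api/my-service        # one route per microservice
    route:
    - destination:
        host: my-service               # the microservice's ClusterIP service
        port:
          number: 8080
</code></pre>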
<p>Refer to this <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/#configure-a-tls-ingress-gateway-for-a-single-host" rel="nofollow noreferrer">doc</a> on configuring a TLS ingress gateway.</p>
| Hemanth Kumar |
<p>I am trying to learn to deploy a k8s cluster to EKS and access the website through it, but I am receiving a [failed to call webhook] error in my Travis CI log files, as well as a [no endpoints available for service "ingress-nginx-controller admission"] error. The deployment runs successfully and the pods get launched to the EKS cluster; I just cannot figure out how to access the URL of the app. I attached some images that I thought might be important for my error.</p>
<p><a href="https://i.stack.imgur.com/iD6mf.png" rel="nofollow noreferrer">CoreDNSError</a></p>
<p><a href="https://i.stack.imgur.com/0qXAu.png" rel="nofollow noreferrer">Travis-CI Logs</a></p>
<p><a href="https://i.stack.imgur.com/pM7dL.png" rel="nofollow noreferrer">UnavailableLoadBalancerError</a></p>
| ayfantis53 | <p>Which website do you want to access? What is your web server, and how do you deploy it?</p>
<p>Chances are that you are missing a load balancer to make the web server available.</p>
| Christoph Fischer |
<p>Instead of navigating a namespace via e.g. <code>:service</code>, then <code>:pod</code> etc, I would like to see everything that's in the namespace in a single view. As if you would type <code>kubectl -n argocd get all</code>.</p>
<p>Can't find the info in the docs. Is this even possible?</p>
<p>Thanks for any suggestion!</p>
| ss1 | <p>Posting community wiki answer based on GitHub topic - <a href="https://github.com/derailed/k9s/issues/771" rel="nofollow noreferrer">Show multiple resource types without having to switch</a>. Feel free to expand it.</p>
<hr />
<p>That's true, there is no information about this in the documentation because there is simply no such possibility. There is an <a href="https://github.com/derailed/k9s/issues/771#issue-640485968" rel="nofollow noreferrer">open issue with this request on the GitHub page of k9s</a>:</p>
<blockquote>
<p><strong>Is your feature request related to a problem? Please describe.</strong><br />
Oftentimes I'm watching/working on multiple resource types at the same time and it's very helpful to not have to switch from one to another. This is something very like <code>kubectl get pod,deploy,...</code> or <code>kubectl get-all</code> commands allows</p>
<p><strong>Describe the solution you'd like</strong><br />
Being able to see multiple or all resources in the same screen without having to switch between resource types like:<br />
<code>:pod,configmap</code> shows all pods & configmaps in the current namespace<br />
or<br />
<code>:all</code> shows all resources in the current namespace (get-all like)</p>
</blockquote>
<p>Last <a href="https://github.com/derailed/k9s/issues/771#issuecomment-960530786" rel="nofollow noreferrer">pinged November 4 2021</a>.</p>
| Mikolaj S. |
<p>I use <code>kubectl</code> to list Kubernetes custom resources of a kind <code>mykind</code> with an additional table column <code>LABEL</code> that contains the value of a label <code>a.b.c.com/key</code> if present:</p>
<pre><code>kubectl get mykind -o=custom-columns=LABEL:.metadata.labels.'a\.b\.c\.com/key'
</code></pre>
<p>This works, i.e., the label value is properly displayed.</p>
<p>Subsequently, I wanted to add a corresponding additional printer column to the custom resource definition of <code>mykind</code>:</p>
<pre><code>- description: Label value
jsonPath: .metadata.labels.'a\.b\.c\.com/key'
name: LABEL
type: string
</code></pre>
<p>Although the additional column is added to <code>kubectl get mykind</code>, it is empty and no label value is shown (in contrast to the above <code>kubectl</code> command). My only suspicion was a problem with escaping the special characters - but no variation helped.</p>
<p>Are you aware of any difference between the JSON path handling in <code>kubectl</code> and additional printer columns? I strongly expected them to be exactly the same.</p>
| anekdoti | <p>mdaniel's comment works!</p>
<pre class="lang-yaml prettyprint-override"><code>- description: Label value
jsonPath: '.metadata.labels.a\.b\.c\.com/key'
name: LABEL
type: string
</code></pre>
<p>You need to use <code>\.</code> instead of <code>.</code> and use single quotes <code>' '</code>. It doesn't work with double quotes, for reasons I don't understand.</p>
| Vishal-Chdhry |
<p>I'm developing a service running in Google Kubernetes Engine and I would like to use Google Cloud functionality from that service.
I have created a service account in Google Cloud with all the necessary roles and I would like to use these roles from the pod running my service.</p>
<p>I have read this: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform</a>
and I was wondering if there is an easier way to "connect" the two kinds of service accounts ( defined in Kubernetes - defined in Google Cloud IAM ) ?</p>
<p>Thanks </p>
| barczajozsef | <p>Read the topic I have shared below. You need to enable Workload Identity on your cluster, and then you can annotate the Kubernetes service account with the Google Cloud IAM service account.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">gke-document</a></p>
| Vadym Nych |
<p>I'm stuck deploying the microservices locally with the following stack: <em>Skaffold, minikube, helm, and harbor</em>.
These microservices can be deployed locally without any problem with docker and docker-compose.
When I run <strong>skaffold dev</strong>, it stops at this point:</p>
<p><code>- statefulset/service0: Waiting for 1 pods to be ready...</code></p>
<p>When I describe the pod with the command:
<strong>kubectl describe pod service-0</strong></p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 12s (x3 over 13s) default-scheduler 0/1 nodes are available: 1 node(s) didn't match node selector.
</code></pre>
<p>I don't know what I am doing wrong... Any ideas?</p>
| Martín Marrari | <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/</a></p>
<p>Assign labels to a node to match your manifest, or alter the <code>nodeSelector</code> statement in your manifest to match labels that already exist on a node.</p>
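<p>For example (the label key/value here are made up - use whatever your Helm chart puts into the <code>nodeSelector</code>):</p>
<pre><code># Label a node (hypothetical key/value):
#   kubectl label nodes <your-node-name> disktype=ssd
# ...and have the pod spec in your StatefulSet ask for it:
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd    # must match a label present on at least one schedulable node
</code></pre>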
| ceecow |
<p>I am getting a 503 error for the ingress. I did the basic troubleshooting with labels and everything looks good. I see the pods are running and can be listed when queried with the service label.</p>
<p>The readiness probe has a warning, but it did not fail.</p>
<p>What else can be checked to resolve this issue? Any ideas appreciated.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>kubectl get service -n staging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-staging ClusterIP 172.20.174.146 <none> 8000/TCP 242d
kubectl describe service app-staging -n staging
Name: app-staging
Namespace: staging
Labels: <none>
Annotations: <none>
Selector: app=app-staging
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.174.146
IPs: 172.20.174.146
Port: app-staging 8000/TCP
TargetPort: 8000/TCP
Endpoints: 10.200.32.6:8000,10.200.64.2:8000
Session Affinity: None
Events: <none>
kubectl get pods -n staging -l app=app-staging
NAME READY STATUS RESTARTS AGE
app-staging-5677656dc8-djp8l 1/1 Running 0 4d7h
app-staging-5677656dc8-dln5v 1/1 Running 0 4d7h</code></pre>
<p>This is the readiness probe:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code> kubectl describe pod app-staging-5677656dc8-djp8l -n staging|grep -i readiness
Readiness: http-get http://:8000/ delay=30s timeout=1s period=30s #success=1 #failure=6
Warning ProbeWarning 40s (x12469 over 4d7h) kubelet Readiness probe warning:</code></pre>
<p>Here is the manifest file for the pod, service and ingress:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code># This deployment is setup to use ECR for now, but should switch to Artifactory in the future.
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-staging
namespace: staging
spec:
replicas: 2
selector:
matchLabels:
app: app-staging
template:
metadata:
labels:
app: app-staging
spec:
containers:
- name: app-staging
image: "${DOCKER_REGISTRY}/:${IMAGE_TAG}"
readinessProbe:
failureThreshold: 6
httpGet:
path: /
port: 8000
initialDelaySeconds: 30
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 1
imagePullPolicy: Always
# Setting AUTODYNATRACE_FORKABLE environment variable will cause an ominous looking error message similar to the one below:
#
# `WARNING autodynatrace - init: Could not initialize the OneAgent SDK, AgentState: 1`
#
# This error message is expected when "forkable" mode is enabled. See the link below for more information:
# https://github.com/Dynatrace/OneAgent-SDK-for-Python/blob/fa4dd209b6a21407abca09a6fb8da1b85755ab0a/src/oneagent/__init__.py#L205-L217
command: ["/bin/sh"]
args:
- -c
- >-
/bin/sed -i -e "s/# 'autodynatrace.wrappers.django'/'autodynatrace.wrappers.django'/" /app//ON_/ON_/settings.py &&
/usr/local/bin/python manage.py collectstatic --noinput &&
AUTOWRAPT_BOOTSTRAP=autodynatrace AUTODYNATRACE_FORKABLE=True /usr/local/bin/gunicorn --workers 8 --preload --timeout 120 --config gunicorn.conf.py --bind 0.0.0.0:8000
env:
- name: AUTODYNATRACE_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: AUTODYNATRACE_APPLICATION_ID
value: Django ($(AUTODYNATRACE_POD_NAME):8000)
ports:
- containerPort: 8000
volumeMounts:
# mount config in both locations while we migrate to running container as non-root user.
- name: secrets
readOnly: true
mountPath: /root/FHIREngine/conf
- name: secrets
readOnly: true
mountPath: /home//FHIREngine/conf
imagePullSecrets:
- name: jfrogcred
volumes:
- name: secrets
secret:
secretName: config
defaultMode: 420
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: app-staging
namespace: staging
spec:
ports:
- name: app-staging
port: 8000
targetPort: 8000
selector:
app: app-staging
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-staging
namespace: staging
annotations:
external-dns.alpha.kubernetes.io/hostname: staging.tv-pd.sh.io
external-dns.alpha.kubernetes.io/type: internal
kubernetes.io/ingress.class: nginx-internal
spec:
rules:
- host: staging.tv-pd.sh.io
http:
paths:
- path: /
backend:
service:
name: app
port:
number: 8000
pathType: ImplementationSpecific
#pathType is now required for each specified path. Options are Prefix, Exact, and ImplementationSpecific. To match the undefined v1beta1 behavior, use ImplementationSpecific
---</code></pre>
| green | <p>I see that your service is named "app-staging"</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-staging
</code></pre>
<p>But in the ingress the path mapping to service is incorrectly identifying the service name as "app"</p>
<pre><code>spec:
rules:
- host: staging.tv-pd.sh.io
http:
paths:
- path: /
backend:
service:
name: app
port:
number: 8000
</code></pre>
<p>Please change the backend service name in ingress to "app-staging" instead of "app".</p>
<p>Please accept the answer if this resolves your issue.</p>
| Raghu |
<p>I have a requirement to delete all pods for <code>service1-deployment</code> when container restart happens in <code>service2-deployment</code>.</p>
<p>I found out that we can do it through a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">lifecycle event</a> handler in <code>service2-deployment</code>.</p>
<p>But we cannot specify a <code>kubectl delete pod</code> command here since it runs inside a pod. Is there any easy way to restart the 2nd pod based on the 1st pod's lifecycle events?</p>
| Gajukorse | <p><strong>Disclaimer</strong> - as mentioned in the comments, you should avoid this solution (<a href="https://softwareengineering.stackexchange.com/questions/411082/should-microservices-be-independent/411136#411136">microservices should be independent</a>) until you really have no other choice.</p>
<hr />
<p>You can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">setup both <code>postStart</code> and <code>preStop</code> handlers</a> (for installing <code>kubectl</code> binary and for deleting the pods from deployment), but first you need to create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">proper Service Account for the pod</a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Role(Bindings)</a> with permissions to delete pods:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: role-for-hook
subjects:
- kind: ServiceAccount
name: service-account-for-hook
namespace: default
roleRef:
kind: ClusterRole
name: delete-pods-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: service-account-for-hook
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: delete-pods-role
labels:
# Add these permissions to the "view" default role.
rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["*"]
resources: ["pods"]
verbs: ["delete","list"]
</code></pre>
<p>Then, you can use the newly created Service Account plus the <code>postStart</code> and <code>preStop</code> handlers in the pod/deployment definition - an example for the NGINX image. I assumed that the label for the pods from <code>Service1</code> is <code>app=service1</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: service2-deployment
spec:
selector:
matchLabels:
app: service2
replicas: 2
template:
metadata:
labels:
app: service2
spec:
serviceAccountName: service-account-for-hook
containers:
- name: service2
image: nginx:latest
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "apt update && apt install curl && curl -L https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl"]
preStop:
exec:
command: ["/bin/sh", "-c", "kubectl delete pods -l app=service1"]
ports:
- containerPort: 80
</code></pre>
<p>Then, if the pod from <code>Service2</code> is restarted, the pods from <code>Service1</code> are also restarted.</p>
<p>The commands used in <code>command:</code> could be different for you; it depends on which base image you are using for your application.</p>
| Mikolaj S. |
<p>We are using Tika to extract text from a lot of documents;
for this we need to give the Tika service a custom config file (XML).</p>
<p>While in docker you can do it just the same as it appears in <a href="https://github.com/apache/tika-docker#custom-config" rel="nofollow noreferrer">tika docker image instructions</a>:</p>
<pre><code>docker run -d -p 9998:9998 -v `pwd`/tika-config.xml:/tika-config.xml apache/tika:1.25-full --config /tika-config.xml
</code></pre>
<p>I don't know how to achieve the same result with a k8s deployment.</p>
<p>The deployment I'm using now is this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: tika
labels:
app: tika
spec:
replicas: 1
selector:
matchLabels:
app: tika
template:
metadata:
labels:
app: tika
spec:
containers:
- name: tika
image: apache/tika:2.4.0
</code></pre>
<p>How can I add a custom config to this image?</p>
| NNH | <p>A Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> can be used to load the XML config file into the pod.</p>
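<p>For example (a rough sketch - the ConfigMap content and mount path are up to you), you can put <code>tika-config.xml</code> into a ConfigMap, mount it into the pod, and pass <code>--config</code> as container args, mirroring the <code>docker run</code> command from the question:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tika-config
data:
  tika-config.xml: |
    <properties>
      <!-- your custom Tika configuration -->
    </properties>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tika
  labels:
    app: tika
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tika
  template:
    metadata:
      labels:
        app: tika
    spec:
      containers:
      - name: tika
        image: apache/tika:2.4.0
        args: ["--config", "/tika-config/tika-config.xml"]   # same flag as in the docker run example
        volumeMounts:
        - name: tika-config
          mountPath: /tika-config
      volumes:
      - name: tika-config
        configMap:
          name: tika-config
</code></pre>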
| Nataraj Medayhal |
<p>On AKS, I am getting a very strange error pulling an image from a public docker repository: <code>Failed to pull image "jeremysv/eventstore-proxy:latest": rpc error: code = InvalidArgument desc = failed to pull and unpack image "docker.io/jeremysv/eventstore-proxy:latest": unable to fetch descriptor (sha256:46e5822176985eff4826449b3e4d4de5589aa012847153a4a2e134d216b7b28a) which reports content size of zero: invalid argument</code></p>
<p>I have tried deleting and recreating the AKS cluster, however the error is still there.</p>
<p>Using <code>docker pull</code> on my local machine works fine.</p>
| Jeremy Morren | <p>First of all have you been able to pull images from your repo to AKS in the past? If yes, what is the difference between this time and the previous successful one?</p>
<p>If not, I looked it up and it seems to be an error that Azure is aware of. Both of those guys had roughly the same issue as you: <a href="https://faultbucket.ca/2022/05/aks-image-pull-failed-from-proget/" rel="nofollow noreferrer">AKS image pull failed</a> and <a href="https://learn.microsoft.com/en-us/answers/questions/653640/kubernetes-in-aks-error-while-pulling-image-from-p.html" rel="nofollow noreferrer">Kubernetes (in AKS) error while pulling image</a>, and it seems to come from:</p>
<blockquote>
<p>localy Content-Length for HTTP HEAD request (downloading docker image manifets) is OK (real non 0 size), but
for HTTP HEAD request (downloading docker image manifets) from Internet, where network traffic is through a Azure proxy, Content-Length is set to 0 and containerd returns an error when pull docker image.</p>
</blockquote>
<p>So Azure is working on it, but it isn't clear if it's going to change it.</p>
<p>That being said, those guys tried to pull images from a private repo, while your image is public and I was able to pull it on a VM as well.
So I think that the problem either comes from your syntax (you probably already checked and re-checked it, but if you want a triple check you can post it here) or from Azure proxying images coming from some repo it doesn't know.</p>
<p>A simple way to overcome this that comes to mind is to have your image in another repo, in Azure Container Registry for example.</p>
| JujuPat |
<p>I tried to delete my jobs with a LabelSelector by <a href="https://github.com/kubernetes/client-go" rel="noreferrer">client-go</a>:</p>
<pre class="lang-golang prettyprint-override"><code>cli.BatchV1().Jobs("default").Delete(context.TODO(), name, metav1.DeleteOptions{})
</code></pre>
<p>And the job was deleted successfully, but its pods weren't!</p>
<p>If I delete this job with <code>kubectl</code>, the pod it created is deleted automatically.</p>
<p>How can I delete jobs together with their pods simply by <code>client-go</code>?</p>
| Reed Chan | <p>You need to set the <code>PropagationPolicy</code> field in <code>DeleteOptions</code> to <code>Background</code>. This ensures that the Job and its child Pods are deleted.</p>
<pre><code>import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
//...
// Background propagation makes the garbage collector delete the Job's Pods as well.
backgroundDeletion := metav1.DeletePropagationBackground

err := cli.BatchV1().Jobs("default").Delete(
	context.TODO(),
	name,
	metav1.DeleteOptions{
		PropagationPolicy: &backgroundDeletion,
	},
)
</code></pre>
| dom1 |
<p><a href="https://i.stack.imgur.com/gVffU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gVffU.png" alt="This is the image of the steps that needs to be done using a dockerfile and a kubernetes file." /></a></p>
<p><a href="https://i.stack.imgur.com/xm0GC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xm0GC.png" alt="This is the Dockerfile that I have written to perform the tasks but it's not running properly and I am not able to figure out the error." /></a></p>
<p>I will be very thankful if anybody can help me out with this dockerfile and kubernetes conf file to perform the following tasks.
I wanted to create a Dockerfile which can fetch the source code of an angular app from github and build it and also push it to the docker hub.
I have tried with the Dockerfile below but there are issues with the file. If anyone can guide me to the mistakes I have done or can provide a suitable Docker file then it will be great.
Also If possible I also want to ask for the kubernetes conf file which can pull the image from the dockerhub and run it as a service.
Thank You.</p>
| XANDER_015 | <p>Assuming that you have Docker and Kubernetes solutions setup and ready.</p>
<p>First, as mentioned by the others, the best option is just to use <a href="https://github.com/wkrzywiec/aston-villa-app/blob/master/Dockerfile" rel="nofollow noreferrer">Dockerfile from the repo</a> instead of writing your own:</p>
<pre><code>### STAGE 1: Build ###
FROM node:12.7-alpine AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/aston-villa-app /usr/share/nginx/html
</code></pre>
<p>Please <a href="https://github.com/wkrzywiec/aston-villa-app" rel="nofollow noreferrer">clone the repo</a>:</p>
<pre><code>git clone https://github.com/wkrzywiec/aston-villa-app.git
cd aston-villa-app
</code></pre>
<p>Create your Docker repository - steps are <a href="https://docs.docker.com/docker-hub/repos/" rel="nofollow noreferrer">presented here</a> - in this example I will create a public repository named <code>testing-aston-villa-app</code>.</p>
<p>Login to the <a href="https://docs.docker.com/engine/reference/commandline/login/" rel="nofollow noreferrer">Docker registry</a> on your host:</p>
<pre class="lang-sh prettyprint-override"><code>docker login
...
Login Succeeded
</code></pre>
<p><a href="https://docs.docker.com/docker-hub/#step-4-build-and-push-a-container-image-to-docker-hub-from-your-computer" rel="nofollow noreferrer">Build and push Docker image to your repo - commands are like this</a>:</p>
<pre class="lang-sh prettyprint-override"><code>docker build -t <your_username>/my-private-repo .
docker push <your_username>/my-private-repo
</code></pre>
<p>In our example (make sure that you are in the directory where repo is cloned):</p>
<pre class="lang-sh prettyprint-override"><code>docker build -t {your-username}/testing-aston-villa-app .
docker push {your-username}/testing-aston-villa-app
</code></pre>
<p>Ok, image is now on your Docker repository. Time to use it in Kubernetes. Please do below instructions on the host where you <a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer">have <code>kubectl</code> installed and configured to interact with your cluster</a>.</p>
<p>Following yaml file has definitions for <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a> and for <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a>. In <code>image</code> field please use <code><your_username>/my-private-repo</code> name:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: aston-villa-app-deployment
spec:
selector:
matchLabels:
app: aston-villa-app
replicas: 2
template:
metadata:
labels:
app: aston-villa-app
spec:
containers:
- name: aston-villa-app
image: {your-username}/testing-aston-villa-app
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: aston-villa-app-service
spec:
selector:
app: aston-villa-app
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>Please save this yaml and run <code>kubectl apply -f {file.yaml}</code>.</p>
<p>After applied, check if pods are up and service exits:</p>
<pre><code>kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/aston-villa-app-deployment-5f5478b66d-592nd 1/1 Running 0 13m
pod/aston-villa-app-deployment-5f5478b66d-vvhq2 1/1 Running 0 13m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/aston-villa-app-service ClusterIP 10.101.176.184 <none> 80/TCP 13m
</code></pre>
<p>Now, let's check if service is working by making request to it from another pod:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -i --tty busybox --image=busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # wget 10.101.176.184
Connecting to 10.101.176.184 (10.101.176.184:80)
saving to 'index.html'
index.html 100% |*****************************************************************************| 596 0:00:00 ETA
'index.html' saved
/ # cat index.html
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>AstonVillaApp</title>
<base href="/">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" type="image/x-icon" href="assets/images/as_logo.svg">
</head>
<body>
<app-root></app-root>
<script type="text/javascript" src="runtime.js"></script><script type="text/javascript" src="polyfills.js"></script><script type="text/javascript" src="styles.js"></script><script type="text/javascript" src="vendor.js"></script><script type="text/javascript" src="main.js"></script></body>
</html>
</code></pre>
<p>Note that I used IP address <code>10.101.176.184</code> because it's the IP address of the <code>service/aston-villa-app-service</code>. In your case, it will be probably different.</p>
| Mikolaj S. |
<p>I executed the <code>kubectl get nodes -o wide</code> command, and in the results, there is a column labeled INTERNAL-IP, which displays the internal IP for each node. I would like to understand if the nodes use this IP to communicate with each other and with the control plane (master node) as well.</p>
<p>Additionally, what role does Calico play in this particular scenario?</p>
| Dev OV | <p>A Kubernetes cluster is basically set up like a network. Every node/pod gets its own internal IP address and its own entry in kube-dns! All nodes and pods can communicate with each other over those IP addresses (it doesn't matter whether it's a master node or not) or via hostname.</p>
<p>If you use Calico, it implements a more advanced networking model using the BGP protocol (<a href="https://www.ibm.com/docs/fr/cloud-private/3.1.1?topic=ins-calico" rel="nofollow noreferrer">more detailed information about Calico</a>)</p>
<p>Calico also brings some other features, like:</p>
<ul>
<li>more possibilities to define network policies</li>
<li>more advanced security features</li>
<li>a design aimed at large-scale deployments</li>
<li>IP address management solutions can be used, for greater control over IP allocation</li>
</ul>
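<p>As a quick way to see these addresses in practice (plain kubectl, nothing Calico-specific):</p>
<pre><code>kubectl get nodes -o wide        # INTERNAL-IP column: the IP each node uses inside the cluster network
kubectl get pods -A -o wide      # pod IPs plus the node each pod is scheduled on
</code></pre>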
| Oberwalder Sven |
<p>I try to set up ingress in order to view my frontend.</p>
<p>Everything looks OK, including using curl to verify it:</p>
<pre><code>$ curl http://octopus.ea.mot-solutions.com
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="http://octopus.ea.mot-solutions.com/clustercontrol/">here</a>.</p>
<hr>
<address>Apache/2.4.38 (Debian) Server at octopus.ea.mot-solutions.com Port 80</address>
</body></html>
</code></pre>
<p>However using the same <a href="http://octopus.ea.mot-solutions.com/" rel="nofollow noreferrer">http://octopus.ea.mot-solutions.com/</a> from my browser gives "This site can’t be reached"</p>
<p>Here are the details:<br />
Running minikube on Linux Debian<br />
describe service:</p>
<pre><code>Name: fe-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=octopus-frontend
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.104.191.56
IPs: 10.104.191.56
Port: <unset> 90/TCP
TargetPort: 9090/TCP
NodePort: <unset> 31977/TCP
Endpoints: 172.17.0.3:9090
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>Describe ingress:</p>
<pre><code>Name: frontend-ingress
Labels: <none>
Namespace: default
Address: 192.168.49.2
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
octopus.ea.mot-solutions.com
/ fe-service:9090 (172.17.0.3:9090)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 25m (x2 over 26m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>I have used:</p>
<pre><code> kubectl expose deployment octopus-frontend --type=NodePort
</code></pre>
<p>and:</p>
<pre><code>minikube service fe-service
</code></pre>
<p>and put the minikube ip to /etc/hosts:</p>
<pre><code>192.168.49.2 octopus.ea.mot-solutions.com
</code></pre>
<p>minikube version: v1.25.2</p>
<p>Am I missing something?</p>
| Tamar | <p>The node port number is missing in the URL; since the service is exposed as a NodePort, include it in the address:</p>
<pre><code>http://octopus.ea.mot-solutions.com:31977
</code></pre>
| Nataraj Medayhal |
<p>I am trying to update a deployment via the YAML file, similar to <a href="https://stackoverflow.com/questions/48191853/how-to-update-a-deployment-via-editing-yml-file">this question</a>. I have the following yaml file...</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: simple-server-deployment
labels:
app: simple-server
spec:
replicas: 3
selector:
matchLabels:
app: simple-server
template:
metadata:
labels:
app: simple-server
spec:
containers:
- name: simple-server
image: nginx
ports:
- name: http
containerPort: 80
</code></pre>
<p>I tried changing the code by changing <code>replicas: 3</code> to <code>replicas: 1</code>. Next I redeployed like <code>kubectl apply -f simple-deployment.yml</code> and I get <code>deployment.apps/simple-server-deployment configured</code>. However, when I run <code>kubectl rollout history deployment/simple-server-deployment</code> I only see 1 entry...</p>
<pre><code>REVISION CHANGE-CAUSE
1 <none>
</code></pre>
<p>How do I do the same thing while increasing the revision so it is possible to rollback?</p>
<p><em>I know this can be done without the YAML but this is just an example case. In the real world I will have far more changes and need to use the YAML.</em></p>
| Jackie | <p>You can use <a href="https://stackoverflow.com/questions/61875309/what-does-record-do-in-kubernetes-deployment"><code>--record</code> flag</a> so in your case the command will look like:</p>
<pre><code>kubectl apply -f simple-deployment.yml --record
</code></pre>
<p>However, a few notes.</p>
<p>First, <a href="https://github.com/kubernetes/kubernetes/issues/40422" rel="nofollow noreferrer"> <code>--record</code> flag is deprecated</a> - you will see following message when you will run <code>kubectl apply</code> with the <code>--record</code> flag:</p>
<pre><code>Flag --record has been deprecated, --record will be removed in the future
</code></pre>
<p>However, <a href="https://github.com/kubernetes/kubernetes/issues/40422#issuecomment-995371023" rel="nofollow noreferrer">there is no replacement for this flag yet</a>, but keep in mind that in the future there probably will be.</p>
<p>Second thing, not every change will be recorded (even with <code>--record</code> flag) - I tested your example from the main question and there is no new revision. Why? <a href="https://github.com/kubernetes/kubernetes/issues/23989#issuecomment-207226153" rel="nofollow noreferrer">It's because:</a>:</p>
<blockquote>
<p><a href="https://github.com/deech" rel="nofollow noreferrer">@deech</a> this is expected behavior. The <code>Deployment</code> only create a new revision (i.e. another <code>Replica Set</code>) when you update its pod template. Scaling it won't create another revision.</p>
</blockquote>
<p>Considering the two above, you need to think (and probably test) if the <code>--record</code> flag is suitable for you. Maybe it's better to use some <a href="https://en.wikipedia.org/wiki/Version_control" rel="nofollow noreferrer">version control system</a> like <a href="https://git-scm.com/" rel="nofollow noreferrer">git</a>, but as I said, it depends on your requirements.</p>
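<p>One common workaround is to set the annotation that <code>--record</code> writes, <code>kubernetes.io/change-cause</code>, yourself after each apply; the value then shows up in <code>kubectl rollout history</code>. A minimal sketch (the message text is just an example):</p>
<pre><code>kubectl apply -f simple-deployment.yml
kubectl annotate deployment/simple-server-deployment kubernetes.io/change-cause="changed replicas to 1" --overwrite
kubectl rollout history deployment/simple-server-deployment
</code></pre>
<p>Keep in mind the second note above still applies: a scaling-only change does not create a new revision, so there is nothing to roll back to in that case.</p>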
| Mikolaj S. |
<pre><code>kubectl get pods
kubectl get namespace
kubectl describe ns.yml
</code></pre>
<p>When I try to execute any commands from above I am getting the following error:</p>
<blockquote>
<p>E0824 14:41:42.499373 27188 memcache.go:238] couldn't get current server API group list: Get "https://127.0.0.1:53721/api?timeout=32s": dial tcp 127.0.0.1:53721: connectex: No connection could be made because the target machine actively refused it.</p>
</blockquote>
<p>Could you please help me out, how to resolve it?</p>
<p>Can anyone tell me what is wrong with my kubectl utility?</p>
| Kavitha Boda | <p>So, kubectl needs to be configured! Currently kubectl is trying to connect to the kube-apiserver on your local machine (127.0.0.1); clearly there is no Kubernetes API server listening there, so it throws the error "No connection could be made..."</p>
<p>To change the kubectl settings you need to find the kubectl config file at the path:</p>
<p>Windows: <code>%userprofile%\.kube\config</code>
Linux: <code>$HOME/.kube/config</code></p>
<p>Open the file with any editor and then you need to change the ip 127.0.0.1 to any external ip of one of your nodes!</p>
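<p>The relevant part of the config file is the <code>server</code> field of the cluster entry, roughly like this (the IP and port are placeholders; use one of your node IPs and your API server port, typically 6443):</p>
<pre><code>clusters:
- cluster:
    certificate-authority-data: ...
    server: https://<node-external-ip>:6443   # instead of https://127.0.0.1:53721
  name: my-cluster
</code></pre>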
<p>This will solve your connection problem but there might be another issue with your certificate! I will alter my answer when you tell me what kubernetes distro you are using (e.g. k3s, k8s, ...)</p>
| Oberwalder Sven |
<p>I'm trying to update my Flux installation via bootstrap by following this documentation:
<a href="https://fluxcd.io/flux/use-cases/azure/#flux-installation-for-azure-devops" rel="nofollow noreferrer">https://fluxcd.io/flux/use-cases/azure/#flux-installation-for-azure-devops</a>
I run this code, enter my password, and run into the error below:</p>
<pre><code>flux bootstrap git \
--url=https://dev.azure.com/mycompany/mycomp/_git/myrepo \
--branch=main \
--password=${AZ_PAT_TOKEN} \
--token-auth=true \
--path=clusters/dev \
--version=v0.35.0
</code></pre>
<p>Error</p>
<pre><code>► cloning branch "main" from Git repository "https://dev.azure.com/mycompany/mycomp/_git/myrepo"
✗ failed to clone repository: unexpected client error: unexpected requesting "https://dev.azure.com/mycompany/mycomp/_git/myrepo/git-upload-pack" status code: 400
</code></pre>
<p>The repository link is working, the branch and path are available.</p>
| Leo | <p>The problem was that my Flux CLI wasn't at the same version as the cluster.</p>
| Leo |
<p>I need to use <a href="https://github.com/Azure/azure-sdk-for-python" rel="nofollow noreferrer">Azure Python SDK</a> and <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client</a> to list the Pods CPU limits for a cluster running in AKS.</p>
<p>Although its straight forward using CLI/PowerShell but I need to use Python exclusively.
Must not use <a href="https://stackoverflow.com/questions/53535855/how-to-get-kubectl-configuration-from-azure-aks-with-python">subprocess calls</a>.</p>
<p>Here is snippet that gets <code>KubeConfig</code> object after authentication with Azure:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
credential = DefaultAzureCredential(exclude_cli_credential=True)
subscription_id = "XXX"
resource_group_name= 'MY-SUB'
cluster_name = "my-aks-clustername"
container_service_client = ContainerServiceClient(credential, subscription_id)
kubeconfig = container_service_client.managed_clusters. \
list_cluster_user_credentials(resource_group_name, cluster_name). \
kubeconfigs[0]
</code></pre>
<p>But I am unsure how to put this to be used by K8s Python client:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config() ## How to pass?
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
| Rajesh Swarnkar | <p>You can use the <code>config.load_kube_config</code> method, but note that its <code>config_file</code> parameter expects a <em>path</em> to a kubeconfig file, not the kubeconfig content itself.</p>
<p>The object returned by the Azure SDK is a <code>CredentialResult</code> whose <code>value</code> attribute holds the raw kubeconfig YAML (as bytes), so one option is to write it to a temporary file and load that (alternatively, <code>config.load_kube_config_from_dict</code> can be used with the parsed YAML):</p>
<pre><code>import tempfile

from kubernetes import client, config

# kubeconfig.value holds the raw kubeconfig YAML returned by the Azure SDK (bytes)
with tempfile.NamedTemporaryFile(suffix=".yaml", delete=False) as kubeconfig_file:
    kubeconfig_file.write(kubeconfig.value)
    kubeconfig_path = kubeconfig_file.name

# Point the Kubernetes client at the temporary kubeconfig file
config.load_kube_config(config_file=kubeconfig_path)
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
| najx |
<p>I'm following Les Jackson's <a href="https://www.youtube.com/watch?v=DgVjEo3OGBI" rel="nofollow noreferrer">tutorial</a> to microservices and got stuck at 05:30:00 while creating a deployment for a ms sql server. I've written the deployment file just as shown on the yt video:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mssql-depl
spec:
replicas: 1
selector:
matchLabels:
app: mssql
template:
metadata:
labels:
app: mssql
spec:
containers:
- name: mssql
image: mcr.microsoft.com/mssql/server:2017-latest
ports:
- containerPort: 1433
env:
- name: MSSQL_PID
value: "Express"
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql
key: SA_PASSWORD
volumeMounts:
- mountPath: /var/opt/mssql/data
name: mssqldb
volumes:
- name: mssqldb
persistentVolumeClaim:
claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
name: mssql-clusterip-srv
spec:
type: ClusterIP
selector:
app: mssql
ports:
- name: mssql
protocol: TCP
port: 1433 # this is default port for mssql
targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
name: mssql-loadbalancer
spec:
type: LoadBalancer
selector:
app: mssql
ports:
- protocol: TCP
port: 1433 # this is default port for mssql
targetPort: 1433
</code></pre>
<p>The persistent volume claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mssql-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 200Mi
</code></pre>
<p>But when I apply this deployment, the pod ends up with ImagePullBackOff status:</p>
<pre><code>commands-depl-688f77b9c6-vln5v 1/1 Running 0 2d21h
mssql-depl-5cd6d7d486-m8nw6 0/1 ImagePullBackOff 0 4m54s
platforms-depl-6b6cf9b478-ktlhf 1/1 Running 0 2d21h
</code></pre>
<p><strong>kubectl describe pod</strong></p>
<pre><code>Name: mssql-depl-5cd6d7d486-nrrkn
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Thu, 28 Jul 2022 12:09:34 +0200
Labels: app=mssql
pod-template-hash=5cd6d7d486
Annotations: <none>
Status: Pending
IP: 10.1.0.27
IPs:
IP: 10.1.0.27
Controlled By: ReplicaSet/mssql-depl-5cd6d7d486
Containers:
mssql:
Container ID:
Image: mcr.microsoft.com/mssql/server:2017-latest
Image ID:
Port: 1433/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MSSQL_PID: Express
ACCEPT_EULA: Y
SA_PASSWORD: <set to the key 'SA_PASSWORD' in secret 'mssql'> Optional: false
Mounts:
/var/opt/mssql/data from mssqldb (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube- api-access-xqzks (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mssqldb:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mssql-claim
ReadOnly: false
kube-api-access-xqzks:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not- ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m42s default-scheduler Successfully assigned default/mssql-depl-5cd6d7d486-nrrkn to docker-desktop
Warning Failed 102s kubelet Failed to pull image "mcr.microsoft.com/mssql/server:2017-latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 102s kubelet Error: ErrImagePull
Normal BackOff 102s kubelet Back-off pulling image "mcr.microsoft.com/mssql/server:2017-latest"
Warning Failed 102s kubelet Error: ImagePullBackOff
Normal Pulling 87s (x2 over 3m41s) kubelet Pulling image "mcr.microsoft.com/mssql/server:2017-latest"
</code></pre>
<p>In the events it shows</p>
<blockquote>
<p>"rpc error: code = Unknown desc = context deadline exceeded"</p>
</blockquote>
<p>But it doesn't tell me anything and resources on troubleshooting this error don't include such error.</p>
<p>I'm using kubernetes on docker locally.
I've researched that this issue can happen when pulling the image from a private registry, but this is public one, right <a href="https://hub.docker.com/_/microsoft-mssql-server" rel="nofollow noreferrer">here</a>. I copy pasted the image path to be sure, I tried with different ms sql version, but to no avail.</p>
<p>Can someone be so kind and show me the right direction I should go / what should I try to get this to work? It worked just fine on the video :(</p>
| Ceres | <p>I fixed it by manually pulling the image via <code>docker pull mcr.microsoft.com/mssql/server:2017-latest</code> and then deleting and re-applying the deployment.</p>
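<p>For reference, the sequence was roughly the following (the manifest file name is an assumption; use whatever file holds the deployment):</p>
<pre><code>docker pull mcr.microsoft.com/mssql/server:2017-latest
kubectl delete -f mssql-depl.yaml
kubectl apply -f mssql-depl.yaml
</code></pre>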
| Ceres |
<p>My cluster is running on-prem. Currently when I try to ping the external IP of service type LoadBalancer assigned to it from Metal LB. I get a reply from one of the VM's hosting the pods - <strong>Destination Host unreachable</strong>. Is this because the pods are on an internal kubernetes network(I am using calico) and cannot be pinged. A detailed explanation of the scenario can help to understand it better. Also all the services are performing as expected. I am just curious to know the exact reason behind this since I am new at this. Any help will be much appreciated. Thank you</p>
| Dravid S Sundaram | <p>The LoadbalancerIP or the External SVC IP will never be pingable.</p>
<p>When you define a Service of type LoadBalancer, you are saying, for example, "I would like to listen on TCP port 8080 on this Service."</p>
<p>And that is the only thing your external SVC IP will respond to.</p>
<p>A ping sends ICMP echo packets, which do not match the TCP port 8080 destination, so it gets no reply.</p>
<p>You can do an <code>nc -v <ExternalIP> 8080</code> to test it.</p>
<p>OR</p>
<p>use a tool like <code>mtr</code> and pass --tcp --port 8080 to do your tests</p>
| Sushil Suresh |
<p>I have a spring cloud gateway that works fine in the docker configuration, like this:
(all routes/services except ratings are removed for readability's sake)</p>
<pre><code>@Value("${hosts.ratings}")
private String ratingsPath;
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
return builder.routes()
.route(r -> r.host("*").and().path("/api/ratings/**")
.uri(ratingsPath + ":2226/api/ratings/"))
...other routes...
.build();
}
</code></pre>
<p>This gets it values from the <code>application.properties</code> locally, and from an environment variable in docker, like so in the docker-compose:</p>
<pre><code> apigw:
build: ./Api-Gateway
container_name: apigw
links:
- ratings
...
depends_on:
- ratings
...
ports:
- "80:8080"
environment:
- hosts_ratings=http://ratings
...
</code></pre>
<p>This configuration works just fine. However, when porting this to our kubernetes cluster, all routes get a <code>404</code>.
The deployment of our api gateway is as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: apigw
name: apigw-deployment
spec:
replicas: 1
selector:
matchLabels:
app: apigw
template:
metadata:
labels:
app: apigw
spec:
containers:
- name: apigw
image: redacted
ports:
- containerPort: 8080
env:
- name: hosts_ratings
value: "ratings-service.default.svc.cluster.local"
...
</code></pre>
<p>With <code>ratings-service</code> being our ratings service (that definitely works, because when exposing it directly from its service, it does work), defined like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ratings-service
labels:
app: ratings
spec:
selector:
app: ratings
ports:
- port: 2226
targetPort: 2226
</code></pre>
<p>The service of our api gateway is as follows, using bare metal with an external IP that does work:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: apigw-service
labels:
app: apigw
spec:
selector:
app: apigw
ports:
- port: 80
targetPort: 8080
externalIPs:
- A.B.C.D
</code></pre>
<p>How I believe it should work is that <code>ratings-service.default.svc.cluster.local</code> would get translated to the correct ip, filled in to the <code>ratingsPath</code> variable, and the query would succeed, but this is not the case.<br />
Our other services are able to communicate in the same way, but the api gateway does not seem to be able to do that.
What could be the problem?</p>
| Raven | <p>Posting community wiki based on comment for better visibility. Feel free to expand it.</p>
<hr />
<p>The issue was a faulty version of the image:</p>
<blockquote>
<p>It seems like the service i was using just straight up didn't work. Must have been a faulty version of the image i was using.</p>
</blockquote>
<p>Check also:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer">Access Services Running on Clusters | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Debug Services | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods | Kubernetes</a></li>
</ul>
| Mikolaj S. |
<p>Please help ..</p>
<p>I am building nginx plus ingress controller and deplyoing in eks using Dockerfile</p>
<pre><code>
Dockerfile:
FROM amazonlinux:2
LABEL maintainer="[email protected]"
ENV NGINX_VERSION 23
ENV NJS_VERSION 0.5.2
ENV PKG_RELEASE 1.amzn2.ngx
ENV PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:${PATH}"
RUN mkdir -p /etc/ssl/nginx
ADD nginx-repo.crt /etc/ssl/nginx
ADD nginx-repo.key /etc/ssl/nginx
ADD qlik.crt /etc/ssl/nginx
RUN update-ca-trust extract
RUN yum -y update \
&& yum -y install sudo
RUN set -x \
&& chmod 644 /etc/ssl/nginx/* \
&& yum install -y --setopt=tsflags=nodocs wget ca-certificates bind-utils wget bind-utils vim-minimal shadow-utils \
&& groupadd --system --gid 101 nginx \
&& adduser -g nginx --system --no-create-home --home /nonexistent --shell /bin/false --uid 101 nginx \
&& usermod -s /sbin/nologin nginx \
&& usermod -L nginx \
&& wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nginx-plus-amazon2.repo \
&& yum --showduplicates list nginx-plus \
&& yum install -y --setopt=tsflags=nodocs nginx-plus-${NGINX_VERSION}-${PKG_RELEASE} \
&& rm /etc/nginx/conf.d/default.conf \
&& mkdir -p /var/cache/nginx \
&& mkdir -p /var/lib/nginx/state \
&& chown -R nginx:nginx /etc/nginx \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& ulimit -c -m -s -t unlimited \
&& yum clean all \
&& rm -rf /var/cache/yum \
&& rm -rf /etc/yum.repos.d/* \
&& rm /etc/ssl/nginx/nginx-repo.crt /etc/ssl/nginx/nginx-repo.key
RUN echo "root:root" | chpasswd
EXPOSE 80 443 8080
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>I am starting the container using helm commands</p>
<pre><code>
helm upgrade \
--install my-athlon-ingress-controller nginx-stable/nginx-ingress --version 0.11.3 --debug \
--set controller.image.pullPolicy=Always \
--set controller.image.tag=6.0.1 \
--set controller.image.repository=957123096554.dkr.ecr.eu-central-1.amazonaws.com/nginx-service \
--set controller.nginxplus=true \
--set controller.enableSnippets=true \
--set controller.enablePreviewPolicies=true \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-type'='nlb' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol'='tcp' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-proxy-protocol'='*' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports'='443'
echo Setting up SSL
export tlskey=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-key |jq --raw-output '.SecretString' )
echo $tlskey
export tlscrt=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-crt |jq --raw-output '.SecretString' )
echo $tlscrt
helm upgrade --install nginx-certificate-secrets ./helm-chart-nginx-certificates --set tlscrt=$tlscrt --set tlskey=$tlskey
</code></pre>
<p>OK, let me give more clarity: I have an nginx pod running on Debian 10, and when I try to curl a particular endpoint in Keycloak I get an error like</p>
<p>2022/06/13 12:17:46 [info] 35#35: *35461 invalid JWK set while sending to client, client: 141.113.3.32, server: gate-acc.athlon.com, request:</p>
<p>but when I curl the same endpoint from an application (Java) pod I get a 200 response.</p>
<p>Both the nginx pod and all my application pods are in the same namespace and the same EKS cluster.</p>
<p>The difference I see between the nginx pod and the application pods is that the application pods use Amazon Linux as the base image, while the nginx pod is built on a Debian base image.</p>
<p>So I suspect the OS is the issue. That is why I am now trying to build an NGINX Plus image based on Amazon Linux, deploy it using Helm, and then curl the Keycloak endpoint; that is when I get this PATH-not-found issue.</p>
<p>I assume Amazon Linux may have some root certificates already trusted built in, so it is able to curl my Keycloak, but Debian does not.</p>
<p>This is the reason I am doing this. Adding the certificate in the Dockerfile is an interim solution; if this works, I can then add it as a secret and mount it as a file system.</p>
<p>Both nginx pods (built on Amazon Linux or Debian) have only the nginx user. I am not able to log in as root, so I cannot install any utilities like tcpdump, mtr, or dig to see what is happening when I curl. The strange thing is that not even ps, sudo, or other basic commands work since I don't have root, and I am not able to install anything.</p>
<p>Error :</p>
<p>Error: failed to start container "my-athlon-ingress-controller-nginx-ingress": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-nginx-plus=true": executable file not found in $PATH: unknown</p>
<p>My goal is to deploy this image with the root certificate installed in the Amazon Linux image and to have root access inside the pod.</p>
<p>I am getting the above message; any help is much appreciated. I also added an ENV PATH in my Dockerfile.</p>
<p>qlik.crt has the root certificate</p>
<p>Please help, thanks</p>
| Dilu | <p>You do not need to build a custom NGINX Docker image just to load certificates. You can store them in a Kubernetes Secret and mount that Secret as a volume in the Deployment/DaemonSet configuration.</p>
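<p>A minimal sketch of that approach (names are placeholders; <code>qlik.crt</code> and <code>/etc/ssl/nginx</code> are taken from the Dockerfile above):</p>
<pre><code># create the secret from the certificate file
kubectl create secret generic nginx-root-ca --from-file=qlik.crt

# in the ingress controller Deployment/DaemonSet pod template:
spec:
  containers:
    - name: nginx-ingress
      volumeMounts:
        - name: root-ca
          mountPath: /etc/ssl/nginx
          readOnly: true
  volumes:
    - name: root-ca
      secret:
        secretName: nginx-root-ca
</code></pre>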
| Nataraj Medayhal |
<p>I managed to install kubernetes 1.22, longhorn, kiali, prometheus and istio 1.12 (profile=minimal) on a dedicated server at a hosting provider (hetzner).</p>
<p>I then went on to test httpbin with an istio ingress gateway from the istio tutorial. I had some problems making this accessible from the internet (I setup HAProxy to forward local port 80 to the dynamic port that was assigned in kubernetes, so port 31701/TCP in my case)</p>
<p>How can I make kubernetes directly available on bare metal interface port 80 (and 443).</p>
<p>I thought I found the solution with metallb but I cannot make that work so I think it's not intended for that use case. (I tried to set EXTERNAL-IP to the IP of the bare metal interface but that doesn't seem to work)</p>
<p>My HAProxy setup is not working right now for my SSL traffic (with cert-manager on kubernetes) but before I continue looking into that I want to make sure. Is this really how you are suppose to route traffic into kubernetes with an istio gateway configuration on bare metal?</p>
<p>I came across <a href="https://stackoverflow.com/questions/51331902/kubernetes-with-istio-ingress-not-running-on-standard-http-ports-443-80">this</a> but I don't have an external Load Balancer nor does my hosting provider provide one for me to use.</p>
| 2Fast2BCn | <p>Posted community wiki answer for better visibility based on the comment. Feel free to expand it.</p>
<hr />
<p>The solution for the issue is:</p>
<blockquote>
<p>I setup HAProxy in combination with Istio gateway and now it's working.</p>
</blockquote>
<p>The reason:</p>
<blockquote>
<p>I think the reason why SSL was not working was because <a href="https://istio.io/latest/docs/setup/additional-setup/gateway/" rel="nofollow noreferrer">istio.io/latest/docs/setup/additional-setup/gateway</a> creates the ingress gateway in a different namespace (<code>istio-ingress</code>) from the rest of the tutorials (<code>istio-system</code>).</p>
</blockquote>
| Mikolaj S. |
<p>In al the tutorials about Kubernetes cluster I have read I didn't see that they mention to 2 load balancers, but only one for the ingress pods.</p>
<p>However, in a proper production environment, should's we have 2 different load balancers?</p>
<ol>
<li>to balance between the master nodes for requests to the ApiServer.</li>
<li>to balance between the Ingress podes to control the external traffic.</li>
</ol>
| Ohad | <ol>
<li>to balance between the master nodes for requests to the ApiServer.</li>
</ol>
<blockquote>
<p>For all production environments it is advised to have a load
balancer in front of the API server. This is the first step when setting up a K8s HA control plane. More details are in the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#create-load-balancer-for-kube-apiserver" rel="nofollow noreferrer">k8s documentation</a></p>
</blockquote>
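<p>For illustration, a minimal HAProxy sketch for such a kube-apiserver load balancer (the addresses are placeholders; the kubeadm HA guide linked above contains a complete example):</p>
<pre><code>frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check
</code></pre>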
<ol start="2">
<li>to balance between the Ingress podes to control the external traffic.</li>
</ol>
<blockquote>
<p>You are correct here as well; it is definitely required to handle external traffic. Ingress controller implementations typically expose their ingress Service with type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>.</p>
</blockquote>
| Nataraj Medayhal |
<p>I have a django deployment on kubernetes cluster and in the <code>readinessProbe</code>, I am running <code>python</code>, <code>manage.py</code>, <code>migrate</code>, <code>--check</code>. I can see that the return value of this command is 0 but the pod never becomes ready.</p>
<p>Snippet of my deployment:</p>
<pre><code> containers:
- name: myapp
...
imagePullPolicy: Always
readinessProbe:
exec:
command: ["python", "manage.py", "migrate", "--check"]
initialDelaySeconds: 15
periodSeconds: 5
</code></pre>
<p>When I describe the pod which is not yet ready:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 66s default-scheduler Successfully assigned ... Normal Pulled 66s kubelet Successfully pulled image ...
Normal Created 66s kubelet Created container ...
Normal Started 66s kubelet Started container ...
Warning Unhealthy 5s (x10 over 50s) kubelet Readiness probe failed:
</code></pre>
<p>I can see that <code>migrate</code> <code>--check</code> returns 0 by execing into the container which is still in not ready state and running</p>
<pre><code>python manage.py migrate
echo $?
0
</code></pre>
<p>Is there something wrong in my exec command passed as <code>readinessProbe</code>?</p>
<p>The version of kubernetes server that I am using is 1.21.7.
The base image for my deployment is python:3.7-slim.</p>
| Divick | <p>The solution for the issue is to increase <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer"><code>timeoutSeconds</code> parameter, which is by default set to 1 second</a>:</p>
<blockquote>
<ul>
<li><code>timeoutSeconds</code>: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</li>
</ul>
</blockquote>
<p>After increasing the <code>timeoutSeconds</code> parameter, the application is able to pass the readiness probe.</p>
<p>Example snippet of the deployment with <code>timeoutSeconds</code> parameter set to 5:</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: myapp
...
imagePullPolicy: Always
readinessProbe:
exec:
command: ["python", "manage.py", "migrate", "--check"]
initialDelaySeconds: 15
periodSeconds: 5
timeoutSeconds: 5
</code></pre>
| Mikolaj S. |
<p>I am trying to work on sample project for istio. I have two apps demo1 and demo2.</p>
<p>demoapp Yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-1-app
spec:
replicas: 1
selector:
matchLabels:
app: demo-1-app
template:
metadata:
labels:
app: demo-1-app
spec:
containers:
- name: demo-1-app
image: muzimil:demo-1
ports:
- containerPort: 8080
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: demo-1-app
spec:
selector:
app: demo-1-app
ports:
- port: 8080
name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: demo-1-app
labels:
account: demo-1-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-2-app
spec:
replicas: 1
selector:
matchLabels:
app: demo-2-app
template:
metadata:
labels:
app: demo-2-app
spec:
containers:
- name: demo-2-app
image: muzimil:demo2-1
ports:
- containerPort: 8080
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: demo-2-app
spec:
selector:
app: demo-2-app
ports:
- port: 8080
name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: demo-2-app
labels:
account: demo-2-app
</code></pre>
<p>And My gateway os this</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: demo-app-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: demo-service1
spec:
hosts:
- "*"
gateways:
- demo-app-gateway
http:
- match:
- uri:
exact: /demo1
route:
- destination:
host: demo-1-app
port:
number: 8080
- match:
- uri:
exact: /demo2
route:
- destination:
host: demo-2-app
port:
number: 8080
</code></pre>
<p>I tried to hit url with localhost/demo1/getDetails both 127.0.0.1/demo1/getDetails</p>
<p>But I am getting always 404</p>
<p>istioctl analyse does not give any errors.</p>
| Patan | <p>To access the application - either change istio-ingressgateway service to NodePort or do port forwarding for the istio ingress gateway service. Edit the istio-ingressgateway service to change the service type.</p>
<pre><code>type: NodePort
</code></pre>
<p>K8s will assign a node port; you can then provide the same node port value in the Istio gateway.</p>
<pre><code> selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: <nodeportnumber>
name: http
protocol: HTTP
</code></pre>
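<p>The port-forwarding alternative (assuming Istio is installed in the default <code>istio-system</code> namespace) looks like this:</p>
<pre><code>kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
# then, from another terminal:
curl http://localhost:8080/demo1/getDetails
</code></pre>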
| Nataraj Medayhal |
<p>How do we update the <code>imagePullPolicy</code> alone for certain deployments using <code>kubectl</code>? The image tag has changed, however we don't require a restart. Need to update existing deployments with <code>--image-pull-policy</code> as <code>IfNotPresent</code></p>
<p>Note: Don't have the complete YAML or JSON for the deployments, hence need to do it via <code>kubectl</code></p>
| Rayyan | <p>If you need to do this across several deployments, here is some code to help. Note that <code>imagePullPolicy</code> is a per-container field, so it has to be set on each container in the pod template:</p>
<pre><code>from kubernetes import client, config

# Load the Kubernetes configuration from the default location (~/.kube/config)
config.load_kube_config()

# Create a Kubernetes API client for Deployments
api = client.AppsV1Api()

# Set the namespace to update deployments in
namespace = "my-namespace"

# Get a list of all deployments in the namespace
deployments = api.list_namespaced_deployment(namespace)

# Loop through each deployment and update the imagePullPolicy of every container
for deployment in deployments.items:
    for container in deployment.spec.template.spec.containers:
        container.image_pull_policy = "IfNotPresent"
    api.patch_namespaced_deployment(
        name=deployment.metadata.name,
        namespace=namespace,
        body=deployment
    )
</code></pre>
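<p>If only a couple of deployments need the change and you want to stay with plain <code>kubectl</code>, a JSON patch per deployment is an alternative (the deployment name and container index are placeholders; note that changing the pod template still triggers a rollout):</p>
<pre><code>kubectl patch deployment <deployment-name> --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'
</code></pre>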
| Hariharan Madhavan |
<p>I would like to catch the Client IP Address inside my .NET application running behind GKE Ingress Controller to ensure that the client is permitted.</p>
<pre><code>var requestIpAddress = request.HttpContext.Connection.RemoteIpAddress.MapToIPv4();
</code></pre>
<p>Instead of getting Client IP Address I get my GKE Ingress IP Address, due to The Ingress apply some forwarding rule.</p>
<p>The GKE Ingress controller is pointing to the Kubernetes service of type NodePort.</p>
<p>I have tried to add spec to NodePort service to preserve Client IP Address but it doesn't help. It is because the NodePort service is also runng behind the Ingress</p>
<pre><code>externalTrafficPolicy: Local
</code></pre>
<p>Is it possible to preserve Client IP Address with GKE Ingress controller on Kubernetes?</p>
<p>NodePort Service for Ingress:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-ingress-service
labels:
app/name: ingress.api
spec:
type: NodePort
externalTrafficPolicy: Local
selector:
app/template: api
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
namespace: default
labels:
kind: ingress
app: ingress
annotations:
networking.gke.io/v1beta1.FrontendConfig: frontend-config
spec:
tls:
- hosts:
- '*.mydomain.com'
secretName: tls-secret
rules:
- host: mydomain.com
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: api-ingress-service
port:
number: 80
</code></pre>
| Mikolaj | <p>Posted community wiki for better visibility. Feel free to expand it.</p>
<hr />
<p>Currently the only way to get the client source IP address in GKE Ingress is to <a href="https://cloud.google.com/load-balancing/docs/features#ip_addresses" rel="nofollow noreferrer">use <code>X-Forwarded-For</code> header. It's known limitation</a> for all GCP HTTP(s) Load Balancers (GKE Ingress is using External HTTP(s) LB).</p>
<p>If it does not suit your needs, consider migrating to a third-party Ingress Controller which is using an external TCP/UDP network LoadBalancer, like <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a>.</p>
| Mikolaj S. |
<p>On Cloud Composer I have long running DAG tasks, each of them running for 4 to 6 hours. The task ends with an error which is caused by Kubernetes API. The error message states 401 Unauthorized.</p>
<p>The error message:</p>
<pre><code>kubernetes.client.rest.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'e1a37278-0693-4f36-8b04-0a7ce0b7f7a0', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 07 Jul 2023 08:10:15 GMT', 'Content-Length': '129'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
</code></pre>
<p>The kubernetes API token has an expiry of 1 hour and the Composer is not renewing the token before it expires.
This issue never happened with Composer1, it started showing only when I migrated from Composer1 to Composer2</p>
<p>Additional details:
There is an option in GKEStartPodOperator called is_delete_operator_pod
that is set to true. This option deletes the pod from the cluster after the job is done. So, after the task is completed in about 4 hours, the Composer tries to delete the pod, and that time this 401 Unauthorized error is shown.</p>
<p>I have checked some Airflow configs like kubernetes.enable_tcp_keepalive that enables TCP keepalive mechanism for kubernetes clusters, but it doesn't help resolving the problem.</p>
<p>What can be done to prevent this error?</p>
| Kavya | <p>As mentioned in the comment, this issue might occur when you try to run a kubectl command against your GKE cluster from a local environment. The command fails and displays an error message, usually with an HTTP status code (Unauthorized).</p>
<p>The cause of this issue might be one of the following:</p>
<ul>
<li><p>The gke-gcloud-auth-plugin authentication plugin is not correctly installed or configured.</p>
</li>
<li><p>You lack the permissions to connect to the cluster API server and run kubectl commands.</p>
</li>
</ul>
<p>To diagnose the cause, follow the steps in this <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#connect_to_the_cluster_using_curl" rel="nofollow noreferrer">Link</a></p>
<p>If you get a 401 error or a similar authorization error, ensure that you have the correct permissions to perform the operation. For more information, see this <a href="https://github.com/apache/airflow/issues/31648" rel="nofollow noreferrer">GitHub issue</a></p>
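<p>If the missing plugin turns out to be the cause, it can usually be installed like this (assuming gcloud was installed via the Google Cloud SDK; if it came from a package manager, install the distribution's plugin package instead):</p>
<pre><code>gcloud components install gke-gcloud-auth-plugin
gke-gcloud-auth-plugin --version
</code></pre>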
| Arpita Shrivastava |
<p>I created the following <code>configMap </code>for my NGINX ingress controller:</p>
<pre><code>apiVersion: v1
data:
allow-snippet-annotations: "true"
enable-modsecurity: "true"
enable-owasp-modsecurity-crs: "true"
modsecurity-snippet: |-
SecRuleEngine On
SecRequestBodyAccess On
SecAuditLog /dev/stdout
SecAuditLogFormat JSON
SecAuditEngine RelevantOnly
SecRule REQUEST_URI|ARGS|QUERY_STRING "@contains attack" "id:100001,phase:1,t:lowercase,deny,status:403,msg:'Attack Detected'"
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: nginx-ingress
meta.helm.sh/release-namespace: ingress-basic
creationTimestamp: "2023-01-20T11:31:53Z"
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: nginx-ingress
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.5.1
helm.sh/chart: ingress-nginx-4.4.2
name: nginx-ingress-ingress-nginx-controller
namespace: ingress-basic
resourceVersion: "200257665"
uid: e6ab9121-9a73-47e3-83ec-6c1fa19072ee
</code></pre>
<p>I would expect that following SecRule</p>
<pre><code>SecRule REQUEST_URI|ARGS|QUERY_STRING "@contains attack" "id:100001,phase:1,t:lowercase,deny,status:403,msg:'Attack Detected'"
</code></pre>
<p>would block any request containing the word <code>attack </code>in the URI or in the querystring, for example in:</p>
<p><a href="https://secrule.sample.com/api?task=attack" rel="nofollow noreferrer">https://secrule.sample.com/api?task=attack</a></p>
<p>But it doesn't. There is clearly something missing in the definition of the configMap of my NGINX ingress controller, but I don't understand what. Any clue? Thanks!</p>
<p>I'd like to use ModSecurity with an NGINX Ingress Controller to block incoming calls that contain a given word in the querystring.</p>
| Paolo Salvatori | <p>I solved the issue by escaping quotes and double quotes of the SecRule in the configmap as follows:</p>
<pre><code>SecRule REQUEST_URI|ARGS|QUERY_STRING \"@contains attack\" \"id:100001,phase:1,t:lowercase,deny,status:403,msg:\'Attack Detected\'\"
</code></pre>
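<p>In the <code>modsecurity-snippet</code> block of the ConfigMap shown in the question, the escaped rule then looks like this:</p>
<pre><code>  modsecurity-snippet: |-
    SecRuleEngine On
    SecRequestBodyAccess On
    SecAuditLog /dev/stdout
    SecAuditLogFormat JSON
    SecAuditEngine RelevantOnly
    SecRule REQUEST_URI|ARGS|QUERY_STRING \"@contains attack\" \"id:100001,phase:1,t:lowercase,deny,status:403,msg:\'Attack Detected\'\"
</code></pre>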
| Paolo Salvatori |
<p>I have two clusters.
Kubernetes 1.25 and Openshift 4.11</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: testcustom
#namespace: ics-e2e-pods-456
namespace: ics-e2e-deploy-7887
labels:
app: testcustom
spec:
replicas: 1
selector:
matchLabels:
app: testcustom
template:
metadata:
labels:
app: testcustom
spec:
containers:
- image: busybox #image name which should be avilable within cluster
name: container-name # name of the container inside POD
volumeMounts:
- mountPath: /myvolumepath # mount path for pvc from container
name: pvc-name # pvc name for this pod
securityContext:
fsGroup: 1000
seccompProfile:
type: RuntimeDefault
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 1000
capabilities:
drop: ["ALL"]
volumes:
- name: pvc-name # volume resource name in this POD, user can choose any name as per kubernetes
persistentVolumeClaim:
claimName: csi-block-pvc-custom-one # pvc name which was created by using claim.yaml file
</code></pre>
<p>When I try to deploy this pod, it fails in either of the above cluster throwing errors related to security context. If I fix issue for one cluster, the same spec doesn't work in other cluster. I am wondering how to get a common deployment file which can be used in both clusters</p>
<p>Error</p>
<pre><code>Error creating: pods "testcustom-589767ccd5-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, spec.containers[0].securityContext.runAsUser: Invalid value: 1000: must be in the ranges: [1000640000, 1000649999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": : Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]
</code></pre>
| ambikanair | <p>In OpenShift, when the namespace/project is created, please ensure the ranges below are specified properly; their values must cover the <code>fsGroup</code> and <code>runAsUser</code> values specified in the security context of the YAML definition. More details are in the <a href="https://docs.openshift.com/container-platform/4.6/authentication/managing-security-context-constraints.html" rel="nofollow noreferrer">OpenShift</a> documentation. The same pod definition will then work in both k8s and OpenShift.</p>
<pre><code>openshift.io/sa.scc.uid-range
openshift.io/sa.scc.supplemental-groups
</code></pre>
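<p>For example, on the OpenShift side the project annotations would look roughly like this (the ranges are placeholders; they must cover the <code>runAsUser</code>/<code>fsGroup</code> value 1000 used in the pod spec, or alternatively drop those fields and let OpenShift assign a UID from the project's own range):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: ics-e2e-deploy-7887
  annotations:
    openshift.io/sa.scc.uid-range: "1000/10000"
    openshift.io/sa.scc.supplemental-groups: "1000/10000"
</code></pre>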
| Nataraj Medayhal |
<p>I want to add those flags : <strong>--insecure-port=8080 and insecure-bind-address=0.0.0.0</strong> so I need to edit this file :<strong>/etc/kubernetes/manifests/kube-apiserver.yaml</strong>. However, I've done some research and I've found out that I should run this command: <strong>minikube ssh</strong>. After that, when I found the file and when I try to edit it I get <strong>sudo: nano command not found</strong> .</p>
<p>vim didn't work as well although I tried to install them but i got a whole bunch of errors.</p>
| NutellaTN | <p>After running the <code>minikube ssh</code> command and switching to root with <code>sudo -i</code>, you can install an editor in the Ubuntu base using:
<code>sudo apt update && sudo apt install -y vim-tiny</code></p>
<p>P.S.: For me vim-tiny worked well, and I was then able to edit the <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> file using vim commands.</p>
<p>There is also a closed <a href="https://github.com/kubernetes/minikube/issues/8393" rel="nofollow noreferrer">issue</a> on Minikube on this topic.</p>
| Neuling_101 |
<p>Commands used:</p>
<pre><code>git clone https://github.com/helm/charts.git
</code></pre>
<hr />
<pre><code>cd charts/stable/prometheus
</code></pre>
<hr />
<pre><code>helm install prometheus . --namespace monitoring --set rbac.create=true
</code></pre>
<p>After Running the 3rd command I got below error:</p>
<p><a href="https://i.stack.imgur.com/XCVd0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XCVd0.png" alt="enter image description here" /></a></p>
<p>Anyone Please help me out from this issue...</p>
<p>Thanks...</p>
| harish hari | <p>On the <a href="https://github.com/helm/charts/tree/master/stable/prometheus#prometheus" rel="nofollow noreferrer">GitHub page</a> you can see that this repo it deprecated:</p>
<blockquote>
<p>DEPRECATED and moved to <a href="https://github.com/prometheus-community/helm-charts" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts</a></p>
</blockquote>
<p>So I'd recommend to add and use <a href="https://github.com/prometheus-community/helm-charts" rel="nofollow noreferrer">Prometheus Community Kubernetes Helm Charts</a> repository:</p>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
</code></pre>
<p>Then you can install Prometheus with your flags using following command:</p>
<pre><code>helm install prometheus prometheus-community/prometheus --namespace monitoring --set rbac.create=true
</code></pre>
<p>If you really want to stick to the version from the old repository, you don't have to clone repo to your host. Just <a href="https://github.com/helm/charts/tree/master/stable/prometheus#prometheus" rel="nofollow noreferrer">follow steps from the repository page</a>. Make sure you have <code>https://charts.helm.sh/stable</code> repository added to the helm by running <code>helm repo list</code>. If not, add it using command:</p>
<pre><code>helm repo add stable https://charts.helm.sh/stable
</code></pre>
<p>Then, you can install your chart:</p>
<pre><code>helm install prometheus stable/prometheus --namespace monitoring --set rbac.create=true
</code></pre>
| Mikolaj S. |
<p>I have a bare-metal kubernetes cluster, which use metallb as ELB.</p>
<p>I am trying to expose a service with an Istio <code>gateway</code>, but I am facing a <strong>connection refused</strong> problem. I am new to Istio; please help me check my manifests.</p>
<p>versions:</p>
<pre><code>Kubernetes clsuter version: 1.27
Docker version 20.10.12, build e91ed57
cni-dockerd : cri-dockerd-0.3.4
OS: CentOS 7
MetalLB v0.13.10
</code></pre>
<p>problem:</p>
<p><strong>Note</strong>: <code>ceph-dashboard.xxx.com</code> is in /etc/hosts file</p>
<pre><code>[ggfan@fedora rook]$ curl -vvv https://ceph-dashboard.xxx.com/
* Trying 172.28.6.200:443...
* connect to 172.28.6.200 port 443 failed: Connection refused
* Failed to connect to ceph-dashboard.xxx.com port 443 after 2 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to ceph-dashboard.xxx.com port 443 after 2 ms: Connection refused
</code></pre>
<p>the service:</p>
<pre><code>Name: rook-ceph-mgr-dashboard
Namespace: rook-ceph
Labels: app=rook-ceph-mgr
rook_cluster=rook-ceph
Annotations: <none>
Selector: app=rook-ceph-mgr,mgr_role=active,rook_cluster=rook-ceph
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.102.185.38
IPs: 10.102.185.38
Port: http-dashboard 7000/TCP
TargetPort: 7000/TCP
Endpoints: 172.16.228.168:7000
Session Affinity: None
Events: <none>
</code></pre>
<p>gateway and virtual service definition:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: ceph-dashboard-gateway
spec:
selector:
istio: ingressgateway # use istio default ingress gateway
servers:
- port:
number: 7000
name: http-dashboard
protocol: http-web
tls:
mode: SIMPLE
credentialName: lecerts
hosts:
- ceph-dashboard.bgzchina.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ceph-dashboard-vs
spec:
hosts:
- "ceph-dashboard.bgzchina.com"
gateways:
- ceph-dashboard-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
port:
number: 7000
host: rook-ceph-mgr-dashboard
</code></pre>
<p>lecerts is tls secret created from certs from let's encrypt:</p>
<pre><code>[ggfan@fedora ingress-nginx]$ kubectl describe secret lecerts -n rook-ceph
Name: lecerts
Namespace: rook-ceph
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 5238 bytes
tls.key: 241 bytes
</code></pre>
<p>the istio ingressgateway service:</p>
<pre><code>
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=unknown
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.18.1
release=istio
Annotations: metallb.universe.tf/ip-allocated-from-pool: default-pool
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.117.31
IPs: 10.98.117.31
LoadBalancer Ingress: 172.28.6.200
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31967/TCP
Endpoints: 172.16.228.161:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 31509/TCP
Endpoints: 172.16.228.161:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 30320/TCP
Endpoints: 172.16.228.161:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 32554/TCP
Endpoints: 172.16.228.161:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 32483/TCP
Endpoints: 172.16.228.161:15443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal nodeAssigned 45m metallb-speaker announcing from node "k8sc01wn03" with protocol "layer2"
Normal nodeAssigned 28m (x4 over 88m) metallb-speaker announcing from node "k8sc01mn01" with protocol "layer2"
</code></pre>
| WestFarmer | <p>Match the ports in your application Gateway with the HTTPS port information of the istio-ingressgateway service. There is no port 7000 defined in the ingress gateway service.</p>
<p>The port information below in the application Gateway has to match the HTTPS port information of the istio-ingressgateway service:</p>
<pre><code>- port:
number: 443
name: https
protocol: HTTPS
</code></pre>
| Nataraj Medayhal |
<p>I'm writing a custom k8s metrics collector for the purpose of monitoring application versions in two different clusters. How do I expose a specific location for metrics, like "/metrics", and collect metrics in an infinite cycle?</p>
<p>Here is my custom metrics collector example:</p>
<pre><code>import time
from prometheus_client import start_http_server
from prometheus_client.core import REGISTRY, CounterMetricFamily
from kubernetes import client, config, watch
class CustomCollector(object):
def __init__(self):
pass
def collect(self):
g = CounterMetricFamily("retail_pods_info", 'info about pods', labels=['secret','namespace','deployment_name','image','helm'])
config.load_kube_config('config')
v1 = client.CoreV1Api()
group = "argoproj.io"
version = "v1alpha1"
plural = "applications"
#kind = "Application"
namespace = "argo-cd"
pod_list: client.V1PodList = v1.list_pod_for_all_namespaces(watch=False)
pods: list[client.V1Pod] = pod_list.items
metrics_list = []
for pod in pods:
metadata: client.V1ObjectMeta = pod.metadata
spec: client.V1PodSpec = pod.spec
volumes: list[client.V1Volume] = spec.volumes
if volumes is not None:
for volume in volumes:
if volume.projected:
projected: client.V1ProjectedVolumeSource = volume.projected
sources: list[client.V1VolumeProjection] = projected.sources
for source in sources:
if source.secret:
secret: client.V1SecretProjection = source.secret
s = secret.name + " " + metadata.namespace.lower() + " " + metadata.name.lower().rsplit('-',2)[0] + " " + pod.spec.containers[0].image
metrics_list.append(s.split())
api_client = client.ApiClient()
argocd_api = client.CustomObjectsApi(api_client)
argocd_apps = argocd_api.list_namespaced_custom_object(group, version, namespace, plural, watch=False)
for metric in metrics_list:
for app in argocd_apps["items"]:
if metric[2] == app["metadata"]["name"]:
helm_version=app["spec"]["source"]["repoURL"]+"-"+app["spec"]["source"]["targetRevision"]
metric.append(helm_version)
g.add_metric([metric[0], metric[1], metric[2], metric[3], metric[4]], 1)
yield g
#
# for k in metrics_list:
# g.add_metric([k[0],k[1],k[2],k[3]], 1)
# yield g
if __name__ == '__main__':
start_http_server(8000)
REGISTRY.register(CustomCollector())
while True:
time.sleep(60)
</code></pre>
| Garamoff | <p>Regarding endpoint <code>/metrics</code>:<br />
Method <code>start_http_server</code> starts a server on the specified port that responds to any query with metrics. So a request to the path <code>/metrics</code> will be answered with the generated metrics by default.</p>
<p>Regarding "collect metrics in an infinite cycle":<br />
Your app is already doing that (kind of). Since you registered your custom collector, every request will invoke the <code>collect</code> method. And since Prometheus gathers metrics in what is essentially an infinite cycle, your collector effectively runs in one too.</p>
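<p>As a quick check (a minimal sketch, assuming the collector above is running locally and serving on port 8000), you can request any path and should get the same metrics back, since <code>start_http_server</code> answers every path with the registry's metrics:</p>
<pre class="lang-py prettyprint-override"><code>import urllib.request

# Any path on the exporter port should return the Prometheus text exposition,
# because start_http_server() serves the registry's metrics for every request.
for path in ("/metrics", "/whatever"):
    with urllib.request.urlopen(f"http://localhost:8000{path}") as resp:
        body = resp.read().decode()
        print(path, "->", body.splitlines()[0] if body else "(empty)")
</code></pre>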
| markalex |
<p>How do I <em>list all</em> certificates and <em>describe</em> a particular one in a given namespace using the <a href="https://github.com/kubernetes-client/python/tree/release-18.0" rel="nofollow noreferrer">kubernetes python client</a>?</p>
<pre><code># list certificates
kubectl get certificates -n my-namespace
# describe a certificate
kubectl describe certificate my-certificate -n my-namespace
</code></pre>
| marcin_ | <p>Kubernetes by default doesn't have a kind <code>Certificate</code>; you must first install <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">cert-manager's</a> <code>CustomResourceDefinition</code>.</p>
<p>Considering the above, in the Kubernetes Python client we must use the <a href="https://github.com/kubernetes-client/python/blob/8a36dfb113868862d9ef8fd0a44b1fb7621c463a/kubernetes/client/api/custom_objects_api.py" rel="nofollow noreferrer">custom object API</a>, specifically in your case the functions <code>list_namespaced_custom_object()</code> and <code>get_namespaced_custom_object()</code>.</p>
<p>The code below has two functions: one returns all certificates (equivalent to the <code>kubectl get certificates</code> command), the other returns information about one specific certificate (equivalent to the <code>kubectl describe certificate {certificate-name}</code> command). It is based on <a href="https://github.com/kubernetes-client/python/blob/8a36dfb113868862d9ef8fd0a44b1fb7621c463a/examples/namespaced_custom_object.py" rel="nofollow noreferrer">this example code</a>:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config()
api = client.CustomObjectsApi()
# kubectl get certificates -n my-namespace
def list_certificates():
resources = api.list_namespaced_custom_object(
group = "cert-manager.io",
version = "v1",
namespace = "my-namespace",
plural = "certificates"
)
return resources
# kubectl describe certificate my-certificate -n my-namespace
def get_certificate():
resource = api.get_namespaced_custom_object(
group = "cert-manager.io",
version = "v1",
name = "my-certificate",
namespace = "my-namespace",
plural = "certificates"
)
return resource
</code></pre>
<p>Keep in mind that both functions return <a href="https://www.w3schools.com/python/python_dictionaries.asp" rel="nofollow noreferrer">Python dictionaries</a>.</p>
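<p>As a small usage sketch (assuming the functions above, and that each certificate carries the usual cert-manager <code>status.conditions</code> block), you could print the name and readiness of every certificate like this:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
    # The list call returns a dict with an "items" list, one entry per certificate
    for cert in list_certificates()["items"]:
        name = cert["metadata"]["name"]
        conditions = cert.get("status", {}).get("conditions", [])
        ready = next((c["status"] for c in conditions if c["type"] == "Ready"), "Unknown")
        print(f"{name}: Ready={ready}")

    # The get call returns a single dict describing one certificate
    print(get_certificate()["spec"])
</code></pre>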
| Mikolaj S. |
<p>When a pod is evicted due to a disk issue, I found there are two possible reasons:</p>
<ol>
<li>The node had condition: <code>[DiskPressure]</code></li>
<li>The node was low on resource: ephemeral-storage. Container NAME was using 16658224Ki, which exceeds its request of 0.</li>
</ol>
<p>I found <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions" rel="nofollow noreferrer">Node conditions</a> for <code>DiskPressure</code>.</p>
<p>What is the difference?</p>
| Peter | <p>Both reasons are caused by the same error - the worker node has run out of disk space. However, the difference is when exactly they happen.</p>
<p>To answer this question I decided to dig inside <a href="https://github.com/kubernetes/kubernetes" rel="noreferrer">Kubernetes source code</a>.</p>
<p>Starting with <code>The node had condition: [DiskPressure]</code> error.</p>
<p>We can find that it is used in <code>pkg/kubelet/eviction/helpers.go</code> file, <a href="https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/kubelet/eviction/helpers.go#L44" rel="noreferrer">line 44</a>:</p>
<pre><code>nodeConditionMessageFmt = "The node had condition: %v. "
</code></pre>
<p>This variable is used by <code>Admit</code> function in <code>pkg/kubelet/eviction/eviction_manager.go</code> file, <a href="https://github.com/kubernetes/kubernetes/blob/2ba872513da76568274885f8c5736b415a77b0cd/pkg/kubelet/eviction/eviction_manager.go#L137" rel="noreferrer">line 137</a>.</p>
<p><code>Admit</code> function is used by <code>canAdmitPod</code> function in <code>pkg/kubelet/kubelet.go</code> file, <a href="https://github.com/kubernetes/kubernetes/blob/047a6b9f861b2cc9dd2eea77da752ac398e7546f/pkg/kubelet/kubelet.go#L1932" rel="noreferrer">line 1932</a>:</p>
<p><code>canAdmitPod</code> function is used by <code>HandlePodAdditions</code> function, also in the <code>kubelet.go</code> file, <a href="https://github.com/kubernetes/kubernetes/blob/047a6b9f861b2cc9dd2eea77da752ac398e7546f/pkg/kubelet/kubelet.go#L2195" rel="noreferrer">line 2195</a>.</p>
<p>Comments in the code in <code>Admit</code> and <code>canAdmitPod</code> functions:</p>
<blockquote>
<p>canAdmitPod determines if a pod can be admitted, and gives a reason if it cannot. "pod" is new pod, while "pods" are all admitted pods.
The function returns a boolean value indicating whether the pod can be admitted, a brief single-word reason and a message explaining why the pod cannot be admitted.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Check if we can admit the pod; if not, reject it.</p>
</blockquote>
<p>So based on this analysis we can conclude that the <code>The node had condition: [DiskPressure]</code> error message happens when the kubelet agent won't admit <strong>new</strong> pods on the node, which means they won't start.</p>
<hr />
<p>Now moving on to the second error - <code>The node was low on resource: ephemeral-storage. Container NAME was using 16658224Ki, which exceeds its request of 0</code></p>
<p>Similar as before, we can find it in <code>pkg/kubelet/eviction/helpers.go</code> file, <a href="https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/kubelet/eviction/helpers.go#L42" rel="noreferrer">line 42</a>:</p>
<pre><code>nodeLowMessageFmt = "The node was low on resource: %v. "
</code></pre>
<p>This variable is used in the same file by <code>evictionMessage</code> function, <a href="https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/kubelet/eviction/helpers.go#L1003" rel="noreferrer">line 1003</a>.</p>
<p><code>evictionMessage</code> function is used by <code>synchronize</code> function in <code>pkg/kubelet/eviction/eviction_manager.go</code> file, <a href="https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/kubelet/eviction/eviction_manager.go#L231" rel="noreferrer">line 231</a></p>
<p><code>synchronize</code> function is used by <code>start</code> function, in the same file, <a href="https://github.com/kubernetes/kubernetes/blob/2ba872513da76568274885f8c5736b415a77b0cd/pkg/kubelet/eviction/eviction_manager.go#L177" rel="noreferrer">line 177</a>.</p>
<p>Comments in the code in the <code>synchronize</code> and <code>start</code> functions:</p>
<blockquote>
<p>synchronize is the main control loop that enforces eviction thresholds.
Returns the pod that was killed, or nil if no pod was killed.</p>
</blockquote>
<blockquote>
<p>Start starts the control loop to observe and response to low compute resources.</p>
</blockquote>
<p>So we can conclude that the <code>The node was low on resource:</code> error message happens when the kubelet agent decides to kill currently <strong>running</strong> pods on the node.</p>
<hr />
<p>It is worth emphasising that both error messages come from node conditions (which are set in function <code>synchronize</code>, <a href="https://github.com/kubernetes/kubernetes/blob/2ba872513da76568274885f8c5736b415a77b0cd/pkg/kubelet/eviction/eviction_manager.go#L308" rel="noreferrer">line 308</a>, to the values detected by the eviction manager). Then the kubelet agent makes the decisions that result in these two error messages.</p>
<hr />
<p><strong>To sum up</strong>:</p>
<p>Both errors are due to insufficient disk space, but:</p>
<ul>
<li><code>The node had condition:</code> error is related to the pods that are <strong>about to start</strong> on the node, but they can't</li>
<li><code>The node was low on resource:</code> error is related to the <strong>currently running pods</strong> that must be terminated</li>
</ul>
| Mikolaj S. |
<p>I'm trying to execute:</p>
<pre><code>microk8s kubectl apply -f deployment.yaml
</code></pre>
<p>and I am always getting:</p>
<pre><code>error: string field contains invalid UTF-8
</code></pre>
<p>No matter which file and string as a file path parameter I'm trying to use. Even if I execute:</p>
<pre><code>microk8s kubectl apply -f blablabla
</code></pre>
<p>Result is the same.</p>
<hr />
<p>UPD: I resolved the problem by restarting the microk8s service. After the restart everything is fine, but I still have no idea what the cause was.</p>
| Сергей Коновалов | <p>This is not caused by a wrong format in the manifest; instead, it's a corrupted cache in <code>$HOME/.kube/</code>.</p>
<p>Try deleting the cache:</p>
<p><code>rm -rf $HOME/.kube/http-cache</code></p>
<p><code>rm -rf $HOME/.kube/cache</code></p>
| Mahmoud |
<p>I'm trying to create an alert using promql/prometheus but I'm having trouble generating the proper time series. In K8s, my objective is to display any code/version mismatch for a particular app sitting in multiple clusters. By using count, any app with a value greater than 1 would tell me that more than one version is deployed. Currently my query generates the following:</p>
<p><code>count(kube_deployment_labels{label_app=~".*"}) by (label_version, label_app)</code></p>
<pre><code>| label_app | label_version | Value #A |
-------------------------------------------
| app_1 | 0.0.111 | 2 |
| app_1 | 0.0.222 | 1 |
| app_2 | 0.0.111 | 2 |
| app_2 | 0.0.222 | 1 |
| app_3 | 0.0.111 | 3 |
</code></pre>
<p>The values in the 4th column represent the number of clusters in which each <em>label_version</em> is deployed; for example, <strong>app_1</strong> & <strong>app_2</strong> have version <strong>0.0.111</strong> deployed in two clusters & <strong>0.0.222</strong> deployed in one cluster, but <strong>app_3</strong> has the same version deployed in all three clusters.</p>
<p>My end goal is to only count distinct <em>label_version</em> and have the time series populate in this way:</p>
<pre><code>| label_app | Value #A |
-------------------------
| app_1 | 2 |
| app_2 | 2 |
| app_3 | 1 |
</code></pre>
<p>Executing <code>(count(group by(label_version)kube_deployment_labels{label_app=~".*"})) </code> gives me the correct <em>Value</em>, but I'd like to list out all the apps associated with that <em>Value</em> as well. I've tried a variety of groupings but was unsuccessful.</p>
<p>Any help would be appreciated! Thanks in advance.</p>
| KC14 | <p>You can apply one more <code>count by</code> over your previous result.</p>
<pre><code>count(
count(
kube_deployment_labels{label_app=~".*"}
) by (label_version, label_app)
) by (label_app)
</code></pre>
<p>will return the number of <code>label_version</code>s associated with each <code>label_app</code>.</p>
<p>Somewhat similar query can be seen in this <a href="https://prometheus.demo.do.prometheus.io/graph?g0.expr=go_info&g0.tab=1&g0.stacked=0&g0.range_input=1h&g1.expr=count(alertmanager_alerts_received_total)%20by%20(status%2Cversion)&g1.tab=1&g1.stacked=0&g1.range_input=1h&g2.expr=count(%0A%20%20count(%0A%20%20%20%20alertmanager_alerts_received_total%0A%20%20)%20by%20(status%2Cversion)%0A)by(status)&g2.tab=1&g2.stacked=0&g2.range_input=1h" rel="nofollow noreferrer">demo</a>.</p>
| markalex |
<p>I am trying to host a web app in a container with a read-only file system. Whenever I try to configure the root file system as read-only through the <code>SecurityContext</code> of the container, I get the following error:</p>
<pre><code> Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
State: Terminated
Reason: Error
Exit Code: 137
Started: Thu, 23 Sep 2021 18:13:08 +0300
Finished: Thu, 23 Sep 2021 18:13:08 +0300
Ready: False
</code></pre>
<p>I've tried to achieve the same using an AppArmor profile as follows:</p>
<pre><code>profile parser-profile flags=(attach_disconnected) {
#include <abstractions/base>
...
deny /** wl,
...
</code></pre>
<p>Unfortunately the result is the same.</p>
<p>What I assume is happening is that the container is not capable of saving the files for the web app and fails.</p>
<p>In my scenario, I will be running untrusted code and I must make sure that users are not allowed to access the file system.</p>
<p>Any ideas on what I am doing wrong and how I can achieve a read-only file system?</p>
<p>I am using AKS and below is my deployment configuration:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: parser-service
spec:
selector:
app: parser
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: parser-deployment
spec:
replicas: 5
selector:
matchLabels:
app: parser
template:
metadata:
labels:
app: parser
annotations:
container.apparmor.security.beta.kubernetes.io/parser: localhost/parser-profile
spec:
containers:
- name: parser
image: parser.azurecr.io/parser:latest
ports:
- containerPort: 80
- containerPort: 443
resources:
limits:
cpu: "1.20"
securityContext:
readOnlyRootFilesystem: true
</code></pre>
<p>Edit: I also tried creating a cluster-level PSP, which also did not work.</p>
| Georgi Yankov | <p>I managed to replicate your issue and achieve read only filesystem with exception for one directory.</p>
<p>First, worth to note that you are using both solutions in your deployment - the AppArmor profile and SecurityContext. As AppArmor seems to be much more complex and needs configuration to be done per node I decided to use only SecurityContext as it is working fine.</p>
<p>I got this error that you mention in the comment:</p>
<pre><code>Failed to create CoreCLR, HRESULT: 0x80004005
</code></pre>
<p>This error doesn't say too much, but after some testing I found that it only occurs when you run the pod with a read-only filesystem - the application tries to save files but cannot do so.</p>
<p>The app creates some files in the <code>/tmp</code> directory, so the solution is to mount <code>/tmp</code> using <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">Kubernetes Volumes</a> so it will be read-write. In my example I used <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="noreferrer">emptyDir</a>, but you can use any other volume you want as long as it supports writing to it. The deployment configuration (you can see that I added <code>volumeMounts</code> and <code>volumes</code> at the bottom):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: parser-service
spec:
selector:
app: parser
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: parser-deployment
spec:
replicas: 5
selector:
matchLabels:
app: parser
template:
metadata:
labels:
app: parser
spec:
containers:
- name: parser
image: parser.azurecr.io/parser:latest
ports:
- containerPort: 80
- containerPort: 443
resources:
limits:
cpu: "1.20"
securityContext:
readOnlyRootFilesystem: true
volumeMounts:
- mountPath: /tmp
name: temp
volumes:
- name: temp
emptyDir: {}
</code></pre>
<p>After exec-ing into the pod I can see that the pod's file system is mounted as read-only:</p>
<pre><code># ls
app bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
# touch file
touch: cannot touch 'file': Read-only file system
# mount
overlay on / type overlay (ro,...)
</code></pre>
<p>By running <code>kubectl describe pod {pod-name}</code> I can see that the <code>/tmp</code> directory is mounted as read-write and is using the <code>temp</code> volume:</p>
<pre><code>Mounts:
/tmp from temp (rw)
</code></pre>
<p>Keep in mind that if you are using other directories (for example, to save files), you also need to mount them the same way as <code>/tmp</code>.</p>
| Mikolaj S. |
<p>In trying to securely install metrics-server on Kubernetes, I'm having problems.</p>
<p>It seems like the metrics-server pod is unable to successfully make requests to the Kubelet API on its <code>10250</code> port.</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 0/1 1 0 16h
</code></pre>
<p>The Metrics Server deployment never becomes ready and it repeats the same sequence of error logs:</p>
<pre class="lang-sh prettyprint-override"><code>I0522 01:27:41.472946 1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0522 01:27:41.798068 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0522 01:27:41.798092 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0522 01:27:41.798068 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/front-ca/front-proxy-ca.crt"
I0522 01:27:41.798107 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0522 01:27:41.798240 1 secure_serving.go:266] Serving securely on [::]:4443
I0522 01:27:41.798265 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0522 01:27:41.798284 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
I0522 01:27:41.898439 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0522 01:27:55.297497 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.106:10250/metrics/resource\": context deadline exceeded" node="system76-pc"
E0522 01:28:10.297872 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.106:10250/metrics/resource\": context deadline exceeded" node="system76-pc"
I0522 01:28:10.325613 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0522 01:28:20.325231 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0522 01:28:25.297750 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.106:10250/metrics/resource\": context deadline exceeded" node="system76-pc"
</code></pre>
<p>I'm running Kubernetes deployed with <code>kubeadm</code> version 1.23.4 and I'm trying to securely use metrics-server.</p>
<p>I'm looking for advice that could help with:</p>
<ol>
<li>How I can accurately diagnose the problem?</li>
<li>Or alternatively, what configuration seems most fruitful to check first?</li>
<li>Anything that will help with my mental model of which certificates and keys I need to configure explicitly and what is being handled automatically.</li>
</ol>
<p>So far, I have tried to validate that I can retrieve API metrics:</p>
<p><code>kubectl get --raw /api/v1/nodes/system76-pc/proxy/stats/summary</code></p>
<pre class="lang-json prettyprint-override"><code>{
"node": {
"nodeName": "system76-pc",
"systemContainers": [
{
"name": "kubelet",
"startTime": "2022-05-20T01:51:28Z",
"cpu": {
"time": "2022-05-22T00:48:40Z",
"usageNanoCores": 59453039,
"usageCoreNanoSeconds": 9768130002000
},
"memory": {
"time": "2022-05-22T00:48:40Z",
"usageBytes": 84910080,
"workingSetBytes": 84434944,
"rssBytes": 67149824,
"pageFaults": 893055,
"majorPageFaults": 290
}
},
{
"name": "runtime",
"startTime": "2022-05-20T00:33:24Z",
"cpu": {
"time": "2022-05-22T00:48:37Z",
"usageNanoCores": 24731571,
"usageCoreNanoSeconds": 3955659226000
},
"memory": {
"time": "2022-05-22T00:48:37Z",
"usageBytes": 484306944,
"workingSetBytes": 242638848,
"rssBytes": 84647936,
"pageFaults": 56994074,
"majorPageFaults": 428
}
},
{
"name": "pods",
"startTime": "2022-05-20T01:51:28Z",
"cpu": {
"time": "2022-05-22T00:48:37Z",
"usageNanoCores": 292818104,
"usageCoreNanoSeconds": 45976001446000
},
"memory": {
"time": "2022-05-22T00:48:37Z",
"availableBytes": 29648396288,
"usageBytes": 6108573696,
</code></pre>
<p><code>kubectl get --raw /api/v1/nodes/system76-pc/proxy/metrics/resource</code></p>
<pre class="lang-sh prettyprint-override"><code># HELP container_cpu_usage_seconds_total [ALPHA] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="alertmanager",namespace="flux-system",pod="alertmanager-prometheus-stack-kube-prom-alertmanager-0"} 108.399948 1653182143362
container_cpu_usage_seconds_total{container="calico-kube-controllers",namespace="kube-system",pod="calico-kube-controllers-56fcbf9d6b-n87ts"} 206.442768 1653182144294
container_cpu_usage_seconds_total{container="calico-node",namespace="kube-system",pod="calico-node-p6pxk"} 6147.643669 1653182155672
container_cpu_usage_seconds_total{container="cert-manager",namespace="cert-manager",pod="cert-manager-795d7f859d-8jp4f"} 134.583294 1653182142601
container_cpu_usage_seconds_total{container="cert-manager",namespace="cert-manager",pod="cert-manager-cainjector-5fcddc948c-vw4zz"} 394.286782 1653182151252
container_cpu_usage_seconds_total{container="cert-manager",namespace="cert-manager",pod="cert-manager-webhook-5b64f87794-pl7fb"} 404.53758 1653182140528
container_cpu_usage_seconds_total{container="config-reloader",namespace="flux-system",pod="alertmanager-prometheus-stack-kube-prom-alertmanager-0"} 6.01391 1653182139771
container_cpu_usage_seconds_total{container="config-reloader",namespace="flux-system",pod="prometheus-prometheus-stack-kube-prom-prometheus-0"} 42.706567 1653182130750
container_cpu_usage_seconds_total{container="controller",namespace="flux-system",pod="sealed-secrets-controller-5884bbf4d6-mql9x"} 43.814816 1653182144648
container_cpu_usage_seconds_total{container="controller",namespace="ingress-nginx",pod="ingress-nginx-controller-f9d6fc8d8-sgwst"} 645.109711 1653182141169
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-64897985d-crtd9"} 380.682251 1653182141861
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-64897985d-rpmxk"} 365.519839 1653182140533
container_cpu_usage_seconds_total{container="dashboard-metrics-scraper",namespace="kubernetes-dashboard",pod="dashboard-metrics-scraper-577dc49767-cbq8r"} 25.733362 1653182141877
container_cpu_usage_seconds_total{container="etcd",namespace="kube-system",pod="etcd-system76-pc"} 4237.357682 1653182140459
container_cpu_usage_seconds_total{container="grafana",namespace="flux-system",pod="prometheus-stack-grafana-757f9b9fcc-9f58g"} 345.034245 1653182154951
container_cpu_usage_seconds_total{container="grafana-sc-dashboard",namespace="flux-system",pod="prometheus-stack-grafana-757f9b9fcc-9f58g"} 123.480584 1653182146757
container_cpu_usage_seconds_total{container="grafana-sc-datasources",namespace="flux-system",pod="prometheus-stack-grafana-757f9b9fcc-9f58g"} 35.851112 1653182145702
container_cpu_usage_seconds_total{container="kube-apiserver",namespace="kube-system",pod="kube-apiserver-system76-pc"} 14166.156638 1653182150749
container_cpu_usage_seconds_total{container="kube-controller-manager",namespace="kube-system",pod="kube-controller-manager-system76-pc"} 4168.427981 1653182148868
container_cpu_usage_seconds_total{container="kube-prometheus-stack",namespace="flux-system",pod="prometheus-stack-kube-prom-operator-54d9f985c8-ml2qj"} 28.79018 1653182155583
container_cpu_usage_seconds_total{container="kube-proxy",namespace="kube-system",pod="kube-proxy-gg2wd"} 67.215459 1653182155156
container_cpu_usage_seconds_total{container="kube-scheduler",namespace="kube-system",pod="kube-scheduler-system76-pc"} 579.321492 1653182147910
container_cpu_usage_seconds_total{container="kube-state-metrics",namespace="flux-system",pod="prometheus-stack-kube-state-metrics-56d4759d67-h6lfv"} 158.343644 1653182153691
container_cpu_usage_seconds_total{container="kubernetes-dashboard",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-69dc48777b-8cckh"} 78.231809 1653182139263
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="helm-controller-dfb4b5478-7zgt6"} 338.974637 1653182143679
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="image-automation-controller-77fd9657c6-lg44h"} 280.841645 1653182154912
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="image-reflector-controller-86db8b6f78-5rz58"} 2909.277578 1653182144081
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="kustomize-controller-cd544c8f8-hxvk6"} 596.392781 1653182152714
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="notification-controller-d9cc9bf46-2jhbq"} 244.387967 1653182142902
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="source-controller-84bfd77bf8-r827h"} 541.650877 1653182148963
container_cpu_usage_seconds_total{container="metrics-server",namespace="flux-system",pod="metrics-server-55bc5f774-zznpb"} 174.229886 1653182146946
container_cpu_usage_seconds_total{container="nfs-subdir-external-provisioner",namespace="flux-system",pod="nfs-subdir-external-provisioner-858745f657-zcr66"} 244.061329 1653182139840
container_cpu_usage_seconds_total{container="node-exporter",namespace="flux-system",pod="prometheus-stack-prometheus-node-exporter-wj2fx"} 29.852036 1653182148779
container_cpu_usage_seconds_total{container="prometheus",namespace="flux-system",pod="prometheus-prometheus-stack-kube-prom-prometheus-0"} 7141.611234 1653182154042
# HELP container_memory_working_set_bytes [ALPHA] Current working set of the container in bytes
# TYPE container_memory_working_set_bytes gauge
container_memory_working_set_bytes{container="alertmanager",namespace="flux-system",pod="alertmanager-prometheus-stack-kube-prom-alertmanager-0"} 2.152448e+07 1653182143362
</code></pre>
<p>metric-server config:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
containers:
- args:
- --secure-port=4443
- --cert-dir=/tmp
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-preferred-address-types=Hostname
- --requestheader-client-ca-file=/front-ca/front-proxy-ca.crt
- --kubelet-certificate-authority=/ca/ca.crt
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: tmp
- mountPath: /front-ca
name: front-proxy-ca-dir
- mountPath: /ca
name: ca-dir
dnsPolicy: ClusterFirst
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: metrics-server
serviceAccountName: metrics-server
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: tmp
- configMap:
defaultMode: 420
name: front-proxy-ca
name: front-proxy-ca-dir
- configMap:
defaultMode: 420
name: kubelet-ca
name: ca-dir
</code></pre>
<p>kube-apiserver config:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.1.106:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.1.106
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: k8s.gcr.io/kube-apiserver:v1.23.4
</code></pre>
| user1307434 | <p>In my case I had the same issue with the metrics-server because there was just 1 OCPU on the master node. Use at least 2.</p>
| Dmitry Sverkalov |
<p>I'm trying to forward port 8080 for my nginx ingress controller for Kubernetes. I'm running the command:</p>
<pre><code>kubectl -n nginx-ingress port-forward nginx-ingress-768dfsssd5bf-v23ja 8080:8080 --request-timeout 0
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
</code></pre>
<p>It just hangs on <code>::1</code> forever. Is there a log somewhere I can view to see why it's hanging?</p>
| itinneed | <p>Well, it's expected behaviour - <code>kubectl port-forward</code> is not getting <a href="https://stackoverflow.com/questions/48863164/kubernetes-prompt-freezes-at-port-forward-command/48866118#48866118">daemonized by default</a>. From the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod" rel="nofollow noreferrer">official Kubernetes documentation</a>:</p>
<blockquote>
<p><strong>Note:</strong> <code>kubectl port-forward</code> does not return. To continue with the exercises, you will need to open another terminal.</p>
</blockquote>
<p>All logs will be shown in the <code>kubectl port-forward</code> command output:</p>
<pre><code>user@shell:~$ kubectl port-forward deployment/nginx-deployment 80:80
Forwarding from 127.0.0.1:80 -> 80
</code></pre>
<p><a href="https://stackoverflow.com/questions/53799600/kubernetes-port-forwarding-connection-refused">Including errors</a>:</p>
<pre><code>Handling connection for 88
Handling connection for 88
E1214 01:25:48.704335 51463 portforward.go:331] an error occurred forwarding 88 -> 82: error forwarding port 82 to pod a017a46573bbc065902b600f0767d3b366c5dcfe6782c3c31d2652b4c2b76941, uid : exit status 1: 2018/12/14 08:25:48 socat[19382] E connect(5, AF=2 127.0.0.1:82, 16): Connection refused
</code></pre>
<p>If you don't have any logs that means you didn't make an attempt to connect or you specified a wrong address / port.</p>
<p>As earlier mentioned, you can open a new terminal window, you can also run <code>kubectl port-forward</code> <a href="https://linuxize.com/post/how-to-run-linux-commands-in-background/" rel="nofollow noreferrer">in the background</a> by adding <code>&</code> at the end of the command:</p>
<pre><code>kubectl -n nginx-ingress port-forward nginx-ingress-768dfsssd5bf-v23ja 8080:8080 --request-timeout 0 &
</code></pre>
<p>If you want to run <code>kubectl port-forward</code> in the background and save all logs to a file, you can use the <a href="https://linux.101hacks.com/unix/nohup-command/" rel="nofollow noreferrer"><code>nohup</code> command</a> + <code>&</code> at the end:</p>
<pre><code>nohup kubectl -n nginx-ingress port-forward nginx-ingress-768dfsssd5bf-v23ja 8080:8080 --request-timeout 0 &
</code></pre>
| Mikolaj S. |
<p>Is it possible to define multiple labels and taints (i.e. tolerations) for a daemonset so that it can be deployed on multiple nodes?</p>
<pre><code> spec:
tolerations:
- key: "sample-node","example-node"
operator: "Equal"
effect: "NoSchedule"
value: "true"
</code></pre>
<p>Or in a different format? Thanks in advance!</p>
| Lakshmi Narayanan | <p>The syntax mentioned in your question will fail with the error below:</p>
<pre><code>is invalid: spec.tolerations[0].key: Invalid value: "sample-node,example-node"
</code></pre>
<p>The tolerations can be added as below:</p>
<pre><code>tolerations:
- key: "sample-node"
operator: "Equal"
effect: "NoSchedule"
value: "true"
- key: "example-node"
operator: "Equal"
effect: "NoSchedule"
value: "true"
</code></pre>
<p>The above can also be achieved by tainting multiple nodes with the same key instead of two different keys, so the toleration only needs to be added once. A single pod can have multiple tolerations, and more details are explained in the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#concepts" rel="nofollow noreferrer">k8s documentation</a>.</p>
| Nataraj Medayhal |
<p>I executed a query to get pods' running time and I noticed a strange thing: the data in the Graph is not the same as in the Table panel.
As the picture shows, the table panel displays only running pods<br />
<a href="https://i.stack.imgur.com/IE0R3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IE0R3.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/SNcWw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SNcWw.jpg" alt="enter image description here" /></a></p>
<p>and when I print the result of this query with Java code, I only get the data that is in the table</p>
| trey | <p>The graph's legend shows all metrics that are present on the graph. That means all metrics that were scraped within the graph's time range and that satisfy your query.</p>
<p>The table, on the other hand, shows only current<sup>1</sup> metrics. The same applies to a query executed through code.</p>
<p>Note: Prometheus' API has two related endpoints: <code>/api/v1/query</code> and <code>/api/v1/query_range</code>. These are endpoints to get query results at a single time point and over a time range, respectively. Documentation on the mentioned endpoints can be found <a href="https://prometheus.io/docs/prometheus/latest/querying/api/" rel="nofollow noreferrer">here</a>.</p>
<p>Your library most likely has similar functionality.</p>
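<p>For illustration, a minimal sketch in Python using the <code>requests</code> library (assuming a Prometheus server reachable at <code>http://localhost:9090</code> and a placeholder query in <code>QUERY</code>) shows the difference between the two endpoints:</p>
<pre class="lang-py prettyprint-override"><code>import time
import requests

PROM = "http://localhost:9090"   # assumed Prometheus address
QUERY = "up"                     # placeholder for your actual query

# Instant query: single evaluation time - what the table panel (and client code) shows
instant = requests.get(f"{PROM}/api/v1/query",
                       params={"query": QUERY, "time": time.time()})
print(instant.json()["data"]["result"])

# Range query: evaluated over a time range - what the graph panel shows
ranged = requests.get(f"{PROM}/api/v1/query_range",
                      params={"query": QUERY,
                              "start": time.time() - 3600,
                              "end": time.time(),
                              "step": "60s"})
print(ranged.json()["data"]["result"])
</code></pre>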
<hr />
<p><sup>1</sup> Current in relation to the time selected for the query. In the UI, it is configured through the "Evaluation time" field; in the API, it's the <code>time</code> parameter.</p>
| markalex |
<p>I am using nginx-ingress in my cluster to expose certain services. I have an "auth" service that handles authentication, which I am trying to set up through nginx. Currently the service has a very simple GET endpoint that always responds with a <code>UserId</code> header and tries to set two cookies:</p>
<pre class="lang-js prettyprint-override"><code>// This is implemented on Nest.js which uses express.js
@Get('*')
auth(@Res() res: Response): void {
res.header('UserId', '1')
res.cookie('key', 'value')
res.cookie('x', 'y')
res.status(200).send('hello')
}
</code></pre>
<p>I can confirm that both cookies are being set when I manually send a request to that endpoint, but when I set it as an annotation to the ingress:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-url: http://auth.dev.svc.cluster.local
</code></pre>
<p>and send a request through the ingress, only one of the cookies is forwarded to the response (the first one, <code>key=value</code>). I am not familiar with the nginx configuration; is there something I am supposed to change to make this work, so that both cookies are set?</p>
<p>I found <a href="https://github.com/kubernetes/ingress-nginx/issues/8183" rel="nofollow noreferrer">this issue</a> on GitHub, but it seems to be about OAuth2, and there is no clear explanation of what I am supposed to change.</p>
| yisog | <p>I couldn't find a way to make this work with the <code>Set-Cookie</code> header. Not sure if there is a better way, but here is a workaround:</p>
<p>I added a snippet for the <code>location</code> block that converts two headers to cookies:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $auth_cookie1 $upstream_http_x_header1;
auth_request_set $auth_cookie2 $upstream_http_x_header2;
add_header Set-Cookie $auth_cookie1;
add_header Set-Cookie $auth_cookie2;
</code></pre>
<p>And the <code>auth()</code> endpoint now responds with the <code>X-Header1</code> and <code>X-Header2</code> headers:</p>
<pre class="lang-js prettyprint-override"><code>import { serialize } from 'cookie'
@Get('*')
auth(@Res() res: Response): void {
res.header('UserId', '1')
res.header('X-Header1', serialize('key', 'value'))
res.header('X-Header2', serialize('x', 'y'))
res.status(200).send('hello')
}
</code></pre>
<p>Everything seems to be working well, and this solution is similar to how nginx-ingress itself adds the <code>Set-Cookie</code> header (which doesn't support multiple cookies). The code below is copied from the <code>nginx.conf</code> file in the <code>nginx-controller</code> pod that <code>nginx-ingress</code> creates.</p>
<pre><code>auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
</code></pre>
| yisog |
<p>We are leveraging Kubernetes ingress with external service <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#external-authentication" rel="nofollow noreferrer">JWT authentication</a> using <code>auth-url</code> as a part of the ingress.</p>
<p>Now we want to use the <code>auth-cache-key</code> annotation to control the caching of the JWT token. At the moment our external auth service just responds with <code>200</code>/<code>401</code> by looking at the token. All our components are backend micro-services with REST APIs, and the incoming request may not be a UI request. How do we fill in the <code>auth-cache-key</code> for an incoming JWT token?</p>
<pre><code> annotations:
nginx.ingress.kubernetes.io/auth-url: http://auth-service/validate
nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
nginx.ingress.kubernetes.io/auth-cache-key: '$remote_user$http_authorization'
nginx.ingress.kubernetes.io/auth-cache-duration: '1m'
kubernetes.io/ingress.class: "nginx"
</code></pre>
<p>Looking at the example, <code>$remote_user$http_authorization</code> is specified as an example in the K8s documentation. However, I am not sure if <code>$remote_user</code> will be set in our case, because this is not external basic auth. How do we decide on the auth cache key in this case?</p>
<p>There are not many examples or much documentation around this.</p>
| Santosh | <p>Posting a general answer as no further details or explanation were provided.</p>
<p>It's true that there is not much documentation around, so I decided to dig into the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">NGINX Ingress source code</a>.</p>
<p>The value set in the annotation <code>nginx.ingress.kubernetes.io/auth-cache-key</code> is used as the variable <code>$externalAuth.AuthCacheKey</code> <a href="https://github.com/kubernetes/ingress-nginx/blob/07e54431ff069a5452a1f68ca3dbb98da0e0f35b/rootfs/etc/nginx/template/nginx.tmpl#L986" rel="nofollow noreferrer">in the code</a>:</p>
<pre><code>{{ if $externalAuth.AuthCacheKey }}
set $tmp_cache_key '{{ $server.Hostname }}{{ $authPath }}{{ $externalAuth.AuthCacheKey }}';
set $cache_key '';
</code></pre>
<p>As you can see, <code>$externalAuth.AuthCacheKey</code> is used by the variable <code>$tmp_cache_key</code>, which is encoded to <code>base64</code> format and set as the variable <code>$cache_key</code> using the <a href="https://github.com/openresty/lua-nginx-module#name" rel="nofollow noreferrer">lua NGINX module</a>:</p>
<pre><code>rewrite_by_lua_block {
ngx.var.cache_key = ngx.encode_base64(ngx.sha1_bin(ngx.var.tmp_cache_key))
}
</code></pre>
<p>Then <code>$cache_key</code> is used to set the <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key" rel="nofollow noreferrer">variable <code>$proxy_cache_key</code>, which defines the key for caching</a>:</p>
<pre><code>proxy_cache_key "$cache_key";
</code></pre>
<p>Based on the above code, we can assume that any <a href="https://www.javatpoint.com/nginx-variables" rel="nofollow noreferrer">NGINX variable</a> can be used to set the <code>nginx.ingress.kubernetes.io/auth-cache-key</code> annotation. Please note that some variables are only available if the <a href="http://nginx.org/en/docs/varindex.html" rel="nofollow noreferrer">corresponding module is loaded</a>.</p>
<p>Example - I set following <code>auth-cache-key</code> annotation:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-cache-key: '$proxy_host$request_uri'
</code></pre>
<p>Then, on the NGINX Ingress controller pod, in the file <code>/etc/nginx/nginx.conf</code> there is following line:</p>
<pre><code>set $tmp_cache_key '{my-host}/_external-auth-Lw-Prefix$proxy_host$request_uri';
</code></pre>
<p>If you set the <code>auth-cache-key</code> annotation to a nonexistent NGINX variable, NGINX will throw the following error:</p>
<pre><code>nginx: [emerg] unknown "nonexistent_variable" variable
</code></pre>
<p>It's up to you which variables you need.</p>
<p>Please check also following articles and topics:</p>
<ul>
<li><a href="https://www.nginx.com/blog/nginx-caching-guide/" rel="nofollow noreferrer">A Guide to Caching with NGINX and NGINX Plus</a></li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/2862" rel="nofollow noreferrer">external auth provider results in a <em>lot</em> of external auth requests</a></li>
</ul>
| Mikolaj S. |
<p>I am using Prometheus and Grafana to collect and display pod/container status for a Kubernetes cluster. I'm collecting the information from the following metrics:</p>
<pre><code>kube_pod_container_status_running
kube_pod_container_status_terminated
kube_pod_container_status_waiting
</code></pre>
<p><strong>Note</strong>: I left a fourth metric, <code>kube_pod_container_status_ready</code>, out as it seems to be a duplicate of <code>kube_pod_container_status_running</code>. If I am mistaken, please let me know what the difference is.</p>
<p>Each metric returns a 0 or 1 result, where 1 indicates the container is currently in that state (e.g. running). I'm making the assumption that at any given time, only one of these metrics should have a value of 1 for a given set of labels representing a specific container in the cluster. From what I've seen, each metric collects the same set of label dimensions.</p>
<p>What I would like to do is display a table of container information of interest (pod, container, namespace, etc.) plus a column indicating the current state (Running, Terminated, etc.). I may need to include other queries to integrate other information not available from this current set.</p>
<p>I have tried a couple of experiments that have allowed me to collect the information into a single table view, but cannot figure out how to translate the 3 metric results into a single state representation. So, for example: [running=1, terminated=0, waiting=0] into "Running", or [running=0, terminated=0, waiting=1] into "Waiting".</p>
<p>Any help on this would be appreciated.</p>
| Joseph Gagnon | <p>Your metrics seem rather strange, and even go against Prometheus' recommendations.
Usually such metrics would be in the format <code>kube_pod_container_status{status="waiting"}</code>.</p>
<p>You can convert your metrics to the conventional format and display them in a table.</p>
<p>Use query</p>
<pre><code>label_replace(
label_replace(
{__name__=~"kube_pod_container_status_.*"} == 1
,"status","$1","__name__","kube_pod_container_status_(.*)")
,"__name__","kube_pod_container_status","__name__",".*")
</code></pre>
<p>And switch the <a href="https://grafana.com/docs/grafana/latest/datasources/prometheus/query-editor/#format" rel="nofollow noreferrer">format</a> of your query (under your query editor) to "Table".</p>
<p>If you use the "Table" panel, you'll see metrics like you described.</p>
| markalex |
<p>I am trying to deploy a pod and copy 10GB of TPCDS data into the pod.
I am using a PVC with storage capacity of 50 GB.
My specifications are:</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
limits:
cpu: "1"
memory: 20Gi
#ephemeral-storage: 20Gi
requests:
cpu: "1"
memory: 20Gi
</code></pre>
<p>But still I am facing this issue while copying data into the pod.</p>
<pre class="lang-none prettyprint-override"><code>The node was low on resource: ephemeral-storage. Container spark-kafka-cont was using 10542048Ki, which exceeds its request of 0.
</code></pre>
| Abhik NASKAR | <p>Your issue is related to the currently running pods that must be terminated due to insufficient disk space. Updating to a bigger disk helps to delay the eviction process.</p>
<p>It looks like that particular node does not have enough storage available. I suggest you explicitly specify the local ephemeral-storage request and limit (set limits/requests on ephemeral-storage on all your workloads) so that Kubernetes will respect that. Otherwise, it is possible that the pod will get evicted. Refer to the official docs on <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">Local ephemeral storage</a> & <a href="https://kubernetes.io/blog/2022/09/19/local-storage-capacity-isolation-ga/" rel="nofollow noreferrer">Local Storage Capacity Isolation Reaches GA</a> for details.</p>
<p>To understand and resolve your issue quickly, use the <code>kubectl top</code> command to list all the running nodes and pods along with their resource utilization. Also use <code>kubectl exec</code> to get a shell in the pod, and then use normal Unix commands such as <code>df -h</code>, <code>du -sh</code> and <code>du -h someDir</code> (run inside the container) to find where the space is being used. Sometimes the disk space is taken up by logs or an <code>emptyDir</code>, which causes this issue. In that case, get an external volume, map it into the container, and get the logs outside of the node; the <code>kubectl logs</code> command helps to find these logs. Also check whether your processes are configured to log to a file; if so, set your logging setup to log to stdout instead.</p>
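<p>For example, a minimal sketch with the Kubernetes Python client (an illustration only, assuming a local kubeconfig with cluster access) can quickly audit which containers are missing an <code>ephemeral-storage</code> request:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config

# List containers that do not request ephemeral-storage, so the kubelet
# cannot account for their disk usage when deciding on evictions.
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers:
        requests = (container.resources.requests or {}) if container.resources else {}
        if "ephemeral-storage" not in requests:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"container {container.name} has no ephemeral-storage request")
</code></pre>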
| Veera Nagireddy |