Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---
<p>By default, the nginx ingress listens on two ports, 80 and 443. How can I add listening on port 9898?
I tried to change it in the DaemonSet, but nothing came of it,
and I don't even know where else to dig.</p>
| Iceforest | <p>I'm not sure what exactly will work for you, but here are a few things you can try to approach this (read carefully, because nginx is confusing):</p>
<ol>
<li><p>Define <code>service</code> for your deployment, and make sure it covers port routes you want and support on deployment end:</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: web-app
namespace: web
labels:
app: web-app
spec:
ports:
- port: 80
targetPort: 1337
protocol: TCP
selector:
app: web-app
</code></pre>
</li>
<li><p>Refer to it in nginx ingress:</p>
<pre><code> rules:
- host: mycoolwebapp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-app
port:
number: 80
</code></pre>
</li>
</ol>
<p>The catch here is that you can route <strong>ALL</strong> services via port 80 but use any target port you want, so that you can, say, add 50 ingress hosts/routes over a morning, all routing to port 80, and the only difference between them will be the target port in the <code>service</code>.<br />
3. If you are specifically unhappy with ports 80 and 443, you are welcome to edit <code>ingress-nginx-controller</code> (the <code>service</code> one, because as I said nginx is confusing); see the sketch below.<br />
4. Alternatively, you can find an example of the <code>ingress-nginx-controller</code> <em>service</em> on the web, customize it and apply it, then connect the <code>ingress</code> to it... but I advise against this, because if nginx doesn't like anything you set up as a custom service, it's easier to just reinstall the whole Helm release of it and try again.</p>
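<p>If you do go the route of point 3, here is a minimal sketch (not a definitive manifest) of the kind of <code>ports</code> entry you might add to the <code>ingress-nginx-controller</code> <em>Service</em>; the exact service name and the existing entries depend on your installation, and exposing a plain TCP port usually also needs a matching entry in the <code>tcp-services</code> ConfigMap:</p>
<pre><code>spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  - name: https
    port: 443
    targetPort: https
    protocol: TCP
  - name: custom-9898   # added listener on port 9898
    port: 9898
    targetPort: 9898
    protocol: TCP
</code></pre>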
| Randych |
<p>If we want to build OCI container images with <code>docker</code>
and, e.g., want to use the following pod setup:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: dind
spec:
containers:
- name: build
image: docker:23.0.1-cli
command:
- cat
tty: true
resources:
requests:
cpu: 10m
memory: 256Mi
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
- name: dind-daemon
image: docker:23.0.1-dind-rootless
securityContext:
privileged: true
resources:
requests:
cpu: 20m
memory: 512Mi
volumeMounts:
- name: docker-graph-storage
mountPath: /var/lib/docker
volumes:
- name: docker-graph-storage
emptyDir: {}
</code></pre>
<p>I am wondering what the replacement is for</p>
<pre class="lang-yaml prettyprint-override"><code>securityContext:
privileged: true
</code></pre>
<p>since that is deprecated in <code>kubernetes >1.25</code> because: <a href="https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/" rel="nofollow noreferrer">https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/</a></p>
<p>and whether it's still possible to do the same as above, and if so, how?</p>
| Gabriel | <p>As per the official Kubernetes API reference documentation for v1.26, the fields for the security context have changed.</p>
<p>Instead of using <code>privileged: true</code>, there are other parameters in the latest versions. They are:</p>
<p><strong>runAsUser:</strong> You can run as any user in the latest versions by using that user's UID, if your image has that user. In general the UID of the root user is 0, so you can specify the root user's UID in the yaml file while creating the deployment.</p>
<p><strong>allowPrivilegeEscalation:</strong> If <code>allowPrivilegeEscalation</code> is set to true, privileges will be escalated to the root user when required.</p>
<p><strong>runAsNonRoot:</strong> If <code>runAsNonRoot</code> is set to true, a validation will be performed and Kubernetes will stop the pod or container from starting; if it's unset or set to false, it won't prevent root execution, provided your image is built to run as root.</p>
<p>Both <code>runAsUser</code> and <code>runAsNonRoot</code> can be used if you want to execute the job or task continuously as root, whereas <code>allowPrivilegeEscalation</code> can be used for temporarily escalating privileges. Below is an example yaml file for the latest version; use it as a reference.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: busybox:1.28
command: [ "sh", "-c", "sleep 1h" ]
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
securityContext:
allowPrivilegeEscalation: false
</code></pre>
<p>Note: The yaml code and the above explanation are derived from the official Kubernetes documentation.</p>
<p>[1]<a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a>
[2]<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#podsecuritycontext-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#podsecuritycontext-v1-core</a></p>
| Kranthiveer Dontineni |
<p>I have an issue with <a href="https://github.com/percona/mongodb_exporter" rel="nofollow noreferrer">mongodb_exporter</a> (prometheus metrics exporter for mongodb). I think it's because of a configuration problem on my side, but after 2 days of searching I'm out of solutions :)</p>
<p>I run mongodb on K8S and mongodb_exporter as a sidecar pod.
The exporter starts OK (I think, because there is no error) and displays some metrics, but my problem is that they are only "go" metrics (see below); I have just one "mongodb" metric!! => mongodb_up 1. Even if I add the options "--collect-all" or "--collector.collstats",
I do not have any "useful" metric on my "config" database, such as collection sizes, etc....</p>
<p>Connection to the database is OK, because if I change the username/password/db port I run into connection problems.
My user has the correct rights, I think (I changed the real password to "password" in my text):</p>
<pre><code>Successfully added user: {
"user" : "exporter",
"roles" : [
{
"role" : "clusterMonitor",
"db" : "admin"
},
{
"role" : "read",
"db" : "local"
}
]
}
</code></pre>
<p><strong>Here is my pod configuration:</strong></p>
<pre><code> - name: metrics
image: docker.io/bitnami/mongodb-exporter:latest
command:
- /bin/bash
- '-ec'
args:
- >
/bin/mongodb_exporter --web.listen-address ":9216"
--mongodb.uri=mongodb://exporter:password@localhost:27017/config? --log.level="debug" --collect-all
ports:
- name: metrics
containerPort: 9216
protocol: TCP
env:
resources:
limits:
cpu: 50m
memory: 250Mi
requests:
cpu: 25m
memory: 50Mi
livenessProbe:
httpGet:
path: /
port: metrics
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 5
periodSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: metrics
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 5
successThreshold: 1
failureThreshold: 3
</code></pre>
<p><strong>Logs</strong></p>
<ul>
<li>Exporter start log (with debug activated):</li>
</ul>
<pre><code>time="2023-02-03T09:02:25Z" level=debug msg="Compatible mode: false"
time="2023-02-03T09:02:25Z" level=debug msg="Connection URI: mongodb://exporter:password@localhost:27017/config?"
level=info ts=2023-02-03T09:02:25.224Z caller=tls_config.go:195 msg="TLS is disabled." http2=false
</code></pre>
<ul>
<li>Displayed metrics:</li>
</ul>
<pre><code># HELP collector_scrape_time_ms Time taken for scrape by collector
# TYPE collector_scrape_time_ms gauge
collector_scrape_time_ms{collector="general",exporter="mongodb"} 0
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 17
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.17.13"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.655088e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 3.655088e+06
[....]
# HELP mongodb_up Whether MongoDB is up.
# TYPE mongodb_up gauge
mongodb_up 1
[...]
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.35940608e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
</code></pre>
<p><strong>Environment:</strong> K8S, MongoDB version 4.2.0<br />
Thanks in advance for any help :)</p>
| Doul | <p>Yes, I found the solution, with the help of JMLX42 on github. The key was to use another image: change from <code>docker.io/bitnami/mongodb-exporter:latest</code> to <code>percona/mongodb_exporter:0.37.0</code> (and adapt the configuration):</p>
<pre><code> - name: metrics
image: percona/mongodb_exporter:0.37.0
args:
- '--mongodb.uri=mongodb://exporter:password@localhost:27017/config'
- '--collect-all'
ports:
- name: metrics
containerPort: 9216
protocol: TCP
resources:
requests:
memory: "50Mi"
cpu: "25m"
limits:
memory: "250Mi"
cpu: "50m"
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 15
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
httpGet:
path: /
port: metrics
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
httpGet:
path: /
port: metrics
</code></pre>
<p>Note: the strange thing is that with <code>--collect-all</code> it does not collect collection data. I have to specify the collection list with <code>--mongodb.collstats-colls</code>.</p>
<p>For example : <code>--mongodb.collstats-colls=config.App,config.ApplicationOverride,config.Caller,config.CommonRules,config.HabilitationOverride,config.IconSvg,config.UpdateStatusEntity,config.feature</code></p>
| Doul |
<p>I want to use the <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/" rel="nofollow noreferrer">App-of-apps</a> practice with ArgoCD. So I created a simple folder structure like the one below. Then I created a project called <code>dev</code> and I created an app that will look inside the folder <code>apps</code>, so when new <code>Application</code> manifests are included, it will automatically create new applications. This last part works. Every time I add a new <code>Application</code> manifest, a new app is created as a child of the <code>apps</code>. However, the actual app that will monitor the respective folder and create the service and deployment is not created and I can't figure out what I am doing wrong. I have followed different tutorials that use Helm and Kustomize and all have given the same end result.</p>
<p>Can someone spot what am I missing here?</p>
<ul>
<li>Folder structure</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>deployments/dev
├── apps
│ ├── app1.yaml
│ └── app2.yaml
├── app1
│ ├── app1-deployment.yaml
│ └── app1-svc.yaml
└── app-2
├── app2-deployment.yaml
└── app2-svc.yaml
</code></pre>
<ul>
<li>Parent app <code>Application</code> manifest that is watching <code>/dev/apps</code> folder</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: root-app
namespace: argocd
spec:
destination:
server: https://kubernetes.default.svc
namespace: argocd
project: dev
source:
path: deployments/dev/apps/
repoURL: https://github.com/<repo>.git
targetRevision: HEAD
syncPolicy:
automated:
prune: true
selfHeal: true
allowEmpty: true
</code></pre>
<ul>
<li>And the App1 and App2 <code>Application</code> manifest is the same for both apps like so:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: <app1>/<app2>
namespace: default
spec:
destination:
server: https://kubernetes.default.svc
namespace: default
project: dev
source:
path: deployments/dev/<app1> or deployments/dev/<app2>
repoURL: https://github.com/<repo>.git
targetRevision: HEAD
syncPolicy:
automated:
prune: true
selfHeal: true
allowEmpty: true
</code></pre>
| everspader | <h2>Posting comment as the community wiki answer for better visibility</h2>
<hr />
<p>It turns out that at the moment ArgoCD can only recognize application declarations made in the ArgoCD namespace, but @everspader was doing it in the default namespace. For more info, please refer to this <a href="https://github.com/argoproj/argo-cd/issues/3474" rel="nofollow noreferrer">GitHub Issue</a>.</p>
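<p>Based on the manifests in the question, a minimal sketch of the fix is to declare the child <code>Application</code> in the <code>argocd</code> namespace (the app name and path here mirror the question's <code>app1</code> and are otherwise assumptions):</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app1
  namespace: argocd        # must be the ArgoCD namespace, not "default"
spec:
  project: dev
  destination:
    server: https://kubernetes.default.svc
    namespace: default     # the workload itself can still be deployed to "default"
  source:
    repoURL: https://github.com/<repo>.git
    path: deployments/dev/app1
    targetRevision: HEAD
</code></pre>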
| Bazhikov |
<p>Going through <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">this</a> doc, I was wondering if there is a way to restrict unix socket creation using Kubernetes Network Policies.</p>
| ambikanair | <p><strong>Kubernetes network policies</strong> can't restrict <strong>unix socket</strong> creation; they are only useful for managing the traffic flow between pods. If you want to restrict new unix sockets from getting created, you need to configure the <strong>SELinux parameters</strong> in the <strong>security context</strong> field of the Kubernetes manifest file. This feature is only available in the recent releases of <em><strong>kubernetes 1.25</strong></em> and above. Follow this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#assign-selinux-labels-to-a-container" rel="nofollow noreferrer"><strong>official documentation</strong></a> for more information.</p>
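<p>As a minimal sketch (the pod name and label values below are assumptions, not recommendations), the SELinux options go under the <code>securityContext</code> of the pod or container:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo           # hypothetical pod name
spec:
  securityContext:
    seLinuxOptions:            # applies an SELinux label to the pod's containers
      level: "s0:c123,c456"
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "sleep 1h"]
</code></pre>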
| Kranthiveer Dontineni |
<p>I use the Google Cloud Code extension on VSCode.
I have a minikube running on my macbook (using the virtualbox driver).</p>
<p>I can run <code>skaffold debug</code> from my terminal just fine; the Helm chart gets deployed, but I haven't done the debugger setup so the breakpoints don't hit (as expected).</p>
<p>I want to use the Cloud Code extension to avoid manually doing the debugger setup.
However, if I run "debug on Kubernetes" in the Cloud Code extension, I get a prompt saying "Docker was found in the path but does not appear to be running. Start Docker to continue":
<a href="https://i.stack.imgur.com/0HAI0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0HAI0.png" alt="VSCode prompt asking me to start docker" /></a></p>
<p>If I select "start Docker", then Docker Desktop will be started, which I want to avoid. It seems to me that Cloud Code needs to do the equivalent of running <code>eval $(minikube -p minikube docker-env)</code> to use the minikube Docker daemon. Is there a setting to get it to do that?</p>
| AdrienF | <p>My workaround:</p>
<pre><code>ls -l /var/run/docker.sock
# /var/run/docker.sock -> $HOME/.docker/run/docker.sock
sudo rm /var/run/docker.sock
ssh -i ~/.minikube/machines/minikube/id_rsa -L $HOME/.minikube/docker.sock:/var/run/docker.sock docker@$(minikube ip) -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
sudo ln -s $HOME/.minikube/docker.sock /var/run/docker.sock
if curl -s --unix-socket /var/run/docker.sock http/_ping 2>&1 >/dev/null
then
echo "Running"
else
echo "Not running"
fi
# Running
</code></pre>
| user21144791 |
<p>I have to process tasks stored in a work queue and I am launching this kind of Job to do it:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
template:
spec:
parallelism: 10
containers:
- name: pi
image: perl
command: ["some", "long", "command"]
restartPolicy: Never
backoffLimit: 0
</code></pre>
<p>The problem is that if one of the Pods managed by the Job fails, the Job will terminate all the other Pods before they can complete. On my side, I would like the Job to be marked as failed, but I do not want its Pods to be terminated. I would like them to continue running and finish processing the items they have picked from the queue.</p>
<p>Is there a way to do that please?</p>
| Fabrice Jammes | <p>As already mentioned in the comments, you can set <code>restartPolicy: OnFailure</code>, which means the kubelet will perform restarts until the Job succeeds. Every retry <a href="https://docs.openshift.com/container-platform/4.1/nodes/jobs/nodes-nodes-jobs.html" rel="nofollow noreferrer">doesn't increment the number of failures</a>. You can also set <code>activeDeadlineSeconds</code> to some value in order to avoid a loop of failing, as sketched below.</p>
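<p>A minimal sketch of that suggestion, based on the Job from the question (the 600-second deadline is an arbitrary assumption):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 10
  activeDeadlineSeconds: 600     # give up on the whole Job after 10 minutes
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["some", "long", "command"]
      restartPolicy: OnFailure   # failed containers are restarted in place
</code></pre>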
| Bazhikov |
<p>Do we need to have Istio sidecar proxy containers running alongside the application pod for Istio Authorization Policy to work as expected?</p>
<p>Do we have any Istio docs around this?</p>
<p>I tried running my application without sidecars and the authorisation policy is not getting applied.</p>
| Chandra Sekar | <p>As per the architecture described in the <a href="https://istio.io/latest/docs/concepts/security/#authentication-architecture:%7E:text=receives%20the%20traffic.-,Authentication%20architecture,-You%20can%20specify" rel="nofollow noreferrer">official Istio documentation</a>, the Istio proxy acts as a gateway between the incoming and outgoing traffic of your application container and is responsible for traffic management, security, and enforcing various policies, whether they are custom made or from existing templates.</p>
<p>Authentication is one such policy, and the sidecar proxy helps you in applying these policies. You can specify various methods or policies for authenticating to your workloads, and these policies will be stored in the Istio configuration storage once deployed. Whenever a policy is changed, or a new pod matching the policy requirements is created, this proxy container will apply the policy to the single workload or to multiple workloads based on the policy specifications, as shown in the figure below.</p>
<p><a href="https://i.stack.imgur.com/04TIy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/04TIy.png" alt="enter image description here" /></a></p>
<p><strong>Note:</strong> This image is taken from the official Istio documentation, which is embedded in the content above.</p>
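<p>So, to answer the question directly: yes, the sidecar proxy must be injected for an <code>AuthorizationPolicy</code> to be enforced on a workload. A minimal sketch of enabling automatic injection (the namespace name is an assumption) is to label the application namespace before the pods are created:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: my-app                   # hypothetical application namespace
  labels:
    istio-injection: enabled     # tells Istio to inject the sidecar into new pods
</code></pre>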
| Kranthiveer Dontineni |
<p>I have a python app</p>
<pre><code>from flask import Flask, jsonify
app = Flask(__name__)
@app.route("/")
def index():
return jsonify({"hey":"test6"})
</code></pre>
<p>And a deployment.yaml file</p>
<pre><code> apiVersion: apps/v1
kind: Deployment
metadata:
name: mqttdemob
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: mqttdemob
strategy: {}
template:
metadata:
labels:
app: mqttdemob
spec:
containers:
- image: ****/fluxdemo:main-584db7b6-1687862454 # {"$imagepolicy": "flux-system:mqttdemob"}
name: app
ports:
- containerPort: 6000
</code></pre>
<p>And a service.yaml file</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: mqttdemob
spec:
selector:
app: mqttdemob
ports:
- protocol: TCP
port: 6000
targetPort: 6000
nodePort: 30400
externalIPs:
- 1.2.4.122
type: NodePort
</code></pre>
<p>When I deploy using these files I would want the Flask app to run on port 6000 which then would be forwarded to 30400.</p>
<p>But when I run</p>
<p><code>kubectl exec <pod name> -- netstat -tulpn</code></p>
<p>It outputs</p>
<pre><code> Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 1/python
</code></pre>
<p>Such that it is not using port 6000 but port 5000.</p>
<p>What am I doing wrong and how can I make sure that Flask will use port 6000 instead of 5000?</p>
| Kspr | <p><code>containerPort</code> is only informational and doesn't configure the port your app is running on.</p>
<p>Your flask application is using its default port.
Try changing it in your main file where you are starting your server, using:</p>
<pre class="lang-py prettyprint-override"><code>port = 6000  # you can change it to any unused port
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=port)
</code></pre>
<ul>
<li>unused in the above case means the port which is currently not in use.</li>
</ul>
| Ansh Tyagi |
<h2 id="background">Background:</h2>
<p>I have a GKE cluster which has suddenly stopped being able to pull my docker images from GCR; both are in the same GCP project. It has been working well for several months, no issues pulling images, and has now started throwing errors without having made any changes.</p>
<p>(NB: I'm generally the only one on my team who accesses Google Cloud, though it's entirely possible that someone else on my team may have made changes / inadvertently made changes without realising).</p>
<p>I've seen a few other posts on this topic, but the solutions offered in others haven't helped. Two of these posts stood out to me in particular, as they were both posted around the same day my issues started ~13/14 days ago. Whether this is coincidence or not who knows..</p>
<p><a href="https://stackoverflow.com/questions/68305918/troubleshooting-a-manifests-prod-403-error-from-kubernetes-in-gke">This post</a> has the same issue as me; unsure whether the posted comments helped them resolve, but it hasn't fixed for me. <a href="https://serverfault.com/questions/1069107/gke-node-from-new-node-pool-gets-403-on-artifact-registry-image">This post</a> seemed to also be the same issue, but the poster says it resolved by itself after waiting some time.</p>
<h2 id="the-issue">The Issue:</h2>
<p>I first noticed the issue on the cluster a few days ago. Went to deploy a new image by pushing image to GCR and then bouncing the pods <code>kubectl rollout restart deployment</code>.</p>
<p>The pods all then came back with <code>ImagePullBackOff</code>, saying that they couldn't get the image from GCR:</p>
<p><code>kubectl get pods</code>:</p>
<pre><code>XXX-XXX-XXX 0/1 ImagePullBackOff 0 13d
XXX-XXX-XXX 0/1 ImagePullBackOff 0 13d
XXX-XXX-XXX 0/1 ImagePullBackOff 0 13d
...
</code></pre>
<p><code>kubectl describe pod XXX-XXX-XXX</code>:</p>
<pre><code>Normal BackOff 20s kubelet Back-off pulling image "gcr.io/<GCP_PROJECT>/XXX:dev-latest"
Warning Failed 20s kubelet Error: ImagePullBackOff
Normal Pulling 8s (x2 over 21s) kubelet Pulling image "gcr.io/<GCP_PROJECT>/XXX:dev-latest"
Warning Failed 7s (x2 over 20s) kubelet Failed to pull image "gcr.io/<GCP_PROJECT>/XXX:dev-latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/<GCP_PROJECT>/XXX:dev-latest": failed to resolve reference "gcr.io/<GCR_PROJECT>/XXX:dev-latest": unexpected status code [manifests dev-latest]: 403 Forbidden
Warning Failed 7s (x2 over 20s) kubelet Error: ErrImagePull
</code></pre>
<h2 id="troubleshooting-steps-followed-from-other-posts">Troubleshooting steps followed from other posts:</h2>
<p>I know that the image definitely exists in GCR -</p>
<ul>
<li>I can pull the image to my own machine (also removed all docker images from my machine to confirm it was really pulling)</li>
<li>I can see the tagged image if I look on the GCR UI on chrome.</li>
</ul>
<p>I've SSH'd into one of the cluster nodes and tried to docker pull manually, with no success:</p>
<pre><code>docker pull gcr.io/<GCP_PROJECT>/XXX:dev-latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
</code></pre>
<p>(Also did a docker pull of a public mongodb image to confirm <em>that</em> was working, and it's specific to GCR).</p>
<p>So this leads me to believe it's an issue with the service account not having the correct permissions, as <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted" rel="noreferrer">in the cloud docs</a> under the 'Error 400/403' section. This seems to suggest that the service account has either been deleted, or edited manually.</p>
<p>During my troubleshooting, I tried to find out exactly <em>which</em> service account GKE was using to pull from GCR. In the steps outlined in the docs, it says that: <code>The name of your Google Kubernetes Engine service account is as follows, where PROJECT_NUMBER is your project number:</code></p>
<pre><code>service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com
</code></pre>
<p>I found the service account and checked the polices - it did have one for <code>roles/container.serviceAgent</code>, but nothing specifically mentioning kubernetes as I would expect from the description in the docs.. '<em>the Kubernetes Engine Service Agent role</em>' (unless that is the one they're describing, in which case I'm no better off that before anyway..).</p>
<p>Must not have had the correct roles, so I then followed the steps to re-enable (disable then enable the Kubernetes API). Running <code>gcloud projects get-iam-policy <GCP_PROJECT></code> again and diffing the two outputs (before/after), the only difference is that a service account for '@cloud-filer...' has been deleted.</p>
<p>Thinking maybe the error was something else, I thought I would try spinning up a new cluster. Same error - can't pull images.</p>
<h2 id="send-help">Send help..</h2>
<p>I've been racking my brains to try to troubleshoot, but I'm now out of ideas! Any and all help much appreciated!</p>
| localghost | <p>I don't know if it still helps, but I had the same issue and managed to fix it.</p>
<p>In my case I was deploying GKE through terraform and did not specify the <code>oauth_scopes</code> property for the node pool, as shown in this <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#example-usage---with-the-default-node-pool" rel="nofollow noreferrer">example</a>. As I understand it, you need to make the GCP APIs available here so that the nodes are able to use them.</p>
| jwwebsensa |
<p>In my <code>Deployment</code>, I specify 2 replicas for my application, but when I look at the result generated by kubernetes, it has 3 <code>ReplicaSet</code>s. In my <code>Deployment</code> I don't specify anything related to <code>ReplicaSet</code>, so why is that?</p>
<p><a href="https://i.stack.imgur.com/8kfjJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8kfjJ.png" alt="enter image description here" /></a></p>
| hades | <p>Those are previous revisions of your deployment. Notice the little "rev1", "rev2" labels. Kubernetes retains old ReplicaSets; by default, 10 revisions are stored.</p>
<p>Only the latest revision has your two pod <code>replicas</code> in its <code>replicaSet</code>. You can set the cleanup policy using <code>.spec.revisionHistoryLimit</code> according to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#clean-up-policy" rel="nofollow noreferrer">kubernetes documentation</a>, as sketched below.</p>
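<p>A minimal sketch of setting that limit (the name and image below are placeholders, not taken from the question):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  revisionHistoryLimit: 2        # keep only the two most recent old ReplicaSets
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25        # placeholder image
</code></pre>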
| CloudWatcher |
<p>I have a Kubernetes cluster with 3 HA nodes on 3 different servers. One of the servers had the control plane and got deleted (meaning, I lost the server).</p>
<p>Right now, the other two servers are running normally and all deployments and services are running; however, I don't have any access to the cluster.</p>
<p>How can I recover this scenario?</p>
<p>Thanks for your help.</p>
| user3184767 | <p>Restoring a lost control plane node is very troublesome if you haven't taken snapshots of the instance and a backup of the etcd data. In that case, follow these steps as mentioned in this <strong><a href="https://www.lisenet.com/2023/replacing-a-failed-control-plane-node-in-a-ha-kubernetes-cluster/" rel="nofollow noreferrer">blog</a></strong>:</p>
<ul>
<li>Check whether the control plane is in <code>Not Ready</code> state and check
whether you have the etcd client installed in the nodes, else
install it.</li>
<li>Now remove the etcd member which is associated with the deleted
control plane from the other two nodes after checking the etcd
member status.</li>
<li>Now create a new node using the existing snapshot else create a new
instance and install kubernetes control plane components.</li>
<li>Join the nodes to the new control plane node using the new token
generated.</li>
</ul>
<p>Follow the above blog for more information and instructions related to installing the etcd client and the other commands.</p>
| Kranthiveer Dontineni |
<p>When joining a node with:
<code>sudo kubeadm join 172.16.7.101:6443 --token 4mya3g.duoa5xxuxin0l6j3 --discovery-token-ca-cert-hash sha256:bba76ac7a207923e8cae0c466dac166500a8e0db43fb15ad9018b615bdbabeb2</code></p>
<p>The outputs:</p>
<pre><code>[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
</code></pre>
<p>And <code>systemctl status kubelet</code>:</p>
<pre><code>node@node:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2019-04-17 06:20:56 UTC; 12min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 26716 (kubelet)
Tasks: 16 (limit: 1111)
CGroup: /system.slice/kubelet.service
└─26716 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml -
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.022384 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.073969 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.122820 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.228838 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.273153 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.330578 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.431114 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.473501 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.531294 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.632347 26716 kubelet.go:2244] node "node" not found
</code></pre>
<p>Regarding the <code>Unauthorized</code> errors: I checked on the master with <code>kubeadm token list</code>, and the token is valid.
So what's the problem? Thanks a lot.</p>
| 叶同学 | <p>On the worker nodes, running</p>
<pre><code>sudo kubeadm reset
</code></pre>
<p>and then rejoining will solve this issue.</p>
| Syed Sadath |
<p>I'm using kubernetes and want to filter only the <strong>sampledb</strong> on my PostgreSQL.</p>
<p>I tried to use this command:</p>
<pre><code>kubectl exec podname-89tq4763k23-klp83 -- bash -c "psql -U postgres --tuples-only -P format=unaligned -c "SELECT datname FROM pg_database WHERE NOT datistemplate AND datname <> 'postgres'";"
</code></pre>
<p>but it does not show any output in my terminal; my expectation is that it should show <strong>sampledb</strong> as the output.</p>
<p>I tried using this command to list all database names, and it is working:</p>
<pre><code>kubectl exec -it podname-89tq4763k23-klp83 -- psql -U postgres -c "\l"
</code></pre>
<p>My question is: how can I filter only the <strong>sampledb</strong> name from the list of databases?</p>
<pre><code> List of databases
Name | Owner | Encoding | Collate | Ctype | ICU Locale | Locale Provider | Access privileges
-------------+----------+----------+------------+------------+------------+-----------------+-----------------------
sampledb | postgres | UTF8 | en_US.utf8 | en_US.utf8 | | libc |
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 | | libc |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
(4 rows)
</code></pre>
<p>Note: I'm also planning to use this kubectl command (if it works) as a variable in my bash script.</p>
| 生きがい | <p>I think your command needs a quoting fix.</p>
<pre><code>kubectl exec podname-89tq4763k23-klp83 -- bash -c "psql -U postgres --tuples-only -P format=unaligned -c "SELECT datname FROM pg_database WHERE NOT datistemplate AND datname <> 'postgres'";"
</code></pre>
<p>Try this:</p>
<pre><code>kubectl exec $POD -- bash -c "psql -U postgres -t -A -c \"SELECT datname FROM pg_database WHERE NOT datistemplate AND datname <> 'postgres'\""
</code></pre>
| Filip Rembiałkowski |
<p>I would like to understand the difference between PVC capacity I assigned (5G) and the container available capacity reported against the mounted volume inside the container.</p>
<ol>
<li>Have a GCE PD with 10G capacity</li>
<li>Have a PVC with 5G capacity against the same GCE PD.</li>
<li>However, when I run df -h inside the PVC volume mount inside a container:
It shows available capacity against the volume as 9.5G.</li>
</ol>
<blockquote>
<pre><code>sh-4.4$ kubectl get pvc,pv -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/nfs-claim Bound nfs-pv 5Gi RWX 17d Filesystem
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/nfs-pv 5Gi RWX Retain Bound default/nfs-claim 17d Filesystem
</code></pre>
</blockquote>
<p>Deployment PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 5Gi
</code></pre>
<p>Deployment:</p>
<pre><code>...
volumes:
- name: molecule-storage
persistentVolumeClaim:
claimName: nfs-claim
volumeMounts:
- name: molecule-storage
mountPath: "/mnt/boomi"
</code></pre>
<p>I would expect this to be 5G as per PVC capacity instead.
Any ideas?</p>
| Rubans | <p>Persistent volumes can be attached to pods in a variety of ways. When you set the storage requests for dynamic provisioning to 30 Gi, GCE PD will be automatically provisioned for you with 30 Gi. However, the storage size will equal the size of the GCE PD if you opt to use the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd" rel="nofollow noreferrer">preexisting PD</a> or static provisioning.</p>
<p>The <a href="https://github.com/kubernetes/kubernetes/issues/48701" rel="nofollow noreferrer">PV storage capacity</a> field is merely a label; K8s won't enforce the storage capacity. It is the implementer's responsibility to make sure that storage size and <a href="https://github.com/kubernetes/kubernetes/issues/68736" rel="nofollow noreferrer">capacity</a> are compatible.</p>
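<p>For illustration, a sketch of static provisioning against a pre-existing disk (the disk name and sizes are assumptions): the <code>capacity</code> declared on the PV is what Kubernetes reports and matches against claims, while <code>df -h</code> inside the container shows the real filesystem size of the underlying disk.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-demo
spec:
  storageClassName: ""
  capacity:
    storage: 5Gi                 # declared capacity; not enforced against the 10G disk
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-existing-disk     # hypothetical pre-existing 10G GCE PD
    fsType: ext4
</code></pre>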
| Bryan L |
<p>I am writing a script that receives a Kubernetes context name as an input and outputs the different elements of the cluster -></p>
<pre><code>class GKE:
def __init__(self, context):
s = context.split("_")
self.provider: str = s[0]
self.project: str = s[1]
self.data_center: GKE.DataCenter = GKE.DataCenter(data_center=s[2])
self.cluster_name: str = s[3]
def __str__(self):
return f'provider: {self.provider}, project: {self.project}, {self.data_center}, cluster name: {self.cluster_name}'
class DataCenter:
def __init__(self, data_center: str):
s = data_center.split("-")
self.country: str = s[0]
self.region: str = s[1]
self.zone: str = s[2]
def __str__(self):
return f'country: {self.country}, region: {self.region}, zone: {self.zone}'
class EKS:
# TODO: What are the fields? What is the convention?
pass
class AKS:
# TODO: What are the fields? What is the convention?
pass
if __name__ == '__main__':
print(GKE(context="gke_XXX-YYY-ZZZ_us-central1-c_name"))
</code></pre>
<p>Output:</p>
<pre><code>provider: gke, project: XXX-YYY-ZZZ, country: us, region: central1, zone: c, cluster name: name
</code></pre>
<p>This will support only the three main providers (GKE, EKS, AKS).</p>
<p>My question is:</p>
<p>What are the different elements of EKS and AKS context names?</p>
| David Wer | <p>You need to differentiate between the correct name of the cluster and the naming schema of a resource.</p>
<p>When I run <code>kubectl config get-contexts</code> on AKS, EKS, and GKE clusters, I get the following results:</p>
<pre><code>NAME AUTHINFO
gke_project-1234_us-central1-c_myGKECluster gke_project-1234_us-central1-c_myGKECluster
myAKSCluster clusterUser_myResourceGroup_myAKSCluster
arn:aws:eks:eu-west-1:1234:cluster/myEKSCluster arn:aws:eks:eu-west-1:1234:cluster/myEKSCluster
</code></pre>
<p>In all three clouds, the correct name of the cluster in this example is <code>my***Cluster</code>.</p>
<p>The naming scheme in <code>~/.kube/config</code> is used to distinguish one cluster (context-wise) from another.
For example, when you want to change the context with kubectl, you have to differentiate between a cluster whose name is <code>myCluster</code> and which is in <code>region-code1</code>, and another cluster whose name is also <code>myCluster</code> but which is in <code>region-code2</code>, and so on; that is what the naming scheme is for.</p>
<p><strong>GKE:</strong></p>
<p>As you wrote, the naming scheme in gke consists of 4 parts: <code>provider_project-id_zone_cluster-name</code><br />
For example <code>gke_project-123_us-central1-c_myGKECluster</code></p>
<ul>
<li>provider: <code>gke</code></li>
<li>project-id: <code>project-123</code></li>
<li>zone: <code>us-central1-c</code></li>
<li>cluster-name: <code>myGKECluster</code></li>
</ul>
<p><strong>AKS:</strong><br />
In AKS the naming schema is just the name of the cluster.<br />
But the <code>AUTHINFO</code> (which is actually the configuration of the user in the kubeconfig file) consists of three parts: <code>Resource-type_Resource-group_Resource-name</code><br />
For example <code>clusterUser_myResourceGroup_myAKSCluster</code></p>
<ul>
<li>The Resource-type is <code>clusterUser</code></li>
<li>The Resource-group is <code>myResourceGroup</code></li>
<li>The Resource-name is <code>myAKSCluster</code></li>
</ul>
<p><strong>EKS:</strong></p>
<blockquote>
<p><strong>AWS</strong> requires an ARN when needed to specify a resource unambiguously across all of AWS.</p>
</blockquote>
<p>The ARN format is <code>arn:partition:service:region:account-id:resource-type/resource-id</code><br />
For example <code>arn:aws:eks:eu-west-1:1234:cluster/myEKSCluster</code></p>
<ul>
<li>partition: the partition in which the resource is located (such as <code>aws</code> Regions).</li>
<li>service: The service namespace that identifies the AWS product (such as <code>eks</code>).</li>
<li>region: The Region code (such as <code>eu-west-1</code>).</li>
<li>account-id: The ID of the AWS account that owns the resource(such as <code>1234</code>).</li>
<li>resource-type: The resource type (such as <code>cluster</code>).</li>
<li>resource-id The resource identifier. This is the name of the resource, the ID of the resource, or a resource path (such as <code>myEKSCluster</code>).</li>
</ul>
<hr />
<p>Additional resources:</p>
<p><a href="https://stackoverflow.com/a/63824179/20571972">https://stackoverflow.com/a/63824179/20571972</a>
<a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html#aws-resource-eks-cluster-return-values" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html#aws-resource-eks-cluster-return-values</a></p>
| Moshe Rappaport |
<p>I am trying to enable ingress in minikube. When I run <code>minikube addons enable ingress</code> it hangs for a while, then I get the following error message:</p>
<pre><code>❌ Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.15/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
stdout:
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
configmap/ingress-nginx-controller unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
service/ingress-nginx-controller-admission unchanged
stderr:
error: error validating "/etc/kubernetes/addons/ingress-deploy.yaml": error validating data: [ValidationError(Service.spec): unknown field "ipFamilies" in io.k8s.api.core.v1.ServiceSpec, ValidationError(Service.spec): unknown field "ipFamilyPolicy" in io.k8s.api.core.v1.ServiceSpec]; if you choose to ignore these errors, turn validation off with --validate=false
waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ Please also attach the following file to the GitHub issue: │
│ - /tmp/minikube_addons_2c0e0cafd16ea0f95ac51773aeef036b316005b6_0.log │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
</code></pre>
<p>This is the minikube start command I used:
<code>minikube start --kubernetes-version=v1.19.15 --vm-driver=docker</code></p>
<p>I have tried reinstalling minikube. It was working fine last week when I ran the same command.</p>
<p>If more specific information is needed please let me know and I will edit the question. Does anyone know how I could go about fixing this?</p>
<p>Thanks in advance.</p>
| joe | <p>A bit late, but I hope someone finds this useful. This happens because minikube could not pull the image (ingress-nginx-controller) in time. The way to know is:</p>
<pre><code>kubectl get pod -n ingress-nginx
</code></pre>
<p>If the ingress-nginx-controller-xxxx (xxxx is the identifier of the pod) has a status of ImagePullBackOff or something like that, you are in this scenario.</p>
<p>To fix it you will need to first describe your pod:</p>
<pre><code>kubectl describe pod ingress-nginx-controller-xxxxx -n ingress-nginx
</code></pre>
<p>Look under containers/controller/images and copy its value (you don't need to copy the @sha256:... part if it contains it). You must pull it manually, but before that, you should probably delete the related deployment as well:</p>
<pre><code>kubectl delete deployment ingress-nginx-controller -n ingress-nginx
</code></pre>
<p>And then pull the image from the VM itself; in my case it looks like this:</p>
<pre><code>minikube ssh docker pull k8s.gcr.io/ingress-nginx/controller:v1.2.1
</code></pre>
<p>Wait for it and then try "addons enable ingress" again and see if it works; it did for me.</p>
| Steve Rodriguez |
<p>I use istio-ingress gateway and virtualservice to expose different microservices. So far all of them have been http services, so it was straight-forward to follow istio's documentation.</p>
<p>But with kafka I am facing some issues. I am using <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">bitnami/kafka</a> helm chart for kafka installation. Here's the values.yaml used for it:</p>
<pre><code>global:
storageClass: "kafka-sc"
replicaCount: 3
deleteTopicEnable: true
resources:
requests:
memory: 1024Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 1000m
zookeeper:
replicaCount: 3
resources:
requests:
memory: 1024Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 1000m
</code></pre>
<p>This deployment exposes kafka on this endpoint: <code>my-kafka.kafka.svc.cluster.local:9092</code></p>
<p>I want this endpoint to be accessible via the internet using the ingress controller. Therefore, I applied the following kubernetes manifests:</p>
<p>A. kafka-ingress-gateway.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: kafka-ingress-gateway
namespace: kafka
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 9092
name: tcp
protocol: TCP
hosts:
- "kafka.<public_domain>"
</code></pre>
<p>B. kafka-ingress-virtualservice.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kafka-ingress-virtualservice
namespace: kafka
spec:
hosts:
- "kafka.<public_domain>"
gateways:
- kafka/kafka-ingress-gateway
tcp:
- match:
- port: 9092
route:
- destination:
host: my-kafka.kafka.svc.cluster.local
port:
number: 9092
</code></pre>
<p>To verify whether this works, I am using the following approach:</p>
<ol>
<li>Create a kafka-client pod and log in to it in two different terminals</li>
<li>In the first terminal, I produce to a topic called <code>test</code> using this command: <code>kafka-console-producer.sh --broker-list my-kafka-0.my-kafka-headless.kafka.svc.cluster.local:9092 --topic test</code></li>
<li>In the second terminal, I consume from the <code>test</code> topic using this command.</li>
</ol>
<p>Here, this works: <code>kafka-console-consumer.sh --bootstrap-server my-kafka.kafka.svc.cluster.local:9092 --topic test --from-beginning</code></p>
<p>This does not work: <code>kafka-console-consumer.sh --bootstrap-server kafka.<public_domain>:9092 --topic test --from-beginning</code></p>
<p>I am getting this error: <code>WARN [Consumer clientId=consumer-console-consumer-89304-1, groupId=console-consumer-89304] Bootstrap broker kafka.<public_domain>:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)</code></p>
<p>I am new to kafka, so I'm not sure what else is required to expose the consumer endpoint. From similar questions on stackoverflow, I noticed we are supposed to <a href="https://github.com/bitnami/charts/blob/master/bitnami/kafka/values.yaml#L403" rel="nofollow noreferrer">define "advertisedListeners" in kafka config</a>, but I'm not sure what value to put there.</p>
<p>Please let me know if I am missing any details here.</p>
| Grimlock | <p>Edit your istio-ingressgateway service and add 9092 as a TCP port:</p>
<pre><code>kubectl edit svc -nistio-system istio-ingressgateway
</code></pre>
<p>and add:</p>
<pre><code>- name: kafka-broker
port: 9092
protocol: TCP
targetPort: 9092
</code></pre>
| Charles Chiu |
<p>I set up a microk8s deployment with the Minio service activated. I can connect to the Minio dashboard with a browser but cannot find a way to connect to the service via the API.</p>
<p>Here is the output to the <code>microk8s kubectl get all --all-namespaces</code> command</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
minio-operator pod/minio-operator-67dcf6dd7c-vxccn 0/1 Pending 0 7d22h
kube-system pod/calico-node-bpd4r 1/1 Running 4 (26m ago) 8d
kube-system pod/dashboard-metrics-scraper-7bc864c59-t7k87 1/1 Running 4 (26m ago) 8d
kube-system pod/hostpath-provisioner-69cd9ff5b8-x664l 1/1 Running 4 (26m ago) 7d22h
kube-system pod/kubernetes-dashboard-dc96f9fc-4759w 1/1 Running 4 (26m ago) 8d
minio-operator pod/console-66c4b79fbd-mw5s8 1/1 Running 3 (26m ago) 7d22h
kube-system pod/calico-kube-controllers-79568db7f8-vg4q2 1/1 Running 4 (26m ago) 8d
kube-system pod/coredns-6f5f9b5d74-fz7v8 1/1 Running 4 (26m ago) 8d
kube-system pod/metrics-server-6f754f88d-r7lsj 1/1 Running 4 (26m ago) 8d
minio-operator pod/minio-operator-67dcf6dd7c-8dnlq 1/1 Running 9 (25m ago) 7d22h
minio-operator pod/microk8s-ss-0-0 1/1 Running 9 (25m ago) 7d22h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 11d
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 8d
kube-system service/metrics-server ClusterIP 10.152.183.43 <none> 443/TCP 8d
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.232 <none> 443/TCP 8d
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.226 <none> 8000/TCP 8d
minio-operator service/operator ClusterIP 10.152.183.48 <none> 4222/TCP,4221/TCP 7d22h
minio-operator service/console ClusterIP 10.152.183.193 <none> 9090/TCP,9443/TCP 7d22h
minio-operator service/minio ClusterIP 10.152.183.195 <none> 80/TCP 7d22h
minio-operator service/microk8s-console ClusterIP 10.152.183.192 <none> 9090/TCP 7d22h
minio-operator service/microk8s-hl ClusterIP None <none> 9000/TCP 7d22h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 8d
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 1/1 1 1 8d
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 8d
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 8d
minio-operator deployment.apps/console 1/1 1 1 7d22h
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 7d22h
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 8d
kube-system deployment.apps/metrics-server 1/1 1 1 8d
minio-operator deployment.apps/minio-operator 1/2 2 1 7d22h
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-6f5f9b5d74 1 1 1 8d
kube-system replicaset.apps/dashboard-metrics-scraper-7bc864c59 1 1 1 8d
kube-system replicaset.apps/kubernetes-dashboard-dc96f9fc 1 1 1 8d
minio-operator replicaset.apps/console-66c4b79fbd 1 1 1 7d22h
kube-system replicaset.apps/hostpath-provisioner-69cd9ff5b8 1 1 1 7d22h
kube-system replicaset.apps/calico-kube-controllers-79568db7f8 1 1 1 8d
kube-system replicaset.apps/metrics-server-6f754f88d 1 1 1 8d
minio-operator replicaset.apps/minio-operator-67dcf6dd7c 2 2 1 7d22h
NAMESPACE NAME READY AGE
minio-operator statefulset.apps/microk8s-ss-0 1/1 7d22h
</code></pre>
<p>I've tried the following commands to connect to the pod via the Python API, but keep getting errors:</p>
<pre><code>client = Minio("microk8s-ss-0-0", secure=False)
try:
objects = client.list_objects("bucket-1",prefix='/',recursive=True)
for obj in objects:
print (obj.bucket_name)
except InvalidResponseError as err:
print (err)
</code></pre>
<p>And received the following error:</p>
<pre><code>MaxRetryError: HTTPConnectionPool(host='microk8s-ss-0-0', port=80): Max retries exceeded with url: /bucket-1?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=%2F (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f29041e1e40>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
</code></pre>
<p>I also tried:
<code>client = Minio("10.152.183.195", secure=False)</code></p>
<p>And got the same result. How do I access the minio pod from the API?</p>
| Mark C | <p>Can you try this?</p>
<pre><code>client = Minio("minio.minio-operator.svc.cluster.local:80",
YOUR_ACCESS_KEY, YOUR_SECRET_KEY, secure=False)
</code></pre>
<p>If the service cannot be reached from where your python code is running, you can port-forward the service using the command below:</p>
<pre><code>microk8s kubectl -n minio-operator port-forward svc/minio 80
</code></pre>
<p>and then you can do</p>
<pre><code>client = Minio("localhost:80",
YOUR_ACCESS_KEY, YOUR_SECRET_KEY, secure=False)
</code></pre>
| dilverse |
<p>When creating a Pod with one container, the container has a memory request of 200 MiB and a memory limit of 400 MiB. Look at the configuration file for creating the Pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: memory-demo
namespace: mem-example
spec:
containers:
- name: memory-demo-ctr
image: polinux/stress
resources:
requests:
memory: "200Mi"
limits:
memory: "400Mi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1]
</code></pre>
<p>The above yaml is not working as intended; the args section in the configuration file is not providing arguments for the Container when it starts.</p>
<p>I tried to create a kubernetes pod with memory limits & requests and it seems to fail to create.</p>
| KMS | <p>Change your <code>args</code> to <code>["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]</code> (note the closing quote on the last element, which is missing in your file); your yaml will work.</p>
| Pranavi |
<p>We are shutting down the AKS Cluster for our Non-Production Environments during night-time to save cost.
For the shutdown we first scale the user nodepool to 0 and then shut down the cluster using the following PowerShell Commands:</p>
<pre><code>Update-AzAksNodePool -Name $NodePoolName -ClusterName $AksClusterName -ResourceGroupName $ResourceGroupName -NodeCount 0
Stop-AzAksCluster -Name $AksClusterName -ResourceGroupName $ResourceGroupName
</code></pre>
<p>Startup is done in the opposite order</p>
<pre><code>Start-AzAksCluster -Name $AksClusterName -ResourceGroupName $ResourceGroupName
Update-AzAksNodePool -Name $NodePoolName -ClusterName $AksClusterName -ResourceGroupName $ResourceGroupName -NodeCount 1
</code></pre>
<p>When the cluster is started again, new nodes are assigned to it, and I see some pods are stuck in the state Terminating on nodes that do not exist anymore.</p>
<p>What's even crazier is that some pods of the deployment are in the state "Running" but on a node that does not exist anymore. So for K8s the deployment seems to be satisfied, but in reality there is no pod of the deployment running.</p>
<pre><code>server-67d4fc5bd5-6n7pp 1/1 Terminating 0 47h 10.244.0.25 aks-ocpp-41161936-vmss00000t <none> <none>
server-67d4fc5bd5-982jf 1/1 Terminating 0 4d20h 10.244.0.33 aks-ocpp-41161936-vmss00000q <none> <none>
server-67d4fc5bd5-fx8g6 1/1 Terminating 0 47h 10.244.0.24 aks-ocpp-41161936-vmss00000t <none> <none>
server-67d4fc5bd5-ptmzt 1/1 Terminating 0 4d20h 10.244.0.34 aks-ocpp-41161936-vmss00000q <none> <none>
server-68b79b75dd-kn5cx 1/1 Terminating 0 6d14h 10.244.0.26 aks-ocpp-41161936-vmss00000o <none> <none>
server-68b79b75dd-nnjwb 1/1 Terminating 0 6d14h 10.244.0.27 aks-ocpp-41161936-vmss00000o <none> <none>
server-b699b9cc7-f64lc 1/1 Running 0 21h 10.244.0.11 aks-ocpp-41161936-vmss00000u <none> <none>
server-b699b9cc7-hhngk 1/1 Running 0 21h 10.244.0.10 aks-ocpp-41161936-vmss00000u <none> <none>
</code></pre>
<p>When I output the current nodes, I see that the nodes mentioned above, on which pods still seem to be terminating, do not exist anymore:</p>
<pre><code>NAME STATUS ROLES AGE VERSION
aks-agentpool-36225625-vmss00001o Ready agent 107m v1.26.0
aks-agentpool-36225625-vmss00001p Ready agent 105m v1.26.0
aks-ocpp-41161936-vmss00000v Ready agent 107m v1.26.0
</code></pre>
<p>I know I can kill the pods with <code>kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE></code>, but I would like to find out the core issue.
An ordinary kubectl delete pod will wait forever (probably because the pod does not run anymore); I really have to apply the additional flags to force-delete it.</p>
<p>When i delete the pods with the cluster up and running they don't have any issue and just restart as expected.</p>
<p>Is this a problem related to the implementation of the services running in the pod? Is the service failing to react to some shutdown command?
Or is there a problem in the way we shut down our cluster?</p>
| Markus S. | <p>It seems the pods were not gracefully terminated before the nodes were deleted. As per your comment, the same thing happens while taking the AKS cluster down. It can happen if the pods are not given enough time to gracefully terminate before the nodes are deleted.</p>
<p>To prevent this issue from happening in the future, you can cordon and drain all nodes in the node pool before deleting the nodes or taking the AKS cluster down, using the command below.</p>
<pre><code>kubectl drain <node-name> --ignore-daemonsets
</code></pre>
<p><img src="https://i.imgur.com/bHZmSWy.png" alt="enter image description here" />
<img src="https://i.imgur.com/jUB8VDL.png" alt="enter image description here" /></p>
<p>Once you have applied the <strong>kubectl drain</strong> command, then <strong>you can stop the AKS cluster</strong>.</p>
<pre><code>Stop-AzAksCluster -Name cluster-name -ResourceGroupName rg-name
</code></pre>
<p>You can also use the <code>terminationGracePeriodSeconds</code> field in the <strong>pod spec</strong> to ensure that pods are terminated quickly when the node is deleted, so they are not stuck in a running state for a long time. Refer to this link <a href="https://livebook.manning.com/concept/kubernetes/terminationgraceperiodsecond" rel="nofollow noreferrer">terminationgraceperiodsecond | Kubernetes | Manning Publications Co</a> for more information.</p>
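<p>As a minimal sketch of where that field goes in a Deployment's pod template (the name, image and the 30-second value are illustrative assumptions; tune the value to how long your application needs to shut down):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: server                            # hypothetical deployment name
spec:
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      terminationGracePeriodSeconds: 30   # kubelet waits this long after SIGTERM before force-killing
      containers:
      - name: server
        image: myregistry/server:latest   # placeholder image
</code></pre>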
| HowAreYou |
<p>I'm just doing a wordpress deploy to practice openshift, I have this problem that I can't solve:</p>
<pre><code>AH00558: apache2: Unable to reliably determine the FQDN of the server, using 10.128.2.60. Globally set the "ServerName" directive to suppress this message
(13) Permission denied: AH00072: make_sock: unable to connect to address [::]:80
(13) Permission denied: AH00072: make_sock: unable to connect to address 0.0.0.0:80
no listening sockets available, switch off
AH00015: Unable to open logs
</code></pre>
<p>This is my wordpress deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
spec:
  replicas: 1
selector:
matchLabels:
apps: wordpress
  template:
metadata:
labels:
apps: wordpress
    spec:
containers:
- name: wordpress
image: wordpress:5.8.3-php7.4-apache
ports:
- containerPort: 8443
name: wordpress
volumeMounts:
- name: wordpress-data
mountPath: /var/www
env:
- name: WORDPRESS_DB_HOST
value: mysql-service.default.svc.cluster.local
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: wp-db-secrets
key: MYSQL_ROOT_PASSWORD
- name: WORDPRESS_DB_USER
value: root
- name: WORDPRESS_DB_NAME
value: wordpress
volumes:
- name: wordpress-data
persistentVolumeClaim:
claimName: wordpress-volume
</code></pre>
<p>My Service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: wordpress-service
spec:
type: LoadBalancer
selector:
apps: wordpress
ports:
- name: http
protocol: TCP
port: 8443
targetPort: 8443
</code></pre>
<p>I should simply be able to enter the IP address it returns and log into WordPress.</p>
| Belbo | <p>The permission denied error is on port 80, while your configured port is 8443.
Access the service on :8443, or change the configuration to use port 80.</p>
| My Codebucket |
<p>I installed vanilla Kubernetes 1.24; the cluster is up and healthy, but when I try to install kubernetes-dashboard, I realized I cannot access the dashboard token. Before 1.24 I could just describe the token and get it.</p>
<p>Normally when a ServiceAccount is created, a Secret should be created automatically, BUT NOW, when I create a service account the Secret is not created automatically.</p>
<p>Just create an SA with "kubectl create serviceaccount servicename" and you should see a secret named servicename-token with kubectl get secrets. But it is not created..</p>
<p>Has anyone faced this problem?</p>
| Cagatay Atabay | <p>For your question "Has anyone faced this problem?", the answer is that everyone who installs version 1.24 will see the same behavior. As this <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">documentation</a> indicates, this version includes features such as <em><strong>LegacyServiceAccountTokenNoAutoGeneration</strong></em>, which is the root cause of what you are experiencing.</p>
<p>So, the workaround right now is to manually create the token, as this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token" rel="nofollow noreferrer">guide</a> indicates.</p>
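<p>For reference, a minimal sketch of that manual step for a ServiceAccount named <code>servicename</code> (the name is taken from the question). You can either request a short-lived token:</p>
<pre class="lang-bash prettyprint-override"><code># prints a short-lived token to stdout (kubectl 1.24+)
kubectl create token servicename
</code></pre>
<p>or create a long-lived token Secret the old way, which the control plane then populates with a token:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: servicename-token
  annotations:
    kubernetes.io/service-account.name: servicename   # bind the Secret to the ServiceAccount
type: kubernetes.io/service-account-token
</code></pre>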
| Nestor Daniel Ortega Perez |
<p>I have an application that runs some code and at the end sends an email with a report of the data. When I deploy pods on <strong>GKE</strong>, certain pods get terminated and a new pod is created due to autoscaling, but the problem is that the termination happens after my code has finished, and the email is sent twice for the same data.</p>
<p>Here is the <strong>JSON</strong> file of the deploy API:</p>
<pre><code>{
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "$name",
"namespace": "$namespace"
},
"spec": {
"template": {
"metadata": {
"name": "********"
},
"spec": {
"priorityClassName": "high-priority",
"containers": [
{
"name": "******",
"image": "$dockerScancatalogueImageRepo",
"imagePullPolicy": "IfNotPresent",
"env": $env,
"resources": {
"requests": {
"memory": "2000Mi",
"cpu": "2000m"
},
"limits":{
"memory":"2650Mi",
"cpu":"2650m"
}
}
}
],
"imagePullSecrets": [
{
"name": "docker-secret"
}
],
"restartPolicy": "Never"
}
}
}
}
</code></pre>
<p>and here is a screen-shot of the pod events:</p>
<p><a href="https://i.stack.imgur.com/IvZIZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IvZIZ.png" alt="enter image description here" /></a></p>
<p>Any idea how to fix that?</p>
<p>Thank you in advance.</p>
| Elias | <p>"Perhaps you are affected by this "Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = "Never", the same program may sometimes be started twice." from doc. What happens if you increase terminationgraceperiodseconds in your yaml file? – "</p>
<p>@danyL</p>
| Nestor Daniel Ortega Perez |
<p>How can I reproduce a situation where no pod is ready when deploying a DaemonSet?</p>
<p>I tried a small example:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: logging
labels:
app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd
template:
metadata:
labels:
name: fluentd
spec:
containers:
- name: fluentd-elasticsearch
image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
</code></pre>
<p>A follow-up question: how do I use a field selector to get events for the DaemonSet error?</p>
<p>involvedObject.kind=Daemonset,involvedObject.name="{daemonset-name}"</p>
| susanna | <p>To make the pods of your DaemonSet not ready, you can simply add a readiness probe that will never succeed.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: default
labels:
app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd
template:
metadata:
labels:
name: fluentd
spec:
containers:
- name: fluentd-elasticsearch
image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
# here the readinessProbe
readinessProbe:
initialDelaySeconds: 5
httpGet:
host: www.dominiochenonesiste.com
port: 80
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
</code></pre>
<p>To check the pod status, you can use <code>kubectl describe pod <nameofyourdaemonsetpodcreated></code></p>
<p>Within the output you will have:</p>
<pre class="lang-bash prettyprint-override"><code>Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
</code></pre>
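<p>Regarding the follow-up about field selectors: a sketch of how you could pull the events for the DaemonSet object and for one of its pods (names here match the example above, namespace assumed to be default):</p>
<pre class="lang-bash prettyprint-override"><code># events emitted for the DaemonSet object itself
kubectl get events -n default --field-selector involvedObject.kind=DaemonSet,involvedObject.name=fluentd

# events for a specific pod of the DaemonSet (readiness probe failures show up here)
kubectl get events -n default --field-selector involvedObject.kind=Pod,involvedObject.name=<nameofyourdaemonsetpodcreated>
</code></pre>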
<p>Hope it helps!</p>
<p>All the best!</p>
| kiggyttass |
<p>We have a .NET 7 application, that uses autofac IOC and needs to pull RegionInfo from the appsetting file, in a class library that is part of our solution. When running locally in Visual Studio, or by using Bridge to Kubernetes the IOptions in the class library are populated and the functionality works as expected. When deployed to our cluster as a docker container though the options are not returned to the class causing the api to fail.</p>
<p>I have spent hours on this trying various suggestions online with autofac with no solution, and I'm hoping that someone can point me in the right direction to resolve this. Here is the code...</p>
<p>StartUp.cs</p>
<pre><code> services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "IntegrationService", Version = "v1" });
});
services.Configure<RegionInformation>(Configuration.GetSection("RegionInfo"));
services.AddOptions();
var container = new ContainerBuilder();
</code></pre>
<p>GlobalDateManager.cs</p>
<pre><code> private readonly ILogger<GlobalDateManager> _logger;
private readonly RegionInformation _regionInformation;
public GlobalDateManager(IOptions<RegionInformation> regionInformation, ILogger<GlobalDateManager> logging)
{
_regionInformation = regionInformation.Value;
_logger = logging;
}
</code></pre>
| Damian70 | <p>I created a simple webapi using part of your code.
It uses the IOptions pattern like your code does, and it outputs the configuration.</p>
<p>It was created launching the simple command <code>dotnet new webapi -o abcde</code></p>
<p>Here the entire files i changed or created:</p>
<p><strong>Program.cs (changed)</strong></p>
<pre class="lang-js prettyprint-override"><code>using Microsoft.OpenApi.Models;
using Autofac;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
var services = builder.Services;
services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
services.AddEndpointsApiExplorer();
// start of your snippet code
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "IntegrationService", Version = "v1" });
});
services.Configure<RegionInformation>(builder.Configuration.GetSection("RegionInfo"));
services.AddOptions();
var container = new ContainerBuilder(); // added this line from your code; anyway, I did not use it in the app
// end of your snippet code
var app = builder.Build();
// Configure the HTTP request pipeline.
app.UseSwagger();
app.UseSwaggerUI();
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
</code></pre>
<p><strong>RegionInformation.cs (created)</strong></p>
<pre class="lang-js prettyprint-override"><code>public class RegionInformation{
public const string RegionInfo = "RegionInfo";
public string Name {get;set;} = string.Empty;
}
</code></pre>
<p><strong>Controllers/GetConfigAndOptionsController.cs (created)</strong></p>
<pre class="lang-js prettyprint-override"><code>using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;
namespace abcde.Controllers;
[ApiController]
[Route("[controller]")]
public class GetConfigAndOptionsController : ControllerBase
{
private readonly ILogger<GetConfigAndOptionsController> _logger;
private readonly IConfiguration _configuration;
private readonly IOptions<RegionInformation> _regionInformation;
public GetConfigAndOptionsController(ILogger<GetConfigAndOptionsController> logger, IConfiguration configuration, IOptions<RegionInformation> regionInformation )
{
_logger = logger;
_configuration = configuration;
_regionInformation = regionInformation;
}
[HttpGet(Name = "GetGetConfigAndOptions")]
public string Get()
{
var sb = new System.IO.StringWriter();
sb.WriteLine("Listing all providers (appsettings.json should be present in the list))\n");
// listing all providers
foreach (var provider in ((IConfigurationRoot)_configuration).Providers.ToList())
{
sb.WriteLine(provider.ToString());
}
// getting the Name using the configuration object
sb.WriteLine($"\nFrom config: {_configuration["RegionInfo:Name"]}");
// or getting the value using IOption like your code does
sb.WriteLine($"\nFrom IOptions: {_regionInformation.Value.Name}");
return sb.ToString();
}
}
</code></pre>
<p><strong>appsettings.json (changed)</strong></p>
<pre class="lang-json prettyprint-override"><code>{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
},
"AllowedHosts": "*",
"RegionInfo":{
"Name":"kiggyttass"
}
}
</code></pre>
<p>Finally i created a simple <strong>Dockerfile</strong></p>
<pre class="lang-yaml prettyprint-override"><code>FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build-env
WORKDIR /App
# Copy everything
COPY . ./
# Restore as distinct layers
RUN dotnet restore
# Build and publish a release
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /App
COPY --from=build-env /App/out .
ENTRYPOINT ["dotnet", "abcde.dll"]
</code></pre>
<p>Then,</p>
<pre class="lang-bash prettyprint-override"><code># build your docker image
docker build -t abcde .
# run it
docker run --name=abcde --rm -p 9234:80 abcde
</code></pre>
<p>Call the corresponding GET method in the Swagger UI, or cUrl to http://localhost:9234/GetConfigAndOptions</p>
<p>and you should get something like this:</p>
<pre><code>Listing all providers (appsettings.json should be present in the list))
MemoryConfigurationProvider
EnvironmentVariablesConfigurationProvider Prefix: 'ASPNETCORE_'
MemoryConfigurationProvider
EnvironmentVariablesConfigurationProvider Prefix: 'DOTNET_'
JsonConfigurationProvider for 'appsettings.json' (Optional)
JsonConfigurationProvider for 'appsettings.Production.json' (Optional)
EnvironmentVariablesConfigurationProvider Prefix: ''
Microsoft.Extensions.Configuration.ChainedConfigurationProvider
From config: kiggyttass
From IOptions: kiggyttass
</code></pre>
<p>Try to create this dockerized application and check if you get a similar output.
You should get your value using IConfiguration or IOptions.</p>
<p>Hope it helps.</p>
| kiggyttass |
<p>I got problem with connecting my k3s cluster to GitLab Docker Registry.</p>
<p>On cluster I got created secret in default namespace like this</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=https://gitlab.domain.tld:5050 --docker-username=USERNAME --docker-email=EMAIL --docker-password=TOKEN
</code></pre>
<p>Then in Deployment config I got this secret included, my config:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app.kubernetes.io/name: "app"
app.kubernetes.io/version: "1.0"
namespace: default
spec:
template:
metadata:
labels:
app: app
spec:
imagePullSecrets:
- name: regcred
containers:
- image: gitlab.domain.tld:5050/group/appproject:1.0
name: app
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>But the created pod is still unable to pull this image.
There is still this error message:</p>
<pre><code>failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
</code></pre>
<p>Can you help me find where the error may be?
If I try to connect to this GitLab registry with the credentials above on local Docker, it works fine: docker login succeeds and pulling the image works as well.</p>
<p>Thanks</p>
| vdobes | <p>To pull from a private container registry on Gitlab you must first create a <code>Deploy Token</code> similar to how the pipeline or similar "service" would access it. Go to the repository then go to <code>Settings</code> -> <code>Repository</code> -> <code>Deploy Tokens</code></p>
<p>Give the deploy token a <code>name</code> and a <code>username</code> (it says optional, but we'll be able to use this custom username with the token) and make sure it has read_registry access. That is all it needs to pull from the registry. If you later need to push, then you would need write_registry. Once you click <code>create deploy token</code> it will show you the token; be sure to copy it as you won't see it again.</p>
<p>Now just recreate your secret in your k8s cluster.</p>
<pre><code> kubectl create secret docker-registry regcred --docker-server=<private gitlab registry> --docker-username=<deploy token username> --docker-password=<deploy token>
</code></pre>
<p>Make sure to apply the secret to the same namespace as your deployment that is pulling the image.</p>
<p>[See Docs] <a href="https://docs.gitlab.com/ee/user/project/deploy_tokens/#gitlab-deploy-token" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/project/deploy_tokens/#gitlab-deploy-token</a></p>
| EternalDev1 |
<p>I am working on a centOS environment and I have configured kubectl using kubeadm to work with k8s and learn about it. I have configured it using containerd as I am working with k8s v1.26.2.</p>
<p>The problem is that I am not able to configure the coredns pod when executing kubeadm init as you can see here:</p>
<pre><code>[root@portov7 config_files]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default podtest 0/1 ContainerCreating 0 130m
kube-flannel kube-flannel-ds-hqh9z 0/1 Init:ImagePullBackOff 0 55m
kube-system coredns-787d4945fb-2msm9 0/1 ContainerCreating 0 154m
kube-system coredns-787d4945fb-hms2c 0/1 ContainerCreating 0 154m
kube-system etcd-portov7.gre.hpecorp.net 1/1 Running 0 154m
kube-system kube-apiserver-portov7.gre.hpecorp.net 1/1 Running 0 154m
kube-system kube-controller-manager-portov7.gre.hpecorp.net 1/1 Running 0 154m
kube-system kube-proxy-7r7nz 1/1 Running 0 154m
kube-system kube-scheduler-portov7.gre.hpecorp.net 1/1 Running 0 154m
</code></pre>
<p>It seems that the problem is related to the flannel plugin not being found in the /opt/cni/bin directory. I have found that this plugin is not used anymore, and I am thinking that maybe the problem is related to my Docker version (correct me if I am wrong, please).</p>
<p>Also, the problem here is that I have some Docker containers running in parallel and I would like to migrate them to the latest Docker version (currently I am using version 1.13.1).</p>
<p>So I have two questions here:</p>
<ul>
<li>Is the flannel error somehow generated by the Docker version incompatibility? I already created the <strong>10-flannel.conflist</strong> flannel file but it seems it is not working.</li>
<li>If the docker version is the problem, the only way to migrate is by using the volumes to save the data and delete and recreate again the containers?</li>
</ul>
<p>Thank you in advance :)</p>
| Luis Roset | <p>I already fixed it by downloading and installing the <a href="https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml</a> file.</p>
<p>After the download I executed the command kubectl apply -f kube-flannel.yml; flannel was configured and the coredns pods are now running.</p>
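<p>For reference, a sketch of those two steps as commands (using the URL above):</p>
<pre class="lang-bash prettyprint-override"><code># download the flannel manifest and apply it
curl -LO https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

# watch the flannel and coredns pods come up
kubectl get pods --all-namespaces -w
</code></pre>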
<p>I had to add <code>imagePullPolicy: Never</code> to the images after importing them into the CRI using the <code>ctr</code> command.</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-lx6cn 1/1 Running 0 9s
kube-system coredns-787d4945fb-5rjbv 1/1 Running 0 27m
kube-system coredns-787d4945fb-jqs5w 1/1 Running 0 27m
kube-system etcd-portov7.gre.hpecorp.net 1/1 Running 2 27m
kube-system kube-apiserver-portov7.gre.hpecorp.net 1/1 Running 2 27m
kube-system kube-controller-manager-portov7.gre.hpecorp.net 1/1 Running 2 27m
kube-system kube-proxy-mkrwd 1/1 Running 0 27m
kube-system kube-scheduler-portov7.gre.hpecorp.net 1/1 Running 2 27m
</code></pre>
| Luis Roset |
<p>I'd like to know if <code>kubectl</code> offers an easy way to list all the secrets that a certain pod/deployment/statefulset is using, or if there is some way to cleanly retrieve this info. When doing a <code>kubectl describe</code> for a pod, I see I can get a list of mounted volumes which include the ones that come from secrets that I could extract using <code>jq</code> and the like, but this way feels a bit clumsy. I have been searching a bit to no avail. Do you know if there is anything like that around? Perhaps using the API directly?</p>
| DavSanchez | <p>To <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">List all Secrets</a> currently in use by a pod use:</p>
<pre><code>kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
</code></pre>
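<p>Note that this only covers Secrets referenced through environment variables (<code>secretKeyRef</code>). A similar sketch for Secrets mounted as volumes, which the question also mentions, assuming the same kubectl/jq tooling:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods -o json | jq '.items[].spec.volumes[]?.secret.secretName' | grep -v null | sort | uniq
</code></pre>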
<p>On the other hand, if you want to access the secrets stored in the API:</p>
<blockquote>
<p>Kubernetes Secrets are, by default, stored unencrypted in the API
server's underlying data store (etcd). Anyone with API access can
retrieve or modify a Secret, and so can anyone with access to etcd.
Additionally, anyone who is authorized to create a Pod in a namespace
can use that access to read any Secret in that namespace; this
includes indirect access such as the ability to create a Deployment.</p>
<p>In order to safely use Secrets, take at least the following steps:</p>
<ul>
<li>Enable Encryption at Rest for Secrets.</li>
<li>Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means).</li>
<li>Where appropriate, also use mechanisms such as RBAC to limit which principals are allowed to create new Secrets or replace existing ones.</li>
</ul>
</blockquote>
<p>If you want more information about secrets in kubernetes, <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">follow this link.</a></p>
| Ismael Clemente Aguirre |
<p>I got a freeze of the Sklearn-classifier in MLRun (the job is still running after 5, 10, 20, ... minutes), see the log output:</p>
<pre><code>2023-02-21 13:50:15,853 [info] starting run training uid=e8e66defd91043dda62ae8b6795c74ea DB=http://mlrun-api:8080
2023-02-21 13:50:16,136 [info] Job is running in the background, pod: training-tgplm
</code></pre>
<p>See the freeze/pending issue in the Web UI:</p>
<p><a href="https://i.stack.imgur.com/6IhBB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6IhBB.png" alt="enter image description here" /></a></p>
<p>I used this source code and <code>classifier_fn.run(train_task, local=False)</code> generates the freeze:</p>
<pre><code># Import the Sklearn classifier function from the function hub
classifier_fn = mlrun.import_function('hub://sklearn-classifier')
# Prepare the parameters list for the training function
training_params = {"model_name": ['risk_xgboost'],
"model_pkg_class": ['sklearn.ensemble.GradientBoostingClassifier']}
# Define the training task, including the feature vector, label and hyperparams definitions
train_task = mlrun.new_task('training',
inputs={'dataset': transactions_fv.uri},
params={'label_column': 'n4_pd30'}
)
train_task.with_hyper_params(training_params, strategy='list', selector='max.accuracy')
# Specify the cluster image
classifier_fn.spec.image = 'mlrun/mlrun'
# Run training
classifier_fn.run(train_task, local=False)
</code></pre>
<p>Did you have and solve the same issue?</p>
| JIST | <p>I solved the same issue; the problem was a different MLRun version between the client side and the server side. I had MLRun version <strong>1.2.1rc2</strong> on the client and <strong>1.2.1</strong> on the server side (these versions have different interfaces, which generates the freeze issue).</p>
<p><strong>Please sync the MLRun versions between client and server and it will work.</strong></p>
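<p>A quick sketch of how to check and align the client version (assuming the server runs 1.2.1, as in my case):</p>
<pre class="lang-bash prettyprint-override"><code># check the client-side package version
pip show mlrun

# align the client with the server
pip install mlrun==1.2.1
</code></pre>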
<p>BTW: your code looks like the original sample here <a href="https://docs.mlrun.org/en/stable/feature-store/end-to-end-demo/02-create-training-model.html" rel="nofollow noreferrer">https://docs.mlrun.org/en/stable/feature-store/end-to-end-demo/02-create-training-model.html</a></p>
| JzD |
<p>I am using VS Code with Google's Cloud Code plugin. I was using my personal email address which is tied to a personal instance of Google Cloud. I was given a sandbox instance of Google Cloud for a project at work. Although I have logged into Google Cloud with my work email address, whenever I try to run my project with Cloud Code in a Kubernetes development session, an error is logged about permissions referencing my personal account. I believe this is because VS Code is somehow stuck with my personal email address.</p>
<p>I tried logging out of all accounts from the VS Code command pallet to no avail.</p>
<p>How do I resolve this?</p>
<p>Here is the error in GCP logging. I was clued in that it was my personal account's email address because of the logging entry's label: <code>principal_email: [email protected]</code>. Naturally, my personal email address is not part of my work's GCP instance, therefore the permissions issue.</p>
<pre class="lang-js prettyprint-override"><code>{
insertId: "redacted"
logName: "projects/redacted/logs/cloudaudit.googleapis.com%2Factivity"
protoPayload: {
@type: "type.googleapis.com/google.cloud.audit.AuditLog"
authenticationInfo: {1}
authorizationInfo: [2]
metadata: {1}
methodName: "storage.buckets.create"
requestMetadata: {4}
resourceLocation: {1}
resourceName: "projects/_/buckets/redacted"
serviceName: "storage.googleapis.com"
status: {1}}
receiveTimestamp: "2022-09-09T12:07:41.184289826Z"
resource: {2}
severity: "ERROR"
timestamp: "2022-09-09T12:07:40.135804318Z"
}
</code></pre>
<p>I apologize if this is the wrong stack exchange for this question. If so, please direct me to the right one.</p>
| Dshiz | <p><strong>According with @Dshiz the solution was:</strong></p>
<blockquote>
<p>It was definitely some kind of corruption of the plugin. I
uninstalled/reinstalled it, then signed out and back in via the
plugin's Help and Feedback menu (which coincidentally was missing
before reinstall), and the error cleared.</p>
</blockquote>
| Ismael Clemente Aguirre |
<p>I'm working with Prometheus alerts, and I would like to dynamically add a 'team' label to all of my alerts based on a regex pattern. I have an example alert:</p>
<pre><code>expr: label_replace(label_replace(increase(kube_pod_container_status_restarts_total{job="kube-state-metrics",namespace=~".*",pod!~"app-test-.*"}[30m]) > 2, "team", "data", "container", ".*test.*"), "team", "data", "pod", ".*test.*")
</code></pre>
<p>This example alert adds the 'team' label with the value 'data' for metrics matching the regex pattern ".test." in the 'container' and 'pod' labels.</p>
<p>However, I want to apply this logic to all of my alerts, not just this specific one. Is there a way to do this dynamically in Prometheus or Alertmanager? Any guidance would be appreciated.</p>
<p>I tried using the <strong>label_replace</strong> function in the expression of the alert, and it worked as expected for the specific alert mentioned above. I was expecting to find a way to apply this label addition to all of my alerts without having to modify each alert expression individually.</p>
<p>Is there a way to achieve this? Any help or guidance would be greatly appreciated.</p>
| TomerA | <p>AFAIK, there is no possibility to add labels to your alerts based on a condition without rewriting all the rules.</p>
<p>The best solution for your exact question is to create separate alerts for all environments/teams/conditions and just add static labels.</p>
<p>Something along the lines of</p>
<pre class="lang-yaml prettyprint-override"><code> - alert: many_restarts_data
expr: increase(kube_pod_container_status_restarts_total{job="kube-state-metrics",namespace=~".*",pod!~"app-test-.*", container=~".*test.*"}[30m]) > 2
labels:
team: data
  - alert: many_restarts_sre
expr: increase(kube_pod_container_status_restarts_total{job="kube-state-metrics",namespace=~".*",pod!~"app-test-.*", container=~".*prod.*"}[30m]) > 2
labels:
team: sre
</code></pre>
<p>But it will require multiplying the number of alerts by the number of teams.</p>
<p>I would argue a far easier solution is to use the routing capabilities of Alertmanager (or PagerDuty, if it provides similar functionality). This way you specify, in the Alertmanager configuration, which alerts with which labels should be routed to which teams, and it works independently from the alert creation part.</p>
<pre class="lang-yaml prettyprint-override"><code> routes:
- matchers:
- container =~ ".*test.*"
- severity =~ ".*test.*"
- alertname =~ "my_alert_1|my_alert_2"
receiver: team-data
- matchers:
- container =~ ".*prod.*"
- severity =~ ".*prod.*"
- alertname =~ "my_alert_1|my_alert_2"
receiver: team-sre
</code></pre>
| markalex |
<p>When nodes reboot based on the job "kured_reboots", Alertmanager still fires other node alerts. Is there a way to stop Alertmanager alerts for nodes going through a reboot based on the job "kured_reboot"? Something like this:</p>
<pre><code>route:
receiver: default
group_by:
- cluster
- namespace
- severity
- alertname
- job
continue: false
routes:
- receiver: receiver1
matchers:
- job =~ "kured_reboot"
active_time_intervals:
- business-hours
inhibit_rules:
- source_matchers:
- job="kured_reboot"
target_matchers:
- severity=~"critical|warning|info"
equal:
- namespace
- alertname
</code></pre>
<p>So far, I am still researching this and have not tried anything yet. I wanted to ask if anyone has done something like this before.</p>
| JBeach | <p>Generally there are two ways to do it:</p>
<ol>
<li><p>Incorporate an additional check about this job into your alert rules. For this you'll need to add something like <code>unless on() my_job_status{name="kured_reboot"} == 1</code> (you might need something similar or even completely different, depending on your situation and the style of job and metrics related to it)</p>
</li>
<li><p>Create an alert based on the fact that this job is running and add an <a href="https://prometheus.io/docs/alerting/latest/configuration/#inhibit_rule" rel="nofollow noreferrer">inhibition rule</a> that will prevent other alerts from firing while the job is running (see the sketch after this list).</p>
</li>
</ol>
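<p>A rough sketch of option 2. The metric and alert names here are assumptions (kured exposes a reboot-required gauge, but verify the exact metric name your kured version publishes), and you should adjust <code>equal</code> to whatever label identifies the node in your alerts:</p>
<pre class="lang-yaml prettyprint-override"><code># --- Prometheus rule file (e.g. kured-rules.yml): fire a marker alert while a node reboot is pending ---
groups:
- name: kured
  rules:
  - alert: NodeRebootInProgress
    expr: kured_reboot_required == 1        # assumption: verify the metric name in your setup
    labels:
      severity: info

# --- alertmanager.yml (snippet): mute other alerts from the same node while the marker alert fires ---
inhibit_rules:
- source_matchers:
  - alertname = "NodeRebootInProgress"
  target_matchers:
  - alertname != "NodeRebootInProgress"     # don't let the marker alert mute itself
  - severity =~ "critical|warning|info"
  equal:
  - instance
</code></pre>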
| markalex |
<p><code>kubectl port-forward service/nginx 8080:443</code></p>
<pre><code>Forwarding from 127.0.0.1:8080 -> 443
Forwarding from [::1]:8080 -> 443
Handling connection for 8080
E0314 16:38:47.834517 25527 portforward.go:406] an error occurred forwarding 8080 -> 443: error forwarding port 443 to pod bb5e0e3b270881ce659aa820d29dd47170e229abb90fb128e255467a8dba606a, uid : failed to execute portforward in network namespace "/var/run/netns/cni-5ef7f945-3c15-25c0-8540-39513d9d3285": failed to connect to localhost:443 inside namespace "bb5e0e3b270881ce659aa820d29dd47170e229abb90fb128e255467a8dba606a", IPv4: dial tcp4 **127.0.0.1:443: connect: connection refused IPv6 dial tcp6 [::1]:443: connect: connection refused
E0314 16:38:47.834846 25527 portforward.go:234] lost connection to pod**
</code></pre>
<p>The same works with port 80.</p>
| newbie | <p>The command used is as follows:</p>
<pre><code>kubectl port-forward pod/<pod-name> <local-port>:<exposed-port>
</code></pre>
<p>where <strong>local-port</strong> is the port on which the container will be accessed from the browser, while <strong>exposed-port</strong> is the port on which the container listens, defined with the <strong>EXPOSE</strong> command in the Dockerfile.</p>
<p>The error <strong>failed to connect to localhost:443</strong> indicates that there is no process listening on <strong>port 443</strong>. The Dockerfile used to create the nginx image exposes port 80, and nginx is configured to listen on <strong>port 80</strong> by <strong>default</strong>.</p>
<p>In order to establish a connection using port 443, please change the nginx config file to listen on port 443 instead of port 80. Also, please update the Docker image with the corresponding changes.</p>
<p>As you have not shared the nginx configuration file, the default <strong>nginx.conf</strong> file will be as follows:</p>
<pre><code>server {
listen ${PORT};
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
include /etc/nginx/extra-conf.d/*.conf;
}
</code></pre>
<p>Please update <strong>${PORT}</strong> to <strong>443</strong> in the above template, and in the Dockerfile add the line below:</p>
<pre><code>COPY ./nginx.conf /etc/nginx/conf.d/default.conf
</code></pre>
<p>For more details please look into the below documentations:</p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">Link_1</a> <a href="https://www.golinuxcloud.com/kubectl-port-forward/" rel="nofollow noreferrer">Link_2</a></p>
| Kiran Kotturi |
<p>Laravel application deployed on Kubernetes and making requests to <a href="https://maps.google.com/maps/api/geocode/json" rel="nofollow noreferrer">https://maps.google.com/maps/api/geocode/json</a> failing with:</p>
<pre><code>SSL routines:tls_process_server_certificate:certificate verify failed
</code></pre>
<p>The same application works when running on Docker.</p>
<p>I have appended Google's Root CA certs from here <a href="https://developers.google.com/maps/root-ca-faq#what_is_happening" rel="nofollow noreferrer">https://developers.google.com/maps/root-ca-faq#what_is_happening</a> to the server's trust store but no luck there either.</p>
<p>I can disable verification but that's not the correct approach.</p>
<p>Any ideas would be much appreciated.</p>
<p>Thanks.</p>
| user2881726 | <p>According to the OP, the solution was:</p>
<blockquote>
<p>The issue was that our security team scans external certificates and
re-package them with the company's own cert. Once I added the
company's cert to the trust store, everything worked fine. It seems
it's only an internal issue.</p>
</blockquote>
| Ismael Clemente Aguirre |
<p>This is realy two questions in one - I think they are related.</p>
<h2>What does the <code>kube_pod_status_phase</code> metric <em>value</em> represent?</h2>
<p>When I view the <code>kube_pod_status_phase</code> metric in Prometheus, the metric value is always a 0 or 1, but it's not clear to me what 0 and 1 means. Let's use an example. The query below returns the value of this metric where the "phase" label equals "Running".</p>
<p>Query:</p>
<pre><code>kube_pod_status_phase{phase="Running"}
</code></pre>
<p>Result: (sample)</p>
<pre><code>kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="argocd", phase="Running", pod="argocd-server-6f8487c84d-5qqv7", service="prometheus-kube-state-metrics", uid="ee84e48d-0302-4f5a-9e81-f4f0d7d0223f"}
1
kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="default", phase="Running", pod="rapid7-monitor-799d9f9898-fst5q", service="prometheus-kube-state-metrics", uid="1561cd66-b5c4-48b9-83d0-11f4f1f0d5d9"}
1
kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="deploy", phase="Running", pod="clean-deploy-cronjob-28112310-ljws6", service="prometheus-kube-state-metrics", uid="5510f859-74ca-471f-9c50-c1b8976119f3"}
0
kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="deploy", phase="Running", pod="clean-deploy-cronjob-28113750-75m8v", service="prometheus-kube-state-metrics", uid="d63e5038-a8bb-4f88-bd77-82c66d183e1b"}
0
</code></pre>
<p>Why do some "running" pods have a value of 0, while others have a value of 1? Are the items with a value of 1 "currently" running (at the time the query was run) and the items with a value of 0 "had been" running, but are no longer?</p>
<h2>There seem to be inconsistencies with what the <code>kube_pod_status_phase</code> metric produces between Prometheus and Grafana. Why?</h2>
<p>If I use a slightly different version of the query above, I get different results between Prometheus and what is shown in Grafana.</p>
<p>Query:</p>
<pre><code>kube_pod_status_phase{phase=~"Pending"} != 0
</code></pre>
<p>Result: (Prometheus}</p>
<pre><code>empty query result
</code></pre>
<p>Result: (Grafana table view)</p>
<pre><code>pod namespace phase
clean-deploy-cronjob-28115190-2rhv5 deploy Pending
</code></pre>
<p>If I go back to Prometheus and focus on that pod specifically:</p>
<p>Query:</p>
<pre><code>kube_pod_status_phase{pod="clean-deploy-cronjob-28115190-2rhv5"}
</code></pre>
<p>Result:</p>
<pre><code>kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="deploy", phase="Failed", pod="clean-deploy-cronjob-28115190-2rhv5", service="prometheus-kube-state-metrics", uid="4dd948f6-327b-4c00-abc9-57d16bd588d0"}
0
kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="deploy", phase="Pending", pod="clean-deploy-cronjob-28115190-2rhv5", service="prometheus-kube-state-metrics", uid="4dd948f6-327b-4c00-abc9-57d16bd588d0"}
0
kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="deploy", phase="Running", pod="clean-deploy-cronjob-28115190-2rhv5", service="prometheus-kube-state-metrics", uid="4dd948f6-327b-4c00-abc9-57d16bd588d0"}
0
kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="deploy", phase="Succeeded", pod="clean-deploy-cronjob-28115190-2rhv5", service="prometheus-kube-state-metrics", uid="4dd948f6-327b-4c00-abc9-57d16bd588d0"}
1
kube_pod_status_phase{container="kube-state-metrics", endpoint="http", instance="10.244.211.138:8080", job="kube-state-metrics", namespace="deploy", phase="Unknown", pod="clean-deploy-cronjob-28115190-2rhv5", service="prometheus-kube-state-metrics", uid="4dd948f6-327b-4c00-abc9-57d16bd588d0"}
0
</code></pre>
<p>Notice that the entry with phase "Running" has a value of 0, while the entry with a value of 1 has the phase "Succeeded". You could argue that the status changed during the period when I ran these queries. No, it has not. It has been showing these results for a long time.</p>
<p>This is just one example of strange inconsistencies I've seen between a query run in Prometheus vs. Grafana.</p>
<p><strong>UPDATE</strong>:</p>
<p>I think I have gained some insight into the inconsistencies question. When I run the query in Prometheus it gives me the results as of "now" (a guess on my part). In Grafana, it takes into account the "time window" that's available in the dashboard header. When I dialed it back to "the last 5 minutes", the pending entry disappeared.</p>
<p>I see that there is an option at the dashboard level in Grafana to hide the time picker, which if set to hide, hides not only the time picker, but also the refresh period selector. If this option is used, I'm curious as to how often the dashboard is <em>actually</em> refreshed. Should I use this to effectively make Grafana only care about "now", instead of some time window into the past?</p>
| Joseph Gagnon | <blockquote>
<h5>What does the <code>kube_pod_status_phase</code> metric value represent?</h5>
</blockquote>
<p><code>kube_pod_status_phase</code> contains a set of series for every pod, with the label <code>phase</code> set to "Failed", "Pending", "Running", "Succeeded", or "Unknown".</p>
<p>Only one of those series (for every pod) will have the value 1; it means that the pod is in the corresponding phase.</p>
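<p>For example, to see only the phase a pod is currently reported in, you can drop the zero-valued series (the pod name below is taken from your example):</p>
<pre><code>kube_pod_status_phase{pod="clean-deploy-cronjob-28115190-2rhv5"} == 1
</code></pre>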
<blockquote>
<p>Why do some "running" pods have a value of 0, while others have a value of 1?</p>
</blockquote>
<p>Remember that Prometheus is not a real-time solution: it has values only at the resolution of <code>scrape_interval</code>. Check suspicious pods for other states; it's quite possible that the pod's state wasn't updated yet. Plus, for short-lived pods, all kinds of strange behavior in metrics is possible.</p>
<blockquote>
<h5>There seem to be inconsistencies with what the kube_pod_status_phase metric produces between Prometheus and Grafana. Why?</h5>
</blockquote>
<p>Most likely your query in Grafana has type "Range" or "Both", and in table mode it shows all values over the time range selected for the dashboard.</p>
<p>If you only want to see the last values (according to the "To" value of the dashboard time range), you can go to the query options (under the query in panel edit mode) and set the type to "Instant".</p>
<blockquote>
<p>I see that there is an option at the dashboard level in Grafana to hide the time picker, which if set to hide, hides not only the time picker, but also the refresh period selector. If this option is used, I'm curious as to how often the dashboard is actually refreshed. Should I use this to effectively make Grafana only care about "now", instead of some time window into the past?</p>
</blockquote>
<p>No. This is for other uses, for example presentation mode.</p>
| markalex |
<p>When managing a cluster, I might want to know how many TCP connections there are to one port on each node. This can help me alert if the number of TCP connections to some port on a node is too high.</p>
<p>I can use <code>netstat -nat | grep -i "22" | wc -l</code> on a linux node to see how many connections to port 22. But I'm looking for a method that can do this and export the metric on prometheus.</p>
<p><strong>node_exporter</strong> has a metric <strong>node_netstat_Tcp_CurrEstab</strong> that shows the total number of TCP connections. But it has only a label named <strong>instance</strong>, which gives the total number on that node, whereas I want to break it down by a label like <strong>port</strong>.</p>
<p>In a word, I want to track tcp connections on node 192.168.1.1, port 22.</p>
<p>Does anybody know if there is a metric in <strong>node_exporter</strong>, or another exporter, that can do this?</p>
| 54vault | <p>I don't think it is possible through the default metrics: the netstat collector shows a summary with no per-port detail. From the <a href="https://github.com/prometheus/node_exporter#enabled-by-default" rel="nofollow noreferrer">official docs</a>:</p>
<blockquote>
<p>Exposes network statistics from <code>/proc/net/netstat</code>. This is the same information as <code>netstat -s</code>.</p>
</blockquote>
<p>But you can always utilize a <a href="https://github.com/prometheus/node_exporter#textfile-collector" rel="nofollow noreferrer">textfile collector</a>, and add a script to your cron that will update a metric with the number of open connections.</p>
<p>I believe you'll need a command like this:</p>
<pre class="lang-bash prettyprint-override"><code>echo 'custom_established_connections{port="22"}' $(netstat -nat|grep -P ':22\b.*ESTABLISHED$'|wc -l) > /path/to/directory/role.prom.$$
</code></pre>
| markalex |
<p>I have a grpc service, here is the yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: staging
labels:
app: staging
spec:
replicas: 4
selector:
matchLabels:
app: staging
template:
metadata:
labels:
app: staging
spec:
containers:
- name: staging
image: ...
imagePullPolicy: Always
ports:
- containerPort: 5274
- containerPort: 5900
---
apiVersion: v1
kind: Service
metadata:
name: staging-service
spec:
type: NodePort
selector:
app: staging
ports:
- name: staging
protocol: TCP
port: 5274
targetPort: 5274
nodePort: 30277
- name : staging
protocol: TCP
port: 5900
targetPort: 5900
nodePort: 30278
</code></pre>
<p>As you can see, the grpc is on 5900 port, now I have a ingres yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: k8s-rpc-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
ingressClassName: nginx
rules:
  - http:
paths:
- path: /st(/|$)(.*)
pathType: Prefix
backend:
service:
name: staging-service
port:
number: 5900
</code></pre>
<p>Now, the k8s master is 192.168.70.1, so I access 192.168.70.1/st in my NestJS project like:</p>
<pre><code> url: '192.168.70.1/st',
 package: 'test',
</code></pre>
<p>I will get the error:</p>
<pre><code>details: 'Name resolution failed for target dns:192.168.70.1/st',
</code></pre>
<p>If I access the gRPC service via 192.168.70.1:30378, everything is fine.</p>
<p>Am I missing something here?</p>
<p>Thank you</p>
| Erika | <p>gRPC supports DNS as the default name system. The following format, related to IPv4 addresses, is also supported.</p>
<p><strong>ipv4:address[:port][,address[:port],...] -- IPv4 addresses</strong></p>
<p>Here, you can specify multiple comma-delimited addresses of the form <strong>address[:port]</strong>:</p>
<p><strong>address</strong> is the IPv4 address to use.</p>
<p><strong>port</strong> is the port to use. If not specified, 443 is used.</p>
<p>This is the reason you are able to access the grpc service via 192.168.70.1:30378</p>
<p>You can refer the <a href="https://github.com/grpc/grpc/blob/master/doc/naming.md" rel="nofollow noreferrer">link</a> for more useful information.</p>
| Kiran Kotturi |
<p>I have installed <code>Azure Workload Identity</code>, e.g. like that:</p>
<p><code>az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys</code></p>
<p>This has installed a mutating webhook of version <code>0.15.0</code> in <code>kube-system</code>. Now, when new versions come out, how do I keep it updated?</p>
<p>Does this happen automatically, or would I need to uninstall/reinstall it or do something like that?</p>
| Ilya Chernomordik | <p>Yes, add-ons are maintained by Microsoft. Any updates/upgrades will be rolled out automatically.</p>
<p>As <a href="https://learn.microsoft.com/en-us/azure/aks/integrations#add-ons" rel="nofollow noreferrer">mentioned here</a>:</p>
<blockquote>
<p>Add-ons are a fully supported way to provide extra capabilities for
your AKS cluster. Add-ons' installation, configuration, and lifecycle
is managed by AKS</p>
</blockquote>
<p>Workload Identity is not even considered as an additional feature, but the same thing applies since it's a managed component of the cluster, and Microsoft is responsible for the lifecycle of it.</p>
<p>Generally, any out of box resource in the kube-system namespace is managed by Microsoft and will receive the updates automatically.</p>
| akathimi |
<p>I have two pods, each with a LoadBalancer svc. Each service's IP address is working.</p>
<p>My first service is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-world-1
spec:
type: LoadBalancer
selector:
greeting: hello
version: one
ports:
- protocol: TCP
port: 60000
targetPort: 50000
</code></pre>
<p>My second service is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-world-2
spec:
type: LoadBalancer
selector:
greeting: hello
version: two
ports:
- protocol: TCP
port: 5000
targetPort: 5000
</code></pre>
<p>My ingress is:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: gce
spec:
defaultBackend:
service:
name: hello-world-1
port:
number: 60000
rules:
- http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: hello-world-1
port:
number: 60000
- path: /v2
pathType: ImplementationSpecific
backend:
service:
name: hello-world-2
port:
number: 5000
</code></pre>
<p>Only the first route works this way and when I put</p>
<pre><code><MY_IP>/v2
</code></pre>
<p>in the url bar I get</p>
<pre><code>Cannot GET /v2
</code></pre>
<p>How do I configure the ingress so it hits the / route when no subpath is specified and the /v2 route when /v2 is specified?</p>
<p>If I change the first route to</p>
<pre><code>backend:
service:
name: hello-world-2
port:
number: 5000
</code></pre>
<p>and get rid of the second one it works.</p>
<p>but if I change the route to /v2 it stops working?</p>
<p>***** EDIT *****</p>
<p>Following this tutorial here <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">ingress tut</a> I tried changing the yaml so the different routes were on different ports and this breaks it. Does anybody know why?</p>
| Davtho1983 | <p>By default, when you create an ingress in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application, as stated in the following document [1]. So, you should not be configuring your services as LoadBalancer type, instead you need to configure them as NodePort.</p>
<p>Here, you can follow an example of a complete implementation similar to what you want to accomplish:</p>
<ol>
<li>Create a manifest that runs the application container image in the specified port, for each version:</li>
</ol>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web1
namespace: default
spec:
selector:
matchLabels:
run: web1
template:
metadata:
labels:
run: web1
spec:
containers:
- image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: IfNotPresent
name: web1
ports:
- containerPort: 8000
protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web2
namespace: default
spec:
selector:
matchLabels:
run: web2
template:
metadata:
labels:
run: web2
spec:
containers:
- image: gcr.io/google-samples/hello-app:2.0
imagePullPolicy: IfNotPresent
name: web2
ports:
- containerPort: 9000
protocol: TCP
</code></pre>
<ol start="2">
<li>Create two services (one for each version) as type <strong>NodePort</strong>. A very important note at this step is that the <strong>targetPort</strong> specified should be the one the application is listening on; in my case both services point to port 8080, since I am using the same application in different versions:</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web1
namespace: default
spec:
ports:
- port: 8000
protocol: TCP
targetPort: 8080
selector:
run: web1
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
name: web2
namespace: default
spec:
ports:
- port: 9000
protocol: TCP
targetPort: 8080
selector:
run: web2
type: NodePort
</code></pre>
<ol start="3">
<li>Finally, you need to create the ingress with the path rules:</li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: gce
spec:
defaultBackend:
service:
name: web1
port:
number: 8000
rules:
- http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: web1
port:
number: 8000
- path: /v2
pathType: ImplementationSpecific
backend:
service:
name: web2
port:
number: 9000
</code></pre>
<p>If you configured everything correctly, the output of the command <strong>kubectl get ingress my-ingress</strong> should be something like this:</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
my-ingress <none> * <External IP> 80 149m
</code></pre>
<p>And, if your services are pointing to the correct ports, and your applications are listening on those ports, doing a curl to your external ip (<strong>curl External IP</strong>) should get you to the version one of your application, here is my example output:</p>
<pre><code>Hello, world!
Version: 1.0.0
Hostname: web1-xxxxxxxxxxxxxx
</code></pre>
<p>Doing a curl to your external ip /v2 (<strong>curl External IP/v2</strong>) should get you to the version two of your application:</p>
<pre><code>Hello, world!
Version: 2.0.0
Hostname: web2-xxxxxxxxxxxxxx
</code></pre>
<p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a></p>
| Gabriel Robledo Ahumada |
<p>I have elastic operator installed and running on my kubernetes cluster and I want to be able to access kibana through the <code>/kibana</code> subpath. I have an ingress that is configured like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-kb
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: ""
http:
paths:
- path: /kibana(/|$)(.*)
pathType: Prefix
backend:
service:
name: kb-qs-kb-http
port:
number: 5601
</code></pre>
<p>What does the yaml file for my kibana instance have to look like, so that it is accessible through <code>/kibana</code> path?</p>
| Cake | <p>Try this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: "localhost" #OR YOUR HOST
http:
paths:
- pathType: Prefix
path: "/kibana"
backend:
service:
name: kb-qs-kb-http
port:
number: 5601
</code></pre>
<p>Also take a look at this link to have a clearer understanding of how the NGINX rewrite works:
<a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p>
| glv |
<p>I hope somebody can help me out here.</p>
<p>I have a basic configuration in azure which consists in a web app and database.</p>
<p>The web app is able to connect to the database using managed identity adn everything here works just fine, but i wanted to try the same configuration using aks.</p>
<p>I deployed AKS and enabled managed identity. I deployed a pod into the cluster as follow:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp-deployment
labels:
app: webapp
spec:
replicas: 1
selector:
matchLabels:
app: webapp
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
image: dockerimage
ports:
- containerPort: 80
env:
- name: "ConnectionStrings__MyDbConnection"
value: "Server=server-url; Authentication=Active Directory Managed Identity; Database=database-name"
- name: "ASPNETCORE_ENVIRONMENT"
value: "Development"
securityContext:
allowPrivilegeEscalation: false
restartPolicy: Always
</code></pre>
<p>The deployment went through smoothly and everything works just fine. But this is where I have the problem and cannot figure out the best solution.</p>
<p>The env block is in plain text; I would like to protect those environment variables by storing them in a key vault.</p>
<p>I have been looking around in different forums and documentation and the options start confusing me. Is there any good way to achieve security in this scenario?</p>
<p>In my web app, under configuration, I have managed identity enabled, and using this I can access the secrets in a key vault and retrieve them. Can I do the same using AKS?</p>
<p>Thank you so much for any help you can provide.</p>
<p>And please, if my question is not 100% clear, just let me know.</p>
| Nayden Van | <ol>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#upgrade-an-existing-aks-cluster-with-azure-key-vault-provider-for-secrets-store-csi-driver-support" rel="nofollow noreferrer">Upgrade an existing AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver support</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-identity-access#use-a-user-assigned-managed-identity" rel="nofollow noreferrer">Use a user-assigned managed identity to access KV</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#set-an-environment-variable-to-reference-kubernetes-secrets" rel="nofollow noreferrer">Set an environment variable to reference Kubernetes secrets</a></li>
</ol>
<p>You will need to do some reading, but the process is straightforward.
The Key Vault secrets will be stored in Kubernetes Secrets that you can reference in the pod's environment variables.</p>
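<p>To give you an idea of where you end up, here is a rough sketch based on those docs. The names, the Key Vault secret <code>db-connection-string</code>, and the IDs are placeholders, and note that the sync into a Kubernetes Secret only happens while a pod actually mounts the CSI volume:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-webapp
spec:
  provider: azure
  secretObjects:                         # mirror the Key Vault secret into a k8s Secret
  - secretName: webapp-secrets
    type: Opaque
    data:
    - objectName: db-connection-string
      key: db-connection-string
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "<client-id-of-the-managed-identity>"
    keyvaultName: "<your-keyvault-name>"
    tenantId: "<your-tenant-id>"
    objects: |
      array:
        - |
          objectName: db-connection-string
          objectType: secret
</code></pre>
<p>Then, in the Deployment from the question, mount the CSI volume and reference the synced Secret instead of the plain-text value:</p>
<pre class="lang-yaml prettyprint-override"><code>      containers:
      - name: webapp
        image: dockerimage
        env:
        - name: "ConnectionStrings__MyDbConnection"
          valueFrom:
            secretKeyRef:
              name: webapp-secrets
              key: db-connection-string
        volumeMounts:
        - name: secrets-store
          mountPath: /mnt/secrets-store
          readOnly: true
      volumes:
      - name: secrets-store
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-kv-webapp
</code></pre>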
| akathimi |
<p>Imagine a scenario where I have 3 classes of worker node (A,B,C) and 2 master nodes (X,Y) as part of a Kubernetes cluster. There maybe multiple worker nodes of each class. Is it possible to route the traffic such that traffic arriving at the different master nodes is routed to a different set of worker nodes. For example I want master node X to route traffic to workers of class A and potentially fail over to class B under heavy load and for master node Y to route traffic to nodes of class C but also use B as a failover when needed.</p>
| Richard Chester | <p>As correctly pointed out in the comments, the traffic is not handled by the master nodes; its responsibility, at a high level, is to observe and maintain the desired state of the Kubernetes resources deployed in the worker nodes, as well as to push any changes to them.</p>
<p>That being said, yes, it is possible to discriminate and send the traffic to different worker nodes using <a href="https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/" rel="nofollow noreferrer">Topology Aware Hints</a>:</p>
<blockquote>
<p>Topology Aware Hints enable topology aware routing by including
suggestions for how clients should consume endpoints. This approach
adds metadata to enable consumers of EndpointSlice and / or Endpoints
objects, so that traffic to those network endpoints can be routed
closer to where it originated.</p>
</blockquote>
<p>This feature is specifically designed to handle traffic in a multi-zone environment and has a series of built-in <a href="https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/#safeguards" rel="nofollow noreferrer">Safeguards</a> to prevent situations like overload of an endpoint, insufficient endpoints in a zone, etc.</p>
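<p>Enabling it is essentially a per-Service annotation; a minimal sketch is shown below (the service name and ports are placeholders, and it also requires nodes to carry <code>topology.kubernetes.io/zone</code> labels and enough endpoints per zone to pass the safeguards):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-service                                        # placeholder name
  annotations:
    service.kubernetes.io/topology-aware-hints: "auto"    # opt the Service in to topology hints
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>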
| Gabriel Robledo Ahumada |
<p>I defined a memory limit for one of my pods and a preStop hook so that the running service can be gracefully shut down, but the method defined for the preStop hook doesn't get executed when the pod gets restarted after the memory limit is reached.</p>
<pre><code> lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- echo 'Hi from the postStart handler' >> /app/public/system/test
name: my_pod
resources:
limits:
memory: 800Mi
requests:
memory: 500Mi
</code></pre>
<p>Any suggestion on how to gracefully shut down a service when it gets killed by a memory limit?</p>
<p>If I kill the pod manually, it works as expected, but it looks like the OOMKilled error doesn't use the lifecycle hook.</p>
| stereosupersonic | <p>OOM killed is not a graceful stop of the pod. Therefore, the prestop hook would not run. I suggest configuring the application to fail the health check when the memory limit is about to be reached.</p>
| akathimi |
<p>I've got a pod that restarts immediately. When I delete the pod with <code>kubectl delete pod <pod_name></code>, it restarts immediately. However, I want this pod to be restarted after 1 min when I perform the delete operation.</p>
<p><code>kubectl delete pod <pod_name></code>... it waits 1 minute then it restarts running again.</p>
<p>To clarify, I don't want the <code>--grace-period</code> flag, where it waits for the specified time. I want to delete the pod immediately and make it run again after 1 minute without manual intervention. Is this possible to do?</p>
<p>How can I do that?</p>
| baris | <p>What you're asking for is not only not possible, but it also goes against one of the best qualities of Kubernetes, which is its resiliency.</p>
<p>The reason your Pod gets rescheduled (after a failure or a voluntary shutdown) is that it is being managed by a ReplicaSet.</p>
<blockquote>
<p>A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/</a></p>
<p>What you could do is create a Pipeline (Jenkins, GitLab, GitHub, whatever you prefer...) where you uninstall the Deployment/StatefulSet, wait some time and reinstall the Deployment/StatefulSet.</p>
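<p>A minimal sketch of such a pipeline step (resource names, namespace and manifest path are assumptions):</p>
<pre><code>kubectl delete deployment my-app -n my-namespace
sleep 60
kubectl apply -f my-app-deployment.yaml -n my-namespace
</code></pre>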
| glv |
<p>The <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#networkpolicyingressrule-v1-networking-k8s-io" rel="nofollow noreferrer">K8s documentation on NetworkPolicy</a> states that if the <code>spec.ingress.from.ports</code> array is not specified, traffic is allowed from any <code>port</code> for the matching <code>peers</code> array:</p>
<blockquote>
<p>List of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. <strong>If this field is empty or missing, this rule matches all ports (traffic not restricted by port)</strong>. If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list.</p>
</blockquote>
<p>But what if one of the <code>port</code> items inside of <code>ports</code> is created like this?</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: empty-port-item-network-policy
namespace: some-namespace
spec:
podSelector:
matchLabels:
app: front
ingress:
- from:
- podSelector: {}
ports:
- {}
</code></pre>
<p>When I <code>describe</code> this NetworkPolicy, I don't get enough information (ignore <code>PodSelector=none</code>):</p>
<pre><code>Created on: 2023-01-02 18:58:32 +0200 IST
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=front
Allowing ingress traffic:
To Port: <nil>/TCP
From:
PodSelector: <none>
Not affecting egress traffic
Policy Types: Ingress
</code></pre>
<p>What does <code>To Port: <nil>/TCP</code> mean here? Any port? all ports?</p>
| zerohedge | <p>You are passing an empty array to the ports when using:</p>
<pre><code> ports:
- {}
</code></pre>
<p>This means that no ports are allowed. Everything is blocked.</p>
<p>When omitting the ports entry, you would allow traffic to all ports.
kubectl describe output:</p>
<pre><code> Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
</code></pre>
<p>The yaml would be something like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: empty-port-item-network-policy
namespace: some-namespace
spec:
podSelector:
matchLabels:
app: front
ingress:
- from:
- podSelector: {}
</code></pre>
| akathimi |
<p>My team is experiencing an issue with longhorn where sometimes our RWX PVCs are indefinitely terminating after running <code>kubectl delete</code>. A symptom of this is that the finalizers never get removed.</p>
<p>It was explained to me that the longhorn-csi-plugin containers should execute <code>ControllerUnpublishVolume</code> when no workload is using the volume and then execute <code>DeleteVolume</code> to remove the finalizer. Upon inspection of the logs when this issue occurs, the <code>ControllerUnpublishVolume</code> event looks unsuccessful and <code>DeleteVolume</code> is never called. It looks like the response to <code>ControllerUnpublishVolume</code> is <code>{}</code> which does not seem right to me. The following logs are abridged and only include lines relevant to the volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1:</p>
<pre><code>2023-04-04T19:28:52.993226550Z time="2023-04-04T19:28:52Z" level=info msg="CreateVolume: creating a volume by API client, name: pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1, size: 21474836480 accessMode: rwx"
...
2023-04-04T19:29:01.119651932Z time="2023-04-04T19:29:01Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume created at 2023-04-04 19:29:01.119514295 +0000 UTC m=+2789775.717296902"
2023-04-04T19:29:01.123721718Z time="2023-04-04T19:29:01Z" level=info msg="CreateVolume: rsp: {\"volume\":{\"capacity_bytes\":21474836480,\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}}"
...
2023-04-04T19:29:01.355417228Z time="2023-04-04T19:29:01Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
...
2023-04-04T19:29:01.362958346Z time="2023-04-04T19:29:01Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node1.example.com"
2023-04-04T19:29:01.363013363Z time="2023-04-04T19:29:01Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node1.example.com"
...
2023-04-04T19:29:13.477036437Z time="2023-04-04T19:29:13Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:29:13.476922567 +0000 UTC m=+2789788.074705223"
2023-04-04T19:29:13.479320941Z time="2023-04-04T19:29:13Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node1.example.com"
...
2023-04-04T19:31:59.230234638Z time="2023-04-04T19:31:59Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T19:31:59.233597451Z time="2023-04-04T19:31:59Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com"
...
2023-04-04T19:32:01.242531135Z time="2023-04-04T19:32:01Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 19:32:01.242373423 +0000 UTC m=+2789955.840156051"
2023-04-04T19:32:01.245744768Z time="2023-04-04T19:32:01Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com"
...
2023-04-04T19:32:01.268399507Z time="2023-04-04T19:32:01Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T19:32:01.270584270Z time="2023-04-04T19:32:01Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com"
...
2023-04-04T19:32:02.512117513Z time="2023-04-04T19:32:02Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
...
2023-04-04T19:32:02.528810094Z time="2023-04-04T19:32:02Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node2.example.com"
2023-04-04T19:32:02.528829340Z time="2023-04-04T19:32:02Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node2.example.com"
...
2023-04-04T19:32:03.273890290Z time="2023-04-04T19:32:03Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 19:32:03.272811565 +0000 UTC m=+2789957.870594214"
2023-04-04T19:32:03.289152604Z time="2023-04-04T19:32:03Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com"
...
2023-04-04T19:32:03.760644399Z time="2023-04-04T19:32:03Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T19:32:03.770050254Z time="2023-04-04T19:32:03Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node1.example.com"
2023-04-04T19:32:03.770093689Z time="2023-04-04T19:32:03Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node1.example.com"
...
2023-04-04T19:32:04.654700819Z time="2023-04-04T19:32:04Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:32:04.654500435 +0000 UTC m=+2789959.252283106"
2023-04-04T19:32:04.657991819Z time="2023-04-04T19:32:04Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node2.example.com"
2023-04-04T19:32:04.658583043Z time="2023-04-04T19:32:04Z" level=info msg="ControllerPublishVolume: rsp: {}"
...
2023-04-04T19:32:05.822264526Z time="2023-04-04T19:32:05Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:32:05.82208573 +0000 UTC m=+2789960.419868382"
2023-04-04T19:32:05.826506892Z time="2023-04-04T19:32:05Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node1.example.com"
2023-04-04T19:32:05.827051042Z time="2023-04-04T19:32:05Z" level=info msg="ControllerPublishVolume: rsp: {}"
...
2023-04-04T20:07:03.798730851Z time="2023-04-04T20:07:03Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T20:07:03.802360032Z time="2023-04-04T20:07:03Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com"
2023-04-04T20:07:05.808796454Z time="2023-04-04T20:07:05Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 20:07:05.808607472 +0000 UTC m=+2792060.406390073"
2023-04-04T20:07:05.811653301Z time="2023-04-04T20:07:05Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com"
...
2023-04-04T20:07:11.017524059Z time="2023-04-04T20:07:11Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T20:07:11.024127188Z time="2023-04-04T20:07:11Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node2.example.com"
...
2023-04-04T20:07:13.047834933Z time="2023-04-04T20:07:13Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node2.example.com"
2023-04-04T20:07:13.047839690Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: rsp: {}"
2023-04-04T20:07:13.378731066Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T20:07:13.384575838Z time="2023-04-04T20:07:13Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node2.example.com"
...
2023-04-04T20:07:13.385792532Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: rsp: {}"
2023-04-04T20:07:15.386784410Z time="2023-04-04T20:07:15Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 20:07:15.386596264 +0000 UTC m=+2792069.984378910"
2023-04-04T20:07:15.391059508Z time="2023-04-04T20:07:15Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node2.example.com"
</code></pre>
<p>We are using Longhorn v1.2.2 on Rancher RKE v2.6.5.</p>
<p>We would expect that DeleteVolume would be called, the finalizers would be removed, and the PVC would be deleted, but none of those events occur.</p>
<p>As a workaround we tried forcefully removing the finalizer using the command <code>kubectl patch pvc my-pvc -p '{"metadata":{"finalizers":null}}' --type=merge</code>. This worked, but is not ideal to do every time.</p>
<p>Any ideas about what is wrong? If not, what should be my next steps in investigating this issue?</p>
| Josh | <p>Probably some Pod mounts the volume you are trying to remove.</p>
<p>Check out this old answer of mine: <a href="https://stackoverflow.com/a/75768413/21404450">https://stackoverflow.com/a/75768413/21404450</a></p>
| glv |
<p>I am using <code>kubectl</code>. How to find dangling resources by the namespace?</p>
<p>For example, I have some namespaces.</p>
<pre><code>kubectl get ingress --all-namespaces |awk '{print $1}'
</code></pre>
<p>Those are supposed to be deleted. If I look for those namespaces on GKE, there is no result returned.</p>
<p>So why is <code>kubectl</code> showing me those namespaces?</p>
| Rodrigo | <p>You can find dangling resources on a particular namespace with the following command:</p>
<pre><code>kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
</code></pre>
<p>If you need to force the namespace deletion, you can do so by removing the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/" rel="nofollow noreferrer">Finalizer</a>:</p>
<p>1.</p>
<pre><code>kubectl get namespace <namespace> -o json > <namespace>.json
</code></pre>
<p>2.</p>
<pre><code>kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f ./<namespace>.json
</code></pre>
| Gabriel Robledo Ahumada |
<p>I have a Google Cloud Composer 1 environment (Airflow 2.1.2) where I want to run an Airflow DAG that utilizes the <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html" rel="nofollow noreferrer">KubernetesPodOperator</a>.</p>
<p>Cloud Composer <a href="https://cloud.google.com/composer/docs/concepts/cloud-storage#folders_in_the_bucket" rel="nofollow noreferrer">makes available</a> to all DAGs a shared file directory for storing application data. The files in the directory reside in a Google Cloud Storage bucket managed by Composer. Composer uses FUSE to map the directory to the path <code>/home/airflow/gcs/data</code> on all of its Airflow worker pods.</p>
<p>In my DAG I run several Kubernetes pods like so:</p>
<pre class="lang-py prettyprint-override"><code> from airflow.contrib.operators import kubernetes_pod_operator
# ...
splitter = kubernetes_pod_operator.KubernetesPodOperator(
task_id='splitter',
name='splitter',
namespace='default',
image='europe-west1-docker.pkg.dev/redacted/splitter:2.3',
cmds=["dotnet", "splitter.dll"],
)
</code></pre>
<p>The application code in all the pods that I run needs to read from and write to the <code>/home/airflow/gcs/data</code> directory. But when I run the DAG my application code is unable to access the directory. Likely this is because Composer has mapped the directory into the worker pods but does not extend this courtesy to my pods.</p>
<p>What do I need to do to give my pods r/w access to the <code>/home/airflow/gcs/data</code> directory?</p>
| urig | <p>Cloud Composer uses FUSE to mount certain directories from Cloud Storage into Airflow worker pods running in Kubernetes. It mounts these with default permissions that cannot be overwritten, because that metadata is not tracked by Google Cloud Storage. A possible solution is to use a bash operator that runs at the beginning of your DAG to copy files to a new directory. Another possible solution can be to use a non-Google Cloud Storage path like a <code>/pod</code> path.</p>
| Jose Gutierrez Paliza |
<p>I've been building and destroying couple of terraform projects and then after couple of hours came to a weird error saying:</p>
<pre><code>kubectl get pods
E0117 14:10:23.537699 21524 memcache.go:238] couldn't get current server API group list: Get "https://192.168.59.102:8443/api?t
E0117 14:10:33.558130 21524 memcache.go:238] couldn't get current server API group list: Get "https://192.168.59.102:8443/api?t
</code></pre>
<p>I tried to check everything I could, and even restored and purged data on Docker Desktop, but it didn't help.</p>
<p>my .kube/config :</p>
<pre><code>kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://kubernetes.docker.internal:6443
name: docker-desktop
- cluster:
certificate-authority: C:\Users\dani0\.minikube\ca.crt
extensions:
- extension:
last-update: Tue, 17 Jan 2023 14:04:24 IST
provider: minikube.sigs.k8s.io
version: v1.28.0
name: cluster_info
server: https://192.168.59.102:8443
name: minikube
contexts:
- context:
cluster: docker-desktop
name: docker-desktop
- context:
extensions:
- extension:
last-update: Tue, 17 Jan 2023 14:04:24 IST
provider: minikube.sigs.k8s.io
version: v1.28.0
name: context_info
namespace: default
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: docker-desktop
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
- name: minikube
user:
client-certificate: C:\Users\dani0\.minikube\profiles\minikube\client.crt
client-key: C:\Users\dani0\.minikube\profiles\minikube\client.key
</code></pre>
| zolo | <p>well I've fixed it by deleting the minikube cluster with "minikube delete" and then just "minikube start" and now it seems to work to me :)</p>
| zolo |
<p>I have a kubernetes cluster with two node groups in AWS. One for Spot instances and the other for on demand instances. I have installed Vault and CSI driver to manage the secrets.</p>
<p>When I create this deployment everything works fine, the pods are created, run and the secrets are there.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: vault-test
name: vault-test
namespace: development
spec:
replicas: 1
selector:
matchLabels:
app: vault-test
strategy: {}
template:
metadata:
labels:
app: vault-test
spec:
containers:
- image: jweissig/app:0.0.1
name: app
envFrom:
- secretRef:
name: dev-secrets
resources: {}
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets"
readOnly: true
serviceAccountName: vault-sa
volumes:
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: dev
status: {}
</code></pre>
<p>But when I add nodeAffinity and tolerations to create the pods in the Spot machines the pods stay in a ContainerCreating status with the following error:</p>
<blockquote>
<p>Warning FailedMount 10m (x716 over 24h) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod development/pod-name, err: error connecting to provider "vault": provider not found: provider "vault"</p>
</blockquote>
<p>I created two applications to test the Vault behavior: one with no tolerations, just for testing, and the real one with the tolerations and nodeAffinity. After a lot of tests I realized the problem was where the pods were being scheduled, but I didn't understand why it behaved that way.</p>
| Mateo Arboleda | <p>The problem is the vault CSI driver configuration, the <code>DaemonSet</code> is not running in all nodes because of the missing <code>tolerations</code>. I had to add the <code>tolerations</code> to the <code>DaemonSet</code> manifest so there is a <code>Pod</code> in all nodes, and this way all nodes know what vault is.</p>
| Mateo Arboleda |
<p>I want to monitor a pod's external transmit and receive traffic. External traffic means traffic that is sent to or received from outside of the k8s cluster.</p>
<p>For example, NodePort, LoadBalancer and Ingress types of service.
I have <code>container_network_receive_bytes_total</code> and <code>container_network_transmit_bytes_total</code> in Prometheus metrics, but I can't separate internal and external traffic with them. I also used <a href="https://github.com/k8spacket/k8spacket" rel="nofollow noreferrer">k8spacket</a> but it did not satisfy my need.</p>
<p>What should I do?</p>
| Ali Rezvani | <p>I think the only way to get the information you need is to install tcpdump in your Pod and exploit its potential.</p>
<p>If you don't want to dirty your application, you can think of creating a management Deployment where you install the tools you need to manage this type of request and connect to that.</p>
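<p>In either case, a rough sketch of the capture looks like this (pod name, namespace, interface and the cluster CIDR are assumptions); the BPF filter drops pod-to-pod traffic so that only traffic crossing the cluster boundary remains:</p>
<pre><code>kubectl exec -n my-namespace my-pod -- \
  tcpdump -i eth0 -nn 'not (src net 10.0.0.0/8 and dst net 10.0.0.0/8)'
</code></pre>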
<p>I don't know which provider you have installed Kubernetes on, but there is very provider-specific documentation on the subject --></p>
<p>OpenShift: <a href="https://access.redhat.com/solutions/4569211" rel="nofollow noreferrer">https://access.redhat.com/solutions/4569211</a></p>
<p>Azure: <a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/packet-capture-pod-level" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/packet-capture-pod-level</a></p>
<p><a href="https://www.redhat.com/sysadmin/capture-packets-kubernetes-ksniff" rel="nofollow noreferrer">https://www.redhat.com/sysadmin/capture-packets-kubernetes-ksniff</a></p>
| glv |
<p>The Node JS app that I'm trying to deploy in Kubernetes runs on <code>express js</code> as a backend framework. The repository is managed via <code>Bitbucket</code>. The application is a microservice and the pipeline manifest file for building the Docker image is written this way:</p>
<pre><code>options:
docker: true
image: node:14.17.0
pipelines:
branches:
test:
- step:
services:
- docker
name: Build and push new docker image
deployment: dev
script:
- yarn install
- yarn build
- yarn test
- yarn lint
- make deploy.dev
- docker login -u $DOCKER_HUB_USERNAME -p $DOCKER_HUB_PASSWORD
- docker build -t testapp/helloapp:latest -f ./Dockerfile .
- docker push testapp/helloapp
caches:
- docker # adds docker layer caching
</code></pre>
<p>The K8s cluster is hosted on <a href="https://upcloud.com/community/tutorials/deploy-kubernetes-using-kubespray/" rel="nofollow noreferrer">cloud</a> but does not have an internal <code>Load Balancer</code> of its own. The K8s cluster version is <code>v1.22.4</code> and <code>MetalLB v0.11.0</code> is configured to serve the load-balancing purpose. To expose the K8s <code>service</code>, a Cloudflare Tunnel is configured as a K8s <code>deployment</code>.</p>
<p>So, this is the manifest file set-up used for building the Docker image. The pipeline deploys successfully and in the Kubernetes part, this is the <code>service</code> and <code>deployment</code> manifest:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: helloapp
labels:
app: helloapp
spec:
type: NodePort
ports:
- port: 5000
targetPort: 7000
protocol: TCP
name: https
selector:
app: helloapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloapp
labels:
app: helloapp
spec:
replicas: 3
selector:
matchLabels:
app: helloapp
template:
metadata:
labels:
app: helloapp
spec:
imagePullSecrets:
- name: regcred
containers:
- name: helloapp
image: testapp/helloapp:latest
</code></pre>
<p>Also, here is the <code>Dockerfile</code> snippet to give more clarity on what I have been doing:</p>
<pre><code>FROM node:14.17.0
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node app.js
EXPOSE 8100
</code></pre>
<p>Just to give some context, the service and deployment work fine, with no <code>CrashLoopBackOff</code> or any other errors.
My doubt here is that the <code>dist</code> directory is not getting deployed to Docker Hub, as it is generated during <code>npm build</code>. How can I deploy the app along with the <code>dist</code> directory without having to worry about security risks? Any feedback and suggestions on where I could add a script to pull the <code>dist</code> directory would be appreciated.</p>
| Sebastian | <p>Eventually, I could resolve the issue. The issue was trivial yet bothering. In the <code>Dockerfile</code>, there was a missing script, i.e., <code>npm run build</code>. So, here is the final <code>Dockerfile</code> I used it for building the <code>dist</code> directory along with other requirements:</p>
<pre><code>FROM node:14.17.0
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
RUN npm run build <------ the missing script (runs after COPY so the source files are present for the build)
CMD node app.js
EXPOSE 8100
</code></pre>
<p>This way, the entire <code>dist</code> directory gets built inside the container. Also, I removed all the <code>.ENV</code> dependencies from the <code>dist</code> directory and stored them as Kubernetes secrets in <code>base64</code> format.</p>
| Sebastian |
<p>I get the following message when I run the <code>skaffold dev</code> command:</p>
<blockquote>
<p>Build Failed. Cannot connect to the Docker daemon at unix:
///var/run/docker.sock. Check if docker is running.</p>
</blockquote>
<p>Tools versions:</p>
<ol>
<li>MacOS Desktop Docker: 4.13.0 (89412)</li>
<li>Kubernetes: v1.25.2</li>
<li>Skaffold: v2.0.0</li>
</ol>
<p>Docker runs correctly; in fact, I can create resources on the cluster and create containers with the docker-cli commands. I successfully launch both docker info and docker version.</p>
<p>The command <code>/Applications/Docker.app/Contents/MacOS/com.docker.diagnose check</code></p>
<p>reports</p>
<blockquote>
<p>"No fatal errors detected."</p>
</blockquote>
<p>(all tests pass).</p>
<p>I also tried setting the <code>DOCKER_HOST</code> variable:
<code>DOCKER_HOST = /Users/<my folder>/.docker/run/docker.sock skaffold dev</code></p>
<p>Result:</p>
<pre><code>invalid skaffold config: error getting docker client: unable to parse docker host `/Users/<my folder>/.docker/run/docker.sock`
</code></pre>
<p>My Skaffold.yaml file</p>
<pre><code>apiVersion: skaffold/v3
kind: Config
metadata:
name: test
build:
local:
push: false
artifacts:
- image: <myimage>
context: <folder>
docker:
dockerfile: Dockerfile
manifests:
rawYaml:
- infra/k8s/deployment.yaml
</code></pre>
<p>How can I solve this?</p>
| Davide | <p>The solution was to set the variable DOCKER_HOST before launching the <code>skaffold dev</code> command:</p>
<pre><code>DOCKER_HOST="unix:///Users/<you>/.docker/run/docker.sock" skaffold dev
</code></pre>
| Davide |
<p>I'm using the Apache Flink Kubernetes operator to deploy a standalone job on an Application cluster setup.</p>
<p>I have setup the following files using the Flink official documentation - <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/resource-providers/standalone/kubernetes/" rel="nofollow noreferrer">Link</a></p>
<ol>
<li>jobmanager-application-non-ha.yaml</li>
<li>taskmanager-job-deployment.yaml</li>
<li>flink-configuration-configmap.yaml</li>
<li>jobmanager-service.yaml</li>
</ol>
<p>I have not changed any of the configurations in these files and am trying to run a simple WordCount example from the Flink examples using the Apache Flink Operator.</p>
<p>After running the kubectl commands to set up the job manager and the task manager, the job manager goes into a NotReady state while the task manager goes into a CrashLoopBackOff loop.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
flink-jobmanager-28k4b 1/2 NotReady 2 (4m24s ago) 16m
flink-kubernetes-operator-6585dddd97-9hjp4 2/2 Running 0 10d
flink-taskmanager-6bb88468d7-ggx8t 1/2 CrashLoopBackOff 9 (2m21s ago) 15m
</code></pre>
<p>The job manager logs look like this</p>
<pre><code>org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout
at org.apache.flink.runtime.jobmaster.slotpool.PhysicalSlotRequestBulkCheckerImpl.lambda$schedulePendingRequestBulkWithTimestampCheck$0(PhysicalSlotRequestBulkCheckerImpl.java:86) ~[flink-dist-1.16.0.jar:1.16.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:?]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:453) ~[flink-rpc-akka_be40712e-8b2e-47cd-baaf-f0149cf2604d.jar:1.16.0]
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_be40712e-8b2e-47cd-baaf-f0149cf2604d.jar:1.16.0]
</code></pre>
<p>The Task manager it seems cannot connect to the job manager</p>
<pre><code>2023-01-28 19:21:47,647 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Connecting to ResourceManager akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager_*(00000000000000000000000000000000).
2023-01-28 19:21:57,766 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Could not resolve ResourceManager address akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager_*, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager_*.
2023-01-28 19:22:08,036 INFO akka.remote.transport.ProtocolStateActor [] - No response from remote for outbound association. Associate timed out after [20000 ms].
2023-01-28 19:22:08,057 WARN akka.remote.ReliableDeliverySupervisor [] - Association with remote system [akka.tcp://flink@flink-jobmanager:6123] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@flink-jobmanager:6123]] Caused by: [No response from remote for outbound association. Associate timed out after [20000 ms].]
2023-01-28 19:22:08,069 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Could not resolve ResourceManager address akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager_*, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager_*.
2023-01-28 19:22:08,308 WARN akka.remote.transport.netty.NettyTransport [] - Remote connection to [null] failed with org.jboss.netty.channel.ConnectTimeoutException: connection timed out: flink-jobmanager/100.127.18.9:6123
</code></pre>
<p>The flink-configuration-configmap.yaml looks like this</p>
<pre><code> flink-conf.yaml: |+
jobmanager.rpc.address: flink-jobmanager
taskmanager.numberOfTaskSlots: 2
blob.server.port: 6124
jobmanager.rpc.port: 6123
taskmanager.rpc.port: 6122
queryable-state.proxy.ports: 6125
jobmanager.memory.process.size: 1600m
taskmanager.memory.process.size: 1728m
parallelism.default: 2
</code></pre>
<p>This is what the pom.xml looks like - <a href="https://github.com/shroffrushabh/WordCount/blob/main/pom.xml" rel="nofollow noreferrer">Link</a></p>
| user1386101 | <p>You deployed the Kubernetes Operator in the namespace, but you did not create the CRDs the Operator requires. Instead you tried to create a standalone Flink Kubernetes cluster.</p>
<p>The Flink Operator makes it a lot easier to deploy your Flink jobs, you only need to deploy the operator itself and <code>FlinkDeployment</code>/<code>FlinkSessionJob</code> CRDs. The operator will manage your deployment after.</p>
<p>Please use this documentation for the Kubernetes Operator: <a href="https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/try-flink-kubernetes-operator/quick-start/" rel="nofollow noreferrer">Link</a></p>
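<p>For reference, a <code>FlinkDeployment</code> for the WordCount example looks roughly like this (adapted from the operator quick-start example; the image tag, resources and parallelism are assumptions you should adjust):</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: wordcount-example
spec:
  image: flink:1.16
  flinkVersion: v1_16
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/WordCount.jar
    parallelism: 2
    upgradeMode: stateless
</code></pre>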
| mateczagany |
<p>I need the URI of requests that reach my <code>myapp</code> pods to be rewritten to remove the prefix <code>/foo</code> from the path. For example, a URI <code>/foo/bar</code> should be received as <code>/bar</code>. I am using a GCP load balancer that routes traffic directly to pods. I am not using Istio ingress, so Istio has no control over the load balancer's behavior.</p>
<p>I tried creating a <code>VirtualService</code> to handle the path rewrite:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: myapp-route
spec:
hosts:
- myapp
http:
- match:
- uri:
prefix: "/foo/"
rewrite:
uri: "/"
route:
- destination:
host: myapp
</code></pre>
<p>(This may not be <em>exactly</em> correct as I adapted/simplified what I tried for the question.)</p>
<p>This works when sending requests from a pod with an Istio sidecar to the <code>myapp</code> service, but not from the load balancer. I can see the URI is being rewritten as it goes out from any other pod, not when it's coming into a <code>myapp</code> pod.</p>
<p>How can I get URI rewriting as an incoming rule?</p>
| Drago Rosson | <p>I found <a href="https://github.com/istio/istio/issues/22290#issuecomment-1317595537" rel="nofollow noreferrer">https://github.com/istio/istio/issues/22290#issuecomment-1317595537</a> which shows how to write a custom <code>EnvoyFilter</code> to do path rewriting and adapted it to my needs. I'm not at all sure if the directives to specify how and where the filter should be applied are correct, but it at least does the prefix rewrite as an inbound rule.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: myapp-rewrite-filter
spec:
workloadSelector:
labels:
app: myapp
configPatches:
# The first patch adds the lua filter to the listener/http connection manager
- applyTo: HTTP_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.router"
patch:
operation: INSERT_BEFORE
value: # lua filter specification
name: envoy.filters.http.lua
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
inlineCode: |
function remove_prefix(s, prefix)
return (s:sub(0, #prefix) == prefix) and s:sub(#prefix+1) or s
end
function envoy_on_request(request_handle)
local filter_name = "SIMPLE_PATH_REWRITER"
request_handle:logDebug(filter_name .. ": Starting")
local prefix = "/foo"
local path = request_handle:headers():get(":path")
local unprefixed = remove_prefix(path, prefix)
request_handle:headers():replace(":path", unprefixed)
local msg = filter_name .. ": Path " .. path .. " rewritten to " .. unprefixed
request_handle:logDebug(msg)
end
</code></pre>
| Drago Rosson |
<p>Is there a way to run a privileged pod that I can use to install RPMs and packages on the host where the pod is running?</p>
<p>Thanks!</p>
<p>If I just run a privileged pod, exec into it, and try to run the command:</p>
<pre><code>rpm -i <package.rpm>
</code></pre>
<p>I would install that RPM in the pod itself, not on the physical machine.</p>
| Lorenzo Bassi | <p>Using a Pod that connects to the Node where it "runs" goes against the containerization pattern; take a look at the images below this link: <a href="https://www.docker.com/resources/what-container/" rel="nofollow noreferrer">https://www.docker.com/resources/what-container/</a></p>
<p>However you can do something, perhaps creating a bridge as done here: <a href="https://github.com/kubernetes-sigs/kind/issues/1200#issuecomment-568735188" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kind/issues/1200#issuecomment-568735188</a></p>
| glv |
<p>I want to run some GPU workloads on my bare-metal k8s cluster. So I have installed the nvidia containerd runtime engine on my cluster. But the cilium CNI pods crash when I make nvidia the default runtime. (I'll post about that somewhere else)</p>
<p>I'm thinking I should be able to work around this problem by scheduling only the gpu pods on the nvidia runtime and leave runc as the default. Is it possible to specify different runtime engines for different workloads? Is this a good workaround? If so, how do I configure it?</p>
<p>This is how I've install the nvidia drivers and containerd runtime <a href="https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html#option-2-installing-kubernetes-using-kubeadm" rel="nofollow noreferrer">https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html#option-2-installing-kubernetes-using-kubeadm</a></p>
<p>I found this documentation, but it's a little dry <a href="https://kubernetes.io/docs/concepts/containers/runtime-class/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/runtime-class/</a></p>
| d s | <p>well... I feel dumb for not reading the docs more closely. Here I am to answer my own question.</p>
<ol>
<li>create a RuntimeClass like this:</li>
</ol>
<pre><code>kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
name: nvidia
handler: nvidia
</code></pre>
<ol start="2">
<li>add <code>runtimeClassName: nvidia</code> to the pod spec of any Pods whose containers you want to run with the nvidia containerd engine, as in the sketch after this list.</li>
</ol>
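<p>A minimal sketch of step 2 (the pod name and image are assumptions):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  runtimeClassName: nvidia
  containers:
  - name: cuda
    image: nvidia/cuda:12.0.0-base-ubuntu22.04   # any CUDA-enabled image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
</code></pre>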
<p>That's all. It just works.</p>
| d s |
<p>I want to store the data from a PostgreSQL database in a <code>persistentvolumeclaim</code>.
(on a managed Kubernetes cluster on Microsoft Azure)</p>
<p><strong>And I am not sure which access mode to choose.</strong></p>
<p>Looking at the available <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access modes</a>:</p>
<ul>
<li><code>ReadWriteOnce</code></li>
<li><code>ReadOnlyMany</code></li>
<li><code>ReadWriteMany</code></li>
<li><code>ReadWriteOncePod</code></li>
</ul>
<p>I would say, I should choose either <code>ReadWriteOnce</code> or <code>ReadWriteMany</code>.</p>
<p>Thinking about the fact that I might want to migrate the database pod to another node pool at some point, I would intuitively choose <code>ReadWriteMany</code>.</p>
<p>Is there any disadvantage if I choose <code>ReadWriteMany</code> instead of <code>ReadWriteOnce</code>?</p>
| zingi | <p>You are correct with the migration, where the access mode should be set to <code>ReadWriteMany</code>.</p>
<p>Generally, if you have access mode <code>ReadWriteOnce</code> and a multi-node cluster on Microsoft Azure, where multiple pods need to access the database, then Kubernetes will force all those pods to be scheduled on the node that mounts the volume first. That node can become overloaded with pods. Now, if you have a <code>DaemonSet</code> where one pod is scheduled on each node, this could pose a problem. In this scenario, you are best off tagging the <code>PVC</code> and <code>PV</code> with access mode <code>ReadWriteMany</code>.</p>
<p>Therefore</p>
<ul>
<li>if you want multiple pods to be scheduled on multiple nodes and have read and write access to the DB, use access mode <code>ReadWriteMany</code> (see the sketch after this list)</li>
<li>if you logically need to have the pods/DB on one node and know for sure that you will keep them on that one node, use access mode <code>ReadWriteOnce</code></li>
</ul>
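<p>A minimal sketch for the <code>ReadWriteMany</code> case (the storage class name is an assumption; on AKS, RWX volumes are typically backed by Azure Files, since Azure Disk only supports single-node attachment):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 10Gi
</code></pre>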
| Pavol Krajkovič |
<p>I set up a local kubernetes cluster with minikube. On my cluster I have only one deployment running and one service attached to it. I used a NodePort on port 30100 to expose the service, so I can access it from my browser or via curl.</p>
<p>here is the <code>python-server.yml</code> file I use to setup the cluster:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: python-server-deployment
namespace: kubernetes-hello-world
labels:
app: python-server
spec:
replicas: 1
selector:
matchLabels:
app: python-server
template:
metadata:
labels:
app: python-server
spec:
containers:
- name: python-hello-world
image: hello-world-python:latest
imagePullPolicy: Never
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: python-server-internal-service
namespace: kubernetes-hello-world
spec:
type: NodePort
selector:
app: python-server
ports:
- protocol: TCP
port: 80
targetPort: 5000
nodePort: 30100
</code></pre>
<p>my <code>python-hello-world</code> image is based on this python file:</p>
<pre class="lang-py prettyprint-override"><code>from http.server import BaseHTTPRequestHandler, HTTPServer
class MyServer(BaseHTTPRequestHandler):
def do_GET(self):
html = """
<!DOCTYPE html>
<html>
<head>
<title>Hello World</title>
<meta charset="utf-8">
</head>
<body>
<h1>Hello World</h1>
</body>
</html>
"""
self.send_response(200)
self.send_header('Access-Control-Allow-Origin', '*')
self.send_header('Content-type', 'text/html')
self.end_headers()
self.wfile.write(bytes(html, "utf-8"))
def run():
addr = ('', 5000)
httpd = HTTPServer(addr, MyServer)
httpd.serve_forever()
if __name__ == '__main__':
run()
</code></pre>
<p>When I run the cluster I can, as expected, receive the hello world HTML with <code>curl {node_ip}:30100</code>. But when I try to access my service via my browser with the same ip:port I get a timeout.
I read that this can be caused by missing headers, but I think I have all the necessary ones covered in my python file, so what else could cause this?</p>
| Cake | <p>It is not said that you reach the IP of your node (you should provide some more information about the environment if necessary).</p>
<p>But you could port forward the service and reach it easily.</p>
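<p>For example, using the namespace and service name from your manifests:</p>
<pre><code>kubectl port-forward -n kubernetes-hello-world service/python-server-internal-service 8080:80
</code></pre>
<p>and then open <code>http://localhost:8080</code> in the browser. With minikube specifically, <code>minikube service python-server-internal-service -n kubernetes-hello-world --url</code> prints a URL that is reachable from the host, which matters when the docker driver is used and the node IP is not directly routable.</p>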
<p>Take a look here:
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</a></p>
<p>Some other doc:
<a href="https://stackoverflow.com/questions/51468491/how-kubectl-port-forward-works">How kubectl port-forward works?</a></p>
| glv |
<p>My Rancher desktop was working just fine, until today when I switched container runtime from containerd to dockerd. When I wanted to change it back to containerd, it says:</p>
<pre><code>Error Starting Kubernetes
Error: unable to verify the first certificate
</code></pre>
<p>Some recent logfile lines:</p>
<pre><code> client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUV1eXhYdFYvTDZOQmZsZVV0Mnp5ekhNUmlzK2xXRzUxUzBlWklKMmZ5MHJvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFNGdQODBWNllIVzBMSW13Q3lBT2RWT1FzeGNhcnlsWU8zMm1YUFNvQ2Z2aTBvL29UcklMSApCV2NZdUt3VnVuK1liS3hEb0VackdvbTJ2bFJTWkZUZTZ3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
2022-09-02T13:03:15.834Z: Error starting lima: Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1530:34)
at TLSSocket.emit (node:events:390:28)
at TLSSocket._finishInit (node:_tls_wrap:944:8)
at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:725:12) {
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
}
</code></pre>
<p>I tried reinstalling, a factory reset, etc., but no luck. I am using version 1.24.4.</p>
| susgreg | <p><strong>TLDR: Try turning off Docker/Something that is binding to port 6443.</strong> Reset Kubernetes in Rancher Desktop, then try again.</p>
<p>Try checking if there is anything else listening on port 6443 which is needed by kubernetes:rancher-desktop.</p>
<p>In my case, <code>lsof -i :6443</code> gave me...</p>
<pre><code> ~ lsof -i :6443
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 63385 ~~~~~~~~~~~~ 150u IPv4 0x44822db677e8e087 0t0 TCP localhost:sun-sr-https (LISTEN)
ssh 82481 ~~~~~~~~~~~~ 27u IPv4 0x44822db677ebb1e7 0t0 TCP *:sun-sr-https (LISTEN)
</code></pre>
| Winston Boson |
<p>The pinniped CLI is not working on Windows.
pinniped-cli-windows-amd64.exe is downloaded, but when I type pinniped, it's not recognized.</p>
<pre><code>C:\Users\hello>pinniped
pinniped is not recognized as an internal command or external command, operable program or batch file.
</code></pre>
<p>It seems Windows is not recognizing this .exe file as published by a valid publisher.</p>
<p>pinniped should show the pinniped CLI options and be recognized as a command. I created a folder called pinniped, copied the .exe file into it, and tried again... that didn't work.</p>
| Chai | <p>I have faced the same issue, so i had purged the cache here: C:\Users\user.kube\cache</p>
<p>And then, i have modified the path of the program pinniped in the config file below, at the line <strong>command</strong> (obviously, the program pinniped itself has to be present in this path) :</p>
<p>C:\Users\user.kube\config</p>
<ul>
<li>name: cluster-np-a2op-cluster-pinniped
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1</li>
</ul>
<p>.....
.....
.....</p>
<ul>
<li>--upstream-identity-provider-flow=browser_authcode
command: <strong>C:\Program Files\Pinniped\pinniped.exe</strong></li>
</ul>
<p>env: []</p>
<p>.....
.....
.....</p>
<p>Hope this will help.</p>
<p>;)</p>
| Lawar |
<p>When I run the Kubernetes Dashboard in Windows Docker Desktop and click on "pods", either nothing is shown</p>
<blockquote>
<p>There is nothing to display here No resources found.</p>
</blockquote>
<p>or I get this error:</p>
<blockquote>
<p>deployments.apps is forbidden: User
"system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard"
cannot list resource "deployments" in API group "apps" in the
namespace "default"</p>
</blockquote>
<p>Was there anything running? Yes.</p>
<p><a href="https://i.stack.imgur.com/GAgsW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GAgsW.png" alt="enter image description here" /></a></p>
<blockquote>
<p><strong>How can I get an overview of my pods?</strong></p>
</blockquote>
<p>What's the config? In the Windows Docker Desktop environment, I started with a fresh Kubernetes. I removed any old user "./kube/config" file.</p>
<p>To get the Kubernetes dashboard runnig, I did the procedure:</p>
<ol>
<li><p>Get the dashboard: kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml</a></p>
</li>
<li><p>Because generating tokens via a standard procedure (as found in many places) did not work, I took the alternative shortcut:</p>
</li>
</ol>
<pre><code>kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-skip-login"}]'
</code></pre>
<ol start="3">
<li><p>After typing "kubectl proxy" the result is: Starting to serve on 127.0.0.1:8001</p>
</li>
<li><p>In a browser I started the dashboard:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/workloads?namespace=default</p>
</li>
</ol>
<p>After clicking the "Skip" button, the dashboard opened.</p>
<p>Clicking on "Pods" (and nearly all other items) gave this error:</p>
<blockquote>
<p>pods is forbidden: User
"system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard"
cannot list resource "pods" in API group "" in the namespace
"kubernetes-dashboard" (could be "default" as well)</p>
</blockquote>
<p>It did not matter whether I chose the default namespace.</p>
<p><strong>ALTERNATIVE:</strong> As an alternative I tried to bind the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole.</p>
<ol>
<li>Preparations: create this file:</li>
</ol>
<blockquote>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
</blockquote>
<pre><code>$ kubectl apply -f s.yml
</code></pre>
<p>Create this file:</p>
<blockquote>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
</blockquote>
<pre><code>$ kubectl apply -f r.yml
</code></pre>
<p>Then run this command:</p>
<pre><code>$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
</code></pre>
<p>This (or similar alternative) command gives a lot of errors.</p>
<p>Breaking this command down in parts: kubectl -n kubernetes-dashboard get sa/admin-user ... gives:</p>
<p><a href="https://i.stack.imgur.com/35VdV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/35VdV.png" alt="enter image description here" /></a></p>
<p>This command: kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}" gives no result.</p>
| tm1701 | <p>It's definitely a Permissions issue.</p>
<p>Binds the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole.</p>
<p>Otherwise it doesn't have the privileges to be able to collect data from the cluster.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: dashboard-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: NAMESPACE-WHERE-DASHBOARD-IS
</code></pre>
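<p>If you prefer logging in with a token instead of the skip option: on Kubernetes 1.24+ a ServiceAccount no longer gets a token Secret created automatically (which is why the <code>jsonpath="{.secrets[0].name}"</code> lookup returns nothing). You can request a token directly instead, for example for the ServiceAccount you bound:</p>
<pre><code>kubectl -n kubernetes-dashboard create token kubernetes-dashboard
</code></pre>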
| glv |
<p>I want to copy files from a Kubernetes pod to the local machine, but it shows an error like this:</p>
<pre><code>> kubectl cp texhub-server-service-6f8d688c77-kvhgg:"/app/core.21" /Users/xiaoqiangjiang/core.21
tar: removing leading '/' from member names
Dropping out copy after 0 retries
error: unexpected EOF
</code></pre>
<p>Why did this happen? What should I do to fix this issue? I have already tried adding the retries parameter, but I am still facing the issue:</p>
<pre><code>> kubectl cp texhub-server-service-6f8d688c77-kvhgg:"/app/core.21" /Users/xiaoqiangjiang/core.21 --retries=3
tar: removing leading '/' from member names
Resuming copy at 888832 bytes, retry 0/3
tar: removing leading '/' from member names
Resuming copy at 1505280 bytes, retry 1/3
tar: removing leading '/' from member names
Resuming copy at 2125824 bytes, retry 2/3
tar: removing leading '/' from member names
Dropping out copy after 3 retries
error: unexpected EOF
</code></pre>
<p>this is the pods list:</p>
<pre><code>> kubectl get pods
NAME READY STATUS RESTARTS AGE
admin-service-54f5d4b568-vbjg6 1/1 Running 345 (5m1s ago) 6d18h
ai-web-594648499d-hq8gx 1/1 Running 0 58d
chat-server-service-686b4d59f7-x2tq2 2/2 Running 0 24d
cruise-web-74cc96797f-l7g55 1/1 Running 0 68d
cv-render-service-5489d5949c-f6stg 1/1 Running 1 (2d18h ago) 2d18h
cv-service-56cd946c4c-8p4t8 1/1 Running 0 29d
cv-web-647cbccd95-47w7d 1/1 Running 0 29d
dolphin-music-service-597988cd59-bd92w 1/1 Running 2 (19d ago) 129d
dolphin-post-service-79df79d7f7-zww7x 1/1 Running 0 6d18h
official-website-776b6c6dfd-hfbcx 1/1 Running 0 407d
react-admin-new-6ff54c9889-9cwrf 1/1 Running 0 7d21h
rss-sync-service-77dd845898-wqhp2 1/1 Running 27 (31h ago) 8d
snap-web-66dbcc769f-sl6bs 1/1 Running 0 139d
texhub-server-service-6f8d688c77-kvhgg 1/1 Running 3 (20m ago) 30m
texhub-web-f9fd456f7-47ccc 1/1 Running 0 42h
tool-web-7f97bf5484-28j7r 1/1 Running 0 141d
y-websocket-service-768d56f54b-845kf 1/1 Running 12 (3d19h ago) 6d18h
</code></pre>
<p>The core.21 file size is 8 MB.</p>
| Dolphin | <p>See if any of these two alternatives helps you</p>
<pre><code>kubectl cp texhub-server-service-6f8d688c77-kvhgg:"/app/core.21" /Users/xiaoqiangjiang/core.21 --retries=-1
</code></pre>
<p>retries=-1 (any negative value for infinite retrying)</p>
<p>Or</p>
<pre><code>kubectl exec texhub-server-service-6f8d688c77-kvhgg cat /app/core.21 > /Users/xiaoqiangjiang/core.21
</code></pre>
| Raghu |
<p>Since Kubernetes 1.25, the <a href="https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container" rel="noreferrer">ephemeral containers</a> are stable.</p>
<p>I can inject an ephemeral container into a running pod, but when the purpose of the debug container ends I'd like to remove the container from the pod; instead, I still see it in a Terminated state.</p>
<p>The docs currently say that to delete the container I must delete the whole pod, similar to a copied pod, but I don't think that is right.</p>
<p>How can I delete an ephemeral container from a running pod without destroying it?</p>
| dosmanak | <p>Unfortunately it isn't possible to do what you say.</p>
<blockquote>
<p>Ephemeral containers are created using a special <em>ephemeralcontainers</em> handler in the API rather than by adding them directly to <em>pod.spec</em>, so it's not possible to add an ephemeral container using <em>kubectl edit</em>.</p>
</blockquote>
<blockquote>
<p>Like regular containers, you may not change or remove an ephemeral container after you have added it to a Pod.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/#understanding-ephemeral-containers" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/#understanding-ephemeral-containers</a></p>
| glv |
<p>I am trying to install Kubernetes offline (without an internet connection) on an Ubuntu 16.04 machine. Is there any procedure or steps to follow for the installation without internet connectivity?</p>
| CuriousCase | <p>If you have one machine with no external internet connectivity, then there is no option to install k8s. However if you download all the required software/images you need to install k8s beforehand, then it is possible. Simply transfer the data between machine. Please refer to <a href="https://gist.github.com/jgsqware/6595126e17afc6f187666b0296ea0723" rel="nofollow noreferrer">https://gist.github.com/jgsqware/6595126e17afc6f187666b0296ea0723</a></p>
| Pavol Krajkovič |
<p>I am trying to resolve this error from Kubernetes (here is the 'describe pod' output, redacted some parts to keep sensitive data out of this post):</p>
<pre><code> ~ $ kubectl describe pod service-xyz-68c5f4f99-mn7jl -n development
Name: service-xyz-68c5f4f99-mn7jl
Namespace: development
Priority: 0
Node: autogw.snancakp/<REDACTED>
Start Time: Thu, 24 Aug 2023 09:55:21 -0400
Labels: app=service-xyz
pod-template-hash=68c5f4f99
Annotations: cni.projectcalico.org/containerID: 7c93d46d14e9101887d58a7b4627fd1679b8435a439bbe46a96ec11b36d44981
cni.projectcalico.org/podIP: <REDACTED>/32
cni.projectcalico.org/podIPs: <REDACTED>/32
Status: Running
IP: <REDACTED>
IPs:
IP: <REDACTED>
Controlled By: ReplicaSet/service-xyz-68c5f4f99
Containers:
service-xyz:
Container ID: containerd://<REDACTED>
Image: gitlab.personal.local:5050/teamproject/service-xyz:latest
Image ID: gitlab.personal.local:5050/teamproject/service-xyz@sha256:<REDACTED>
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 24 Aug 2023 09:55:27 -0400
Finished: Thu, 24 Aug 2023 09:55:27 -0400
Ready: False
Restart Count: 0
Environment:
QUEUE: service-xyz
EXCHANGE: <set to the key 'environment' in secret 'teamproject-secrets'> Optional: false
RMQ_URL: <set to the key 'rmq_url' in secret 'teamproject-secrets'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rtm67 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-rtm67:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m default-scheduler Successfully assigned development/service-xyz-68c5f4f99-mn7jl to machine.goodmachine
Normal Pulled 6m55s kubelet Successfully pulled image "gitlab.personal.local:5050/teamproject/service-xyz:latest" in 5.137384512s (5.137391155s including waiting)
Normal Created 6m54s kubelet Created container service-xyz
Normal Started 6m54s kubelet Started container service-xyz
Normal Pulling 6m11s (x4 over 7m) kubelet Pulling image "gitlab.personal.local:5050/teamproject/service-xyz:latest"
Warning Failed 6m11s (x3 over 6m52s) kubelet Failed to pull image "gitlab.personal.local:5050/teamproject/service-xyz:latest": rpc error: code = Unknown desc = failed to pull and unpack image "gitlab.personal.local:5050/teamproject/service-xyz:latest": failed to resolve reference "gitlab.personal.local:5050/teamproject/service-xyz:latest": failed to authorize: failed to fetch oauth token: unexpected status: 401 Unauthorized
Warning Failed 6m11s (x3 over 6m52s) kubelet Error: ErrImagePull
Normal BackOff 5m34s (x2 over 6m26s) kubelet Back-off pulling image "gitlab.personal.local:5050/teamproject/service-xyz:latest"
Warning Failed 5m34s (x2 over 6m26s) kubelet Error: ImagePullBackOff
Warning BackOff 119s (x18 over 6m52s) kubelet Back-off restarting failed container service-xyz in pod service-xyz-68c5f4f99-mn7jl_development(561cbfc0-addd-4da3-ae6b-dccc2dfa68eb)
</code></pre>
<p>So the error is an ImagePullBackOff, so I figured I didn't set up my gitlab-ci, secrets, or pod/deployment yaml correct.</p>
<p>Here is <strong><code>gitlab-ci.yml</code></strong>:</p>
<pre class="lang-yaml prettyprint-override"><code>stages:
- build
- deploy
variables:
SERVICE_NAME: "service-xyz"
services:
- name: docker:dind
alias: dockerservice
build_image:
tags:
- self.hosted
- linux
stage: build
image: docker:latest
variables:
DOCKER_HOST: tcp://dockerservice:2375/
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: ""
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
script:
- docker build -t $CI_REGISTRY_IMAGE .
- docker push $CI_REGISTRY_IMAGE
only:
- main
deploy:
tags:
- self.hosted
- linux
stage: deploy
image:
name: bitnami/kubectl:latest
entrypoint: [""]
variables:
NAMESPACE: development
SECRET_NAME: regcred
before_script:
- export KUBECONFIG=$KUBECONFIG_DEVELOPMENT
script:
- kubectl delete secret -n ${NAMESPACE} ${SECRET_NAME} --ignore-not-found
- kubectl create secret -n ${NAMESPACE} docker-registry ${SECRET_NAME} --docker-server=${CI_REGISTRY} --docker-username=${CI_REGISTRY_USER} --docker-password=${CI_REGISTRY_PASSWORD} --docker-email=${GITLAB_USER_EMAIL}
- kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}' -n $NAMESPACE
- kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
- kubectl apply -f deployment.yaml -n $NAMESPACE
only:
- main
when: manual
</code></pre>
<p>Here is the <strong>deployment</strong> file (deployment.yaml) (which references <code>regcred</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service-xyz
spec:
replicas: 1
selector:
matchLabels:
app: service-xyz
template:
metadata:
labels:
app: service-xyz
spec:
containers:
- name: service-xyz
image: gitlab.personal.local:5050/teamproject/service-xyz:latest
imagePullPolicy: Always
env:
- name: QUEUE
value: service-xyz
- name: EXCHANGE
valueFrom:
secretKeyRef:
name: teamproject-secrets
key: environment
- name: RMQ_URL
valueFrom:
secretKeyRef:
name: teamproject-secrets
key: rmq_url
imagePullSecrets:
- name: regcred
</code></pre>
<p>When I look at the secrets in Kubernetes (through Rancher), I see the <code>regcred</code> in the correct place:
<a href="https://i.stack.imgur.com/sPsgT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sPsgT.png" alt="RancherSecretsPage" /></a></p>
<p>I believe I've set up everything correctly, and I don't know why the deployment won't work. The deployment correctly references <code>regcred</code>, but I'm still getting the ImagePullBackOff error.</p>
<p>Can anyone help me out here?</p>
<p>Regards and thanks</p>
| Nulla | <p>As per Gitlab documentation</p>
<p><strong>CI_REGISTRY_PASSWORD</strong></p>
<p>The password to push containers to the project’s GitLab Container Registry. Only available if the Container Registry is enabled for the project. This password value is the same as the CI_JOB_TOKEN and <strong>is valid only as long as the job is running</strong>.</p>
<p><strong>Use the CI_DEPLOY_PASSWORD for long-lived access to the registry.</strong></p>
<p>Based on this, your secret would no longer be valid once the job finishes (which happens right after it creates your Kubernetes Deployment object), so in the meantime your Pod is trying to pull the image from the registry using an invalid token/password (CI_REGISTRY_PASSWORD).</p>
<p>Try creating and using a deploy token instead</p>
<p><a href="https://docs.gitlab.com/ee/user/project/deploy_tokens/index.html#gitlab-deploy-token" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/project/deploy_tokens/index.html#gitlab-deploy-token</a></p>
<p><a href="https://i.stack.imgur.com/csrry.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/csrry.png" alt="enter image description here" /></a></p>
| Raghu |
<p>I want to build the <code>secretName</code> dynamically based on the value of the <code>my-label</code> key (through an <code>ENV</code>). Is this possible?</p>
<p>I used the a similar approach to use label values as ARGs which worked.</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-cronjob
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
metadata:
labels:
my-label: "my-value"
spec:
containers:
- name: my-container
image: my-image
env:
- name: MY_ENV_VAR
valueFrom:
fieldRef:
fieldPath: metadata.labels['my-label']
volumeMounts:
- name: my-secret
mountPath: /path/to/my-secret
volumes:
- name: my-secret
secret:
secretName: my-secret-$(MY_ENV_VAR)
</code></pre>
| Christoph Lang | <p>The fastest solution is surely to use kustomize.</p>
<p>Following your data, first organize the repository by creating a folder called "base" and one called "dev".</p>
<p>Then move the "my-cronjob" manifest into the "base" folder and add a kustomization.yaml file that invokes the CronJob.</p>
<p>Finally, create a file called kustomization.yaml inside the "dev" folder, calling the files from the "base" folder plus the patch.</p>
<p>Example:</p>
<p><a href="https://i.stack.imgur.com/LnpCF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LnpCF.png" alt="Repo structure" /></a></p>
<p>base/kustomization.yaml</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./my-cronjob.yaml
</code></pre>
<p>dev/kustomization.yaml</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
patches:
- target:
kind: CronJob
name: my-cronjob
patch: |-
- op: replace
path: /spec/jobTemplate/spec/template/spec/containers/0/env/0/valueFrom/fieldRef/fieldPath
value: metadata.labels['DEV']
</code></pre>
<p>To replicate to other environments, just copy the "dev" folder and paste it into a "prod" folder (for example) and edit the patch with the correct parameter.</p>
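<p>You can then render or apply a given environment's overlay:</p>
<pre><code># preview the manifests produced for the dev overlay
kustomize build dev/

# or apply it directly using kubectl's built-in kustomize support
kubectl apply -k dev/
</code></pre>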
| glv |
<p>We are bringing up a Cassandra cluster, using k8ssandra helm chart, it exposes several services, our client applications are using the datastax Java-Driver and running at the same k8s cluster as the Cassandra cluster (this is testing phase)</p>
<pre><code>CqlSessionBuilder builder = CqlSession.builder();
</code></pre>
<p>What is the recommended way to connect the application (via the Driver) to Cassandra?</p>
<p>Adding all nodes?</p>
<pre><code>for (String node :nodes) {
builder.addContactPoint(new InetSocketAddress(node, 9042));
}
</code></pre>
<p>Adding just the service address?</p>
<pre><code> builder.addContactPoint(new InetSocketAddress(service-dns-name , 9042))
</code></pre>
<p>Adding the service address as unresolved? (would that even work?)</p>
<pre><code> builder.addContactPoint(InetSocketAddress.createUnresolved(service-dns-name , 9042))
</code></pre>
| Doron Levi | <p>The k8ssandra Helm chart deploys a CassandraDatacenter object and cass-operator in addition to a number of other resources. cass-operator is responsible for managing the CassandraDatacenter. It creates the StatefulSet(s) and creates several headless services including:</p>
<ul>
<li>datacenter service</li>
<li>seeds service</li>
<li>all pods service</li>
</ul>
<p>The seeds service only resolves to pods that are seeds. Its name is of the form <code><cluster-name>-seed-service</code>. Because of the ephemeral nature of pods cass-operator may designate different C* nodes as seed nodes. Do not use the seed service for connecting client applications.</p>
<p>The all pods service resolves to all Cassandra pods, regardless of whether they are ready. Its name is of the form <code><cluster-name>-<dc-name>-all-pods-service</code>. This service is intended to facilitate monitoring. Do not use the all pods service for connecting client applications.</p>
<p>The datacenter service resolves to ready pods. Its name is of the form <code><cluster-name>-<dc-name>-service</code>. This is the service that you should use for connecting client applications. Do not directly use pod IPs as they will change over time.</p>
| John Sanda |
<p>Is it possible to create a kubernetes RBAC rule that allows creating a Job from an existing CronJob, but prevents creating a Job any other way?</p>
<p>We want to keep our clusters tightly locked down to avoid arbitrary deployments not managed by CICD - but we also need to facilitate manual testing of CronJobs, or rerunning failed jobs off schedule. I'd like developers to be able to run a command like:</p>
<pre><code>kubectl create job --from=cronjob/my-job my-job-test-run-1
</code></pre>
<p>But not be able to run something like:</p>
<pre><code>kubectl create job my-evil-job -f evil-job.yaml
</code></pre>
<p>Is that possible?</p>
| Gavin Clarke | <p>In this scenario in order to successfully execute this command:</p>
<pre><code>kubectl create job --from=cronjob/<cronjob_name>
</code></pre>
<p>The User/ServiceAccount should have the proper <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> rules (at least the two shown in the output below: create <code>Jobs</code> and get <code>CronJobs</code>).</p>
<p>In first example I granted access to create <code>Jobs</code> and get <code>CronJobs</code> and I was able to create <code>Job</code> and <code>Job --from CronJob</code></p>
<pre><code>user@minikube:~$ cat test_role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: job
rules:
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create"]
- apiGroups: ["batch"]
resources: ["cronjobs"]
verbs: ["get"]
user@minikube:~$ kubectl create job --image=inginx testjob20
job.batch/testjob20 created
user@minikube:~$ kubectl create job --from=cronjobs/hello testjob21
job.batch/testjob21 created
</code></pre>
<p>But if I granted access only to create <code>Job</code> without get <code>CronJob</code>, I was be able to create <code>Job</code> but not to create <code>Job --from CronJob</code></p>
<pre><code>user@minikube:~$ cat test_role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: job
rules:
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create"]
user@minikube:~$ kubectl create job --image=nginx testjob3
job.batch/testjob3 created
user@minikube:~$ kubectl create job --from=cronjobs/hello testjob4
Error from server (Forbidden): cronjobs.batch "hello" is forbidden: User "system:serviceaccount:default:t1" cannot get resource "cronjobs" in API group "batch" in the namespace "default"
</code></pre>
<p>When I deleted access to create <code>Jobs</code>, I couldn't create <code>Job</code> and also <code>Job --from CronJob</code></p>
<pre><code>user@minikube:~$ cat test_role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: job
rules:
- apiGroups: ["batch"]
resources: ["cronjobs"]
verbs: ["get"]
user@minikube:~$ kubectl create job --image=inginx testjob10
error: failed to create job: jobs.batch is forbidden: User "system:serviceaccount:default:t1" cannot create resource "jobs" in API group "batch" in the namespace "default"
user@minikube:~$ kubectl create job --from=cronjobs/hello testjob11
error: failed to create job: jobs.batch is forbidden: User "system:serviceaccount:default:t1" cannot create resource "jobs" in API group "batch" in the namespace "default"
</code></pre>
<p>As you can see, if the User/ServiceAccount doesn't have both permissions it cannot create a <code>Job --from CronJob</code> at all, so it is impossible to express such a restriction using only RBAC rules.</p>
<p>One possible solution is to split these permissions across two different Users/ServiceAccounts for the two different tasks (the first one can create <code>Jobs</code> + get <code>CronJobs</code>, the second one has no permission to create <code>Jobs</code>); a sketch of the corresponding RoleBinding is shown below.</p>
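<p>For completeness, here is a minimal sketch of the RoleBinding that ties the first Role above to a ServiceAccount (the names <code>job</code> and <code>t1</code> are taken from the examples above; adjust them to your own):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: t1
  namespace: default
roleRef:
  kind: Role
  name: job
  apiGroup: rbac.authorization.k8s.io
</code></pre>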
<p>Another possibility is to try to use k8s admission Controller with f.e. <a href="https://www.openpolicyagent.org/docs/v0.12.2/kubernetes-admission-control/" rel="nofollow noreferrer">Open Policy agent</a></p>
| RadekW |
<p>I am getting error during applying YAML config to AWS K8s cluster:</p>
<p>Error:</p>
<pre><code>couldn't create node group filter from command line options:
loading config file "K8s-nodegroups/group_test.yaml": error unmarshaling JSON:
while decoding JSON: json: unknown field "fullEc2Access"
</code></pre>
<p>Here is the command which I am using:
<code>eksctl create nodegroup -f K8s-nodegroups/group_test.yaml</code></p>
<p>And here is my YAML:</p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: some-cluster-name
region: eu-west-2
nodeGroups:
- name: "group-test"
instanceType: t3.small
desiredCapacity: 1
subnets:
- eu-west-2a
- eu-west-2b
maxSize: 2
minSize: 1
fullEc2Access: true
spot: true
maxPodsPerNode: 100
volumeSize: 8
volumeType: gp3
labels: { group: test }
</code></pre>
<p>And there is not error related to <code>fullEc2Access</code>, if I remove it I will get next one:</p>
<p><code>...error unmarshaling JSON: while decoding JSON: json: unknown field "spot"</code></p>
<p>Why my file is trying to precess like a JSON?
How I can fix it, I got this example form docs and checked it many times.</p>
<p>I know that I can use "one line" command to create nodegroup, but I want to use YAML.
How it is possible to fix it?</p>
<p>I tried eksctl <code>0.131.0</code> and <code>0.128.0</code> versions - same errors.</p>
| prosto.vint | <p><strong>fullEc2Access</strong> does not exist as a property in the schema, and <strong>spot</strong> is a property of <em>ManagedNodeGroup</em>, not <em>NodeGroup</em>.</p>
<p>See <a href="https://eksctl.io/usage/schema/" rel="nofollow noreferrer">EKS Config File Schema</a> for further info, or you can print the default schema using the command</p>
<pre><code>eksctl utils schema
</code></pre>
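<p>As a hedged sketch (double-check the field names against <code>eksctl utils schema</code> for your eksctl version), a spot node group would be declared under <code>managedNodeGroups</code>, and extra EC2 permissions would be granted through <code>iam.attachPolicyARNs</code> rather than a <code>fullEc2Access</code> flag:</p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: some-cluster-name
  region: eu-west-2
managedNodeGroups:
  - name: "group-test"
    instanceTypes: ["t3.small"]
    spot: true
    desiredCapacity: 1
    minSize: 1
    maxSize: 2
    availabilityZones:
      - eu-west-2a
      - eu-west-2b
    volumeSize: 8
    volumeType: gp3
    labels: { group: test }
    iam:
      # when attachPolicyARNs is set, the default node policies must be listed as well
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonEC2FullAccess
</code></pre>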
| Filippo Testini |
<p>What does the <code>maxReplicas</code> property mean in the pipeline yaml in Azure in context of the k8s deployment?</p>
<p>E.g. in <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli" rel="nofollow noreferrer">this</a> documentation the <code>maxReplicas: 10</code> is mentioned. But there is no explanation about what it means. At least I was not able to find one. Would be grateful if someone will help me to find the documentation on that.</p>
<p>I have two assumptions.</p>
<p>First, it means that we need to duplicate pods. I.e. with the <code>maxReplicas: 10</code> we may have up to 10 clusters with identical pods.</p>
<p>Second assumption, the <code>maxReplicas: 10</code> means that in a k8s cluster we can have no more than 10 pods.</p>
| manymanymore | <p>It refers to the Kubernetes Horizontal Pod Autoscaler (HPA): <code>maxReplicas</code> is the upper limit on the number of Pod replicas the autoscaler may scale that particular workload (e.g. a Deployment) out to. It is not about duplicate clusters, and it is a per-workload limit, not a cluster-wide cap on pods.</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p>
<p>Example: <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli</a></p>
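<p>To make that concrete, here is a minimal, illustrative HPA manifest (names and metric values are just examples): <code>maxReplicas</code> is the upper bound on how many Pod replicas the autoscaler may create for this one Deployment.</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10   # never scale this Deployment beyond 10 Pods
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
</code></pre>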
| glv |
<p>I have referred this tutorial to create deployable image:</p>
<p><a href="https://medium.com/@javatechie/kubernetes-tutorial-run-deploy-spring-boot-application-in-k8s-cluster-using-yaml-configuration-3b079154d232" rel="nofollow noreferrer">https://medium.com/@javatechie/kubernetes-tutorial-run-deploy-spring-boot-application-in-k8s-cluster-using-yaml-configuration-3b079154d232</a></p>
<p>I want to use this HELM chart in order to deploy image from directory:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
name: spring-boot-k8s
spec:
selector:
matchLabels:
app: spring-boot-k8s
replicas: 2 # Number of replicas that will be created for this deployment
template:
metadata:
labels:
app: spring-boot-k8s
spec:
containers:
- name: spring-boot-k8s
image: springboot-k8s-example:1.0
# Image that will be used to containers in the cluster
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
# The port that the container is running on in the cluster
</code></pre>
<p>How can I deploy the docker image <code>springboot-k8s-example:1.0</code> from a directory or from a private Docker registry given its URL?</p>
| Peter Penzov | <p>Here you will need to create an image pull secret to pull an image from your own repository. I have reproduced your issue, pushed the image to my private Docker registry and then pulled the image successfully.</p>
<p>This can be done with the command below; here I am assuming that you are using a private Docker registry:</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<docker-username> --docker-password=<your-password> --docker-email=<docker-email>
</code></pre>
<p>This will create a secret; now you will need to add it to your deployment manifest so that it can authenticate with the private registry to pull the image.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
name: spring-boot-k8s
spec:
selector:
matchLabels:
app: spring-boot-k8s
replicas: 2 # Number of replicas that will be created for this deployment
template:
metadata:
labels:
app: spring-boot-k8s
spec:
containers:
- name: spring-boot-k8s
image: sidharthpai/springboot-k8s-example:1.0
# Image that will be used to containers in the cluster
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
</code></pre>
<p>For more reference to create image pull secret : <a href="https://github.com/javahometech/kubernetes/tree/master/secrets" rel="nofollow noreferrer">image pull secret</a></p>
<p>Now to prepare the helm chart you will need to create a chart with relevant name and then configure Deployment.yml & Secret.yml in the helm chart template.Once this is done you will need to configure values.yml.</p>
<p>Deployment.yml (template)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
name: {{ .Values.name}}
spec:
selector:
matchLabels:
app: {{ .Values.app}}
replicas: 2 # Number of replicas that will be created for this deployment
template:
metadata:
labels:
app: {{ .Values.app}}
spec:
containers:
- name: {{ .Values.name}}
image: {{ .Values.image}}
# Image that will be used to containers in the cluster
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
imagePullSecrets:
- name: {{ .Values.secret_name}}
</code></pre>
<p>Secret.yml(Template)</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: {{ .Values.docker_config}}
kind: Secret
metadata:
name: {{ .Values.secret_name}}
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>Values.yml</p>
<pre><code>name: spring-boot-k8s
app: spring-boot-k8s
image: sidharthpai/springboot-k8s-example:1.0
secret_name: regcred
docker_config: <docker-config-value>
</code></pre>
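<p>To obtain the <code>docker_config</code> value for <code>values.yml</code>, you can read the base64 payload from the secret created earlier (assuming it is named <code>regcred</code>):</p>
<pre><code>kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}"
</code></pre>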
<p><a href="https://i.stack.imgur.com/1kRjb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1kRjb.png" alt="Deployed the application using helm chart" /></a></p>
| sidharth vijayakumar |
<p>I am trying to configure a prometheus to monitor a simple flask app, but its very weird that the prometheus does show the service in the drop down, but it shows nothing when I click in the dropdown:</p>
<p><a href="https://i.stack.imgur.com/KeuEB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KeuEB.png" alt="blank" /></a></p>
<p>Here is the code:</p>
<p>app.py</p>
<pre><code>from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics
app = Flask(__name__)
metrics = PrometheusMetrics(app)
@app.route('/')
def hello():
return "Hello World!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)
</code></pre>
<p>flask-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: flask-app
spec:
replicas: 1
selector:
matchLabels:
app: flask-app
template:
metadata:
labels:
app: flask-app
spec:
containers:
- name: flask-app
image: starian/flask-app1
ports:
- containerPort: 5000
name: webapp
</code></pre>
<p>flask-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: flask-app-service
spec:
selector:
app: flask-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: webapp
type: LoadBalancer
</code></pre>
<p>service-monitor.yaml</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: flask-app
labels:
release: prometheus-stack
spec:
selector:
matchLabels:
app: flask-app
endpoints:
- port: http
interval: 5s
path: /metrics
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM python:3.8-slim
WORKDIR /app
COPY . /app
RUN pip install flask
RUN pip install prometheus_flask_exporter
CMD ["python", "app.py"]
</code></pre>
<p>I checked everything that I know of:
The metrics do show up in the flask app; I can curl them.
All the services are up.</p>
<p>What could be going wrong?</p>
| user2459179 | <p>A <code>ServiceMonitor</code> object looks for <strong>Services</strong> with the label selectors that you give in the manifest; the name of the object tells you exactly what it does. It looks for a <code>Service</code> with the <code>app: flask-app</code> label, not the <strong>Pod</strong>.
If you look at your Service manifest, it does not have any <strong>labels</strong>, so the ServiceMonitor is monitoring nothing; if you create another ServiceMonitor with another label selector, it shows up accordingly in the Prometheus targets.<br />
To sum it up, edit your Service manifest:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: flask-app-service
labels:
app: flask-app
spec:
selector:
app: flask-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: webapp
type: LoadBalancer
</code></pre>
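<p>After applying the change you can verify that the ServiceMonitor's selector now matches something:</p>
<pre><code># should list flask-app-service once the label is in place
kubectl get svc -l app=flask-app

# the Service must also expose endpoints for the pods
kubectl get endpoints flask-app-service
</code></pre>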
<p>Hope that helps! :)</p>
| AlirezaPourchali |
<p>I am looking at this article. It mentions below query with below description.</p>
<blockquote>
<p>It is even more interesting monitoring the error rate vs the total
amount of requests, this way you get the magnitude of the errors for
the Kubernetes API server requests.</p>
</blockquote>
<pre><code>sum(rate(apiserver_request_total{job="kubernetes-apiservers",code=~"[45].."}[5m]))*100/sum(rate(apiserver_request_total{job="kubernetes-apiservers"}[5m]))
</code></pre>
<p>What I don't understand is why we can't just compute the query below instead. Moreover, what is the purpose of applying the sum function after rate? My understanding is that rate is the average change per second, so why sum the changes up? Help in understanding the above query would be great, ideally with a concrete example. Thanks in advance!</p>
<pre><code>count(apiserver_request_total{job="kubernetes-apiservers",code=~"[45].."})/count(apiserver_request_total{job="kubernetes-apiservers"})
</code></pre>
| bunny | <p><code>rate</code> calculates a per-second estimate of the increase of your counter. <code>sum</code> then sums all the time series produced by <code>rate</code> into one.</p>
<p>Consider <a href="https://prometheus.demo.do.prometheus.io/graph?g0.expr=node_cpu_seconds_total&g0.tab=1&g0.stacked=0&g0.range_input=1h&g1.expr=rate(node_cpu_seconds_total%5B1m%5D)&g1.tab=1&g1.stacked=0&g1.range_input=1h&g2.expr=sum(rate(node_cpu_seconds_total%5B1m%5D))&g2.tab=1&g2.stacked=0&g2.range_input=1h" rel="nofollow noreferrer">this demo</a>. Notice how <code>rate</code> produces eight time series (same as number of "input" series), while <code>sum</code> - only one.</p>
<p>Please notice also <a href="https://stackoverflow.com/a/76009728/21363224">difference between <code>sum</code> and <code>sum_over_time</code></a>, as it is often reason of confusion.</p>
<hr />
<p><code>count</code> returns the number of time series produced by the inner query. Its result is more or less constant over time: it tells you how many distinct label combinations exist, not how many requests (or errors) are actually happening, which is why it cannot replace <code>sum(rate(...))</code> for an error-rate calculation.</p>
<p>For the demo linked above, <code>count</code> will always return <code>vector(8)</code>, as there are always eight CPU-mode series.</p>
| markalex |
<p>I am trying to configure a hello world application using ingress in GKE. I have been referring a GCP official documentation to deploy an application using Ingress.</p>
<p><a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">Deploying an app using ingress</a></p>
<p>But this does not work i have tried to refer several documents but none of those work. I have installed the ingress controller in my kubernetes cluster.</p>
<p><code>kubectl get svc -n ingress-nginx </code> returns below output</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
AGE
ingress-nginx-controller LoadBalancer 10.125.177.232 35.232.139.102 80:31835/TCP,443:31583/TCP 7h24m
</code></pre>
<p><code>kubectl get pods-n ingress-nginx</code> returns</p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-jj72r 0/1 Completed 0 7h24m
ingress-nginx-admission-patch-pktz6 0/1 Completed 0 7h24m
ingress-nginx-controller-5cb8d9c6dd-vptkh 1/1 Running 0 7h24m
</code></pre>
<p><code>kubectl get ingress</code> returns below output</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-resource <none> 35.232.139.102.nip.io 34.69.2.173 80 7h48m
</code></pre>
<p><code>kubectl get pods</code> returns below output</p>
<pre><code>NAME READY STATUS RESTARTS AGE
hello-app-6d7bb985fd-x5qpn 1/1 Running 0 43m
</code></pre>
<p><code>kubect get svc</code> returns below output</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-app ClusterIP 10.125.187.239 <none> 8080/TCP 43m
</code></pre>
<p>Ingress resource yml file used</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: 35.232.139.102.nip.io
http:
paths:
- pathType: Prefix
path: "/hello"
backend:
service:
name: hello-app
port:
number: 8080
</code></pre>
<p>Can someone tell me what i am doing wrong ? When i try to reach the application its not working.</p>
| sidharth vijayakumar | <p>So I have installed Ingress-controller and used ingress controller ip as the host in my ingress file.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: "35.232.139.102.nip.io"
http:
paths:
- pathType: Prefix
path: "/hello"
backend:
service:
name: hello-app
port:
number: 8080
</code></pre>
<p>The issue here was that I forgot to allow the IP from which I was accessing the application. When you create a GKE cluster there will be a firewall rule named <code>cluster-name-all</code>; in this firewall rule you need to add the IP address of the machine from which you are trying to access the application. Also ensure that the port number is exposed; in my case both were missing, hence it was failing.</p>
| sidharth vijayakumar |
<p>I'm trying to install a simple helm chart and I got:</p>
<p><code>Internal error occurred: failed calling webhook "mservice.elbv2.k8s.aws": failed to call webhook: the server could not find the requested resource</code></p>
| Padi | <p>Had the same issue suddenly on EKS v1.24 and with "aws-load-balancer-controller" v2.4.5 after working for over 1 month with that version with no issues until this week. The resources are created through Terraform and Helm. I suspect an upstream change affected underlying resources.</p>
<p>Updating to <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/tag/v2.5.1" rel="noreferrer">"aws-load-balancer-controller" v2.5.1</a> and making sure that the helm releases had a depends_on entry on the alb ingress controller helm release, fixed the issue for me.</p>
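<p>If you manage the controller with plain Helm rather than Terraform, the upgrade is roughly as follows (the cluster name is a placeholder, and you may want to pin a chart version that ships controller v2.5.x):</p>
<pre><code>helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm upgrade aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<your-cluster-name>
</code></pre>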
| hefnat |
<p>While reading this blog-post <a href="https://raesene.github.io/blog/2023/04/08/lets-talk-about-kubelet-authorization/" rel="nofollow noreferrer">https://raesene.github.io/blog/2023/04/08/lets-talk-about-kubelet-authorization/</a></p>
<p>Found that there are multiple authorisation modes supported, my understanding was ABAC & now RBAC.</p>
<p>Read that in kubelet config in certain config we have this flag <code>--authorization-mode=Node,RBAC</code>.</p>
<p>Wanted to understand there some <code>kubectl</code> command to find which all authorisation modes are present/supported ?</p>
| Sumit Murari | <p>If you are running a local/self-managed cluster, you can use <code>kubectl</code> to find out which authorization modes are enabled in your Kubernetes cluster. You can do this by checking the cluster configuration, specifically the <code>kube-apiserver</code> component.</p>
<p>Here's how you can do it:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl describe pod kube-apiserver-<your-api-server-pod-name> -n kube-system
</code></pre>
<p>Replace <code><your-api-server-pod-name></code> with the actual name of your <code>kube-apiserver</code> pod in the <code>kube-system</code> namespace.</p>
<p>Look for the <code>--authorization-mode</code> flag in the output. This flag will specify which authorization modes are enabled in your cluster. It might look something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>--authorization-mode=Node,RBAC
</code></pre>
<p>In the example above, both Node and RBAC authorization modes are enabled in the cluster. You can have different modes enabled based on your cluster configuration, and this command will provide you with the authoritative list of enabled authorization modes.</p>
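<p>If you don't want to look up the exact pod name first, a quick one-liner (assuming a kubeadm-style cluster where the API server runs as a static pod labelled <code>component=kube-apiserver</code>) is:</p>
<pre><code># print the API server spec and pick out the authorization flag
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep -- "--authorization-mode"
</code></pre>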
| Saifeddine Rajhi |
<p>Can i use different versions of cassandra in a single cluster? My goal is to transfer data from one DC(A) to new DC(B) and decommission DC(A), but DC(A) is on version 3.11.3 and DC(B) is going to be *3.11.7+</p>
<p>* I Want to use K8ssandra deployment with metrics and other stuff. The K8ssandra project cannot deploy older versions of cassandra than 3.11.7</p>
<p>Thank you!</p>
| Samabo | <p>K8ssandra itself is purposefully an "opinionated" stack, which is why you can only use certain more recent and not-known to include major issues versions of Cassandra.</p>
<p>But, if you already have the existing cluster, that doesn't mean you can't migrate between them. Check out this blog for an example of doing that: <a href="https://k8ssandra.io/blog/tutorials/cassandra-database-migration-to-kubernetes-zero-downtime/" rel="nofollow noreferrer">https://k8ssandra.io/blog/tutorials/cassandra-database-migration-to-kubernetes-zero-downtime/</a></p>
| Jeff DiNoto |
<p>I am facing an issue in kubernetes. I have a deployment and in replicaset we have given value as 2. After updating my release it is showing 3 replicas. 2 of them are running properly but one is in CrashLoopBackOff. I tried deleting it but it again comes up with same error.</p>
<p>There are 2 containers running in the po. In one container I am able to login but not able to login into nginx-cache container</p>
<pre class="lang-none prettyprint-override"><code>deployment-5bd9ff7f9d 1/2 CrashLoopBackOff 297 (2m19s ago) 24h (this is the error)
deployment-ffbf89fcd 2/2 Running 0 36d
deployment-ffbf89fcd 2/2 Running 0 36d
</code></pre>
<p>Kubectl describe pod</p>
<pre><code>Warning Failed 44m (x4 over 44m) kubelet Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: failed to write "107374182400000": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podc22d1a88-befe-4680-8eec-2ad69a4cc890/nginx-cache/cpu.cfs_quota_us: invalid argument: unknown
Normal Pulled 43m (x5 over 44m) kubelet Container image "abcd2.azurecr.io/ab_cde/nginx-cache:0.2-ROOT" already present on machine
</code></pre>
<p>How to remove that error</p>
| Anup | <p>As seen from your <em>get pods</em> output, the Pod in <strong>CrashLoopBackOff</strong> state has a different pod-template-hash from the other 2; it would appear that it is being handled by a different ReplicaSet than the other 2.</p>
<blockquote>
<p>The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.</p>
</blockquote>
<blockquote>
<p>This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label</a></p>
<p>Try running <code>kubectl -n YOUR-NAMESPACE get replicasets</code>; if you find 2, delete the one that corresponds to the Pod with the error.</p>
| glv |
<p>I have setup K8s cluster on AWS. I have followed the Nginx Ingress setup using the link - <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">Ingress-Setup</a>. I then tried to deploy a coffee application using the link - <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">demo-application</a> and accessing the coffee application, I am getting a <code>404</code> error. I am getting a <code>200 OK</code> response when accessing the <code>curl http://localhost:8080/coffee</code> from within the pod. I am not sure how to troubleshoot this issue.</p>
<pre><code>[ec2-user@ip-172-31-37-241 service]$ curl -vv --resolve cafe.example.com:$IC_HTTPS_PO RT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/coffee --insecure
* Added cafe.example.com:443:65.1.245.71 to DNS cache
* Hostname cafe.example.com was found in DNS cache
* Trying 65.1.245.71:443...
* Connected to cafe.example.com (65.1.245.71) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=NGINXIngressController
* start date: Sep 12 18:03:35 2018 GMT
* expire date: Sep 11 18:03:35 2023 GMT
* issuer: CN=NGINXIngressController
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /coffee HTTP/1.1
> Host: cafe.example.com
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Server: nginx/1.21.0
< Date: Fri, 10 Sep 2021 03:24:23 GMT
< Content-Type: text/html
< Content-Length: 153
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.0</center>
</body>
</html>
* Connection #0 to host cafe.example.com left intact
</code></pre>
<p>Ingress definition:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: nginx-ingress
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: nginx-ingress
</code></pre>
<p>Successful response when accessing the pods directly</p>
<pre><code>[ec2-user@ip-172-31-37-241 service]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
coffee-6f4b79b975-b5ph5 1/1 Running 0 29m
coffee-6f4b79b975-grzh5 1/1 Running 0 29m
tea-6fb46d899f-5hskc 1/1 Running 0 29m
tea-6fb46d899f-bzp88 1/1 Running 0 29m
tea-6fb46d899f-plq6j 1/1 Running 0 29m
[ec2-user@ip-172-31-37-241 service]$ kubectl exec -it coffee-6f4b79b975-b5ph5 /bin/sh kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ $ curl -vv http://localhost:8080/coffee
* Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /coffee HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.78.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.21.3
< Date: Fri, 10 Sep 2021 03:29:08 GMT
< Content-Type: text/plain
< Content-Length: 159
< Connection: keep-alive
< Expires: Fri, 10 Sep 2021 03:29:07 GMT
< Cache-Control: no-cache
<
Server address: 127.0.0.1:8080
Server name: coffee-6f4b79b975-b5ph5
Date: 10/Sep/2021:03:29:08 +0000
URI: /coffee
Request ID: e7fbd46fde0c34df3d1eac64a36e0192
* Connection #0 to host localhost left intact
</code></pre>
| zilcuanu | <p>Your application is listening on port 8080, so in your Service file you need to set the targetPort to 8080.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: nginx-ingress
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
- port: 443
targetPort: 8080
protocol: TCP
name: https
selector:
app: nginx-ingress
</code></pre>
| Vishwas Karale |
<p>I want to create large EBS volumes (e.g. 1TB) dynamically for each jenkins worker pod on my EKS cluster such that each <code>persistentVolume</code> representing an EBS volume only persists for the lifespan of the pod.</p>
<p>I can't change the size of the nodes on the EKS cluster,
so I'm trying to use external EBS volumes via the <a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver" rel="nofollow noreferrer">ebs-csi-driver</a> helm chart.</p>
<p>I can't seem to find a configuration for the <code>podTemplate</code> that enables dynamic provisioning of the <code>persistentVolumeClaim</code> and subsequent EBS <code>persistentVolume</code> for hosting my builds. The only way I'm able to successfully provision and mount EBS <code>persistentVolumes</code> dynamically to my worker pods is by using my own <code>persistentVolumeClaim</code> which I have to reference in the <code>podTemplate</code> and manually deploy using <code>kubectl apply -f pvc.yaml</code> and destroy <code>kubectl delete -f pvc.yaml</code> for each build cycle.</p>
<p>So far I've been able to provision one <code>persistentVolume</code> at a time using the following:</p>
<ul>
<li><code>persistentVolumeClaim</code>:
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ebs-claim
namespace: jenkins
spec:
accessModes:
- ReadWriteOnce
storageClassName: ebs-sc
resources:
requests:
storage: 1000Gi
</code></pre>
</li>
<li><code>StorageClass</code>:
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
</code></pre>
</li>
<li><code>podTemplate</code>:
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
spec:
serviceAccount: jenkins-sa
securityContext:
fsGroup: 1000
containers:
- name: jenkins-worker
image: <some-image>
imagePullPolicy: Always
command:
- cat
tty: true
securityContext:
runAsGroup: 1000
runAsUser: 1000
volumeMounts:
- name: build-volume
mountPath: <some-mount-path>
volumes:
- name: build-volume
persistentVolumeClaim:
claimName: ebs-claim
</code></pre>
<ul>
<li>I annotated the "jenkins-sa" <code>serviceAccount</code> with an AWS IAM Role ARN (<code>eks.amazonaws.com/role-arn</code>)
with all the AWS IAM policies and trust-relationships needed to allow the jenkins worker Pod
to provision an AWS EBS <code>persistentVolume</code> (as described in the <a href="https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html" rel="nofollow noreferrer">AWS ebs-csi-driver docs</a>).</li>
</ul>
</li>
</ul>
<p>Once the pipeline is started, and the jenkins worker pod is executing a build,
I'm able to ssh into it and confirm that the external EBS volume is mounted to the build directory using <code>df</code>.
Happy days!</p>
<p>However, I noticed the <code>persistentVolume</code> provisioned by the jenkins worker pod was lingering around
after the build finished and the pod destroyed. Apparently this is because the <code>PersistentVolumeClaim</code>
needs to be deleted before the <code>persistentVolume</code> it's bound to can be released (see <a href="https://saturncloud.io/blog/kubernetes-the-correct-way-to-delete-persistent-volumes-pv/" rel="nofollow noreferrer">here</a> for details).</p>
<p>After some digging it looks like I need to specify a <code>dynamicPVC()</code> in either the
<code>volumes</code> or <code>workspaceVolume</code> spec in the <code>podTemplate</code> yaml.
However I'm struggling to find documentation around dynamic <code>PersistentVolumeClaim</code> provisioning
beyond the <a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">jenkins kubernetes plugin page</a>.
Unfortunately when I try to create <code>persistentVolumeClaims</code> dynamically like this
it doesn't work and just uses the maximum storage that the node can provision (which is limited to 20Gi).</p>
<ul>
<li>Updated <code>podTemplate</code>:
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
spec:
serviceAccount: jenkins-sa
workspaceVolume:
dynamicPVC:
accessModes: ReadWriteOnce
requestsSize: 800Gi
storageClassName: ebs-sc
securityContext:
fsGroup: 1000
containers:
- name: jenkins-worker
image: <some-image>
imagePullPolicy: Always
command:
- cat
tty: true
securityContext:
runAsGroup: 1000
runAsUser: 1000
</code></pre>
</li>
</ul>
<p>I am expecting a new <code>PersistentVolumeClaim</code> and <code>PersistentVolume</code> to be created
by the <code>podTemplate</code> when I start my build pipeline, and subsequently destroyed
when the pipeline finishes and the pod released.</p>
| nyserq | <p>If you use k8s 1.23+, this is possible for <code>StatefulSets</code>, i.e. if you can manage the worker pods with a <code>StatefulSet</code>.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention" rel="nofollow noreferrer">PersistentVolumeClaim retention</a> feature configures the volume retention behavior that applies when the StatefulSet is deleted, via the field <code>whenDeleted</code>.</p>
<p>PersistentVolumeClaims (PVCs) generated from the StatefulSet spec template will then be deleted automatically when the <code>StatefulSet is deleted</code> or pods in the StatefulSet are scaled down.</p>
<pre><code>kind: StatefulSet
...
spec:
persistentVolumeClaimRetentionPolicy:
whenDeleted: Delete
whenScaled: Delete
...
</code></pre>
<p>Also, add the <code>reclaimPolicy: Delete</code> field in the <code>StorageClass</code> resource, which determines what happens to the Persistent Volumes (PVs) associated with that StorageClass when they are dynamically provisioned and later released (see the example after the list below).</p>
<p>As a result:</p>
<ul>
<li>When a PV associated with this StorageClass is dynamically provisioned and later released, the PV and the underlying storage resources are automatically deleted.</li>
<li>This means that when the associated PVC is deleted, both the PV and the underlying storage are automatically cleaned up.</li>
</ul>
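<p>Applied to the StorageClass from the question, that looks like this (note that <code>reclaimPolicy</code> already defaults to <code>Delete</code> for dynamically provisioned volumes; stating it just makes the intent explicit):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete            # released PVs and their EBS volumes are removed automatically
volumeBindingMode: WaitForFirstConsumer
</code></pre>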
| Saifeddine Rajhi |
<p>I am using the non HA version of ArgoCD (v2.6.5) installed in a single node k3s cluster.
The goal is to deploy a sample application together with kube-prometheus-stack, loki, tempo & minIO via Helm.</p>
<p>However, when I create an "Application" in Github and reference it in Argocd, all of them are in "Out of sync" state. Once it tries to re-sync, they change the status to "Unknown".</p>
<p>The installation of ArgoCD was done with the next command. (Basic install)</p>
<pre><code>kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>And, as example, the kube-prometheus-stack Application I create in Github looks this way:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kube-prometheus-stack
namespace: argocd
spec:
project: default
source:
chart: kube-prometheus-stack
repoURL: https://prometheus-community.github.io/helm-charts
targetRevision: 44.4.1
helm:
releaseName: kube-prometheus-stack
destination:
server: "https://kubernetes.default.svc"
namespace: observability
</code></pre>
<p>Any idea what I could be missing?</p>
<p>Thanks!</p>
| Nora | <p>Try changing:</p>
<pre><code>FROM repoURL: https://prometheus-community.github.io/helm-charts
TO repoURL: [email protected]:prometheus-community/helm-charts.git
OR repoURL: https://github.com/prometheus-community/helm-charts.git
</code></pre>
<pre><code>FROM targetRevision: 44.4.1
TO targetRevision: kube-prometheus-stack-44.4.1
</code></pre>
<p>And under the <em>targetRevision</em> field, add:</p>
<pre><code>path: charts/kube-prometheus-stack
</code></pre>
| glv |
<p>I want to put my docker image running react into kubernetes and be able to hit the main page. I am able to get the main page just running <code>docker run --rm -p 3000:3000 reactdemo</code> locally. When I try to deploy to my kubernetes (running locally via docker-desktop) I get no response until eventually a timeout.</p>
<p>I tried this same process below with a springboot docker image and I am able to get a simple json response in my browser.</p>
<p>Below is my Dockerfile, deployment yaml (with service inside it), and commands I'm running to try and get my results. Morale is low, any help would be appreciated!</p>
<p>Dockerfile:</p>
<pre><code># pull official base image
FROM node
# set working directory
RUN mkdir /app
WORKDIR /app
# install app dependencies
COPY package.json /app
RUN npm install
# add app
COPY . /app
#Command to build ReactJS application for deploy might not need this...
RUN npm run build
# start app
CMD ["npm", "start"]
</code></pre>
<p>Deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
spec:
replicas: 1
selector:
matchLabels:
app: demo
template:
metadata:
labels:
app: demo
spec:
containers:
- name: reactdemo
image: reactdemo:latest
imagePullPolicy: Never
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: demo
spec:
type: NodePort
selector:
app: demo
ports:
- port: 3000
targetPort: 3000
protocol: TCP
nodePort: 31000
selector:
app: demo
</code></pre>
<p>I then open a port on my local machine to the nodeport for the service:</p>
<pre><code>PS C:\WINDOWS\system32> kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
Forwarding from 127.0.0.1:31000 -> 3000
</code></pre>
<p>My assumption is that everything is in place at this point and I should be able to open a browser to hit <code>localhost:31000</code>. I expected to see that spinning react symbol for their landing page just like I do when I only run a local docker container.</p>
<p>Here is it all running:</p>
<pre><code>$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/demo-854f4d78f6-7dn7c 1/1 Running 0 4s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/demo NodePort 10.111.203.209 <none> 3000:31000/TCP 4s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/demo 1/1 1 1 4s
NAME DESIRED CURRENT READY AGE
replicaset.apps/demo-854f4d78f6 1 1 1 4s
</code></pre>
<p>Some extra things to note:</p>
<ul>
<li>Although I don't have it setup currently I did have my springboot service in the deployment file. I logged into it's pod and ensured the react container was reachable. It was.</li>
<li>I haven't done anything with my firewall settings (but sort of assume I dont have to since the run with the springboot service worked?)</li>
<li>I see this in chrome developer tools and so far don't think it's related to my problem: crbug/1173575, non-JS module files deprecated. I see this response in the main browser page after some time:</li>
</ul>
<hr />
<pre><code>localhost didn’t send any data.
ERR_EMPTY_RESPONSE
</code></pre>
| Damisco | <p>If you are using Kubernetes via minikube on your local system, then it will not work with localhost:3000, because the service runs inside the minikube cluster, which has its own private IP address. So instead of trying localhost:3000, run <code> minikube service <servicename></code> in your terminal and it shows the URL of your service.</p>
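<p>For the Service in the question that would be, for example:</p>
<pre><code># prints the URL (minikube node IP + NodePort) you can open in a browser
minikube service demo --url
</code></pre>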
| Ankush Limbasiya |
<p>i m trying diff commands in aws eks and when there is wrong cmd it spits out too long of output.</p>
<pre><code>aws kubectl get pod
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters
]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument command: Invalid choice, valid choices are:
accessanalyzer | account
acm | acm-pca
alexaforbusiness | amp
amplify | amplifybackend
</code></pre>
<p>in case i only want to see the error msg and not 2 screens of all possible cmds how to limit the output in case of error?</p>
<p>i tried :</p>
<pre><code>aws eks describe-cluster --voice wrongparameter | head 10
</code></pre>
<p>but it s not working</p>
| ERJAN | <p>There is no such command <code>aws kubectl get pod</code> !!</p>
<p>It is either <code>kubectl get pod</code>, which lists pods in the current namespace (provided you have the <a href="https://kubernetes.io/docs/tasks/tools/#kubectl" rel="nofollow noreferrer">kubectl</a> CLI installed and configured with the correct context), or <code>aws eks describe-cluster --name demo</code>, which describes your AWS Kubernetes cluster named <code>demo</code> in the current region and profile.</p>
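<p>Note also that the long usage/error text is written to stderr, not stdout, which is why piping to <code>head</code> appeared to do nothing. If you really want to truncate it, redirect stderr into the pipe first:</p>
<pre><code>aws eks describe-cluster --voice wrongparameter 2>&1 | head -n 10
</code></pre>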
| Saifeddine Rajhi |
<p>I have a question about Kubernetes containers and persistent volumes.</p>
<p><strong>How can I make some of the preexisting folders of a Kubernetes container persistent?</strong></p>
<p>I know the usage of PVCs in Kubernetes but the problem about mounting a PVC to a container is that this operation -naturally- deletes everything in the mount path. eg. Say that we have an image which has a non-empty directory <code>/xyz</code> and we want to make this directory persistent. If we create a PVC and mount it to <code>/xyz</code>, we would lose everything inside <code>/xyz</code> as well (we don't want this to happen). So we want that directory to be persistent from the start with the files inside of it.</p>
<p>I'm not so sure if Docker or any other container technology responds such a feature, so it may not be suitable for Kubernetes too. Would be glad if anyone can enlighten me about this. Thanks!</p>
<p>My approaches so far:</p>
<ul>
<li><em>Copying</em>: Creating a PVC for the directory contents and mounting it to an init container or job that copies <code>/xyz</code> to the <code>/mounted/xyz</code>, then mounting PVC to the main container's <code>/xyz</code>. This approach has some drawbacks if the directory is too fat or has some OS/runtime-specific configurations.</li>
<li><em>Hostpath</em>: Populating a directory with the contents of <code>/xyz</code> (eg. <code>/in/host/xyz</code>) before starting the container. Then mounting this path from host to the container. Not a good approach since it's hard to automate.</li>
</ul>
| tuna | <p>there is no way to mount a Volume in a certain folder without overwriting its contents.</p>
<p>In my opinion the best approaches could be:</p>
<ol>
<li><p>The first one reported by you (for large content):</p>
<p>a. Create PVC</p>
<p>b. Add an initContainer to your Deployment that mounts the Volume in a DIFFERENT path from the directory containing the data to move/copy</p>
<p>c. Add to the initContainer a "command" field with the commands to move/copy the content from the "source" directory to the mounted volume (target)</p>
<p>d. Mount to the "main" container the PVC used in the initContainer at the "source" directory path (see the sketch after this list)</p>
</li>
<li><p>Create a K8s cronjob (or job that works once if the files are never modified) that syncs from one folder to another (similar to point 1, but avoid waiting a long time before the application Pod starts, since the initContainer is no longer needed).
<a href="https://i.stack.imgur.com/EoWMU.png" rel="nofollow noreferrer">Cronjob example</a>
(Pay attention to file owners; you may need to run the job under the same serviceAccount that produced those files)</p>
</li>
<li><p>If they are static files, build the Docker image with all the contents of the folder already inside (Dockerfile —> copy). <a href="https://docs.docker.com/engine/reference/builder/" rel="nofollow noreferrer">https://docs.docker.com/engine/reference/builder/</a></p>
</li>
</ol>
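<p>Here is a minimal sketch of option 1 (image name, paths and PVC name are illustrative assumptions):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: seed-xyz
        image: my-image                                # the same image that already ships /xyz
        command: ["sh", "-c", "cp -a /xyz/. /data/"]   # copy the preexisting content into the volume
        volumeMounts:
        - name: xyz-data
          mountPath: /data                             # a DIFFERENT path, so /xyz is still visible here
      containers:
      - name: my-app
        image: my-image
        volumeMounts:
        - name: xyz-data
          mountPath: /xyz                              # the seeded volume now backs /xyz
      volumes:
      - name: xyz-data
        persistentVolumeClaim:
          claimName: xyz-pvc
</code></pre>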
<p>I strongly recommend not using hostPath in PRODUCTION environments.
<a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#hostpath</a></p>
| glv |
<p>thanks for checking out my topic.</p>
<p>I'm currently working to have kustomize to download the resource and base files from our git repository.
We have tried a few options some of them following the documentation and some of them not, see below. But anyhow still not able to download from our remote repo and while trying to run the kubectl apply it looks for a local resource based on the git url and file names.</p>
<pre><code>resources:
- ssh://git@SERVERURL:$PORT/$REPO.GIT
- git::ssh://git@SERVERURL:$PORT/$REPO.GIT
- ssh::git@SERVERURL:$PORT/$REPO.GIT
- git::git@SERVERURL:$PORT/$REPO.GIT
- git@SERVERURL:$PORT/$REPO.GIT
</code></pre>
<p>As a workaround I have added the git clone for the expected folder to my pipeline, but the goal is to have the bases/resources downloaded directly from the kustomization url.
Any ideas or some hints on how to get it running?</p>
| Gabriele Hausmann | <p>After reaching out to some Kubernetes colleagues, we found the reason for my problem.
Basically, kubectl versions lower than 1.20 ship with kustomize v2.0.3.
My Jenkins agent was using an outdated kubectl version (1.17), and this was the root cause.</p>
<p>In this case, there were two options:</p>
<ol>
<li>Update the kubectl image to 1.20 or higher,</li>
<li>Decouple kustomize and kubectl (fits better in our case; see the sketch after this list).</li>
</ol>
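<p>A minimal sketch of option 2, assuming a standalone kustomize (v4 or newer) binary is available on the Jenkins agent; the URL, path and ref below are placeholders and the exact remote-target syntax can vary between kustomize versions:</p>
<pre><code># build with a recent standalone kustomize, then pipe the result into kubectl
kustomize build "ssh://git@SERVERURL:PORT/REPO.git//overlays/prod?ref=main" | kubectl apply -f -

# or, once the agent's kubectl is >= 1.20, let it use its bundled kustomize
kubectl apply -k .
</code></pre>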
| Gabriele Hausmann |
<p>I am trying to migrate a dashboard which shows the count of Readiness and Liveness Probe Failures from Kibana (ElasticSearch) to a Grafana dashboard (Sauron). In Kibana we can get both probe failures separately, using <code>kubernetes.event.message : Liveness probe failed</code> for Liveness failures and a similar event message for Readiness, but in Sauron or Thanos (which acts as the datasource for Grafana) the k8s event messages are not picked up. So I am unable to find a suitable PromQL query that gives me the count of each probe failure individually.</p>
<p>The closest promQL I have found is <code>kube_event_count{reason="Unhealthy"}</code> which is giving me the sum of the count of both the probe failures. I need the count of the probe failures individually. Another promQL that I have tried is <code>kube_pod_container_status_ready</code> which probably gives the readiness status of the containers but I am not sure about it.</p>
| Soumik Laik | <p>The following two queries will do the trick for you:</p>
<pre><code>prober_probe_total{probe_type="Readiness",result="failed"}
</code></pre>
<pre><code>prober_probe_total{probe_type="Liveness",result="failed"}
</code></pre>
| glv |
<p>I am using the Grafana Helm chart to install Grafana on a K8s cluster.
The procedure works quite well, including predefining dashboards so that they are accessible after installation. On the other hand, I haven't found a way to automate the creation of users & teams so far.
How can I specify/predefine users + teams, so that they are created when “helm install”-ing the chart?</p>
<p>Any hint highly appreciated</p>
<p>PS: I am aware of the HTTP API, but I am more interested in a way to predefine the info so that "helm install..." sets up the whole stack</p>
| GeKo | <p>There isn't a <code>helm install</code> option available for creating user(s)/team(s) in Grafana. The best way is through the HTTP API; an Ansible playbook might be an option too.</p>
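<p>For illustration, a hedged sketch of such a bootstrap run once after <code>helm install</code> (e.g. from a Job or a post-install hook); the service URL, credentials and payloads are assumptions, only the <code>/api/admin/users</code> and <code>/api/teams</code> endpoints come from Grafana's HTTP API:</p>
<pre><code># create a user (requires basic auth as a Grafana admin)
curl -s -X POST http://admin:admin@grafana.monitoring.svc:3000/api/admin/users \
  -H "Content-Type: application/json" \
  -d '{"name":"Jane Doe","email":"jane@example.com","login":"jane","password":"changeme"}'

# create a team
curl -s -X POST http://admin:admin@grafana.monitoring.svc:3000/api/teams \
  -H "Content-Type: application/json" \
  -d '{"name":"platform-team"}'
</code></pre>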
| CA-CB-TellMe |
<p>Hello I am trying to deploy a simple tomcat service. Below are the details:</p>
<p>1.minikube version: v1.8.1</p>
<p>2.OS: mac</p>
<p>3.The <strong>deployment.yaml</strong> file (I am in the directory of the yaml file)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-deployment
spec:
selector:
matchLabels:
app: tomcat
replicas: 1
template:
metadata:
labels:
app: tomcat
spec:
containers:
- name: tomcat
image: tomcat:9.0
ports:
- containerPort: 8080
</code></pre>
<p>4.Commands used to deploy and expose the service</p>
<pre><code>kubectl apply -f deployment.yaml
kubectl expose deployment tomcat-deployment --type=NodePort
minikube service tomcat-deployment --url
curl [URL]
</code></pre>
<p>I get a 404 when I curl the URL.
I am unsure if there's an issue with the deployment.yaml file or some minikube settings.</p>
| Nath | <p>Sara's answer above pointed me in the right direction. Copying the files works, but it requires a restart of the tomcat service, which reverts the changes. I had to use 'cp -r' in the deployment yaml as per below:</p>
<pre><code>spec:
  containers:
  - name: tomcat
    image: tomcat
    ports:
    - containerPort: 8080
    volumeMounts:
    - mountPath: /usr/local/tomcat/webapps.dist/manager/META-INF/context.xml
      name: tomcat-configmap
      subPath: context1
    - mountPath: /usr/local/tomcat/webapps.dist/host-manager/META-INF/context.xml
      name: tomcat-configmap
      subPath: context2
    - mountPath: /usr/local/tomcat/conf/tomcat-users.xml
      name: tomcat-configmap
      subPath: tomcat-users
    command: ["/bin/bash"]
    args: [ "-c", "cp -r /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps/ && catalina.sh start; sleep inf" ]
  volumes:
  - name: tomcat-configmap
    configMap:
      name: tomcat-configmap
</code></pre>
| dark swan |
<p>I'm trying to setup a very simple 2 node k8s 1.13.3 cluster in a vSphere private cloud. The VMs are running Ubuntu 18.04. Firewalls are turned off for testing purposes. yet the initialization is failing due to a refused connection. Is there something else that could be causing this other than ports being blocked? I'm new to k8s and am trying to wrap my head around all of this. </p>
<p>I've placed a vsphere.conf in /etc/kubernetes/ as shown in this gist.
<a href="https://gist.github.com/spstratis/0395073ac3ba6dc24349582b43894a77" rel="noreferrer">https://gist.github.com/spstratis/0395073ac3ba6dc24349582b43894a77</a></p>
<p>I've also created a config file to point to when I run <code>kubeadm init</code>. Here's the example of it'\s content.
<a href="https://gist.github.com/spstratis/086f08a1a4033138a0c42f80aef5ab40" rel="noreferrer">https://gist.github.com/spstratis/086f08a1a4033138a0c42f80aef5ab40</a></p>
<p>When I run
<code>sudo kubeadm init --config /etc/kubernetes/kubeadminitmaster.yaml</code>
it times out with the following error. </p>
<pre><code>[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
</code></pre>
<p>Checking <code>sudo systemctl status kubelet</code> shows me that the kubelet is running. I have the firewall on my master VM turned off for now for testing purposes so that I can verify the cluster will bootstrap itself.</p>
<pre><code> Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 2019-02-16 18:09:58 UTC; 24s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 16471 (kubelet)
Tasks: 18 (limit: 4704)
CGroup: /system.slice/kubelet.service
└─16471 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cloud-config=/etc/kubernetes/vsphere.conf --cloud-provider=vsphere --cgroup-driver=systemd --network-plugin=cni --pod-i
</code></pre>
<p>Here are some additional logs below showing that the connection to <a href="https://192.168.0.12:6443/" rel="noreferrer">https://192.168.0.12:6443/</a> is refused. All of this seems to be causing kubelet to fail and prevent the init process from finishing. </p>
<pre><code> Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.633721 16471 kubelet.go:2266] node "k8s-master-1" not found
Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.668213 16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master-1&limit=500&resourceVersion=0: dial tcp 192.168.0.1
Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.669283 16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.0.12:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.12:6443: connect: connection refused
Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.670479 16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.12:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-1&limit=500&resourceVersion=0: dial tcp 192.1
Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.734005 16471 kubelet.go:2266] node "k8s-master-1" not found
</code></pre>
| Stavros_S | <p>In order to address the error (dial tcp 127.0.0.1:10248: connect: connection refused.), run the following:</p>
<pre class="lang-sh prettyprint-override"><code>sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo kubeadm reset
sudo kubeadm init
</code></pre>
<p><strong>Use the same commands if the same error occurs while configuring a worker node.</strong></p>
| Kirik |
<p>I am building a Kubernetes cluster using kubeadm and have an issue with a single node.<br />
The worker nodes are running with sub-interfacing and policy based routing, which work as intended; however, out of the 4 worker nodes, if pods are moved to one of them, they fail liveness and readiness checks over http.<br />
I am using Kubernetes version 1.26.1, calico 3.25.0, metallb 0.13.9, and ingress-nginx 4.5.0.
The cluster stood up with little issue; outside of getting the policy based routing on the nodes worked out. Calico and MetalLB stood up and work as well.
The issue now is when I stand up the ingress-nginx controllers and force the pods onto a specific worker node. Standing them up and running them on the other nodes works and I can curl the LoadBalancer IP; however, while testing, when the ingress-nginx pods are moved to a specific node, the liveness and readiness checks fail. Moving the pods back to any other worker node, they come up and run just fine.
I've been verifying the routes and iptables on all the nodes; as well as, watching the interfaces via tcpdump, but I've not narrowed down the issue.</p>
<p>For the simple things:</p>
<ul>
<li>kernel parameters and loaded modules between the nodes are the same</li>
<li>No logs in messages/crio is showing an issue with starting the pod</li>
<li>the calico and metallb pods are working on the problem node</li>
<li>I've rebuilt the cluster since noticing the issue, and prior builds cert-manager was having issues on the node, as well as a few other random test deployments I've tried</li>
</ul>
<p>Other observations:</p>
<ul>
<li>From within the pods while they are running, I can hit external websites via curl (DNS and outbound traffic work).</li>
<li>Using tcpdump on the 'any' interface of the problem node, I can see the pod and the Kubernetes internal API IP communicate.</li>
<li>I can't hit the pod's IP, the service IP, or anything else from the problem node or another member node.</li>
<li>The namespace events aren't showing any issues except for the liveness and readiness probes failing.</li>
<li>The endpoints for the services aren't being filled while on the problem node (although this isn't a surprise).</li>
<li>Watching the traffic over the vxlan.calico interface isn't showing one-way traffic only; there are responses to traffic that is making it through.</li>
</ul>
<p>I'm at a loss on where to look for the root issue. This has been going on for over a week and I could use some help.</p>
| mgcdrd | <p>I found out what I was doing to cause the issue in the first place, so will document it just in case someone runs across the same scenario.</p>
<p>Some more background on this, as it is very niche. Due to some limitations we face, the worker nodes have 1 physical interface which is broken out into 2 additional sub-interfaces to allow for vlan-tagged traffic. This being the case, I wrote iproute policy rules to direct traffic between the logical interfaces. So in summation, eth2 (the one actually cabled up) has logical interfaces eth2, eth2.3 and eth2.4, all on different subnets.</p>
<p>The issue I caused was writing rules for the primary interface, eth2. This was causing the kubelet traffic for liveness and readiness probes to be mis-routed and not actually follow the kube-proxy iptables rules and calico's felix routing rules. Once the policy rules for the primary interface were removed and the pods restarted (this last bit was more of my impatience) traffic flowed as it should and the pods came up and the probes finished satisfactorily.</p>
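<p>Purely for illustration, a sketch of the kind of rules involved; the subnets and table numbers below are made up and not taken from my actual nodes:</p>
<pre><code># rules for the tagged sub-interfaces are fine and stay in place
ip rule add from 10.10.3.0/24 lookup 103   # eth2.3
ip rule add from 10.10.4.0/24 lookup 104   # eth2.4

# a rule like this for the primary interface was the mistake: it steered the kubelet's
# probe traffic past the kube-proxy/felix rules, so it had to be removed
ip rule del from 10.10.2.0/24 lookup 102   # eth2 (primary)
</code></pre>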
| mgcdrd |
<p>We have a Grafana Service with two Grafana PODs in our Kubernetes Cluster. Everytime an alert is triggered both instances will fire the alert.</p>
<p>To prevent this from happening, we tried activating the HA alerting, which basically consists of the following configuration:</p>
<pre><code>[unified_alerting]
enabled = true
ha_listen_address = ${POD_IP}:9094
ha_peers = ${POD_IP}:9094
</code></pre>
<p>Since every POD only knows its own IP address <code>${POD_IP}</code>, we are not able to set the ha_peers value correctly to contain all instances (as described in the Grafana documentation). Therefore we still get duplicate alerts.</p>
<p>Also, if one instance is terminated and another will start, I'm not quite sure how the ha_peers of the remaining active POD will be updated.</p>
<p>We'd like to avoid using workarounds like fixed IPs because this would go against Kubernetes practices.</p>
<p>Does anyone one know how to circumvent or solve this problem?</p>
| elcaos | <p>As Jan Garaj already mentioned, headless services are the way to go.</p>
<p>If the problem still occurs after using a headless service, it may have something to do with the network setup/firewall. Grafana uses gossip protocols to synchronize the pods. Gossip uses both TCP and UDP on port 9094. We only accepted TCP but not UDP. After allowing traffic with UDP on port 9094 the alerts were deduplicated.</p>
<p><strong>TL;DR:</strong> Allow TCP and UDP traffic on port 9094 as described in <a href="https://grafana.com/docs/grafana/latest/alerting/high-availability/enable-alerting-ha/" rel="nofollow noreferrer">Grafana Documentation</a>!</p>
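<p>For completeness, a sketch of what the headless Service for the gossip traffic could look like; the name, namespace and selector are assumptions and must match your Grafana deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: grafana-alerting
  namespace: monitoring
spec:
  clusterIP: None                    # headless: DNS returns the IPs of all Grafana pods
  selector:
    app.kubernetes.io/name: grafana
  ports:
  - name: gossip-tcp
    port: 9094
    protocol: TCP
  - name: gossip-udp
    port: 9094
    protocol: UDP
</code></pre>
<p>ha_peers would then point at that Service, e.g. <code>ha_peers = grafana-alerting.monitoring.svc.cluster.local:9094</code>, while ha_listen_address keeps using <code>${POD_IP}:9094</code>.</p>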
| Erik156 |
<p>I have a SpringBoot application with a class that has scheduled functions.
I have another class 'ConditionalScheduler', that is supposed to manage the schedulers based on some properties in my application config.</p>
<p>Code snippets:</p>
<pre><code>@Configuration
@EnableScheduling
public class ConditionalScheduler {
@Bean
@ConditionalOnProperty(value = "custom-property.custom-sub-property.enabled", havingValue = "true", matchIfMissing = false)
public Scheduler scheduler() {
return new Scheduler();
}
}
</code></pre>
<pre><code>
@Slf4j
public class Scheduler {
@Scheduled(initialDelay = 0, fixedDelay = 1, timeUnit = TimeUnit.DAYS)
public void run() {
.....
}
}
</code></pre>
<p>This works well when I run it locally.
But when I deploy it on a k8s cluster it does not create the scheduler bean.
Even when I set the matchIfMissing property to 'true' it does not start anything.
I checked all places for typos and there is nothing.</p>
<p>Does anyone have an idea why this might happen?
I am happy to provide more information if needed.</p>
<p>Thanks in advance
Paul</p>
| P D | <p>please check whether your configuration file is loaded correctly in Kubernetes. I suggest you to write a simple function to retrieve property value from configuration file to see if it works.</p>
| U Ang |
<p>I am trying to migrate a socket io service from GCP (App Engine) to a kubernetes cluster.
Everything works fine on the GCP side (we have one instance of the server without replicas).
The migration to k8s is going very well, except that when connecting the client socket to the server, it does not receive some information:</p>
<ul>
<li><p>In transport 'polling': Of course, as there are two pods, this doesn't work properly anymore and the client socket keeps disconnecting / reconnecting in a loop.</p>
</li>
<li><p>In 'websocket' transport: The connection is correctly established, the client can receive data from the server in 'broadcast to all client' mode => <code>socket.emit('getDeviceList', os.hostname())</code> but, as soon as the server tries to send data only to the concerned client <code>io.of(namespace).to(socket.id).emit('getDeviceList', JSON.stringify(obj))</code>, this one doesn't receive anything...</p>
</li>
<li><p>Moreover, I modified my service to have only one pod for a test, the polling mode works correctly, but, I find myself in the same case as the websocket mode => I can't send an information to a precise client...</p>
</li>
</ul>
<p>Of course, the same code on the App Engine side works correctly and the client receives everything correctly.</p>
<p>I'm working with:</p>
<pre><code>"socket.io": "^3.1.0",
"socket.io-redis": "^5.2.0",
"vue": "^2.5.18",
"vue-socket.io": "3.0.7",
</code></pre>
<p>My server side configuration:</p>
<pre><code>var io = require('socket.io')(server, {
pingTimeout: 5000,
pingInterval : 2000,
cors: {
origin: true,
methods: ["GET", "POST"],
transports: ['websocket', 'polling'],
credentials: true
},
allowEIO3: true
});
io.adapter(redis({ host: redis_host, port: redis_port }))
</code></pre>
<p>My front side configuration:</p>
<pre><code>Vue.use(new VueSocketIO({
debug: true,
  connection: 'path_to_the_socket_io/namespace',
options: {
query: `id=..._timestamp`,
transports: ['polling']
}
}));
</code></pre>
<p>My ingress side annotation:</p>
<pre><code>kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.global-static-ip-name: ip-loadbalancer
meta.helm.sh/release-name: xxx
meta.helm.sh/release-namespace: xxx-release
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.ingress.kubernetes.io/proxy-connect-timeout: 10800
nginx.ingress.kubernetes.io/proxy-read-timeout: 10800
nginx.ingress.kubernetes.io/proxy-send-timeout: 10800
nginx.org/websocket-services: app-sockets-cluster-ip-service
</code></pre>
<p>My question is : why i can get broadcast to all user message and not specific message to my socket ?</p>
<p>Can someone try to help me ? :)</p>
<p>Thanks a lot !</p>
| Adrien DEBLOCK | <p>I found the solution during the day and will share it.</p>
<p>In fact, the problem is not due to the Kubernetes cluster but due to the socket.io and socket.io-redis adapter versions.</p>
<p>I was using <code>socket.io: 3.x.x</code> with <code>socket.io-redis: 5.x.x</code>.
In fact, I need to use <code>socket.io-redis: 6.x.x</code> with this version of socket.io :)</p>
<p>You can find the compatible version of socket io and redis adapter here:
<a href="https://github.com/socketio/socket.io-redis-adapter#compatibility-table" rel="nofollow noreferrer">https://github.com/socketio/socket.io-redis-adapter#compatibility-table</a></p>
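<p>For reference, the bump could look like this (assuming npm; pick the exact versions from the compatibility table above):</p>
<pre><code>npm install socket.io@3 socket.io-redis@6
</code></pre>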
<p>Thanks a lot.</p>
| Adrien DEBLOCK |
<p>I am trying to establish <code>SSH</code> authentication between Jenkins and GitHub. For the same, I am using the kubernetes secret to store the private and public key and I am mounting the secrets when the pod is created. Command I have used to create the secret:</p>
<pre><code>kubectl create secret generic test-ssh --from-file=id_rsa=id_rsa --from-file=id_rsa.pub=id_rsa.pub --namespace jenkins
</code></pre>
<p>and mapped it in pod configuration as:</p>
<pre><code>volumes:
- secretVolume:
mountPath: "/root/.ssh"
secretName: "test-ssh"
</code></pre>
<p>When the pod is created, I can see that the secret is mapped correctly in the <code>~/.ssh</code> folder as shown below.</p>
<p><a href="https://i.stack.imgur.com/8Zyfv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Zyfv.png" alt="enter image description here" /></a></p>
<p>but the problem is the <code>~/.ssh</code> folder itself has the sticky bit permission enabled</p>
<p><a href="https://i.stack.imgur.com/UGQR6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UGQR6.png" alt="enter image description here" /></a></p>
<p>and this is preventing the builds from adding the <code>known_hosts</code> file when the <code>ssh-keyscan</code> command is executed</p>
<pre><code>ssh-keyscan github.com >> ~/.ssh/known_hosts
bash: ~/.ssh/known_hosts: Read-only file system
</code></pre>
<p>I was hoping to achieve one of the two solutions I can think of</p>
<ol>
<li>Remove the sticky permissions from <code>~/.ssh</code> folder after it is created</li>
<li>While mounting the kubernetes secret, mount it without sticky permissions</li>
</ol>
<p>Could anyone help me to understand if there is a possibility to achieve this?
I have already tried <code>chmod -t .ssh</code> and it gives me the same error <code>chmod: changing permissions of '.ssh': Read-only file system</code></p>
<p>The owner of the <code>~/.ssh</code> folder is <code>root</code> and I have logged in as <code>root</code> user. I have confirmed this by running the <code>whoami</code> command.</p>
| Prasann | <p>Secret volumes aren't mounts that allow permissions to be altered, so you have to do some trickery to get this to work.
Recently I ran into a similar issue and had to make it work. We use a custom image to perform the ssh calls, so we added the .ssh dir and the known_hosts file to the image, setting the permissions in the Dockerfile. I then used subPath to target the id_rsa and id_rsa.pub files in the deployment. I don't have my exact mappings handy, but a sketch of the idea follows below.
I think the 'official' answer was to mount the files in a different location and cp them where they need to be, either in an initContainer using a shared mount or in the entrypoint file/binary.</p>
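<p>A hedged sketch of the subPath variant, written as a plain Kubernetes pod spec rather than the Jenkins pod-template shorthand from the question (the secret name is from the question, the rest is assumed); because only the individual files are mounted, <code>/root/.ssh</code> itself stays writable and <code>ssh-keyscan</code> can create <code>known_hosts</code>:</p>
<pre><code>    volumeMounts:
    - name: ssh-keys
      mountPath: /root/.ssh/id_rsa
      subPath: id_rsa
    - name: ssh-keys
      mountPath: /root/.ssh/id_rsa.pub
      subPath: id_rsa.pub
  volumes:
  - name: ssh-keys
    secret:
      secretName: test-ssh
      defaultMode: 0400   # keep the private key readable only by its owner
</code></pre>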
| mgcdrd |
<p>I'm new to Pinot and K8s, I have setup Pinot in K8s environment using helm, and trying to add authentication properties using this instruction(<a href="https://docs.pinot.apache.org/operators/tutorials/authentication/basic-auth-access-control" rel="nofollow noreferrer">https://docs.pinot.apache.org/operators/tutorials/authentication/basic-auth-access-control</a>).</p>
<p>How do I add these properties to those config files and make them work? (e.g. /var/pinot/controller/config/pinot-controller.conf) The config files are read-only, and I don't think we can use commands like the one below in a K8s environment?</p>
<pre><code>bin/pinot-admin.sh StartController -configFileName /path/to/controller.conf
</code></pre>
| aqqqqqq | <p>In your scenario you may try updating the ConfigMap by running</p>
<p>‘<em>kubectl get configmap <CONFIGMAP_NAME> -o yaml > configmap.yaml</em>’. After that, edit configmap.yaml and look for pinot-controller.conf; there you can edit the parameters as required. Then apply the changes by running ‘<em>kubectl apply -f configmap.yaml</em>’. Once done, restart the pods for the change to take effect. Attached is some documentation for a good read.[1][2]</p>
<p>[1] <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/configmap/</a></p>
<p>[2] <a href="https://docs.pinot.apache.org/operators/tutorials/authentication/basic-auth-access-control" rel="nofollow noreferrer">https://docs.pinot.apache.org/operators/tutorials/authentication/basic-auth-access-control</a></p>
| Ray John Navarro |
<p>I'm currently using minikube and I'm trying to access my application by utilizing the <code>minikube tunnel</code> since the service type is <code>LoadBalancer</code>.</p>
<p>I'm able to obtain an external IP when I execute the <code>minikube tunnel</code>, however, when I try to check it on the browser it doesn't work. I've also tried Postman and curl, they both don't work.</p>
<p>To add to this, if I shell into the pod I can use curl and it does work. Furthermore, I executed <code>kubectl port-forward</code> and I was able to access my application through <strong>localhost</strong>.</p>
<p>Does anyone have any idea why I'm not able to access my application even though everything seems to be running correctly?</p>
| RebeloX | <p>In my case <code>minikube service <serviceName></code> solved this issue.</p>
<p>For further details look <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">here</a> in minikube docs.</p>
| mkessler |
<p>I recently deployed Minio stand-alone on a K0s pod. I can successfully use mc on my laptop to authenticate and create a bucket on my pod’s ip:9000.</p>
<p>But when I try to access the web console and login I get a POST error to ip:9000 and I am unable to login.</p>
<p>Would anyone know what’s causing this?</p>
| user3720568 | <p>I've just started a minio container to verify this, and in fact there are two ports you need to publish, which are <code>9000</code> and <code>9001</code>.</p>
<p>You can reach the admin console on port <code>9001</code> and the API on port <code>9000</code>, hence your <code>mc</code> command which targets port <code>9000</code> works but trying to login on port <code>9000</code> fails.</p>
<p><a href="https://i.stack.imgur.com/eT5N3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eT5N3.png" alt="MinIO admin console on port 9001" /></a></p>
<h2>Edit</h2>
<p>Now that I understand the problem better thanks to your comments, I've tested on my Docker setup what happens when you log in. There is indeed a <code>POST</code> request happening when clicking on <code>Login</code>, but it goes to port <code>9001</code>, not <code>9000</code>, so it seems your web console somehow issues requests to the wrong port.</p>
<p>Here is a screenshot of the Network tab in my DevTools showing the request that's being issued when I press Login.
<a href="https://i.stack.imgur.com/UdDfQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UdDfQ.png" alt="Chrome Dev Tools: Login request" /></a></p>
<p>I've copied the <code>curl</code> for this request from the DevTool and added the <code>-i</code> flag so you can see the HTTP response code. You could try this with your appropriate <code>accessKey</code> and <code>secretKey</code> of course.</p>
<pre class="lang-sh prettyprint-override"><code>curl -i 'http://localhost:9001/api/v1/login' -H 'Connection: keep-alive' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.83 Safari/537.36' -H 'Content-Type: application/json' -H 'Accept: */*' -H 'Sec-GPC: 1' -H 'Origin: http://localhost:9001' -H 'Sec-Fetch-Site: same-origin' -H 'Sec-Fetch-Mode: cors' -H 'Sec-Fetch-Dest: empty' -H 'Referer: http://localhost:9001/login' -H 'Accept-Language: en-US,en;q=0.9' -H 'Cookie: PGADMIN_LANGUAGE=en' --data-raw '{"accessKey":"minio-root-user","secretKey":"minio-root-password"}' --compressed
</code></pre>
<p>Expected result:</p>
<pre><code>HTTP/1.1 204 No Content
Server: MinIO Console
Set-Cookie: token=AFMyDDQmtaorbMvSfaSQs5N+/9pYgK/rartN8SrGawE3ovm9AoJ5zz/eC9tnY7fRy5k4LChYcJKvx0rWyHr/+4XN2JnqdsT6VLDGI0cTasWiOo87ggj5WEv/cK4OyFlWiv5cJA8GUgQhVmYSk7MqPCVnBlfrvXhF7FaXhy85zAvzuGnExaBv9/8vZFs2LDiDF/9RX3Skb2gzIPIKije0++q4mwllluLIrhxyGrDgO16u33fWnPMjtbmGvsaOJAjx178h19BxbVnacBFyUv7ep+TFQ3xTRFfHefIMQK9lulMZOb5/oZUgEPolZpiB1Z9IJoNHVnUDJRnIIQXjv0bti/Wkz5RnWSoFqDjUWBopqFOuWYM/GMDCVxMrXJgQ/iDSg12b0uo6sOFbtvokyccUHKp5TtEznadzMf3Ga9iiZ4WAAXqONTC4ACMGaHxgUPVD7NvlYkyOlb/dPL75q0g3Qj+hiI5FELqPLEXgXMFHAi0EQDsNo4IXeqlxTJpxQYTUXRgrx1Kg6IlRJ5P9eIKwnj/eXmvXe4lvQSXR7iwEviBa1NVl1alLP0d7eib75IfhiMo7Hvyywg==; Path=/; Expires=Sat, 26 Mar 2022 13:23:34 GMT; Max-Age=3600; HttpOnly; SameSite=Lax
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Xss-Protection: 1; mode=block
Date: Sat, 26 Mar 2022 12:23:34 GMT
Connection: close
</code></pre>
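<p>Translated to the K0s side, this means both ports have to be exposed. A sketch only; the names, labels and Service type are assumptions:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  selector:
    app: minio
  ports:
  - name: api        # S3 API, the one mc talks to
    port: 9000
    targetPort: 9000
  - name: console    # the web console login posts here
    port: 9001
    targetPort: 9001
</code></pre>
<p>It can also help to start the server with an explicit console port, e.g. <code>minio server /data --console-address ":9001"</code>, so the console does not pick a random port.</p>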
| Mushroomator |
<p>I am new to k8s and need some help, plz.</p>
<p>I want to make a change in a pod's deployment configuration and change readOnlyRootFilesystem to false.</p>
<p>This is what I am trying to do, but it doesn't seem to work. Plz suggest what's wrong:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl patch deployment eric-ran-rdm-singlepod -n vdu -o yaml -p {"spec":{"template":{"spec":{"containers":[{"name":"eric-ran-rdm-infra":{"securityContext":[{"readOnlyRootFilesystem":"true"}]}}]}}}}
</code></pre>
<p><a href="https://i.stack.imgur.com/cEIYl.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>Thanks very much!!</p>
| Nitisha | <p>Your JSON is invalid. You need to make sure you are providing valid JSON and it should be in the correct structure as defined by the k8s API as well. You can use <a href="https://jsonlint.com" rel="nofollow noreferrer">jsonlint.com</a>.</p>
<pre class="lang-json prettyprint-override"><code>{
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "eric-ran-rdm-infra",
"securityContext": {
"readOnlyRootFilesystem": "true"
}
}
]
}
}
}
}
</code></pre>
<blockquote>
<p>Note: I have only checked the syntax of this JSON and <strong>not</strong> tested the structure against the k8s API, but I think it should be right; please correct me if I am wrong.</p>
</blockquote>
<p>It might be easier to specify a deployment in a <code>.yaml</code> file and just apply that using <code>kubectl apply -f my_deployment.yaml</code>.</p>
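<p>For completeness, a sketch of the same patch as a one-liner; note the single quotes around the JSON (so the shell passes it through untouched) and the unquoted boolean, which you would set to <code>false</code> if that is what you actually want:</p>
<pre><code>kubectl patch deployment eric-ran-rdm-singlepod -n vdu \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"eric-ran-rdm-infra","securityContext":{"readOnlyRootFilesystem":false}}]}}}}'
</code></pre>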
| Mushroomator |