Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>k8s file like this from bitnami</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: minio
name: minio
spec:
replicas: 1
selector:
matchLabels:
app: minio
serviceName: minio
template:
metadata:
labels:
app: minio
spec:
containers:
- env:
- name: BITNAMI_DEBUG
value: "false"
- name: MINIO_SCHEME
value: http
- name: MINIO_FORCE_NEW_KEYS
value: "no"
- name: MINIO_ROOT_USER
value: linkflow
- name: MINIO_ROOT_PASSWORD
value: Sjtu403c@#%
- name: MINIO_BROWSER
value: "on"
- name: MINIO_PROMETHEUS_AUTH_TYPE
value: public
- name: MINIO_CONSOLE_PORT_NUMBER
value: "9001"
image: registry.aliyuncs.com/linkflow/minio-bitnami
livenessProbe:
failureThreshold: 5
httpGet:
path: /minio/health/live
port: minio-api
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
name: minio
ports:
- containerPort: 9000
name: minio-api
protocol: TCP
- containerPort: 9001
name: minio-console
protocol: TCP
readinessProbe:
failureThreshold: 5
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
tcpSocket:
port: minio-api
timeoutSeconds: 1
resources:
limits:
memory: 1Gi
requests:
memory: 1G
securityContext:
runAsNonRoot: true
runAsUser: 1001
volumeMounts:
- mountPath: /data
name: data
securityContext:
fsGroup: 1001
volumeClaimTemplates:
- kind: PersistentVolumeClaim
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: default
volumeMode: Filesystem
---
apiVersion: v1
kind: Service
metadata:
labels:
app: minio
name: minio
spec:
ports:
- name: minio-api
port: 9000
targetPort: minio-api
- name: minio-console
port: 9001
targetPort: minio-console
selector:
app: minio
</code></pre>
<p>When I use local k8s port-forward, it runs OK and the web console can be seen at <a href="http://127.0.0.1/minio" rel="nofollow noreferrer">http://127.0.0.1/minio</a>:</p>
<pre><code>kubectl port-forward svc/minio 9001:9001
</code></pre>
<p>my ingress</p>
<pre><code> - backend:
service:
name: minio
port:
number: 9001
path: /minio
pathType: ImplementationSpecific
</code></pre>
<p>But when I use an Azure SLB with a domain, https://hostname/minio gives an error:
Uncaught SyntaxError: Unexpected token '<'</p>
<p>I tried adding the env MINIO_BROWSER_REDIRECT_URL, but it does not work. What should I do?</p>
| xlovepython | <p>The ingress path needs to be changed to <code>/</code>:</p>
<pre><code> - backend:
service:
name: minio
port:
number: 9001
path: /
pathType: ImplementationSpecific
</code></pre>
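<p>If serving the console from <code>/</code> on the main host is not desirable, a possible alternative is to give the console its own hostname instead of a sub-path. This is only a sketch; the hostname below is an assumption and must resolve to the load balancer:</p>
<pre><code>  - host: minio-console.example.com   # assumed hostname, replace with your own
    http:
      paths:
        - backend:
            service:
              name: minio
              port:
                number: 9001
          path: /
          pathType: Prefix
</code></pre>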
| xlovepython |
<p>I have setup a cluster on AWS using kops. I want to connect to the cluster from my local machine.</p>
<p>I have to do <code>cat ~/.kube/config</code>, copy the content and replace it with my local kube config to access to the cluster.</p>
<p>The problem is that it expires after certain amount of time. Is there a way to get permanent access to the cluster?</p>
| confusedWarrior | <p>Not sure if you can get permanent access to the cluster, but based on the official <code>kOps</code> <a href="https://kops.sigs.k8s.io/cli/kops_update_cluster/" rel="nofollow noreferrer">documentation</a> you can just run the <code>kops update cluster</code> command with the <code>--admin={duration}</code> flag and set the expiry time to a very large value.</p>
<p>For example, let's set it to almost 10 years:</p>
<pre><code>kops update cluster {your-cluster-name} --admin=87599h --yes
</code></pre>
<p>Then just copy your config file to the client as usual.</p>
<p>Based on the official <a href="https://github.com/kubernetes/kops/blob/master/docs/releases/1.19-NOTES.md" rel="nofollow noreferrer">release notes</a>, to go back to the previous behaviour just use the value <code>87600h</code>.</p>
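<p>Putting it together, a minimal sketch of the whole flow (the cluster name and host below are placeholders, not taken from your setup):</p>
<pre class="lang-sh prettyprint-override"><code># Re-issue admin credentials with a long expiry (documented --admin flag)
kops update cluster my-cluster.example.com --admin=87599h --yes

# Copy the refreshed kubeconfig from the machine where kops ran to the local machine
scp user@kops-host:~/.kube/config ~/.kube/config

# Verify access locally
kubectl get nodes
</code></pre>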
| Mikolaj S. |
<p>I have deployed a simple app (NGINX) and a LoadBalancer service in Kubernetes.
I can see that the pods as well as the service are running, but calling the LoadBalancer external IP gives a server error: site can't be reached. Any suggestions please?</p>
<p><strong>app.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p><strong>Service.Yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx
</code></pre>
<p><a href="https://i.stack.imgur.com/W2nVy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W2nVy.png" alt="enter image description here" /></a>
P.S. Attached is the output from the terminal.</p>
| Vicky | <p>If you are using Minikube to access the service then you might need to run one extra command. But if this is on a cloud provider then you have an error in your service file.</p>
<p>Please ensure that you use two spaces for indentation in the YAML file; your indentation is messed up because you have only added one space. You also made a mistake in the last line of the <code>service.yaml</code> file.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx
</code></pre>
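<p>To narrow it down, you could check whether the service ever receives an external IP and, if this is Minikube, expose it with the extra step mentioned above (a quick sketch; on a cloud provider the <code>EXTERNAL-IP</code> should be populated automatically):</p>
<pre><code># On any cluster: a pending EXTERNAL-IP means no load balancer was provisioned
kubectl get svc nginx-service

# On Minikube only: make the LoadBalancer service reachable
minikube tunnel
minikube service nginx-service --url
</code></pre>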
| sidharth vijayakumar |
<ul>
<li><p>Can Kubernetes pods share significant amount of memory?</p>
</li>
<li><p>Does copy-on-write style forking exist for pods?</p>
</li>
</ul>
<p>The purpose is to make pods spawn faster and use less memory.</p>
<p>Our scenario is that we have a dedicated game server to host in kubernetes. The problem is that one instance of the dedicated game server would take up a few GB of memory upfront (e.g. 3 GBs).</p>
<p>Also, we have a few such docker images of game servers, each for game A, game B... Let's call a pod that's running game A's image for game A <code>pod A</code>.</p>
<p>Let's say we now have 3 x <code>pod A</code>, 5 x <code>pod B</code>. Now players rushing into game B, so I need let's say another 4 * <code>pod B</code> urgently.</p>
<p>I can surely spawn 4 more <code>pod B</code>. Kubernetes supports this perfectly. However there are 2 problems:</p>
<ul>
<li>The booting of my game server is very slow (30s - 1min). Players don't want to wait.</li>
<li>More importantly for us, the cost of having this many pods that take up so much memory is very high, because pods do not share memory as far as I know. Whereas on a plain old EC2 machine or bare metal, processes can share memory because they can fork and then copy-on-write.</li>
</ul>
<p>Copy-on-write style forking and memory sharing seems to solve both problems.</p>
| Boyang | <p>One of Kubernetes' assumptions is that <em>pods</em> are scheduled on different Nodes, which contradicts the idea of sharing common resources (does not apply for storage where <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">there are many options and documentation available</a>). The situation is different when it comes to sharing resources between <em>containers in one pod</em>, but for your issue this doesn't apply.</p>
<p>However, it seems that there is some possibility to share memory - not well documented and I guess very uncommon in Kubernetes. Check my answers with more details below:</p>
<blockquote>
<p>Can Kubernetes pods share significant amount of memory?</p>
</blockquote>
<p>What I found is that pods can share a common <a href="https://en.wikipedia.org/wiki/Inter-process_communication" rel="nofollow noreferrer">IPC</a> with the host (node).
You can check <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">Pod Security Policies</a>, especially <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer">field <code>hostIPC</code></a>:</p>
<blockquote>
<p><strong>HostIPC</strong> - Controls whether the pod containers can share the host IPC namespace.</p>
</blockquote>
<p>Some usage examples and possible security issues <a href="https://github.com/BishopFox/badPods/tree/main/manifests/hostipc#bad-pod-7-hostipc" rel="nofollow noreferrer">can be found here</a>:</p>
<ul>
<li><a href="https://github.com/BishopFox/badPods/tree/main/manifests/hostipc#inspect-devshm---look-for-any-files-in-this-shared-memory-location" rel="nofollow noreferrer">Shared <code>/dev/sh</code> directory</a></li>
<li><a href="https://github.com/BishopFox/badPods/tree/main/manifests/hostipc#look-for-any-use-of-inter-process-communication-on-the-host" rel="nofollow noreferrer">Use existing IPC facilities</a></li>
</ul>
<p>Keep in mind that this solution is not common in Kubernetes. Pods with elevated <a href="https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/#why-is-podsecuritypolicy-going-away" rel="nofollow noreferrer">privileges are granted broader permissions than needed</a>:</p>
<blockquote>
<p>The way PSPs are applied to Pods has proven confusing to nearly everyone that has attempted to use them. It is easy to accidentally grant broader permissions than intended, and difficult to inspect which PSP(s) apply in a given situation.</p>
</blockquote>
<p>That's why the Kubernetes team marked Pod Security Policies as deprecated from Kubernetes <code>v1.21</code> - <a href="https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/" rel="nofollow noreferrer">check more information in this article</a>.</p>
<p>Also, if you are using multiple nodes in your cluster you <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">should use nodeSelector</a> to make sure that pods will be assigned to same node that means they will be able to share one (host's) IPC.</p>
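<p>As a rough illustration only (not a recommendation, given the security caveats above), a pod that shares the host's IPC namespace and is pinned to a specific node could look like this; the node name is an assumption:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: hostipc-demo
spec:
  hostIPC: true                      # share the host (node) IPC namespace
  nodeSelector:
    kubernetes.io/hostname: node-1   # assumed node name - keeps co-operating pods on the same node
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
</code></pre>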
<blockquote>
<p>Does copy-on-write style forking exist for pods?</p>
</blockquote>
<p>I did some research and I didn't find any information about this possibility, so I think it is not possible.</p>
<hr />
<p>I think the main issue is that your game architecture is not "very suitable" for Kubernetes. Check these articles and websites about dedicated game servers in Kubernetes - maybe you will find them useful:</p>
<ul>
<li><a href="https://agones.dev/site/" rel="nofollow noreferrer">Agones</a></li>
<li><a href="https://www.gamedeveloper.com/programming/scaling-dedicated-game-servers-with-kubernetes-part-3-scaling-up-nodes" rel="nofollow noreferrer">Scaling Dedicated Game Servers with Kubernetes: Part 3 – Scaling Up Nodes</a></li>
<li><a href="https://cloud.google.com/files/DedicatedGameServerSolution.pdf" rel="nofollow noreferrer">Google Cloud - Dedicated Game Server Solution</a></li>
<li><a href="https://cloud.google.com/game-servers" rel="nofollow noreferrer">Google Cloud - Game Servers</a></li>
</ul>
| Mikolaj S. |
<p>Problem Statement:</p>
<p>I have a Pod which belongs to a workload, now I want to know the workload that initiated that Pod. One way of doing it right now is going through the <code>ownerReference</code> and then going up the chain recursively finding for the root parent workload that initiated the pod.</p>
<p>Is there a way I can directly know which root parent workload initiated the Pod?</p>
| ashu8912 | <p>First, please remember that pods created by a specific workload have that workload's name in the pod name. For example, pods defined in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments</a> follow this pod naming convention:</p>
<p><code><replicaset-name>-<some-string></code></p>
<p>and replica set name is:</p>
<p><code><deployment-name>-<some-string></code></p>
<p>So for example:</p>
<p><em>Pod name</em>: <code>nginx-66b6c48dd5-84rml</code>
<em>Replica set name</em>: <code>nginx-66b6c48dd5</code>
<strong>Deployment name</strong>: <code>nginx</code></p>
<p>So the first part of the name, i.e. everything that doesn't look like random letters / numbers, is the root workload name.</p>
<p>Only pods <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pods-in-a-statefulset" rel="nofollow noreferrer">defined in StatefulSet have ordinal indexes</a>, as follows:</p>
<p><code><statefulset-name>-<ordinal index></code></p>
<p>For example:</p>
<p><em>Pod name</em>: <code>web-0</code>
<strong>StafeulSet name</strong>: <code>web</code></p>
<p>Of course, based on workload name we are not able to know what kind of workload it is. Check the second part of my answer below.</p>
<hr />
<p>Well, pod names aside, it seems that your thinking is correct: the only way to find a "root" workload is to go through the chain recursively, looking up the next "parent" workload each time.</p>
<p>When you run the command <code>kubectl get pod {pod-name} -o json</code> (to get all information about the pod) there is only information about the level directly above (as you said, in the case of a pod defined in a deployment, the pod information only references the replica set).</p>
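<p>For a single level up, a quick check of a pod's direct owner can be done with <code>jsonpath</code> (a small sketch using the example pod name from above):</p>
<pre><code>kubectl get pod nginx-66b6c48dd5-84rml -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# ReplicaSet/nginx-66b6c48dd5
</code></pre>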
<p>I wrote a small bash script that recursively checks every workload's <code>ownerReferences</code> until it finds "root" workload (the root workload does not have <code>ownerRefernces</code>). It requires you to have <a href="https://github.com/stedolan/jq" rel="nofollow noreferrer"><code>jq</code> utility</a> installed on your system. Check this:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
function get_root_owner_reference {
# Set kind, name and namespace
kind=$1
name=$2
namespace=$3
# Get ownerReferences
owner_references=$(kubectl get $kind $name -o json -n $namespace | jq -r 'try (.metadata.ownerReferences[])')
# If ownerReferences does not exist, assume that it is the root workload; if it does exist, run get_root_owner_reference again
if [[ -z "$owner_references" ]]; then
resource_json=$(kubectl get $kind $name -o json -n $namespace)
echo "Kind: $(echo $resource_json | jq -r '.kind')"
echo "Name: $(echo $resource_json | jq -r '.metadata.name')"
else
get_root_owner_reference $(echo $owner_references | jq -r '.kind') $(echo $owner_references | jq -r '.name') $namespace
fi
}
# Get namespace if set
if [[ -z $3 ]]; then
namespace="default"
else
namespace=$3
fi
get_root_owner_reference $1 $2 $namespace
</code></pre>
<p>You need to provide two arguments - resource and name of the resource. Namespace name is optional (if not given it will use Kubernetes <code>default</code> namespace).</p>
<p><strong>Examples</strong>:
Pod defined in deployment:</p>
<pre><code>user@cloudshell:~/ownerRefernce$ ./get_root_owner_reference.sh pod nginx-66b6c48dd5-84rml
Kind: Deployment
Name: nginx
</code></pre>
<p>Pod created from CronJob:</p>
<pre><code>user@cloudshell:~/ownerRefernce$ ./get_root_owner_reference.sh pod hello-27247072-mv4l9
Kind: CronJob
Name: hello
</code></pre>
<p>Pod created straight from pod definition:</p>
<pre><code>user@cloudshell:~/ownerRefernce$ ./get_root_owner_reference.sh pod hostipc-exec-pod
Kind: Pod
Name: hostipc-exec-pod
</code></pre>
<p>Pod from other namespace:</p>
<pre><code>user@cloudshell:~/ownerRefernce$ ./get_root_owner_reference.sh pod kube-dns-679799b55c-7pzr7 kube-system
Kind: Deployment
Name: kube-dns
</code></pre>
| Mikolaj S. |
<p>I run a bare-metal Kubernetes cluster and want to map services onto URLs instead of ports (I used <code>NodePort</code> so far).</p>
<p>To achieve this I tried to install an <code>IngressController</code> to be able to deploy Ingress objects containing routing.</p>
<p>I installed the <code>IngressController</code> via helm:</p>
<pre><code>helm install my-ingress helm install my-ingress stable/nginx-ingress
</code></pre>
<p>and the deployment worked fine so far. To just use the node's domain name, I enabled <code>hostNetwork: true</code> in the <code>nginx-ingress-controller</code>.</p>
<p>Then, I created an Ingress deployment with this definition:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /testpath
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
</code></pre>
<p>which also deployed fine. Finally, when I try to access <code>http://my-url.com/testpath</code> I get a login-prompt. I nowhere set up login-credentials nor do I intend to do so as the services should be publicly available and/or handle authentication on their own.</p>
<p>How do I disable this behavior? I want to access the services just as I would use a <code>NodePort</code> solution.</p>
| Herry | <p>To clarify the case I am posting the answer (from the comments area) as Community Wiki.</p>
<p>The problem here was not in the configuration but in the environment - another ingress was running during the Longhorn deployment, which forced basic authentication on both of them.</p>
<p>To resolve that problem it was necessary to clean up all deployments.</p>
| kkopczak |
<p>I'm trying to spread my <code>ingress-nginx-controller</code> pods such that:</p>
<ul>
<li>Each availability zone has the same # of pods (+- 1).</li>
<li>Pods prefer Nodes that currently run the least pods.</li>
</ul>
<p>Following other questions here, I have set up Pod Topology Spread Constraints in my pod deployment:</p>
<pre><code> replicas: 4
topologySpreadConstraints:
- labelSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
- labelSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
</code></pre>
<p>I currently have 2 Nodes, each in a different availability zone:</p>
<pre><code>$ kubectl get nodes --label-columns=topology.kubernetes.io/zone,kubernetes.io/hostname
NAME STATUS ROLES AGE VERSION ZONE HOSTNAME
ip-{{node1}}.compute.internal Ready node 136m v1.20.2 us-west-2a ip-{{node1}}.compute.internal
ip-{{node2}}.compute.internal Ready node 20h v1.20.2 us-west-2b ip-{{node2}}.compute.internal
</code></pre>
<p>After running <code>kubectl rollout restart</code> for that deployment, I get 3 pods in one Node, and 1 pod in the other, which has a skew of <code>2 > 1</code>:</p>
<pre><code>$ kubectl describe pod ingress-nginx-controller -n ingress-nginx | grep 'Node:'
Node: ip-{{node1}}.compute.internal/{{node1}}
Node: ip-{{node2}}.compute.internal/{{node2}}
Node: ip-{{node1}}.compute.internal/{{node1}}
Node: ip-{{node1}}.compute.internal/{{node1}}
</code></pre>
<p>Why is my constraint not respected? How can I debug the pod scheduler?</p>
<p>My kubectl version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.0-beta.0.607+269d62d895c297", GitCommit:"269d62d895c29743931bfaaec6e8d37ced43c35f", GitTreeState:"clean", BuildDate:"2021-03-05T22:28:02Z", GoVersion:"go1.16", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| roim | <p><code>kubectl rollout restart</code> spins up new pods and then terminates old pods <strong>after</strong> all the new pods are up and running.</p>
<p>From the pod topology spread constraints <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#known-limitations" rel="nofollow noreferrer">known limitations</a> section, constraints don't remain satisfied when pods are removed; the recommended mitigation is to use the <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">Descheduler</a>, which you already seem to have been using, judging from your comment.</p>
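<p>For reference, the Descheduler has a strategy aimed exactly at this case. A sketch of a policy enabling it, based on the Descheduler docs (the exact API version and field names may differ between releases):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
    params:
      includeSoftConstraints: false   # only act on whenUnsatisfiable: DoNotSchedule constraints
</code></pre>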
| Joe Zeng |
<p>I'm new to Kubernetes and I have a use case where I want to read data from another deployment.</p>
<p>In the following file, the the <code>RabbitmqCluster</code> creates a default user. I want to extract the credentials of that user into a secret for use in other services that need to publish or subscribe to that broker:</p>
<pre><code>apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: broker
---
apiVersion: v1
kind: Secret
metadata:
name: broker-credentials-secret
type: Opaque
stringData:
username: $BROKER_USER # Available at $(kubectl get secret broker-default-user -o jsonpath='{.data.username}' | base64 --decode)
password: $BROKER_PASSWORD # Available at $(kubectl get secret broker-default-user -o jsonpath='{.data.password}' | base64 --decode)
</code></pre>
<p>My first thought was to separate into two different files, I could wait for the cluster to be ready and then <code>sed</code> the <code>BROKER_PASSWORD</code> and <code>BROKER_USER</code> variables into the second config that then deploys the secret.</p>
<p>My question is: is there a proper way to handle this scenario? Should I just separate these two into two different files and write documentation about their intended order of deployment? Or is there a better way of doing this?</p>
| jokarl | <p>Your thinking and approach are correct; this way (splitting into two files) seems to be the best option in this case - there is no way to dynamically set values in Kubernetes YAML from another running Kubernetes resource. Keep in mind that for the secret definition you don't have to use <code>stringData</code> and the <code>base64 --decode</code> command in <code>kubectl</code>. It does not make sense to decode values when they will be encoded again - better to just read the values as <code>base64</code> strings and use <code>data</code> instead of <code>stringData</code> - <a href="https://kubernetes.io/docs/concepts/configuration/secret/#overview-of-secrets" rel="nofollow noreferrer">check this</a>. Finally it should all look like this:</p>
<p><em>file-1.yaml</em>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: broker
</code></pre>
<p><em>file-2.yaml</em>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: broker-credentials-secret
type: Opaque
data:
username: BROKER_USER
password: BROKER_PASSWORD
</code></pre>
<p>Then you can run this one-liner (with <code>sed</code> commands + <a href="https://stackoverflow.com/questions/54032336/need-some-explaination-of-kubectl-stdin-and-pipe/54032630#54032630">using pipes</a>; I also deleted the <code>$</code> signs in the second YAML so the <code>sed</code> commands work properly):</p>
<pre><code>kubectl apply -f file-1.yaml && sed -e "s/BROKER_USER/$(kubectl get secret broker-default-user -o jsonpath='{.data.username}')/g" -e "s/BROKER_PASSWORD/$(kubectl get secret broker-default-user -o jsonpath='{.data.password}')/g" file-2.yaml | kubectl apply -f -
</code></pre>
| Mikolaj S. |
<p>I have the following deployment yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: pressbrief
spec:
replicas: 1
selector:
matchLabels:
app: app-pressbrief
template:
metadata:
labels:
app: app-pressbrief
spec:
containers:
</code></pre>
<p>I use the following command to run the deployment</p>
<pre><code>kubectl apply -f deployment.yaml
deployment.apps/pressbrief created
</code></pre>
<p>now if I change something in the containers template and run it again</p>
<pre><code>kubectl apply -f deployment.yaml
deployment.apps/pressbrief configured
</code></pre>
<p>I'll now see two pods running instead of one. I'd expect that since it's the same deployment, <strong>the old pod should be terminated, but it isn't</strong>. Perhaps it's important to mention that the old pod is in a "crash-loop" state (hence the reason I'm updating it).</p>
| Mike | <p>You need to change your update strategy. If you are using RollingUpdate, it will wait for the new pod to be in the Ready state before it starts terminating the first one. Use Recreate instead: it will terminate the old pod first and then create the new one.</p>
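<p>A minimal sketch of what that could look like in the deployment from the question (the <code>strategy</code> block uses standard Kubernetes fields):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: pressbrief
spec:
  replicas: 1
  strategy:
    type: Recreate        # terminate the old pod first, then create the new one
  selector:
    matchLabels:
      app: app-pressbrief
  template:
    metadata:
      labels:
        app: app-pressbrief
    spec:
      containers:
        # ... container spec unchanged
</code></pre>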
| Vishwas Karale |
<p>I'm looking for a way to find:</p>
<ol>
<li>The current usage of CPU and RAM of each pod running.</li>
<li>The configured CPU and RAM of each pod.</li>
</ol>
<p>One side is to identify the resource usage, and the other is to identify if it was patched manually or via the deploy YAML.</p>
| Frank N Stein | <p>The first part of your question is answered by the <code>kubectl top</code> command.
The second part is answered by the resources section of the pod spec.</p>
<p>You specify the initial cpu and memory and the max cpu and memory in the pod spec.</p>
<pre><code>spec:
containers:
- name: cpu-demo-ctr
image: vish/stress
resources:
limits:
cpu: "1"
memory: "400Mi"
requests:
cpu: "0.5"
memory: "200Mi"
</code></pre>
<p>There is a guide in the Kubernetes documentation here:
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">Assign CPU Resources to Containers and Pods</a></p>
| Henrik Hoegh |
<p>I am trying to host my own Nextcloud server using Kubernetes.</p>
<p>I want my Nextcloud server to be accessed from <code>http://localhost:32738/nextcloud</code> but every time I access that URL, it gets redirected to <code>http://localhost:32738/login</code> and gives me <code>404 Not Found</code>.</p>
<p>If I replace the path with:</p>
<pre><code>path: /
</code></pre>
<p>then, it works without problems on <code>http://localhost:32738/login</code> but as I said, it is not the solution I am looking for. The login page should be accessed from <code>http://localhost:32738/nextcloud/login</code>.</p>
<p>Going to <code>http://127.0.0.1:32738/nextcloud/</code> does work for the initial setup but after that it becomes inaccessible as it always redirects to:</p>
<pre><code>http://127.0.0.1:32738/apps/dashboard/
</code></pre>
<p>and not to:</p>
<pre><code>http://127.0.0.1:32738/nextcloud/apps/dashboard/
</code></pre>
<p>This is my yaml:</p>
<pre><code>#Nextcloud-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
name: nextcloud-server
labels:
app: nextcloud
spec:
replicas: 1
selector:
matchLabels:
pod-label: nextcloud-server-pod
template:
metadata:
labels:
pod-label: nextcloud-server-pod
spec:
containers:
- name: nextcloud
image: nextcloud:22.2.0-apache
env:
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: nextcloud
key: db-name
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: nextcloud
key: db-username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: nextcloud
key: db-password
- name: POSTGRES_HOST
value: nextcloud-database:5432
volumeMounts:
- name: server-storage
mountPath: /var/www/html
subPath: server-data
volumes:
- name: server-storage
persistentVolumeClaim:
claimName: nextcloud
---
#Nextcloud-Serv
apiVersion: v1
kind: Service
metadata:
name: nextcloud-server
labels:
app: nextcloud
spec:
selector:
pod-label: nextcloud-server-pod
ports:
- port: 80
protocol: TCP
name: nextcloud-server
---
#Database-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
name: nextcloud-database
labels:
app: nextcloud
spec:
replicas: 1
selector:
matchLabels:
pod-label: nextcloud-database-pod
template:
metadata:
labels:
pod-label: nextcloud-database-pod
spec:
containers:
- name: postgresql
image: postgres:13.4
env:
- name: POSTGRES_DATABASE
valueFrom:
secretKeyRef:
name: nextcloud
key: db-name
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: nextcloud
key: db-username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: nextcloud
key: db-password
- name: POSTGRES_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: nextcloud
key: db-rootpassword
- name: PGDATA
value: /var/lib/postgresql/data/
volumeMounts:
- name: database-storage
mountPath: /var/lib/postgresql/data/
subPath: data
volumes:
- name: database-storage
persistentVolumeClaim:
claimName: nextcloud
---
#Database-Serv
apiVersion: v1
kind: Service
metadata:
name: nextcloud-database
labels:
app: nextcloud
spec:
selector:
pod-label: nextcloud-database-pod
ports:
- port: 5432
protocol: TCP
name: nextcloud-database
---
#PV
apiVersion: v1
kind: PersistentVolume
metadata:
name: nextcloud-pv
labels:
type: local
spec:
capacity:
storage: 8Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp"
---
#PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nextcloud
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
#Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nextcloud-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- backend:
service:
name: nextcloud-server
port:
number: 80
pathType: Prefix
path: /nextcloud(/.*)
---
#Secret
apiVersion: v1
kind: Secret
metadata:
name: nextcloud
labels:
app: nextcloud
immutable: true
stringData:
db-name: nextcloud
db-username: nextcloud
db-password: changeme
db-rootpassword: longpassword
username: admin
password: changeme
</code></pre>
<p>ingress-nginx was installed with:</p>
<pre><code>helm install nginx ingress-nginx/ingress-nginx
</code></pre>
<p>Please tell me if you want me to supply more information.</p>
| Paul Schuldesz | <p>In your case there is a difference between the exposed URL in the backend service and the specified path in the Ingress rule. That's why you get an error.</p>
<p>To avoid that you can use rewrite rule.</p>
<p>Using that, your ingress paths will be rewritten to the value you provide.
The annotation <code>nginx.ingress.kubernetes.io/rewrite-target: /login</code> will rewrite the URL <code>/nextcloud/login</code> to <code>/login</code> before sending the request to the backend service.</p>
<p>But:</p>
<blockquote>
<p>Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions.</p>
</blockquote>
<p>On <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">this documentation</a> you can find following example:</p>
<pre><code>$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: rewrite
namespace: default
spec:
rules:
- host: rewrite.bar.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /something(/|$)(.*)
' | kubectl create -f -
</code></pre>
<blockquote>
<p>In this ingress definition, any characters captured by <code>(.*)</code> will be assigned to the placeholder <code>$2</code>, which is then used as a parameter in the <code>rewrite-target</code> annotation.</p>
</blockquote>
<p>So in your URL you would still see the wanted <code>/nextcloud/login</code>, but the rewrite will change the path to <code>/login</code> before the request is sent to your backend. I would suggest using one of the following options:</p>
<pre><code>path: /nextcloud(/.*)
</code></pre>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$1
</code></pre>
<p>or</p>
<pre><code>path: /nextcloud/login
</code></pre>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /login
</code></pre>
<p>See also <a href="https://docs.pivotal.io/pks/1-7/nsxt-ingress-rewrite-url.html" rel="nofollow noreferrer">this article</a>.</p>
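<p>Applied to the Nextcloud service from the question, the documented capture-group pattern would look roughly like this (a sketch; note it uses <code>$2</code> as in the example above):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /nextcloud(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: nextcloud-server
                port:
                  number: 80
</code></pre>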
| kkopczak |
<p>Hi I'm new to Kubernetes and helm and have recently tried one tutorial from Pluralsight.</p>
<p>As per the tutorial, my ingress file looks like this -</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}-ingress
spec:
rules:
{{- range .Values.ingress.hosts }}
- host: {{ $.Release.Name }}.{{ .host.domain }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ $.Release.Name}}-{{ .host.chart }}
port:
number: 80
{{- end }}
</code></pre>
<p>When I try to connect to minikube ip, it gives me this error -</p>
<pre><code>404 Not Found
----
nginx
</code></pre>
<p>I've installed the helm chart using <code>helm install dev guestbook</code>.
These are the logs of my pods and ingress.
Command - <code>kubectl get ingress</code>:</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
dev-guestbook-ingress <none> dev.frontend.minikube.local,dev.backend.minikube.local 192.168.99.104 80 136m
</code></pre>
<p>Command - <code>kubectl describe ingress dev-guestbook-ingress</code></p>
<pre><code>Name: dev-guestbook-ingress
Namespace: default
Address: 192.168.99.104
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
dev.frontend.minikube.local
/* dev-frontend:80 (172.17.0.3:4200)
dev.backend.minikube.local
/* dev-backend:80 (172.17.0.5:3000)
Annotations: meta.helm.sh/release-name: dev
meta.helm.sh/release-namespace: default
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 32m (x2 over 33m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>These are the logs of my ingress pod after using the command:</p>
<pre><code>kubectl logs ingress-nginx-controller-69bdbc4d57-jnswj --namespace=ingress-nginx
</code></pre>
<pre><code>-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.0.0-beta.1
Build: a091b01f436b4ab4f3d04264df93962432a02450
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.20.1
-------------------------------------------------------------------------------
W0928 13:00:25.824832 8 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0928 13:00:25.825340 8 main.go:221] "Creating API client" host="https://10.96.0.1:443"
W0928 13:00:56.974273 8 main.go:262] Initial connection to the Kubernetes API server was retried 1 times.
I0928 13:00:56.974428 8 main.go:265] "Running in Kubernetes cluster" major="1" minor="22" git="v1.22.2" state="clean" commit="8b5a19147530eaac9476b0ab82980b4088bbc1b2" platform="linux/amd64"
I0928 13:00:57.378459 8 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0928 13:00:57.401933 8 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0928 13:00:57.426834 8 nginx.go:253] "Starting NGINX Ingress controller"
I0928 13:00:57.451326 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"3fd79ae9-35c2-4147-aa8c-b8bdb94a46e7", APIVersion:"v1", ResourceVersion:"787", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0928 13:00:57.451353 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"dc2450e0-6542-4b42-81e3-0b68598bdb8d", APIVersion:"v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0928 13:00:57.453791 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"c22ae05e-ac70-47da-9573-087394876f41", APIVersion:"v1", ResourceVersion:"786", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0928 13:00:58.543719 8 store.go:361] "Ignoring ingress because of error while validating ingress class" ingress="default/dev-guestbook-ingress" error="ingress does not contain a valid IngressClass"
I0928 13:00:58.640831 8 nginx.go:295] "Starting NGINX process"
I0928 13:00:58.640960 8 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I0928 13:00:58.641293 8 nginx.go:315] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0928 13:00:58.641425 8 controller.go:150] "Configuration changes detected, backend reload required"
I0928 13:00:58.657350 8 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader
I0928 13:00:58.657383 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-69bdbc4d57-jnswj"
I0928 13:00:58.668104 8 status.go:204] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-69bdbc4d57-jnswj" node="minikube"
I0928 13:00:59.289049 8 controller.go:167] "Backend successfully reloaded"
I0928 13:00:59.289288 8 controller.go:178] "Initial sync, sleeping for 1 second"
I0928 13:00:59.289389 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69bdbc4d57-jnswj", UID:"84ad1680-72e1-4733-a1e7-9c314527d810", APIVersion:"v1", ResourceVersion:"2634", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0928 13:06:22.563182 8 main.go:101] "successfully validated configuration, accepting" ingress="dev-guestbook-ingress/default"
I0928 13:06:22.568356 8 store.go:396] "creating ingress" ingress="default/dev-guestbook-ingress" ingressclass="nginx"
I0928 13:06:22.568860 8 controller.go:150] "Configuration changes detected, backend reload required"
I0928 13:06:22.569014 8 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"dev-guestbook-ingress", UID:"519a99af-aec5-4316-ae38-d1bcc1127c59", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3001", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0928 13:06:22.639189 8 controller.go:167] "Backend successfully reloaded"
I0928 13:06:22.639649 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69bdbc4d57-jnswj", UID:"84ad1680-72e1-4733-a1e7-9c314527d810", APIVersion:"v1", ResourceVersion:"2634", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0928 13:06:58.702705 8 status.go:284] "updating Ingress status" namespace="default" ingress="dev-guestbook-ingress" currentValue=[] newValue=[{IP:192.168.99.104 Hostname: Ports:[]}]
I0928 13:06:58.708447 8 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"dev-guestbook-ingress", UID:"519a99af-aec5-4316-ae38-d1bcc1127c59", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3034", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
192.168.99.1 - - [28/Sep/2021:13:07:19 +0000] "GET / HTTP/1.1" 404 548 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36" 444 0.001 [upstream-default-backend] [] 127.0.0.1:8181 548 0.000 404 0fa63d31eee2d30a43ec3e4a810692e2
192.168.99.1 - - [28/Sep/2021:13:07:19 +0000] "GET /favicon.ico HTTP/1.1" 404 548 "http://dev.backend.minikube.local/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36" 399 0.000 [upstream-default-backend] [] 127.0.0.1:8181 548 0.000 404 7578f34eb36a0624571b59ace7874748
192.168.99.1 - - [28/Sep/2021:13:27:37 +0000] "HEAD / HTTP/1.1" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36 Edg/94.0.992.31" 287 0.001 [upstream-default-backend] [] 127.0.0.1:8181 0 0.001 404 de62ff190e82ba05a6614dd727f1170a
192.168.99.1 - - [28/Sep/2021:13:27:40 +0000] "GET / HTTP/1.1" 404 548 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36 Edg/94.0.992.31" 458 0.001 [upstream-default-backend] [] 127.0.0.1:8181 548 0.001 404 a4ebeb4dfadea97778a6394e218e99b0
192.168.99.1 - - [28/Sep/2021:13:27:40 +0000] "GET /favicon.ico HTTP/1.1" 404 548 "http://dev.frontend.minikube.local/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36 Edg/94.0.992.31" 414 0.001 [upstream-default-backend] [] 127.0.0.1:8181 548 0.000 404 e39c1bbc38591ec1e33ba0ca074e8127
I0928 13:32:41.441620 8 main.go:101] "successfully validated configuration, accepting" ingress="dev-guestbook-ingress/default"
I0928 13:32:41.445673 8 store.go:399] "removing ingress because of unknown ingressclass" ingress="default/dev-guestbook-ingress"
I0928 13:32:41.445785 8 controller.go:150] "Configuration changes detected, backend reload required"
I0928 13:32:41.565590 8 controller.go:167] "Backend successfully reloaded"
I0928 13:32:41.566233 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69bdbc4d57-jnswj", UID:"84ad1680-72e1-4733-a1e7-9c314527d810", APIVersion:"v1", ResourceVersion:"2634", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
</code></pre>
<p>My Kubernetes version is:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>and my minikube version is -</p>
<pre><code>minikube version: v1.23.0
commit: 5931455374810b1bbeb222a9713ae2c756daee10
</code></pre>
<p>I did add the minikube IP with the hosts in /etc/hosts, so that problem is not there.</p>
| axel | <p>Try adding the snippet below to fix that default backend error and see if that resolves the issue. Also, it would be worth checking whether your Ingress IP and the minikube IP are the same.</p>
<pre><code>spec:
defaultBackend:
service:
name: {{ $.Release.Name}}-{{ .host.chart }}
port:
number: 80
</code></pre>
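<p>For the IP comparison, a quick way to check (a small sketch):</p>
<pre><code># IP assigned to the Ingress resource
kubectl get ingress dev-guestbook-ingress

# IP of the Minikube VM - the two should match, and this is the IP to map in /etc/hosts
minikube ip
</code></pre>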
| Abhinab Padhi |
<p>When I run skaffold this is the error I get. Skaffold generates tags, checks the cache, starts the deploy then it cleans up.</p>
<pre><code>- stderr: "error: error parsing C: ~\k8s\\ingress-srv.yaml: error converting YAML to JSON: yaml: line 20: mapping values are not allowed in this context
\n"
- cause: exit status 1
</code></pre>
<p>Docker creates a container for the server. Here is the ingress server yaml file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: northernherpgeckosales.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: front-end-srv
port:
number: 3000
</code></pre>
<p>For good measure here is the skaffold file:</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: giantgecko/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
- image: giantgecko/front-end
context: front-end
docker:
dockerfile: Dockerfile
sync:
manual:
- src: '**/*.js'
dest: .
</code></pre>
| Jonathan Lang | <p>Take a closer look at your Ingress definition file (starting from line 19):</p>
<pre class="lang-yaml prettyprint-override"><code>- path: /?(.*)
pathType: Prefix
backend:
service:
name: front-end-srv
port:
number: 3000
</code></pre>
<p>You have unnecessary indents from line 20 (<code>pathType: Prefix</code>) until the end of the file. Just format your YAML file properly. For the previous <code>path: /api/users/?(.*)</code> everything is alright - no unnecessary indents.</p>
<p>Final YAML looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: northernherpgeckosales.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: front-end-srv
port:
number: 3000
</code></pre>
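<p>As a quick sanity check before re-running Skaffold, the manifest can also be validated on its own (a sketch; adjust the path to wherever the file actually lives):</p>
<pre><code>kubectl apply --dry-run=client -f infra/k8s/ingress-srv.yaml
</code></pre>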
| Mikolaj S. |
<p>I am trying to follow <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#cluster-mode" rel="nofollow noreferrer">this Spark document</a> to use the cluster mode.</p>
<p>I deployed Spark in a local Kubernetes at namespace <code>hm-spark</code> by</p>
<pre class="lang-bash prettyprint-override"><code>helm upgrade \
spark \
spark \
--install \
--repo=https://charts.bitnami.com/bitnami \
--namespace=hm-spark \
--create-namespace \
--values=my-values.yaml
</code></pre>
<p><strong>my-values.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>image:
registry: docker.io
repository: bitnami/spark
tag: 3.4.0-debian-11-r1
</code></pre>
<p>I got Kubernetes IP <code>https://127.0.0.1:6443</code> by</p>
<pre><code>➜ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
</code></pre>
<p>Then when I submit the Spark application on my macOS by:</p>
<pre><code>spark-submit \
--master=k8s://https://127.0.0.1:6443 \
--deploy-mode=cluster \
--name=spark-pi \
--class=org.apache.spark.examples.SparkPi \
--conf=spark.kubernetes.namespace=hm-spark \
--conf=spark.kubernetes.container.image=docker.io/bitnami/spark:3.4.0-debian-11-r1 \
local:///opt/bitnami/spark/examples/jars/spark-examples_2.12-3.4.0.jar
</code></pre>
<p>The <code>spark-pi</code> pod got created</p>
<pre><code>➜ kubectl get pods --namespace hm-spark
NAME READY STATUS RESTARTS AGE
spark-worker-0 1/1 Running 0 82m
spark-worker-1 1/1 Running 0 82m
spark-master-0 1/1 Running 0 82m
spark-pi-ec6d2e886f483472-driver 0/1 Error 0 9m32s
</code></pre>
<p>However, it failed with error:</p>
<pre class="lang-bash prettyprint-override"><code>spark 00:50:00.00
spark 00:50:00.01 Welcome to the Bitnami spark container
spark 00:50:00.01 Subscribe to project updates by watching https://github.com/bitnami/containers
spark 00:50:00.01 Submit issues and feature requests at https://github.com/bitnami/containers/issues
spark 00:50:00.01
23/05/31 00:50:02 INFO SparkContext: Running Spark version 3.4.0
23/05/31 00:50:02 INFO ResourceUtils: ==============================================================
23/05/31 00:50:02 INFO ResourceUtils: No custom resources configured for spark.driver.
23/05/31 00:50:02 INFO ResourceUtils: ==============================================================
23/05/31 00:50:02 INFO SparkContext: Submitted application: Spark Pi
23/05/31 00:50:02 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/05/31 00:50:02 INFO ResourceProfile: Limiting resource is cpu
23/05/31 00:50:02 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/05/31 00:50:02 INFO SecurityManager: Changing view acls to: spark
23/05/31 00:50:02 INFO SecurityManager: Changing modify acls to: spark
23/05/31 00:50:02 INFO SecurityManager: Changing view acls groups to:
23/05/31 00:50:02 INFO SecurityManager: Changing modify acls groups to:
23/05/31 00:50:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: spark; groups with view permissions: EMPTY; users with modify permissions: spark; groups with modify permissions: EMPTY
23/05/31 00:50:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/05/31 00:50:02 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
23/05/31 00:50:02 INFO SparkEnv: Registering MapOutputTracker
23/05/31 00:50:02 INFO SparkEnv: Registering BlockManagerMaster
23/05/31 00:50:02 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/05/31 00:50:02 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/05/31 00:50:02 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/05/31 00:50:02 INFO DiskBlockManager: Created local directory at /var/data/spark-a1bd571d-599f-48d6-b7a9-06d35fb82cdb/blockmgr-5ee7264e-b52a-48bc-a1bf-f8f3f6a514aa
23/05/31 00:50:02 INFO MemoryStore: MemoryStore started with capacity 413.9 MiB
23/05/31 00:50:02 INFO SparkEnv: Registering OutputCommitCoordinator
23/05/31 00:50:02 INFO JettyUtils: Start Jetty 0.0.0.0:4040 for SparkUI
23/05/31 00:50:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/05/31 00:50:02 INFO SparkContext: Added JAR local:///opt/bitnami/spark/examples/jars/spark-examples_2.12-3.4.0.jar at file:/opt/bitnami/spark/examples/jars/spark-examples_2.12-3.4.0.jar with timestamp 1685494202182
23/05/31 00:50:02 WARN SparkContext: The JAR local:///opt/bitnami/spark/examples/jars/spark-examples_2.12-3.4.0.jar at file:/opt/bitnami/spark/examples/jars/spark-examples_2.12-3.4.0.jar has been added already. Overwriting of added jar is not supported in the current version.
23/05/31 00:50:02 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
23/05/31 00:50:03 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-master:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:322)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:102)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:110)
at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anon$1.run(StandaloneAppClient.scala:108)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.io.IOException: Failed to connect to spark-master/<unresolved>:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:284)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:214)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:226)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
... 4 more
Caused by: java.net.UnknownHostException: spark-master
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:801)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1533)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1385)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306)
at java.base/java.net.InetAddress.getByName(InetAddress.java:1256)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:156)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:153)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:569)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:153)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:41)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:61)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:53)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:31)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:106)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:206)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:990)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:516)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
# ...
23/05/31 00:51:02 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
23/05/31 00:51:02 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
23/05/31 00:51:02 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079.
23/05/31 00:51:02 INFO NettyBlockTransferService: Server created on spark-pi-ec6d2e886f483472-driver-svc.hm-spark.svc:7079
23/05/31 00:51:02 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/05/31 00:51:02 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, spark-pi-ec6d2e886f483472-driver-svc.hm-spark.svc, 7079, None)
23/05/31 00:51:02 INFO BlockManagerMasterEndpoint: Registering block manager spark-pi-ec6d2e886f483472-driver-svc.hm-spark.svc:7079 with 413.9 MiB RAM, BlockManagerId(driver, spark-pi-ec6d2e886f483472-driver-svc.hm-spark.svc, 7079, None)
23/05/31 00:51:02 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, spark-pi-ec6d2e886f483472-driver-svc.hm-spark.svc, 7079, None)
23/05/31 00:51:02 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, spark-pi-ec6d2e886f483472-driver-svc.hm-spark.svc, 7079, None)
23/05/31 00:51:03 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
23/05/31 00:51:03 INFO SparkContext: Starting job: reduce at SparkPi.scala:38
23/05/31 00:51:03 INFO SparkContext: SparkContext is stopping with exitCode 0.
23/05/31 00:51:03 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 2 output partitions
23/05/31 00:51:03 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
23/05/31 00:51:03 INFO DAGScheduler: Parents of final stage: List()
23/05/31 00:51:03 INFO DAGScheduler: Missing parents: List()
23/05/31 00:51:03 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
23/05/31 00:51:03 INFO SparkUI: Stopped Spark web UI at http://spark-pi-ec6d2e886f483472-driver-svc.hm-spark.svc:4040
23/05/31 00:51:03 INFO TaskSchedulerImpl: Cancelling stage 0
23/05/31 00:51:03 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled
23/05/31 00:51:03 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) failed in 0.028 s due to Job aborted due to stage failure: Task serialization failed: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:
org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:1020)
org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
org.apache.spark.examples.SparkPi.main(SparkPi.scala)
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.base/java.lang.reflect.Method.invoke(Method.java:568)
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1020)
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:192)
org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:215)
org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1111)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1120)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
</code></pre>
<p>And here are my Kubernetes services:</p>
<pre class="lang-bash prettyprint-override"><code>➜ kubectl get services --namespace hm-spark
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
spark-headless ClusterIP None <none> <none> 82m
spark-master-svc ClusterIP 10.43.164.158 <none> 7077/TCP,80/TCP 82m
spark-pi-ec6d2e886f483472-driver-svc ClusterIP None <none> 7078/TCP,7079/TCP,4040/TCP 10m
</code></pre>
<p>Any idea? Thanks!</p>
| Hongbo Miao | <p>I have no specific experience with these Bitnami helm charts, but it seems to me like your application is trying to use both a:</p>
<ul>
<li><a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#cluster-mode" rel="nofollow noreferrer">Kubernetes cluster manager</a> (judging from the URL in your <code>--master</code> config, starting with <code>k8s://</code>)</li>
<li>and a <a href="https://spark.apache.org/docs/latest/spark-standalone.html" rel="nofollow noreferrer">Standalone cluster manager</a> (judging from the error messages, port 7077 and the <code>spark://</code> part of the master URL)</li>
</ul>
<p>That seems like a bit of a mix-up: you should choose one of the two. After having a look at some docs around those Bitnami helm charts, I found <a href="https://github.com/bitnami/charts/tree/main/bitnami/spark#submit-an-application" rel="nofollow noreferrer">this example</a>:</p>
<pre class="lang-bash prettyprint-override"><code>$ ./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--conf spark.kubernetes.container.image=bitnami/spark:3 \
--master k8s://https://k8s-apiserver-host:k8s-apiserver-port \
--conf spark.kubernetes.driverEnv.SPARK_MASTER_URL=spark://spark-master-svc:spark-master-port \
--deploy-mode cluster \
./examples/jars/spark-examples_2.12-3.2.0.jar 1000
</code></pre>
<p>Again, I'm not entirely familiar with these helm charts but it seems like you might be missing a critical configuration bit concerning which master will finally be used, namely:</p>
<pre><code>--conf spark.kubernetes.driverEnv.SPARK_MASTER_URL=spark://spark-master-svc:spark-master-port
</code></pre>
| Koedlt |
<p>I'm learning k8s & nodePort</p>
<ol>
<li>made 1 master node & 1 worker node at AWS EC2</li>
<li>installed k8s, kubeadm</li>
<li>Use flannel CNI</li>
<li>deployed a nodejs app with port 3000</li>
<li>attached a nodePort to the nodejs app, so 3000:31000</li>
<li>I expected to access that nodejs app from outside EC2 with the URL "[master-node-pub-ip]:31000"</li>
<li>But I can't access "[master-node-pub-ip]:31000", while I can access "[worker-node-pub-ip]:31000"</li>
</ol>
<p>I don't know what I did wrong...</p>
<p>How can I access that nodejs app via the master node's IP?</p>
<p>Thank you.</p>
| mecha kucha | <p>Try to install an ingress controller</p>
<p>I prefer the helm way:</p>
<pre class="lang-sh prettyprint-override"><code>helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm install nginx-ingress nginx-stable/nginx-ingress --set rbac.create=true
</code></pre>
<p>Then you can access your service through the ingress controller on the node's public IP.</p>
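<p>Once the controller is running, you also need an Ingress resource that routes traffic to your nodejs Service. A minimal sketch (the service name <code>nodejs-app</code> is a placeholder - adjust it to your actual Service, and the ingress class name assumes the default from that chart):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nodejs-app   # placeholder - your nodejs Service
            port:
              number: 3000
</code></pre>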
| nagyzekkyandras |
<p>I have this in a <code>selenium-hub-service.yml</code> file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: selenium-srv
spec:
selector:
app: selenium-hub
ports:
- port: 4444
nodePort: 30001
type: NodePort
sessionAffinity: None
</code></pre>
<p>When I do <code>kubectl describe service</code> on terminal, I get the endpoint of kubernetes service as <code>192.168.49.2:8443</code>. I then take that and point the browser to <code>192.168.49.2:30001</code> but browser is not able to reach that endpoint. I was expecting to reach selenium hub.</p>
<p>When I do <code>minikube service selenium-srv --url</code>, which gives me <code>http://127.0.0.1:56498</code> and point browser to it, I can reach the hub.</p>
<p>My question is: why am I not able to reach through <code>nodePort</code>?</p>
<p>I would like to do it through <code>nodePort</code> way because I know the port beforehand and if kubernetes service end point remains constant then it may be easy to point my tests to a known endpoint when I integrate it with azure pipeline.</p>
<p>EDIT: output of <code>kubectl get service</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d
selenium-srv NodePort 10.96.34.117 <none> 4444:30001/TCP 2d2h
</code></pre>
| user1207289 | <p>Posted community wiki based on <a href="https://github.com/kubernetes/minikube/issues/11193" rel="nofollow noreferrer">this Github topic</a>. Feel free to expand it.</p>
<p>The information below assumes that you are using <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/" rel="nofollow noreferrer">the default driver docker</a>.</p>
<hr />
<p>Minikube on macOS behaves a bit differently than on Linux. While on Linux, you have special interfaces used for docker and for connecting to the minikube node port, like this one:</p>
<pre><code>3: docker0:
...
inet 172.17.0.1/16
</code></pre>
<p>And this one:</p>
<pre><code>4: br-42319e616ec5:
...
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-42319e616ec5
</code></pre>
<p>There is no such solution implemented on macOS. <a href="https://github.com/kubernetes/minikube/issues/11193#issuecomment-826331511" rel="nofollow noreferrer">Check this</a>:</p>
<blockquote>
<p>This is a known issue, Docker Desktop networking doesn't support ports. You will have to use minikube tunnel.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/minikube/issues/11193#issuecomment-826708118" rel="nofollow noreferrer">Also</a>:</p>
<blockquote>
<p>there is no bridge0 on Macos, and it makes container IP unreachable from host.</p>
</blockquote>
<p>That means you can't connect to your service using IP address <code>192.168.49.2</code>.</p>
<p>Check also this article: <a href="https://docs.docker.com/desktop/mac/networking/#known-limitations-use-cases-and-workarounds" rel="nofollow noreferrer">Known limitations, use cases, and workarounds - Docker Desktop for Mac</a>:</p>
<blockquote>
<p><strong>There is no docker0 bridge on macOS</strong>
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a <code>docker0</code> interface on the host. This interface is actually within the virtual machine.</p>
</blockquote>
<blockquote>
<p><strong>I cannot ping my containers</strong>
Docker Desktop for Mac can’t route traffic to containers.</p>
</blockquote>
<blockquote>
<p><strong>Per-container IP addressing is not possible</strong>
The docker (Linux) bridge network is not reachable from the macOS host.</p>
</blockquote>
<p>There are few ways to <a href="https://github.com/kubernetes/minikube/issues/11193#issuecomment-826708118" rel="nofollow noreferrer">setup minikube to use NodePort at the localhost address on Mac, like this one</a>:</p>
<pre><code>minikube start --driver=docker --extra-config=apiserver.service-node-port-range=32760-32767 --ports=127.0.0.1:32760-32767:32760-32767`
</code></pre>
<p>You can also use <code>minikube service</code> command which will return a URL to connect to a service.</p>
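<p>For example, with the service from your question (a quick sketch):</p>
<pre><code>minikube service selenium-srv --url
# then open the printed http://127.0.0.1:&lt;port&gt; address in the browser
</code></pre>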
| Mikolaj S. |
<p>I installed Microk8s on a local physical Ubuntu 20-04 server (without a GUI):</p>
<pre><code> microk8s status --wait-ready
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
helm # Helm 2 - the package manager for Kubernetes
disabled:
</code></pre>
<p>When I try to install something with helm it says:</p>
<pre><code>Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
</code></pre>
<p>What configuration has to be done to use the MicroK8s Kubernetes cluster for helm installations?
Do I have to enable more MicroK8s services for that?
Can I run a Kubernetes cluster on one or two local physical Ubuntu servers with MicroK8s?</p>
| SQL-Neuling | <p>Searching for a solution to your issue, I have found <a href="https://stackoverflow.com/questions/45914420/why-tiller-connect-to-localhost-8080-for-kubernetes-api/59547001#59547001">this one</a>. Try to run:</p>
<pre class="lang-yaml prettyprint-override"><code>microk8s kubectl config view --raw > ~/.kube/config
</code></pre>
<hr />
<p>Helm interacts directly with the Kubernetes API server so it needs to be able to connect to a Kubernetes cluster. Helms reads the same configuration files used by <code>kubectl</code> to do it automatically.</p>
<p>Based on <em>Learning Helm</em> by O'Reilly Media:</p>
<blockquote>
<p>Helm will try to find this information by reading the environment variable $KUBECONFIG. If that is not set, it will look in the same default locations that <code>kubectl</code> looks in.</p>
</blockquote>
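<p>As a quick sanity check (a sketch - the commands below only verify connectivity, nothing gets installed), you can point Helm explicitly at the exported config and query the cluster:</p>
<pre class="lang-sh prettyprint-override"><code>export KUBECONFIG=~/.kube/config
helm version
helm list --all-namespaces
</code></pre>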
<hr />
<p>See also:</p>
<ul>
<li><a href="https://github.com/helm/helm/issues/3460" rel="nofollow noreferrer">This discussion about similar issue on Github</a></li>
<li><a href="https://stackoverflow.com/questions/63066604/error-kubernetes-cluster-unreachable-get-http-localhost8080-versiontimeou">This similar issue</a></li>
</ul>
| kkopczak |
<p>I'm very new to Kubernetes, and I'm tasked to continue a project which was started by someone else. I just want to understand what the following code does to Kubernetes, especially the Corefile part. Thank you.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . 8.8.8.8
cache 30
loop
reload
loadbalance
}
</code></pre>
| sekicdx | <p>The Corefile part is the configuration for your cluster's CoreDNS deployment. It's wrapped in a <code>ConfigMap</code> so that you can manage it like any other Kubernetes resource.</p>
<p>CoreDNS is a DNS server. Your Kubernetes cluster needs a local DNS server so that your pods, services, etc. can discover each other by name.</p>
<p>Most importantly: any DNS request that can't be resolved inside the cluster will be forwarded to the IP you specify in "forward . [remote DNS]" (8.8.8.8 in your Corefile).</p>
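<p>If you want to see this in action, a quick sketch (the <code>busybox:1.28</code> image is just a common choice for DNS debugging) is to run a throwaway pod and resolve a Service name through CoreDNS:</p>
<pre><code>kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
  -- nslookup kubernetes.default.svc.cluster.local
</code></pre>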
<p>See <a href="https://coredns.io/manual/configuration/" rel="nofollow noreferrer">https://coredns.io/manual/configuration/</a></p>
| Lukas yo |
<p>I'm confused about the relationship between two parameters: <code>requests</code> and the cgroup's <code>cpu.shares</code>, which is updated once the Pod is deployed. According to the reading I've done so far, <code>cpu.shares</code> reflects some kind of priority when trying to get the chance to consume the CPU, and it's a relative value.</p>
<p>So my question is: why does Kubernetes consider the <code>request</code> value of the CPU as an absolute value when scheduling? When it comes to the CPU, processes get a time slice to execute based on their priorities (according to the CFS mechanism). To my knowledge, there's no such thing as handing out exact amounts of CPUs (1 CPU, 2 CPUs etc.). So, if the <code>cpu.share</code> value is used to prioritize the tasks, why does Kubernetes consider the exact request value (e.g. 1500m, 200m) to find a node?</p>
<p>Please correct me if I've got this wrong. Thanks !!</p>
| BLasan | <p>Answering your questions from the main question and comments:</p>
<blockquote>
<p>So my question why kubernetes considers the <code>request</code> value of the CPU as an absolute value when scheduling?</p>
</blockquote>
<blockquote>
<p>To my knowledge, there's no such thing called giving such amounts of CPUs (1CPU, 2CPUs etc.). So, if the <code>cpu.share</code> value is considered to prioritize the tasks, why kubernetes consider the exact request value (Eg: 1500m, 200m) to find out a node?</p>
</blockquote>
<p>It's because decimal CPU values from the requests <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu" rel="nofollow noreferrer">are always converted to values in millicores, e.g. 0.1 is equal to 100m, which can be read as "one hundred millicpu" or "one hundred millicores"</a>. Those units are specific to Kubernetes:</p>
<blockquote>
<p>Fractional requests are allowed. A Container with <code>spec.containers[].resources.requests.cpu</code> of <code>0.5</code> is guaranteed half as much CPU as one that asks for 1 CPU. The expression <code>0.1</code> is equivalent to the expression <code>100m</code>, which can be read as "one hundred millicpu". Some people say "one hundred millicores", and this is understood to mean the same thing. A request with a decimal point, like <code>0.1</code>, is converted to <code>100m</code> by the API, and precision finer than <code>1m</code> is not allowed. For this reason, the form <code>100m</code> might be preferred.</p>
</blockquote>
<blockquote>
<p>CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.</p>
</blockquote>
<p>Based on the above one, remember that you can specify to use let's say 1.5 CPU of the node by specifying <code>cpu: 1.5</code> or <code>cpu: 1500m</code>.</p>
<blockquote>
<p>Just wanna know lowering the <code>cpu.share</code> value in cgroups (which is modified by k8s after the deployment) affects to the cpu power consume by the process. For an instance, assume that A, B containers have 1024, 2048 shares allocated. So the available resources will be split into 1:2 ratio. So would it be the same as if we configure cpu.share as 10, 20 for two containers. Still the ratio is 1:2</p>
</blockquote>
<p>Let's make it clear - it's true that the ratio is the same, but the values are different. 1024 and 2048 in <code>cpu.shares</code> means <code>cpu: 1000m</code> and <code>cpu: 2000m</code> defined in Kubernetes resources, while 10 and 20 means <code>cpu: 10m</code> and <code>cpu: 20m</code>.</p>
<blockquote>
<p>Let's say the cluster nodes are based on Linux OS. So, how kubernetes ensure that request value is given to a container? Ultimately, OS will use configurations available in a cgroup to allocate resource, right? It modifies the <code>cpu.shares</code> value of the cgroup. So my question is, which files is modified by k8s to tell operating system to give <code>100m</code> or <code>200m</code> to a container?</p>
</blockquote>
<p>Yes, your thinking is correct. Let me explain in more detail.</p>
<p>Generally on the Kubernetes node <a href="https://medium.com/omio-engineering/cpu-limits-and-aggressive-throttling-in-kubernetes-c5b20bd8a718" rel="nofollow noreferrer">there are three cgroups under the root cgroup</a>, named as <em>slices</em>:</p>
<blockquote>
<p>The k8s uses <code>cpu.share</code> file to allocate the CPU resources. In this case, the root cgroup inherits 4096 CPU shares, which are 100% of available CPU power(1 core = 1024; this is fixed value). The root cgroup allocate its share proportionally based on children’s <code>cpu.share</code> and they do the same with their children and so on. In typical Kubernetes nodes, there are three cgroup under the root cgroup, namely <code>system.slice</code>, <code>user.slice</code>, and <code>kubepods</code>. The first two are used to allocate the resource for critical system workloads and non-k8s user space programs. The last one, <code>kubepods</code> is created by k8s to allocate the resource to pods.</p>
</blockquote>
<p>To check which files are modified we need to go to the <code>/sys/fs/cgroup/cpu</code> directory. <a href="https://gist.github.com/mcastelino/b8ce9a70b00ee56036dadd70ded53e9f#cpu-resource-management" rel="nofollow noreferrer">Here we can find a directory called <code>kubepods</code></a> (which is one of the above-mentioned <em>slices</em>) where all the <code>cpu.shares</code> files for pods are located. In the <code>kubepods</code> directory we can find two other folders - <code>besteffort</code> and <code>burstable</code>. It is worth mentioning here that Kubernetes has three <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes" rel="nofollow noreferrer">QoS classes</a>:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed" rel="nofollow noreferrer">Guaranteed</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-burstable" rel="nofollow noreferrer">Burstable</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-besteffort" rel="nofollow noreferrer">BestEffort</a></li>
</ul>
<p>Each pod has an assigned QoS class and, depending on which class it is, the pod is located in the corresponding directory (except Guaranteed; a pod with this class is created directly in the <code>kubepods</code> directory).</p>
<p>For example, I'm creating a pod with following definition:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
spec:
selector:
matchLabels:
app: test-deployment
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: test-deployment
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
resources:
requests:
cpu: 300m
- name: busybox
image: busybox
args:
- sleep
- "999999"
resources:
requests:
cpu: 150m
</code></pre>
<p>Based on earlier mentioned definitions, this pod will have assigned Qos class <code>Burstable</code>, thus it will be created in the <code>/sys/fs/cgroup/cpu/kubepods/burstable</code> directory.</p>
<p>Now we can check <code>cpu.shares</code> set for this pod:</p>
<pre><code>user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90 $ cat cpu.shares
460
</code></pre>
<p>It is correct, as one container requests 300m and the second one 150m, and the value is <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">calculated by multiplying by 1024</a> shares per CPU (0.45 * 1024 ≈ 460). For each container we have sub-directories as well:</p>
<pre><code>user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90/fa6194cbda0ccd0b1dc77793bfbff608064aa576a5a83a2f1c5c741de8cf019a $ cat cpu.shares
153
user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90/d5ba592186874637d703544ceb6f270939733f6292e1fea7435dd55b6f3f1829 $ cat cpu.shares
307
</code></pre>
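<p>As a side note, if you want to quickly check which QoS class (and therefore which cgroup sub-directory) a given pod got, a simple sketch is:</p>
<pre><code>kubectl get pod &lt;pod-name&gt; -o jsonpath='{.status.qosClass}'
</code></pre>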
<p>If you want to read more about Kubernetes CPU management, I'd recommend reading the following:</p>
<ul>
<li><a href="https://medium.com/omio-engineering/cpu-limits-and-aggressive-throttling-in-kubernetes-c5b20bd8a718" rel="nofollow noreferrer">CPU limits and aggressive throttling in Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/" rel="nofollow noreferrer">Control CPU Management Policies on the Node</a></li>
<li><a href="https://medium.com/@kkwriting/kubernetes-resource-limits-and-kernel-cgroups-337625bab87d" rel="nofollow noreferrer">Kubernetes resource limits and kernel cgroups</a></li>
<li><a href="https://www.infoq.com/presentations/evolve-kubernetes-resource-manager/" rel="nofollow noreferrer">How to Evolve Kubernetes Resource Management Model</a></li>
</ul>
| Mikolaj S. |
<p>Assuming I have set <code>resource.limits.ephemeral-storage</code> for containers in a Kubernetes cluster (using Docker), and the following Docker daemon.json logging configuration on the worker nodes:</p>
<pre><code>{
"log-driver": "json-file",
"log-opts": {
"max-size": "100m",
"max-file": "10",
}
}
</code></pre>
<p>My understanding is that all log files (even the rotated log files) will count towards this ephemeral storage limit. This means that to determine the value for <code>resource.limits.ephemeral-storage</code>, I have to consider the maximum allowed log size (here 10*100MB) to the calculation.</p>
<p>Is there a way to "exclude" log files from counting towards the container's ephemeral-storage limit?</p>
<p>Since log handling is done "outside" of Kubernetes, I want to avoid that the resource limits for Kubernetes workloads depend on the Docker log configuration. Otherwise any change to the rotation settings (e.g. increase to 10*200MB) could cause pods to be evicted, if one would forget to adjust the limit for each and every container.</p>
| hessfr | <p>Based on the function <a href="https://github.com/kubernetes/kubernetes/blob/d88fadbd65c5e8bde22630d251766a634c7613b0/pkg/kubelet/stats/helper.go#L344" rel="nofollow noreferrer">calcEphemeralStorage</a> from <a href="https://github.com/kubernetes/kubernetes/tree/d88fadbd65c5e8bde22630d251766a634c7613b0" rel="nofollow noreferrer">release 1.17.16 source code</a>, if you want to exclude logs from calculation you can comment or remove those lines and rebuild kubelet:</p>
<pre><code>if podLogStats != nil {
result.UsedBytes = addUsage(result.UsedBytes, podLogStats.UsedBytes)
result.InodesUsed = addUsage(result.InodesUsed, podLogStats.InodesUsed)
result.Time = maxUpdateTime(&result.Time, &podLogStats.Time)
}
</code></pre>
<p>This part of the code is responsible for counting the ephemeral storage usage of logs. Removing it may also require adjusting some test files which expect the log amount to be included in the calculation.
The overall pod storage statistics are assembled in <a href="https://github.com/kubernetes/kubernetes/blob/ebcb4a2d88c83096e6068aa56e9a5281976e1fec/pkg/kubelet/stats/cri_stats_provider.go#L392" rel="nofollow noreferrer">this function</a>:</p>
<pre><code>func (p *criStatsProvider) makePodStorageStats(s *statsapi.PodStats, rootFsInfo *cadvisorapiv2.FsInfo) {
podNs := s.PodRef.Namespace
podName := s.PodRef.Name
podUID := types.UID(s.PodRef.UID)
vstats, found := p.resourceAnalyzer.GetPodVolumeStats(podUID)
if !found {
return
}
logStats, err := p.hostStatsProvider.getPodLogStats(podNs, podName, podUID, rootFsInfo)
if err != nil {
klog.ErrorS(err, "Unable to fetch pod log stats", "pod", klog.KRef(podNs, podName))
// If people do in-place upgrade, there might be pods still using
// the old log path. For those pods, no pod log stats is returned.
// We should continue generating other stats in that case.
// calcEphemeralStorage tolerants logStats == nil.
}
etcHostsStats, err := p.hostStatsProvider.getPodEtcHostsStats(podUID, rootFsInfo)
if err != nil {
klog.ErrorS(err, "Unable to fetch pod etc hosts stats", "pod", klog.KRef(podNs, podName))
}
ephemeralStats := make([]statsapi.VolumeStats, len(vstats.EphemeralVolumes))
copy(ephemeralStats, vstats.EphemeralVolumes)
s.VolumeStats = append(append([]statsapi.VolumeStats{}, vstats.EphemeralVolumes...), vstats.PersistentVolumes...)
s.EphemeralStorage = calcEphemeralStorage(s.Containers, ephemeralStats, rootFsInfo, logStats, etcHostsStats, true)
}
</code></pre>
<p>In the last line you can find a usage of <code>calcEphemeralStorage</code>.</p>
<p>In the recent version the <a href="https://github.com/kubernetes/kubernetes/blob/ebcb4a2d88c83096e6068aa56e9a5281976e1fec/pkg/kubelet/stats/helper.go#L374" rel="nofollow noreferrer">mentioned code</a> include the same log calculation section, so the solution should work for the <a href="https://github.com/kubernetes/kubernetes/tree/092fbfbf53427de67cac1e9fa54aaa09a28371d7" rel="nofollow noreferrer">latest release</a> too.</p>
<p>See also:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/d88fadbd65c5e8bde22630d251766a634c7613b0/pkg/kubelet/eviction/eviction_manager.go#L504" rel="nofollow noreferrer">variable with total usage of ephemeral storage</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/tree/d88fadbd65c5e8bde22630d251766a634c7613b0" rel="nofollow noreferrer">commit of your version</a></li>
<li><a href="https://developer.ibm.com/components/kubernetes/articles/setup-guide-for-kubernetes-developers/" rel="nofollow noreferrer">Setup guide for Kubernetes developers</a></li>
<li><a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/compiling-kubernetes-binaries" rel="nofollow noreferrer">Compiling Kubernetes binaries</a></li>
</ul>
| Mikołaj Głodziak |
<p>Kubernetes (v1.10.8) installed on my cloud by kismatic (v1.12.0). How I can update kubernetes to the latest version with <code>kubeadm</code>?</p>
| MarcinW | <p>With such version difference - we currently have <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1231" rel="nofollow noreferrer">v1.23</a> (<a href="https://kubernetes.io/releases/" rel="nofollow noreferrer">see official supported releases</a>) - I would consider creating the cluster from the beginning.</p>
<p>If this is not possible, you should upgrade step by step (from version to version). <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">Here</a> you can find a guide that will help you upgrade kubeadm clusters.
A link to the documentation for older versions can be found <a href="https://v1-19.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">here</a>, but
<blockquote>
<p><em><strong>NOTE</strong></em>:
Kubernetes v1.19 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot.</p>
</blockquote>
<p>However, keep in mind that upgrading through so many versions can cause other issues, so I recommend using the first option.</p>
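<p>If you do go the step-by-step route, each hop looks roughly like this on the control-plane node (a sketch only - the exact target version and the kubelet/kubectl package upgrade commands depend on your OS and the next minor version in the chain):</p>
<pre><code>kubeadm upgrade plan
kubeadm upgrade apply v1.11.x   # replace with the next minor version in the chain
# then, for each node: drain it, upgrade the kubelet/kubectl packages, restart kubelet
kubectl drain &lt;node-name&gt; --ignore-daemonsets
# ... upgrade kubelet and kubectl packages here ...
kubectl uncordon &lt;node-name&gt;
</code></pre>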
| kkopczak |
<p>I am currently setting up a Kubernetes cluster but I noticed there are no default storage classes defined.</p>
<pre><code>u@n:~$ kubectl get sc
No resources found in default namespace.
</code></pre>
<p>When reading through the docs there are lots of examples for storage classes used when you're deploying your cluster on cloud providers, but not self hosted. What kind of storage class do I need to use in my case?</p>
| Ferskfisk | <p>In general, you can start from the Kubernetes documentation. <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="nofollow noreferrer">Here</a> you can find storage-classes concept. Each StorageClass has a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">local volumes</a> could help you. Look at the example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Local volumes do not currently support dynamic provisioning, however a StorageClass should still be created to delay volume binding until Pod scheduling. This is specified by the <code>WaitForFirstConsumer</code> volume binding mode.</p>
<p>Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim.</p>
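<p>With that StorageClass in place you still have to create the PersistentVolumes yourself. A minimal sketch of a local PV that uses it (the path, capacity and node name are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1        # placeholder - a directory or disk on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node-1            # placeholder - the node that holds the disk
</code></pre>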
<p>If you are looking for a complete guide to configuring storage for a bare-metal cluster, you can find it <a href="https://www.weave.works/blog/kubernetes-faq-configure-storage-for-bare-metal-cluster" rel="nofollow noreferrer">here</a>. As I mentioned before, local volumes do not currently support dynamic provisioning. However, there is a workaround if you are using an NFS server - look at <a href="https://geoffreymahugu.medium.com/kubernetes-bare-metal-dynamic-storage-allocation-e5311ac45909" rel="nofollow noreferrer">this guide</a>.</p>
| Mikołaj Głodziak |
<p>I'm trying to use the Micronaut Kubernetes informer as explained in the documentation. This is my code:</p>
<pre><code>@Singleton
@Informer(apiType = V1ConfigMap.class, apiListType = V1ConfigMapList.class)
public class ConfigMapInformer implements ResourceEventHandler<V1ConfigMap> {

    @Override
    public void onAdd(V1ConfigMap obj) {
        System.err.println("add config map");
    }

    @Override
    public void onUpdate(V1ConfigMap oldObj, V1ConfigMap newObj) {
        System.err.println("update configmap");
    }

    @Override
    public void onDelete(V1ConfigMap obj, boolean deletedFinalStateUnknown) {
    }
}
</code></pre>
<p>And I'm using minikube for running this application, but after changing the ConfigMap nothing happens.</p>
<p>This is my build.gradle dependencies section:
<a href="https://i.stack.imgur.com/BV9jR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BV9jR.jpg" alt="enter image description here" /></a></p>
<p>And these are the logs of the pod:</p>
<pre><code>06:16:02.327 [pool-3-thread-1] DEBUG i.m.k.c.KubernetesConfigMapWatcher - PropertySource modified by ConfigMap: employee
06:16:02.327 [pool-3-thread-1] INFO  i.m.context.DefaultBeanContext - Reading bootstrap environment configuration
06:16:02.328 [pool-3-thread-1] INFO  i.m.d.c.c.DistributedPropertySourceLocator - Resolved 1 configuration sources from client: compositeConfigurationClient(kubernetes)
</code></pre>
| Javad Behrouzi | <p>It's hard to guess without having access to the source code. But there's an example informer app in the micronaut-kubernetes GitHub repository: <a href="https://github.com/micronaut-projects/micronaut-kubernetes/tree/master/examples/micronaut-kubernetes-informer" rel="nofollow noreferrer">https://github.com/micronaut-projects/micronaut-kubernetes/tree/master/examples/micronaut-kubernetes-informer</a> - check it out.</p>
<p>What could also help is a complete <code>build.gradle</code> snippet.</p>
| Pavol Gressa |
<p>I wanted to create a MySQL container in Kubernetes with strict mode disabled by default. I know how to disable strict mode in Docker. I tried to use the same approach in Kubernetes, but it shows an error log.</p>
<p>docker</p>
<pre class="lang-sh prettyprint-override"><code>docker container run -t -d --name hello-wordl mysql --sql-mode=""
</code></pre>
<p>kubernetes</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-db
labels:
app: db
spec:
selector:
matchLabels:
app: db
template:
metadata:
name: my-db
labels:
app: db
spec:
containers:
- name: my-db
image: mariadb
imagePullPolicy: Always
args: ["--sql-mode=\"\""]
</code></pre>
<p>error:</p>
<blockquote>
<p>2021-10-29 08:20:57+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.2.40+maria~bionic started.
2021-10-29 08:20:57+00:00 [ERROR] [Entrypoint]: mysqld failed while attempting to check config
command was: mysqld --sql-mode="" --verbose --help --log-bin-index=/tmp/tmp.i8yL5kgKoq
2021-10-29 8:20:57 140254859638464 [ERROR] mysqld: Error while setting value '""' to 'sql_mode'</p>
</blockquote>
| Deno | <p>Based on the error you're getting, MariaDB is reading the double quotes as the value of sql_mode. You should omit the escaped double quotes.</p>
<pre><code>args: ["--sql-mode="]
</code></pre>
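<p>Applied to your Deployment, the container section would then look something like this (a sketch based on your manifest):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: my-db
  image: mariadb
  imagePullPolicy: Always
  args: ["--sql-mode="]
</code></pre>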
| Fiona W |
<p>Does the Kubernetes scheduler place the pods on the nodes only based on their requested resources and nodes' available resources at the current snapshot of the server or it also takes into account the node's historical resource utilization?</p>
| Saeid Ghafouri | <p>In the official <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">Kubernetes documentation</a> we can find process and metrics used by <code>kube-scheduler</code> for choosing node for pod.</p>
<p>Basically this is 2-step process:</p>
<blockquote>
<p>kube-scheduler selects a node for the pod in a 2-step operation:</p>
<ol>
<li>Filtering</li>
<li>Scoring</li>
</ol>
</blockquote>
<p>The <em>filtering</em> step is responsible for getting the list of nodes which are actually able to run a pod:</p>
<blockquote>
<p>The <em>filtering</em> step finds the set of Nodes where it's feasible to schedule the Pod. For example, the PodFitsResources filter checks whether a candidate Node has enough available resource to meet a Pod's specific resource requests. After this step, the node list contains any suitable Nodes; often, there will be more than one. If the list is empty, that Pod isn't (yet) schedulable.</p>
</blockquote>
<p>The <em>scoring</em> step is responsible for choosing the best node from the list generated by the <em>filtering</em> step:</p>
<blockquote>
<p>In the <em>scoring</em> step, the scheduler ranks the remaining nodes to choose the most suitable Pod placement. The scheduler assigns a score to each Node that survived filtering, basing this score on the active scoring rules.</p>
</blockquote>
<blockquote>
<p>Finally, kube-scheduler assigns the Pod to the Node with the highest ranking. If there is more than one node with equal scores, kube-scheduler selects one of these at random.</p>
</blockquote>
<p>When the node with the highest score is chosen, the scheduler notifies the API server:</p>
<blockquote>
<p>...picks a Node with the highest score among the feasible ones to run the Pod. The scheduler then notifies the API server about this decision in a process called <em>binding</em>.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler" rel="nofollow noreferrer">Factors that are taken into consideration for scheduling</a>:</p>
<ul>
<li>Individual and collective resource requirements</li>
<li>Hardware</li>
<li>Policy constraints</li>
<li>Affinity and anti-affinity specifications</li>
<li>Data locality</li>
<li>Inter-workload interference</li>
<li>Others...</li>
</ul>
<p>More detailed information about parameters be found <a href="https://kubernetes.io/docs/reference/scheduling/policies/" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>The following <em>predicates</em> implement filtering:</p>
<ul>
<li><code>PodFitsHostPorts</code>: Checks if a Node has free ports (the network protocol kind) for the Pod ports the Pod is requesting.</li>
<li><code>PodFitsHost</code>: Checks if a Pod specifies a specific Node by its hostname.</li>
<li><code>PodFitsResources</code>: Checks if the Node has free resources (eg, CPU and Memory) to meet the requirement of the Pod.</li>
<li><code>MatchNodeSelector</code>: Checks if a Pod's Node <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">Selector</a> matches the Node's <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels" rel="nofollow noreferrer">label(s)</a>.</li>
<li><code>NoVolumeZoneConflict</code>: Evaluate if the <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Volumes</a> that a Pod requests are available on the Node, given the failure zone restrictions for that storage.</li>
<li><code>NoDiskConflict</code>: Evaluates if a Pod can fit on a Node due to the volumes it requests, and those that are already mounted.</li>
<li><code>MaxCSIVolumeCount</code>: Decides how many <a href="https://kubernetes.io/docs/concepts/storage/volumes/#csi" rel="nofollow noreferrer">CSI</a> volumes should be attached, and whether that's over a configured limit.</li>
<li><code>PodToleratesNodeTaints</code>: checks if a Pod's <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">tolerations</a> can tolerate the Node's <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">taints</a>.</li>
<li><code>CheckVolumeBinding</code>: Evaluates if a Pod can fit due to the volumes it requests. This applies for both bound and unbound <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PVCs</a>.</li>
</ul>
</blockquote>
<blockquote>
<p>The following <em>priorities</em> implement scoring:</p>
<ul>
<li><code>SelectorSpreadPriority</code>: Spreads Pods across hosts, considering Pods that belong to the same <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> or <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a>.</li>
<li><code>InterPodAffinityPriority</code>: Implements preferred <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">inter pod affininity and antiaffinity</a>.</li>
<li><code>LeastRequestedPriority</code>: Favors nodes with fewer requested resources. In other words, the more Pods that are placed on a Node, and the more resources those Pods use, the lower the ranking this policy will give.</li>
<li><code>MostRequestedPriority</code>: Favors nodes with most requested resources. This policy will fit the scheduled Pods onto the smallest number of Nodes needed to run your overall set of workloads.</li>
<li><code>RequestedToCapacityRatioPriority</code>: Creates a requestedToCapacity based ResourceAllocationPriority using default resource scoring function shape.</li>
<li><code>BalancedResourceAllocation</code>: Favors nodes with balanced resource usage.</li>
<li><code>NodePreferAvoidPodsPriority</code>: Prioritizes nodes according to the node annotation <code>scheduler.alpha.kubernetes.io/preferAvoidPods</code>. You can use this to hint that two different Pods shouldn't run on the same Node.</li>
<li><code>NodeAffinityPriority</code>: Prioritizes nodes according to node affinity scheduling preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution. You can read more about this in <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Assigning Pods to Nodes</a>.</li>
<li><code>TaintTolerationPriority</code>: Prepares the priority list for all the nodes, based on the number of intolerable taints on the node. This policy adjusts a node's rank taking that list into account.</li>
<li><code>ImageLocalityPriority</code>: Favors nodes that already have the <a href="https://kubernetes.io/docs/reference/glossary/?all=true#term-image" rel="nofollow noreferrer">container images</a> for that Pod cached locally.</li>
<li><code>ServiceSpreadingPriority</code>: For a given Service, this policy aims to make sure that the Pods for the Service run on different nodes. It favours scheduling onto nodes that don't have Pods for the service already assigned there. The overall outcome is that the Service becomes more resilient to a single Node failure.</li>
<li><code>EqualPriority</code>: Gives an equal weight of one to all nodes.</li>
<li><code>EvenPodsSpreadPriority</code>: Implements preferred <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">pod topology spread constraints</a>.</li>
</ul>
</blockquote>
<p>Answering your question:</p>
<blockquote>
<p>Does it take into account the node's historical resource utilization?</p>
</blockquote>
<p>As you can see, in the above list there are no parameters related to historical resource utilization. Also, I did research and didn't find any information about it.</p>
| Mikolaj S. |
<p>I was looking for a method to upgrade the k8s version without downtime for Azure AKS and found this amazing blog post <a href="https://omichels.github.io/zerodowntime-aks.html" rel="nofollow noreferrer">https://omichels.github.io/zerodowntime-aks.html</a>, but I got an error right at the start.</p>
<p>The currently running version of k8s in my region is no longer available. When I tried to create a temporary node pool I got the error below:</p>
<pre><code>(AgentPoolK8sVersionNotSupported) Version 1.19.6 is not supported in this region.
Please use [az aks get-versions] command to get the supported version list in this region.
For more information, please check https://aka.ms/supported-version-list
</code></pre>
<p>What can I do to achieve zero downtime upgrade?</p>
| PSKP | <p>Here is how I upgraded without downtime, for your reference.</p>
<ol>
<li><p>Upgrade control plane only. (Can finish it on azure portal)<a href="https://i.stack.imgur.com/TcPX2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TcPX2.png" alt="enter image description here" /></a></p>
</li>
<li><p>Add a new node pool. The version of the new node pool is now higher (the same as the control plane). Then add a label to it, e.g. <strong>nodePool=newNodePool</strong>.</p>
</li>
<li><p>Patch all applications to the new node pool (by nodeSelector).</p>
<p><code>$ kubectl get deployment -n {namespace} -o name | xargs kubectl patch -p "{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"nodePool\":\"newNodePool\"}}}}}" -n {namespace}</code></p>
</li>
<li><p>Check if the pods are scheduled to the new node pool.</p>
<p><code>$ kubectl get pods -owide</code></p>
</li>
<li><p>Delete the old node pool (you can cordon and drain its nodes first; see the sketch after this list).</p>
</li>
</ol>
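<p>Before step 5, it can be safer to cordon and drain the old pool's nodes so any remaining pods are evicted gracefully (a sketch - node names are placeholders, and depending on your kubectl version the last flag may be <code>--delete-local-data</code> instead):</p>
<pre><code>kubectl cordon &lt;old-node-name&gt;
kubectl drain &lt;old-node-name&gt; --ignore-daemonsets --delete-emptydir-data
</code></pre>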
| HKBN-ITDS |
<p>Let's say I have a service that maps to a pod that has 2 containers: one exposes port 8080, the other one exposes port 8081. The service exposes both ports. The ingress uses nginx-ingress, and has the cookie-based session affinity annotations. It has 2 paths: one is <code>/</code> mapping to port 8080, the other one is <code>/static</code> mapping to port 8081 on the same service. Will the session affinity work in such a way that all the requests from the same client will be sent to the same pod no matter if the path is <code>/</code> or <code>/static</code>?</p>
<p>Below are full configs:</p>
<p>Ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/affinity-mode: "persistent"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
rules:
- host: test.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: test-service
port:
number: 8080
- path: /static
pathType: Prefix
backend:
service:
name: test-service
port:
number: 8081
</code></pre>
<p>Service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-service
spec:
type: ClusterIP
selector:
app: test-pod
ports:
- name: container1
port: 8080
targetPort: 8080
- name: container2
port: 8081
targetPort: 8081
</code></pre>
<p>Deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
metadata:
labels:
app: test-pod
spec:
containers:
- name: container1
image: ...
ports:
- containerPort: 8080
- name: container2
image: ...
ports:
- containerPort: 8081
</code></pre>
| user3908406 | <p>I managed to test your configuration.</p>
<p>Actually, this affinity annotation will work only for the <code>/</code> path - this is how <code>nginx ingress</code> <a href="https://stackoverflow.com/questions/59272484/sticky-sessions-on-kubernetes-cluster/59360370#59360370">works</a>. To make the affinity annotation work for both paths you need to create two ingress definitions:</p>
<p>Ingress for path <code>/</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress-one
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/affinity-mode: "balanced"
nginx.ingress.kubernetes.io/session-cookie-name: "route-one"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
rules:
- host: <your-domain>
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: test-service
port:
number: 8080
</code></pre>
<p>Ingress for path <code>/static</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress-two
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/affinity-mode: "balanced"
nginx.ingress.kubernetes.io/session-cookie-name: "route-two"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
rules:
- host: <your-domain>
http:
paths:
- path: /static
pathType: Prefix
backend:
service:
name: test-service
port:
number: 8081
</code></pre>
<p>Back to your main question - as we are creating two different ingresses with two different cookies, they are independent of each other. Each of them will choose its "pod" to "stick" to regardless of what the other has chosen. I did research and couldn't find any information about configuring it in a way that makes it work the way you want.
Briefly answering your question:</p>
<blockquote>
<p>Will the session affinity work in such way where all the requests from the same client will be sent to the same pod no matter if the path is <code>/</code> or <code>/static</code>?</p>
</blockquote>
<p>No.</p>
| Mikolaj S. |
<p>While reading about nodeSelector (<a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#:%7E:text=nodeSelector" rel="nofollow noreferrer">link</a>), I was bewildered as to why the node selector is a key-value pair. It would have been simpler to just provide an identifier. For example, instead of the following</p>
<pre><code>kubectl label nodes node.xyz disktype=ssd
</code></pre>
<p>We could use</p>
<pre><code>kubectl label nodes node.xyz ssdDisk
</code></pre>
<p>What is the reason for defining nodeSelector as a key-value pair?</p>
| Manish Khandelwal | <p>Short answer: because that's what the engineers who created it thought. :)</p>
<p>You can look at the source code; <code>nodeSelector</code> is defined there as a key-value map.
By the way, <code>labels</code> are also defined in the same way, so that one can match the other.
Because the node selector is a key-value map, you can combine many of them while keeping things well organized.</p>
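<p>As an illustration (the label keys and values below are just examples), a key-value map lets a single pod select nodes along several independent dimensions at once, which a flat identifier could not express as cleanly:</p>
<pre class="lang-yaml prettyprint-override"><code># pod spec fragment - selects only nodes that carry BOTH labels
spec:
  nodeSelector:
    disktype: ssd
    topology.kubernetes.io/zone: us-east-1a
</code></pre>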
<hr />
<p>Here you have a <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go" rel="nofollow noreferrer">link</a> to the code where nodeselector has been defined. You may be interested in these code snippets:</p>
<pre><code>type NodeSelector struct {
//Required. A list of node selector terms. The terms are ORed.
NodeSelectorTerms []NodeSelectorTerm `json:"nodeSelectorTerms" protobuf:"bytes,1,rep,name=nodeSelectorTerms"`
}
</code></pre>
<pre><code>type NodeSelectorTerm struct {
// A list of node selector requirements by node's labels.
// +optional
MatchExpressions []NodeSelectorRequirement `json:"matchExpressions,omitempty" protobuf:"bytes,1,rep,name=matchExpressions"`
// A list of node selector requirements by node's fields.
// +optional
MatchFields []NodeSelectorRequirement `json:"matchFields,omitempty" protobuf:"bytes,2,rep,name=matchFields"`
}
</code></pre>
<pre><code>type NodeSelectorRequirement struct {
// The label key that the selector applies to.
Key string `json:"key" protobuf:"bytes,1,opt,name=key"`
// Represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
Operator NodeSelectorOperator `json:"operator" protobuf:"bytes,2,opt,name=operator,casttype=NodeSelectorOperator"`
// An array of string values. If the operator is In or NotIn,
// the values array must be non-empty. If the operator is Exists or DoesNotExist,
// the values array must be empty. If the operator is Gt or Lt, the values
// array must have a single element, which will be interpreted as an integer.
// This array is replaced during a strategic merge patch.
// +optional
Values []string `json:"values,omitempty" protobuf:"bytes,3,rep,name=values"`
}
</code></pre>
<p>And mapping <code>NodeSelector</code>:</p>
<pre><code>NodeSelector map[string]string `json:"nodeSelector,omitempty" protobuf:"bytes,7,rep,name=nodeSelector"`
</code></pre>
| Mikołaj Głodziak |
<p>I have an application running on k8s and would like to update the Java heap size.
I've updated the JAVA_OPTS environment variable and set it in the deployment file as below:</p>
<pre><code>- name: JAVA_OPTS
value: "-Xmx768m -XX:MaxMetaspaceSize=256m"
</code></pre>
<p>But when I run the command below, it looks like my changes do not take effect:</p>
<pre><code> java -XX:+PrintFlagsFinal -version | grep -iE 'HeapSize|PermSize|ThreadStackSize'
intx CompilerThreadStackSize = 0 {pd product}
uintx ErgoHeapSizeLimit = 0 {product}
uintx HeapSizePerGCThread = 87241520 {product}
uintx InitialHeapSize := 33554432 {product}
uintx LargePageHeapSizeThreshold = 134217728 {product}
uintx MaxHeapSize := 536870912 {product}
intx ThreadStackSize = 1024 {pd product}
intx VMThreadStackSize = 1024 {pd product}
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (IcedTea 3.12.0) (Alpine 8.212.04-r0)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
</code></pre>
<p>Am I wrong? Can someone help me and explain how to set those values?</p>
| bilel | <p>I see that you used OpenJDK Alpine to deploy a Java application. You need to use the "<strong>JAVA_TOOL_OPTIONS</strong>" environment variable instead of "<strong>JAVA_OPTS</strong>" (the JVM picks up JAVA_TOOL_OPTIONS automatically, while JAVA_OPTS is only honoured if a startup script reads it), something like:</p>
<pre><code>spec:
containers:
- name: jvm_options
image: xxx:xxx
env:
- name: JAVA_TOOL_OPTIONS
value: "-Xmx768m -XX:MaxMetaspaceSize=256m"
</code></pre>
<p>Once your application is running, you can check the application log and you will find the log below:</p>
<pre><code>Picked up JAVA_TOOL_OPTIONS: -Xmx768m -XX:MaxMetaspaceSize=256m
</code></pre>
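<p>To verify the change from inside the running pod, you can re-run the same check as in your question (a sketch - replace the pod name):</p>
<pre><code>kubectl exec -it &lt;pod-name&gt; -- sh -c "java -XX:+PrintFlagsFinal -version | grep -iE 'MaxHeapSize|MaxMetaspaceSize'"
</code></pre>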
| HKBN-ITDS |
<p>I am trying to run an application locally on k8s but I am not able to reach it.</p>
<p>Here is my deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: listings
labels:
app: listings
spec:
replicas: 2
selector:
matchLabels:
app: listings
template:
metadata:
labels:
app: listings
spec:
containers:
- image: mydockerhub/listings:latest
name: listings
envFrom:
- secretRef:
name: listings-secret
- configMapRef:
name: listings-config
ports:
- containerPort: 8000
name: django-port
</code></pre>
<p>And this is my service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: listings
labels:
app: listings
spec:
type: NodePort
selector:
app: listings
ports:
- name: http
port: 8000
targetPort: 8000
nodePort: 30036
protocol: TCP
</code></pre>
<p>At this stage, I don't want to use other methods like ingress or ClusterIP, or load balancer. I want to make nodePort work because I am trying to learn.</p>
<p>When I run <code>kubectl get svc -o wide</code> I see</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
listings NodePort 10.107.77.231 <none> 8000:30036/TCP 28s app=listings
</code></pre>
<p>When I run <code>kubectl get node -o wide</code> I see</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane,master 85d v1.23.3 192.168.49.2 <none> Ubuntu 20.04.2 LTS 5.10.16.3-microsoft-standard-WSL2 docker://20.10.12
</code></pre>
<p>and when I run <code>minikube ip</code> it shows <code>192.168.49.2</code></p>
<p>When I try to open <code>http://192.168.49.2:30036/health</code> it does not open: <code>This site can’t be reached</code></p>
<p>How should expose my application externally?</p>
<p>Note that I have created the required ConfigMap and Secret objects. Also note that this is a simple Django RESTful application: if you hit the /health endpoint, it returns success, and that's it, so there is no problem with the application itself.</p>
| Amin Ba | <p>That is because your local machine and the minikube node are not in the same network segment, so you must do something more to access a minikube service on Windows.</p>
<p>First</p>
<pre><code>$ minikube service list
</code></pre>
<p>That will show your service details, which include the name, URL, nodePort and targetPort.</p>
<p>Then</p>
<pre><code>$ minikube service --url listings
</code></pre>
<p>It will open a port listening on your Windows machine that forwards traffic to the minikube node port.</p>
<p>Or you can use command <code>kubectl port-forward</code> to expose service on host port, like:</p>
<pre><code>kubectl port-forward --address 0.0.0.0 -n default service/listings 30036:8000
</code></pre>
<p>Then try with <code>http://localhost:30036/health</code></p>
| HKBN-ITDS |
<p>I created a Kubernetes cluster using <code>kubeadm</code>. Services are declared as <code>ClusterIP</code>. At the moment I'm trying to expose my app through an Ingress backed by a <code>LoadBalancer</code> service with MetalLB, but I faced some problems. If I deploy my app behind the Ingress, some JS and CSS components are not found. There was no problem running the application as a plain service, but the problem appeared when I used the Ingress. It is an ASP.NET Core application.</p>
<p>My Ingress source:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: taco-ingress
spec:
rules:
- host: tasty.taco.com
http:
paths:
- path: /web1
pathType: Prefix
backend:
service:
name: web1
port:
number: 80
- path: /web2
pathType: Prefix
backend:
service:
name: web2
port:
number: 80
</code></pre>
<p>My Deployment source:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web1
labels:
app: taco
taco: web1
spec:
replicas: 2
selector:
matchLabels:
app: taco
task: web1
template:
metadata:
labels:
app: taco
task: web1
version: v0.0.1
spec:
containers:
- name: taco
image: yatesu0x00/webapplication1:ingress-v1
ports:
- containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web2
labels:
app: taco
taco: web2
spec:
replicas: 2
selector:
matchLabels:
app: taco
task: web2
template:
metadata:
labels:
app: taco
task: web2
version: v0.0.1
spec:
containers:
- name: taco
image: yatesu0x00/webapplication2:ingress-v1
ports:
- containerPort: 80
</code></pre>
<p>My Service source:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: web1
spec:
ports:
- targetPort: 80
port: 80
selector:
app: taco
task: web1
---
apiVersion: v1
kind: Service
metadata:
name: web2
spec:
ports:
- targetPort: 80
port: 80
selector:
app: taco
task: web2
</code></pre>
<p>The html file of the app:</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Epic Website - (⌐■_■)</title>
<link rel="stylesheet" href="/lib/bootstrap/dist/css/bootstrap.min.css" />
<link rel="stylesheet" href="/css/site.css" />
</head>
<body>
<header>
<nav class="navbar navbar-expand-sm navbar-toggleable-sm navbar-light bg-white border-bottom box-shadow mb-3">
<div class="container">
<a class="navbar-brand" href="/">Home</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target=".navbar-collapse" aria-controls="navbarSupportedContent"
aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="navbar-collapse collapse d-sm-inline-flex flex-sm-row-reverse">
<ul class="navbar-nav flex-grow-1">
<li class="nav-item">
<a class="navbar-brand" href="/Home2">Home2</a>
</li>
<li class="nav-item">
<a class="navbar-brand" href="/Home/ItWorks">Click me!</a>
</li>
</ul>
</div>
</div>
</nav>
</header>
<div class="container">
<main role="main" class="pb-3">
<h2>Want this taco?</h2>
<pre>
{\__/}
( ●.●)
( >🌮
</pre>
</main>
</div>
<footer class="border-top footer text-muted">
<div class="container">
&copy; 2020 - <a href="/Home/Privacy">Privacy</a>
</div>
</footer>
<script src="/lib/jquery/dist/jquery.min.js"></script>
<script src="/lib/bootstrap/dist/js/bootstrap.bundle.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/lib/darkmode-js.min.js"></script>
<script src="/js/site.js?v=8ZRc1sGeVrPBx4lD717BgRaQekyh78QKV9SKsdt638U"></script>
</body>
</html>
</code></pre>
<p>If I open up the console in browser I can see that there is <code>404 not found</code> on all elements of type <code><script></code>.</p>
| Yatesu | <p>If you look closer at the logs, you can see that the cause of your problem is that your app is requesting static content (for example the <code>css/site.css</code> file) at the path <code>tasty.taco.com/css/site.css</code>, and since the Ingress Controller has no rule for the <code>/css</code> prefix in its definition, it returns a 404 error code.</p>
<p>The static content is actually available at the path <code>tasty.taco.com/web1/css/site.css</code> - note that I used the <code>web1</code> prefix so the Ingress knows to which service it should route this request.</p>
<p>Generally, using the <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotation with apps that request static content often causes issues like this.</p>
<p>The fix for this is <strong>not</strong> to use this annotation and instead add the possibility to set a base URL in your app, in your example by using the <a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.builder.extensions.usepathbasemiddleware?view=aspnetcore-5.0" rel="nofollow noreferrer"><code>UsePathBaseMiddleware</code> class</a>.</p>
<blockquote>
<p>Represents a middleware that extracts the specified path base from request path and postpend it to the request path base.</p>
</blockquote>
<p>For detailed steps, I'd recommend following steps presented in <a href="https://stackoverflow.com/questions/56625822/asp-net-core-2-2-kubernetes-ingress-not-found-static-content-for-custom-path/57212033#57212033">this answer</a>.</p>
| Mikolaj S. |
<p>It may sound like a naive question. I am running some load testing on one of the deployments on k8s. To get an idea of the CPU utilization, I opened the Lens HPA view, and CPU utilization is shown like this:</p>
<p><a href="https://i.stack.imgur.com/8SHmR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8SHmR.png" alt="enter image description here" /></a></p>
<p>Can anyone please tell me how to interpret this number? Earlier it was 380/50% for CPU.</p>
<p>I just want to get an idea of what this number means - if it is 380/50, is my CPU not big enough?</p>
| iron_man83 | <p>It probably means the same as the output from <code>kubectl describe hpa {hpa-name}</code>:</p>
<pre><code>$ kubectl describe hpa php-apache
Name: php-apache
...
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 60% (120m) / 50%
</code></pre>
<p>It means that CPU consumption has increased to x% of the request - <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#increase-load" rel="nofollow noreferrer">good example and explanation in the Kubernetes docs</a>:</p>
<blockquote>
<p>Within a minute or so, you should see the higher CPU load; for example:</p>
<pre><code>NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m
</code></pre>
<p>and then, more replicas. For example:</p>
<pre><code>NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 305% / 50% 1 10 7 3m
</code></pre>
<p>Here, CPU consumption has increased to 305% of the request.</p>
</blockquote>
<p>So in your example <strong>(380%/50%)</strong> it means that you set up the HPA to maintain an average CPU utilization across pods of <strong>50%</strong> (by increasing and decreasing the number of replicas - updating the deployment), and CPU consumption has increased to <strong>380%</strong>, so the deployment will be resized (scaled out) automatically.</p>
<p>Also check:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a></li>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">HorizontalPodAutoscaler Walkthrough</a></li>
</ul>
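<p>For illustration, a minimal HPA manifest (for recent clusters, <code>autoscaling/v2</code>) that keeps average CPU utilization at 50% could look like the sketch below; the <code>php-apache</code> name and the replica bounds are just example values taken from the walkthrough linked above:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
</code></pre>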
| Mikolaj S. |
<h1><strong>Problem</strong></h1>
<p>In our project, we want two pods working as server-client that communicate via the Python <code>socket</code> library. Both containers are built locally with <code>docker build</code>, pulled locally via <code>imagePullPolicy: IfNotPresent</code> on the yaml files and run on the same node of the k8s cluster (I'm running kubernetes vanilla, if that's important).</p>
<p>The communication works well when we</p>
<ul>
<li>run both python scripts in the command line</li>
<li>run both scripts as containers using <code>docker build</code> and <code>docker run</code></li>
<li>the server app container is deployed in the K8s cluster and the client app is run either on the command line or as a docker container.</li>
</ul>
<p>The communication fails when both server and client are deployed in K8s. <code>kubectl logs client -f</code> returns :</p>
<pre><code>Traceback (most recent call last):
File "client.py", line 7, in <module>
client_socket.connect((IP_Server,PORT_Server))
TimeoutError: [Errno 110] Connection timed out
</code></pre>
<p>I suspect there's a problem with the outgoing request from the client script when it's deployed on the cluster, but I can't find where the problem lies.</p>
<h1><strong>Codes</strong></h1>
<p><strong>server.py</strong></p>
<pre><code>import socket
IP = "0.0.0.0"
PORT = 1234
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind((IP, PORT))
server_socket.listen()
...
</code></pre>
<p><strong>server.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: server
labels:
app: server
spec:
ports:
- port: 1234
targetPort: 1234
protocol: TCP
selector:
app: server
---
apiVersion: v1
kind: Pod
metadata:
name: server
labels:
app: server
spec:
containers:
- name: server
image: server:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 1234
</code></pre>
<p><strong>client.py</strong></p>
<pre><code>import socket
IP_Server = # the IP of the server service, obtained from "kubectl get svc"
PORT_Server = 1234
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect((IP_Server,PORT_Server)) # fails here
...
</code></pre>
<p><strong>client.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: client
labels:
app: client
spec:
containers:
- name: client
image: client:latest
imagePullPolicy: IfNotPresent
</code></pre>
| waldowe | <p>In case anyone gets here in the future, I found a solution that worked for me by executing</p>
<pre><code>iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
</code></pre>
<p>and then deleting all coredns pods.</p>
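<p>For reference, a sketch of deleting the CoreDNS pods (assuming they carry the default <code>k8s-app=kube-dns</code> label, as in standard kubeadm installs) - the Deployment recreates them automatically:</p>
<pre><code>kubectl delete pods -n kube-system -l k8s-app=kube-dns
</code></pre>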
<p>Reference : <a href="https://github.com/kubernetes/kubernetes/issues/86762#issuecomment-836338017" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/86762#issuecomment-836338017</a></p>
| waldowe |
<p>I'm implementing a kubernetes cluster for a django-based app and I haven't found what the best practice for namespaces is.</p>
<p>My app will need various services like a postgresql cluster, a reverse proxy (traefik), an Elastic Search / Kibana cluster and argoCD & argo workflow for CD.</p>
<p>Is it better to pull all those services into a single namespace called production? Or do I need to separate them by service?</p>
<p>I began to separate them by service but I faced some problems. For example, an argo workflow launched in the argo namespace can't use a secret stored in the postgresql namespace.</p>
<p>Thank you for your help,
Tux Craft</p>
| TuxCraft | <p>Your question is slightly opinion-based. However, I will try to figure out the topic somehow by presenting both solutions. First, an excerpt from <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">the documentation</a>. There you can find a paragraph <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#when-to-use-multiple-namespaces" rel="nofollow noreferrer">when to use multiple namespaces</a>:</p>
<blockquote>
<p>Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.
Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">resource quota</a>).
It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels" rel="nofollow noreferrer">labels</a> to distinguish resources within the same namespace.</p>
</blockquote>
<p>Based on this documentation, in your case, the best solution will be to create one namespace and many deployments. This will allow you to avoid problems like this:</p>
<blockquote>
<p>For example an argo workflow launched on argo namespace can't use a secret stored from postgresql namespace.</p>
</blockquote>
<p>Technically, you can create the same thing using multiple namespaces. However, the point of namespaces is to isolate, so it doesn't seem like a good idea in your situation. You can read a very good topic about <a href="https://stackoverflow.com/questions/37221483/service-located-in-another-namespace">Service located in another namespace</a>.</p>
| Mikołaj Głodziak |
<p>Not sure if this is a silly question. When the same app/service is running in multiple containers, how do they report themselves to zookeeper/etcd and identify themselves, so that load balancers know the different instances and know who to talk to, where to probe and dispatch, etc.? Or would the service instances use some id from the container in their identification?</p>
<p>Thanks in advance</p>
| dengying | <p>To begin with, let me explain in a few sentences <a href="https://platform9.com/blog/kubernetes-service-discovery-principles-in-practice/" rel="nofollow noreferrer">how it works</a>:</p>
<blockquote>
<p>The basic building block starts with the <strong>Pod</strong>, which is just a resource that can be created and destroyed on demand. Because a Pod can be moved or rescheduled to another <strong>Node</strong>, any internal IPs that this Pod is assigned can change over time.
If we were to connect to this Pod to access our application, it would not work on the next re-deployment. To make a Pod <strong>reachable</strong> to external networks or clusters without relying on any internal IPs, we need another layer of abstraction. K8s offers that abstraction with what we call a <strong>Service Deployment.</strong></p>
</blockquote>
<p>This way, you can create a website that will be identified, for example, by a load balancer.</p>
<blockquote>
<p><strong>Services</strong> provide network connectivity to Pods that work uniformly across clusters. <strong>Service discovery is the actual process of figuring out how to connect to a service.</strong></p>
</blockquote>
<hr />
<p>You can also find some information about Service <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">in the official documentation</a>:</p>
<blockquote>
<p>An abstract way to expose an application running on a set of <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pods</a> as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.</p>
</blockquote>
<p>Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. You can read more about this topic <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">here</a> and <a href="https://platform9.com/blog/kubernetes-service-discovery-principles-in-practice/" rel="nofollow noreferrer">here</a>.</p>
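<p>As a small illustration of the DNS mode (the service name, namespace and port below are just placeholders): a Service named <code>my-service</code> in the namespace <code>my-ns</code> gets the stable DNS name <code>my-service.my-ns.svc.cluster.local</code>, so clients can reach the current set of instances without tracking individual container IPs:</p>
<pre><code># from inside any pod in the cluster; service name, namespace and port are placeholders
curl http://my-service.my-ns.svc.cluster.local:80/
</code></pre>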
| Mikołaj Głodziak |
<p>I'm relatively new to K8s, and I've set up a private cluster with a single node pool on GKE. The node pool was configured to have nodes in 3 zones in a single GCP region (Autoscaling enabled with a minimum of 2 nodes and a maximum of 20 total nodes).</p>
<p>I ran into the following error upon trying to deploy new resources: <code>Cannot schedule pods: Insufficient memory.</code> I decided to disable Autoscaling on the cluster's node pool, and then manually increase the number of nodes. I applied the change, and each Instance Group now has 3 instances (a total of 9). However, the instances listed under the 'Nodes' section of the GKE Console do not reflect that. It shows only 5 (which is actually a decrease from the original 6).</p>
<p><a href="https://i.stack.imgur.com/ZSsC5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZSsC5.png" alt="enter image description here" /></a></p>
<p>GCP DOCS:
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster</a></p>
<p>What am I missing here? Thanks</p>
| bahmsto | <p>The problem here was caused by a mismatch between the versions of the cluster control plane and the node pool.</p>
<blockquote>
<p>By default, <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-upgrades#upgrading_automatically" rel="nofollow noreferrer">automatic upgrades</a> are enabled for Google Kubernetes Engine (GKE) clusters and node pools.</p>
</blockquote>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster#upgrading-nodes" rel="nofollow noreferrer">Here</a> is documentation that helps to upgrade cluster or node pool manually.</p>
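<p>To confirm such a mismatch, you can compare the control plane and node pool versions; a sketch (assuming the default project and zone/region are configured):</p>
<pre><code>gcloud container clusters list
gcloud container node-pools list --cluster CLUSTER_NAME
</code></pre>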
<hr />
<p>Upgrading to the default cluster version can be done by running the following command:</p>
<pre><code>gcloud container clusters upgrade CLUSTER_NAME --master
</code></pre>
<hr />
<p>All nodes can be updated to the same version as the control plane by running this command:</p>
<pre><code>gcloud container clusters upgrade CLUSTER_NAME
</code></pre>
<p>The following command should be used to roll back a node pool upgrade:</p>
<pre><code>gcloud container node-pools rollback NODE_POOL_NAME --cluster CLUSTER_NAME
</code></pre>
<hr />
<p>See also <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/upgrade" rel="nofollow noreferrer">this reference</a> and <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-upgrades#how_cluster_and_node_pool_upgrades_work" rel="nofollow noreferrer">this documentation</a>.</p>
| kkopczak |
<p>I have a PowerShell script that I want to run on some Azure AKS nodes (running Windows) to deploy a security tool. There is no daemon set for this by the software vendor. How would I get it done?</p>
<p>Thanks a million
Abdel</p>
| DelMab | <p>Similar question has been asked <a href="https://techcommunity.microsoft.com/t5/azure-devops/run-a-powershell-script-on-azure-aks-nodes/m-p/2689781" rel="nofollow noreferrer">here</a>. User <a href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/1138854" rel="nofollow noreferrer">philipwelz</a> has written:</p>
<blockquote>
<p>Hey,</p>
<p>although there could be ways to do this, i would recommend that you dont. The reason is that your AKS setup should not allow execute scripts inside container directly on AKS nodes. This would imply a huge security issue IMO.</p>
<p>I suggest to find a way the execute your script directly on your nodes, for example with PowerShell remoting or any way that suits you.</p>
<p>BR,<br />
Philip</p>
</blockquote>
<p>This user is right. You should avoid executing scripts on your AKS nodes. In your situation, if you want to deploy Prisma Cloud, you need to go with the <a href="https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html" rel="nofollow noreferrer">following doc</a>. You are right that install scripts work only on Linux:</p>
<blockquote>
<p>Install scripts work on Linux hosts only.</p>
</blockquote>
<p>But, for the Windows and Mac software you have specific yaml files:</p>
<blockquote>
<p>For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy it with kubectl, as described in the following procedure.</p>
</blockquote>
<p>The entire procedure is described in detail in the document I have quoted. Pay attention to step 3 and step 4. As you can see, there is no need to run any powershell script:</p>
<p>STEP 3:</p>
<blockquote>
<ul>
<li>Generate a <a href="https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#" rel="nofollow noreferrer">defender.yaml</a> file, where:</li>
</ul>
</blockquote>
<pre><code>    The following command connects to Console (specified in --address) as user <ADMIN> (specified in --user), and generates a Defender DaemonSet YAML config file according to the configuration options passed to twistcli. The --cluster-address option specifies the address Defender uses to connect to Console.

    $ <PLATFORM>/twistcli defender export kubernetes \
        --user <ADMIN_USER> \
        --address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
        --cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME>

    - <PLATFORM> can be linux, osx, or windows.
    - <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.
</code></pre>
<p>and then STEP 4:</p>
<pre><code>kubectl create -f ./defender.yaml
</code></pre>
| Mikołaj Głodziak |
<p>How to list the pods running in a particular zone?
And suppose my cluster is configured into multiple zones, how to ensure pods are distributed into every zone?</p>
| kr_devops | <p>The zone is exposed as a label on the nodes (the well-known label is <code>topology.kubernetes.io/zone</code>), so you can select the nodes carrying the zone value you are interested in and then list the pods running on those nodes.
Something like below:</p>
<p><code>kubectl get nodes -l topology.kubernetes.io/zone=<zone value> -o json | jq -r '.items[].metadata.name' | xargs -I worker sh -c 'kubectl get pods -o wide | grep worker'</code></p>
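<p>For the second part of the question (making sure pods are spread across every zone), <code>topologySpreadConstraints</code> in the pod template is the usual mechanism; a minimal sketch, where the <code>app: my-app</code> label and image are placeholders:</p>
<pre><code>spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app
  containers:
  - name: my-app
    image: my-app:latest
</code></pre>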
| Aadhavan Rajasekar |
<p>We have followed <a href="https://mainflux.readthedocs.io/en/latest/kubernetes/" rel="nofollow noreferrer">this</a> tutorial to get mainflux up and running. After installing kubectl we added helm repos as follows</p>
<pre><code>helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
</code></pre>
<p>We have installed ingress-nginx using</p>
<pre><code> helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
</code></pre>
<p>finally mainflux is installed</p>
<pre><code>helm install mainflux . -n mf --set ingress.hostname='example.com' \
  --set influxdb.enabled=true
</code></pre>
<p>After that we have added the following ports to the ingress-nginx-controller service:</p>
<pre><code>kubectl edit svc -n ingress-nginx ingress-nginx-controller
- name: mqtt
port: 1883
protocol: TCP
targetPort: 1883
- name: mqtts
port: 8883
protocol: TCP
targetPort: 8883
</code></pre>
<p>Everything seems to be up and running, but when we visit example.com we see a 404 message instead of the UI, even though the mainflux-nginx-ingress in the mf namespace points to it, as shown below:</p>
<pre><code> rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: mainflux-ui
port:
number: 3000
- path: /version
pathType: Prefix
backend:
service:
name: mainflux-things
port:
number: 8182
</code></pre>
<p>The Ingress file that was created looks like this:</p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: nginx-ingress-ingress-nginx-controller
namespace: ingress-nginx
uid: be22613c-df21-41f3-9466-eb2146ac0503
resourceVersion: '2151483'
generation: 3
creationTimestamp: '2021-12-31T11:39:08Z'
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"nginx-ingress-ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"ingressClassName":"nginx","rules":[{"host":"aqueglobal.hopto.org","http":{"paths":[{"backend":{"service":{"name":"ingress-nginx-controller","port":{"number":80}}},"path":"/","pathType":"ImplementationSpecific"}]}}]}}
managedFields:
- manager: kubectl-client-side-apply
operation: Update
apiVersion: networking.k8s.io/v1
time: '2021-12-31T11:39:08Z'
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:ingressClassName: {}
- manager: nginx-ingress-controller
operation: Update
apiVersion: networking.k8s.io/v1
time: '2021-12-31T11:39:33Z'
fieldsType: FieldsV1
fieldsV1:
f:status:
f:loadBalancer:
f:ingress: {}
- manager: dashboard
operation: Update
apiVersion: networking.k8s.io/v1
time: '2022-01-03T07:26:29Z'
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:rules: {}
spec:
ingressClassName: nginx
rules:
- host: aqueglobal.dockerfix.ga
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: mainflux-ui
port:
number: 80
status:
loadBalancer:
ingress:
- ip: 178.128.140.136
</code></pre>
<p>Please let me know if you need more information on this.</p>
<p>Logs from ingress-nginx-controller</p>
<pre><code>Release: v1.1.0
Build: cacbee86b6ccc45bde8ffc184521bed3022e7dee
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
-------------------------------------------------------------------------------
W1229 10:42:59.968679 8 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1229 10:42:59.969348 8 main.go:223] "Creating API client" host="https://10.245.0.1:443"
I1229 10:42:59.981189 8 main.go:267] "Running in Kubernetes cluster" major="1" minor="21" git="v1.21.5" state="clean" commit="aea7bbadd2fc0cd689de94a54e5b7b758869d691" platform="linux/amd64"
I1229 10:43:01.110865 8 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1229 10:43:01.135087 8 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1229 10:43:01.192917 8 nginx.go:255] "Starting NGINX Ingress controller"
I1229 10:43:01.218095 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b79068dd-ef5b-4098-bf83-0b5b38d328e8", APIVersion:"v1", ResourceVersion:"1364193", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1229 10:43:02.300256 8 store.go:420] "Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-nginx-ingress" error="ingress does not contain a valid IngressClass"
I1229 10:43:02.300294 8 store.go:420] "Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-nginx-rewrite-ingress" error="ingress does not contain a valid IngressClass"
I1229 10:43:02.300308 8 store.go:420] "Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-nginx-rewrite-ingress-http-adapter" error="ingress does not contain a valid IngressClass"
I1229 10:43:02.300544 8 store.go:420] "Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-jaeger-operator-jaeger-query" error="ingress does not contain a valid IngressClass"
I1229 10:43:02.394534 8 nginx.go:297] "Starting NGINX process"
I1229 10:43:02.394823 8 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I1229 10:43:02.395134 8 nginx.go:317] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1229 10:43:02.395498 8 controller.go:155] "Configuration changes detected, backend reload required"
I1229 10:43:02.420641 8 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
I1229 10:43:02.420988 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-54bfb9bb-h7rnk"
I1229 10:43:02.476845 8 controller.go:172] "Backend successfully reloaded"
I1229 10:43:02.477112 8 controller.go:183] "Initial sync, sleeping for 1 second"
I1229 10:43:02.477268 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54bfb9bb-h7rnk", UID:"a7bc7f3d-057c-48af-9cc7-ac5696e33c4e", APIVersion:"v1", ResourceVersion:"1364272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
10.110.0.4 - - [29/Dec/2021:11:40:20 +0000] "CONNECT 161.97.119.209:25562 HTTP/1.1" 400 150 "-" "-" 0 0.100 [] [] - - - - 8a665aa9190578b193cc461a2dd7c250
10.110.0.5 - - [29/Dec/2021:12:00:47 +0000] "GET / HTTP/1.1" 400 650 "http://localhost:8001/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 461 0.000 [] [] - - - - 9392ae22b5c8f2b2af93a16105d117af
10.110.0.6 - - [29/Dec/2021:12:00:47 +0000] "GET /favicon.ico HTTP/1.1" 400 650 "http://178.128.140.136:443/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 376 0.000 [] [] - - - - c92ed214e9bb86e0de12cf5b77d428a9
10.110.0.6 - - [29/Dec/2021:12:04:33 +0000] "GET / HTTP/1.1" 400 650 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 454 0.000 [] [] - - - - 443edf8d2edd6a051ce07d654bb2af89
10.110.0.4 - - [29/Dec/2021:12:04:33 +0000] "GET /favicon.ico HTTP/1.1" 400 650 "http://178.128.140.136:443/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 376 0.000 [] [] - - - - 005b2e9af113b00747166d1906906588
I1229 14:42:40.103830 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.039s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:14.3kBs testedConfigurationSize:0.04}
I1229 14:42:40.103862 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-jaeger-operator-jaeger-query/mf"
10.110.0.4 - - [29/Dec/2021:17:09:23 +0000] "\x16\x03\x01\x01\xFE\x01\x00\x01\xFA\x03\x03\xF0Y\x16\xD3ELt\xCCv\xFAq$\xA4V\xEA\x80\x03\x1C\xE5\xEF\x1A\x1Cy\x12\x88_\xEBam_\xF7X\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.055 [] [] - - - - 145d5cb5329de31ffe9b8ce98bcfd841
10.110.0.4 - - [29/Dec/2021:17:27:59 +0000] "\x04\x01\x00\x19h/\x12\xA1\x00" 400 150 "-" "-" 0 0.002 [] [] - - - - f7b5cdff79f165cb9eb6e93a1302f32b
10.110.0.6 - - [29/Dec/2021:17:27:59 +0000] "\x05\x01\x00" 400 150 "-" "-" 0 0.002 [] [] - - - - 8658dc6c8c1670df628a7a4583d4587f
10.110.0.4 - - [29/Dec/2021:17:27:59 +0000] "CONNECT hotmail-com.olc.protection.outlook.com:25 HTTP/1.1" 400 150 "-" "-" 0 0.003 [] [] - - - - c119e2115f54ce2f1ef91f771e64d456
2021/12/29 18:20:58 [crit] 33#33: *252621 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
2021/12/29 18:47:11 [crit] 33#33: *267094 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.5, server: 0.0.0.0:443
2021/12/29 19:37:37 [crit] 33#33: *294934 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
2021/12/29 20:20:07 [crit] 34#34: *318401 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
10.110.0.4 - - [29/Dec/2021:21:03:10 +0000] "\x04\x01\x00PU\xCE\xA0s\x00" 400 150 "-" "-" 0 0.003 [] [] - - - - 47053e3a5c942a0ee2239ba2e4d9be8f
10.110.0.6 - - [29/Dec/2021:21:03:10 +0000] "\x05\x01\x00" 400 150 "-" "-" 0 0.002 [] [] - - - - a3d70a5ff4485970e78f028aa9a827d4
10.110.0.6 - - [29/Dec/2021:21:03:10 +0000] "CONNECT 85.206.160.115:80 HTTP/1.1" 400 150 "-" "-" 0 0.002 [] [] - - - - 7b4fff89c964b6865ac4f67fa897ad5d
2021/12/29 21:20:05 [crit] 34#34: *351510 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.6, server: 0.0.0.0:443
10.110.0.4 - - [29/Dec/2021:21:53:07 +0000] "\x01\x02\x03\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.212 [] [] - - - - 3e69ee8444b4410a1e841bcb9ca645e4
10.110.0.4 - - [29/Dec/2021:22:22:10 +0000] "CONNECT 161.97.119.209:25562 HTTP/1.1" 400 150 "-" "-" 0 0.089 [] [] - - - - b1d0f23d0111c17bc08c92c72eb9c3a4
10.110.0.4 - - [29/Dec/2021:23:27:28 +0000] "H\x00\x00\x00tj\xA8\x9E#D\x98+\xCA\xF0\xA7\xBBl\xC5\x19\xD7\x8D\xB6\x18\xEDJ\x1En\xC1\xF9xu[l\xF0E\x1D-j\xEC\xD4xL\xC9r\xC9\x15\x10u\xE0%\x86Rtg\x05fv\x86]%\xCC\x80\x0C\xE8\xCF\xAE\x00\xB5\xC0f\xC8\x8DD\xC5\x09\xF4" 400 150 "-" "-" 0 0.142 [] [] - - - - e07241ad9c169d9998fa7ef1ca46a9ac
2021/12/29 23:31:19 [crit] 33#33: *423930 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.6, server: 0.0.0.0:443
10.110.0.6 - - [29/Dec/2021:23:47:36 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.038 [] [] - - - - c1cb7bd37bf5661a79475d3700770fde
2021/12/29 23:48:00 [crit] 34#34: *433156 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.6, server: 0.0.0.0:443
10.110.0.5 - - [29/Dec/2021:23:58:07 +0000] "\xC9\x94\xD1\xA6\xAE\x9C\x05lM/\x09\x8Cp#\xEE\x9D*5#]\xC7R:\xC8\x8E/\x11\xB8\xCD\x89Z\xFB\xA4\x19f\xD2\xCE\xB3\xA1\x81\xBB\xFC\xA0\xDD%d1\x17\xA6%n\xC5" 400 150 "-" "-" 0 0.042 [] [] - - - - 25e4cb81e83b0cdaaa06570e63bdf694
10.110.0.6 - - [29/Dec/2021:23:58:07 +0000] "\x10 \x00\x00BBBB\xBA\x8C\xC1\xABDAAA" 400 150 "-" "-" 0 0.035 [] [] - - - - 426506f8a90e477fe94f2ffcc8183c97
I1230 00:42:40.103254 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.046s renderingIngressLength:1 renderingIngressTime:0s admissionTime:14.3kBs testedConfigurationSize:0.046}
I1230 00:42:40.103476 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-jaeger-operator-jaeger-query/mf"
E1230 00:48:27.313265 8 leaderelection.go:330] error retrieving resource lock ingress-nginx/ingress-controller-leader: etcdserver: request timed out
I1230 00:48:34.204268 8 leaderelection.go:283] failed to renew lease ingress-nginx/ingress-controller-leader: timed out waiting for the condition
I1230 00:48:34.204406 8 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
E1230 00:48:41.310746 8 leaderelection.go:330] error retrieving resource lock ingress-nginx/ingress-controller-leader: etcdserver: request timed out
I1230 00:48:50.241126 8 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
2021/12/30 01:44:38 [crit] 33#33: *497526 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
10.110.0.4 - - [30/Dec/2021:02:09:49 +0000] "145.ll|'|'|SGFjS2VkX0Q0OTkwNjI3|'|'|WIN-JNAPIER0859|'|'|JNapier|'|'|19-02-01|'|'||'|'|Win 7 Professional SP1 x64|'|'|No|'|'|0.7d|'|'|..|'|'|AA==|'|'|112.inf|'|'|SGFjS2VkDQoxOTIuMTY4LjkyLjIyMjo1NTUyDQpEZXNrdG9wDQpjbGllbnRhLmV4ZQ0KRmFsc2UNCkZhbHNlDQpUcnVlDQpGYWxzZQ==12.act|'|'|AA==" 400 150 "-" "-" 0 0.141 [] [] - - - - e40974d785f85a100960886a497916c6
2021/12/30 02:11:36 [crit] 34#34: *512430 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.5, server: 0.0.0.0:443
2021/12/30 02:16:03 [crit] 33#33: *514904 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
10.110.0.6 - - [30/Dec/2021:04:24:50 +0000] "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.125 [] [] - - - - 48598e8bbad3e1b15b1887ec187bb224
10.110.0.5 - - [30/Dec/2021:04:24:50 +0000] "GET / HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Linux; Android 8.0.0;) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Mobile Safari/537.36" 211 0.000 [] [] - - - - f1aa3dcdecf07e6560caec45bcfee1e4
10.110.0.4 - - [30/Dec/2021:04:24:51 +0000] "\x00\xFFK\x00\x00\x00\xE2\x00 \x00\x00\x00\x0E2O\xAAC\xE92g\xC2W'\x17+\x1D\xD9\xC1\xF3,kN\x17\x14" 400 150 "-" "-" 0 0.052 [] [] - - - - 34ef8bd3bfc420819af3ac933ff54ea9
10.110.0.4 - - [30/Dec/2021:04:52:58 +0000] "ABCDEFGHIJKLMNOPQRSTUVWXYZ9999" 400 150 "-" "-" 0 0.014 [] [] - - - - 4e5553403d3cbe707bad49c052f52a2f
10.110.0.5 - - [30/Dec/2021:05:19:57 +0000] "POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1" 400 150 "-" "-" 51 0.050 [] [] - - - - c6cec0eedc7723db6542bb78665c19c8
2021/12/30 05:21:21 [crit] 33#33: *617199 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.5, server: 0.0.0.0:443
10.110.0.5 - - [30/Dec/2021:06:05:54 +0000] "\x16\x03\x01\x01\xFE\x01\x00\x01\xFA\x03\x03_\xE0\x15(,\x13\xA7\xFD\xD1x\xDCm\xDF_5\xFD\x8EL\xBAG\xD0\xB9\xA1\x98\xE8X\xE6E\x138\xE1\xB7\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.081 [] [] - - - - c3d52cdc830e38cd8a75aa61975835cd
10.110.0.4 - - [30/Dec/2021:07:05:48 +0000] "\x01\x02\x03\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.218 [] [] - - - - f18e9f0380ab696404ae465495411af8
I1230 07:48:10.715646 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.032s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:21.6kBs testedConfigurationSize:0.033}
I1230 07:48:10.715691 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-ingress/mf"
I1230 07:48:11.327497 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.036s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:55.6kBs testedConfigurationSize:0.037}
I1230 07:48:11.327543 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress/mf"
I1230 07:48:11.941131 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.034s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:21.7kBs testedConfigurationSize:0.035}
I1230 07:48:11.941229 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress-http-adapter/mf"
10.110.0.4 - - [30/Dec/2021:07:53:33 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.037 [] [] - - - - 3996839a4965b5cf2ad4ae90d7f5116e
I1230 08:15:03.063694 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.033s renderingIngressLength:1 renderingIngressTime:0s admissionTime:25.6kBs testedConfigurationSize:0.033}
I1230 08:15:03.063726 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-ingress/mf"
I1230 08:15:03.676872 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.042s renderingIngressLength:1 renderingIngressTime:0s admissionTime:55.8kBs testedConfigurationSize:0.042}
I1230 08:15:03.677099 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress/mf"
I1230 08:15:04.288284 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.041s renderingIngressLength:1 renderingIngressTime:0s admissionTime:25.7kBs testedConfigurationSize:0.041}
I1230 08:15:04.288313 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress-http-adapter/mf"
W1230 09:06:09.292167 8 controller.go:1299] Error getting SSL certificate "mf/mainflux-server": local SSL certificate mf/mainflux-server was not found. Using default certificate
I1230 09:06:09.352552 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.06s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:25.6kBs testedConfigurationSize:0.061}
I1230 09:06:09.352599 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-ingress/mf"
W1230 09:06:09.901615 8 controller.go:1299] Error getting SSL certificate "mf/mainflux-server": local SSL certificate mf/mainflux-server was not found. Using default certificate
I1230 09:06:09.942908 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.041s renderingIngressLength:1 renderingIngressTime:0s admissionTime:55.8kBs testedConfigurationSize:0.041}
I1230 09:06:09.942978 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress/mf"
W1230 09:06:10.513294 8 controller.go:1299] Error getting SSL certificate "mf/mainflux-server": local SSL certificate mf/mainflux-server was not found. Using default certificate
I1230 09:06:10.552006 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.038s renderingIngressLength:1 renderingIngressTime:0s admissionTime:25.7kBs testedConfigurationSize:0.038}
I1230 09:06:10.552038 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress-http-adapter/mf"
2021/12/30 09:53:31 [crit] 33#33: *767491 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.5, server: 0.0.0.0:443
I1230 10:42:40.093248 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.033s renderingIngressLength:1 renderingIngressTime:0s admissionTime:14.3kBs testedConfigurationSize:0.033}
I1230 10:42:40.093294 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-jaeger-operator-jaeger-query/mf"
2021/12/30 11:37:54 [crit] 33#33: *825144 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
2021/12/30 11:47:21 [crit] 34#34: *830372 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:
</code></pre>
| Sami Hassan | <p>In this case the problem was the lack of an <strong>IngressClass</strong>.</p>
<p>To solve this problem:</p>
<pre class="lang-sh prettyprint-override"><code>"Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-nginx-ingress" error="ingress does not contain a valid IngressClass"
</code></pre>
<p>You should add an annotation to your Ingress yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code> annotations:
kubernetes.io/ingress.class: "nginx"
</code></pre>
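<p>With <code>networking.k8s.io/v1</code> Ingress resources you can alternatively set the class through the spec field instead of the annotation:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  ingressClassName: nginx
</code></pre>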
<p>See <a href="https://forum.linuxfoundation.org/discussion/859965/exercise-7-nginx-update-requires-change-to-yaml" rel="nofollow noreferrer">this answer in another forum</a> and also <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">this documentation</a>.</p>
| kkopczak |
<p>I've googled for a few days and haven't found any solutions.
I've tried to update k8s from 1.19.0 to 1.19.6 on Ubuntu 20.
(The cluster was installed manually: k81 - master and k82 - worker node.)</p>
<pre><code># kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[preflight] Some fatal errors occurred:
[ERROR CoreDNSUnsupportedPlugins]: couldn't retrieve DNS addon deployments: deployments.apps is forbidden: User "system:node:k81" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
[ERROR CoreDNSMigration]: couldn't retrieve DNS addon deployments: deployments.apps is forbidden: User "system:node:k81" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
[ERROR kubeDNSTranslation]: configmaps "kube-dns" is forbidden: User "system:node:k81" cannot get resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'k81' and this object
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>When I try to list roles and permissions under kubernetes-admin user - it shows the same error with permissions:</p>
<pre><code>~# kubectl get rolebindings,clusterrolebindings --all-namespaces
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:node:k81" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:node:k81" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
</code></pre>
<p>I can list pods and cluster nodes:</p>
<pre><code># kubectl get nodes
NAME STATUS ROLES AGE VERSION
k81 Ready master 371d v1.19.6
k82 Ready <none> 371d v1.19.6
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
gitlab-managed-apps gitlab-runner-gitlab-runner-6bf497d6c9-g7rhc 1/1 Running 47 27d
gitlab-managed-apps prometheus-kube-state-metrics-c6bbb8465-8kls5 1/1 Running 3 27d
ingress-nginx ingress-nginx-controller-848bfcb64d-r6k6k 1/1 Running 3 27d
kube-system coredns-f9fd979d6-6dd42 1/1 Running 1 24h
kube-system coredns-f9fd979d6-zjsnz 1/1 Running 1 24h
kube-system csi-nfs-controller-5bd5cb55bc-76xdm 3/3 Running 69 27d
kube-system csi-nfs-controller-5bd5cb55bc-mkwmv 3/3 Running 61 27d
kube-system csi-nfs-node-b4v4g 3/3 Running 18 49d
kube-system etcd-k81 1/1 Running 30 371d
kube-system kube-apiserver-k81 1/1 Running 54 371d
kube-system kube-controller-manager-k81 1/1 Running 27 5d22h
kube-system kube-flannel-ds-l4xkx 1/1 Running 13 371d
kube-system kube-flannel-ds-rdm4n 1/1 Running 5 371d
kube-system kube-proxy-4976l 1/1 Running 5 371d
kube-system kube-proxy-g2fn4 1/1 Running 11 371d
kube-system kube-scheduler-k81 1/1 Running 330 371d
kube-system tiller-deploy-f5c865db5-zlgk9 1/1 Running 5 27d
# kubectl logs coredns-f9fd979d6-zjsnz -n kube-system
Error from server (Forbidden): pods "coredns-f9fd979d6-zjsnz" is forbidden: User "system:node:k81" cannot get resource "pods/log" in API group "" in the namespace "kube-system"
</code></pre>
<pre><code># kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin
# kubectl get csr
No resources found
</code></pre>
| Ninja | <p>The solution for the issue is to <a href="https://v1-19.docs.kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig" rel="nofollow noreferrer">regenerate the kubeconfig file for the admin</a>:</p>
<pre class="lang-sh prettyprint-override"><code>sudo kubeadm init phase kubeconfig admin --kubeconfig-dir=.
</code></pre>
<p>The above command will create the <code>admin.conf</code> file in the current directory (let's say it is <code>/home/user/testing/</code>), so when you are running <code>kubectl</code> commands you need to specify it using the <code>--kubeconfig {directory}/admin.conf</code> flag, for example:</p>
<pre class="lang-sh prettyprint-override"><code>sudo kubectl get rolebindings,clusterrolebindings --all-namespaces --kubeconfig /home/user/testing/admin.conf
</code></pre>
<p>As you are using the <code>/etc/kubernetes/admin.conf</code> file by default, you can delete it and create a new one in the <code>/etc/kubernetes</code> directory:</p>
<pre><code>sudo rm /etc/kubernetes/admin.conf
sudo kubeadm init phase kubeconfig admin --kubeconfig-dir=/etc/kubernetes/
</code></pre>
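<p>If you normally run <code>kubectl</code> as a regular user via <code>~/.kube/config</code>, remember to refresh that copy as well, for example:</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>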
| Mikolaj S. |
<p>We are trying some docker containers locally. For security purposes, the user and password are referenced as env variables in the config file. The config file is mounted as a volume in the docker-compose file for one of the APIs. After docker-compose up, inside the container, we still see the variable name and not the env variable value.</p>
<p>Config file inside the container copied as volume:</p>
<p>dbconfig:</p>
<pre><code>dbuser: ${USER}
dbpass: ${PASSWORD}
dbname:
dbdrivername:
tablename
</code></pre>
<p>docker-compose.yaml:</p>
<p>services:</p>
<pre><code>api:
image: ${API_IMAGE:api}:${VERSION:-latest}
ports:
- 8080:8080
environment:
- "USER=${USER}"
- "PASSWORD=${PASSWORD}"
volumes:
- ./conf/config.yaml:/etc/api.yaml
command: ["-config", "/etc/api.yaml"]
</code></pre>
<p>Config.yaml:</p>
<p>dbconfig:</p>
<pre><code>dbuser: ${USER}
dbpass: ${PASSWORD}
dbname:
dbdrivername:
tablename
</code></pre>
<p>Please help us get rid of this error as we are newly adopting docker testing</p>
| Raji M | <p>Issue fixed with the solution mentioned here: <a href="https://stackoverflow.com/questions/71916469/how-to-run-2-different-commands-from-docker-compose-command">How to run 2 different commands from docker-compose command</a>.
We added a sed command in the entrypoint script which searches for the env variable names inside the config and replaces them with their values. The env variables are passed from docker-compose to the service:</p>
<pre><code>sed \
-e "s/USER/${USER}/g" \
-e "s/PASSWORD/${PASSWORD}/g" \ -i /etc/api.yaml
</code></pre>
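<p>An alternative sketch, assuming the <code>envsubst</code> utility (from the gettext package) is available in the image and the <code>.tpl</code> path is just an example name: keep a template file and render it in the entrypoint instead of editing the config in place:</p>
<pre><code>envsubst < /etc/api.yaml.tpl > /etc/api.yaml
</code></pre>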
| Raji M |
<p>I'm working on this cloud project where we have several development repositories in GitHub and in each we have the overlays containing config files that are specific for a local K8S cluster, a dev Azure cluster and a prod Azure cluster.</p>
<p>In order to have different repos for these envs we use a repo with a kustomization file for each service that fetches the overlay of the dev/test/prod repo and uses it as its base.</p>
<p>However, the issue is managing these resources: we don't want to share the dev repos with possible clients or other end users who need to deploy these services into their K8S environment, but not giving them permissions implies that they will not be able to fetch these overlays and bases and deploy them.</p>
<p>What is the best practice in order to have a protected and restrictive dev repos and yet be able to do the deployment operation?</p>
<p>I know this is a abstract question but I've never dealt with organization of repos in a scale like this.</p>
| Fábio Caramelo | <p>To clarify I am posting Community Wiki answer.</p>
<p>The solution you suggested in comment's section:</p>
<blockquote>
<p>We will have the deployments/namespaces/services manifests in the same repo as the application source code as well the an overlay with a customization with the necessary resources to fully deploy in the dev environment.</p>
<p>As for test/ prod environments we created a structure to add an overlay per app with the same resource files but with the env details in the files to be used as configmaps.</p>
<p>And a customization using the dev repository as the base. Unfortunately this will imply that the cluster admin will have access to all repos of an application.</p>
</blockquote>
| kkopczak |
<p>I am running a GPU server by referring to <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/" rel="nofollow noreferrer">this document</a>.
I have found that the GPU is being used for DL work in a Jupyter notebook running in a CPU-only pod scheduled on the GPU node, as shown below.</p>
<p>Obviously there is no <code>nvidia.com/gpu</code> entry in Limits or Requests,
so I don't understand why the GPU is being used.</p>
<pre><code>Limits:
cpu: 2
memory: 2000Mi
Requests:
cpu: 2
memory: 2000Mi
</code></pre>
<p>Is there a way to disable GPU for CPU pods?</p>
<p>Thank you.</p>
| OH KYOON | <p>Based on <a href="https://github.com/NVIDIA/k8s-device-plugin/issues/146" rel="nofollow noreferrer">this topic</a> on github:</p>
<blockquote>
<p>This is currently not supported and we don't really have a plan to support it.</p>
</blockquote>
<p>But...</p>
<blockquote>
<p>you might want to take a look at the CUDA_VISIBLE_DEVICES environment variable that controls what devices a specific CUDA process can see:<br />
<a href="https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/" rel="nofollow noreferrer">https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/</a></p>
</blockquote>
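<p>A minimal sketch of how this could look in a CPU-only pod spec (the container name and image are placeholders; an empty value should make the CUDA runtime see no devices):</p>
<pre><code>spec:
  containers:
  - name: notebook
    image: your-notebook-image
    env:
    - name: CUDA_VISIBLE_DEVICES
      value: ""
</code></pre>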
| Mikołaj Głodziak |
<p>I want to configure port forwarding <code>80</code>-><code>32181</code> and <code>443</code>-><code>30598</code>. <code>32181</code> and <code>30598</code> are the <code>NodePort</code>s of the k8s ingress controller, to which I can establish a connection correctly:</p>
<pre><code>$ curl http://localhost:32181
<html>
<head><title>404 Not Found</title></head>
<body>
...
$ curl https://localhost:30598 -k
<html>
<head><title>404 Not Found</title></head>
<body>
...
</code></pre>
<p>What I have done is:</p>
<pre><code>$ cat /proc/sys/net/ipv4/ip_forward
1
$ firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: cockpit dhcpv6-client frp http https kube-apiserver kube-kubelet ssh
ports:
protocols:
forward: no
masquerade: yes
forward-ports:
port=80:proto=tcp:toport=32181:toaddr=
port=443:proto=tcp:toport=30598:toaddr=
source-ports:
icmp-blocks:
rich rules:
</code></pre>
<p>but I can't access my nginx via <code>80</code> or <code>443</code>:</p>
<pre><code>$ curl https://localhost:443 -k
curl: (7) Failed to connect to localhost port 443: Connection refused
</code></pre>
<p>and more info:</p>
<blockquote>
<p>centos: 8.2 4.18.0-348.2.1.el8_5.x86_64</p>
</blockquote>
<blockquote>
<p>k8s: 1.22(with calico(v3.21.0) network plugin)</p>
</blockquote>
<blockquote>
<p>firewalld: 0.9.3</p>
</blockquote>
<p>and iptables output:</p>
<pre><code>$ iptables -nvL -t nat --line-numbers
Chain PREROUTING (policy ACCEPT 51 packets, 2688 bytes)
num pkts bytes target prot opt in out source destination
1 51 2688 cali-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:6gwbT8clXdHdC1b1 */
2 51 2688 KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
3 51 2688 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 50 packets, 2648 bytes)
num pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 1872 packets, 112K bytes)
num pkts bytes target prot opt in out source destination
1 1894 114K cali-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:O3lYWMrLQYEMJtB5 */
2 1862 112K KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
3 0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 1922 packets, 116K bytes)
num pkts bytes target prot opt in out source destination
1 1894 114K cali-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
2 1911 115K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
3 758 45480 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-SERVICES (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
2 0 0 KUBE-SVC-Z6GDYMWE5TV2NNJN tcp -- * * 0.0.0.0/0 10.110.193.197 /* kubernetes-dashboard/dashboard-metrics-scraper cluster IP */ tcp dpt:8000
3 0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
4 0 0 KUBE-SVC-EDNDUDH2C75GIR6O tcp -- * * 0.0.0.0/0 10.97.201.174 /* ingress-nginx/ingress-nginx-controller:https cluster IP */ tcp dpt:443
5 0 0 KUBE-SVC-EZYNCFY2F7N6OQA2 tcp -- * * 0.0.0.0/0 10.103.242.141 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook cluster IP */ tcp dpt:443
6 0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
7 0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
8 0 0 KUBE-SVC-CEZPIJSAUFW5MYPQ tcp -- * * 0.0.0.0/0 10.97.166.112 /* kubernetes-dashboard/kubernetes-dashboard cluster IP */ tcp dpt:443
9 0 0 KUBE-SVC-H5K62VURUHBF7BRH tcp -- * * 0.0.0.0/0 10.104.154.95 /* lens-metrics/kube-state-metrics:metrics cluster IP */ tcp dpt:8080
10 0 0 KUBE-SVC-MOZMMOD3XZX35IET tcp -- * * 0.0.0.0/0 10.96.73.22 /* lens-metrics/prometheus:web cluster IP */ tcp dpt:80
11 0 0 KUBE-SVC-CG5I4G2RS3ZVWGLK tcp -- * * 0.0.0.0/0 10.97.201.174 /* ingress-nginx/ingress-nginx-controller:http cluster IP */ tcp dpt:80
12 1165 69528 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-POSTROUTING (1 references)
num pkts bytes target prot opt in out source destination
1 1859 112K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 mark match ! 0x4000/0x4000
2 3 180 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK xor 0x4000
3 3 180 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ random-fully
Chain KUBE-MARK-DROP (0 references)
num pkts bytes target prot opt in out source destination
1 0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-NODEPORTS (1 references)
num pkts bytes target prot opt in out source destination
1 2 120 KUBE-SVC-EDNDUDH2C75GIR6O tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */ tcp dpt:30598
2 1 60 KUBE-SVC-CG5I4G2RS3ZVWGLK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */ tcp dpt:32181
Chain KUBE-MARK-MASQ (27 references)
num pkts bytes target prot opt in out source destination
1 3 180 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-SEP-IPE5TMLTCUYK646X (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.147 0.0.0.0/0 /* kube-system/kube-dns:metrics */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:192.168.103.147:9153
Chain KUBE-SEP-3LZLTHU4JT3FAVZK (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.149 0.0.0.0/0 /* kube-system/kube-dns:metrics */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:192.168.103.149:9153
Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
2 0 0 KUBE-SEP-IPE5TMLTCUYK646X all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ statistic mode random probability 0.50000000000
3 0 0 KUBE-SEP-3LZLTHU4JT3FAVZK all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */
Chain KUBE-SEP-ZOAMCQDU54EOM4EJ (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.141 0.0.0.0/0 /* kubernetes-dashboard/dashboard-metrics-scraper */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes-dashboard/dashboard-metrics-scraper */ tcp to:192.168.103.141:8000
Chain KUBE-SVC-Z6GDYMWE5TV2NNJN (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.110.193.197 /* kubernetes-dashboard/dashboard-metrics-scraper cluster IP */ tcp dpt:8000
2 0 0 KUBE-SEP-ZOAMCQDU54EOM4EJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes-dashboard/dashboard-metrics-scraper */
Chain KUBE-SEP-HYE2IFAO6PORQFJR (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.0.176 0.0.0.0/0 /* default/kubernetes:https */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ tcp to:192.168.0.176:6443
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
2 0 0 KUBE-SEP-HYE2IFAO6PORQFJR all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SEP-GJ4OJHBKIREWLMRS (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.146 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */
2 2 120 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */ tcp to:192.168.103.146:443
Chain KUBE-SVC-EDNDUDH2C75GIR6O (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.201.174 /* ingress-nginx/ingress-nginx-controller:https cluster IP */ tcp dpt:443
2 2 120 KUBE-MARK-MASQ tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */ tcp dpt:30598
3 2 120 KUBE-SEP-GJ4OJHBKIREWLMRS all -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */
Chain KUBE-SEP-K2CVHZPTBE2YAD6P (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.146 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook */ tcp to:192.168.103.146:8443
Chain KUBE-SVC-EZYNCFY2F7N6OQA2 (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.103.242.141 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook cluster IP */ tcp dpt:443
2 0 0 KUBE-SEP-K2CVHZPTBE2YAD6P all -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook */
Chain KUBE-SEP-S6VTWHFP6KEYRW5L (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.147 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.103.147:53
Chain KUBE-SEP-SFGZMYIS2CE4JD3K (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.149 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.103.149:53
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
2 0 0 KUBE-SEP-S6VTWHFP6KEYRW5L all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ statistic mode random probability 0.50000000000
3 0 0 KUBE-SEP-SFGZMYIS2CE4JD3K all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SEP-IJUMPPTQDLYXOX4B (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.147 0.0.0.0/0 /* kube-system/kube-dns:dns */
2 0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.103.147:53
Chain KUBE-SEP-C4W6TKYY5HHEG4RV (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.149 0.0.0.0/0 /* kube-system/kube-dns:dns */
2 0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.103.149:53
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ udp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
2 0 0 KUBE-SEP-IJUMPPTQDLYXOX4B all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ statistic mode random probability 0.50000000000
3 0 0 KUBE-SEP-C4W6TKYY5HHEG4RV all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain KUBE-SEP-GX372II3CQAGUHFM (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.145 0.0.0.0/0 /* kubernetes-dashboard/kubernetes-dashboard */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes-dashboard/kubernetes-dashboard */ tcp to:192.168.103.145:8443
Chain KUBE-SVC-CEZPIJSAUFW5MYPQ (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.166.112 /* kubernetes-dashboard/kubernetes-dashboard cluster IP */ tcp dpt:443
2 0 0 KUBE-SEP-GX372II3CQAGUHFM all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes-dashboard/kubernetes-dashboard */
Chain KUBE-SEP-I3RZS3REJP7POFLG (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.143 0.0.0.0/0 /* lens-metrics/kube-state-metrics:metrics */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* lens-metrics/kube-state-metrics:metrics */ tcp to:192.168.103.143:8080
Chain KUBE-SVC-H5K62VURUHBF7BRH (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.104.154.95 /* lens-metrics/kube-state-metrics:metrics cluster IP */ tcp dpt:8080
2 0 0 KUBE-SEP-I3RZS3REJP7POFLG all -- * * 0.0.0.0/0 0.0.0.0/0 /* lens-metrics/kube-state-metrics:metrics */
Chain KUBE-SEP-ROTMHDCXAI3T7IOR (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.144 0.0.0.0/0 /* lens-metrics/prometheus:web */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* lens-metrics/prometheus:web */ tcp to:192.168.103.144:9090
Chain KUBE-SVC-MOZMMOD3XZX35IET (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.73.22 /* lens-metrics/prometheus:web cluster IP */ tcp dpt:80
2 0 0 KUBE-SEP-ROTMHDCXAI3T7IOR all -- * * 0.0.0.0/0 0.0.0.0/0 /* lens-metrics/prometheus:web */
Chain KUBE-SEP-OAYGOO6JHJEB65WC (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.146 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */
2 1 60 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */ tcp to:192.168.103.146:80
Chain KUBE-SVC-CG5I4G2RS3ZVWGLK (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.201.174 /* ingress-nginx/ingress-nginx-controller:http cluster IP */ tcp dpt:80
2 1 60 KUBE-MARK-MASQ tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */ tcp dpt:32181
3 1 60 KUBE-SEP-OAYGOO6JHJEB65WC all -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */
Chain KUBE-PROXY-CANARY (0 references)
num pkts bytes target prot opt in out source destination
Chain cali-nat-outgoing (1 references)
num pkts bytes target prot opt in out source destination
1 49 3274 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:flqWnvo8yq4ULQLa */ match-set cali40masq-ipam-pools src ! match-set cali40all-ipam-pools dst random-fully
Chain cali-POSTROUTING (1 references)
num pkts bytes target prot opt in out source destination
1 1894 114K cali-fip-snat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Z-c7XtVd2Bq7s_hA */
2 1894 114K cali-nat-outgoing all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:nYKhEzDlr11Jccal */
3 0 0 MASQUERADE all -- * tunl0 0.0.0.0/0 0.0.0.0/0 /* cali:SXWvdsbh4Mw7wOln */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL random-fully
Chain cali-PREROUTING (1 references)
num pkts bytes target prot opt in out source destination
1 51 2688 cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-snat (1 references)
num pkts bytes target prot opt in out source destination
Chain cali-OUTPUT (1 references)
num pkts bytes target prot opt in out source destination
1 1894 114K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:GBTAv2p5CwevEyJm */
Chain cali-fip-dnat (2 references)
num pkts bytes target prot opt in out source destination
Chain KUBE-KUBELET-CANARY (0 references)
num pkts bytes target prot opt in out source destination
</code></pre>
| Vista Chyi | <p>To clarify I am posting Community Wiki answer.</p>
<p>The problem occurred only when forwarding to a Kubernetes Service of type NodePort.</p>
<p>To solve the problem <strong>you have to set up an External Nginx as a TCP Proxy.</strong></p>
<p>Here one can find <a href="https://docs.gitlab.com/charts/advanced/external-nginx/" rel="nofollow noreferrer">documentation</a> about <em>External NGINX</em>.</p>
<blockquote>
<p>Ingress does not directly support TCP services, so some additional configuration is necessary. Your NGINX Ingress Controller may have been <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md" rel="nofollow noreferrer">deployed directly</a> (i.e. with a Kubernetes spec file) or through the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">official Helm chart</a>. The configuration of the TCP pass through will differ depending on the deployment approach.</p>
</blockquote>
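<p>As a rough sketch (the ports and node addresses below are placeholders, not taken from your setup), a TCP pass-through block on the external NGINX could look like this:</p>
<pre><code>stream {
    upstream k8s_https_nodeport {
        # one entry per cluster node that exposes the NodePort
        server <node-ip-1>:<https-node-port>;
        server <node-ip-2>:<https-node-port>;
    }

    server {
        # port the external NGINX listens on
        listen 443;
        proxy_pass k8s_https_nodeport;
    }
}
</code></pre>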
| kkopczak |
<p>I am using Alibaba <a href="https://github.com/alibaba/canal" rel="nofollow noreferrer">Canal</a> to sync MySQL from datacenter A to datacenter B (Canal is deployed in Kubernetes). After I start the canal-server, it shows an error like this:</p>
<pre><code>[root@canal-server-stable-0 bin]# tail -f /home/canal/logs/canal/canal.log
2021-05-26 11:47:32.329 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2021-05-26 11:47:32.366 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2021-05-26 11:47:32.849 [main] ERROR com.alibaba.otter.canal.deployer.CanalLauncher - ## Something goes wrong when starting up the canal Server:
com.alibaba.otter.canal.common.CanalException: load manager config failed.
Caused by: com.alibaba.otter.canal.common.CanalException: requestGet for canal config error: auth :admin is failed
2021-05-26 11:52:50.402 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2021-05-26 11:52:50.432 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2021-05-26 11:52:50.836 [main] ERROR com.alibaba.otter.canal.deployer.CanalLauncher - ## Something goes wrong when starting up the canal Server:
com.alibaba.otter.canal.common.CanalException: load manager config failed.
Caused by: com.alibaba.otter.canal.common.CanalException: requestGet for canal config error: auth :admin is failed
</code></pre>
<p>this is my canal server config:</p>
<pre><code>[root@canal-server-stable-0 bin]# cat ../conf/canal.properties
# register ip
# canal.register.ip = canal-server-stable-0.canal-server-discovery-svc-stable.hades-pro.svc.cluster.local
canal.register.ip = 10.244.5.5
# canal admin config
canal.admin.manager = 10.105.49.36:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 6bb4837eb74329105ee4568dda7dc67ed2ca2ad9
# admin auto register
canal.admin.register.auto = true
canal.admin.register.cluster = online
</code></pre>
<p>The hashed password was generated from <code>123456</code>, so I am sure the password is right. I looked up the password in the database, and it matches my config:</p>
<p><a href="https://i.stack.imgur.com/wSfhB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wSfhB.png" alt="enter image description here" /></a></p>
<p>I also used Arthas to trace the running canal-admin application:</p>
<pre><code>watch com.alibaba.otter.canal.admin.controller.PollingConfigController auth "{params,returnObj}" -x 3 -b
</code></pre>
<p>It shows that the password I pass is <code>6bb4837eb74329105ee4568dda7dc67ed2ca2ad9</code>. I do not know what is going wrong now; what should I do to fix it?</p>
| Dolphin | <p>You can check the canal-admin <code>conf/application.yaml</code> file:</p>
<pre><code>canal:
adminUser: admin
adminPasswd: 123456
</code></pre>
<p>If you have modified the <code>canal.adminPasswd</code> attribute, make sure it is set to the correct value.</p>
<p>Hope this helps.</p>
| user17505573 |
<p>I think that you can configure this in the <code>-service.yaml</code>. For example, I have a frontend and a backend; the frontend should be public and the backend should be private. So how do you set up two microservices that communicate with each other, one being public and one private, using Kubernetes?</p>
<p>frontend-service.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: frontend
name: frontend
spec:
ports:
- port: 8081
protocol: TCP
targetPort: 8081
selector:
app: frontend
type: LoadBalancer
status:
loadBalancer: {}
</code></pre>
<p>backend-service.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: backend
name: backend
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: backend
type: LoadBalancer
status:
loadBalancer: {}
</code></pre>
<p>What I tried</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f frontend-deploy.yaml
kubectl get pods
kubectl apply -f frontend-service.yaml
kubectl get service
</code></pre>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f backend-deploy.yaml
kubectl get pods
kubectl apply -f backend-service.yaml
kubectl get service
</code></pre>
<pre class="lang-sh prettyprint-override"><code>kubectl expose deployment frontend --type=LoadBalancer --name=frontend-service.yaml
</code></pre>
| Test | <p>You should use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">ClusterIP type</a> for the private / internal services which will make your application only available within the cluster:</p>
<blockquote>
<ul>
<li><code>ClusterIP</code>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default <code>ServiceType</code></li>
</ul>
</blockquote>
<p>...and <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">LoadBalancer type</a> for the public services which are designed to receive requests from the outside the cluster:</p>
<blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a>: Exposes the Service externally using a cloud provider's load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</li>
</ul>
</blockquote>
<p>Example:</p>
<p>Let's say that I have created frontend and backend deployments: the frontend on port 8081 and the backend on port 8080. The Service YAMLs are similar to yours (I used LoadBalancer for the frontend and ClusterIP for the backend). The frontend service is available on port 80, the backend on port 8080:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
name: frontend
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8081
selector:
app: frontend
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
labels:
app: backend
name: backend
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: backend
type: ClusterIP
</code></pre>
<p>Let's check the services:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.36.9.41 <none> 8080/TCP 19m
frontend LoadBalancer 10.36.4.172 xx.xxx.xxx.xxx 80:32126/TCP 19m
</code></pre>
<p>As can see, both services have ClusterIP (used for communication inside the cluster) and frontend service has a LoadBalancer with public IP.</p>
<p>Let's <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">exec into a pod</a> and send request to the frontend and backend using just a service name:</p>
<pre class="lang-sh prettyprint-override"><code>root@my-nginx-deployment-5bbb68bb8f-jgvrk:/# curl backend:8080
"hello world backend"
root@my-nginx-deployment-5bbb68bb8f-jgvrk:/# curl frontend:80
"hello world frontend"
</code></pre>
<p>It's working properly, because the pod that I exec into is in the same namespace (default). For communication between different namespaces you <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-work-by-dns-name" rel="nofollow noreferrer">should use <code><service-name>.<namespace>.svc.cluster.local</code> or ClusterIP</a>:</p>
<pre><code>root@my-nginx-deployment-5bbb68bb8f-jgvrk:/# curl backend.default.svc.cluster.local:8080
"hello world backend"
root@my-nginx-deployment-5bbb68bb8f-jgvrk:/# curl 10.36.9.41:8080
"hello world backend"
</code></pre>
<p>This is how communication inside the cluster works in Kubernetes.</p>
<p>For requests outside the cluster use LoadBalancer IP (<code>EXTERNAL-IP</code> in the <code>kubectl get svc</code> command):</p>
<pre class="lang-sh prettyprint-override"><code>user@shell-outside-the-cluster:~$ curl xx.xxx.xxx.xxx
"hello world frontend"
</code></pre>
<p>Consider using <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> when you have multiple applications which you want to expose publicly.</p>
<p>Also check these:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer">Access Services Running on Clusters | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Debug Services | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods | Kubernetes</a></li>
</ul>
| Mikolaj S. |
<p>Is there a tool or command that would draw Kubernetes pod topology?
I would like to illustrate the relations between pods so that I can visualize the high-level architecture diagram of an existing cluster.</p>
| Psyduck | <p>You can use the <a href="https://github.com/steveteuber/kubectl-graph" rel="nofollow noreferrer">graph plugin</a> with kubectl, or <a href="https://www.weave.works/docs/scope/latest/introducing/" rel="nofollow noreferrer">Weave Scope</a> and <a href="https://k8slens.dev/" rel="nofollow noreferrer">Lens</a>, which have a GUI.</p>
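<p>As a rough sketch (the exact subcommands and flags may differ, so check the plugin's README), installing the graph plugin via krew and rendering the output with Graphviz could look like this:</p>
<pre><code>kubectl krew install graph
kubectl graph all -n kube-system | dot -T svg -o cluster.svg
</code></pre>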
| Ahvand Fadaei |
<p>I have a pod called <code>mypod0</code> with two persistent volumes.</p>
<p><code>mypd0</code>, <code>mypd1</code> (provided through two persistent volume claims <code>myclaim0</code>, <code>myclaim1</code>) mounted into <code>mypod0</code> at <code>/dir0</code>, <code>/dir1</code> as shown in the pod definition below.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod0
spec:
containers:
- name: mycontainer
image: myimage
volumeMounts:
- mountPath: "/dir0"
name: mypd
- mountPath: "/dir1"
- name: mypd1
volumes:
- name: mypd0
persistentVolumeClaim:
claimName: myclaim0
- name: mypd1
persistentVolumeClaim:
claimName: myclaim1
</code></pre>
<p>Now I also have another pod <code>mypod1</code> already running in the cluster. Is there a way to <strong>dynamically/programmatically</strong> (using fabric8, Kubernetes-client) to unmount (detach) <code>mypd1</code> from <code>mypod0</code>, and then attach the volume <code>mypd1</code> into <code>mypod1</code> (without restarting any of the pods <code>mypod0</code>, <code>mypod1</code>). Any hint will be appreciated.</p>
| Mazen Ezzeddine | <p><strong>As <a href="https://stackoverflow.com/users/213269/jonas">Jonas</a> mentioned in the comment, this action is not possible:</strong></p>
<blockquote>
<p>Nope, this is not possible. Pod-manifests is intended to be seen as immutable and pods as disposable resources.</p>
</blockquote>
<p>Look at the definition of <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">pods</a>:</p>
<blockquote>
<p><em>Pods</em> are the smallest deployable units of computing that you can create and manage in Kubernetes.</p>
<p>A <em>Pod</em> (as in a pod of whales or pea pod) is a group of one or more <a href="https://kubernetes.io/docs/concepts/containers/" rel="nofollow noreferrer">containers</a>, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.</p>
</blockquote>
<p>However, you can dynamically create new storage volumes. Kubernetes supports <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Volume Provisioning</a>:</p>
<blockquote>
<p>Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer"><code>PersistentVolume</code> objects</a> to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users.</p>
</blockquote>
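<p>As a small illustration of dynamic provisioning, a PersistentVolumeClaim that requests storage from an existing StorageClass could look like the sketch below (the StorageClass name <code>standard</code> is only a placeholder for whatever your cluster provides):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>A new pod (or a new revision of your workload) can then mount this claim; the backing volume is created on demand by the provisioner behind the StorageClass.</p>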
| Mikołaj Głodziak |
<p>I'm trying to migrate our Hashicorp Vault standalone with storage type <code>file</code> running in our RKE2 cluster to a Hashicorp Vault HA with storage type <code>raft</code>. But still getting some issues. <a href="https://artifacthub.io/packages/helm/hashicorp/vault" rel="nofollow noreferrer">We're using the Helm chart version 0.22.0</a>. So these are the steps I followed:</p>
<ol>
<li>Create a temporary Vault Raft running in RKE2 with 1 replica (don't initialize vault)</li>
<li><code>Exec</code> into the old Vault container with the storage type <code>file</code></li>
<li>Go to the <code>/vault/</code> folder and create a <code>raft</code> folder in it</li>
<li>Run the command <code>vault operator migrate --config migrate.hcl</code>
The migrate.hcl file looks like this:</li>
</ol>
<pre><code>storage_source "file" {
path = "/vault/data/"
}
storage_destination "raft" {
path = "/vault/raft/"
node_id = "vault-raft-0"
}
cluster_addr="https://127.0.0.1:8201"
</code></pre>
<ol start="5">
<li>The migration completed and created a <code>vault.db</code> file in <code>/vault/raft/</code> and a <code>raft.db</code> file in <code>/vault/raft/raft/</code>, together with an empty folder called <code>snapshots</code>.</li>
<li>Then I copied this whole <code>/vault/raft/</code> folder to my local PC and copied it again into the temporary Vault Raft container. It has the same data storage mount path, so: <code>/vault/raft/</code></li>
<li>After copying the files I re-deployed the temporary Vault Raft (since the PVC won't be deleted) and checked that it still has the copied <code>.db</code> files in it.</li>
<li>In the end I tried to unseal it, but after running the third command it returns the following message: <code>Error unsealing: context deadline exceeded</code></li>
</ol>
<p>Am I doing something completely wrong?</p>
| Lucas Scheepers | <p>Maybe it's a little bit late, but the flow that worked for me looks like this:</p>
<ol>
<li>create migrate.hcl with content :</li>
</ol>
<pre><code>storage_source "file" {
path = "/vault/data/"
}
storage_destination "raft" {
path = "/vault/data/"
}
cluster_addr = "https://vault-0.vault-internal:8201"
</code></pre>
<ol start="2">
<li>Then run <code>vault operator migrate -config=migrate.hcl</code></li>
<li>Uninstall your previous deployment.</li>
<li>Install the deployment with Raft enabled.</li>
<li>On <code>vault-0</code>, run <code>vault operator unseal</code></li>
<li>On the other pods, run <code>vault operator raft join http://vault-0.vault-internal:8200</code> and <code>vault operator unseal</code>.</li>
</ol>
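<p>For reference, a sketch of how the last two steps could be run from outside the pods (the pod names assume the default Helm chart naming):</p>
<pre><code># unseal the first node
kubectl exec -ti vault-0 -- vault operator unseal

# join the remaining nodes to the Raft cluster, then unseal them
kubectl exec -ti vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
kubectl exec -ti vault-1 -- vault operator unseal
</code></pre>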
<p>This worked for me.</p>
| Eugene |
<p>I'm trying to dockerize a multilayered .NET API project and then deploy it, together with SQL Server, to Kubernetes. After creating a k8s folder for the .yaml files, I encountered a problem.</p>
<p>My files look like this:</p>
<p><img src="https://i.stack.imgur.com/2PI5y.png" alt="enter image description here" /></p>
<p>And even though I've run the same commands for every .yaml file in the k8s folder, when I run <code>kubectl get services</code> I can see only two of them, <strong>dotnetapp-mssql-cluster-ip-service</strong> and <strong>dotnetapp-server-cluster-ip-service</strong>.
<img src="https://i.stack.imgur.com/XdqYZ.png" alt="enter image description here" /></p>
<p>When I try to run <code>kubectl create</code> or <code>kubectl apply</code> for the other files that I can't see in the services list, it says <strong>unchanged</strong>.
<img src="https://i.stack.imgur.com/ZRHcX.png" alt="enter image description here" /></p>
<p>How can I see all of them when I run <code>kubectl get services</code>?</p>
| Ecem Ozturk | <p>It seems like you have created a PVC, 2 services, 2 deployments, and an ingress, based on your re-running <code>kubectl apply -f WebAPI/k8s/ingress-service.yaml</code> and Kubernetes telling you that <code>ingress.networking.k8s.io/dotnetapp</code> is unchanged.</p>
<p>Try running <code>kubectl get ingress</code>. Also remember that resources are not named after the files, but after the property <code>name</code> set in <code>metadata</code> within each resource. So you should see an <code>ingress</code> called <code>dotnetapp</code> when running that command.</p>
<p>Otherwise it seems you can see your other 2 services as expected using <code>kubectl get services</code>?</p>
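<p>As a quick way to list everything at once, you could ask kubectl for several resource types in a single call, for example:</p>
<pre><code>kubectl get deployments,services,ingress,pvc
# or, for the most common workload-related types in one go:
kubectl get all
</code></pre>
<p>Note that <code>kubectl get all</code> does not cover every resource type (Ingress and PVC, for instance, are not included), which is why listing the types explicitly is often clearer.</p>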
| clarj |
<p>I want to route through Istio virtual Service to my microservice. When I use <code>@RequestParam</code> based input in prefix or even in exact it throws <code>404</code> for <code>/api/cities/{city_id}/tours</code> but the rest works fine.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: app
namespace: nsapp
spec:
gateways:
- app-gateway
hosts:
- app.*.*.*
http:
- match:
- uri:
prefix: "/api/cities/{city_id}/tours"
- uri:
prefix: "/api/countries/{country_id}/cities"
- uri:
prefix: "/api/countries"
route:
- destination:
host: appservice
port:
number: 9090
</code></pre>
| N K Shukla | <p>This fragment</p>
<pre class="lang-yaml prettyprint-override"><code> - uri:
prefix: "/api/cities/{city_id}/tours"
- uri:
prefix: "/api/countries/{country_id}/cities"
</code></pre>
<p>will be taken literally. The <code>{city_id}</code> and <code>{country_id}</code> will not be replaced with your custom ID. At this point, istio will look for a prefix that reads literally <code>/api/cities/{city_id}/tours</code> or <code>/api/countries/{country_id}/cities</code>, which doesn't exist (you get error 404). If you want to match the expression to your custom ID you have to use a regular expression. Look at <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#StringMatch" rel="nofollow noreferrer">this doc</a>. There you will find information about the capabilities of the StringMatch: <code>exact</code>, <code>prefix</code> or <code>regex</code>. You can find the syntax of the regular expressions used in istio <a href="https://github.com/google/re2/wiki/Syntax" rel="nofollow noreferrer">here</a>.</p>
<p>Summary:
You should change <code>prefix</code> to <code>regex</code> and then create your own regular expression to match your custom ID. Example:</p>
<pre class="lang-yaml prettyprint-override"><code> - uri:
regex: "/api/cities/[a-zA-Z]+/tours"
- uri:
regex: "/api/countries/[a-zA-Z]+/cities"
</code></pre>
<p>In my example, there are only letters (upper or lower case) in the ID. Here you have to create your own regular expression based on <a href="https://github.com/google/re2/wiki/Syntax" rel="nofollow noreferrer">this documentation</a>.</p>
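<p>For instance, if the IDs in your paths are purely numeric (an assumption about your ID format), the match could be sketched like this:</p>
<pre class="lang-yaml prettyprint-override"><code>    - uri:
        regex: "/api/cities/[0-9]+/tours"
    - uri:
        regex: "/api/countries/[0-9]+/cities"
</code></pre>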
| Mikołaj Głodziak |
<p>I have a Kubernetes cluster in Google Cloud. Due to a resource limit, I could not run an app there that needs a large amount of memory. So I run the app on another cloud machine and use <code>kubectl</code> to forward the service port. This is my <code>kubectl</code> forward script:</p>
<pre><code>#!/usr/bin/env bash
set -u
set -e
set -x
namespace=reddwarf-cache
kubectl config use-context reddwarf-kubernetes
POD=$(kubectl get pod -l app.kubernetes.io/instance=cruise-redis -n ${namespace} -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward ${POD} 6380:6379 -n ${namespace}
</code></pre>
<p>I can connect to the Kubernetes cluster service from the remote server like this: the local app connects to the local machine port and <code>kubectl</code> forwards the connection to the Kubernetes cluster. But sadly the <code>kubectl</code> forward does not stay stable for long; after the app runs for some time, it always starts giving connection refused errors. Is it possible to fix this problem, so that I can connect to the Kubernetes cluster service in a stable way? For example, when I port-forward the Redis connection, it eventually throws this error:</p>
<pre><code>E1216 11:58:43.452204 7756 portforward.go:346] error creating error stream for port 6379 -> 6379: write tcp 172.17.0.16:60112->106.14.183.131:6443: write: broken pipe
Handling connection for 6379
E1216 11:58:43.658372 7756 portforward.go:346] error creating error stream for port 6379 -> 6379: write tcp 172.17.0.16:60112->106.14.183.131:6443: write: broken pipe
Handling connection for 6379
E1216 11:58:43.670151 7756 portforward.go:346] error creating error stream for port 6379 -> 6379: write tcp 172.17.0.16:60112->106.14.183.131:6443: write: broken pipe
Handling connection for 6379
</code></pre>
<p>When I use this command to connect to the Kubernetes cluster's Redis service through the port-forward and execute a command, it shows this error:</p>
<pre><code>➜ ~ redis-cli -h 127.0.0.1 -p 6379 -a 'uoGTdVy3P7'
127.0.0.1:6379> info
Error: Connection reset by peer
</code></pre>
<p>I have read this GitHub <a href="https://github.com/kubernetes/kubernetes/issues/74551" rel="nofollow noreferrer">issue</a>, but it seems no one knows what is happening.</p>
| Dolphin | <p>Following the GitHub <a href="https://github.com/kubernetes/kubernetes/issues/74551" rel="nofollow noreferrer">issue</a> you posted I stumbled upon a solution that might help you.
First, let kubectl decide which host port to use by running:</p>
<pre><code>kubectl port-forward ${POD} :6379 -n ${namespace}
</code></pre>
<p>Then, forward the same port on the host by running:</p>
<pre><code>kubectl port-forward ${POD} 6379:6379 -n ${namespace}
</code></pre>
<p>Another thing you could do is to create a Service that maps your desired port to port 6379 in your redis Pods, example Service file would look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: redis
ports:
- protocol: TCP
port: <port>
targetPort: 6379
</code></pre>
<p>Apply the Service resource. Then, create Ingress backed by a single Service to make redis available from outside the cluster, example file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
spec:
defaultBackend:
service:
name: my-service
port:
number: <port>
</code></pre>
<p>Apply the Ingress resource and get its IP address by running:</p>
<pre><code>kubectl get ingress test-ingress
</code></pre>
<p>After that check your redis connection from another machine using Ingress IP address.</p>
| mdobrucki |
<p>Do I still need to expose the pod via a <code>clusterip</code> service?</p>
<p>There are 3 pods: main, front, api. I need to allow ingress and egress connections to the main pod only from the api and frontend pods. I also created service-main, a service that exposes the main pod on <code>port:80</code>.</p>
<p>I don't know how to test it; I tried:</p>
<pre><code>k exec main -it -- sh
netcan -z -v -w 5 service-main 80
</code></pre>
<p>and</p>
<pre><code>k exec main -it -- sh
curl front:80
</code></pre>
<p>The main.yaml pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
app: main
item: c18
name: main
spec:
containers:
- image: busybox
name: main
command:
- /bin/sh
- -c
- sleep 1d
</code></pre>
<p>The front.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
app: front
name: front
spec:
containers:
- image: busybox
name: front
command:
- /bin/sh
- -c
- sleep 1d
</code></pre>
<p>The api.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
app: api
name: api
spec:
containers:
- image: busybox
name: api
command:
- /bin/sh
- -c
- sleep 1d
</code></pre>
<p>The main-to-front-networkpolicy.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: front-end-policy
spec:
podSelector:
matchLabels:
app: main
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: front
ports:
- port: 8080
egress:
- to:
- podSelector:
matchLabels:
app: front
ports:
- port: 8080
</code></pre>
<p>What am I doing wrong? Do I still need to expose the main pod via a service? Shouldn't the network policy take care of this already?</p>
<p>Also, do I need to write <code>containerPort:80</code> in the main pod? How do I test connectivity and ensure ingress/egress works only between the main pod and the api and front pods?</p>
<p>I tried the lab from a CKAD prep course; it had 2 pods, secure-pod and web-pod. There was an issue with connectivity, and the solution was to create a network policy and test it using netcat from inside the web-pod's container:</p>
<pre><code>k exec web-pod -it -- sh
nc -z -v -w 1 secure-service 80
connection open
</code></pre>
<p>UPDATE: ideally I want answers to these:</p>
<ul>
<li><p>a clear explanation of the difference between a <code>service</code> and a <code>networkpolicy</code>.
If both a service and a netpol exist, what is the order of evaluation that the traffic/request goes through? Does it first go through the netpol and then the service, or vice versa?</p>
</li>
<li><p>if I want the front and api pods to send/receive traffic to main, do I need separate services exposing the front and api pods?</p>
</li>
</ul>
| ERJAN | <p>Network policies and services are two different and independent Kubernetes resources.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service is</a>:</p>
<blockquote>
<p>An abstract way to expose an application running on a set of <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pods</a> as a network service.</p>
</blockquote>
<p>Good explanation <a href="https://kubernetes.io/docs/concepts/services-networking/service/#motivation" rel="nofollow noreferrer">from the Kubernetes docs</a>:</p>
<blockquote>
<p>Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pods</a> are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> to run your app, it can create and destroy Pods dynamically.</p>
<p>Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.</p>
<p>This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?</p>
<p>Enter <em>Services</em>.</p>
</blockquote>
<p>Also another good explanation <a href="https://stackoverflow.com/questions/56896490/what-exactly-kubernetes-services-are-and-how-they-are-different-from-deployments/56896662#56896662">in this answer</a>.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/#using-pods" rel="nofollow noreferrer">For production you should use</a> a <a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow noreferrer">workload resources</a> instead of creating pods directly:</p>
<blockquote>
<p>Pods are generally not created directly and are created using workload resources. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">Working with Pods</a> for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset" rel="nofollow noreferrer">DaemonSet</a></li>
</ul>
</blockquote>
<p>And use services to make requests to your application.</p>
<p>Network policies <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">are used to control traffic flow</a>:</p>
<blockquote>
<p>If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.</p>
</blockquote>
<p>Network policies target pods, not services (an abstraction). Check <a href="https://stackoverflow.com/questions/66423222/kubernetes-networkpolicy-limit-egress-traffic-to-service">this answer</a> and <a href="https://stackoverflow.com/a/55527331/16391991">this one</a>.</p>
<p>Regarding your examples - your network policy is correct (as I tested it below). The problem is that your cluster <a href="https://stackoverflow.com/questions/65017380/kubernetes-network-policy-deny-all-policy-not-blocking-basic-communication/65022827#65022827">may not be compatible</a>:</p>
<blockquote>
<p>For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. <a href="https://docs.projectcalico.org/getting-started/kubernetes/" rel="nofollow noreferrer">Project Calico</a> or <a href="https://cilium.io/" rel="nofollow noreferrer">Cilium</a> are plugins that do so. This is not the default when creating a cluster!</p>
</blockquote>
<p>I tested this on a kubeadm cluster with the Calico plugin. I created similar pods as you did, but I changed the <code>container</code> part:</p>
<pre><code>spec:
containers:
- name: main
image: nginx
command: ["/bin/sh","-c"]
args: ["sed -i 's/listen .*/listen 8080;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
ports:
- containerPort: 8080
</code></pre>
<p>So NGINX app is available at the <code>8080</code> port.</p>
<p>Let's check pods IP:</p>
<pre><code>user@shell:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api 1/1 Running 0 48m 192.168.156.61 example-ubuntu-kubeadm-template-2 <none> <none>
front 1/1 Running 0 48m 192.168.156.56 example-ubuntu-kubeadm-template-2 <none> <none>
main 1/1 Running 0 48m 192.168.156.52 example-ubuntu-kubeadm-template-2 <none> <none>
</code></pre>
<p>Let's <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">exec into the running <code>main</code> pod</a> and try to make a request to the <code>api</code> pod:</p>
<pre class="lang-sh prettyprint-override"><code>root@main:/# curl 192.168.156.61:8080
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
</code></pre>
<p>It is working.</p>
<p>After applying your network policy:</p>
<pre class="lang-sh prettyprint-override"><code>user@shell:~$ kubectl apply -f main-to-front.yaml
networkpolicy.networking.k8s.io/front-end-policy created
user@shell:~$ kubectl exec -it main -- bash
root@main:/# curl 192.168.156.61:8080
...
</code></pre>
<p>Not working anymore, so it means that network policy is applied successfully.</p>
<p>Nice option to get more information about applied network policy is to run <code>kubectl describe</code> command:</p>
<pre><code>user@shell:~$ kubectl describe networkpolicy front-end-policy
Name: front-end-policy
Namespace: default
Created on: 2022-01-26 15:17:58 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=main
Allowing ingress traffic:
To Port: 8080/TCP
From:
PodSelector: app=front
Allowing egress traffic:
To Port: 8080/TCP
To:
PodSelector: app=front
Policy Types: Ingress, Egress
</code></pre>
| Mikolaj S. |
<p>I am attempting to setup a multi-node k8s cluster as per this <a href="https://techviewleo.com/deploy-kubernetes-cluster-on-debian-using-k0s/" rel="nofollow noreferrer">kOS Setup Link</a>, but I face the error below when I try to join one of the nodes to the master node:</p>
<pre><code> k0s token create --role=worker
WARN[2022-01-12 13:55:31] no config file given, using defaults
Error: failed to read cluster ca certificate from /var/lib/k0s/pki/ca.crt: open
/var/lib/k0s/pki/ca.crt: no such file or directory. check if the control plane is
initialized on this node
</code></pre>
<p>I verified from the control node that this file does exist, however:</p>
<pre><code>cd /var/lib/k0s/pki
ls
</code></pre>
<p>I am new to k8s setup from scratch, and a bit unsure which configuration item I need to fix (and where between master and worker nodes?).
My research shows that <em><strong>A token is required when joining a new worker node to the Kubernetes cluster -- This token is generated from the control node</strong></em>. It also says that <em><strong>When you bootstrap a cluster with kubeadm, a token is generated which expires after 24 hours</strong></em></p>
<p>When I try to check for the existence of a token on master node I get:</p>
<pre><code> kubeadm tokens list
-bash: kubeadm: command not found
</code></pre>
<p>I am unsure however if this is correct ( Is k0s <em><strong>even</strong></em> bootstrapped with <code>kubeadm</code>??).</p>
<p>However using k0s CLI syntax I can see that they are no tokens on the master:</p>
<pre><code>k0s token list
No k0s join tokens found
</code></pre>
<p>My question:</p>
<ol>
<li>What do I need to do about the file that is reported as missing?</li>
<li>Is this error (possibly) related to the token issue above, and if so, do I first need to regenerate the token on the master node?</li>
<li>I have encountered multiple fixes at this <a href="https://github.com/kubernetes/kubernetes/issues/53889" rel="nofollow noreferrer">Github Issue</a> but I am not sure if any of them applies to my issue. Last thing I need is to break the current setup before I have even finished the cluster setup.</li>
</ol>
<p>Environment
Master node : Debian 10 Buster
Worker node : Debian 10 Buster</p>
| Golide | <p>I've got the same error as you when I tried to run <code>k0s token create --role=worker</code> on the <strong>worker</strong> node.</p>
<p>You need to run <a href="https://techviewleo.com/deploy-kubernetes-cluster-on-debian-using-k0s/" rel="nofollow noreferrer">this command on the <strong>master</strong> node</a>:</p>
<blockquote>
<p>Next, you need to <strong>create a join token</strong> that the worker node will use to join the cluster. This token is generated from the control node.</p>
</blockquote>
<p>First you need to run <code>k0s token create --role=worker</code> on the <strong>master</strong> node to get a token and later <a href="https://techviewleo.com/deploy-kubernetes-cluster-on-debian-using-k0s/" rel="nofollow noreferrer">use this token on the <strong>worker</strong> node</a>:</p>
<blockquote>
<p>On the worker node, issue the command below.</p>
<pre><code>k0s worker <login-token>
</code></pre>
</blockquote>
<p>So:</p>
<ul>
<li>generate a token on the <strong>master</strong> using <code>k0s token create --role=worker</code></li>
<li>use this token on the <strong>worker</strong> using <code>k0s worker <login-token> </code></li>
</ul>
<p>In my case I also needed to add <code>sudo</code> before both commands, so they looked like <code>sudo k0s token create --role=worker</code> and <code>sudo k0s worker <login-token> </code></p>
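<p>Putting it together, the whole flow could be sketched like this (the token file name is arbitrary):</p>
<pre><code># on the master/controller node
sudo k0s token create --role=worker > worker-token

# copy worker-token to the worker node, then on the worker
sudo k0s worker "$(cat worker-token)"
</code></pre>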
<p>You wrote:</p>
<blockquote>
<p>I am unsure however if this is correct ( Is k0s <em><strong>even</strong></em> bootstrapped with kubeadm ?? ).</p>
</blockquote>
<p>No, they are two different and independent solutions.</p>
| Mikolaj S. |
<p>Assume an application consisting of different web services implemented with ASP.NET Core, each of which is deployed as a Pod in a Kubernetes cluster (AKS to be exact). Now, suppose I want to secure the cluster-internal communication between those services via HTTPS. This requires me to:</p>
<ol>
<li>get TLS certificates for each of the services,</li>
<li>have the Pods trust those TLS certificates (or, rather, the signing CA), and</li>
<li>rotate the certificates when their validity period ends.</li>
</ol>
<p>What I've already learned:</p>
<ul>
<li><a href="https://stackoverflow.com/a/48168572/62838">This StackOverflow answer</a> indicates that this adds a lot of complexity and discourages going that route. Nevertheless, I'd like to know what such a setup would comprise.</li>
<li>Projects such as <em>LettuceEncrypt</em> allow to automate most of the steps 1 and 3 above. You only need a CA that implements the ACME protocol.</li>
<li>The <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">Kubernetes docs ("Managing TLS in a cluster")</a> mention a Kubernetes API which uses a <em>"protocol that is similar to the ACME draft"</em> to manage CSRs.</li>
<li>However, in the docs, they're doing all the work manually (setting up a local CA, issuing CSRs manually, signing the CSRs manually using the local CA, all via the cfssl tools) that I'm wondering why on earth I would actually want to use those APIs. What are they doing for me besides storing CSRs as Kubernetes resources?</li>
<li>The docs also mention that Kubernetes clusters already include a root CA that one could use for the purpose of issuing TLS certificates for Pods, but don't explain <em>how</em> one would do so: "<em>It is possible to configure your cluster to use the cluster root CA for this purpose, but you should never rely on this.</em>"</li>
<li>The quote above seems to suggest and warn against using the cluster root CA at the same time. Why the warning, wouldn't it simplify things a lot if we could use an existing CA?</li>
</ul>
<p>In my mind, this could be really simple: Just set up Kestrel with <em>LettuceEncrypt</em>, configure it against the cluster root CA, and have all the Pods trust that CA (by importing the corresponding certificate as a trusted root).</p>
<p>Is it that simple? Or what am I missing?</p>
<p><strong>Update 2022-07-26:</strong> Note that I need to support Windows containers.</p>
| Fabian Schmied | <p>For this purpose you should use <a href="https://www.cloudflare.com/en-gb/learning/access-management/what-is-mutual-tls/" rel="nofollow noreferrer">mTLS</a>. To achieve this with an AKS cluster you can easily activate the <a href="https://learn.microsoft.com/en-us/azure/aks/open-service-mesh-about" rel="nofollow noreferrer">Open Service Mesh Add-On</a>. With OSM enabled, you can encrypt communications between service endpoints deployed in the cluster. The cool thing is that the OSM add-on integrates with <a href="https://learn.microsoft.com/en-us/azure/aks/open-service-mesh-integrations#metrics-observability" rel="nofollow noreferrer">Azure Monitor</a>.</p>
<p><a href="https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/#https-ingress-mtls-and-tls" rel="nofollow noreferrer">Here</a> is an example of doing mTLS with ingress-nginx:</p>
<blockquote>
<p>To proxy connections to HTTPS backends, we will configure the Ingress
and IngressBackend configurations to use https as the backend
protocol, and have OSM issue a certificate that Nginx will use as the
client certificate to proxy HTTPS connections to TLS backends. The
client certificate and CA certificate will be stored in a Kubernetes
secret that Nginx will use to authenticate service mesh backends.</p>
</blockquote>
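<p>For reference, enabling the add-on on an existing cluster is a one-liner (resource group and cluster name below are placeholders):</p>
<pre><code>az aks enable-addons \
    --addons open-service-mesh \
    --resource-group myResourceGroup \
    --name myAKSCluster
</code></pre>
<p>Since you mentioned needing Windows containers, check in the add-on documentation which OSM features are supported on Windows node pools before committing to this approach.</p>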
| Philip Welz |
<p>I'm trying to deploy RStudio community on Kubernetes.</p>
<p>I'd like to use Helm in order to facilitate the process (I wouldn't really know where to start if I had to specify the different manifests myself).
I've found <a href="https://artifacthub.io/packages/helm/dsri-helm-charts/rstudio/0.1.21" rel="nofollow noreferrer">the dsri helm chart</a>, but of course since it is made for <a href="https://www.okd.io" rel="nofollow noreferrer">okd</a> I can't install it on regular k8 using</p>
<pre><code>helm install rstudio dsri/rstudio \
--set serviceAccount.name=anyuid \
--set service.openshiftRoute.enabled=true \
--set image.repository=ghcr.io/maastrichtu-ids/rstudio \
--set image.tag=latest \
--set storage.mountPath=/home/rstudio \
--set password=changeme
</code></pre>
<p>Is there any way to convert this chart to work on regular Kubernetes? I could switch to okd although I don't really see the benefit of it.</p>
| gaut | <p>You can make it work by disabling creation of the OpenShift-specific resources, in this case the openshiftRoute. In my case the command looks as follows:</p>
<pre><code>helm install rstudio dsri/rstudio
--set serviceAccount.name=anyuid
--set service.openshiftRoute.enabled=false
--set image.repository=ghcr.io/maastrichtu-ids/rstudio
--set image.tag=latest
--set storage.mountPath=/home/rstudio
--set password=changeme
--set service.openshift.tls.enabled=false
--set serviceAccount.create=true
--set storage.enabled=false
</code></pre>
<p>I also had to set <code>serviceAccount.create=true</code> because the Pod was stuck in Pending state without it, and <code>storage.enabled=false</code> as I don't have any PersistentVolumes configured, but change those according to your setup.</p>
| mdobrucki |
<p>I use a private online server to set up a Jenkins environment through Kubernetes.</p>
<p>I have the following service file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: jenkins
namespace: jenkins
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
selector:
app: jenkins
</code></pre>
<p>It works, meaning that I can wget the Jenkins pod from my server.
However, I cannot reach my service from my local computer's web browser.</p>
<p>To do so, I have to type the following command:</p>
<pre><code>kubectl port-forward -n jenkins service/jenkins 8080:8080 --address=<localServerIp>
</code></pre>
<p>I have read that port-forward is debug only (<a href="https://stackoverflow.com/questions/61032945/difference-between-kubectl-port-forwarding-and-nodeport-service/61055177#61055177">Difference between kubectl port-forwarding and NodePort service</a>).
But I cannot find how to configure my service to be visible from the internet. I want the equivalent of the port-forward rule for a persistent port-forward.</p>
| Pierre Vittet | <p>The configuration you provided should be fine, but you would have to configure additional firewall rules on the nodes to make it possible to connect to your Jenkins Service on <code>NodeIP:NodePort</code> externally.</p>
<p>There are certain <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">considerations</a> when provisioning bare-metal clusters, because you have to configure your own load balancer to give your Services externally available IP addresses. Cloud environments use their own load balancer making this easier. You might configure your own load balancer, then create a <code>LoadBalancer</code> type of Service and connect to your app that way. Check <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">different types of Services here</a></p>
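<p>As a rough sketch, a <code>LoadBalancer</code> Service for your Jenkins deployment could look like this (it reuses the <code>app: jenkins</code> selector and port 8080 from your manifest; on bare metal you would still need something like MetalLB behind it to hand out the external IP):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins-lb
  namespace: jenkins
spec:
  type: LoadBalancer
  selector:
    app: jenkins
  ports:
    - port: 80
      targetPort: 8080
</code></pre>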
<p>Another thing you can try, although not recommended, is to make your <code>kubectl port-forward</code> command persistent. You can set the <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">kubelet parameter</a> <code>streaming-connection-idle-timeout</code> to 0, so the forwarding connection is never closed due to inactivity. If you don't want to change any configuration you can run:</p>
<p><code>while true; do kubectl port-forward -n jenkins service/jenkins 8080:8080 --address=<localServerIp>; done</code></p>
<p>Some links you might find useful: <a href="https://stackoverflow.com/questions/47484312/kubectl-port-forwarding-timeout-issue">similar case</a>, <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">exposing apps in kubernetes</a>.</p>
| mdobrucki |
<p>I am using the mssql docker image (Linux) for sql server 2019. The default user is not <code>root</code> but <code>mssql</code>.
I need to perform some operations as <code>root</code> inside the container:</p>
<pre><code>docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
</code></pre>
<p>Then I start the shell as <code>root</code>:</p>
<pre><code>docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
</code></pre>
<p>This works.</p>
<p>Now I need to do this in a container deployed in an AKS cluster</p>
<pre><code>kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
</code></pre>
<p>as expected. But then:</p>
<pre><code>kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
</code></pre>
<p>So when the container is in an AKS cluster, starting a shell as <code>root</code> doesn't work.</p>
<p>I then try to ssh into the node and use docker from inside:</p>
<pre><code>kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
</code></pre>
<p>Looks like a Kubernetes cluster node doesn't have docker installed!</p>
<p>Any clues?</p>
<p><strong>EDIT</strong></p>
<p>The image I used locally and in Kubernetes is exactly the same,</p>
<pre><code>mcr.microsoft.com/mssql/server:2019-latest untouched
</code></pre>
| Franco Tiveron | <p><a href="https://stackoverflow.com/users/10008173/david-maze" title="74,731 reputation">David Maze</a> has well mentioned in the comment:</p>
<blockquote>
<p>Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?</p>
</blockquote>
<p>Generally, if you want to change something permanently you have to create a new image. Everything you described behaved exactly as it was supposed to. First you exec'd into the container in docker and then logged in as root; in Kubernetes, however, it is a completely different container instance, even if it was started from the same image. Second, even if you made a change there, it would only exist until the container dies. If you want to modify something permanently, you have to create your own image with all the components and the configuration you need. For more information look at <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">pod lifecycle</a>.</p>
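<p>To illustrate that last point, here is a minimal sketch of such a custom image, based on the image name from your question (the installed package is just a placeholder for whatever you actually need to do as root):</p>
<pre><code># Build your own image instead of patching a running pod.
FROM mcr.microsoft.com/mssql/server:2019-latest
USER root
# placeholder step: install tools / apply configuration that requires root (vim is only an example)
RUN apt-get update && apt-get install -y --no-install-recommends vim && rm -rf /var/lib/apt/lists/*
# drop back to the image's default non-root user
USER mssql
</code></pre>
<p>Push that image to your registry and reference it in the AKS deployment; the change then survives pod restarts and rescheduling.</p>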
| Mikołaj Głodziak |
<p>Per Flink's doc, we can deploy a standalone Flink cluster on top of Kubernetes, using Flink’s standalone deployment,
or deploy Flink on Kubernetes using native Kubernetes deployments.</p>
<p>The document says</p>
<blockquote>
<p>We generally recommend <strong>new</strong> users to deploy Flink on Kubernetes using native Kubernetes deployments.</p>
</blockquote>
<p>Is it because native Kubernetes is easier to get started with, or is it because standalone mode is kind of legacy?</p>
<p>In native Kubernetes mode, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources. While in standalone mode, task managers have to be provisioned manually.</p>
<p>It sounds to me that native Kubernetes mode is a better choice.</p>
| ROSS XIE | <p>Posted community wiki based on other answers - <a href="https://stackoverflow.com/questions/63270800/how-different-is-the-flink-deployment-on-kubernetes-and-native-kubernetes/63272753#63272753">David Anderson answer</a> and <a href="https://stackoverflow.com/questions/67142140/flink-on-kubernetes/67154143#67154143">austin_ce answer</a>. Feel free to expand it.</p>
<hr />
<p>Good explanation from the <a href="https://stackoverflow.com/questions/63270800/how-different-is-the-flink-deployment-on-kubernetes-and-native-kubernetes/63272753#63272753">David Anderson answer</a>:</p>
<p><a href="https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/resource-providers/standalone/kubernetes/" rel="nofollow noreferrer">Standalone mode</a>:</p>
<blockquote>
<p>In a <a href="https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html" rel="nofollow noreferrer">Kubernetes session or per-job deployment</a>, Flink has no idea it's running on Kubernetes. In this mode, Flink behaves as it does in any standalone deployment (where there is no cluster framework available to do resource management). Kubernetes just happens to be how the infrastructure was created, but as far as Flink is concerned, it could have been bare metal. You will have to arrange for kubernetes to create the infrastructure that you will have configured Flink to expect.</p>
</blockquote>
<p><a href="https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/resource-providers/native_kubernetes/" rel="nofollow noreferrer">Native mode</a>:</p>
<ul>
<li>Session deployment
<blockquote>
<p>In a <a href="https://ci.apache.org/projects/flink/flink-stable/ops/deployment/native_kubernetes.html" rel="nofollow noreferrer">Native Kubernetes session deployment</a>, Flink uses its <code>KubernetesResourceManager</code>, which submits a description of the cluster it wants to the Kubernetes ApiServer, which creates it. As jobs come and go, and the requirements for task managers (and slots) go up and down, Flink is able to obtain and release resources from kubernetes as appropriate.</p>
</blockquote>
</li>
<li>Application mode
<blockquote>
<p>In <a href="https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/#application-mode" rel="nofollow noreferrer">Application Mode</a> (<a href="https://flink.apache.org/news/2020/07/14/application-mode.html" rel="nofollow noreferrer">blog post</a>) (<a href="https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/native_kubernetes.html#flink-kubernetes-application" rel="nofollow noreferrer">details</a>) you end up with Flink running as a kubernetes application, which will automatically create and destroy cluster components as needed for the job(s) in one Flink application.</p>
</blockquote>
</li>
</ul>
<p>The native mode is recommended <a href="https://stackoverflow.com/questions/67142140/flink-on-kubernetes/67154143#67154143">because it is just simpler</a>; I would not say the standalone mode is legacy:</p>
<blockquote>
<p>The <code>Native</code> mode is the current recommendation for starting out on Kubernetes as it is the simplest option, like you noted. In Flink 1.13 (to be released in the coming weeks), there is added support for <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/#pod-template" rel="nofollow noreferrer">specifying Pod templates</a>. One of the drawbacks to this approach is its limited ability to integrate with CI/CD.</p>
</blockquote>
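<p>For a concrete feel of the native mode, starting a session cluster is roughly a one-liner from the Flink distribution (a sketch based on the linked native Kubernetes docs - verify the exact script name and flags for your Flink version):</p>
<pre><code># requires a working kubeconfig; Flink talks to the Kubernetes API server directly
./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
</code></pre>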
| Mikolaj S. |
<p>Such as <code>system:masters</code>、<code>system:anonymous</code>、<code>system:unauthenticated</code>.</p>
<p>Is there a way to list all of the built-in system groups (i.e. the ones not created externally) - for example via a kubectl command or a documented list?</p>
<p>I searched the Kubernetes documentation but didn't find a list or a way to get it.</p>
| FlagT | <p><strong>There is no built-in command to list all the default user groups of a Kubernetes cluster.</strong></p>
<p>However, you can try to work around this in several ways:</p>
<ul>
<li>You can create your custom script (i.e. in Bash) based on the <code>kubectl get clusterrole</code> / <code>kubectl get clusterrolebindings</code> commands (see the shell sketch after the plugin note below).</li>
<li>You can try installing some <a href="https://krew.sigs.k8s.io/plugins/" rel="nofollow noreferrer">plugins</a>. Plugin <a href="https://github.com/corneliusweig/rakkess/blob/master/README.md" rel="nofollow noreferrer">rakkess</a> could help you:</li>
</ul>
<blockquote>
<p>Have you ever wondered what access rights you have on a provided kubernetes cluster? For single resources you can use <code>kubectl auth can-i list deployments</code>, but maybe you are looking for a complete overview? This is what <code>rakkess</code> is for. It lists access rights for the current user and all server resources, similar to <code>kubectl auth can-i --list</code>.</p>
</blockquote>
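<p>As a rough illustration of the first option, a small shell sketch (it inspects ClusterRoleBindings rather than ClusterRoles, because group subjects live on the bindings; it requires <code>jq</code> and only shows groups that are actually referenced by some binding):</p>
<pre><code># list the distinct Group subjects referenced by ClusterRoleBindings,
# e.g. system:masters, system:authenticated, system:bootstrappers, ...
kubectl get clusterrolebindings -o json \
  | jq -r '.items[].subjects[]? | select(.kind == "Group") | .name' \
  | sort -u
</code></pre>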
<p>See also more information about:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/" rel="nofollow noreferrer">kubelet authentication / authorization</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#anonymous-requests" rel="nofollow noreferrer">anonymous requests</a></li>
</ul>
| Mikołaj Głodziak |
<p>I'm using a simple pattern where each Node has one Pod in it, and that Pod is controlled by a Deployment with replicas set to one.</p>
<p>The Deployment is there to ensure the Pod restarts when it gets evicted by <code>DiskPressureEviction</code>.
The problem I'm facing is that the Deployment retries restarting the Pod too fast. As the Pod is pinned to a specific Node that hasn't cleared <code>DiskPressure</code> yet, restarting the Pod fails repeatedly before the Node is ready to accept a new Pod:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
deployment-adid-7bb998fccc-4v9dx 0/1 Evicted 0 6m17s
deployment-adid-7bb998fccc-59kvv 0/1 Evicted 0 6m20s
deployment-adid-7bb998fccc-59zzl 0/1 Evicted 0 6m20s
deployment-adid-7bb998fccc-dmm9k 0/1 Evicted 0 6m16s
deployment-adid-7bb998fccc-gn59z 0/1 Evicted 0 6m20s
deployment-adid-7bb998fccc-j4v25 0/1 Evicted 0 6m18s
deployment-adid-7bb998fccc-mw4ps 0/1 Evicted 0 6m20s
deployment-adid-7bb998fccc-n7krq 0/1 Evicted 0 18h
deployment-adid-7bb998fccc-rm4tr 0/1 Evicted 0 6m18s
deployment-adid-7bb998fccc-vn44q 0/1 ContainerCreating 0 6m15s
</code></pre>
<p>Here, 8 Pods are created and evicted in like 5 seconds before 9th get accepted by the designated Node.</p>
<p>While the last Pod finally becomes <code>Running</code>, I don't like producing garbage Pods. It would be nice if the Pod could wait for the Node to become ready; if that's impossible, I suppose the recreation could simply be delayed - presumably by specifying a <code>waitTime</code> before recreating a Pod, or a <code>restartInterval</code> that says at which interval the Deployment should try to recreate the Pod.</p>
<p>So how can I set this kind of control in Deployment's spec?</p>
<p>ADDITION:</p>
<p>With excluding meaningless labels, Deployment spec is something like this:</p>
<pre><code>deployment_template = {
'apiVersion': 'apps/v1',
'kind': 'Deployment',
'metadata': {
'name': 'first',
},
'spec': {
'replicas': '1',
'selector': {
'matchLabels': {
"podName" : "first"
}
},
'template': {
'metadata': {
'labels': {
"podName" : "first"
}
},
'spec': {
'nodeSelector': {
"node": "1"
},
'restartPolicy': 'Always',
'hostNetwork': True,
'dnsPolicy': 'ClusterFirstWithHostNet',
'containers': [
{
'name': 'containername',
'image': "somecontainerimage",
'imagePullPolicy': 'Always',
}
]
}
}
}
}
</code></pre>
| 김기영 | <p>First I'd suggest updating to the newest supported Kubernetes version. The maintenance support for version 1.17 that you are using <a href="https://endoflife.date/kubernetes" rel="nofollow noreferrer">ended 11 months ago</a>. <a href="https://kubernetes.io/blog/2021/12/07/kubernetes-1-23-release-announcement/" rel="nofollow noreferrer">The current version (as of today, 15.12.2021) is v1.23</a>. Since Kubernetes v1.18 <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features" rel="nofollow noreferrer">the feature <code>TaintBasedEvictions</code></a> is in stable mode.</p>
<p>Another thing: instead of trying to delay the deployment, which is a workaround rather than best practice, it is better to fix the main issue - the disk pressure eviction you are hitting. You should consider changing the behaviour of your application, or at least try to avoid disk pressure on the node by increasing its storage size.</p>
<p>Anyway, if you want to keep it that way, you may try to set up some additional parameters. You can't delay the deployment itself, but you can change the behaviour of the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet agent</a> on your node.</p>
<hr />
<p>The example below is for Kubernetes version 1.23. Keep in mind that it may differ for version 1.17.</p>
<p>I created a cluster with one master node and one worker node; the pods are only scheduled on the worker node. I am filling up the worker's storage to trigger the <code>node.kubernetes.io/disk-pressure</code> condition. By default the behaviour is similar to yours: many pods are created in the <code>Evicted</code> state, which, worth noting, is totally normal and expected behaviour. They keep being created until the node gets the <code>disk-pressure</code> taint, <a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/" rel="nofollow noreferrer">which by default happens after ~10 seconds</a>:</p>
<blockquote>
<p>nodeStatusUpdateFrequency is the frequency that kubelet computes node status. ... Default: "10s"</p>
</blockquote>
<p>After that time, as you can observe, there are no pods created in the <code>Evicted</code> state. The taint is deleted (i.e. in your case, when the disk storage on the node is back to a proper value) after ~5 min; it is <a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration" rel="nofollow noreferrer">defined by the <code>evictionPressureTransitionPeriod</code> parameter</a>:</p>
<blockquote>
<p>evictionPressureTransitionPeriod is the duration for which the kubelet has to wait before transitioning out of an eviction pressure condition. ... Default: "5m"</p>
</blockquote>
<p>Okay, let's change some configuration by editing the <a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration" rel="nofollow noreferrer">kubelet config file</a> on the worker node - for kubeadm it is located at <code>/var/lib/kubelet/config.yaml</code>.</p>
<p>I will change three parameters:</p>
<ul>
<li>the earlier mentioned <a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration" rel="nofollow noreferrer"><code>evictionPressureTransitionPeriod</code> parameter</a>, set to 120s so the taint will be deleted faster</li>
<li><a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration" rel="nofollow noreferrer"><code>evictionSoft</code></a> to define a soft eviction - in my case it will occur when the worker node has less than 15 GB of storage available</li>
<li><a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration" rel="nofollow noreferrer"><code>evictionSoftGracePeriod</code></a> to define the period after which a pod enters the eviction state if the defined <code>evictionSoft</code> condition persists - in my case 60 seconds</li>
</ul>
<p>The file <code>/var/lib/kubelet/config.yaml</code> - only the changed / added fields:</p>
<pre><code>evictionPressureTransitionPeriod: 120s
evictionSoftGracePeriod:
nodefs.available: 60s
evictionSoft:
nodefs.available: 15Gi
</code></pre>
<p>To sum up - when my node has less than 15 GB of storage available, the pods stay in the Running state for 60 seconds. After that, if the available storage is still below 15 GB, the pods enter the <code>Evicted</code> / <code>Completed</code> state and new pods appear in the <code>Pending</code> state:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-nginx-deployment-6cf77b6d6b-2hr2s 0/1 Completed 0 115m
my-nginx-deployment-6cf77b6d6b-8f8wv 0/1 Completed 0 115m
my-nginx-deployment-6cf77b6d6b-9kpc9 0/1 Pending 0 108s
my-nginx-deployment-6cf77b6d6b-jbx5g 0/1 Pending 0 107s
</code></pre>
<p>After the available storage is higher than 15 GB, it will take 2 minutes to remove the taint and create new pods.</p>
<p>If during these 60 seconds the available storage goes back above 15 GB, no action is taken and the pods stay in the <code>Running</code> state.</p>
<p>If you have any garbage pods running, run this command to delete them:</p>
<pre><code>kubectl get pods | grep -e "ContainerStatusUnknown" -e "Evicted" -e "Completed" -e "Error" | awk '{print $1}' | xargs kubectl delete pod
</code></pre>
<p>Keep in mind that pod eviction may behave differently for different <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">QoS classes</a> and <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass" rel="nofollow noreferrer">priority classes</a> - check this article -> <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction" rel="nofollow noreferrer">Node-pressure Eviction - Pod selection for kubelet eviction</a> for more information.</p>
<p>You should try to monitor exactly how the disk pressure occurs on your node so that you can adjust the kubelet configuration accordingly. Also check these articles:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/" rel="nofollow noreferrer">Node-pressure Eviction</a>.</li>
<li><a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/" rel="nofollow noreferrer">Parameters to configure in kubelet</a></li>
</ul>
| Mikolaj S. |
<p>In k8s, a pod starts with a container named <code>pause</code>.<br />
The pause container helps the other containers share the network namespace.<br />
I know that, but I have a question.</p>
<p>What is the lifecycle of the pause container?<br />
What I want to know is: when a pod gets into <code>CrashLoopBackOff</code> or temporarily doesn't work, does the pause container also stop?</p>
<p>If not, does the pause container maintain its own Linux namespace?</p>
| Togomi | <blockquote>
<p>When a pod gets into <code>CrashLoopBackOff</code> or temporarily doesn't work, does the pause container also stop?</p>
</blockquote>
<p>No. A <a href="https://sysdig.com/blog/debug-kubernetes-crashloopbackoff/" rel="nofollow noreferrer">CrashLoopBackOff</a> is independent of <code>pause</code> containers. I have reproduced this situation in Minikube with the docker driver using the following yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dummy-pod
spec:
containers:
- name: dummy-pod
image: ubuntu
restartPolicy: Always
</code></pre>
<p>Command <code>kubectl get pods</code> returns for me:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
dummy-pod 0/1 CrashLoopBackOff 7 (2m59s ago) 14m
</code></pre>
<p>Then I logged into the node on which the crashed pod exists and ran <code>docker ps</code>:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7985cf2b01ad k8s.gcr.io/pause:3.5 "/pause" 14 minutes ago Up 14 minutes k8s_POD_dummy-pod_default_0f278cd1-6225-4311-98c9-e154bf9b42a3_0
18eeb073fe71 6e38f40d628d "/storage-provisioner" 16 minutes ago Up 16 minutes k8s_storage-provisioner_storage-provisioner_kube-system_5c3cec65-5a2d-4881-aa34-d98e1098f17f_1
b7dd2640584d 8d147537fb7d "/coredns -conf /etc…" 17 minutes ago Up 17 minutes k8s_coredns_coredns-78fcd69978-h28mp_kube-system_f62eec5a-290c-4a42-b488-e1475d7f6ff2_0
d3acb4e61218 36c4ebbc9d97 "/usr/local/bin/kube…" 17 minutes ago Up 17 minutes k8s_kube-proxy_kube-proxy-bf75s_kube-system_39dc64cc-2eab-497d-bf13-b5e6d1dbc9cd_0
083690fe3672 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_coredns-78fcd69978-h28mp_kube-system_f62eec5a-290c-4a42-b488-e1475d7f6ff2_0
df0186291c8c k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-proxy-bf75s_kube-system_39dc64cc-2eab-497d-bf13-b5e6d1dbc9cd_0
06fdfb5eab54 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_storage-provisioner_kube-system_5c3cec65-5a2d-4881-aa34-d98e1098f17f_0
183f6cc10573 aca5ededae9c "kube-scheduler --au…" 17 minutes ago Up 17 minutes k8s_kube-scheduler_kube-scheduler-minikube_kube-system_6fd078a966e479e33d7689b1955afaa5_0
2d032a2ec51d f30469a2491a "kube-apiserver --ad…" 17 minutes ago Up 17 minutes k8s_kube-apiserver_kube-apiserver-minikube_kube-system_4889789e825c65fc82181cf533a96c40_0
cd157b628bc5 6e002eb89a88 "kube-controller-man…" 17 minutes ago Up 17 minutes k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_f8d2ab48618562b3a50d40a37281e35e_0
a2d5608e5bac 004811815584 "etcd --advertise-cl…" 17 minutes ago Up 17 minutes k8s_etcd_etcd-minikube_kube-system_08a3871e1baa241b73e5af01a6d01393_0
e9493a3f2383 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-apiserver-minikube_kube-system_4889789e825c65fc82181cf533a96c40_0
1088a8210eed k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-scheduler-minikube_kube-system_6fd078a966e479e33d7689b1955afaa5_0
f551447a77b6 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_etcd-minikube_kube-system_08a3871e1baa241b73e5af01a6d01393_0
c8414ee790d8 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-controller-manager-minikube_kube-system_f8d2ab48618562b3a50d40a37281e35e_0
</code></pre>
<p>The <code>pause</code> container is independent of the other containers in the pod. It is a container which holds the network namespace for the pod and otherwise does nothing; it doesn't stop even if the pod is in the CrashLoopBackOff state. If the pause container dies, Kubernetes considers the whole pod dead, kills it and reschedules a new one - but that did not happen here.</p>
<p>See also an <a href="https://stackoverflow.com/questions/48651269/what-are-the-pause-containers">explanation of <code>pause</code> containers</a>.</p>
| Mikołaj Głodziak |
<p>I'm trying kubernetes and making some progress, but I'm running into an issue with ingress when trying to make my hello world app publicly available.</p>
<p><strong>SUCCESS WITH DEPLOYMENT AND SERVICE</strong></p>
<p>I created a simple <code>hello world</code> type of nodejs app and pushed the image to my docker hub <code>johnlai2004/swarm2</code>. I successfully created a deployment and service with this yaml file:</p>
<p><code>nodejs.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs-hello
labels:
app: nodejs-hello
spec:
replicas: 1
selector:
matchLabels:
app: nodejs-hello
template:
metadata:
labels:
app: nodejs-hello
spec:
containers:
- name: nodejs-hello
image: johnlai2004/swarm2
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: nodejs-hello-service
spec:
selector:
app: nodejs-hello
type: LoadBalancer
ports:
- protocol: TCP
port: 3000
targetPort: 3000
nodePort: 30000
</code></pre>
<p>I uploaded these files to a VPS with a new installation of ubuntu 20.04, minikube, kubectl and docker.</p>
<p>I ran the following commands and got the results I wanted:</p>
<pre><code>minikube start --driver=docker
kubectl apply -f nodejs.yaml
minikube service nodejs-hello-service
|-----------|----------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------------|-------------|---------------------------|
| default | nodejs-hello-service | 3000 | http://192.168.49.2:30000 |
|-----------|----------------------|-------------|---------------------------|
</code></pre>
<p>When I do a <code>wget http://192.168.49.2:30000</code>, I get an <code>index.html</code> file that says <code>hello from nodejs-hello-556dc868-6lrdz at 12/19/2021, 10:29:56 PM</code>. This is perfect.</p>
<p><strong>FAILURE WITH INGRESS</strong></p>
<p>Next, I want to use ingress so that I can see the page at <code>http://website.example.com</code> (replace <code>website.example.com</code> with the actual domain that points to my server). I put this file on my server:</p>
<p><code>nodejs-ingress.yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nodejs-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: website.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nodejs-hello-service
port:
number: 3000
</code></pre>
<p>And I ran the commands</p>
<pre><code>minikube addons enable ingress
kubectl apply -f nodejs-ingress.yaml
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nodejs-ingress <none> website.example.com localhost 80 15m
</code></pre>
<p>But when I visit <code>http://website.example.com</code> with my browser, the browser says it can't connect. Using <code>wget http://website.example.com</code> gave the same connection issue.</p>
<p>Can someone point out what I may have done wrong?</p>
<hr />
<p><strong>UPDATE</strong></p>
<p>I ran these commands because I think it shows I didn't install ingress-controller in the right name space?</p>
<pre><code>kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create--1-tqsrp 0/1 Completed 0 4h25m
ingress-nginx-admission-patch--1-sth26 0/1 Completed 0 4h25m
ingress-nginx-controller-5f66978484-tmx72 1/1 Running 0 4h25m
kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
nodejs-hello-556dc868-6lrdz 1/1 Running 0 40m
</code></pre>
<p>So does this mean my nodejs app is in a name space that doesn't have access to the ingress controller?</p>
<hr />
<p>UPDATE 2</p>
<p>I also tried following <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">this guide</a> step by step.
One difference I noticed was that when I ran the command <code>kubectl get ingress</code>, <code>ADDRESS</code> said <code>localhost</code>, but in the guide it is supposed to be <code>172.17.0.15</code>.</p>
<p><a href="https://i.stack.imgur.com/IsjYW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IsjYW.png" alt="enter image description here" /></a></p>
<p>Does this difference matter? I'm hosting things in a cloud vps called linode.com. Does that change the way I do things?</p>
| John | <p>The behaviour you have is expected. Let me explain why, going point by point, and at the end I will present my tips.</p>
<p>First I think it's worth presenting the minikube architecture. I'm assuming you have installed minikube using the default <code>docker</code> driver, as you have the address <code>192.168.49.2</code>, which is standard for the <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/" rel="nofollow noreferrer"><code>docker</code> driver</a>.</p>
<p>The layers are:</p>
<ul>
<li>your VM with Ubuntu 20.04</li>
<li>the Kubernetes cluster set up by minikube, which is a docker container with address <code>192.168.49.2</code> on the VM</li>
</ul>
<p>So... you cannot just run <code>curl</code> on the VM using a <code>localhost</code> address to connect to the service. <a href="https://en.wikipedia.org/wiki/Localhost" rel="nofollow noreferrer">The <code>localhost</code> address refers to the device from which you are making the curl request (so your VM)</a>, not to the docker container. That's why you need to use the <code>192.168.49.2</code> address.</p>
<p>You can type <code>docker ps -a</code> and you will see the container which is the actual Kubernetes cluster. You can exec into it using the <code>docker exec</code> command and then run <code>curl</code> with the localhost address:</p>
<pre><code>user@example-ubuntu-minikube-template-1:~$ curl -H "Host: hello-world.info" localhost
curl: (7) Failed to connect to localhost port 80: Connection refused
user@example-ubuntu-minikube-template-1:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a4ba3895990 gcr.io/k8s-minikube/kicbase:v0.0.28 "/usr/local/bin/entr…" 4 days ago Up 4 days 127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp minikube
user@example-ubuntu-minikube-template-1:~$ docker exec -it 1a sh
# curl -H "Host: hello-world.info" localhost
Hello, world!
Version: 1.0.0
Hostname: web-79d88c97d6-nl4c7
</code></pre>
<p>Keep in mind that you can access this container only from the VM that is hosting it. Without further configuration you can't just access it from other VMs. It is possible, but not recommended.</p>
<p>Good explanation in the <a href="https://minikube.sigs.k8s.io/docs/faq/" rel="nofollow noreferrer">minikube FAQ</a>:</p>
<blockquote>
<p>How can I access a minikube cluster from a remote network?<a href="https://minikube.sigs.k8s.io/docs/faq/#how-can-i-access-a-minikube-cluster-from-a-remote-network" rel="nofollow noreferrer"></a></p>
</blockquote>
<blockquote>
<p>minikube’s primary goal is to quickly set up local Kubernetes clusters, and therefore we strongly discourage using minikube in production or for listening to remote traffic. By design, minikube is meant to only listen on the local network.</p>
</blockquote>
<blockquote>
<p>However, it is possible to configure minikube to listen on a remote network. This will open your network to the outside world and is not recommended. If you are not fully aware of the security implications, please avoid using this.</p>
</blockquote>
<blockquote>
<p>For the docker and podman driver, use <code>--listen-address</code> flag:</p>
<pre><code>minikube start --listen-address=0.0.0.0
</code></pre>
</blockquote>
<p>So I'd avoid it. I will present a better possible solution at the end of the answer.</p>
<p>You asked:</p>
<blockquote>
<p>But when I visit <code>http://website.example.com</code> with my browser, the browser says it can't connect. Using <code>wget http://website.example.com</code> gave the same connection issue.</p>
</blockquote>
<blockquote>
<p>Can someone point out what I may have done wrong?</p>
</blockquote>
<p>Your computer does not know what <code>website.example.com</code> is. It's not aware that this name is used by your Ingress. If your computer does not find this name in the <a href="https://en.wikipedia.org/wiki/Hosts_(file)" rel="nofollow noreferrer">hosts file</a>, it will start looking it up over the Internet.</p>
<p>So you have some possible solutions:</p>
<ul>
<li>use <code>curl -H 'HOST: website.example.com' http://192.168.49.2:80</code> - it will work only on the VM where minikube runs</li>
<li><a href="https://linuxize.com/post/how-to-edit-your-hosts-file/" rel="nofollow noreferrer">add a host entry to the hosts file</a> - something like <code>192.168.49.2 website.example.com</code> - it will also work only on the VM where minikube runs</li>
<li>Setup a bare-metal cluster (more details in the sum up section) and point the domain address to the VM address. In GCP and AWS you can do it using <a href="https://cloud.google.com/dns" rel="nofollow noreferrer">Google Cloud DNS</a> or <a href="https://aws.amazon.com/route53/" rel="nofollow noreferrer">Amazon Route 53</a>. On linode cloud maybe <a href="https://www.linode.com/docs/guides/dns-manager/" rel="nofollow noreferrer">this one - DNS Manager</a> ?</li>
</ul>
<p><strong>EDIT:</strong>
You wrote that you have a domain pointed to the server, so please just check the sum up section of my answer.</p>
<p>Also you asked:</p>
<blockquote>
<p>I ran these commands because I think it shows I didn't install ingress-controller in the right name space?</p>
</blockquote>
<blockquote>
<p>So does this mean my nodejs app is in a name space that doesn't have access to the ingress controller?</p>
</blockquote>
<p><a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start" rel="nofollow noreferrer">It's absolutely normal</a>:</p>
<blockquote>
<p>It will install the controller in the <code>ingress-nginx</code> namespace, creating that namespace if it doesn't already exist.</p>
</blockquote>
<p>Also:</p>
<blockquote>
<p>I also tried following this guide step by step: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/</a></p>
</blockquote>
<blockquote>
<p>One difference I noticed was when I ran the command <code>kubectl get ingress</code>, <code>ADDRESS</code> says <code>localhost</code>. But in the guide, it says it is supposed to be <code>172.17.0.15</code></p>
</blockquote>
<blockquote>
<p>Does this difference matter? I'm hosting things in a cloud vps called linode.com. Does that change the way I do things?</p>
</blockquote>
<p>It's also normal and expected when using minikube with a docker driver.</p>
<p>To sum up / other tips:</p>
<ul>
<li>Minikube is fine for local testing; you can easily access apps and services from the host - check <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">Accessing apps</a>.</li>
<li>However, for exposing services outside the network you should consider solutions like <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a> or <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">kubespray</a> for cluster setup + <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> as a LoadBalancer solution. Check "<a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">Bare-metal considerations</a>" and <a href="https://stackoverflow.com/questions/61583350/exposing-a-kubernetes-service-on-a-bare-metal-cluster-over-the-external-network">Exposing a Kubernetes service on a bare-metal cluster over the external network architecture</a></li>
<li>If you are planning to use your service with ingress, don't use LoadBalancer type. It's ok to use ClusterIP.</li>
</ul>
| Mikolaj S. |
<p>So I wish to limit the resources used by the pods running in each of my namespaces, and therefore want to use a resource quota.
I am following this <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/" rel="nofollow noreferrer">tutorial</a>.
It works well, but I want something a little different.
When trying to schedule a pod which would go over the limit of my quota, I am getting a <code>403</code> error.
What I wish is for the pod to be accepted but wait in a pending state until one of the other pods ends and frees some resources.</p>
<p>Any advice?</p>
| Djoby | <p>Instead of using straight pod definitions (<code>kind: Pod</code>) use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a>.</p>
<p><em>Why?</em></p>
<p>Pods in Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">are designed as relatively ephemeral, disposable entities</a>:</p>
<blockquote>
<p>You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">controller</a>), the new Pod is scheduled to run on a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node</a> in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is <em>evicted</em> for lack of resources, or the node fails.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/#using-pods" rel="nofollow noreferrer">Kubernetes assumes that for managing pods you should</a> a <a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow noreferrer">workload resources</a> instead of creating pods directly:</p>
<blockquote>
<p>Pods are generally not created directly and are created using workload resources. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">Working with Pods</a> for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset" rel="nofollow noreferrer">DaemonSet</a></li>
</ul>
</blockquote>
<p>By using deployment you will get very similar behaviour to the one you want.</p>
<p>Example below:</p>
<p>Let's suppose that I created <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/" rel="nofollow noreferrer">pod quota</a> for a custom namespace, set to "2" as <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/#create-a-resourcequota" rel="nofollow noreferrer">in this example</a> and I have two pods running in this namespace:</p>
<pre><code>kubectl get pods -n quota-demo
NAME READY STATUS RESTARTS AGE
quota-demo-1 1/1 Running 0 75s
quota-demo-2 1/1 Running 0 6s
</code></pre>
<p>Third pod definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: quota-demo-3
spec:
containers:
- name: quota-demo-3
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Now I will try to apply this third pod in this namespace:</p>
<pre><code>kubectl apply -f pod.yaml -n quota-demo
Error from server (Forbidden): error when creating "pod.yaml": pods "quota-demo-3" is forbidden: exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2
</code></pre>
<p>This is exactly the problem described in the question - the pod is rejected outright instead of waiting.</p>
<p>Now I will change pod definition into deployment definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: quota-demo-3-deployment
labels:
app: quota-demo-3
spec:
selector:
matchLabels:
app: quota-demo-3
template:
metadata:
labels:
app: quota-demo-3
spec:
containers:
- name: quota-demo-3
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>I will apply this deployment:</p>
<pre><code>kubectl apply -f deployment-v3.yaml -n quota-demo
deployment.apps/quota-demo-3-deployment created
</code></pre>
<p>The deployment is created successfully, but there is no new pod. Let's check this deployment:</p>
<pre><code>kubectl get deploy -n quota-demo
NAME READY UP-TO-DATE AVAILABLE AGE
quota-demo-3-deployment 0/1 0 0 12s
</code></pre>
<p>We can see that the pod quota is working: the deployment is monitoring resources and waiting for the possibility to create a new pod.</p>
<p>Let's now delete one of the pods and check the deployment again:</p>
<pre><code>kubectl delete pod quota-demo-2 -n quota-demo
pod "quota-demo-2" deleted
kubectl get deploy -n quota-demo
NAME READY UP-TO-DATE AVAILABLE AGE
quota-demo-3-deployment 1/1 1 1 2m50s
</code></pre>
<p>The pod from the deployment is created automatically after deletion of the pod:</p>
<pre><code>kubectl get pods -n quota-demo
NAME READY STATUS RESTARTS AGE
quota-demo-1 1/1 Running 0 5m51s
quota-demo-3-deployment-7fd6ddcb69-nfmdj 1/1 Running 0 29s
</code></pre>
<p>It works the same way for <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/" rel="nofollow noreferrer">memory and CPU quotas for namespace</a> - when the resources are free, deployment will automatically create new pods.</p>
| Mikolaj S. |
<p>I'm using Kubernetes with kube-state-metrics and Prometheus/grafana to graph various metrics of the Kubernetes Cluster.</p>
<p>Now I'd like to Graph how many <strong>new</strong> PODs have been created per Hour over Time.</p>
<p>The Metric <code>kube_pod_created</code> contains the Creation-Timestamp as Value but since there is a Value in each Time-Slot, the following Query also returns Results >0 for Time-Slots where no new PODs have been created:</p>
<pre><code>count(rate(kube_pod_created[1h])) by(namespace)
</code></pre>
<p>Can I use the Value in some sort of criteria to only count if Value is within the "current" Time-Slot ?</p>
| powo | <p>Pods created in the past hour:</p>
<p><code>count ( (time() - sum by (pod) (kube_pod_created)) < 60*60 )</code></p>
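<p>Since the question groups by namespace, a variant of the same idea (an untested sketch; <code>kube_pod_created</code> from kube-state-metrics carries a <code>namespace</code> label) would be:</p>
<pre><code>count by (namespace) ((time() - kube_pod_created) < 60*60)
</code></pre>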
| Chao Yang |
<p>Imagine I have some pods that I need on separate k8s nodes. I could use something like this if I know both pods have the label <code>my/label=somevalue</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 2
podAffinityTerm:
labelSelector:
matchLabels:
my/label: somevalue
</code></pre>
<p>I have some pods I need separated according to multiple values of the same label, which aren't known up-front (it's a sharding key calculated by an operator).</p>
<p>Is there a way of specifying a <code>podAffinityTerm</code> which applies to any pods sharing the same value of <code>my/label</code>, regardless of the actual value?</p>
<p>E.g.</p>
<pre><code>Pod a has my/label=x
Pod b has my/label=x
Pod c has my/label=y
Pod d has my/label=y
</code></pre>
<p>I'd need pods a & b separated from each other, and pods c & d separated form each other, but e.g. a and d can coexist on the same node</p>
| Jon Bates | <p>As far as I know, there is no built-in way to specify affinity without knowing label values. At the stage of creating a pod you need to provide both key and value. In order for affinity to work properly, you need to know this value at the time of creation and put it in the appropriate yaml file.</p>
<p>Theoretically, you could create a custom script, e.g. in bash, which will take your value for you</p>
<blockquote>
<p>it's a sharding key which is calculated by an operator</p>
</blockquote>
<p>and then replace it in the yaml files before applying them - this way it will be set correctly when the pod is created (see the sketch just below).</p>
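<p>For illustration, a minimal sketch of that substitution idea (the placeholder name, the template file name, and the way you obtain the computed value are all hypothetical):</p>
<pre><code>#!/bin/sh
# SHARD_VALUE is whatever your operator/tooling has computed for this workload
SHARD_VALUE="x"
# deployment-template.yaml contains `my/label: __SHARD__` both in the pod labels
# and in the podAntiAffinity matchLabels
sed "s/__SHARD__/${SHARD_VALUE}/g" deployment-template.yaml | kubectl apply -f -
</code></pre>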
<p>Additionally, you can also have a look at <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/" rel="nofollow noreferrer">Well-Known Labels, Annotations and Taints</a>.</p>
<p>Depending on your exact situation, you can try to solve the problem with them. See also <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">this page</a>, where you will find everything about assigning pods to nodes.</p>
| Mikołaj Głodziak |
<p>I’ve been deploying a Private AKS cluster. On the subnet where it is supposed to be deployed I’ve assigned a UDR to force all traffic (0.0.0.0/0) to the internal IP of the Azure Firewall that resides in a peered VNet, aka the hub (in a hub-and-spoke architecture). The AKS deployment was not finishing, and looking at the node pools to be deployed it looks like the deployment failed because the service couldn’t reach Microsoft endpoints. My question now is, as I was unable to find it: which URLs do I need to permit from the AKS subnet in terms of a) deploying it, b) keeping it up to date - meaning updating the worker nodes, c) NTP, d) whatever else?</p>
| user211245 | <p>In the official MS <a href="https://learn.microsoft.com/en-us/azure/firewall/protect-azure-kubernetes-service#securing-aks" rel="nofollow noreferrer">documentation</a> there is a section that describes the required outbound ports / network rules for an AKS cluster when using a firewall.</p>
| Philip Welz |
<p>I dug everywhere to see why we don't have DNS resolution for static pods and couldn't find a right answer. It seems like very basic stuff, yet I couldn't find a convincing answer.</p>
<p>For example: you create a static pod, exec into it, and do an "nslookup pod-name" or "nslookup 10-44-0-7.default.pod.cluster.local". I know the second form works for pods managed by a Deployment or DaemonSet, which get an A record - so why not for static pods? If the argument is that they are ephemeral, Deployment pods are ephemeral too. Please tell me if it is possible and how to enable it.</p>
<p>My testing for the failed queries, all are static pods created with "kubectl run busybox-2 --image=busybox --command sleep 1d"</p>
<p>Used this as syntax:</p>
<p>In general a pod has the following DNS resolution: pod-ip-address.my-namespace.pod.cluster-domain.example.</p>
<pre><code>vagrant@kubemaster:~$ kubectl get pods -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox-1 1/1 Running 0 135m 10.36.0.6 kubenode02 <none> <none>
busybox-2 1/1 Running 0 134m 10.44.0.7 kubenode01 <none> <none>
busybox-sleep 1/1 Running 19 (24h ago) 23d 10.44.0.3 kubenode01 <none> <none>
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local Dlink
options ndots:5
/ # nslookup 10-44-0-7.default.pod.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: 10-44-0-7.default.pod.cluster.local
Address: 10.44.0.7
*** Can't find 10-44-0-7.default.pod.cluster.local: No answer
/ # nslookup 10-44-0-6.default.pod.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
*** Can't find 10-44-0-6.default.pod.cluster.local: No answer
</code></pre>
<p>Appreciate the help.</p>
| Celtic Bean | <p>You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of these Kubernetes playgrounds:</p>
<ul>
<li>Katacoda</li>
<li>Play with Kubernetes</li>
</ul>
<p>Your cluster must be running the CoreDNS add-on. "Migrating to CoreDNS" explains how to use kubeadm to migrate from kube-dns.</p>
<p>Your Kubernetes server must be at or later than version v1.12. To check the version, enter <code>kubectl version</code>.</p>
| Merve Pınar |
<p>I am researching the K8s architecture, focusing on the flow of pod spinning in the system.</p>
<p>I was wondering <em><strong>how</strong></em> (that is, who is initiating the communication) and <em><strong>when</strong></em> the different components of the control plane communicate with each other.</p>
<p>I have followed the excellent talk of <strong>Jérôme Petazzoni</strong> at LISA 2019 (<a href="https://www.youtube.com/watch?v=3KtEAa7_duA" rel="nofollow noreferrer">here</a>) to understand the architecture of the control plane, and read the concepts on <a href="https://kubernetes.io/docs/concepts/architecture" rel="nofollow noreferrer">kubernetes.io</a>.</p>
<p>However, I still haven't found the answers to the following questions:</p>
<ol>
<li>Who initiates the resource check of each node? In the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-capacity" rel="nofollow noreferrer">documentation</a> it is written:</li>
</ol>
<blockquote>
<p>Node objects track information about the Node's resource capacity: for example, the amount of memory available and the number of CPUs. Nodes that self-register report their capacity during registration. If you manually add a Node, then you need to set the node's capacity information when you add it.</p>
</blockquote>
<p>However, there is no specification of when it is updated in <code>etcd</code>, or who initiates the regular update (other than the <em>heartbeat</em> that updates the status of the node).</p>
<p>Also, when does the cache of the scheduler update?</p>
<ol start="2">
<li>Who informs the different components about new pending requests? That is, how does the <code>controller-manager</code>/<code>scheduler</code> <em>"know"</em> when it is supposed to do its job? Each request is written as a manifest in <code>etcd</code> by the <code>kube-api-server</code>, but these components aren't connected to <code>etcd</code> directly.</li>
</ol>
<p>Does that mean the API-Server needs to inform each component about each new request?</p>
<p>I have many possible answers, but not a concrete confirmation of the real process in current K8s architecture.</p>
| Kerek | <p>Answering your questions:</p>
<p><strong>Who initiates the resource check of each node?</strong></p>
<p>The component responsible for that is <em>"Node Status Manager"</em> which is a sub-control loop of the "<em>SyncLoop"</em> which is a <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet agent</a> component.</p>
<p>The more detailed answer is in this article: <a href="https://aws.plainenglish.io/kubernetes-deep-dive-kubelet-e4527ed56f4c" rel="nofollow noreferrer">Kubernetes Deep Dive: Kubelet</a>:</p>
<blockquote>
<p>As you can see, the core of <code>kubelet</code>’s work is a control loop, namely: <strong>SyncLoop</strong>.</p>
</blockquote>
<blockquote>
<p>For example, the <strong>Node Status Manager</strong> is responsible for responding to changes in the status of the <code>Node</code>, and then collecting the status of the <code>Node</code> and reporting it to the <code>APIServer</code> through Heartbeat.</p>
</blockquote>
<p>There is also a good diagram:</p>
<p><img src="https://pbs.twimg.com/media/DncWSekUYAAlOMY?format=jpg&name=large" alt="" /></p>
<p>Answering second part:</p>
<p><strong>Who informs the different components about new pending requests? That is, how is the <code>controller-manager</code>/<code>scheduler</code> <em>"knows"</em> when it suppose to do its job?</strong></p>
<p>The components responsible for that are <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">Kubernetes' controllers</a> and the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="nofollow noreferrer">Scheduler</a> itself. Good examples and explanations are in this article: <a href="https://github.com/jamiehannaford/what-happens-when-k8s" rel="nofollow noreferrer">What happens when ... Kubernetes edition!</a>.</p>
<p>Basically, after Kubernetes has verified the request (authentication, authorization, admission control), it is saved to the datastore (<code>etcd</code>), and may then be picked up by <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#initializers" rel="nofollow noreferrer">initializers</a>, which can perform some additional logic on the resource (not always); after that it's visible via the kube-apiserver. The main part that may interest you is the <a href="https://github.com/jamiehannaford/what-happens-when-k8s#control-loops" rel="nofollow noreferrer">Control loops</a>: they constantly check whether a new object exists in the datastore and, if so, act on it. Example - when you deploy a new Deployment:</p>
<ul>
<li>the <a href="https://github.com/jamiehannaford/what-happens-when-k8s#deployments-controller" rel="nofollow noreferrer">Deployments controller</a> picks up the new object - it will realise that there is no ReplicaSet record associated with it and will roll out a new one</li>
<li>the <a href="https://github.com/jamiehannaford/what-happens-when-k8s#replicasets-controller" rel="nofollow noreferrer">ReplicaSets controller</a>, like the Deployments controller, will pick up the new ReplicaSet and roll out new pods</li>
<li>the pods exist but are in the pending state - now the <a href="https://github.com/jamiehannaford/what-happens-when-k8s#scheduler" rel="nofollow noreferrer">Scheduler</a> (which, like the previous controllers, constantly listens for new objects from the datastore - this is de facto the answer to your question) will find a suitable node and bind each pod to a node. Finally, the kubelet agent on that node will start the pod.</li>
</ul>
<p>For more details I'd strongly suggest reading the earlier mentioned article - <a href="https://github.com/jamiehannaford/what-happens-when-k8s" rel="nofollow noreferrer">What happens when ... Kubernetes edition!</a>.</p>
<p><strong>Does that mean the API-Server needs to inform each component about each new request?</strong></p>
<p>It works the other way around - the kube-apiserver makes new objects visible, and the controllers and the scheduler, which are control loops watching the API server, detect those new objects and act on them.</p>
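<p>You can observe the same list/watch mechanism that the controllers and the scheduler rely on from the command line, for example:</p>
<pre><code># keeps the connection open and prints changes as the API server publishes them
kubectl get pods --watch
</code></pre>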
| Mikolaj S. |
<p>We have a Kubernetes cluster behind a L4 load balancer but we do not have programmatic access to the load balancer to add/remove nodes when we need to update/reboot nodes (the LB is managed by the Support team of our hosting provider).</p>
<p>The load balancer does support healthchecks but the current setup is to call port 80 on each node to determine whether the node is healthy. This will succeed even if the node is drained so we have no choice except to reboot the node and wait up to 10 seconds for the LB to notice and take it out of the set when kubeapi dies.</p>
<p>I want something like a pod per-node that we could use to determine whether the node is alive, presumably setup with a node port. The problem is that I can't find how to do this. If I use a daemonset, I don't think the pods are evicted during drain so that wouldn't work and if I used a normal deployment, there is no guarantee that a healthy node will have an instance of the pod and would appear unhealthy. Even with anti-affinity setup I don't think there is any guarantee that all healthy nodes will have a running pod to check.</p>
<p>Does anyone know a way of using a TCP or HTTP call to a node to detect it is drained?</p>
| Luke Briner | <p>It seems that the solution you are looking for is fully described in <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health/" rel="nofollow noreferrer">this documentation</a>:</p>
<blockquote>
<p><em>Node Problem Detector</em> is a daemon for monitoring and reporting about a node's health. You can run Node Problem Detector as a <code>DaemonSet</code> or as a standalone daemon. Node Problem Detector collects information about node problems from various daemons and reports these conditions to the API server as <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#condition" rel="nofollow noreferrer">NodeCondition</a> and <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#event-v1-core" rel="nofollow noreferrer">Event</a>.</p>
</blockquote>
<p>You can create a node monitoring based on its <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#condition" rel="nofollow noreferrer">condition</a>.</p>
<p>You need to also know about the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health/#limitations" rel="nofollow noreferrer">limitations</a>:</p>
<blockquote>
<ul>
<li>Node Problem Detector only supports file based kernel log. Log tools such as <code>journald</code> are not supported.</li>
<li>Node Problem Detector uses the kernel log format for reporting kernel issues. To learn how to extend the kernel log format, see <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health/#support-other-log-format" rel="nofollow noreferrer">Add support for another log format</a>.</li>
</ul>
</blockquote>
| Mikołaj Głodziak |
<p>I am trying to get the ingress EXTERNAL-IP in k8s. Is there any way to get the details from a Terraform data block, like using data "azurerm_kubernetes_cluster" or something?</p>
| iluv_dev | <p>You can create the Public IP in advance with Terraform and assign this IP to your ingress service:</p>
<p>YAML:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
name: ingress-nginx-controller
spec:
loadBalancerIP: <YOUR_STATIC_IP>
type: LoadBalancer
</code></pre>
<p>Same but Terraform code:</p>
<pre><code>resource "kubernetes_service" "ingress_nginx" {
metadata {
name = "ingress-nginx-controller"
annotations {
"service.beta.kubernetes.io/azure-load-balancer-resource-group" = "${azurerm_resource_group.YOUR_RG.name}"
}
spec {
selector = {
app = <PLACEHOLDER>
}
port {
port = <PLACEHOLDER>
target_port = <PLACEHOLDER>
}
type = "LoadBalancer"
load_balancer_ip = "${azurerm_public_ip.YOUR_IP.ip_address}"
}
}
</code></pre>
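<p>For completeness, a minimal sketch of the referenced public IP resource (the name and SKU values are placeholders - align the SKU with your load balancer):</p>
<pre><code>resource "azurerm_public_ip" "YOUR_IP" {
  name                = "ingress-public-ip"
  resource_group_name = azurerm_resource_group.YOUR_RG.name
  location            = azurerm_resource_group.YOUR_RG.location
  allocation_method   = "Static"
  sku                 = "Standard"
}
</code></pre>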
| Philip Welz |
<p>I need to persist the heap dump when the java process gets OOM and the pod is restarted.</p>
<p>I have following added in the jvm args</p>
<pre><code>-XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/dumps
</code></pre>
<p>...and emptydir is mounted on the same path.</p>
<p>But the issue is if the pod gets restarted and if it gets scheduled on a different node, then we are losing the heap dump. How do I persist the heap dump even if the pod is scheduled to a different node?</p>
<p>We are using AWS EKS and we are having more than 1 replica for the pod.</p>
<p>Could anyone help with this, please?</p>
| Baitanik | <p>As writing to EFS is too slow in your case, there is another option for AWS EKS - <code>awsElasticBlockStore</code>.</p>
<blockquote>
<p>The contents of an EBS volume are persisted and the volume is unmounted when a pod is removed. This means that an EBS volume can be pre-populated with data, and that data can be shared between pods.</p>
</blockquote>
<p><strong>Note</strong>: You must create an EBS volume by using aws ec2 create-volume or the AWS API before you can use it.</p>
<p>There are some restrictions when using an awsElasticBlockStore volume:</p>
<ul>
<li>the nodes on which pods are running must be AWS EC2 instances</li>
<li>those instances need to be in the same region and availability zone as the EBS volume</li>
<li>EBS only supports a single EC2 instance mounting a volume</li>
</ul>
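<p>Putting it together with the <code>/opt/dumps</code> path from your JVM flags, a minimal sketch could look like this (the image and the volume ID are placeholders; the EBS volume must be created beforehand, in the same AZ as the node that mounts it):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
    - name: java-app
      image: <your-image>
      volumeMounts:
        - name: dumps
          mountPath: /opt/dumps   # matches -XX:HeapDumpPath
  volumes:
    - name: dumps
      awsElasticBlockStore:
        volumeID: "<volume-id>"
        fsType: ext4
</code></pre>
<p>Note that, as the restrictions above say, a single EBS volume can only be mounted by one EC2 instance at a time, so with more than one replica you would need one volume per pod (for example via a StatefulSet's volumeClaimTemplates).</p>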
<p>Please check the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore" rel="nofollow noreferrer">official k8s documentation page</a> on this topic,
and <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/" rel="nofollow noreferrer">How to use persistent storage in EKS</a>.</p>
| mozello |
<p>I've updated the <code>kubectl client version</code> to the latest, but I am not able to update the <code>kubectl server version</code> to the latest, so the client and server versions are different. The problem is: how can I update the <code>kubectl server version</code>?</p>
<p>P.S i'm running minikube on docker locally</p>
<pre><code>Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.18) exceeds the supported minor version skew of +/-1
</code></pre>
<p>The commands I used to update the kubectl client version (for a Mac with an Intel chip) are from this reference: <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/</a></p>
| MuhammadSannan | <p>Posted community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p>You need to upgrade your minikube version in order to upgrade the kubectl server version.</p>
<p>Based on <a href="https://stackoverflow.com/questions/45002364/how-to-upgrade-minikube/64362273#64362273">this answer</a> to upgrade minikube on macOS you just need to run these commands (since 2020):</p>
<pre><code>brew update
brew upgrade minikube
</code></pre>
<p>If you encounter any problems, I'd suggest fully deleting minikube from your system (based on <a href="https://gist.github.com/rahulkumar-aws/65e6fbe16cc71012cef997957a1530a3" rel="nofollow noreferrer">this GitHub page</a>):</p>
<pre><code>minikube stop; minikube delete &&
docker stop $(docker ps -aq) &&
rm -rf ~/.kube ~/.minikube &&
sudo rm -rf /usr/local/bin/localkube /usr/local/bin/minikube &&
launchctl stop '*kubelet*.mount' &&
launchctl stop localkube.service &&
launchctl disable localkube.service &&
sudo rm -rf /etc/kubernetes/ &&
docker system prune -af --volumes
</code></pre>
<p>Then you can install minikube with the newest version from scratch by <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">using brew</a>:</p>
<pre><code>brew install minikube
</code></pre>
<p>or by <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">downloading and installing a binary file</a>:</p>
<pre><code>curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
</code></pre>
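<p>Afterwards you can do a quick sanity check that the client and the cluster (server) versions now match - a minimal sketch:</p>
<pre><code># start a fresh cluster with the new minikube version
minikube start

# compare versions
minikube version
kubectl version --short
</code></pre>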
| Mikolaj S. |
<p>By default, the SKU used when creating a load balancer in AKS is standard; for development, if you want to use the basic SKU, you have to use the command line <code>az aks create -g RGName -n ClusterName --load-balancer-sku basic</code>.</p>
<p>But I could not find anything on how to specify the <code>--load-balancer-sku</code> in the YAML file.
The current YAML file as-is is given below; what should be added to make the SKU basic?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-world-svc
spec:
selector:
app: hello-world-svc
ports:
- protocol: TCP
port: 80
targetPort: 3000
type: LoadBalancer
</code></pre>
| Binoj Antony | <p>You can specify the SKU per environment by passing a config file to the Cloud Controller Manager, as described <a href="https://kubernetes-sigs.github.io/cloud-provider-azure/topics/loadbalancer/#loadbalancer-skus" rel="nofollow noreferrer">here</a>.</p>
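<p>For illustration only - there is no field for this in the Service YAML itself; the SKU comes from the cloud provider configuration described in that link, where (if I read the docs correctly) the relevant setting looks roughly like this:</p>
<pre><code>{
  "loadBalancerSku": "basic"
}
</code></pre>
<p>On a managed AKS cluster this is effectively fixed at cluster creation time (the <code>--load-balancer-sku</code> flag you mentioned), so it cannot be switched per Service afterwards.</p>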
| Philip Welz |
<p>I'm trying to find some sort of signal from a cluster indicating that there has been some sort of change with a Kubernetes cluster. I'm looking for any change that could cause issues with software running on that cluster such as Kubernetes version change, infra/distro/layout change, etc.</p>
<p>The only signal that I have been able to find is a node restart, but this can happen for any number of reasons - I'm trying to find something a bit stronger than this. I am preferably looking for something platform agnostic as well.</p>
| Rishik Hombal | <p>In addition to watching Node events (see the complete list of events <a href="https://github.com/kubernetes/kubernetes/blob/7380fc735aca591325ae1fabf8dab194b40367de/pkg/kubelet/events/event.go#L50" rel="nofollow noreferrer">here</a>), you can use Kubernetes' <strong>Node Problem Detector</strong> for monitoring and reporting about a node's health (<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health" rel="nofollow noreferrer">link</a>).</p>
<blockquote>
<p>There are tons of node problems that could possibly affect the pods running on the node, such as:</p>
<ul>
<li>Infrastructure daemon issues: ntp service down;</li>
<li>Hardware issues: Bad CPU, memory or disk;</li>
<li>Kernel issues: Kernel deadlock, corrupted file system;</li>
<li>Container runtime issues: Unresponsive runtime daemon;</li>
</ul>
</blockquote>
<p>Node-problem-detector collects node problems from various daemons and make them visible to the upstream layers.</p>
<p>Node-problem-detector supports several exporters:</p>
<ul>
<li><strong>Kubernetes exporter</strong> reports node problems to Kubernetes API server: temporary problems get reported as Events, and permanent problems get reported as Node Conditions.</li>
<li>Prometheus exporter.</li>
<li>Stackdriver Monitoring API.</li>
</ul>
<hr />
<p>Another option is the <strong>Prometheus Node Exporter</strong> (<a href="https://prometheus.io/docs/guides/node-exporter/" rel="nofollow noreferrer">link</a>). It exposes a wide variety of hardware- and kernel-related metrics (<strong>OS release info, system information as provided by the 'uname' system call</strong>, memory statistics, disk IO statistics, NFS statistics, etc.).</p>
<p>Check the list of all existing collectors and the supported systems <a href="https://github.com/prometheus/node_exporter#collectors" rel="nofollow noreferrer">here</a>.</p>
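<p>As a quick, platform-agnostic starting point (no extra components required), you can also watch Node-level events directly, for example:</p>
<pre><code># stream events whose involved object is a Node (e.g. NodeNotReady, Rebooted, NodeAllocatableEnforced)
kubectl get events --all-namespaces --field-selector involvedObject.kind=Node --watch
</code></pre>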
| mozello |
<p>I have a project that needs to update the DNS configuration of a Pod with an Operator:</p>
<pre><code>get dns message
get matched pod
modify:
pod.Spec.DNSConfig = CRD_SPEC
pod.Spec.DNSPolicy = corev1.DNSNone
client.Update(ctx,&pod)
</code></pre>
<p>But when I implemented it, I got the following error:</p>
<pre><code> ERROR controller-runtime.manager.controller.dnsinjection Reconciler error {"reconciler group": "xxxx", "reconciler kind": "xxxxx", "name": "dnsinjection", "namespace": "default", "error": "Pod \"busybox\" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)\n core.PodSpec{\n \t... // 21 identical fields\n \tPriority: &0,\n \tPreemptionPolicy: nil,\n \tDNSConfig: &core.PodDNSConfig{\n \t\tNameservers: []string{\n \t\t\t\"1.2.3.4\",\n- \t\t\t\"0.0.0.0\",\n \t\t},\n \t\tSearches: []string{\"ns1.svc.cluster-domain.example\", \"my.dns.search.suffix\"},\n \t\tOptions: []core.PodDNSConfigOption{{Name: \"ndots\", Value: &\"2\"}, {Name: \"edns0\"}},\n \t},\n \tReadinessGates: nil,\n \tRuntimeClassName: nil,\n \t... // 3 identical fields\n }\n"}
</code></pre>
<p>The <code>DNSConfig</code> and <code>DNSPolicy</code> fields are not declared as immutable (unable to be <strong>updated</strong>) in the source code, so why did the update fail?</p>
<p>I got the same error with the <code>kubectl edit pod busybox</code> and <code>kubectl apply -f modifyed_pod.yml</code> (with DNSConfig added) commands.</p>
<p>I would appreciate it if you could tell me how to solve it.</p>
| moluzhui | <p>As the message says, Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement" rel="nofollow noreferrer">does not support updating most pod's fields directly</a>:</p>
<blockquote>
<p>Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#patch-pod-v1-core" rel="nofollow noreferrer"><code>patch</code></a>, and <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#replace-pod-v1-core" rel="nofollow noreferrer"><code>replace</code></a> have some limitations:</p>
<ul>
<li>Most of the metadata about a Pod is immutable. For example, you cannot change the <code>namespace</code>, <code>name</code>, <code>uid</code>, or <code>creationTimestamp</code> fields; the <code>generation</code> field is unique. It only accepts updates that increment the field's current value.</li>
<li>If the <code>metadata.deletionTimestamp</code> is set, no new entry can be added to the <code>metadata.finalizers</code> list.</li>
<li>Pod updates may not change fields other than <code>spec.containers[*].image</code>, <code>spec.initContainers[*].image</code>, <code>spec.activeDeadlineSeconds</code> or <code>spec.tolerations</code>. For <code>spec.tolerations</code>, you can only add new entries.</li>
</ul>
</blockquote>
<p><em>Why is that?</em></p>
<p>Pods in Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">are designed as relatively ephemeral, disposable entities</a>:</p>
<blockquote>
<p>You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">controller</a>), the new Pod is scheduled to run on a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node</a> in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is <em>evicted</em> for lack of resources, or the node fails.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/#using-pods" rel="nofollow noreferrer">Kubernetes assumes that for managing pods and doing any updates you should use</a> a <a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow noreferrer">workload resources</a> instead of creating pods directly:</p>
<blockquote>
<p>Pods are generally not created directly and are created using workload resources. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">Working with Pods</a> for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset" rel="nofollow noreferrer">DaemonSet</a></li>
</ul>
</blockquote>
<p>You can easily update most fields in workload resources definition and it will work properly. Keep in mind that it won't edit any existing pods - <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates" rel="nofollow noreferrer">it will delete the currently running pods with old configuration and start the new ones - Kubernetes will make sure that this process goes smoothly</a>:</p>
<blockquote>
<p>Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create replacement Pods that use the updated template.</p>
</blockquote>
<blockquote>
<p>For example, the StatefulSet controller ensures that the running Pods match the current pod template for each StatefulSet object. If you edit the StatefulSet to change its pod template, the StatefulSet starts to create new Pods based on the updated template. Eventually, all of the old Pods are replaced with new Pods, and the update is complete.</p>
</blockquote>
<blockquote>
<p>Each workload resource implements its own rules for handling changes to the Pod template. If you want to read more about StatefulSet specifically, read <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets" rel="nofollow noreferrer">Update strategy</a> in the StatefulSet Basics tutorial.</p>
</blockquote>
<p>So based on all above information I'd suggest to switch to workload resource, for example <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a>:</p>
<blockquote>
<p>A <em>Deployment</em> provides declarative updates for <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pods</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSets</a>.</p>
</blockquote>
<p>For example - right now I have pod with below definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- image: busybox:1.28
command:
- sleep
- "9999999"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
</code></pre>
<p>When I try to run the <code>kubectl edit pod busybox</code> command to change <code>dnsPolicy</code>, I get the same error as you.
However, if I switch to a Deployment with the same pod definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox-deployment
labels:
app: busybox
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
containers:
- image: busybox:1.28
command:
- sleep
- "9999999"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
</code></pre>
<p>Then if I run <code>kubectl edit deploy busybox-deployment</code> and change the <code>dnsPolicy</code> field, I will get a new pod with the new configuration (the old pod will be automatically deleted).</p>
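<p>The same applies to your <code>dnsConfig</code>/<code>dnsPolicy</code> change - a rough sketch of how it could look inside the Deployment's pod template (the values below are just the ones from your error message, adjust as needed):</p>
<pre><code>spec:
  template:
    spec:
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 1.2.3.4
        searches:
          - ns1.svc.cluster-domain.example
          - my.dns.search.suffix
        options:
          - name: ndots
            value: "2"
          - name: edns0
</code></pre>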
<p>Keep in mind, if you want to stick with direct pod definition, you can always just delete pod and apply a new, modified yaml as you tried (<code>kubectl delete pod {your-pod-name} && kubectl apply -f {modified.yaml}</code>). It will work properly.</p>
<p>Also check:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/41583672/kubernetes-deployments-vs-statefulsets">Kubernetes Deployments vs StatefulSets</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/" rel="nofollow noreferrer">Workload Resources</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Pod Lifecycle</a></li>
</ul>
| Mikolaj S. |
<p>I have deployed an ingress in Kubernetes and I am using two applications in different namespaces.</p>
<p>When I access APP2 I can reach the website and it's working fine, but APP1 displays a BLANK page. No errors, just BLANK and a 200 OK response.</p>
<p>Basically, I integrated ArgoCD with Azure AD. The integration itself is fine, but I think the ingress rules are not totally right.</p>
<p>Both apps are in different namespaces, so I have to use two different ingresses in different namespaces:</p>
<p>This is the APP1:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-server-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /argo-cd/$2
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
defaultBackend:
service:
name: argocd-server
port:
number: 443
rules:
- http:
paths:
- path: /argo-cd
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 443
</code></pre>
<p>And this is the APP2:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: sonarqube-ingress
namespace: ingress-nginx
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
defaultBackend:
service:
name: sonarqube
port:
number: 9000
tls:
- hosts:
- sq-example
secretName: nginx-cert
rules:
- host: sq.example.com
http:
paths:
- path: /sonarqube(/|$)(.*)
pathType: Prefix
backend:
service:
name: sonarqube
port:
number: 9000
- path: /(.*)
pathType: Prefix
backend:
service:
name: sonarqube
port:
number: 9000
</code></pre>
<p>args of ingress deployment:</p>
<pre><code>spec:
containers:
- args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --controller-class=k8s.io/ingress-nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
- --default-ssl-certificate=ingress-nginx/ca-key-pair
- --enable-ssl-passthrough
</code></pre>
<p>logs ingress controller pod:</p>
<pre><code>10.200.140.160 - - [03/Nov/2021:15:00:34 +0000] "GET /argo-cd HTTP/1.1" 200 831 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" 489 0.002 [argocd-argocd-server-443] [] 10.200.140.177:8080, 10.200.140.177:8080 0, 831 0.000, 0.004 502, 200 d491c01cd741fa9f155642f8616b6d9f
2021/11/03 15:09:05 [error] 867#867: *534643 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.200.140.160, server: _, request: "GET /argo-cd/ HTTP/1.1", upstream: "https://10.200.140.177:8080/argo-cd/", host: "10.200.140.211"
10.200.140.160 - - [03/Nov/2021:15:09:05 +0000] "GET /argo-cd/ HTTP/1.1" 200 831 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" 440 0.006 [argocd-argocd-server-443] [] 10.200.140.177:8080, 10.200.140.177:8080 0, 831 0.000, 0.004 502, 200 8995b914ae6e39d8ca781e1f4f269f50
10.200.140.160 - - [03/Nov/2021:15:09:16 +0000] "GET /argo-cd HTTP/1.1" 200 831 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" 489 0.001 [argocd-argocd-server-443] [] 10.200.140.177:8080 831 0.004 200 0adadba11c87f9b88ed75d52e4ca387a
</code></pre>
<p>I tried playing with the <code>path: /argo-cd</code> on APP1, like:</p>
<pre><code>path: /argo-cd/
path: /argo-cd/(/|$)(.*)
path: /argo-cd/(.*)
path: /argo-cd/*
</code></pre>
<p>but I couldn't make it work. Am I doing something wrong here?</p>
<p>Thanks in advance.</p>
| X T | <p>The problem is that you didn't configure the argo-cd root path.</p>
<hr />
<p><em>Why?</em></p>
<p>First, it's worth to remind that NGINX Ingress controller <a href="https://docs.nginx.com/nginx-ingress-controller/installation/running-multiple-ingress-controllers/#running-multiple-nginx-ingress-controllers" rel="nofollow noreferrer">by default is <em>Cluster-wide</em></a>:</p>
<blockquote>
<ul>
<li><strong>Cluster-wide Ingress Controller (default)</strong>. The Ingress Controller handles configuration resources created in any namespace of the cluster. As NGINX is a high-performance load balancer capable of serving many applications at the same time, this option is used by default in our installation manifests and Helm chart.</li>
</ul>
</blockquote>
<p>So even if you have configured Ingresses in different namespaces at the end you are using the same NGINX Ingress Controller. You can check it by running:</p>
<pre><code>kubectl get ing -n ingress-nginx
kubectl get ing -n argocd
</code></pre>
<p>You can observe that <code>ADDRESS</code> is the same for both ingresses in different namespaces.</p>
<p>Let's assume that I have applied only the first ingress definition (APP1). If I try to reach <code>https://{ingress-ip}/argo-cd</code> I will be redirected to the <code>https://{ingress-ip}/applications</code> website - it probably works because you also set up the <code>defaultBackend</code> setting. Anyway, it's not a good approach - you should configure the argo-cd root path correctly.</p>
<p>When I applied the second ingress definition (APP2) I also got the blank page, like you - probably because the definitions from both ingresses are mixing and this is causing an issue.</p>
<p><em>How to set up the argo-cd root path?</em></p>
<p>Based on <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-mapping-crd-for-path-based-routing" rel="nofollow noreferrer">this documentation</a>:</p>
<blockquote>
<p>Edit the <code>argocd-server</code> deployment to add the <code>--rootpath=/argo-cd</code> flag to the argocd-server command.</p>
</blockquote>
<p>It's not really explained in a detailed way in the docs, but I figured out how to set it up:</p>
<p>First, we need to get current deployment configuration:</p>
<pre><code>kubectl get deploy argocd-server -o yaml -n argocd > argocd-server-deployment.yaml
</code></pre>
<p>Now, we need to edit the <code>argocd-server-deployment.yaml</code> file. Under <code>command</code> (in my case it was line 52) we need to add <code>rootpath</code> flag - before:</p>
<pre><code>containers:
- command:
- argocd-server
env:
</code></pre>
<p>After:</p>
<pre><code>containers:
- command:
- argocd-server
- --rootpath=/argo-cd
env:
</code></pre>
<p>Save it, and run <code>kubectl apply -f argocd-server-deployment.yaml</code>.</p>
<p>Now, it's time to edit the ingress definition as well - as we set up the root path, we need to delete the <code>nginx.ingress.kubernetes.io/rewrite-target:</code> annotation:</p>
<pre><code>annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
</code></pre>
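<p>For reference, a sketch of how the whole updated ingress could look after removing that annotation (same service name and port as in your original definition):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - http:
      paths:
      - path: /argo-cd
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 443
</code></pre>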
<p>After these changes, if I reach <code>https://{ingress-ip}/argo-cd</code> I will be redirected to the <code>https://{ingress-ip}/argo-cd/applications</code>. Everything is working properly.</p>
| Mikolaj S. |
<p>We are running a Kubernetes cluster on AKS. The cluster runs on multiple node pools. Autoscaling is enabled to make sure nodes are added or removed when necessary.</p>
<p>I can see the current used amount of nodes by navigating to AKS -> Settings -> Node pools on the Azure Portal. However, I'm not able to get this information historically.</p>
<p>A question I want to find an answer for:
How many nodes were active for node pool 'x' last night?</p>
<p>I couldn't find any metrics for the AKS and Virtual Machine Scale Set resources to answer my question. What are my options?</p>
| Max | <p>AFAIK there is no such metric. A small workaround could be:</p>
<p>In the Portal go to your AKS -> Monitoring -> Metrics. In the Scope select your AKS, Scope Namespace is <code>Container service</code> and then you have the following metrics:</p>
<ul>
<li><p><code>Cluster Health</code> - determines if the autoscaler will take action on the cluster</p>
</li>
<li><p><code>Unneeded Nodes</code> - the autoscaler marks nodes for deletion</p>
</li>
</ul>
<p>There you can at least see whether scaling took place and how many nodes were deleted afterwards, so you can calculate the number of nodes.</p>
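<p>If you want to pull this for a past time window from the CLI instead of the portal, something along these lines should work (the metric name below is an assumption on my side - list the exact names available on your cluster first with <code>az monitor metrics list-definitions</code>):</p>
<pre><code>AKS_ID=$(az aks show -g <resource group> -n <aks cluster name> --query id -o tsv)

# show which metric names your cluster exposes
az monitor metrics list-definitions --resource $AKS_ID -o table

# query one of them for last night, aggregated per hour
az monitor metrics list \
  --resource $AKS_ID \
  --metric "cluster_autoscaler_unneeded_nodes_count" \
  --start-time 2021-11-23T18:00:00Z \
  --end-time 2021-11-24T06:00:00Z \
  --interval PT1H \
  -o table
</code></pre>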
| Philip Welz |
<p>I deployed an EFS in AWS and a test pod on EKS from this document: <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">Amazon EFS CSI driver</a>.</p>
<p>EFS CSI Controller pods in the <code>kube-system</code>:</p>
<pre><code>kube-system efs-csi-controller-5bb76d96d8-b7qhk 3/3 Running 0 26s
kube-system efs-csi-controller-5bb76d96d8-hcgvc 3/3 Running 0 26s
</code></pre>
<p>After deploying a sample application from the doc, when I check the <code>efs-csi-controller</code> pod logs, it seems they didn't work well.</p>
<p>Pod 1:</p>
<pre><code>$ kubectl logs efs-csi-controller-5bb76d96d8-b7qhk \
> -n kube-system \
> -c csi-provisioner \
> --tail 10
W1030 08:15:59.073406 1 feature_gate.go:235] Setting GA feature gate Topology=true. It will be removed in a future release.
I1030 08:15:59.073485 1 feature_gate.go:243] feature gates: &{map[Topology:true]}
I1030 08:15:59.073500 1 csi-provisioner.go:132] Version: v2.1.1-0-g353098c90
I1030 08:15:59.073520 1 csi-provisioner.go:155] Building kube configs for running in cluster...
I1030 08:15:59.087072 1 connection.go:153] Connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock
I1030 08:15:59.087512 1 common.go:111] Probing CSI driver for readiness
I1030 08:15:59.090672 1 csi-provisioner.go:202] Detected CSI driver efs.csi.aws.com
I1030 08:15:59.091694 1 csi-provisioner.go:244] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I1030 08:15:59.091997 1 controller.go:756] Using saving PVs to API server in background
I1030 08:15:59.092834 1 leaderelection.go:243] attempting to acquire leader lease kube-system/efs-csi-aws-com...
</code></pre>
<p>Pod 2:</p>
<pre><code>$ kubectl logs efs-csi-controller-5bb76d96d8-hcgvc \
> -n kube-system \
> -c csi-provisioner \
> --tail 10
I1030 08:16:32.628759 1 controller.go:1099] Final error received, removing PVC 111111a-d6fb-440a-9bb1-132901jfas from claims in progress
W1030 08:16:32.628783 1 controller.go:958] Retrying syncing claim "111111a-d6fb-440a-9bb1-132901jfas", failure 5
E1030 08:16:32.628798 1 controller.go:981] error syncing claim "111111a-d6fb-440a-9bb1-132901jfas": failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
I1030 08:16:32.628845 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"efs-claim", UID:"111111a-d6fb-440a-9bb1-132901jfas", APIVersion:"v1", ResourceVersion:"1724705", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
I1030 08:17:04.628997 1 controller.go:1332] provision "default/efs-claim" class "efs-sc": started
I1030 08:17:04.629193 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"efs-claim", UID:"111111a-d6fb-440a-9bb1-132901jfas", APIVersion:"v1", ResourceVersion:"1724705", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/efs-claim"
I1030 08:17:04.687957 1 controller.go:1099] Final error received, removing PVC 111111a-d6fb-440a-9bb1-132901jfas from claims in progress
W1030 08:17:04.687977 1 controller.go:958] Retrying syncing claim "111111a-d6fb-440a-9bb1-132901jfas", failure 6
E1030 08:17:04.688001 1 controller.go:981] error syncing claim "111111a-d6fb-440a-9bb1-132901jfas": failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
I1030 08:17:04.688044 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"efs-claim", UID:"111111a-d6fb-440a-9bb1-132901jfas", APIVersion:"v1", ResourceVersion:"1724705", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
</code></pre>
<p>From the events, I can see:</p>
<pre><code>$ kubectl get events
27m Warning FailedScheduling pod/efs-app skip schedule deleting pod: default/efs-app
7m38s Warning FailedScheduling pod/efs-app 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
7m24s Warning FailedScheduling pod/efs-app 0/2 nodes are available: 2 persistentvolumeclaim "efs-claim" is being deleted.
7m24s Warning FailedScheduling pod/efs-app skip schedule deleting pod: default/efs-app
17s Warning FailedScheduling pod/efs-app 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
27m Normal ExternalProvisioning persistentvolumeclaim/efs-claim waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
10m Normal ExternalProvisioning persistentvolumeclaim/efs-claim waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
11m Normal Provisioning persistentvolumeclaim/efs-claim External provisioner is provisioning volume for claim "default/efs-claim"
11m Warning ProvisioningFailed persistentvolumeclaim/efs-claim failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
7m47s Normal Provisioning persistentvolumeclaim/efs-claim External provisioner is provisioning volume for claim "default/efs-claim"
7m47s Warning ProvisioningFailed persistentvolumeclaim/efs-claim failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
74s Normal ExternalProvisioning persistentvolumeclaim/efs-claim waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
2m56s Normal Provisioning persistentvolumeclaim/efs-claim External provisioner is provisioning volume for claim "default/efs-claim"
2m56s Warning ProvisioningFailed persistentvolumeclaim/efs-claim failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
</code></pre>
<p><code>ServiceAccount</code> was created by:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: efs-csi-controller-sa
namespace: kube-system
labels:
app.kubernetes.io/name: aws-efs-csi-driver
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/AmazonEKS_EFS_CSI_Driver_Policy
</code></pre>
<p>The <code>AmazonEKS_EFS_CSI_Driver_Policy</code> is the json from <a href="https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/v1.3.2/docs/iam-policy-example.json" rel="nofollow noreferrer">here</a>.</p>
<hr />
<h1>Example code</h1>
<p>storageclass.yaml</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: efs-sc
provisioner: efs.csi.aws.com
parameters:
provisioningMode: efs-ap
fileSystemId: fs-92107410
directoryPerms: "700"
gidRangeStart: "1000" # optional
gidRangeEnd: "2000" # optional
basePath: "/dynamic_provisioning" # optional
</code></pre>
<p>pod.yaml</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: efs-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: efs-sc
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
name: efs-app
spec:
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: efs-claim
</code></pre>
| Miantian | <p>Posted community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p>Based on @Miantian comment:</p>
<blockquote>
<p>The reason was the efs driver image is using the different region from mine. I changed to the right one and it works.</p>
</blockquote>
<p>You can find steps to setup the Amazon EFS CSI driver in the proper region in <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">this documentation</a>.</p>
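<p>For illustration, pointing the Helm chart at the registry of your own region typically looks roughly like this (the account ID differs for some regions, e.g. China and GovCloud - check the AWS container image registry table for your region first):</p>
<pre><code>helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --set image.repository=602401143452.dkr.ecr.<your-region>.amazonaws.com/eks/aws-efs-csi-driver
</code></pre>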
| Mikolaj S. |
<p>Is there a way for a Kubernetes ClusterIP Service to have a network alias, other than its <code>metadata.name</code> field value?</p>
<p>Docker-compose has a similar functionality with <a href="https://docs.docker.com/compose/compose-file/compose-file-v3/#aliases" rel="nofollow noreferrer">network aliases</a>.</p>
| xenosdio | <p>If I understand your question correctly, you can also configure HostAliases for a Pod under <code>.spec.hostAliases</code>, which adds entries to the Pod's <code>/etc/hosts</code> file. Look at the example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: hostaliases-pod
spec:
restartPolicy: Never
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "foo.local"
- "bar.local"
- ip: "10.1.2.3"
hostnames:
- "foo.remote"
- "bar.remote"
containers:
- name: cat-hosts
image: busybox
command:
- cat
args:
- "/etc/hosts"
</code></pre>
<blockquote>
<p>In addition to the default boilerplate, you can add additional entries to the <code>hosts</code> file. For example: to resolve <code>foo.local</code>, <code>bar.local</code> to <code>127.0.0.1</code> and <code>foo.remote</code>, <code>bar.remote</code> to <code>10.1.2.3</code>, you can configure HostAliases for a Pod under <code>.spec.hostAliases</code>.</p>
</blockquote>
<p>You can find more information about Host Aliases <a href="https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/" rel="nofollow noreferrer">here</a>:</p>
| Mikołaj Głodziak |
<p>I'm trying to deploy a simple REST API written in Golang to AWS EKS.</p>
<p>I created an EKS cluster on AWS using Terraform and applied the AWS load balancer controller Helm chart to it.</p>
<p>All resources in the cluster look like:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/aws-load-balancer-controller-5947f7c854-fgwk2 1/1 Running 0 75m
kube-system pod/aws-load-balancer-controller-5947f7c854-gkttb 1/1 Running 0 75m
kube-system pod/aws-node-dfc7r 1/1 Running 0 120m
kube-system pod/aws-node-hpn4z 1/1 Running 0 120m
kube-system pod/aws-node-s6mng 1/1 Running 0 120m
kube-system pod/coredns-66cb55d4f4-5l7vm 1/1 Running 0 127m
kube-system pod/coredns-66cb55d4f4-frk6p 1/1 Running 0 127m
kube-system pod/kube-proxy-6ndf5 1/1 Running 0 120m
kube-system pod/kube-proxy-s95qk 1/1 Running 0 120m
kube-system pod/kube-proxy-vdrdd 1/1 Running 0 120m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 127m
kube-system service/aws-load-balancer-webhook-service ClusterIP 10.100.202.90 <none> 443/TCP 75m
kube-system service/kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 127m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/aws-node 3 3 3 3 3 <none> 127m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 <none> 127m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/aws-load-balancer-controller 2/2 2 2 75m
kube-system deployment.apps/coredns 2/2 2 2 127m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/aws-load-balancer-controller-5947f7c854 2 2 2 75m
kube-system replicaset.apps/coredns-66cb55d4f4 2 2 2 127m
</code></pre>
<p>I can run the application locally with Go and with Docker. But releasing this on AWS EKS always throws <code>CrashLoopBackOff</code>.</p>
<p>Running <code>kubectl describe pod PODNAME</code> shows:</p>
<pre><code>Name: go-api-55d74b9546-dkk9g
Namespace: default
Priority: 0
Node: ip-172-16-1-191.ec2.internal/172.16.1.191
Start Time: Tue, 15 Mar 2022 07:04:08 -0700
Labels: app=go-api
pod-template-hash=55d74b9546
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 172.16.1.195
IPs:
IP: 172.16.1.195
Controlled By: ReplicaSet/go-api-55d74b9546
Containers:
go-api:
Container ID: docker://a4bc07b60c85fd308157d967d2d0d688d8eeccfe4c829102eb929ca82fb25595
Image: saurabhmish/golang-hello:latest
Image ID: docker-pullable://saurabhmish/golang-hello@sha256:f79a495ad17710b569136f611ae3c8191173400e2cbb9cfe416e75e2af6f7874
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Mar 2022 07:09:50 -0700
Finished: Tue, 15 Mar 2022 07:09:50 -0700
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jt4gp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-jt4gp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m31s default-scheduler Successfully assigned default/go-api-55d74b9546-dkk9g to ip-172-16-1-191.ec2.internal
Normal Pulled 7m17s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 12.77458991s
Normal Pulled 7m16s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 110.127771ms
Normal Pulled 7m3s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 109.617419ms
Normal Created 6m37s (x4 over 7m17s) kubelet Created container go-api
Normal Started 6m37s (x4 over 7m17s) kubelet Started container go-api
Normal Pulled 6m37s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 218.952336ms
Normal Pulling 5m56s (x5 over 7m30s) kubelet Pulling image "saurabhmish/golang-hello:latest"
Normal Pulled 5m56s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 108.105083ms
Warning BackOff 2m28s (x24 over 7m15s) kubelet Back-off restarting failed container
</code></pre>
<p>Running <code>kubectl logs PODNAME</code> and <code>kubectl logs PODNAME -c go-api</code> shows <code>standard_init_linux.go:228: exec user process caused: exec format error</code></p>
<p>Manifests:</p>
<p><code>go-deploy.yaml</code> ( This is the <a href="https://hub.docker.com/repository/docker/saurabhmish/golang-hello" rel="nofollow noreferrer">Docker Hub Image</a> with documentation )</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-api
labels:
app: go-api
spec:
replicas: 2
selector:
matchLabels:
app: go-api
strategy: {}
template:
metadata:
labels:
app: go-api
spec:
containers:
- name: go-api
image: saurabhmish/golang-hello:latest
ports:
- containerPort: 3000
resources: {}
</code></pre>
<p><code>go-service.yaml</code></p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: go-api
spec:
selector:
app: go-api
type: NodePort
ports:
- protocol: TCP
port: 80
targetPort: 3000
</code></pre>
<p>How can I fix this error ?</p>
| Saurabh | <p>Posting this as Community wiki for better visibility.
Feel free to expand it.</p>
<hr />
<p>Thanks to @David Maze, who pointed to the solution. There is an article <strong>'Build Intel64-compatible Docker images from Mac M1 (ARM)'</strong> (by Beppe Catanese) <a href="https://medium.com/geekculture/from-apple-silicon-to-heroku-docker-registry-without-swearing-36a2f59b30a3" rel="nofollow noreferrer">here</a>.<br />
This article describes the underlying problem well.</p>
<h4>You are developing/building on the ARM architecture (Mac M1), but you deploy the docker image to an x86-64 architecture-based Kubernetes cluster.</h4>
<p>Solution:</p>
<h4>Option A: use <code>buildx</code></h4>
<p><a href="https://github.com/docker/buildx" rel="nofollow noreferrer">Buildx</a> is a Docker plugin that allows, amongst other features, to build images for various target platforms.</p>
<pre><code>$ docker buildx build --platform linux/amd64 -t myapp .
</code></pre>
<h4>Option B: set <code>DOCKER_DEFAULT_PLATFORM</code></h4>
<p>The DOCKER_DEFAULT_PLATFORM environment variable permits to set the default platform for the commands that take the --platform flag.</p>
<pre><code>export DOCKER_DEFAULT_PLATFORM=linux/amd64
</code></pre>
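<p>Either way, you can verify what you actually built before pushing and deploying it (using the image tag from your deployment):</p>
<pre><code># print the OS/architecture the local image was built for
docker image inspect saurabhmish/golang-hello:latest --format '{{.Os}}/{{.Architecture}}'
# expect linux/amd64 for an x86-64 EKS node group; linux/arm64 leads to "exec format error"
</code></pre>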
| mozello |
<p>I would like to know if it is possible to isolate namespaces on Azure Kubernetes Service. Right now, if I give an RBAC role to my colleagues they can see all namespaces; I would like to segregate namespaces per department, e.g. data can see only the data namespace, dev can see only the dev namespace, etc.</p>
<p>Is it possible?</p>
<p>Thanks</p>
| Emanuele | <p>Yes, you have to enable <code>AKS-managed Azure Active Directory</code>, <code>Role-based access control (RBAC)</code> & <a href="https://learn.microsoft.com/en-us/azure/aks/azure-ad-rbac?toc=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json#create-the-aks-cluster-resources-for-app-devs" rel="nofollow noreferrer">Azure RBAC for Kubernetes Authorization</a>. There are 2 options:</p>
<pre><code>az aks create \
-g myResourceGroup \
-n myManagedCluster \
--enable-aad \
--enable-azure-rbac
</code></pre>
<p>1st Option:</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: data
labels:
kubernetes.io/metadata.name: data
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: data-view-access
namespace: data
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: Group
namespace: data
name: <GROUP_OBJECT_ID>
</code></pre>
<p>2nd Option is to use Azure Custom Roles as explained <a href="https://learn.microsoft.com/en-us/azure/aks/manage-azure-rbac#create-role-assignments-for-users-to-access-cluster" rel="nofollow noreferrer">here</a> and also with this example from user yk1 :</p>
<pre><code>az role assignment create \
--role "Azure Kubernetes Service RBAC Reader" \
--assignee <AAD-ENTITY-ID> \
--scope $AKS_ID/namespaces/<namespace-name>
</code></pre>
<p>NOTE: All users must be members of the <code>Azure Kubernetes Service Cluster User Role</code> in order to execute <code>az aks get-credentials</code>.</p>
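<p>If needed, that role can be assigned the same way, for example:</p>
<pre><code>az role assignment create \
  --role "Azure Kubernetes Service Cluster User Role" \
  --assignee <GROUP_OBJECT_ID> \
  --scope $(az aks show -g <resource group> -n <aks cluster name> --query id -o tsv)
</code></pre>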
| Philip Welz |
<p>I have 3 masters, several workers and Calico as the CNI. Pods created on masters get <code>172.17.0.*</code> IPs, which is the Docker network. Pods on workers get IPs from the Calico pool as they should. <code>calicoctl</code> shows <code>status ok</code> for all nodes.</p>
<p>Also, I have the same kubelet parameters and config files everywhere, and I don't have any pod CIDR settings there. The <code>kube-system</code> Calico pods are up and running and their logs do not show any reason. How can I set the correct CIDR for pods on masters?</p>
<pre><code>kubectl describe node master1 | egrep -i 'cidr|calico':
projectcalico.org/IPv4Address: 192.168.0.26/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.129.40.64
PodCIDR: 10.128.0.0/24
PodCIDRs: 10.128.0.0/24
</code></pre>
<p>pod details:</p>
<pre><code>kubectl describe po mypod | egrep -i 'master|ip'
Node: master1/192.168.0.26
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
</code></pre>
| mzv | <p>Posted community wiki based on comments for better visibility. Feel free to expand it.</p>
<hr />
<p>The solution for the issue is to add flag <code>--network-plugin=cni</code> to Kubelet startup options on the masters nodes (from the @mzv comment):</p>
<blockquote>
<p>I needed to add "--network-plugin=cni" to kubelet startup options</p>
</blockquote>
<p>Instructions on how to add this flag to the Kubelet <a href="https://stackoverflow.com/questions/49278317/adding-network-flag-network-plugin-cni-to-kubelet/49285339#49285339">can be found here</a>.</p>
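<p>As a rough sketch (the file paths and variable names vary by distro and kubeadm version, and the flag only exists on kubelet versions that still ship the dockershim), it usually boils down to something like:</p>
<pre><code># e.g. in /var/lib/kubelet/kubeadm-flags.env (or /etc/default/kubelet via KUBELET_EXTRA_ARGS)
KUBELET_KUBEADM_ARGS="--network-plugin=cni ...existing flags..."

# reload and restart the kubelet on each master
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>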
| Mikolaj S. |
<p>I have recently started working on Docker, K8s and Argo. I am currently working on creating 2 containerized applications and then linking them up in such a way that they can run on Argo. The 2 containerized applications would be as follows:</p>
<ol>
<li><p><code>ReadDataFromAFile</code>: This container would have the code that would receive a url/file with some random names. It would separate out all those names and return an array/list of names.</p>
</li>
<li><p><code>PrintData</code>: This container would accept the list of names and then print them out with some business logic involved.</p>
</li>
</ol>
<p>I am currently not able to understand how to:</p>
<ol>
<li>Pass text/file to the <code>ReadData</code> Container.</li>
<li>Pass on the processed array of names from the first container to the second container.</li>
</ol>
<p>I have to write an Argo Workflow that would regularly perform these steps!</p>
| Manan Kapoor | <p>Posting this as Community wiki for better visibility with a general solution.
Feel free to expand it.</p>
<hr />
<p><strong>Since you don't need to store any artifacts, the best options to pass data between Kubernetes Pods are</strong> (as @David Maze mentioned in his comment):</p>
<h4>1. Pass the data in the body of HTTP POST requests.</h4>
<p>There is a good article with examples of HTTP POST requests <a href="https://reqbin.com/Article/HttpPost" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>POST is an HTTP method designed to send data to the server from an HTTP client. The HTTP POST method requests the web server accept the data enclosed in the body of the POST message.</p>
</blockquote>
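<p>For example, the first container could push its result to the second one with a plain HTTP POST - a hypothetical sketch (the service name, port, path and payload below are made up for illustration):</p>
<pre><code># inside the ReadDataFromAFile container, after parsing the names
curl -X POST http://printdata-svc:8080/names \
  -H "Content-Type: application/json" \
  -d '["alice", "bob", "carol"]'
</code></pre>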
<h4>2. Use a message broker, for example, <a href="https://www.rabbitmq.com/" rel="nofollow noreferrer">RabbitMQ</a>.</h4>
<blockquote>
<p>RabbitMQ is the most widely deployed open source message broker. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.</p>
</blockquote>
<p>RabbitMQ provides a <a href="https://www.rabbitmq.com/devtools.html" rel="nofollow noreferrer">wide range of developer tools</a> for most popular languages.</p>
<p>You can install RabbitMQ into the Kubernetes cluster using the <a href="https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq-cluster-operator" rel="nofollow noreferrer">Bitnami Helm chart</a>.</p>
| mozello |
<p>I am looking for the best way to integrate ACR with AKS for a production environment. It seems there are multiple ways, like during installation, after installation, using a service principal, using an image pull secret, etc.</p>
<p>So for our production environment I am looking for the most recommended option, where the requirements are as follows.</p>
<ul>
<li>Is it mandatory to attach ACR during AKS creation itself?</li>
<li>What will be the advantage of integrating ACR along with the AKS installation itself? (It seems we don't have to pass the image pull secret to the pod spec in that case, while for the other options we need to.)</li>
<li>What is another way to integrate ACR with AKS? Will the <code>az aks update</code> command help in this case? If yes, what will be the difference from the previous method where we integrated during AKS installation?</li>
<li>If I want to set up a secondary AKS cluster in another region, but need to connect to the geo-replicated instance of the primary ACR, how can I get it done? In this case, is it mandatory to attach the ACR during AKS installation, or is attaching it later, post installation, also good to go?</li>
</ul>
| Vowneee | <p>IMHO the best way is Azure RBAC. You don't need to attach the ACR while creating the AKS. You can leverage Azure RBAC and assign the "AcrPull" role to the kubelet identity of your nodepool. This can be done for every ACR you have:</p>
<pre><code>export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export ACR_ID=$(az acr show -g <resource group> -n <acr name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "AcrPull" --scope $ACR_ID
</code></pre>
<p>Terraform:</p>
<pre><code> resource "azurerm_role_assignment" "example" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
}
</code></pre>
| Philip Welz |
<p>I have applied Kiali in Istio 1.10.2 by using <a href="https://raw.githubusercontent.com/istio/istio/1.10.2/samples/addons/kiali.yaml" rel="nofollow noreferrer">this</a>. Now I am trying to secure it by filtering by source IP address. I tried using an authorization policy but it didn't work. It keeps allowing everyone when <a href="https://istio.io/latest/docs/reference/config/security/authorization-policy/" rel="nofollow noreferrer">it should deny any request that is not in the ALLOW policy</a>.</p>
<p>AuthorizationPolicy:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: kiali-ingress-policy-allow
namespace: istio-system
spec:
selector:
matchLabels:
app: kiali
action: ALLOW
rules:
- from:
- source:
remoteIpBlocks: ["10.43.212.247/32","10.43.212.242/32"]
</code></pre>
<p>VirtualService:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali
namespace: istio-system
spec:
hosts:
- "kiali.myinternaldomain.local"
gateways:
- istio-system/my-internal-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: kiali
port:
number: 20001
</code></pre>
<p>Installed ISTIO using default profile and these extra parameters:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
k8s:
overlays:
- apiVersion: apps/v1
kind: Deployment
name: istio-ingressgateway
patches:
- path: kind
value: DaemonSet
- path: spec.strategy
- path: spec.updateStrategy
value:
rollingUpdate:
maxUnavailable: 50%
type: RollingUpdate
egressGateways:
- name: istio-egressgateway
enabled: true
k8s:
hpaSpec:
minReplicas: 2
pilot:
k8s:
hpaSpec:
minReplicas: 2
values:
gateways:
istio-ingressgateway:
autoscaleEnabled: false
env:
ISTIO_META_HTTP10: '1'
pilot:
env:
PILOT_HTTP10: '1'
</code></pre>
| brgsousa | <p>I managed to set up a working <code>ALLOW</code> policy with Istio 1.10.2 on <a href="https://cloud.google.com/compute" rel="nofollow noreferrer">GCP VMs</a>; the cluster is set up using kubeadm with the Calico CNI plugin. I used this documentation - <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/" rel="nofollow noreferrer">Ingress Gateway</a>.</p>
<p>Note: I changed <code>istio-ingressgateway</code> service type to <code>NodePort</code> service instead of <code>LoadBalancer</code>, but this does not matter in this case.</p>
<hr />
<p>My network design is following:</p>
<ul>
<li>First VM - Kubernetes node - 10.xxx.0.2 address</li>
<li>Second VM - 10.xxx.0.3 address</li>
<li>Third VM - 10.xxx.0.4 address - this address will be in the ALLOW policy</li>
</ul>
<p>I deployed the following NGINX service and deployment...</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: nginx-svc
labels:
app: nginx
spec:
ports:
- name: http
port: 80
targetPort: 80
selector:
app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deply
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
version: v1
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
</code></pre>
<p>... and following gateway and virtual service definitions:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginx-vs
spec:
hosts:
- "my-test.com"
gateways:
- gateway
http:
- route:
- destination:
host: nginx-svc
port:
number: 80
</code></pre>
<p>Take a look at the <code>nginx-svc</code> service name in virtual service definition (you should use the one that you want to configure).</p>
<p>From every VM in the network I can run following command:</p>
<pre><code>user@vm-istio:~$ curl 10.xxx.0.2:31756 -H "Host: my-test.com"
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
</code></pre>
<p>So it's working properly.</p>
<p>I <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#before-you-begin" rel="nofollow noreferrer">enabled RBAC debugging for the ingress gateway pod</a>...</p>
<pre><code>kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do istioctl proxy-config log "$pod" -n istio-system --level rbac:debug; done
</code></pre>
<p>...and I set <code>externalTrafficPolicy</code> to <code>Local</code> in service <code>istio-ingressgateway</code> to <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#source-ip-address-of-the-original-client" rel="nofollow noreferrer">preserve an IP address from the original client</a>:</p>
<pre><code>kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>
<p>In the logs of the Istio ingress gateway controller I can see that requests are coming in and being processed:</p>
<pre><code>kubectl logs istio-ingressgateway-5d57955454-9mz4b -n istio-system -f
...
[2021-11-24T13:39:47.220Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 615 1 0 "10.xxx.0.3" "curl/7.64.0" "69953f69-8a46-9e2a-a5a7-36861bae4a77" "my-test.com" "192.168.98.74:80" outbound|80||nginx-svc.default.svc.cluster.local 192.168.98.72:60168 192.168.98.72:8080 10.xxx.0.3:55160 - -
[2021-11-24T13:39:48.980Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 615 1 0 "10.xxx.0.4" "curl/7.64.0" "89284e11-42f9-9e84-b256-d3ea37311e92" "my-test.com" "192.168.98.74:80" outbound|80||nginx-svc.default.svc.cluster.local 192.168.98.72:60192 192.168.98.72:8080 10.xxx.0.4:35800 - -
</code></pre>
<p>Now, I will apply IP-based allow list to allow only second VM address to host <code>my-test.com</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: ingress-policy
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
action: ALLOW
rules:
- from:
- source:
ipBlocks: ["10.xxx.0.4"]
to:
- operation:
hosts:
- "my-test.com"
</code></pre>
<p>On the VM with address 10.xxx.0.4 curl is working as before, but on the VM with address 10.xxx.0.3 we can notice following:</p>
<pre><code>user@vm-istio:~$ curl 10.xxx.0.2:31756 -H "Host: my-test.com"
RBAC: access denied
</code></pre>
<p>So it's working as expected.</p>
<p>In the logs of the Istio ingress gateway controller we can notice that the request is denied (look for the logs related to RBAC):</p>
<pre><code>kubectl logs istio-ingressgateway-5d57955454-9mz4b -n istio-system -f
2021-11-24T14:05:11.382613Z debug envoy rbac checking request: requestedServerName: , sourceIP: 10.xxx.0.3:55194, directRemoteIP: 10.xxx.0.3:55194, remoteIP: 10.xxx.0.3:55194,localAddress: 192.168.98.72:8080, ssl: none, headers: ':authority', 'my-test.com'
':path', '/'
':method', 'GET'
':scheme', 'http'
'user-agent', 'curl/7.64.0'
'accept', '*/*'
'x-forwarded-for', '10.xxx.0.3'
'x-forwarded-proto', 'http'
'x-envoy-internal', 'true'
'x-request-id', '63c0f55c-5545-92a0-80fc-aa2a5a63bf04'
'x-envoy-decorator-operation', 'nginx-svc.default.svc.cluster.local:80/*'
'x-envoy-peer-metadata', '...'
'x-envoy-peer-metadata-id', 'router~192.168.98.72~istio-ingressgateway-5d57955454-9mz4b.istio-system~istio-system.svc.cluster.local'
, dynamicMetadata:
2021-11-24T14:05:11.382662Z debug envoy rbac enforced denied, matched policy none
[2021-11-24T14:05:11.382Z] "GET / HTTP/1.1" 403 - rbac_access_denied_matched_policy[none] - "-" 0 19 0 - "10.xxx.0.3" "curl/7.64.0" "63c0f55c-5545-92a0-80fc-aa2a5a63bf04" "my-test.com" "-" outbound|80||nginx-svc.default.svc.cluster.local - 192.168.98.72:8080 10.xxx.0.3:55194 - -
</code></pre>
<p>Especially this part:</p>
<pre><code>[2021-11-24T14:05:11.382Z] "GET / HTTP/1.1" 403 - rbac_access_denied_matched_policy[none]
</code></pre>
<p>It clearly shows that our policy is working.</p>
<p>Example of the log that allowed our request:</p>
<pre><code>2021-11-25T10:58:34.717495Z debug envoy rbac enforced allowed, matched policy ns[istio-system]-policy[ingress-policy]-rule[0]
[2021-11-25T10:58:34.717Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 615 25 23 "10.xxx.0.4" "curl/7.64.0" "889e3326-093c-94b1-b856-777c06cbe2b7" "my-test.com" "192.168.98.75:80" outbound|80||nginx-svc.default.svc.cluster.local 192.168.98.72:46190 192.168.98.72:8080 10.xxx.0.4:37148 - -
</code></pre>
<p>What's important:</p>
<ul>
<li>make sure that in the logs of the <code>istio-ingressgateway-{..}</code> pod you can see the source IP address of the other VM, so the policy can allow / block it</li>
<li>my advice is to set up the AuthorizationPolicy <strong>for external traffic</strong> at the edge of the service mesh - which is the Istio ingress gateway; that’s why my AuthorizationPolicy is using a “selector” matching the label <code>app: istio-ingressgateway</code> (not the Kiali one)</li>
<li><a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#ip-based-allow-list-and-deny-list" rel="nofollow noreferrer">use a proper type of IPs - <code>ipBlocks</code> vs <code>remoteIpBlocks</code></a>. Base on your setup, it should be <code>ipBlocks</code>:</li>
</ul>
<blockquote>
<p><strong>When to use <code>ipBlocks</code> vs. <code>remoteIpBlocks</code>:</strong> If you are using the X-Forwarded-For HTTP header or the Proxy Protocol to determine the original client IP address, then you should use <code>remoteIpBlocks</code> in your <code>AuthorizationPolicy</code>. If you are using <code>externalTrafficPolicy: Local</code>, then you should use <code>ipBlocks</code> in your <code>AuthorizationPolicy</code>.</p>
</blockquote>
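<p>For reference, if the gateway were exposed through a LoadBalancer and you wanted to keep using <code>ipBlocks</code>, the client source IP usually has to be preserved by setting <code>externalTrafficPolicy: Local</code> on the gateway service - a sketch assuming the default <code>istio-ingressgateway</code> service in <code>istio-system</code>:</p>
<pre><code>kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>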
| Mikolaj S. |
<p>I am new to the whole Kubernetes-Helm thing, please bear with me and I'll try to give as much clarity to my question as possible</p>
<p>So I have this ConfigMap.yaml file that does this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: envread-settings
namespace: {{ .Values.environment.namespace }}
data:
appsettings.environment.json: |-
{
"featureBranch": {{ .Values.component.vars.featureId | quote }},
"BFFServiceUrl": {{ .Values.environment.BFFServiceUrl | quote }}
}
---
</code></pre>
<p>Where the Values are:</p>
<ul>
<li>.Values.component.vars.featureId = 123</li>
<li>.Values.environment.BFFServiceUrl = api.dev.integrations/bff-service</li>
</ul>
<p>This creates an appsettings.environment.json file in a volume path I specified. I need to dynamically create this json file because I need to insert the above variables in there (can't use environment variables sadly for my app).</p>
<p>When I ssh into the terminal and vim everything looks dandy on that file i.e:</p>
<pre><code>{
"featureBranch": "123",
"BFFServiceUrl": "api.dev.integration/bff-service"
}
</code></pre>
<p>But when I curl this file I get:</p>
<pre><code>{
"featureBranch": "123",
</code></pre>
<p>and the same can be said when I browse directly to this file (I am running an Angular SPA app using ASP.NET Core 3.1).</p>
<p>Is there something horribly wrong I am doing in the yaml file?</p>
<p><strong>Edit</strong>
The curl command that I am running is:
<code>curl https://api.integrations.portal/assets/appsettings.json</code>.
There is an NGINX Ingress running in between the request and response.</p>
| user13731870 | <p>I used to have a similar problem. In my case, curl returned error code 18. You can <a href="https://stackoverflow.com/questions/7248031/meaning-of-dollar-question-mark-in-shell-scripts">check this</a> for yourself by running your <code>curl</code> and then <code>echo $?</code>. As I mentioned I had error code 18 which means:</p>
<blockquote>
<p>CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that doesn't match the previously given size.</p>
</blockquote>
<p><a href="https://curl.se/libcurl/c/libcurl-errors.html" rel="nofollow noreferrer">Here</a> you will find a link to the description of any errors that curl may return. In case you get another error.</p>
<p>This seems to be a server-side issue. You might try to work around it by forcing an HTTP/1.0 connection (to avoid the chunked transfer encoding which might cause this problem) with the <code>--http1.0</code> option.</p>
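<p>A quick sketch using the URL from your edit:</p>
<pre><code># inspect curl's exit code after the truncated response
curl https://api.integrations.portal/assets/appsettings.json
echo $?   # 18 means CURLE_PARTIAL_FILE (transfer shorter than expected)

# retry while forcing HTTP/1.0 to avoid chunked transfer encoding
curl --http1.0 https://api.integrations.portal/assets/appsettings.json
</code></pre>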
<p>Additionally, if you have a Reverse Proxy or Load Balancer using Nginx and your /var (or your partition where Nginx logging happens) is full, Nginx's server response might be cut off.</p>
<p>You can also read <a href="https://stackoverflow.com/questions/10557927/server-response-gets-cut-off-half-way-through">this question</a>.</p>
| Mikołaj Głodziak |
<p>We use Terraform to create all of our infrastructure resources then we use Helm to deploy apps in our cluster.</p>
<p>We're looking for a way to streamline the creation of infra and apps, so currently this is what we do:</p>
<ul>
<li>Terraform creates kubernetes cluster, VPC network etc and a couple of static public IP addresses</li>
<li>We have to wait for the dynamic creation of these static IPs by Terraform to complete</li>
<li>We find out what the public IP is that's been created, and manually add that to our <code>loadBalancerIP:</code> spec on our ingress controller helm chart</li>
</ul>
<p>If at all possible, I'd like to store the generated public IP somewhere via terraform (config map would be nice), and then reference that in the ingress service <code>loadBalancerIP:</code> spec, so the end to end process is sorted.</p>
<p>I know configmaps are for pods and I don't <em>think</em> they can be used for kubernetes service objects - does anyone have any thoughts/ideas on how I could achieve this?</p>
| sc-leeds | <p>I suggest creating a static public IP in GCP using terraform by specifying the name you want like this:</p>
<pre class="lang-json prettyprint-override"><code>module "address" {
source = "terraform-google-modules/address/google"
version = "3.0.0"
project_id = "your-project-id"
region = "your-region"
address_type = "EXTERNAL"
names = [ "the-name-you-want" ]
global = true
}
</code></pre>
<p>You can then refer to this static public IP <code>name</code> in the Kubernetes ingress resource by specifying the annotations <code>kubernetes.io/ingress.global-static-ip-name: "the-name-you-want"</code> like this:</p>
<pre class="lang-json prettyprint-override"><code>resource "kubernetes_ingress_v1" "example" {
wait_for_load_balancer = true
metadata {
name = "example"
namespace = "default"
annotations = {
"kubernetes.io/ingress.global-static-ip-name" = "the-name-you-want"
}
}
spec {
....
</code></pre>
<p>This will create ingress resource 'example' in GKE and attach static public IP named 'the-name-you-want' to it.</p>
| mozello |
<p>Say I have several services in kubernetes. And I have one entry point to the cluster, it's a public facing service that is meant to validate the JWT token (from AWS cognito).</p>
<p>The entry point routes the request to an internal service, and that in turn usually makes more requests to other internal services.</p>
<p>My question is: is it enough to validate the JWT only once and make other communications without any form of authentication, just passing the user id (or any other data needed)? Or do I need to have some form of authentication when making HTTP requests between services? If so, which? Should I validate the JWT again? Should I have server certificates or something like that?</p>
| Moshe Shaham | <p>Posted a community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p>As David Szalai’s comment mentioned, it depends on your security and project requirements:</p>
<blockquote>
<p>If you go with a zero-trust model inside k8s, you can use mTLS with a service mesh between services. Passing JWTs is also good if you need to propagate user-auth info to different services.</p>
</blockquote>
<blockquote>
<p>In the current (project) we’ll use mTLS with a service mesh, and send JWTs along with requests where the receiver needs info about user, and parse/validate it there again.</p>
</blockquote>
<p>If your apps do not have built-in authentication / authorization mechanisms, you may try Istio - check these articles:</p>
<ul>
<li><a href="https://istio.io/latest/docs/concepts/security/" rel="nofollow noreferrer">Istio documentation - Security</a></li>
<li><a href="https://medium.com/intelligentmachines/istio-jwt-step-by-step-guide-for-micro-services-authentication-690b170348fc" rel="nofollow noreferrer">Istio & JWT: Step by Step Guide for Micro-Services Authentication</a>.</li>
</ul>
<p>Also check these articles about authentication in Kubernetes:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Kubernetes Docs - Authenticating</a></li>
<li><a href="https://learnk8s.io/microservices-authentication-kubernetes" rel="nofollow noreferrer">Authentication between microservices using Kubernetes identities</a></li>
</ul>
<p><strong>EDIT:</strong></p>
<p><em>Why?</em></p>
<p>In security, <a href="https://wso2.com/blogs/thesource/securing-microservices-in-a-zero-trust-environment/" rel="nofollow noreferrer">we say</a>:</p>
<blockquote>
<p>It’s a common principle in security that the strength of a given system is only as strong as the strength of its weakest link.</p>
</blockquote>
<p>This article - <a href="https://cloud.google.com/blog/products/networking/the-service-mesh-era-securing-your-environment-with-istio" rel="nofollow noreferrer">The service mesh era: Securing your environment with Istio</a> mentions some possible attacks that can be done in insecure system, like man-in-the middle attacks and replayed attacks:</p>
<blockquote>
<p>An approach to mitigate this risk is to ensure that peers are only authenticated using non-portable identities. <a href="https://cloud.google.com/istio/docs/istio-on-gke/installing#choose_a_security_option" rel="nofollow noreferrer">Mutual TLS authentication</a> (mTLS) ensures that peer identities are bound to the TLS channel and cannot be replayed. It also ensures that all communication is encrypted in transit, and mitigates the risk of man-in-the middle attacks and replay attacks by the destination service. While mutual TLS helps strongly identify the network peer, end user identities (or identity of origin) can still be propagated using bearer tokens like JWT.</p>
</blockquote>
<blockquote>
<p>Given the proliferation of threats within the production network and the increased points of privileged access, it is increasingly necessary to adopt a zero-trust network security approach for microservices architectures. This approach requires that all accesses are strongly authenticated, authorized based on context, logged, and monitored … and the controls must be optimized for dynamic production environments.</p>
</blockquote>
<p>Without adding additional security layers (like mTLS and service mesh in a cluster), we are assuming that communication between microservices in the cluster is done in fully trusted network, <a href="https://wso2.com/blogs/thesource/securing-microservices-in-a-zero-trust-environment/" rel="nofollow noreferrer">so they can give an attacker possibility to exploit business assets via network</a>:</p>
<blockquote>
<p>Many microservices deployments today, mostly worry about the edge security by exposing the microservices via APIs and protecting those with an API gateway at the edge. Once a request passes the API gateway, the communications among microservices assume a trusted network, and expose endless possibilities to an attacker gaining access to the network to exploit all valuable business assets exposed by the microservices.</p>
</blockquote>
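<p>If you go the Istio route, a minimal sketch of enforcing strict mTLS for the whole mesh could look like this (placing the policy in the <code>istio-system</code> root namespace makes it mesh-wide; this assumes Istio is installed and sidecars are injected):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # sidecars only accept mutually authenticated (mTLS) traffic
</code></pre>
<p>With that in place, service-to-service calls are authenticated and encrypted by the sidecars, and you can still forward the JWT to the services that need the end-user identity.</p>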
| Mikolaj S. |
<p>I have deployed MongoDB on Kubernetes using mongodb-operator (version: 1.9.2). The ReplicaSet consists of 3 instances: 1 primary and 2 secondaries. I am able to access the ReplicaSet from any of the Kubernetes pods, but if I try to do the same from the local machine with the pymongo client I get the error shown below. Can anyone suggest the reason for such behavior?</p>
<pre><code>pymongo.errors.ServerSelectionTimeoutError: mongodb-2.mongodb-svc.mongodb-new.svc.cluster.local:27017: [Errno -2] Name or service not known,mongodb-0.mongodb-svc.mongodb-new.svc.cluster.local:27017: [Errno -2] Name or service not known,mongodb-1.mongodb-svc.mongodb-new.svc.cluster.local:27017: [Errno -2] Name or service not known, Timeout: 30s, Topology Description: <TopologyDescription id: 610292f511a5060cc91f8a11, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('mongodb-0.mongodb-svc.mongodb-new.svc.cluster.local', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('mongodb-0.mongodb-svc.mongodb-new.svc.cluster.local:27017: [Errno -2] Name or service not known',)>, <ServerDescription ('mongodb-1.mongodb-svc.mongodb-new.svc.cluster.local', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('mongodb-1.mongodb-svc.mongodb-new.svc.cluster.local:27017: [Errno -2] Name or service not known',)>, <ServerDescription ('mongodb-2.mongodb-svc.mongodb-new.svc.cluster.local', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('mongodb-2.mongodb-svc.mongodb-new.svc.cluster.local:27017: [Errno -2] Name or service not known',)>]>
</code></pre>
| steve steel | <p>I have posted Community wiki answer for better visibility.</p>
<p>As <a href="https://stackoverflow.com/users/13866186/steve-steel">steve steel</a> has mentioned in the comment, he has found the solution in <a href="https://stackoverflow.com/questions/44730112/exposing-mongodb-on-kubernetes-statefulsets-to-external-world">this link</a>:</p>
<blockquote>
<p>I have found this link helpful: <a href="https://stackoverflow.com/questions/44730112/exposing-mongodb-on-kubernetes-statefulsets-to-external-world">Exposing mongodb on kubernetes statefulsets to external world</a></p>
</blockquote>
| Mikołaj Głodziak |
<p>What I am trying to do, is to deploy an API on Kubernetes and, using Google-managed SSL certificates, redirect it to point on my domain on HTTPS protocol.</p>
<p>I have already spent some time on it and done a lot of debugging, but there is one thing that I can't succeed to fix.</p>
<p>What is already done and works:</p>
<ul>
<li>Static IP is reserved</li>
<li>Google-managed SSL certificate is Active and verified</li>
<li>Both Ingress and Service NodePort are deployed using <strong>443 HTTPS protocol</strong>.</li>
<li>Health Checks I managed to put on HTTPS as well.</li>
</ul>
<p>Problem:</p>
<ul>
<li>I cannot change the default configuration for loadbalancer backend service. It is always on HTTP.</li>
</ul>
<p><a href="https://i.stack.imgur.com/vFFXy.png" rel="nofollow noreferrer">Problematic place</a></p>
<p><strong>BUT</strong> if I change it manually to HTTPS, <strong>the API works as expected on my domain</strong> <em>api.mydomain.com</em>. The problem is that within 5 minutes, the default configuration is synced with the current configuration in K8s, and the protocol changes back to HTTP automatically.</p>
<p>My question: how can I set a default configuration to HTTPS for the backend service which will not be overwritten afterwards?</p>
<p>Here is the guide that I partially followed:</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#console" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#console</a></p>
<p>And my configurations for Ingress, Service and Health Check</p>
<p><strong>healthcheck.yaml</strong></p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: api-default-config
spec:
healthCheck:
checkIntervalSec: 60
timeoutSec: 60
healthyThreshold: 1
unhealthyThreshold: 10
type: HTTPS
requestPath: /
port: 31303
</code></pre>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: management-api-test-service
annotations:
cloud.google.com/backend-config: '{"default": "api-default-config"}'
spec:
type: NodePort
selector:
app: management-api-test
environment: test
ports:
- protocol: TCP
port: 443
targetPort: 5000
nodePort: 31303
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: api-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: api-test2
networking.gke.io/managed-certificates: test-cert
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.allow-http: "false"
spec:
defaultBackend:
service:
name: management-api-test-service
port:
number: 443
rules:
- host: api.mydomain.com
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: management-api-test-service
port:
number: 443
</code></pre>
<p><strong>kubectl describe svc management-api-test-service -n web-application</strong></p>
<pre><code>Name: management-api-test-service
Namespace: web-application
Labels: <none>
Annotations: cloud.google.com/backend-config: {"default": "api-default-config"}
Selector: app=management-api-test,environment=test
Type: NodePort
IP Families: <none>
IP: **.***.**.130
IPs: <none>
Port: <unset> 443/TCP
TargetPort: 5000/TCP
NodePort: <unset> 31303/TCP
Endpoints: **.***.*.13:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p><strong>kubectl describe ingress api-ingress -n web-application</strong></p>
<pre><code>Name: api-ingress
Namespace: web-application
Address: **.***.***.196
Default backend: management-api-test-service:443 (***.**.**.13:5000)
Rules:
Host Path Backends
---- ---- --------
api.mydomain.com
/* management-api-test-service:443 (***.**.**.13:5000)
Annotations: ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-blablabla
ingress.kubernetes.io/backends: {"k8s-be-31303--efb221b572e568cb":"HEALTHY"}
ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-1xka7p8q-web-application-api-ingress-5jc6y1ty
ingress.kubernetes.io/https-target-proxy: k8s2-ts-1xka7p8q-web-application-api-ingress-5jc6y1ty
ingress.kubernetes.io/ssl-cert: mcrt-blablabla
ingress.kubernetes.io/url-map: k8s2-um-1xka7p8q-web-application-api-ingress-5jc6y1ty
kubernetes.io/ingress.allow-http: false
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.global-static-ip-name: api-test2
networking.gke.io/managed-certificates: test-cert
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 5m2s (x22 over 155m) loadbalancer-controller Scheduled for sync
</code></pre>
<p><strong>I TRIED</strong>:</p>
<ul>
<li><code>kubernetes.io/ingress.allow-http: false</code> changes nothing</li>
<li>some configuration with nginx where http was set to "false", but I cannot find it again and it did not work.</li>
</ul>
<p><strong>Thanks in advance!</strong></p>
| Kizy | <p>Found it!!!</p>
<p>In the <strong>service.yaml</strong> annotations I had to add another config entry and give my port a name. Here is the new config:</p>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: management-api-test-service
annotations:
cloud.google.com/backend-config: '{"default": "api-default-config"}'
cloud.google.com/app-protocols: '{"my-https-port":"HTTPS"}' # new line
spec:
type: NodePort
selector:
app: management-api-test
environment: test
ports:
- name: my-https-port # add port name
protocol: TCP
port: 443
targetPort: 5000
nodePort: 31303
</code></pre>
| Kizy |
<p>I have my values.yaml in my_project_directory under deployment as below</p>
<pre><code>C:\Users\Username\IdeaProjects\my-project\deployment\values.yaml
</code></pre>
<p>I need to use different values.yaml based on certain triggers in AzureDevops.</p>
<p>Current way I am running is (This runs fine and uses values.yaml)</p>
<pre><code>helm install my-app-name ./deployment/ --namespace=my-namespace-name
</code></pre>
<p>I have two other values.yaml as below</p>
<pre><code>C:\Users\Username\IdeaProjects\my-project\deployment\values_dev.yaml
C:\Users\Username\IdeaProjects\my-project\deployment\values_preprod.yaml
</code></pre>
<p>and can run using respective commands</p>
<pre><code>helm install my-app-name . -f values_dev.yaml --namespace=my-namespace-name
helm install my-app-name . -f values_preprod.yaml --namespace=my-namespace-name
</code></pre>
<p>These work fine but I have to cd into deployment and run these. Anyway I can run it from project root directory?</p>
<p>Tried this</p>
<pre><code>helm install my-app-name -f ./deployment/values_dev.yaml --namespace=my-namespace-name
</code></pre>
<p>Got this error</p>
<pre><code>Error: must either provide a name or specify --generate-name
</code></pre>
<p>Also tried this</p>
<pre><code> helm install my-app-name -f ./deployment/values_dev.yaml --namespace=my-namespace-name --generate-name
</code></pre>
<p>Got this error</p>
<pre><code>Error: failed to download "my-app-name" (hint: running `helm repo update` may help)
</code></pre>
<p>Also tried this</p>
<pre><code>helm install my-app-name ./deployment/values_dev.yaml --namespace=my-namespace-name
</code></pre>
<p>Got this error</p>
<pre><code>Error: file 'C:\Users\User\IdeaProjects\my-project\deployment\values_dev.yaml' seems to be a YAML file, but expected a gzipped archive
</code></pre>
| Venkatesh Gotimukul | <p>run from the project root directory =</p>
<p><code>helm install my-app-name . -f deployment/values_dev.yaml --namespace=my-namespace-name</code></p>
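<p>The same pattern applies to the other environment file, e.g.:</p>
<p><code>helm install my-app-name ./deployment -f ./deployment/values_preprod.yaml --namespace=my-namespace-name</code></p>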
| Philip Welz |
<p>I need to run PowerShell as a container in Kubernetes.</p>
<p>I am using following deployment file <code>sample.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: powershell
spec:
containers:
- name: powershell
image: mcr.microsoft.com/powershell:latest
</code></pre>
<p>When I run <code>kubectl apply -f sample.yaml</code></p>
<p>I get the following error on <code>kubectl get pods</code></p>
<pre><code>powershell 0/1 CrashLoopBackOff 3 (50s ago) 92s
</code></pre>
<p>I did check the log <code>kubectl logs powershell</code></p>
<pre><code>PowerShell 7.2.6
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /> ←[?1h
</code></pre>
<p>But when I run the same image as a Docker container with the following command, it works:</p>
<pre><code>docker run --rm -it mcr.microsoft.com/powershell:latest
</code></pre>
| NRaj | <p>If you want to keep the container running, you should write the YAML like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: powershell
spec:
containers:
- name: powershell
image: mcr.microsoft.com/powershell:latest
command: ["pwsh"]
args: ["-Command", "Start-Sleep", "3600"]
</code></pre>
<pre><code>
[root@master1 ~]# kubectl get pod powershell
NAME READY STATUS RESTARTS AGE
powershell 1/1 Running 0 3m32s
[root@master1 ~]# kubectl exec -it powershell -- pwsh
PowerShell 7.2.6
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /> date
Thu Oct 13 12:50:24 PM UTC 2022
PS />
</code></pre>
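<p>Note that <code>Start-Sleep 3600</code> keeps the container alive only for an hour, after which it exits and is restarted. If you want it to stay up indefinitely, one option (just a sketch) is to sleep in a loop:</p>
<pre><code>    command: ["pwsh"]
    args: ["-Command", "while ($true) { Start-Sleep 3600 }"]
</code></pre>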
| Dante_KR |
<p>My requirement is that multiple processes are running in my pod and I want to collect the metrics for all of them, i.e. CPU, memory and other details of every process. Now, I want to write the output of any command I run inside my pod to stdout.</p>
| beingumang | <blockquote>
<p>A container engine handles and redirects any output generated to a containerized application's <code>stdout</code> and <code>stderr</code> streams. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in JSON format.</p>
</blockquote>
<p>Usually, it is PID1 process's stdout and stderr.<br />
So, try the following command inside a k8s Pod:</p>
<pre><code>$ cat /proc/meminfo >> /proc/1/fd/1
</code></pre>
<p>Then you will see the standard output in the pod's logs:</p>
<pre><code>$ kubectl logs yourPodName
...
MemTotal: 12807408 kB
MemFree: 10283624 kB
MemAvailable: 11461168 kB
Buffers: 50996 kB
Cached: 1345376 kB
...
</code></pre>
<p>To write <code>stdout</code> and <code>stderr</code> from the command, run it like this:</p>
<pre><code>$ cat /proc/meminfo 1>> /proc/1/fd/1 2>> /proc/1/fd/2
</code></pre>
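<p>The same redirection works for any command, so if the goal is per-process CPU and memory, something like this could be used (assuming <code>ps</code> is available in the container image):</p>
<pre><code>$ ps aux 1>> /proc/1/fd/1 2>> /proc/1/fd/2
</code></pre>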
| mozello |
<p>I am trying to create an Elasticsearch StatefulSet in Kubernetes, but my pods keep cycling from Running to Error to CrashLoopBackOff and back again. I have 2 replicas; Minikube is running with 8 CPUs and 15GB of memory. Why does my laptop almost hang when the pod is in the Running state, with the system memory usage history showing memory at 90%, before the pod goes back to CrashLoopBackOff?
Here is the output of <code>kubectl get pods -w</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
elastic-stateful-0 0/1 CrashLoopBackOff 3 (42s ago) 70m
elastic-stateful-1 0/1 CrashLoopBackOff 3 (20s ago) 3m25s
elastic-stateful-0 1/1 Running 4 (51s ago) 70m
elastic-stateful-0 0/1 Error 4 (72s ago) 71m
elastic-stateful-1 1/1 Running 4 (50s ago) 3m55s
elastic-stateful-0 0/1 CrashLoopBackOff 4 (12s ago) 71m
elastic-stateful-1 0/1 Error 4 (70s ago) 4m15s
elastic-stateful-1 0/1 CrashLoopBackOff 4 (11s ago) 4m26s
elastic-stateful-0 1/1 Running 5 (90s ago) 72m
elastic-stateful-1 1/1 Running 5 (86s ago) 5m41s
elastic-stateful-0 0/1 Error 5 (111s ago) 72m
elastic-stateful-0 0/1 CrashLoopBackOff 5 (14s ago) 73m
elastic-stateful-1 0/1 Error 5 (110s ago) 6m5s
elastic-stateful-1 0/1 CrashLoopBackOff 5 (16s ago) 6m20s
</code></pre>
<p>kubectl describe pod elastic-stateful-0</p>
<p>shows</p>
<pre><code>Name: elastic-stateful-0
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Fri, 10 Mar 2023 20:21:08 +0500
Labels: app=elastic-label
controller-revision-hash=elastic-stateful-766d849885
statefulset.kubernetes.io/pod-name=elastic-stateful-0
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: StatefulSet/elastic-stateful
Containers:
elastic-container:
Container ID: docker://bab1650f5014677283cccb030c1c91d949096888671dbf6b285ac32ff1ad126d
Image: elasticsearch:8.4.3
Image ID: docker-pullable://elasticsearch@sha256:bb72a5788e156171b111d2fc21825d007f235c3314295aa86d0ef500678923bd
Port: 9200/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 78
Started: Fri, 10 Mar 2023 21:30:37 +0500
Finished: Fri, 10 Mar 2023 21:31:00 +0500
Ready: False
Restart Count: 3
Environment:
discovery.type: <set to the key 'es.discovery.type' of config map 'elasticsearch-configmap'> Optional: false
xpack.security.enabled: <set to the key 'es.xpack.security.enabled' of config map 'elasticsearch-configmap'> Optional: false
xpack.security.enrollment.enabled: <set to the key 'es.xpack.security.enrollment.enabled' of config map 'elasticsearch-configmap'> Optional: false
xpack.security.http.ssl.enabled: <set to the key 'es.xpack.security.http.ssl.enabled' of config map 'elasticsearch-configmap'> Optional: false
ingest.geoip.downloader.enabled: <set to the key 'es.ingest.geoip.downloader.enabled' of config map 'elasticsearch-configmap'> Optional: false
discovery.seed_hosts: elastic-stateful-0.elastic-service.default.svc.cluster.local,elastic-stateful-1.elastic-service.default.svc.cluster.local
cluster.initial_master_nodes: elastic-stateful-0
Mounts:
/usr/share/elasticsearch/data from elastic-pvc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kcgkg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
elastic-pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elastic-pvc-elastic-stateful-0
ReadOnly: false
kube-api-access-kcgkg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Failed 45m (x2 over 69m) kubelet Error: ErrImagePull
Warning Failed 45m kubelet Failed to pull image "elasticsearch:8.4.3": rpc error: code = Unknown desc = context canceled
Normal Pulling 44m (x3 over 70m) kubelet Pulling image "elasticsearch:8.4.3"
Normal Pulled 35m kubelet Successfully pulled image "elasticsearch:8.4.3" in 9m30.225714793s
Warning Failed 33m (x11 over 35m) kubelet Error: configmap "elasticsearch-configmap" not found
Normal Pulled 5m3s (x142 over 35m) kubelet Container image "elasticsearch:8.4.3" already present on machine
Warning BackOff 3s (x5 over 2m13s) kubelet Back-off restarting failed container
</code></pre>
<p>here is the manifest files</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elastic-stateful
spec:
serviceName: elastic-service
replicas: 2
selector:
matchLabels:
app: elastic-label
template:
metadata:
name: elastic-pod
labels:
app: elastic-label
spec:
containers:
- name: elastic-container
image: elasticsearch:8.4.3
ports:
- containerPort: 9200
env:
- name: discovery.type
valueFrom:
configMapKeyRef:
name: elasticsearch-configmap
key: es.discovery.type
- name: xpack.security.enabled
valueFrom:
configMapKeyRef:
name: elasticsearch-configmap
key: es.xpack.security.enabled
- name: xpack.security.enrollment.enabled
valueFrom:
configMapKeyRef:
name: elasticsearch-configmap
key: es.xpack.security.enrollment.enabled
- name: xpack.security.http.ssl.enabled
valueFrom:
configMapKeyRef:
name: elasticsearch-configmap
key: es.xpack.security.http.ssl.enabled
- name: ingest.geoip.downloader.enabled
valueFrom:
configMapKeyRef:
name: elasticsearch-configmap
key: es.ingest.geoip.downloader.enabled
- name: discovery.seed_hosts
value: "elastic-stateful-0.elastic-service.default.svc.cluster.local,elastic-stateful-1.elastic-service.default.svc.cluster.local"
- name: cluster.initial_master_nodes
value: "elastic-stateful-0"
volumeMounts:
- name: elastic-pvc
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: elastic-pvc
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: elastic-service
spec:
# type: ClusterIP
clusterIP: None
selector:
app: elastic-label
ports:
- protocol: TCP
port: 9200
targetPort: 9200
</code></pre>
<p>configmap file</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-configmap
data:
es.discovery.type: "multi-node"
es.xpack.security.enabled: "false"
es.xpack.security.enrollment.enabled: "false"
es.xpack.security.http.ssl.enabled: "false"
es.ingest.geoip.downloader.enabled: "false"
</code></pre>
<p>I have 16GB of RAM with a Core i7.</p>
| swaheed | <p>You might want to check the logs for the failing pod:</p>
<pre><code>kubectl logs pod/<pod name> -n <pod namespace>
</code></pre>
<p>This will show the logs for a pod.</p>
<p>For your case:</p>
<pre><code>kubectl logs pod/elastic-stateful-0
</code></pre>
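<p>Since the pod is in <code>CrashLoopBackOff</code>, the container keeps restarting, so the logs of the previous (crashed) container are often the useful ones:</p>
<pre><code>kubectl logs pod/elastic-stateful-0 --previous
</code></pre>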
| Amancio Pontes |
<p>I'm trying to install an nginx ingress controller into an Azure Kubernetes Service cluster using helm. I'm following <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip" rel="nofollow noreferrer">this Microsoft guide</a>. It's failing when I use helm to try to install the ingress controller, because it needs to pull a "kube-webhook-certgen" image from a local Azure Container Registry (which I created and linked to the cluster), but the kubernetes pod that's initially scheduled in the cluster fails to pull the image and shows the following error when I use <code>kubectl describe pod [pod_name]</code>:</p>
<pre><code>failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]
</code></pre>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip#ip-and-dns-label" rel="nofollow noreferrer">This section describes using helm to create an ingress controller</a>.</p>
<p>The guide describes creating an Azure Container Registry, and link it to a kubernetes cluster, which I've done successfully using:</p>
<pre><code>az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>
</code></pre>
<p>I then import the required 3rd party repositories successfully into my 'local' Azure Container Registry as detailed in the guide. I checked that the cluster has access to the Azure Container Registry using:</p>
<pre><code>az aks check-acr --name MyAKSCluster --resource-group myResourceGroup --acr letsencryptdemoacr.azurecr.io
</code></pre>
<p>I also used the Azure Portal to check permissions on the Azure Container Registry and the specific repository that has the issue. It shows that both the cluster and repository have the ACR_PULL permission)</p>
<p>When I run the helm script to create the ingress controller, it fails at the point where it's trying to create a kubernetes pod named <code>nginx-ingress-ingress-nginx-admission-create</code> in the ingress-basic namespace that I created. When I use <code>kubectl describe pod [pod_name_here]</code>, it shows the following error, which prevents creation of the ingress controller from continuing:</p>
<pre><code>Failed to pull image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen:v1.5.1@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": [rpc error: code = NotFound desc = failed to pull and unpack image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068: not found, rpc error: code = Unknown desc = failed to pull and unpack image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]
</code></pre>
<p>This is the helm script that I run in a linux terminal:</p>
<pre><code>helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-basic --set controller.replicaCount=1 --set controller.nodeSelector."kubernetes\.io/os"=linux --set controller.image.registry=$ACR_URL --set controller.image.image=$CONTROLLER_IMAGE --set controller.image.tag=$CONTROLLER_TAG --set controller.image.digest="" --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux --set controller.admissionWebhooks.patch.image.registry=$ACR_URL --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux --set defaultBackend.image.registry=$ACR_URL --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG --set controller.service.loadBalancerIP=$STATIC_IP --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
</code></pre>
<p>I'm using the following relevant environment variables:</p>
<pre><code>$ACR_URL=letsencryptdemoacr.azurecr.io
$PATCH_IMAGE=jettech/kube-webhook-certgen
$PATCH_TAG=v1.5.1
</code></pre>
<p>How do I fix the authorization?</p>
| Chris Halcrow | <p>It seems like the issue is caused by the new ingress-nginx/ingress-nginx helm chart release. I have fixed it by using version 3.36.0 instead of the latest (4.0.1).</p>
<pre><code>helm upgrade -i nginx-ingress ingress-nginx/ingress-nginx \
--version 3.36.0 \
...
</code></pre>
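<p>To check which chart versions are available before pinning one, you can list them (assuming the <code>ingress-nginx</code> repo is already added):</p>
<pre><code>helm search repo ingress-nginx/ingress-nginx --versions
</code></pre>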
| David Truong |
<p>I want to use user assigned or managed identity for AKS and ArgoCD to create an application.</p>
<p>I assigned AcrPull on my ACR for AKS identity, and then tried to create a ArgoCD repo with</p>
<pre><code>argocd repo add myACR.azurecr.io --type helm --name helm --enable-oci
</code></pre>
<p><code>helm</code> is the root of my repos there. It worked correctly and I can see a green "successful" tick in a connection status in ArgoCD UI.</p>
<p>But when I try to create an app for one of the actual images, it fails with</p>
<blockquote>
<p><code>helm pull oci://myACR.azurecr.io/helm/mychart --version 0.0.1 --destination /tmp/helm706383544</code></p>
<p>failed exit status 1: Error: failed to authorize: failed to fetch
anonymous token: unexpected status: 401 Unauthorized</p>
</blockquote>
<p>(on a side note, if I enable admin user and create argocd repo with <code>--username XXX --password XXX</code> options, everything works as expected)</p>
<p>What am I missing? Is it possible to achieve this? Or do I need to enable admin user on ACR (or use tokens?)</p>
| JoeBloggs | <p>If by "AKS identity" you mean the user-assigned managed identity, then that is the wrong identity in this case.</p>
<p>For accessing the ACR you need to assign the <code>AcrPull</code> role to the kubelet identity of your AKS cluster, as the <a href="https://kubernetes.io/docs/concepts/overview/components/#kubelet" rel="nofollow noreferrer">kubelet</a> is responsible for pulling images:</p>
<pre><code>export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export ACR_ID=$(az acr show -g <resource group> -n <acr name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "AcrPull" --scope $ACR_ID
</code></pre>
<p>But this is only the part where Kubernetes pulls the images. I don't think that ArgoCD out-of-the-box leverages the Azure identities to connect to your repo.</p>
<p>So you may need to specify a username and password so that ArgoCD can connect to the Helm repo:</p>
<pre><code>argocd repo add myACR.azurecr.io --type helm --name helm --enable-oci --username <username> --password <password>
</code></pre>
| Philip Welz |
<p>I am trying to access my microservice "externalforum-api-svc" inside my kubernetes cluster using the Ocelot gateway. I've followed the docs but it does not seem to be working.</p>
<p>Can someone please tell me what's wrong with it?</p>
<p>I want to deploy the ocelot api gateway as clusterIP and use Ingress to access it from outside of the cluster, but i am facing this issue when trying to reroute from ocelot -> service inside the cluster.</p>
<blockquote>
<p><strong>Error</strong><br />warn: Ocelot.Responder.Middleware.ResponderMiddleware[0] requestId: 0HMCO5SFMMOIQ:00000002, previousRequestId: no previous
request id, message: Error Code:
UnableToFindServiceDiscoveryProviderError Message: Unable to find
service discovery provider for type: consul errors found in
ResponderMiddleware. Setting error response for request
path:/externalForumService, request method: GET</p>
</blockquote>
<pre><code>{
"Routes": [
{
"UpstreamPathTemplate": "/externalForumService/GetAll",
"DownstreamPathTemplate": "/api/externalforum/v1/forum/GetAll",
"DownstreamScheme": "http",
"ServiceName": "externalforum-api-svc",
"UpstreamHttpMethod": [ "Get" ]
},
{
"UpstreamPathTemplate": "/externalForumService",
"DownstreamPathTemplate": "/api/externalforum/v1/forum",
"DownstreamScheme": "http",
"ServiceName": "externalforum-api-svc",
"UpstreamHttpMethod": [ "Get" ]
}
],
"GlobalConfiguration": {
"ServiceDiscoveryProvider": {
"Namespace": "propnull",
"Type": "kube"
}
}
}
</code></pre>
<h2>Service to map</h2>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: externalforum-api-svc
namespace: propnull
spec:
type: ClusterIP
selector:
app: externalforum-api
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>I have already ran <code>kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts</code></p>
<h2>Specifications</h2>
<ul>
<li>Version: 17.0.0</li>
<li>Platform: net core 5.0</li>
</ul>
| Arthur Muller | <p>Try changing <code>"Type": "kube"</code> to <code>"Type": "KubernetesServiceDiscoveryProvider"</code> in the <code>GlobalConfiguration</code> section.</p>
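<p>Applied to the configuration from the question, the provider section would then look like this (everything else unchanged):</p>
<pre><code>"GlobalConfiguration": {
  "ServiceDiscoveryProvider": {
    "Namespace": "propnull",
    "Type": "KubernetesServiceDiscoveryProvider"
  }
}
</code></pre>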
| tiomkin |
<p>I have an application running on Kubernetes (it is a cluster running in the cloud) and want to set up monitoring and logging for that application. There are various possibilities for the setup. What would be the best practice for doing that, i.e. the recommended method or industry standard?</p>
<ul>
<li>A <strong>prometheus monitoring setup inside kubernetes cluster</strong>: prometheus-operator helm chart installed inside the cluster that can monitor the entire cluster, including the application.</li>
<li>an <strong>external prometheus + grafana setup</strong> deployed with docker-compose. (But I doubt the external setup can reach the k8s cluster properly to scrape all the metrics.)</li>
<li>A <strong>prometheus federation setup</strong> where one external prometheus setup gets metrics from an internal prometheus setup of k8s.</li>
</ul>
<p>Can anyone please help me with some suggestions regarding best practices?</p>
| AnjK | <p>It all depends on how many clusters you have. If you have only one cluster with the application you want to monitor on it, the best choice will be option 1:</p>
<blockquote>
<ul>
<li>A <strong>prometheus monitoring setup inside kubernetes cluster</strong> : prometheus-operator helm chart installed inside the cluster that can monitor the entire cluster, including the application.</li>
</ul>
</blockquote>
<p>The advantages of such a solution include a simple and quick configuration; in addition, you have everything in one place (application and Prometheus) and you do not need a separate cluster for monitoring. <a href="https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics-with-grafana" rel="nofollow noreferrer">Here</a> you can find an example tutorial.</p>
<p>However, if you plan to expand to many clusters, or you already need to monitor many clusters, option 3 will be the best choice:</p>
<blockquote>
<ul>
<li>A <strong>prometheus federation setup</strong> where one external prometheus setup gets metrics from an internal prometheus setup of k8s.</li>
</ul>
</blockquote>
<p>Thanks to this solution, you will have all the metrics in one place, regardless of the <a href="https://prometheus.io/docs/prometheus/latest/federation/" rel="nofollow noreferrer">number of clusters</a> you need to monitor:</p>
<blockquote>
<p>Commonly, it is used to either achieve scalable Prometheus monitoring setups or to pull related metrics from one service's Prometheus into another.</p>
</blockquote>
<p>You can find example tutorials about <a href="https://medium.com/@jotak/prometheus-federation-in-kubernetes-4ce46bda834e" rel="nofollow noreferrer">Prometheus federation in Kubernetes</a> and <a href="https://banzaicloud.com/blog/prometheus-federation/" rel="nofollow noreferrer">Monitoring multiple federated clusters with Prometheus - the secure way</a></p>
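<p>As a rough sketch of option 3, the external Prometheus scrapes the in-cluster Prometheus via its <code>/federate</code> endpoint; the target below is only a placeholder for however you expose the in-cluster instance:</p>
<pre class="lang-yaml prettyprint-override"><code>scrape_configs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':          # which series to pull from the in-cluster Prometheus
        - '{job!=""}'
    static_configs:
      - targets:
          - 'prometheus.example.internal:9090'   # placeholder address
</code></pre>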
| Mikołaj Głodziak |
<p>I am fiddling around with Kubernetes on a small managed cluster within AKS.</p>
<p>It looks like I'm ready to go with deploying as my node pools are already provisioned and bootstrapped (or that's what it looks like) upon setup.</p>
<p>Am I missing something here?</p>
| 7Leven | <blockquote>
<p>Do I really need kubeadm on a managed cloud cluster?</p>
</blockquote>
<p>You DO NOT need <code>kubeadm</code> tool when using <a href="https://azure.microsoft.com/en-us/services/kubernetes-service/#overview" rel="nofollow noreferrer">Azure AKS</a> / <a href="https://aws.amazon.com/eks/" rel="nofollow noreferrer">AWS EKS</a> / <a href="https://cloud.google.com/kubernetes-engine" rel="nofollow noreferrer">Google GKE</a> managed Kubernetes clusters.</p>
<p><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/" rel="nofollow noreferrer"><code>kubeadm</code></a> is used to create a self-managed Kubernetes cluster.</p>
<blockquote>
<p>You can use the kubeadm tool to create and manage Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user friendly way.</p>
</blockquote>
| mozello |