Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>On an occasional basis I need to perform a rolling replacement of all Pods in my StatefulSet such that all PVs are also recreated from scratch. The reason for doing so is to get rid of all underlying hard drives that use old versions of the encryption key. This operation should not be confused with regular rolling upgrades, for which I still want volumes to survive Pod terminations. The best routine I have figured out so far (sketched as commands after the list) is the following:</p>
<ol>
<li>Delete the PV.</li>
<li>Delete the PVC.</li>
<li>Delete the Pod.</li>
<li>Wait until all deletions complete.</li>
<li>Manually recreate the PVC deleted in step 2.</li>
<li>Wait for the new Pod to finish streaming data from other Pods in the StatefulSet.</li>
<li>Repeat from step 1. for the next Pod.</li>
</ol>
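<p>For reference, the routine above corresponds roughly to the following commands (a sketch with placeholder names, not a tested script):</p>
<pre><code># steps 1-4: delete PV, PVC and Pod, then wait for the deletions to finish
kubectl delete pv "$(kubectl get pvc <pvc_name> -o jsonpath='{.spec.volumeName}')" --wait=false
kubectl delete pvc <pvc_name> --wait=false
kubectl delete pod <pod_name>
kubectl wait --for=delete pod/<pod_name> --timeout=10m
# step 5: manually recreate the PVC so the replacement Pod can schedule
kubectl apply -f <pvc_manifest>.yaml
</code></pre>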
<p>I'm not happy about step 5. I wish the StatefulSet recreated the PVC for me, but unfortunately it does not. I have to do it myself, otherwise Pod creation fails with the following error:</p>
<pre><code>Warning FailedScheduling 3s (x15 over 15m) default-scheduler persistentvolumeclaim "foo-bar-0" not found
</code></pre>
<p>Is there a better way to do that?</p>
| lopek | <p>I just recently had to do this. The following worked for me:</p>
<pre><code># Delete the PVC
$ kubectl delete pvc <pvc_name>
# Delete the underlying statefulset WITHOUT deleting the pods
$ kubectl delete statefulset <statefulset_name> --cascade=false
# Delete the pod with the PVC you don't want
$ kubectl delete pod <pod_name>
# Apply the statefulset manifest to re-create the StatefulSet,
# which will also recreate the deleted pod with a new PVC
$ kubectl apply -f <statefulset_yaml>
</code></pre>
| jpdstan |
<p>According to <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#debug" rel="nofollow noreferrer">the Kubernetes docs</a>, you can start a debug version of a container and run a command on it like this:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl debug (POD | TYPE[[.VERSION].GROUP]/NAME) [ -- COMMAND [args...] ]
</code></pre>
<p>But when I try and do this in real life I get the following:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl debug mypod \
--copy-to=mypod-dev \
--env='PYTHONPATH="/my_app"' \
--set-image=mycontainer=myimage:dev -- python do_the_debugging.py
error: you must specify an existing container or a new image when specifying args.
</code></pre>
<p>If I don't specify <code>-- python do_the_debugging.py</code> I can create the debug container, but then I need a separate command to actually do the debugging:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec -it mypod-dev -- python do_the_debugging.py
</code></pre>
<p>Why can't I do this all in one line as the docs seem to specify?</p>
<hr>
<p>Some kubernetes details:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-23T02:22:53Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-eks-ad4801", GitCommit:"ad4801fd44fe0f125c8d13f1b1d4827e8884476d", GitTreeState:"clean", BuildDate:"2020-10-20T23:27:12Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| LondonRob | <p>Try to add <code>-it</code> and <code>--container</code> flags to your command. In your specific case, it might look like this:<br></p>
<pre><code>$ kubectl debug mypod \
--copy-to=mypod-dev \
--env='PYTHONPATH="/my_app"' \
--set-image=mycontainer=myimage:dev \
--container=mycontainer -it -- python do_the_debugging.py
</code></pre>
<p>I am not able to reproduce your exact issue because I don't have the <code>do_the_debugging.py</code> script, but I've created a simple example.<br />
First, I created a <code>Pod</code> named <code>web</code> using the <code>nginx</code> image:<br></p>
<pre><code>root@kmaster:~# kubectl run web --image=nginx
pod/web created
</code></pre>
<p>And then I ran the <code>kubectl debug</code> command to create a copy of <code>web</code> named <code>web-test-1</code> but with the <code>httpd</code> image:<br></p>
<pre><code>root@kmaster:~# kubectl debug web --copy-to=web-test-1 --set-image=web=httpd --container=web -it -- bash
If you don't see a command prompt, try pressing enter.
root@web-test-1:/usr/local/apache2#
</code></pre>
<p>Furthermore, I recommend upgrading your cluster to a newer version because your client and server versions are very different.<br>
Your <code>kubectl</code> version is <code>1.20</code>, therefore you should have <code>kube-apiserver</code> in version <code>1.19</code> or <code>1.20</code>.
Generally speaking, if <code>kube-apiserver</code> is in version <code>X</code>, <code>kubectl</code> should be in version <code>X-1</code>, <code>X</code> or <code>X+1</code>.</p>
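<p>As a quick way to see that skew side by side (not part of the original answer), you can print both versions; the values below are simply the ones from the question:</p>
<pre><code>$ kubectl version --short
Client Version: v1.20.1
Server Version: v1.16.15-eks-ad4801
</code></pre>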
| matt_j |
<p>I have a kafka installation running within a kubernetes cluster. I have a pod running a spring boot application which is using the default bootstrap.servers configuration (localhost:9092) and not the one passed in (bootstrap.kafka.svc.cluster.local:9092). The pod then fails to start as kafka is not running on localhost.</p>
<p>Here is my spring boot configuration</p>
<pre><code>spring:
  kafka:
    consumer:
      group-id: spring-template
      auto-offset-reset: earliest
      bootstrap-servers: "bootstrap.kafka.svc.cluster.local:9092"
    producer:
      bootstrap-servers: "bootstrap.kafka.svc.cluster.local:9092"
    bootstrap-servers: "bootstrap.kafka.svc.cluster.local:9092"
</code></pre>
<p>When setting up the consumers on startup, the group-id and auto-offset-reset are correctly passed in from the above configuration. However, the bootstrap-servers configuration is not, and the application is actually using localhost:9092, as per the log below:</p>
<pre><code>2019-03-11 07:34:36.826 INFO 1 --- [ restartedMain] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = spring-template
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2019-03-11 07:34:36.942 INFO 1 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2019-03-11 07:34:36.945 INFO 1 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2019-03-11 07:34:37.149 WARN 1 --- [ restartedMain] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=spring-template] Connection to node -1 could not be established. Broker may not be available.
</code></pre>
<p>I have a kubernetes service called bootstrap, running in the kafka namespace of the kubernetes cluster. Here is a snippet of the log file. Why is the spring boot application not picking up the configured bootstrap.servers setting?</p>
| Khetho Mtembo | <p>Sometimes, it will actually be your Kafka Kubernetes deployment that is at fault.</p>
<p>Can I see your deployment YAML/JSON file for the Kafka deployment, please?</p>
<p>It should look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
      id: "0"
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
        - name: kafka
          image: wurstmeister/kafka:latest
          ports:
            - containerPort: 30035
          env:
            - name: KAFKA_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_HEAP_OPTS
              value: -Xms320m
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: "zookeeper:2181"
---
apiVersion: v1
kind: Service
metadata:
  name: api-kafka
  namespace: default
  labels:
    name: kafka
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "0"
  type: NodePort
</code></pre>
<p>With an emphasis on the:</p>
<pre><code>- name: KAFKA_ADVERTISED_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
</code></pre>
<p>Make sure your profiles are right and that'll fix all of your problems to do with the connections and the advertised hostnames within a Kubernetes Deployment.</p>
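<p>As an extra sanity check (not part of the original answer), you can also confirm from inside the cluster that the bootstrap service name used in the Spring configuration resolves at all:</p>
<pre><code># run a throwaway pod and resolve the service name the application should be using
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup bootstrap.kafka.svc.cluster.local
</code></pre>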
| Ben Neighbour |
<p>This has been happening ever since I updated IntelliJ (IDEA CE 2020.3) to a newer version (today). I am getting this exception from the plugin when running the <code>Develop on Kubernetes</code> Run Configuration that I usually use with my local Minikube instance to get all of the services in the cluster up and running, and to be able to debug in debug mode.</p>
<p>My local Minikube instance is fine shown by the following:</p>
<pre><code>(Dev) $ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
</code></pre>
<p>I've tried checking for updates, restarting Intellij, and I am still getting the same thing. It must be something in relation to my Intellij Update but we'll have to see...</p>
<p>The full stack trace is:</p>
<pre><code>java.util.ServiceConfigurationError: io.grpc.ManagedChannelProvider: io.grpc.netty.shaded.io.grpc.netty.NettyChannelProvider not a subtype
at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:588)
at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNextService(ServiceLoader.java:1236)
at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNext(ServiceLoader.java:1264)
at java.base/java.util.ServiceLoader$2.hasNext(ServiceLoader.java:1299)
at java.base/java.util.ServiceLoader$3.hasNext(ServiceLoader.java:1384)
at io.grpc.ServiceProviders.loadAll(ServiceProviders.java:67)
at io.grpc.ServiceProviders.load(ServiceProviders.java:42)
at io.grpc.ManagedChannelProvider.<clinit>(ManagedChannelProvider.java:37)
at io.grpc.ManagedChannelBuilder.forAddress(ManagedChannelBuilder.java:37)
at com.google.cloud.tools.intellij.kubernetes.skaffold.events.SkaffoldEventHandler.newManagedChannel(SkaffoldEventHandler.kt:319)
at com.google.cloud.tools.intellij.kubernetes.skaffold.events.SkaffoldEventHandler.listenEvents(SkaffoldEventHandler.kt:75)
at com.google.cloud.tools.intellij.kubernetes.skaffold.run.SkaffoldCommandLineState$startProcess$1.invokeSuspend(SkaffoldCommandLineState.kt:189)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594)
at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:740)
</code></pre>
<p>I am getting the same behaviour in both <code>DEBUG</code> mode and <code>RUN</code> mode.</p>
<h3>Environment Info</h3>
<ul>
<li>IDE type: IntelliJ</li>
<li>IDE version: Community Edition 2020.3</li>
<li>Cloud Code version: 20.10.1-202</li>
<li>Skaffold version: v1.14.0</li>
<li>Operating System: Windows 10 Pro 64-bit</li>
</ul>
<p>Any help, suggestions or resolutions would be really appreciated, so thank you in advance!</p>
| Ben Neighbour | <p>This issue was fixed with patch release 20.12.1 that was put out shortly after the EAP release. Please try it out and if you run into any other issues feel free to post on our GitHub. – eshaul</p>
| Siva Kalva |
<p>I am currently working on a Spring micro-service (Eureka implementation) project. To manage the distributed configuration we are using Consul KV. We are deploying services on a Kubernetes cluster.</p>
<p>The issue I am facing is that whenever I restart the Consul cluster, it deletes all of the KV data. I am creating the Kubernetes cluster locally with a docker image using a Deployment.yaml file.
Please refer to the below Deployment.yaml file for consul.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: consul
  labels:
    app: consul
spec:
  clusterIP: None
  ports:
    - port: 8500
      name: consul
  selector:
    app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
        - name: consul
          image: hashicorp/consul:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8500
---
apiVersion: v1
kind: Service
metadata:
  name: consul-lb
  labels:
    app: consul
spec:
  selector:
    app: consul
  type: NodePort
  ports:
    - port: 80
      targetPort: 8500
</code></pre>
<p>After some research I found that we can specify the -data-dir location in the config, so I have modified the StatefulSet YAML as below:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
        - name: consul
          image: hashicorp/consul:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8500
          args:
            - "agent"
            - "-server"
            - "-data-dir=/home/consul/data"
</code></pre>
<p>But after this the Consul UI does not start, so I wanted some help resolving this so that the data is kept even after I delete the Consul cluster.
PS: I tried deploying the cluster with helm, and it was persisting the data, but I did not know how to make that cluster a StatefulSet so I can refer to it in other services with a static url.
Thanks!</p>
| Bhaumik Thakkar | <p>Please note that k8s <code>pods</code> are by default ephemeral even if you deploy them as <code>StatefulSet</code>.</p>
<p><code>StatefulSet</code> gives you the option of a <code>pod</code> with a defined name, e.g. <code>consul-0</code>, rather than the standard <code>consul-<<random string>></code>. It also keeps track of where to deploy the <code>pod</code> in case you have different zones and you need to deploy the <code>pod</code> in the same zone as its <code>storage</code>.</p>
<p>What is missing in your manifest are the <code>volumeMounts</code> and <code>volumeClaimTemplates</code> sections. If you set your data directory to <code>/home/consul/data</code>, your manifest should look similar to this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
        - name: consul
          image: hashicorp/consul:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8500
          args:
            - "agent"
            - "-server"
            - "-data-dir=/home/consul/data"
          volumeMounts:
            - name: consul-data
              mountPath: /home/consul/data
  volumeClaimTemplates: # volume claim template will create volume for you so you don't need to define PVC
    - metadata:
        name: consul-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "my-storage-class" # you can get this with kubectl get sc
        resources:
          requests:
            storage: 1Gi
</code></pre>
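<p>Once this is applied, you can check that the claim created from the template now outlives the <code>pod</code> (a quick verification, not part of the original answer; claims from <code>volumeClaimTemplates</code> are named <code><template name>-<statefulset name>-<ordinal></code>):</p>
<pre><code>kubectl get pvc                       # expect consul-data-consul-0 to be Bound
kubectl delete pod consul-0           # the StatefulSet recreates the pod...
kubectl get pvc consul-data-consul-0  # ...and re-attaches the same claim, so the KV data survives
</code></pre>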
<p>Regarding your second problem with the <code>consul</code> UI, I cannot help much since I have never used <code>consul</code>, but I can advise deploying the <code>helm chart</code> once again and checking how the arguments are passed there.</p>
| Michał Lewndowski |
<p>We are looking to find the list of pods which are not in a running state or are having some issue. The below command pulls pod details including the good ones; however, we are targeting only the bad ones.</p>
<pre><code>'kubectl get pods -A'
</code></pre>
| sachin | <p><code>kubectl get pods --field-selector=status.phase=Failed</code></p>
<p>Or some better specification can be found <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources" rel="nofollow noreferrer">here</a>.</p>
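<p>If you also want to list everything that is simply not in the <code>Running</code> phase (a variant not in the original answer), field selectors accept negation; note that this will also show <code>Succeeded</code> pods such as completed Jobs:</p>
<pre><code>kubectl get pods -A --field-selector=status.phase!=Running
</code></pre>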
| Samuel Stuchlý |
<p>Using Jenkins on Kubernetes plugin and using Jenkins as a code.</p>
<p>I'm getting this error when trying to use 'docker build'</p>
<p><code>Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?</code></p>
<ol>
<li>I tried to mount /var/run/docker.sock.. but still not working..</li>
<li>I tried to use runAsUser: root to run with root permissions... but still not working..</li>
</ol>
<p>My Jenkins as a code pod template configuration -</p>
<pre><code>Jenkins:config:
  chart: jenkins
  namespace: default
  repo: https://charts.jenkins.io
  values:
    agent:
      enabled: true
      podTemplates:
        jenkins-slave-pod: |
          - name: jenkins-xxx-pod
            label: ecs-slave
            serviceAccount: jenkins-xxx-prod
            containers:
              - name: main
                image: '805xxxx.dkr.ecr.us-west-2.amazonaws.com/slave:ecs-xxxx-node_master-3'
                command: "sleep"
                args: "30d"
                privileged: true
                runAsUser: root
            volumes:
              - hostPathVolume:
                  hostPath: "/var/run/docker.sock"
                  mountPath: "/var/run/docker.sock"
</code></pre>
| EilonA | <p>I assume that you are using k8s >= v1.24 where <code>docker</code> as runtime is not <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#dockershim-removed-from-kubelet" rel="nofollow noreferrer">supported anymore</a>.</p>
<p>I would also add that mounting the <code>docker</code> socket is not a good practice from a security perspective.</p>
<p>If you want to build container images in k8s, please use <a href="https://podman.io/" rel="nofollow noreferrer">podman</a> or <a href="https://github.com/GoogleContainerTools/kaniko" rel="nofollow noreferrer">kaniko</a>.</p>
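<p>For illustration only, a one-off <code>kaniko</code> build could be launched roughly like this (a sketch: the repository, registry and image names are made up, and a real build would also need registry credentials and a proper build context):</p>
<pre><code>kubectl run kaniko-build --rm -it --restart=Never \
  --image=gcr.io/kaniko-project/executor:latest -- \
  --context=git://github.com/example/repo.git \
  --destination=registry.example.com/example/app:latest
</code></pre>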
| Michał Lewndowski |
<p>I heard ElasticSearch is already changing its license to SSPL. Because of that, it will no longer be considered OSS (open-source software).</p>
<p>Do you know of a better OSS replacement for ElasticSearch?</p>
<p>Hopefully the suggested OSS has an official image on Docker Hub, since I will also be using it in Kubernetes.</p>
| lemont80 | <p>Elasticsearch was on SSPL, but we moved to a simpler license. check out <a href="https://www.elastic.co/blog/elastic-license-v2" rel="nofollow noreferrer">https://www.elastic.co/blog/elastic-license-v2</a> for details on that aspect</p>
| warkolm |
<p>As the title says, I'm trying to mount a secret, as a volume, into a deployment.</p>
<p>I found out I can do it this way if <code>kind: Pod</code>, but couldn't replicate it with <code>kind: Deployment</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
volumeMounts:
- name: certs-vol
mountPath: "/certs"
readOnly: true
volumes:
- name: certs-vol
secret:
secretName: certs-secret
</code></pre>
<p>the error shows as follows <code>ValidationError(Deployment.spec.template.spec.volumes[1]): unknown field "secretName" in io.k8s.api.core.v1.Volume, ValidationError(Deployment.spec.template.spec.volumes[2]</code></p>
<p>is there a way to do this, exclusivelly on deployment?</p>
| Juan. | <p>As <a href="https://stackoverflow.com/users/10008173/david-maze">David Maze</a> mentioned in the comment:</p>
<blockquote>
<p>Does <code>secretName:</code> need to be indented one step further (a child of <code>secret:</code>)?</p>
</blockquote>
<p>Your yaml file should be as follow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
volumeMounts:
- name: certs-vol
mountPath: "/certs"
readOnly: true
volumes:
- name: certs-vol
secret:
secretName: certs-secret
</code></pre>
<p>You can read more about <a href="https://medium.com/avmconsulting-blog/secrets-management-in-kubernetes-378cbf8171d0" rel="noreferrer">mounting a secret as a file</a>. This could be the most interesting part:</p>
<blockquote>
<p>It is possible to create <code>Secret</code> and pass it as a <strong>file</strong> or multiple <strong>files</strong> to <code>Pods</code>.<br />
I've created a simple example for you to illustrate how it works. Below you can see a sample <code>Secret</code> manifest file and <code>Deployment</code> that uses this Secret:<br />
<strong>NOTE:</strong> I used <code>subPath</code> with <code>Secrets</code> and it works as expected.</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - name: secrets-files
          mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
          subPath: secret.file1
        - name: secrets-files
          mountPath: "/mnt/secret.file2" # "secret.file2" file will be created in "/mnt" directory
          subPath: secret.file2
  volumes:
    - name: secrets-files
      secret:
        secretName: my-secret # name of the Secret
</code></pre>
<blockquote>
<p><strong>Note:</strong> <code>Secret</code> should be created before <code>Deployment</code>.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I created an AWS EKS Fargate cluster with the following Fargate profile</p>
<pre><code>fargateProfiles:
  - name: fp-fluxcd
    selectors:
      - namespace: fluxcd
</code></pre>
<p>How do I either add (or change) the namespace so it looks like this?</p>
<pre><code>fargateProfiles:
  - name: fp-fluxcd
    selectors:
      - namespace: fluxcd
      - namespace: flux-system
</code></pre>
<p>I updated the config file and tried <code>eksctl upgrade -f my-cluster.yml</code> to no avail.</p>
<p>I guess another way to skin the cat is to add the fargate nodes to a namespace? How to do that?</p>
| Chris F | <p>As you can see in the <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html" rel="nofollow noreferrer">AWS Fargate profile</a> documentation:</p>
<blockquote>
<p>Fargate profiles are immutable. However, you can create a new updated profile to replace an existing profile and then delete the original after the updated profile has finished creating.</p>
</blockquote>
<p>Fargate profiles are immutable by design, so there is no <code>update</code> command.
In my opinion you should use <code>eksctl create fargateprofile</code> and <code>eksctl delete fargateprofile</code> commands instead.</p>
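<p>A possible sequence is sketched below (an illustration only; it assumes you first add the new profile, e.g. <code>fp-fluxcd-v2</code>, listing both namespaces to <code>my-cluster.yml</code>, and that <code><cluster-name></code> is your cluster's name):</p>
<pre><code># create the replacement profile from the config file, then remove the old, immutable one
eksctl create fargateprofile -f my-cluster.yml
eksctl delete fargateprofile --cluster <cluster-name> --name fp-fluxcd
</code></pre>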
<p>Additionally you can find similar discussion <a href="https://github.com/weaveworks/eksctl/issues/2399" rel="nofollow noreferrer">here</a>.</p>
| matt_j |
<p>I am new to Kubernetes. I have difficulty digesting some concepts in my head.
Please help clarify them. Let us say there is an ElasticSearch cluster running in a K8S env with 5 replicas.</p>
<ol>
<li>Will all the pods have identical replicas(data)? Let us say I have 10GB data in my ES, so will there be 50GB approx space taken by 5 replicas in K8S cluster?</li>
<li>If I insert/delete a single document/data in my ES, who(which component) is responsible to insert/delete it among all replicas and keep them consistent with each other all the time?</li>
<li>Let us say, if a K8S node goes down and hence one replica. I observed a new replica is spinned instantly(5-10 seconds). I understand it as, 10GB of data has to be copied, ES image to be pulled, installed in pod and made consistent with other replicas and then made available. How these all process are done instantly?</li>
</ol>
<p>Please educate me, if I have conceptual blockage.
Thanks in advance.</p>
| Om Sao | <ol>
<li>Only if you enable 5 replicas; the default is 1 replica set.</li>
<li>Elasticsearch will handle that internally.</li>
<li>It's not instant; it does take time, and how long that is depends on what version you are on. Take a look at <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/delayed-allocation.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/current/delayed-allocation.html</a> (see the example after this list).</li>
</ol>
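<p>For point 3, the linked page is about the <code>index.unassigned.node_left.delayed_timeout</code> setting; as an illustration (not part of the original answer), raising it cluster-wide looks like this:</p>
<pre><code>curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}
'
</code></pre>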
| warkolm |
<p>I have created an image and uploaded to docker:</p>
<pre class="lang-dockerfile prettyprint-override"><code>FROM jenkins/inbound-agent:jdk11 as agent
LABEL maintainer="JoSSte <re@dacted>"
USER root
ENV TZ=Europe/Copenhagen
RUN ln -snf /usr/share/zoneinfo/"${TZ}" /etc/localtime && echo "${TZ}" > /etc/timezone \
&& dpkg-reconfigure -f noninteractive tzdata
# update apt cache
RUN apt-get update \
&& apt-get upgrade -qy \
&& apt-get install -y curl lsb-release ca-certificates apt-transport-https software-properties-common gnupg2
# add php8 repo
RUN curl -sSLo /usr/share/keyrings/deb.sury.org-php.gpg https://packages.sury.org/php/apt.gpg \
&& sh -c 'echo "deb [signed-by=/usr/share/keyrings/deb.sury.org-php.gpg] https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list' \
&& apt-get update
# install php8 & composer
RUN apt-get install -qy rsync zip php8.2-curl php8.2-gd apache2 php8.2 unzip php8.2-mysql php8.2-zip php8.2-mbstring php-xdebug php-pear* \
&& curl -sSLo composer-setup.php https://getcomposer.org/installer \
&& php composer-setup.php --install-dir=/usr/local/bin --filename=composer \
&& composer self-update
# Cleanup old packagess
RUN apt-get -qy autoremove
USER jenkins
ENTRYPOINT ["jenkins-slave"]
</code></pre>
<p>Which I have used fine in a Jenkins build environment with docker agents.</p>
<p>Now I am changing my setup to work in kubernetes, so I made sure that things work as expected:</p>
<pre class="lang-groovy prettyprint-override"><code>podTemplate(containers: [
containerTemplate(name: 'php8builder', image: 'jstevnsvig/jenkins-build-slave-php:v8.2', command: 'sleep', args: '99d')
]) {
node(POD_LABEL) {
container('php8builder') {
stage('list apt packages') {
sh '/bin/sh -c "dpkg-query -l | cat"'
}
stage('dump php version') {
sh '/bin/sh -c "php --version"'
}
stage('dump /usr/local/bin files') {
sh 'ls /usr/local/bin'
}
stage('dump composer version') {
sh '/bin/sh -c "composer --version"'
}
}
}
}
</code></pre>
<p>This works beautifully:</p>
<blockquote>
<pre><code> + /bin/sh -c php --version
PHP 8.2.1 (cli) (built: Jan 13 2023 10:38:46) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.1, Copyright (c) Zend Technologies
with Zend OPcache v8.2.1, Copyright (c), by Zend Technologies
with Xdebug v3.2.0, Copyright (c) 2002-2022, by Derick Rethans
</code></pre>
</blockquote>
<p>...</p>
<blockquote>
<pre><code> + ls /usr/local/bin
composer
jenkins-agent
jenkins-slave
</code></pre>
</blockquote>
<p>...</p>
<blockquote>
<pre><code> + /bin/sh -c composer --version
Composer version 2.5.1 2022-12-22 15:33:54
</code></pre>
</blockquote>
<p>BUT</p>
<p>When I then tried to run it in a pipeline (this has worked before with all the agents defined in the Jenkins configuration - but I want to have less configuration in case of things going wrong), the same image, on the same Jenkins server in the same kubernetes cluster, can't find php or composer:</p>
<pre class="lang-groovy prettyprint-override"><code>pipeline {
agent {
kubernetes {
yamlFile 'KubernetesPod.yaml'
retries 2
}
}
options {
disableConcurrentBuilds()
}
stages {
stage ('Staging'){
steps {
echo "looking for composer"
sh 'ls /usr/local/bin'
}
}
stage('run php') {
steps {
container('phpbuilder') {
sh 'php --version'
sh 'php src/test.php'
}
}
}
}
}
</code></pre>
<h2>KubernetesPod.yaml</h2>
<pre class="lang-yaml prettyprint-override"><code># https://github.com/jenkinsci/kubernetes-plugin/blob/master/examples/declarative_from_yaml_file/KubernetesPod.yaml
metadata:
labels:
some-label: some-label-value
spec:
containers:
- name: phpbuilder
image: jstevnsvig/jenkins-build-slave-php:v8.2
command:
- sleep
args:
- 99d
#env:
#- name: CONTAINER_ENV_VAR
# value: maven
</code></pre>
<p>Then composer isn't found:</p>
<blockquote>
<pre><code> looking for composer
+ ls /usr/local/bin
jenkins-agent
jenkins-slave
</code></pre>
</blockquote>
<p>...</p>
<blockquote>
<pre><code> + php --version
PHP 8.2.1 (cli) (built: Jan 13 2023 10:38:46) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.1, Copyright (c) Zend Technologies
with Zend OPcache v8.2.1, Copyright (c), by Zend Technologies
with Xdebug v3.2.0, Copyright (c) 2002-2022, by Derick Rethans
</code></pre>
</blockquote>
<p>As I stated above, I have used this image in my previous build setup with the jenkins cloud config built in:</p>
<pre class="lang-groovy prettyprint-override"><code>
pipeline {
    agent { label 'php8' }
    ...
</code></pre>
<p>and I have confirmed that it still works on this cluster - so WHY doesn't it work when I provide the kubernetes yaml?</p>
| JoSSte | <p>Your pipeline configuration is missing one thing, which is the <code>defaultContainer</code> directive.</p>
<p>Please note that by default <code>Jenkins</code> will run 2 containers in one pod. One will be <code>jnlp</code>, even if you don't define it, and the second will be your container defined in your <code>KubernetesPod.yaml</code>.</p>
<p>So to be able to run things on your custom container your pipeline should looks like this:</p>
<pre class="lang-groovy prettyprint-override"><code>pipeline {
agent {
kubernetes {
defaultContainer '<<you container name from yaml file>>'
yamlFile 'KubernetesPod.yaml'
retries 2
}
}
options {
disableConcurrentBuilds()
}
stages {
stage ('Staging'){
steps {
echo "looking for composer"
sh 'ls /usr/local/bin'
}
}
stage('run php') {
steps {
container('phpbuilder') {
sh 'php --version'
sh 'php src/test.php'
}
}
}
}
}
</code></pre>
<p>Please see more examples in <a href="https://plugins.jenkins.io/kubernetes/#plugin-content-declarative-pipeline" rel="nofollow noreferrer">plugin doc</a>.</p>
<p>I forgot to mention that you can also run a single container in the pod by replacing the default <code>jnlp</code>, since you were just adding stuff to the <code>jenkins-agent</code> image. You can use the following manifest to accomplish that:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    role: agent
  name: jenkins-slave
  namespace: jenkins
spec:
  serviceAccountName: jenkins-agent
  containers:
    - name: jnlp
      image: <<your custom container>>
      args: ['\$(JENKINS_SECRET)', '\$(JENKINS_NAME)']
</code></pre>
| Michał Lewndowski |
<p>The kubectl task is failing to deploy manifest files into AKS. The pipeline fails with the below error:</p>
<p><strong>##[error]No configuration file matching /home/vsts/work/1/s/manifests was found.</strong></p>
<p>The pipeline works fine when I run both stages (Build and Deploy), because the build stage creates the artifact for the manifest files, which is then downloaded in the deploy stage and deployed to AKS.</p>
<p>The issue occurs if I select only the deploy stage to run; it then fails with the above error message.</p>
<p>Pipeline</p>
<pre><code>- master
resources:
- repo: self
variables:
  tag: '$(Build.BuildId)'
  imagePullSecret: 'aks-acr-auth'
stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: Docker@2
      displayName: Build And Push Into ACR
      inputs:
        containerRegistry: 'AKS-ACR'
        repository: 'apps/web'
        command: 'buildAndPush'
        Dockerfile: '$(Build.SourcesDirectory)/app/Dockerfile'
        tags: |
          $(tag)
    - publish: manifests
      artifact: manifests
- stage: 'Deployment'
  displayName: 'Deploy To AKS'
  jobs:
  - deployment: Release
    environment: 'DEV-AKS.default'
    displayName: 'Release'
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: 'createSecret'
              kubernetesServiceConnection: 'DEV-AKS'
              secretType: 'dockerRegistry'
              secretName: '$(imagePullSecret)'
              dockerRegistryEndpoint: 'AKS-ACR'
          - task: DownloadPipelineArtifact@2
            inputs:
              buildType: 'current'
              artifactName: 'manifests'
              targetPath: '$(Pipeline.Workspace)'
          - task: Kubernetes@1
            displayName: Deploying Manifests into AKS
            inputs:
              connectionType: 'Kubernetes Service Connection'
              kubernetesServiceEndpoint: 'DEV-AKS'
              namespace: 'default'
              command: 'apply'
              useConfigurationFile: true
              configuration: 'manifests'
              secretType: 'dockerRegistry'
              containerRegistryType: 'Azure Container Registry'
</code></pre>
| Avi | <p>As per Kasun's comment, I added <code>- checkout: self</code> and <code>$(Build.SourcesDirectory)</code> to the pipeline and it works.</p>
<p>Pipeline</p>
<pre><code>- master
resources:
- repo: self
variables:
  imagePullSecret: 'acr-auth'
stages:
- stage: 'Deployment'
  displayName: 'Deploy To AKS'
  jobs:
  - deployment: Release
    environment: 'DEV-AKS.default'
    displayName: 'Release'
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: 'createSecret'
              kubernetesServiceConnection: 'DEV-AKS'
              secretType: 'dockerRegistry'
              secretName: '$(imagePullSecret)'
              dockerRegistryEndpoint: 'AKS-ACR'
          - script: dir $(Build.SourcesDirectory)/manifests
            displayName: Cloning Manifest Files From Repo
          - task: KubernetesManifest@0
            displayName: Deploying Manifests InTo AKS
            inputs:
              action: 'deploy'
              kubernetesServiceConnection: 'DEV-AKS'
              namespace: 'default'
              manifests: |
                manifests/deployment.yml
                manifests/service.yml
              imagePullSecrets: '$(imagePullSecret)'
</code></pre>
| Avi |
<p>I am deploying a nodejs application on kubernetes. After deployment the pod is up and running, but when I try to access the application through the ingress it gives a 502 bad gateway error.</p>
<p>Dockerfile</p>
<pre><code>FROM node:14
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 3123
CMD [ "node", "index.js" ]
</code></pre>
<p>Deployment.yaml</p>
<pre><code>---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "node-development"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "node-development"
  replicas: 1
  template:
    metadata:
      labels:
        app: "node-development"
    spec:
      containers:
        -
          name: "node-development"
          image: "xxx"
          imagePullPolicy: "Always"
          env:
            -
              name: "NODE_ENV"
              value: "development"
          ports:
            -
              containerPort: 47033
</code></pre>
<p>service.yaml</p>
<pre><code>---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "node-development-service"
  namespace: "development"
  labels:
    app: "node-development"
spec:
  ports:
    -
      port: 47033
      targetPort: 3123
  selector:
    app: "node-development"
</code></pre>
<p>ingress.yaml</p>
<pre><code>---
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "node-development-ingress"
  namespace: "development"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  rules:
    -
      host: "xxxx"
      http:
        paths:
          -
            backend:
              service:
                name: "node-development"
                port:
                  number: 47033
            path: "/node-development/(.*)"
            pathType: "ImplementationSpecific"
</code></pre>
<p>With the ingress, or even with the pod cluster IP, I am not able to access the application; it throws 502 bad gateway (nginx).</p>
| SVD | <p>The issue got resolved. I am using SSL in my application, and as a result it was not redirecting with the given ingress url.</p>
<p>I needed to add the below annotations in the ingress.yaml file:</p>
<pre><code>nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
</code></pre>
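<p>If someone hits a similar 502 and the annotations alone do not help, two quick checks (not part of the original fix) using the names from the manifests above are:</p>
<pre><code># does the Service have endpoints, i.e. does its selector match a running pod?
kubectl -n development get endpoints node-development-service
# which backend service and annotations did the ingress controller actually pick up?
kubectl -n development describe ingress node-development-ingress
</code></pre>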
| SVD |
<p>I am writing a script to run in Jenkins as a job - which deletes kubernetes pvc's:</p>
<pre><code>node('example') {
    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-creds') {
        docker.image('example').inside() {
            sh "kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$" | grep -B 2 "<none>" | grep -E "^Name:.*$|^Namespace:.*$" | cut -f2 -d: | paste -d " " - - | xargs -n2 bash -c 'kubectl -n ${1} delete pvc ${0}'"
        }
    }
}
</code></pre>
<p>Now when I add this in the Jenkins item configure script area... it is giving me this error:</p>
<p><a href="https://i.stack.imgur.com/orUbF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/orUbF.png" alt="enter image description here" /></a></p>
<ul>
<li><p>The error comes on line 4, which is the <code>sh "kubectl describe -A pvc ..."</code> line.</p>
</li>
<li><p>what do I need to do to fix this?</p>
</li>
</ul>
| devgirl | <p>I think the easiest way is to surround your command using <code>'''</code> (3 x single quotation marks).<br />
I've created an example to illustrate how it may work.</p>
<p>First I created two <code>PVCs</code> (<code>block-pvc</code>,<code>block-pvc2</code>) that should be removed by the script.</p>
<pre><code># kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default block-pvc Pending 9m45s
test block-pvc2 Pending 9m42s
</code></pre>
<p>Then I added command to my pipeline:</p>
<pre><code>sh '''
./kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Used By:.*$" | grep -B 2 "<none>" | grep -E "^Name:.*$|^Namespace:.*$" | cut -f2 -d: | paste -d " " - - | xargs -n2 bash -c './kubectl -n ${1} delete pvc ${0}'
'''
</code></pre>
<p>As a result in the Console Output of this build we can see that it works as expected:</p>
<pre><code>persistentvolumeclaim "block-pvc" deleted
persistentvolumeclaim "block-pvc2" deleted
</code></pre>
| matt_j |
<p>I am trying to implement blue/green deployment for my application. I am using istio <code>VirtuaService</code> for navigating to blue environment or green environment based on clientId in request header. Backends are working fine.</p>
<p>My concern is the frontend. How can I implement blue green for Angular ui at frontend? Since it’s a single page application, entire ui loads up during initial load.</p>
<p>What should be the strategy for angular blue/green deployment?</p>
| amrit ghose | <p>It's hard to give an unambiguous answer to</p>
<blockquote>
<p>What should be the strategy for angular blue / green deployment?</p>
</blockquote>
<p>It may all depend on how you set up the cluster, what your application configuration looks like, what your network settings are, and much more. However, you can use many guides on how to correctly create a blue / green deployment:</p>
<ul>
<li><a href="https://medium.com/geekculture/simple-yet-scalable-blue-green-deployments-in-aws-eks-87815aa37c03" rel="nofollow noreferrer">simple blue/green deployment in AWS EKS</a></li>
<li><a href="https://semaphoreci.com/blog/continuous-blue-green-deployments-with-kubernetes" rel="nofollow noreferrer">continuous blue/green deployments</a></li>
<li><a href="http://blog.itaysk.com/2017/11/20/deployment-strategies-defined" rel="nofollow noreferrer">blue/green deployment strategies defined</a></li>
<li><a href="https://codefresh.io/blue-green-deployments-kubernetes/" rel="nofollow noreferrer">how work blue/green deployment</a></li>
</ul>
<p>One more point to consider. You will need two separate complete environments to be able to create a blue / green update. Look at the <a href="https://stackoverflow.com/questions/42358118/blue-green-deployments-vs-rolling-deployments">differences</a> between blue/green deployments and <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">rolling update</a>:</p>
<blockquote>
<p>In <strong>Blue Green Deployment</strong>, you have <strong>TWO</strong> complete environments.
One is Blue environment which is running and the Green environment to which you want to upgrade. Once you swap the environment from blue to green, the traffic is directed to your new green environment. You can delete or save your old blue environment for backup until the green environment is stable.
In <strong>Rolling Deployment</strong>, you have only <strong>ONE</strong> complete environment.
Once you start upgrading your environment. The code is deployed in the subset of instances of the same environment and moves to another subset after completion</p>
</blockquote>
<p>So if you decide on a blue/green update, you need to create 2 separate, equivalent environments, then modify the environment with the Angular UI, and then update.</p>
| Mikołaj Głodziak |
<p>I want my default JNLP container to run with my image.
Therefore, I need to override the JNLP with my image, but keep the JNLP data so it can still connect to my master.</p>
<ol>
<li>The jnlp image I should have as a base is inbound-agent, no?</li>
<li>How can I combine it with my image if I already have "FROM UBUNTU"? Can I combine multiple base images and copy the artifacts? How can I do that and what should my dockerfile be?</li>
</ol>
<p>My own image -</p>
<pre><code>FROM ubuntu:18.04
ARG JFROG_CI_INT_USERNAME
ARG JFROG_CI_INT_PASSWORD
ARG JFROG_CI_INT_NPM_TOKEN
ARG GITHUB_ORANGE_USER
ARG GITHUB_ORANGE_PASSWORD
ARG PULUMI_USER
ARG PULUMI_TOKEN
ENV GITHUB_ORANGE_USER=$GITHUB_ORANGE_USER
ENV GITHUB_ORANGE_PASSWORD=$GITHUB_ORANGE_PASSWORD
ENV DEBIAN_FRONTEND=noninteractive
#=============
# Set WORKDIR
#=============
WORKDIR /home/jenkins
COPY requirements.txt /
COPY authorization.sh /
# Update software repository
RUN apt-get update
RUN apt-get -qqy install software-properties-common
RUN add-apt-repository ppa:git-core/ppa
RUN add-apt-repository ppa:openjdk-r/ppa
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get -qqy install git
ENV PYTHON_VER="3.9"
RUN apt-get -qqy update && \
apt-get -qqy --no-install-recommends install \
gpg-agent \
software-properties-common \
openjdk-11-jdk \
ca-certificates \
build-essential \
tzdata \
zip \
unzip \
ENTRYPOINT ["/usr/local/bin/start.sh"]
</code></pre>
| EilonA | <p>You can do a <a href="https://docs.docker.com/build/building/multi-stage/" rel="nofollow noreferrer">multi-stage build</a> but you'll end with a tightly coupled image, and you'll have to rebuild it every time you want to change the jenkins agent version.</p>
<p>There's a better (IMHO) option, using two containers. You can run an agent in kubernetes using two images: inbound-agent and your image. This is from a working pipeline that I have:</p>
<pre class="lang-js prettyprint-override"><code>pipeline {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: jnlp
image: jenkins/inbound-agent:4.10-3-alpine-jdk8
volumeMounts:
- name: home-volume
mountPath: /home/jenkins
env:
- name: HOME
value: /home/jenkins
- name: maven
image: my-registry:5000/maven-3.6.3-jdk-11:latest
command:
- sleep
args:
- 1d
volumeMounts:
- name: home-volume
mountPath: /home/jenkins
env:
- name: JAVA_TOOL_OPTIONS
value: -Dfile.encoding=UTF8
volumes:
- name: home-volume
emptyDir: {}
"""
}
}
stages {
stage('Build') {
steps {
script {
container('maven') {
sh('mvn clean deploy')
</code></pre>
<p>This way you have both images decoupled, but they run together in the same pod to make the pipeline work.</p>
| Atxulo |
<p>I have an AKS cluster with a default FQDN with the suffix "cloudapp.azure.com". I want to get a domain and apply it to the cluster, but I am not sure how to apply a custom domain to a Kubernetes cluster in Azure.</p>
<p>Can anyone help me with the steps to apply a custom domain name to an AKS cluster?</p>
| Harish Darne | <p>If I understand you correctly, you've already deployed your application on Kubernetes and want to connect it to your custom domain name.</p>
<p>For this purpose you can use <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">NGINX Ingress Controller</a>.</p>
<p>Below I will briefly describe how you can do it on AKS:</p>
<ol>
<li>First you need to create an <code>ingress controller</code> and <code>ingress resource</code>. For Azure AKS detailed instructions can be found here: <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip#create-an-ingress-controller" rel="nofollow noreferrer">create-an-ingress-controller</a>.<br />
<strong>Note:</strong> By default, the public IP address acquired by NGINX Ingress is lost
if the controller is deleted. I recommend you to
create static public IP address, because it remains if the <code>ingress controller</code> is deleted.</li>
<li>Next identify the public IP address (<code>EXTERNAL-IP</code>) associated with
your NGINX Ingress service that was created in the previous step.</li>
<li>Now you need to create an <code>A</code> DNS record to point your domain to the cluster (see the CLI sketch after this list).
Additionally you may want to provide a <code>CNAME</code> record, but it isn't mandatory and depends on your needs.<br>It is possible to create an <code>Azure DNS Zone</code> for your
custom domain and then add appropriate record sets to this zone.<br />
<strong>Note:</strong> Azure DNS is not the domain registrar; you have to configure the
Azure DNS name servers as the correct name servers for
the domain name with the domain name registrar. For more
information, see <a href="https://learn.microsoft.com/en-us/azure/dns/dns-domain-delegation" rel="nofollow noreferrer">Delegate a domain to Azure DNS</a>.</li>
</ol>
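<p>As a rough sketch of step 3 with the Azure CLI (the domain, resource group and record name below are hypothetical):</p>
<pre><code># create the DNS zone for the custom domain and point an A record at the ingress EXTERNAL-IP
az network dns zone create --resource-group myResourceGroup --name example.com
az network dns record-set a add-record --resource-group myResourceGroup \
  --zone-name example.com --record-set-name www --ipv4-address <EXTERNAL-IP>
</code></pre>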
| matt_j |
<p>I’m having a hard time understanding what’s going on with my horizontal pod autoscaler.</p>
<p>I’m trying to scale up my deployment if the memory or cpu usage goes above 80%.</p>
<p>Here’s my HPA template:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deployment
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
</code></pre>
<p>The thing is, it’s been sitting at 3 replicas for days even though the usage is below 80% and I don’t understand why.</p>
<pre><code>$ kubectl get hpa --all-namespaces
NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
my-ns my-hpa Deployment/my-deployment 61%/80%, 14%/80% 2 10 3 2d15h
</code></pre>
<p>Here’s the output of the top command:</p>
<pre><code>$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
my-deployment-86874588cc-chvxq 3m 146Mi
my-deployment-86874588cc-gkbg9 5m 149Mi
my-deployment-86874588cc-nwpll 7m 149Mi
</code></pre>
<p>Each pod consumes approximately 60% of their requested memory (So they are below the 80% target):</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "200m"
</code></pre>
<p>Here's my deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: my-deployment
labels:
app: my-app
spec:
replicas: 2
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: ...
imagePullPolicy: Always
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "200m"
livenessProbe:
httpGet:
path: /liveness
port: 3000
initialDelaySeconds: 10
periodSeconds: 3
timeoutSeconds: 3
readinessProbe:
httpGet:
path: /readiness
port: 3000
initialDelaySeconds: 10
periodSeconds: 3
timeoutSeconds: 3
ports:
- containerPort: 3000
protocol: TCP
</code></pre>
<p>I manually scale down to 2 replicas and it goes back up to 3 right away for no reason: </p>
<pre><code>Normal SuccessfulRescale 28s (x4 over 66m) horizontal-pod-autoscaler New size: 3; reason:
</code></pre>
<p>Anyone have any idea what’s going on?</p>
| Etienne Martin | <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details</a></p>
<p>As per your current numbers, it will never scale down unless your memory usage goes down to half of the desired percentage.</p>
<p>i.e. the current utilization of both cpu and memory should go to 40% (in your case) or below,</p>
<p>as per the below formula:</p>
<pre><code>desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
= ceil[3 * (61/80)]
= ceil[3 * (0.7625)]
= ceil[2.2875]
desiredReplicas = 3
</code></pre>
<p>You might have a doubt: your cpu is below 40%, so why is it not downscaling? But HPA does not work that way; it always goes by the larger number.</p>
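<p>Applying the same formula to the CPU metric from the question shows why memory is the one that matters here (the controller takes the largest proposal across all configured metrics):</p>
<pre><code>cpu:    desiredReplicas = ceil[3 * (14/80)] = ceil[0.525]  = 1
memory: desiredReplicas = ceil[3 * (61/80)] = ceil[2.2875] = 3
max(1, 3) = 3, so the HPA stays at 3 replicas
</code></pre>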
| Mukesh Baskaran |
<p>I have a helm chart and I want to add it to my gitlab repository. But when I run:</p>
<pre><code>helm repo add repo_name url
</code></pre>
<p>I am getting the following error:</p>
<pre><code>Error: looks like "https://gitlab.<domain>.com/group/infra/repo/helm/charts/" is not a valid chart repository or cannot be reached: error converting YAML to JSON: yaml: line 3: mapping values are not allowed in this context
</code></pre>
<p>Linter shows it is a valid chart.</p>
<p>Here is <code>index.yaml</code>:</p>
<pre><code>apiVersion: v1
entries:
  helloworld:
  - apiVersion: v2
    appVersion: 1.0.0
    created: "2021-06-28T14:05:53.974207+01:00"
    description: This Helm chart will be used to create hello world
    digest: f290432f0280fe3f66b126c28a0bb21263d64fd8f73a16808ac2070b874619e7
    name: helloworld
    type: application
    urls:
    - https://gitlab.<domain>.com/group/infra/repo/helm/charts/helloworld-0.1.0.tgz
    version: 0.1.0
generated: "2021-06-28T14:05:53.973549+01:00"
</code></pre>
<p>Not sure what is missing here.</p>
| Ram | <p>It looks like you want to use the helm chart that is hosted on GitLab. Unfortunately, it won't work the way you want it to. As <a href="https://stackoverflow.com/users/1518100/lei-yang">Lei Yang</a> mentioned in the comment:</p>
<blockquote>
<p><code>helm</code> repo and <code>git</code> repo are different things.</p>
</blockquote>
<p>In the official documentation of Helm, you can find <a href="https://helm.sh/docs/topics/chart_repository/#create-a-chart-repository" rel="nofollow noreferrer">The Chart Repository Guide</a>.
There you can also find a guide on <a href="https://helm.sh/docs/topics/chart_repository/#create-a-chart-repository" rel="nofollow noreferrer">how to create a chart repository</a>:</p>
<blockquote>
<p>A <em>chart repository</em> is an HTTP server that houses an <code>index.yaml</code> file and optionally some packaged charts. When you're ready to share your charts, the preferred way to do so is by uploading them to a chart repository.</p>
</blockquote>
<p>Here you can find a section on how to properly <a href="https://helm.sh/docs/topics/chart_repository/#hosting-chart-repositories" rel="nofollow noreferrer">host chart repos</a>. There are several ways to do this - for example you can use a Google Cloud Storage (GCS) bucket, an Amazon S3 bucket, GitHub Pages, or even create your own web server.</p>
<p>You can also use the <a href="https://chartmuseum.com/docs/#using-with-local-filesystem-storage" rel="nofollow noreferrer">ChartMuseum</a> server to host a chart repository from a local file system.</p>
<blockquote>
<p>ChartMuseum is an open-source Helm Chart Repository server written in Go (Golang), with support for cloud storage backends, including <a href="https://cloud.google.com/storage/" rel="nofollow noreferrer">Google Cloud Storage</a>, <a href="https://aws.amazon.com/s3/" rel="nofollow noreferrer">Amazon S3</a>, <a href="https://azure.microsoft.com/en-us/services/storage/blobs/" rel="nofollow noreferrer">Microsoft Azure Blob Storage</a>, <a href="https://www.alibabacloud.com/product/oss" rel="nofollow noreferrer">Alibaba Cloud OSS Storage</a>, <a href="https://developer.openstack.org/api-ref/object-store/" rel="nofollow noreferrer">Openstack Object Storage</a>, <a href="https://cloud.oracle.com/storage" rel="nofollow noreferrer">Oracle Cloud Infrastructure Object Storage</a>, <a href="https://cloud.baidu.com/product/bos.html" rel="nofollow noreferrer">Baidu Cloud BOS Storage</a>, <a href="https://intl.cloud.tencent.com/product/cos" rel="nofollow noreferrer">Tencent Cloud Object Storage</a>, <a href="https://www.digitalocean.com/products/spaces/" rel="nofollow noreferrer">DigitalOcean Spaces</a>, <a href="https://min.io/" rel="nofollow noreferrer">Minio</a>, and <a href="https://etcd.io/" rel="nofollow noreferrer">etcd</a>.</p>
</blockquote>
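<p>For example, a local ChartMuseum instance can be started roughly like this (a sketch; the image tag and paths are illustrative) and then added as a repo:</p>
<pre><code>docker run --rm -it -p 8080:8080 \
  -v "$(pwd)/charts:/charts" \
  -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts \
  ghcr.io/helm/chartmuseum:v0.14.0
helm repo add my-charts http://localhost:8080
</code></pre>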
<p>Alternatively, it is also possible to <a href="https://jfrog.com/blog/host-your-helm-chart-in-chartcenter-directly-from-source/" rel="nofollow noreferrer">host helm charts in JFrog</a>.</p>
| Mikołaj Głodziak |
<p>I created a react native application with expo, and I use nodejs for the backend.
My application is ready and works well locally. Now I would like to deploy it, and I would like to know the good methods to deploy it. Should I use docker, kubernetes, etc., and what platforms would you recommend?</p>
<p>Thank you</p>
| Anicet | <p>Welcome to the stackoverflow community!</p>
<p>For the frontend I would suggest deploying it to Google Play on Android, or the App Store on iOS. But you would need to pay a fee of $99 per year to deploy apps to the Apple App Store.</p>
<p>If you would like to deploy to other platforms, try to deploy to the official stores, because official stores have more traffic and can be trusted by more people, thus getting more customers or users to your app.</p>
<p>For the backend I would use Heroku to deploy my backend code. I have many projects on Heroku and it works fine; it also has a free plan for hosting your app. But it's not just flowers and roses: Heroku is quite hard to deal with and their service is not the best in my experience. If you are looking for an enterprise option, I suggest Google Cloud or Firebase. It may cost some money; however, the performance, the service and the user interface are way better than Heroku.</p>
<p>More information about heroku: <code>https://heroku.com/</code></p>
<p>More information about google cloud: <code>https://cloud.google.com</code></p>
<p>More information about firebase: <code>https://firebase.google.com/</code></p>
<p>More information about how to deploy: <code>https://docs.expo.dev/distribution/app-stores/</code></p>
<p>More information about Apple app store: <code>https://developer.apple.com/programs/</code></p>
<p>More information about google play store: <code>https://play.google.com/console/about/guides/releasewithconfidence/</code></p>
<p>Tutorials that may be useful:</p>
<p><code>https://www.youtube.com/watch?v=6IPr7oOugTs</code></p>
<p><code>https://www.youtube.com/watch?v=4D3X6Xl5c_Y</code></p>
<p><code>https://www.youtube.com/watch?v=oWK7kesoCQY</code></p>
<p>Hope this helps!</p>
<p>NOTE: I'm not sponsored by ANY of the companies above, and I'm just a regular human being on the internet.</p>
| Alvin CHENG |
<p>Sometimes when I run <code>docker ps -a</code>, I see about 5-10 containers in <code>Exited</code> status. For example, after one or two hours I see them.</p>
<p>Why are there such exited containers and why are they in <code>Exited</code> status?</p>
<pre><code>9f6bd4fa5a05 8522d622299c "/opt/bin/flanneld -…" About a minute ago Exited (1) About a minute ago k8s_kube-flannel_kube-flannel-ds-x8s8v_kube-system_024c58c6-9cc2-4c2a-b0d3-12a49101a57b_3
7203f2beaaef 8522d622299c "cp -f /etc/kube-fla…" 2 minutes ago Exited (0) About a minute ago k8s_install-cni_kube-flannel-ds-x8s8v_kube-system_024c58c6-9cc2-4c2a-b0d3-12a49101a57b_1
e4ade2e2617c 09708983cc37 "kube-controller-man…" 2 minutes ago Exited (2) 2 minutes ago k8s_kube-controller-manager_kube-controller-manager-localhost_kube-system_00f8734ea34e7389b55cb740d1fcf000_2
1a3762bfa98c k8s.gcr.io/pause:3.4.1 "/pause" 2 minutes ago Exited (0) 2 minutes ago k8s_POD_kube-controller-manager-localhost_kube-system_00f8734ea34e7389b55cb740d1fcf000_2
65555c2db5c6 nginx "/docker-entrypoint.…" 5 hours ago Exited (0) 2 minutes ago k8s_task-pv-container_task-pv-pod_default_8bfa648a-ddb0-4531-8984-81d4ce8db444_0
a6b99bcc020a k8s.gcr.io/pause:3.4.1 "/pause" 5 hours ago Exited (0) 2 minutes ago k8s_POD_task-pv-pod_default_8bfa648a-ddb0-4531-8984-81d4ce8db444_0
b625c68f4fce 296a6d5035e2 "/coredns -conf /etc…" 6 hours ago Exited (0) 2 minutes ago k8s_coredns_coredns-558bd4d5db-c88d8_kube-system_6170d444-f992-4163-aff0-480cd1ed1ce4_1
7897a49ca8c6 296a6d5035e2 "/coredns -conf /etc…" 6 hours ago Exited (0) 2 minutes ago k8s_coredns_coredns-558bd4d5db-z8kfs_kube-system_b34f29d6-bd72-40ba-a4ba-102e71965f98_1
9b7cac87b4bb k8s.gcr.io/pause:3.4.1 "/pause" 6 hours ago Exited (0) 2 minutes ago k8s_POD_coredns-558bd4d5db-z8kfs_kube-system_b34f29d6-bd72-40ba-a4ba-102e71965f98_10
02661c9b507e k8s.gcr.io/pause:3.4.1 "/pause" 6 hours ago Exited (0) 2 minutes ago k8s_POD_coredns-558bd4d5db-c88d8_kube-system_6170d444-f992-4163-aff0-480cd1ed1ce4_10
0f31cf13fc3a 38ddd85fe90e "/usr/local/bin/kube…" 6 hours ago Exited (2) 2 minutes ago k8s_kube-proxy_kube-proxy-s9rf6_kube-system_ff2a1ef4-7316-47c4-8584-aac1a1cd63d5_1
91a53859a423 k8s.gcr.io/pause:3.4.1 "/pause" 6 hours ago Exited (0) 2 minutes ago k8s_POD_kube-proxy-s9rf6_kube-system_ff2a1ef4-7316-47c4-8584-aac1a1cd63d5_1
a4833a1301a4 4d217480042e "kube-apiserver --ad…" 6 hours ago Exited (137) 2 minutes ago k8s_kube-apiserver_kube-apiserver-localhost_kube-system_bbc60c89c6c6215aca670678072d707c_1
e4d76060736a k8s.gcr.io/pause:3.4.1 "/pause" 6 hours ago Exited (0) 2 minutes ago k8s_POD_kube-apiserver-localhost_kube-system_bbc60c89c6c6215aca670678072d707c_1
7c7f09ab3633 0369cf4303ff "etcd --advertise-cl…" 6 hours ago Exited (0) 2 minutes ago k8s_etcd_etcd-localhost_kube-system_146e48c8e9d3b464d1f47c594a8becc8_1
0204740e75b0 k8s.gcr.io/pause:3.4.1 "/pause" 6 hours ago Exited (0) 2 minutes ago k8s_POD_etcd-localhost_kube-system_146e48c8e9d3b464d1f47c594a8becc8_1
8631dd16c767 62ad3129eca8 "kube-scheduler --au…" 6 hours ago Exited (2) 2 minutes ago k8s_kube-scheduler_kube-scheduler-localhost_kube-system_7d7118d190aeaaf065b4e86b4982f05f_1
a59ee59475cd k8s.gcr.io/pause:3.4.1 "/pause" 6 hours ago Exited (0) 2 minutes ago k8s_POD_kube-scheduler-localhost_kube-system_7d7118d190aeaaf065b4e86b4982f05f_1
</code></pre>
<p>The log of one of them:</p>
<pre><code>docker logs k8s_kube-proxy_kube-proxy-s9rf6_kube-system_ff2a1ef4-7316-47c4-8584-aac1a1cd63d5_1
I0622 09:48:20.971080 1 node.go:172] Successfully retrieved node IP: 192.168.43.85
I0622 09:48:20.971170 1 server_others.go:140] Detected node IP 192.168.43.85
W0622 09:48:20.971200 1 server_others.go:592] Unknown proxy mode "", assuming iptables proxy
I0622 09:48:25.066916 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0622 09:48:25.067112 1 server_others.go:212] Using iptables Proxier.
I0622 09:48:25.067154 1 server_others.go:219] creating dualStackProxier for iptables.
W0622 09:48:25.067185 1 server_others.go:506] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0622 09:48:25.187175 1 server.go:643] Version: v1.21.0
I0622 09:48:25.407303 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0622 09:48:25.407351 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0622 09:48:25.407727 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0622 09:48:25.477081 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0622 09:48:25.477170 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0622 09:48:25.517411 1 config.go:315] Starting service config controller
I0622 09:48:25.517679 1 config.go:224] Starting endpoint slice config controller
I0622 09:48:25.531370 1 shared_informer.go:240] Waiting for caches to sync for service config
I0622 09:48:25.548740 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0622 09:48:25.665718 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 09:48:25.667197 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0622 09:48:25.731527 1 shared_informer.go:247] Caches are synced for service config
I0622 09:48:25.753275 1 shared_informer.go:247] Caches are synced for endpoint slice config
W0622 09:54:21.682298 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 10:01:18.747213 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 10:10:27.734119 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 10:18:50.736315 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 10:28:33.903205 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 10:36:05.938074 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 10:43:23.940703 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 10:49:20.951579 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 10:58:54.953921 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 11:08:28.961630 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 11:14:26.988498 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 11:20:12.010192 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 11:29:09.023270 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:00:28.838837 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:05:46.840726 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:10:50.843578 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:19:07.846779 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:26:46.848443 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:34:57.853658 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:42:00.887206 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:50:03.935106 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 14:57:18.937677 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 15:03:41.955870 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 15:11:26.957845 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0622 15:17:51.979285 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
</code></pre>
<pre><code>docker logs k8s_kube-flannel_kube-flannel-ds-x8s8v_kube-system_024c58c6-9cc2-4c2a-b0d3-12a49101a57b_3
I0622 15:19:21.070951 1 main.go:520] Determining IP address of default interface
I0622 15:19:21.071786 1 main.go:533] Using interface with name enp0s3 and address 192.168.43.85
I0622 15:19:21.073223 1 main.go:550] Defaulting external address to interface address (192.168.43.85)
W0622 15:19:21.073336 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0622 15:19:31.157536 1 main.go:251] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-x8s8v': Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-x8s8v": net/http: TLS handshake timeout
</code></pre>
| Saeed | <p>You are using a deprecated API:</p>
<pre><code>discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
</code></pre>
<p>Based on <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/" rel="nofollow noreferrer">this documentation</a>:</p>
<blockquote>
<h4>EndpointSlice<a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#endpointslice-v125" rel="nofollow noreferrer"></a></h4>
<p>The <strong>discovery.k8s.io/v1beta1</strong> API version of EndpointSlice will no longer be served in v1.25.</p>
<ul>
<li>Migrate manifests and API clients to use the <strong>discovery.k8s.io/v1</strong> API version, available since v1.21.</li>
<li>All existing persisted objects are accessible via the new API</li>
<li>Notable changes in <strong>discovery.k8s.io/v1</strong>:
<ul>
<li>use per Endpoint <code>nodeName</code> field instead of deprecated <code>topology["kubernetes.io/hostname"]</code> field</li>
<li>use per Endpoint <code>zone</code> field instead of deprecated <code>topology["topology.kubernetes.io/zone"]</code> field</li>
<li><code>topology</code> is replaced with the <code>deprecatedTopology</code> field which is not writable in v1</li>
</ul>
</li>
</ul>
</blockquote>
<p>You need to find all uses of deprecated APIs and migrate them to supported versions.</p>
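<p>One way to spot remaining callers (a hedged sketch; the deprecation metric is exposed by reasonably recent API servers) is to query the API server metrics and to read the resource through the supported API group:</p>
<pre><code># Which deprecated APIs have been requested recently
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

# Confirm the supported discovery.k8s.io/v1 EndpointSlice API is served
kubectl get endpointslices.v1.discovery.k8s.io -A
</code></pre>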
| Mikołaj Głodziak |
<p>I am not able to iterate over a range using Helm templating for NetworkPolicies to allow egress with ports to an ipBlock. Below is my values.yaml:</p>
<pre><code>networkPolicy:
ports:
- port: xxxx
cidrs:
- ipBlock:
cidr: x.x.x.x/32
- ipBlock:
cidr: x.x.x.x/32
- port: xxxx
cidrs:
- ipBlock:
cidr: x.x.x.x/32
- ipBlock:
cidr: x.x.x.x/32
</code></pre>
<p>And my template file is</p>
<pre><code>spec:
podSelector:
matchLabels:
policy: allow
policyTypes:
- Egress
egress:
{{- range $item := .Values.networkPolicy.ports}}
- ports:
- port: {{$item.port}}
protocol: TCP
to:
{{$item.cidrs | nindent 4 }}
{{- end }}
</code></pre>
<p>This is what I get when I try to template:</p>
<pre><code>spec:
podSelector:
matchLabels:
policy: allow
policyTypes:
- Egress
egress:
</code></pre>
<p>What is expected</p>
<pre><code>spec:
podSelector:
matchLabels:
policy: allow
policyTypes:
- Egress
egress:
- ports:
- port: xxxx
protocol: TCP
to:
- ipBlock:
cidr: x.x.x.x/32
- ipBlock:
cidr: x.x.x.x/32
- ports:
- port: xxxx
protocol: TCP
to:
- ipBlock:
cidr: x.x.x.x/32
- ipBlock:
cidr: x.x.x.x/32
</code></pre>
<p>Thanks in advance!</p>
| Divin D | <p>I have changed it to the following and it worked. Ranging over each port entry and then over its <code>ipBlock</code> list writes the <code>cidr</code> fields out explicitly, so the template renders proper YAML instead of relying on <code>nindent</code> to serialize the whole list.</p>
<p>values.yaml</p>
<pre><code>networkPolicy:
ports:
- port: xxxx
ipBlock:
- cidr: x.x.x.x/32
- cidr: x.x.x.x/32
- port: xxx
ipBlock:
- cidr: x.x.x.x/32
- cidr: x.x.x.x/32
</code></pre>
<p>And my template file is</p>
<pre><code>spec:
podSelector:
matchLabels:
nwp: allow-backends
policyTypes:
- Egress
egress:
{{- range $item := .Values.networkPolicy.ports}}
- ports:
- port: {{ $item.port }}
protocol: TCP
to:
{{- range $item.ipBlock }}
- ipBlock:
cidr: {{ .cidr }}
{{- end }}
{{- end }}
</code></pre>
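<p>As a quick sanity check (the release name and chart path below are placeholders), you can render the chart locally and inspect the egress block before installing:</p>
<pre><code>helm template my-release . -f values.yaml
</code></pre>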
| Divin D |
<p>I want to use the same hostname let's say <code>example.com</code> with multiple Ingress resources running in different namespaces i.e <code>monitoring</code> and <code>myapp</code>. I'm using <strong>Kubernetes nginx-ingress</strong> controller.</p>
<p><strong>haproxy-ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: haproxy-ingress
namespace: myapp
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
tls:
- hosts:
# fill in host here
- example.com
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: haproxy
port:
number: 80
</code></pre>
<p><strong>grafana-ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana-ingress
namespace: monitoring
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- example.com
rules:
- host: example.com
http:
paths:
# only match /grafana and paths under /grafana/
- path: /grafana(/|$)(.*)
pathType: Prefix
backend:
service:
name: grafana
port:
number: 3000
</code></pre>
<p>When I do <code>curl example.com</code>, it redirects me to the deployment running in the first namespace (as expected), but when I do <code>curl example.com/grafana</code>, it still redirects me to the first namespace's deployment.</p>
<p>Please help.</p>
| metadata | <p>Yes it is possible.</p>
<p>There can be two issues in your case.</p>
<p>One is that you don't need the regex path for the grafana ingress. A simple <code>/grafana</code> path will be fine with path type <code>Prefix</code>, as with path type <code>Prefix</code> any <code>/grafana/...</code> request will be redirected to the associated service. So the manifest file will be:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana-ingress
namespace: monitoring
spec:
tls:
- hosts:
- example.com
rules:
- host: example.com
http:
paths:
- path: /grafana
pathType: Prefix
backend:
service:
name: grafana
port:
number: 3000
</code></pre>
<p>And the second issue can be that the related service or deployment is not under the same namespace, <code>monitoring</code>. Please make sure the deployment/service/secret and other resources needed for grafana remain under the same <code>monitoring</code> namespace.</p>
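<p>If requests to <code>/grafana</code> still land on the wrong backend, it can help to confirm what the ingress controller actually picked up:</p>
<pre><code>kubectl get ingress -A
kubectl describe ingress grafana-ingress -n monitoring
</code></pre>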
| Pulak Kanti Bhowmick |
<p>We recently started using <a href="https://istio.io" rel="noreferrer">Istio</a> to establish a service mesh within our <a href="http://kubernetes.io" rel="noreferrer">Kubernetes</a> landscape.</p>
<p>We now have the problem that jobs and cronjobs do not terminate and keep running forever if we inject the istio <code>istio-proxy</code> sidecar container into them. The <code>istio-proxy</code> should be injected though to establish proper mTLS connections to the services the job needs to talk to and comply with our security regulations.</p>
<p>I also noticed the open issues within Istio (<a href="https://github.com/istio/istio/issues/6324" rel="noreferrer">istio/issues/6324</a>) and kubernetes (<a href="https://github.com/kubernetes/kubernetes/issues/25908" rel="noreferrer">kubernetes/issues/25908</a>), but both do not seem to provide a valid solution anytime soon.</p>
<p>At first a pre-stop hook seemed suitable to solve this issue, but there is some confusion about this concept itself: <a href="https://github.com/kubernetes/kubernetes/issues/55807" rel="noreferrer">kubernetes/issues/55807</a></p>
<pre><code>lifecycle:
preStop:
exec:
command:
...
</code></pre>
<p>Bottom line: those hooks will not be executed if the container completed successfully.</p>
<p>There are also some relatively new projects on GitHub trying to solve this with a dedicated controller (which I think is the most preferable approach), but to our team they do not feel mature enough to put them right away into production:</p>
<ul>
<li><a href="https://github.com/nrmitchi/k8s-controller-sidecars" rel="noreferrer">k8s-controller-sidecars</a></li>
<li><a href="https://github.com/cropse/K8S-job-sidecar-terminator" rel="noreferrer">K8S-job-sidecar-terminator</a> </li>
</ul>
<p>In the meantime, we ourselves ended up with the following workaround that execs into the sidecar and sends a <code>SIGTERM</code> signal, but only if the main container finished successfully:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: terminate-sidecar-example-service-account
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: terminate-sidecar-example-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get","delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: terminate-sidecar-example-rolebinding
subjects:
- kind: ServiceAccount
name: terminate-sidecar-example-service-account
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: terminate-sidecar-example-role
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: terminate-sidecar-example-cronjob
labels:
app: terminate-sidecar-example
spec:
schedule: "30 2 * * *"
jobTemplate:
metadata:
labels:
app: terminate-sidecar-example
spec:
template:
metadata:
labels:
app: terminate-sidecar-example
annotations:
sidecar.istio.io/inject: "true"
spec:
serviceAccountName: terminate-sidecar-example-service-account
containers:
- name: ****
image: ****
command:
- "/bin/ash"
- "-c"
args:
- node index.js && kubectl exec -n ${POD_NAMESPACE} ${POD_NAME} -c istio-proxy -- bash -c "sleep 5 && /bin/kill -s TERM 1 &"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
</code></pre>
<p>So, the ultimate question to all of you is: <strong>Do you know of any better workaround, solution, controller, ... that would be less hacky / more suitable to terminate the <code>istio-proxy</code> container once the main container finished its work?</strong></p>
| croeck | <pre><code>- command:
- /bin/sh
- -c
- |
until curl -fsI http://localhost:15021/healthz/ready; do echo \"Waiting for Sidecar...\"; sleep 3; done;
echo \"Sidecar available. Running the command...\";
<YOUR_COMMAND>;
x=$(echo $?); curl -fsI -X POST http://localhost:15020/quitquitquit && exit $x
</code></pre>
<p><strong>Update:</strong> sleep loop can be omitted if <code>holdApplicationUntilProxyStarts</code> is set to <code>true</code> (globally or as an annotation) starting with <a href="https://istio.io/latest/news/releases/1.7.x/announcing-1.7/change-notes/" rel="noreferrer">istio 1.7</a></p>
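<p>For reference, a minimal sketch of enabling this per pod via the <code>proxy.istio.io/config</code> annotation (assuming Istio 1.7+; please verify the exact syntax against the Istio docs for your version):</p>
<pre><code>template:
  metadata:
    annotations:
      proxy.istio.io/config: |
        holdApplicationUntilProxyStarts: true
</code></pre>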
| Dimitri Kuskov |
<p>I'm trying to configure my Kubernetes application so that my frontend application can speak to my backend, which is running on another deployment and is exposed via a ClusterIP service.</p>
<p>Currently, the frontend application is serving up some static content through Nginx. The configuration for that server is located inside a mounted configuration. I've got the <code>/</code> route serving up my static content to users, and I'd like to configure another route in the server block to point to my backend, at <code>/api</code> but I'm not sure how to direct that at the ClusterIP service for my other deployment.</p>
<p>The full frontend deployment file looks like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-conf
data:
nginx.conf: |
## To make changes to the configuration
## You use the kubectl rollout restart nginx command.
events {}
http {
include /etc/nginx/mime.types;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/extra-conf.d/*.conf;
server {
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
#location /api {
##
## Send traffic to my API, running on another Kubernetes deployment, here...
## }
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: mydockerusername/ts-react
imagePullPolicy: Always
ports:
- containerPort: 80
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: true
volumes:
- name: nginx-conf
configMap:
name: nginx-conf
</code></pre>
<p>My backend API is exposed via a ClusterIP Service on PORT 1234, and looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: typeorm
spec:
selector:
matchLabels:
app: typeorm # Find and manage all the apps with this label
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: typeorm # Create apps with this label
spec:
containers:
- image: mydockerusername/typeorm
imagePullPolicy: Always
name: typeorm
ports:
- containerPort: 1234
env:
- name: ENV
value: "production"
envFrom:
- secretRef:
name: typeorm-config
---
apiVersion: v1
kind: Service
metadata:
name: typeorm
labels:
app: typeorm
spec:
type: ClusterIP
ports:
- port: 1234
targetPort: 1234
selector:
app: typeorm
</code></pre>
| Harrison Cramer | <p>You can't expose your ClusterIP service through the nginx config file here, as a ClusterIP service is only available inside Kubernetes. You need an nginx ingress controller and an Ingress resource to expose your ClusterIP service to the outside world.</p>
<p>You can use an Ingress resource to expose your ClusterIP service on the /api path.
Your ingress manifest file will look like the one below.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: demo-ingress
spec:
rules:
- host: foo.bar.com #your server address here
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: typeorm
port:
number: 1234
</code></pre>
<p>You can even use just one Ingress to expose both your frontend and backend. But for that you need another Service pointing to the frontend deployment. Then your manifest file would look something like the one below:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: simple-fanout-example
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend-service
port:
number: 80
- path: /api
pathType: Prefix
backend:
service:
name: backend-service
port:
number: 1234
</code></pre>
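<p>Once an ingress controller is installed and this manifest is applied, a quick routing test from outside the cluster could look like this (the address is a placeholder for your controller's external IP):</p>
<pre><code>curl -H "Host: foo.bar.com" http://<INGRESS_CONTROLLER_IP>/
curl -H "Host: foo.bar.com" http://<INGRESS_CONTROLLER_IP>/api
</code></pre>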
| Pulak Kanti Bhowmick |
<p>I have a Python application and it utilizes environment variables in Kubernetes configuration such as:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
namespace: default
data:
var1: foo
var2: bar
---
apiVersion: v1
kind: Pod
metadata:
name: my-pod
namespace: default
spec:
containers:
- envFrom:
- configMapRef:
name: my-config
</code></pre>
<p>So, it is fine when the app is Dockerized and running on a Kubernetes farm.</p>
<p>However, when running the application on a <strong>local machine</strong> without Docker and Kubernetes but a humble command:</p>
<pre class="lang-py prettyprint-override"><code>python app.py
</code></pre>
<p>I have to make the Python modules find the environment variables with <code>os.getenv('var1')</code> even if there is no ConfigMap or Pod on.</p>
<p>Is it possible to do this without adding extra code to the Python modules or adding environment variables to the local machine's system?</p>
| vahdet | <p>In a shell, you could also simply temporarily assign the value(s) for the environment variable(s) right before calling the script. No need to change your script at all.</p>
<p>Consider the following <code>app.py</code> which just prints the environment variables <code>ENV_PARAM</code> and <code>ENV_PARAM2</code>:</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python3
import os
print(os.environ['ENV_PARAM'])
print(os.environ['ENV_PARAM2'])
</code></pre>
<p>When the vars are not set and you call it like this</p>
<pre class="lang-bash prettyprint-override"><code>python app.py
</code></pre>
<p>you will get a <code>KeyError</code>.</p>
<blockquote>
<p>KeyError: 'ENV_PARAM'</p>
</blockquote>
<p>When you instead specify the values in the same line and call it like this</p>
<pre class="lang-bash prettyprint-override"><code>ENV_PARAM='foo' ENV_PARAM2='bar' python app.py
</code></pre>
<p>it works fine. Output:</p>
<pre><code>foo
bar
</code></pre>
<p>This will not set the environment variable beyond that, so if you do</p>
<pre class="lang-bash prettyprint-override"><code>echo "$ENV_PARAM"
</code></pre>
<p>afterwards, it will return nothing. The environment variable was only set temporary, like you required.</p>
| buddemat |
<h1>Problem</h1>
<p>I have generated keys and certificates by OpenSSL with the secp256k1, run <code>rke</code> version v1.2.8 from the Rancher Kubernetes Engine (RKE), and got the following error:</p>
<pre><code>FATA[0000] Failed to read certificates from dir [/home/max/cluster_certs]: failed to read certificate [kube-apiserver-requestheader-ca.pem]: x509: unsupported elliptic curve
</code></pre>
<p><code>kubectl version</code>:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I have generated the root CA key and certificate the following way:</p>
<pre><code>openssl ecparam -name secp256k1 -genkey -noout -out ca-pvt.pem -rand random.bin -writerand random.bin
openssl req -config .\openssl.cnf -x509 -sha256 -new -nodes -key ca-pvt.pem -days 10227 -out ca-cert.cer -rand random.bin -writerand random.bin
</code></pre>
<p>Then I used it to sign the CSRs generated by <code>rke cert generate-csr</code> from my Kubernetes Rancher <code>cluster.yml</code>.</p>
<p>The command line to approve a CSR was the following:</p>
<pre><code>openssl ca -config openssl.cnf -batch -in %1 -out %2 -create_serial -notext -rand random.bin -writerand random.bin
</code></pre>
<h1>Question</h1>
<p>Which curves are supported today by Kubernetes for the certificates if <code>secp256k1</code> yields the <code>x509: unsupported elliptic curve</code> error message?</p>
<h1>P.S.</h1>
<p>I have also tried the <code>prime256v1</code>, also known as <code>secp256r1</code>. It progressed further comparing to <code>secp256k1</code>, but still got an error.</p>
<p>With <code>prime256v1</code>, RKE did not complain <code>x509: unsupported elliptic curve</code>.</p>
<p>Instead, it gave an error <code>panic: interface conversion: interface {} is *ecdsa.PrivateKey, not *rsa.PrivateKey</code>. Here is the full error message:</p>
<pre><code>DEBU[0000] Certificate file [./cluster_certs/kube-apiserver-requestheader-ca.pem] content is greater than 0
panic: interface conversion: interface {} is *ecdsa.PrivateKey, not *rsa.PrivateKey
goroutine 1 [running]:
github.com/rancher/rke/pki.getKeyFromFile(0x7ffe6294c74e, 0xf, 0xc00105cb10, 0x27, 0x8, 0xc00105cb10, 0x27)
/go/src/github.com/rancher/rke/pki/util.go:656 +0x212
</code></pre>
| Maxim Masiutin | <blockquote>
<p>Which curves are supported today by Kubernetes for the certificates if <code>secp256k1</code> yields the <code>x509: unsupported elliptic curve</code> error message?</p>
</blockquote>
<p>To try to answer this question, I will look directly at the <a href="https://go.googlesource.com/go/+/8bf6e09f4cbb0242039dd4602f1f2d58e30e0f26/src/crypto/x509/x509.go" rel="noreferrer">source code</a>. There you can find the lines that give the <code>unsupported elliptic curve</code> error:</p>
<pre><code>case *ecdsa.PublicKey:
publicKeyBytes = elliptic.Marshal(pub.Curve, pub.X, pub.Y)
oid, ok := oidFromNamedCurve(pub.Curve)
if !ok {
return nil, pkix.AlgorithmIdentifier{}, errors.New("x509: unsupported elliptic curve")
}
</code></pre>
<p>There are two functions here that are responsible for processing the curve:</p>
<ul>
<li>Marshal:</li>
</ul>
<pre><code>// Marshal converts a point on the curve into the uncompressed form specified in
// section 4.3.6 of ANSI X9.62.
func Marshal(curve Curve, x, y *big.Int) []byte {
byteLen := (curve.Params().BitSize + 7) / 8
ret := make([]byte, 1+2*byteLen)
ret[0] = 4 // uncompressed point
x.FillBytes(ret[1 : 1+byteLen])
y.FillBytes(ret[1+byteLen : 1+2*byteLen])
return ret
}
</code></pre>
<ul>
<li>oidFromNamedCurve:</li>
</ul>
<pre><code>// OIDFromNamedCurve returns the OID used to specify the use of the given
// elliptic curve.
func OIDFromNamedCurve(curve elliptic.Curve) (asn1.ObjectIdentifier, bool) {
switch curve {
case elliptic.P224():
return OIDNamedCurveP224, true
case elliptic.P256():
return OIDNamedCurveP256, true
case elliptic.P384():
return OIDNamedCurveP384, true
case elliptic.P521():
return OIDNamedCurveP521, true
case secp192r1():
return OIDNamedCurveP192, true
}
return nil, false
}
</code></pre>
<p>The final answer is therefore in the switch. Supported elliptic curves are:</p>
<ul>
<li><a href="https://golang.org/pkg/crypto/elliptic/#P224" rel="noreferrer">elliptic.P224</a></li>
<li><a href="https://golang.org/pkg/crypto/elliptic/#P256" rel="noreferrer">elliptic.P256</a></li>
<li><a href="https://golang.org/pkg/crypto/elliptic/#P384" rel="noreferrer">elliptic.P384</a></li>
<li><a href="https://golang.org/pkg/crypto/elliptic/#P521" rel="noreferrer">elliptic.P521</a></li>
<li>secp192r1</li>
</ul>
<p>You need to change your curve to <code>secp256r1</code>. The main difference is that <code>secp256k1</code> is a Koblitz curve, while <code>secp256r1</code> is not. Koblitz curves are known to be a few bits weaker than other curves.</p>
<blockquote>
<p>OpenSSL supports "secp256r1", it is just called "prime256v1". Check section 2.1.1.1 in RFC 5480, where the "secp192r1" curve is called "prime192v1" and the "secp256r1" curve is called "prime256v1".</p>
</blockquote>
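<p>For illustration, generating the CA key with that curve under its OpenSSL name would look roughly like this (a sketch that keeps the <code>-rand</code>/<code>-writerand</code> options from the question):</p>
<pre><code>openssl ecparam -name prime256v1 -genkey -noout -out ca-pvt.pem -rand random.bin -writerand random.bin
</code></pre>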
| Mikołaj Głodziak |
<p>I am trying to create a K8s cluster in Azure AKS and when cluster is ready I can see couple of resources are created within the <code>default</code> namespace. Example secret, configmap:</p>
<p><a href="https://i.stack.imgur.com/6cIag.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6cIag.png" alt="enter image description here" /></a></p>
<p>As a security recommendation, NO K8s resources should be created under the <code>default</code> namespace, so how can I avoid this? They are created by default during cluster creation.</p>
| user584018 | <p><strong>I have found the same question asked</strong> <a href="https://learn.microsoft.com/en-us/answers/questions/522874/how-to-avoid-resource-creation-in-default-namespac.html" rel="nofollow noreferrer"><strong>here</strong></a>:</p>
<p>User <a href="https://learn.microsoft.com/answers/users/6678978/srbose-msft.html" rel="nofollow noreferrer">srbose-msft</a> (Microsoft employee) explained the principle of operation very well:</p>
<blockquote>
<p>In Kubernetes, a <code>ServiceAccount controller</code> manages the <em>ServiceAccounts</em> inside namespaces, and ensures a <em>ServiceAccount</em> named "<strong>default</strong>" exists in every active namespace. [<a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#serviceaccount-controller" rel="nofollow noreferrer">Reference</a>]</p>
<p><em>TokenController</em> runs as part of <code>kube-controller-manager</code>. It acts asynchronously. It watches <em>ServiceAccount</em> creation and creates a corresponding <strong>ServiceAccount token Secret to allow API access</strong>. [<a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#token-controller" rel="nofollow noreferrer">Reference</a>] Thus, the <em>secret</em> for the <strong>default</strong> <em>ServiceAccount token</em> is also created.</p>
<p>Trusting the custom CA from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, you would do this with a golang TLS config by parsing the certificate chain and adding the parsed certificates to the <code>RootCAs</code> field in the <code>tls.Config</code> struct.</p>
<p>You can distribute the CA certificate as a <em>ConfigMap</em> that your pods have access to use. [<a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#trusting-tls-in-a-cluster" rel="nofollow noreferrer">Reference</a>] AKS implements this in all active namespaces through <em>ConfigMaps</em> named <code>kube-root-ca.crt</code> in these namespaces.</p>
<p>You shall also find a <em>Service</em> named <code>kubernetes</code> in the <strong>default</strong> namespace. It has a ServiceType of ClusterIP and <strong>exposes the API Server <em>Endpoint</em> also named <code>kubernetes</code> internally to the cluster in the default namespace</strong>.</p>
<p>All the resources mentioned above will be created by design at the time of cluster creation and their creation <strong>cannot be prevented</strong>. If you try to remove these resources manually, they will be recreated to ensure desired goal state by the <code>kube-controller-manager</code>.</p>
</blockquote>
<p>Additionally:</p>
<blockquote>
<p>The <a href="https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373" rel="nofollow noreferrer">Kubernetes clusters should not use the default namespace</a> Policy is still in <strong>Preview</strong>. Currently the schema does not explicitly allow for Kubernetes resources in the <strong>default</strong> namespace to be excluded during policy evaluation. However, at the time of writing, the schema allows for <code>labelSelector.matchExpressions[].operator</code> which can be set to <code>NotIn</code> with appropriate <code>labelSelector.matchExpressions[].values</code> for the Service <strong>default/kubernetes</strong> with label:</p>
<p><code>component=apiserver</code></p>
<p>The default <code>ServiceAccount</code>, the default <code>ServiceAccount token Secret</code> and the <code>RootCA ConfigMap</code> themselves are not created with any labels and hence cannot to added to this list. If this is impeding your use-case I would urge you to share your feedback at <a href="https://techcommunity.microsoft.com/t5/azure/ct-p/Azure" rel="nofollow noreferrer">https://techcommunity.microsoft.com/t5/azure/ct-p/Azure</a></p>
</blockquote>
| Mikołaj Głodziak |
<p>I have uploaded my image on ACR. When I try to deploy it using a <code>deployment.yaml</code> with <code>kubectl</code> commands, the <code>kubectl get pods</code> command shows <code>ErrImageNeverPull</code> in the pods.</p>
<p>Also, I am not using minikube. Is it necessary to use minikube for this?
I am a beginner in azure/kubernetes.</p>
<p>I've also used <code>imagePullPolicy: Never</code> in the yaml file. It's not working even without this and shows <code>ImagePullBackOff</code>.</p>
| Payal Jindal | <p>As <a href="https://stackoverflow.com/users/16077085/payal-jindal">Payal Jindal</a> mentioned in the comment:</p>
<blockquote>
<p>It worked fine. There was a problem with my docker installation.</p>
</blockquote>
<p>Problem is now resolved. The way forward is to set the image pull policy to <code>IfNotPresent</code> or <code>Always</code>.</p>
<pre><code>spec:
containers:
- imagePullPolicy: Always
</code></pre>
| Mikołaj Głodziak |
<p>I'm running a Kubernetes cluster on AWS and need to configure a replicated MongoDB 4.2 Database.
I'm using StatefulSets in order for other Pods (e.g., REST API NodeJS Pod) to easily connect to the mongo instances (example dsn: "mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/app").</p>
<p>mongo-configmap.yaml (provides a shell script to perform the replication initialization upon mongo container creation):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: mongo-init
data:
init.sh: |
#!/bin/bash
# wait for the readiness health check to pass
until ping -c 1 ${HOSTNAME}.mongo; do
echo "waiting for DNS (${HOSTNAME}.mongo)..."
sleep 2
done
until /usr/bin/mongo --eval 'printjson(db.serverStatus())'; do
echo "connecting to local mongo..."
sleep 2
done
echo "connected to local."
HOST=mongo-0.mongo:27017
until /usr/bin/mongo --host=${HOST} --eval 'printjson(db.serverStatus())'; do
echo "connecting to remote mongo..."
sleep 2
done
echo "connected to remote."
if [[ "${HOSTNAME}" != 'mongo-0' ]]; then
until /usr/bin/mongo --host=${HOST} --eval="printjson(rs.status())" \
| grep -v "no replset config has been received"; do
echo "waiting for replication set initialization"
sleep 2
done
echo "adding self to mongo-0"
/usr/bin/mongo --host=${HOST} --eval="printjson(rs.add('${HOSTNAME}.mongo'))"
fi
if [[ "${HOSTNAME}" == 'mongo-0' ]]; then
echo "initializing replica set"
/usr/bin/mongo --eval="printjson(rs.initiate(\
{'_id': 'rs0', 'members': [{'_id': 0, \
'host': 'mongo-0.mongo:27017'}]}))"
fi
echo "initialized"
while true; do
sleep 3600
done
</code></pre>
<p>mongo-service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
app: mongo
spec:
clusterIP: None
ports:
- port: 27017
selector:
app: mongo
</code></pre>
<p>mongo-statefulset.yaml (2 containers inside one Pod, 1 for the actual DB, the other for initialization of the replication):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
labels:
app: mongo
spec:
selector:
matchLabels:
app: mongo
serviceName: "mongo"
replicas: 3
template:
metadata:
labels:
app: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongodb
image: mongo:4.2
command:
- mongod
args:
- --replSet
- rs0
- "--bind_ip_all"
ports:
- containerPort: 27017
name: web
volumeMounts:
- name: database
mountPath: /data/db
livenessProbe:
exec:
command:
- /usr/bin/mongo
- --eval
- db.serverStatus()
initialDelaySeconds: 10
timeoutSeconds: 10
- name: init-mongo
image: mongo:4.2
command:
- bash
- /config/init.sh
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: "mongo-init"
volumeClaimTemplates:
- metadata:
name: database
annotations:
volume.beta.kubernetes.io/storage-class: mongodb-storage
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
</code></pre>
<p>After applying these configurations, the 3 mongo pods start running (mongo-0, mongo-1, mongo-2).
However, other pods can't connect to these mongo pods.
A further look into the mongo-0 pod (which should be the primary instance) reveals that the replication did not work.</p>
<blockquote>
<p>kubectl exec -it mongo-0 -- /bin/bash</p>
</blockquote>
<p>Then running 'mongo' to start the mongo shell, and entering 'rs.status()' into the mongo shell results in the following output:</p>
<pre><code>{
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}
</code></pre>
| Andi R. | <p>After all the pods are running, hit this command:</p>
<pre><code> kubectl exec -it mongo-0 -- /bin/bash
</code></pre>
<p>(here mongo-0 is the pod name)</p>
<p>now start mongo shell,</p>
<pre><code>mongo
</code></pre>
<p>Now check whether the replica set is initiated or not:</p>
<pre><code>rs.status()
</code></pre>
<p>If not, initiate it and make this member the primary by running these commands one by one:</p>
<pre><code>rs.initiate()
var cfg = rs.conf()
cfg.members[0].host="mongo-0.mongo:27017"
</code></pre>
<p>(Here mongo-0 is the pod name and mongo is the service name.)</p>
<p>Now reconfigure the primary node:</p>
<pre><code>rs.reconfig(cfg)
</code></pre>
<p>Add all the slaves to the primary node:</p>
<pre><code>rs.add("mongo-1.mongo:27017")
rs.add("mongo-2.mongo:27017")
</code></pre>
<p>(Here mongo-1 and mongo-2 are the pod names and mongo is the service name.)</p>
<p>Now check the status:</p>
<pre><code>rs.status()
</code></pre>
<p>Now exit from the primary shell and go to a secondary (slave) node:</p>
<pre><code>exit
exit
kubectl exec -it mongo-1 -- /bin/bash
mongo
rs.secondaryOk()
</code></pre>
<p>Check the status and exit:</p>
<pre><code>rs.status()
exit
exit
</code></pre>
<p>Now do the same for the other secondary (slave) nodes.</p>
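<p>As a final sanity check (a hedged example that reuses the pod and service names from the question), you can list each member's state through the headless service:</p>
<pre><code>kubectl exec -it mongo-0 -- mongo --host mongo-0.mongo:27017 --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })'
</code></pre>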
| Amit Rathee |
<p>I have the following deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
spec:
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-persistent-volume-claim
containers:
- name: postgres
image: prikshet/postgres
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
subPath: postgres
</code></pre>
<p>And when I include lines 23-26 and do <code>kubectl apply</code>, the pod gives an error and doesn't run, but when I remove lines 23-26 the pod runs. I want to create a volume mount with lines 23-26. The error when checking the logs of this pod is:</p>
<pre><code>postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory
</code></pre>
<p>postgres persistent volume claim is:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-persistent-volume-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>How to fix this?</p>
| zendevil.eth | <p>There are actually two possibilities for this error to occur:</p>
<ol>
<li><p>Your file <code>/var/lib/postgresql/data/postgresql.conf</code> does not actually exist. In this case, you need to create it before mounting it.</p>
</li>
<li><p>As <a href="https://stackoverflow.com/users/9250303/meaningqo">meaningqo</a> correctly mentioned in the comment, you are using the wrong <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer"><code>subPath</code></a>. Your file <code>/var/lib/postgresql/data/postgresql.conf</code> exists, but you are trying to mount the wrong file based on <code>subPath</code>. Look at the <a href="https://stackoverflow.com/questions/65399714/what-is-the-difference-between-subpath-and-mountpath-in-kubernetes">explanation</a> of the difference between <code>mountPath</code> and <code>subPath</code>:</p>
</li>
</ol>
<blockquote>
<p><code>mountPath</code> shows where the referenced volume should be mounted in the container. For instance, if you mount a volume to <code>mountPath: /a/b/c</code>, the volume will be available to the container under the directory <code>/a/b/c</code>.
Mounting a volume will make all of the volume available under <code>mountPath</code>. If you need to mount only part of the volume, such as a single file in a volume, you use <code>subPath</code> to specify the part that must be mounted. For instance, <code>mountPath: /a/b/c</code>, <code>subPath: d</code> will make whatever <code>d</code> is in the mounted volume under directory <code>/a/b/c</code>
Notice that when <code>subPath</code> is a folder, the content of the folder will be mounted to the <code>mountPath</code></p>
</blockquote>
| Mikołaj Głodziak |
<p>I have a cluster with RBAC in AKS, and it works just fine, but sometimes (it seems after my laptop goes to sleep) I just get this error and have to create context again:</p>
<p><code>kubectl error: You must be logged in to the server (Unauthorized)</code></p>
<p>It does not seem to happen all the time. Sometimes many sleep cycles (few days) passes, sometimes just few hours. It seem totally random.</p>
<p>Would appreciate any help on figuring out why this is happening.</p>
<p>My set up is like that (I don't know if it is important though):</p>
<p>I usually work on Windows Subsystem for Linux 2, but I have the same version of kubectl on windows itself and the config files are the same between the two (I linked kubectl config from linux).</p>
<p>I am pretty sure though I did not use windows kubectl last time it happened, only linux version</p>
| Ilya Chernomordik | <p>I had the same issue with WSL2, and the reason is a lack of time sync after the laptop sleeps (see <a href="https://github.com/microsoft/WSL/issues/4245" rel="nofollow noreferrer">https://github.com/microsoft/WSL/issues/4245</a>).</p>
<p>After running <code>sudo hwclock -s</code> I no longer get the error message and can run kubectl commands.</p>
| Eric Latour |
<p>I have read many links similar to my issue, but none of them were helping me to resolve the issue.</p>
<p><strong>Similar Links</strong>:</p>
<ol>
<li><a href="https://github.com/containerd/containerd/issues/7219" rel="noreferrer">Failed to exec into the container due to permission issue after executing 'systemctl daemon-reload'</a></li>
<li><a href="https://github.com/opencontainers/runc/issues/3551" rel="noreferrer">OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li>
<li><a href="https://stackoverflow.com/questions/73379718/ci-runtime-exec-failed-exec-failed-unable-to-start-container-process-open-de">CI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li>
<li><a href="https://github.com/moby/moby/issues/43969" rel="noreferrer">OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=277995" rel="noreferrer">Fail to execute docker exec</a></li>
<li><a href="https://github.com/docker/for-linux/issues/246" rel="noreferrer">OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "open /proc/self/fd: no such file or directory": unknown</a></li>
</ol>
<p><strong>Problem Description</strong>:</p>
<p>I have created a new Kubernetes cluster using <code>Kubespray</code>. When I wanted to execute some commands in one of containers I faced to the following error:</p>
<h6>Executed Command</h6>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it -n rook-ceph rook-ceph-tools-68d847b88d-7kw2v -- sh
</code></pre>
<h6>Error:</h6>
<blockquote>
<p>OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/1: operation not permitted: unknown
command terminated with exit code 126</p>
</blockquote>
<p>I have also logged in to the node, which runs the pod, and tried executing the container using the <code>docker exec</code> command, but the error did not change.</p>
<p><strong>Workarounds</strong>:</p>
<ul>
<li><p>As I have found, the error code (126) implies that the permissions are insufficient, but I haven't faced this kind of error (like executing <code>sh</code>) in Docker or Kubernetes.</p>
</li>
<li><p>I have also checked whether <code>SELinux</code> is enabled or not (as it has been said in the 3rd link).</p>
<pre class="lang-bash prettyprint-override"><code>apt install policycoreutils
sestatus
# Output
SELinux status: disabled
</code></pre>
</li>
<li><p>In the 5th link, it was said to check whether you have updated the kernel, and I didn't upgrade anything on the nodes.</p>
<pre class="lang-bash prettyprint-override"><code>id; stat /dev/pts/0
# output
uid=0(root) gid=0(root) groups=0(root)
File: /dev/pts/0
Size: 0 Blocks: 0 IO Block: 1024 character special file
Device: 18h/24d Inode: 3 Links: 1 Device type: 88,0
Access: (0600/crw-------) Uid: ( 0/ root) Gid: ( 5/ tty)
Access: 2022-08-21 12:01:25.409456443 +0000
Modify: 2022-08-21 12:01:25.409456443 +0000
Change: 2022-08-21 11:54:47.474457646 +0000
Birth: -
</code></pre>
</li>
<li><p>I also tried <code>/bin/sh</code> instead of <code>sh</code> or <code>/bin/bash</code>, but it did not work and the same error occurred.</p>
</li>
</ul>
<p>Can anyone help me to find the root cause of this problem and then solve it?</p>
| Mostafa Ghadimi | <p>This issue may be related to Docker. First, drain your node:</p>
<pre><code>kubectl drain <node-name>
</code></pre>
<p>Second, SSH into the node and restart the Docker service:</p>
<pre><code>systemctl restart docker.service
</code></pre>
<p>At the end, try to execute your command again.</p>
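<p>One small follow-up (not part of the original steps): a drained node stays unschedulable until you uncordon it, so remember to bring it back afterwards.</p>
<pre><code>kubectl uncordon <node-name>
</code></pre>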
| Mohammad Amin Taheri |
<p>I am trying to configure my ingress to expose one of my services for any request for a host ending with <code>.mywebsite.com</code></p>
<p>I tried <code>*.mywebsite.com</code> but it does not work.
Is there a way to configure the ingress to do so?</p>
<p>My sub-domains are dynamically handled by the service, and the DNS is configured with a wildcard record.</p>
| Michaël Carrette | <p>You can try the below manifest file.</p>
<p>Prerequisite: k8s version 1.19+</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-wildcard-host
spec:
rules:
- host: "*.mywebsite.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: <your_service_name_here>
port:
number: <port_number_here>
</code></pre>
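<p>To verify the wildcard rule, a request with an arbitrary subdomain should reach the service (the address below is a placeholder for your ingress controller's external IP):</p>
<pre><code>curl -H "Host: anything.mywebsite.com" http://<INGRESS_CONTROLLER_IP>/
</code></pre>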
| Pulak Kanti Bhowmick |
<p>I'm seeing liveness and readiness probes failing in the Kubernetes setup.</p>
<p>Below I'm attaching screenshots of the pod events, the resource limits of the pod, and the probe configurations.</p>
<p>Can anyone help me with this issue and explain why this can happen and when we see status code 503 in probes?</p>
<p>Thank you in advance!</p>
<p><strong>Below screen shot is from events section of pod</strong></p>
<p><a href="https://i.stack.imgur.com/5UfLI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5UfLI.png" alt="screenshot is from events of pod " /></a></p>
<p><strong>Configurations of the liveness and readiness probes</strong></p>
<p><a href="https://i.stack.imgur.com/xAxzF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xAxzF.png" alt="configurations for liveliness and readiness probe " /></a></p>
<p><strong>Resource limits of pod</strong></p>
<p><a href="https://i.stack.imgur.com/UYIiE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UYIiE.png" alt="resource limits set to pod " /></a></p>
<p><strong>FYI</strong>: I've tried changing initialDelaySeconds to 180, which didn't help. I also don't see any issue with service startup; it is not taking much time to start, as I could see in the pod's logs.</p>
| Nikhil Lingam | <p>Community wiki answer for better visibility. As <a href="https://stackoverflow.com/users/10753078/ni-kill12">ni_kill12</a> has mentioned in the comment, the issue is solved:</p>
<blockquote>
<p>I got the issue, what is happening is one of the component is going OUT_OF_STATE because of that readiness and liveliness probe is failing for me. I got to know about this by hitting the request of livelines probe. <a href="https://github.com/alexandreroman/spring-k8s-probes-demo/blob/master/README.md" rel="nofollow noreferrer">This link</a> helped me to understand probes.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I am new to Kubernetes. I have set up 3 Ubuntu 20.04.2 LTS VMs on Oracle Virtualbox Manager.</p>
<p>I have installed docker, kubelet, kubeadm, and kubectl in all 3 VMs according to the following documentation.<br />
<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</a></p>
<p>And I created cluster using the following link:
<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/</a></p>
<p>I used the following commands to set up flannel:</p>
<pre><code>$ wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
$ kubectl create -f kube-flannel.yml
</code></pre>
<p>Everything looks fine.</p>
<pre><code>root@master-node:~/k8s# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-node Ready control-plane,master 23h v1.20.5 192.168.108.10 <none> Ubuntu 20.04.2 LTS 5.4.0-70-generic docker://19.3.15
node-1 Ready <none> 10h v1.20.5 192.168.108.11 <none> Ubuntu 20.04.2 LTS 5.4.0-70-generic docker://19.3.15
node-2 Ready <none> 10h v1.20.5 192.168.108.12 <none> Ubuntu 20.04.2 LTS 5.4.0-70-generic docker://19.3.15
</code></pre>
<p>I then created an nginx deployment with 3 replicas.</p>
<pre><code>root@master-node:~/k8s# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dnsutils 1/1 Running 2 127m 10.244.2.8 node-2 <none> <none>
nginx-deploy-7848d4b86f-4nvg7 1/1 Running 0 9m8s 10.244.2.9 node-2 <none> <none>
nginx-deploy-7848d4b86f-prj7g 1/1 Running 0 9m8s 10.244.1.9 node-1 <none> <none>
nginx-deploy-7848d4b86f-r95hq 1/1 Running 0 9m8s 10.244.1.8 node-1 <none> <none>
</code></pre>
<p>The problem shows up only when I try to curl the nginx pods. They are not responsive.</p>
<pre><code>root@master-node:~/k8s# curl 10.244.2.9
^C
</code></pre>
<p>I then logged in to the pod and confirmed that nginx is up.</p>
<pre><code>root@master-node:~/k8s# kubectl exec -it nginx-deploy-7848d4b86f-4nvg7 -- /bin/bash
root@nginx-deploy-7848d4b86f-4nvg7:/# curl 127.0.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@nginx-deploy-7848d4b86f-4nvg7:/# exit
exit
</code></pre>
<p>Here is the result of kubectl describe pod on one of the pods:</p>
<pre><code>root@master-node:~/k8s# kubectl describe pod nginx-deploy-7848d4b86f-4nvg7
Name: nginx-deploy-7848d4b86f-4nvg7
Namespace: default
Priority: 0
Node: node-2/192.168.108.12
Start Time: Sun, 28 Mar 2021 04:49:15 +0000
Labels: app=nginx
pod-template-hash=7848d4b86f
Annotations: <none>
Status: Running
IP: 10.244.2.9
IPs:
IP: 10.244.2.9
Controlled By: ReplicaSet/nginx-deploy-7848d4b86f
Containers:
nginx:
Container ID: docker://f6322e65cb98e54cc220a786ffb7c967bbc07d80fe8d118a19891678109680d8
Image: nginx
Image ID: docker-pullable://nginx@sha256:b0ea179ab61c789ce759dbe491cc534e293428ad232d00df83ce44bf86261179
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 28 Mar 2021 04:49:19 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xhkzx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-xhkzx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xhkzx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/nginx-deploy-7848d4b86f-4nvg7 to node-2
Normal Pulling 25m kubelet Pulling image "nginx"
Normal Pulled 25m kubelet Successfully pulled image "nginx" in 1.888247052s
Normal Created 25m kubelet Created container nginx
Normal Started 25m kubelet Started container nginx
</code></pre>
<p>I tried to troubleshoot by using: <a href="https://www.praqma.com/stories/debugging-kubernetes-networking/" rel="nofollow noreferrer">Debugging Kubernetes Networking</a></p>
<pre><code>root@master-node:~/k8s# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:db:6f:21 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:90:88:7c brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:1d:21:66:20 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
link/ether 4a:df:fb:be:7b:0e brd ff:ff:ff:ff:ff:ff
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 02:48:db:46:53:60 brd ff:ff:ff:ff:ff:ff
7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether fa:29:13:98:2c:31 brd ff:ff:ff:ff:ff:ff
8: vethc2e0fa86@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
link/ether 7a:66:b0:97:db:81 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth3eb514e1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
link/ether 3e:3c:9d:20:5c:42 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: veth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 02:35:f0:fb:e3:b1 brd ff:ff:ff:ff:ff:ff link-netns test1
root@master-node:~/k8s# kubectl create -f nwtool-deployment.yaml
deployment.apps/nwtool-deploy created
root@master-node:~/k8s# kubectl get po
NAME READY STATUS RESTARTS AGE
nwtool-deploy-6d8c99644b-fq6gv 1/1 Running 0 14s
nwtool-deploy-6d8c99644b-fwc6d 1/1 Running 0 14s
root@master-node:~/k8s# ^C
root@master-node:~/k8s# kubectl exec -it nwtool-deploy-6d8c99644b-fq6gv -- ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default
link/ether 2e:02:b6:97:2f:10 brd ff:ff:ff:ff:ff:ff
root@master-node:~/k8s# kubectl exec -it nwtool-deploy-6d8c99644b-fwc6d -- ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default
link/ether 82:21:fa:aa:34:27 brd ff:ff:ff:ff:ff:ff
root@master-node:~/k8s# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:db:6f:21 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:90:88:7c brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:1d:21:66:20 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
link/ether 4a:df:fb:be:7b:0e brd ff:ff:ff:ff:ff:ff
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 02:48:db:46:53:60 brd ff:ff:ff:ff:ff:ff
7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether fa:29:13:98:2c:31 brd ff:ff:ff:ff:ff:ff
8: vethc2e0fa86@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
link/ether 7a:66:b0:97:db:81 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth3eb514e1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
link/ether 3e:3c:9d:20:5c:42 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: veth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 02:35:f0:fb:e3:b1 brd ff:ff:ff:ff:ff:ff link-netns test1
root@master-node:~/k8s#
</code></pre>
<p>It looks like no veth pairs were created for the new pod on the master node. Any idea how to resolve this? Any help will be greatly appreciated. Thank you!</p>
| learning | <p>I have found out the issue. Thanks to: <a href="https://medium.com/@anilkreddyr/kubernetes-with-flannel-understanding-the-networking-part-1-7e1fe51820e4" rel="nofollow noreferrer">Kubernetes with Flannel — Understanding the Networking — Part 1 (Setup the demo)</a> I have copied the excerpts that helped to resolve my issue below:</p>
<p>The VMs will have 2 interfaces created, and when running Flannel you need to specify the interface name properly. Without that, the pods may come up and get an IP address, but they can't talk to each other.</p>
<p>You need to specify the interface name enp0s8 in the Flannel manifest file.</p>
<pre><code>vagrant@master:~$ grep -A8 containers kube-flannel.yml
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.10.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=enp0s8 ####Add the iface name here.
</code></pre>
<p>If you happen to have different interface names to match, you can match them with a regex pattern. Say the worker nodes could have either enp0s8 or enp0s9 configured; then the Flannel argument would be <code>--iface-regex=[enp0s8|enp0s9]</code> (see the sketch below).</p>
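<p>For reference, a sketch of how that regex variant might look in the Flannel container args (the interface names are placeholders, and I have used a regex group instead of the character class quoted above):</p>
<pre><code>args:
- --ip-masq
- --kube-subnet-mgr
- --iface-regex=(enp0s8|enp0s9)
</code></pre>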
| learning |
<p>So we are running istio 1.3.5</p>
<p>We have a docker container running a dotnet core application that is trying to consume messages from azure event hub.</p>
<p>With the istio sidecar turned on, we get a 404, with it turned off things work just fine.</p>
<p>Also interesting is you can send messages to the event hub just fine with the sidecar enabled.</p>
<p>here's the istio related yaml that's "in play" in the namespace:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
name: default
spec:
egress:
- hosts:
- "./*"
- "elasticsearch/*"
</code></pre>
<pre><code>apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "default"
spec:
host: "*.MYNAMESPACE_NAME.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
</code></pre>
<pre><code>apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "default"
spec:
peers:
- mtls: {}
</code></pre>
| Todd Spurlock | <p>We have figured out the issue.<br />
When the Istio sidecar and the main microservice start up, there is a period of time during which the sidecar is still initializing itself and is not yet "on".<br />
During this period all network traffic from the main microservice is blocked.<br />
<strong>To fix this you need to make a call to an Istio sidecar endpoint before your main program starts.</strong></p>
<pre><code> #!/bin/bash
until curl --head localhost:15000
do
echo "Waiting for Sidecar"
sleep 3
done
echo "Sidecar available"
</code></pre>
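<p>As an illustration only (not part of the original fix), one way to wire this in is to run the wait loop ahead of the application's own entrypoint in the container command; the image name and the final <code>exec</code> line below are placeholders:</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest   # placeholder image
  command: ["/bin/sh", "-c"]
  args:
    - |
      # block until the Envoy sidecar admin port answers
      until curl --head localhost:15000; do
        echo "Waiting for Sidecar"; sleep 3
      done
      echo "Sidecar available"
      exec dotnet MyApp.dll   # placeholder for the real entrypoint
</code></pre>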
| Todd Spurlock |
<p>I am new to Kubernetes and I'm experimenting with different things. This is my deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hellok8s
spec:
replicas: 2
selector:
matchLabels:
app: hellok8s
template:
metadata:
labels:
app: hellok8s
spec:
containers:
- image: brianstorti/hellok8s:v3
name: hellok8s-container
</code></pre>
<p>and below is my service file.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hellok8s-svc
spec:
type: NodePort
selector:
app: hellok8s
ports:
- port: 4567
nodePort: 30001
</code></pre>
<p>I'm using the exec command and from inside the container, I'm trying to access the service using the service name. When I access this using the cluster IP, it works fine but when I try to access it using the service name it doesn't work. what might be the problem?</p>
| nimramubashir | <p>Posted as a community wiki answer for better visibility.
As <a href="https://stackoverflow.com/users/14802233/nimramubashir">nimramubashir</a> mentioned in the comment:</p>
<blockquote>
<p>I have debugged deeper and there seems to be some problem with the internal pod-pod communication. I'm creating a cluster through kubeadm and there seems to be some problem with it that Is causing this problem. I'm asking a new question for that. I have tried deploying it on the cloud and this is working fine.</p>
</blockquote>
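<p>As a quick way to tell whether this is a DNS problem or a pod-to-pod networking problem, you can run a lookup of the service name from a throwaway pod, for example (a sketch; the busybox image is the one commonly used in the Kubernetes DNS-debugging docs, and the service name is taken from the question):</p>
<pre><code># should resolve the service name to its ClusterIP
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup hellok8s-svc
# sanity check that cluster DNS itself works
kubectl run dns-test2 --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
</code></pre>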
| Mikołaj Głodziak |
<p>I have a microservice for sending email, implemented as a Cloud Function in one GCP project. In a different GCP project I have an e-commerce application deployed with Kubernetes. I want to send mails after certain activity in the e-commerce app via my microservice, and I use Pub/Sub for the communication. I have a problem with publishing messages from within Celery tasks. When I publish a message using a management command everything is OK, but when I trigger the Celery task from a management command there is a problem.
<a href="https://i.stack.imgur.com/VRhW5.png" rel="nofollow noreferrer">code image</a>
This causes an error like in the picture below (Process "ForkPoolWorker-10" pid:276 exited with "signal 11 (SIGSEGV)"). The weird thing is that the message is still being sent, and the recipient receives it.
<a href="https://i.stack.imgur.com/HhDra.png" rel="nofollow noreferrer">error image</a></p>
<p>With other tasks there are no such problems. If I comment out the line
"future = publisher.publish(topic_path, json_data.encode("utf-8"))"
there is no error. I know signal 11 (SIGSEGV) is connected to memory, but I have already made my Kubernetes machines stronger, so I don't think that is the issue.</p>
<p><a href="https://i.stack.imgur.com/X9v4o.png" rel="nofollow noreferrer">Logs from GCP</a>
It is also weird that ForkPoolWorker-7 succeeded with this task, while 'ForkPoolWorker-3' pid:227 exited with 'signal 11'.</p>
| janek_axpo | <p>OK, the problem was that the Pub/Sub PublisherClient creates multiple threads, and Celery's fork-based workers don't work correctly with those threads.</p>
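<p>For anyone hitting the same thing, a hedged workaround (not part of the original answer) is to avoid the fork-based worker pool for tasks that use the gRPC-based publisher, or to create the <code>PublisherClient</code> inside the task rather than at module import time. For example ("proj" is a placeholder app name):</p>
<pre><code># run the worker with a single-process pool (or --pool=threads) so no forking happens after gRPC threads start
celery -A proj worker --pool=solo --loglevel=INFO
</code></pre>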
| janek_axpo |
<p>Requirement: Want to deploy Minio and another backend service using an ingress with HTTPS (Not for production purposes)</p>
<p>I have been trying to create an ingress to access two services externally from the Kubernetes cluster in GKE. These are the attempts I tried.</p>
<p>Attempt One</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: lightning-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /storage
backend:
serviceName: minio
servicePort: 9000
- path: /portal
backend:
serviceName: oscar
servicePort: 8080
</code></pre>
<p>Attempt Two</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: oscar
annotations:
# nginx.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: nginx
spec:
rules:
- http:
paths:
- backend:
serviceName: oscar
servicePort: 8080
- host: storage.lightningfaas.tech
http:
paths:
- backend:
serviceName: minio
servicePort: 9000
</code></pre>
<p>Attempt Three</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: lightning-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- backend:
serviceName: minio
servicePort: 9000
path: /minio(/|$)(.*)
- backend:
serviceName: oscar
servicePort: 8080
path: /portal(/|$)(.*)
</code></pre>
<p>Attempt Four</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: minio-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: minio.lightningfaas.tech
http:
paths:
- backend:
serviceName: minio
servicePort: 9000
- host: portal.lightningfaas.tech
http:
paths:
- backend:
serviceName: oscar
servicePort: 8080
</code></pre>
<p>However, none of the above attempts suits my requirement. Each of them gives either a 404 or a 503. But I can confirm that creating an individual ingress for each service works fine, as below.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: oscar
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- http:
paths:
- backend:
serviceName: oscar
servicePort: 8080
</code></pre>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: minio-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- http:
paths:
- backend:
serviceName: minio
servicePort: 9000
</code></pre>
<p>Changing domain name servers also takes a long time to propagate, so testing with hosts is very annoying since I have to wait a long time to test my code. Is there anything more that I can try?</p>
<p>Something like below would be ideal:</p>
<p><a href="https://34.452.234.45:9000" rel="nofollow noreferrer">https://34.452.234.45:9000</a> > will access minio</p>
<p><a href="https://34.452.234.45:8080" rel="nofollow noreferrer">https://34.452.234.45:8080</a> > will access oscar</p>
<p>Your suggestions and opinions will be really helpful for me.</p>
<p>Minio helm chart: <a href="https://github.com/minio/charts" rel="nofollow noreferrer">https://github.com/minio/charts</a></p>
<p>Minio deployment</p>
<pre><code>helm install --namespace oscar minio minio/minio --set accessKey=minio --set secretKey=password --set persistence.existingClaim=lightnig --set resources.requests.memory=256Mi
</code></pre>
<p>Oscar helm chart: <a href="https://github.com/grycap/helm-charts/tree/master/oscar" rel="nofollow noreferrer">https://github.com/grycap/helm-charts/tree/master/oscar</a></p>
<p>Oscar deployment</p>
<pre><code>helm install --namespace=oscar oscar oscar --set authPass=password --set service.type=ClusterIP --set createIngress=false --set volume.storageClassName=nfs --set minIO.endpoint=http://104.197.173.174 --set minIO.TLSVerify=false --set minIO.accessKey=minio --set minIO.secretKey=password --set serverlessBackend=openfaas
</code></pre>
| Rajitha Warusavitarana | <p>According to the Kubernetes docs, a simple fan-out example should solve your problem.
A simple fan-out example is given below, where the same host has two different paths for two different services.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: simple-fanout-example
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
pathType: Prefix
backend:
service:
name: service1
port:
number: 4200
- path: /bar
pathType: Prefix
backend:
service:
name: service2
port:
number: 8080
</code></pre>
<p>So your manifest file might look like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: lightning-ingress
namespace: default
spec:
rules:
- host: [your host name here]
http:
paths:
- path: /storage
pathType: Prefix
backend:
service:
name: minio
port:
number: 9000
- path: /portal
pathType: Prefix
backend:
service:
name: oscar
port:
number: 8080
</code></pre>
<p>Ref: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes doc</a></p>
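<p>As a side note on the slow DNS turnaround mentioned in the question: you can test host-based rules immediately by overriding name resolution on the client, for example (the IP is a placeholder for the ingress controller's external IP):</p>
<pre><code># send the Host header by hand
curl -H "Host: www.example.ch" http://<ingress-external-ip>/storage
# or let curl pin the hostname to the IP
curl --resolve www.example.ch:80:<ingress-external-ip> http://www.example.ch/portal
</code></pre>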
| Pulak Kanti Bhowmick |
<p>I want to test the queuing and preemption features of Kubernetes (v1.21.0). I run Kubernetes using Minikube with a pod limit of 10. I have a script that creates two priority classes: 'low-priority' and 'high-priority'.</p>
<p><a href="https://i.stack.imgur.com/P5dpe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P5dpe.png" alt="Priority Classes" /></a></p>
<p>I then have a script that creates 10 low priority jobs, waits 20 seconds, and then creates a high priority one. In this scenario, one of the low priority ones is correctly terminated so that the high priority job can be executed.</p>
<p><a href="https://i.stack.imgur.com/dGfBF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dGfBF.png" alt="Preemption Working" /></a></p>
<p>I then have another script that does the same thing, but in a namespace with resource quotas:</p>
<pre><code>kubectl create namespace limited-cpu
cat <<EOF | kubectl apply -n limited-cpu -f -
apiVersion: v1
kind: ResourceQuota
metadata:
name: limit-max-cpu
spec:
hard:
requests.cpu: "1000m"
EOF
</code></pre>
<p>In this scenario, the low priority jobs request 333m of cpu and the high priority one 500m. The expected behavior is for Kubernetes to run three low priority at the same time, then to stop two of them when the high priority one is submitted.</p>
<p>But that does not happen. Worse: even when the low priority jobs end, other low priority jobs are scheduled before the high priority one.</p>
<p><a href="https://i.stack.imgur.com/uQHY1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uQHY1.png" alt="Preemption and queuing not working with resource quotas" /></a></p>
<p>Here are the two jobs definitions:</p>
<pre><code>for i in $(seq -w 1 10) ;
do
cat <<EOF | kubectl apply -n limited-cpu -f -
apiVersion: batch/v1
kind: Job
metadata:
name: low-priority-$i
spec:
template:
spec:
containers:
- name: low-priority-$i
image: busybox
command: ["sleep", "60s"]
resources:
requests:
memory: "64Mi"
cpu: "333m"
restartPolicy: Never
priorityClassName: "low-priority"
EOF
done
sleep 20s
cat <<EOF | kubectl apply -n limited-cpu -f -
apiVersion: batch/v1
kind: Job
metadata:
name: high-priority-1
spec:
template:
spec:
containers:
- name: high-priority-1
image: busybox
command: ["sleep", "30s"]
resources:
requests:
memory: "128Mi"
cpu: "500m"
restartPolicy: Never
priorityClassName: "high-priority"
EOF
</code></pre>
<p>Even the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption" rel="nofollow noreferrer">Kubernetes Documentation</a> agrees that it should be working.</p>
<p>EDIT:
Here are the Priority Classes definitions:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
description: Low-priority Priority Class
kind: PriorityClass
metadata:
name: low-priority
value: 1000000
EOF
cat <<EOF | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
description: High-priority Priority Class
kind: PriorityClass
metadata:
name: high-priority
value: 99999999
EOF
</code></pre>
| Gaëtan | <p>Fair question, fair assumption.</p>
<p>I've run into a similar situation and was also disappointed to see that k8s does not evict low-priority pods in favor of high-priority ones.</p>
<p>A couple of consultations with k8s experts revealed that indeed, when constrained by namespace quotas, k8s is not expected to be as aggressive as in a pure "pods x nodes" setup.</p>
<p>The official documentation you point to also seems to describe everything only in the context of "pods x nodes".</p>
| Sergey Bolshakov |
<p>I'm trying to deploy a MongoDB replica set on a microk8s cluster. I have it installed on a VM running Ubuntu 20.04. After the deployment, the mongo pods do not run but crash. I've enabled the microk8s storage, dns and rbac add-ons, but the same problem still persists. Can anyone help me find the reason behind it? Below is my manifest file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongodb-service
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
spec:
selector:
matchLabels:
role: mongo
environment: test
serviceName: mongodb-service
replicas: 3
template:
metadata:
labels:
role: mongo
environment: test
replicaset: MainRepSet
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: replicaset
operator: In
values:
- MainRepSet
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
volumes:
- name: secrets-volume
secret:
secretName: shared-bootstrap-data
defaultMode: 256
containers:
- name: mongod-container
#image: pkdone/mongo-ent:3.4
image: mongo
command:
- "numactl"
- "--interleave=all"
- "mongod"
- "--wiredTigerCacheSizeGB"
- "0.1"
- "--bind_ip"
- "0.0.0.0"
- "--replSet"
- "MainRepSet"
- "--auth"
- "--clusterAuthMode"
- "keyFile"
- "--keyFile"
- "/etc/secrets-volume/internal-auth-mongodb-keyfile"
- "--setParameter"
- "authenticationMechanisms=SCRAM-SHA-1"
resources:
requests:
cpu: 0.2
memory: 200Mi
ports:
- containerPort: 27017
volumeMounts:
- name: secrets-volume
readOnly: true
mountPath: /etc/secrets-volume
- name: mongodb-persistent-storage-claim
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: mongodb-persistent-storage-claim
spec:
storageClassName: microk8s-hostpath
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
</code></pre>
<p>Also, here are the pv, pvc and sc outputs:</p>
<pre><code>yyy@xxx:$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-persistent-storage-claim-mongo-0 Bound pvc-1b3de8f7-e416-4a1a-9c44-44a0422e0413 5Gi RWO microk8s-hostpath 13m
yyy@xxx:$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-5b75ddf6-abbd-4ff3-a135-0312df1e6703 20Gi RWX Delete Bound container-registry/registry-claim microk8s-hostpath 38m
pvc-1b3de8f7-e416-4a1a-9c44-44a0422e0413 5Gi RWO Delete Bound default/mongodb-persistent-storage-claim-mongo-0 microk8s-hostpath 13m
yyy@xxx:$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
microk8s-hostpath (default) microk8s.io/hostpath Delete Immediate false 108m
</code></pre>
<pre><code>yyy@xxx:$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
metrics-server-8bbfb4bdb-xvwcw 1/1 Running 1 148m
dashboard-metrics-scraper-78d7698477-4qdhj 1/1 Running 0 146m
kubernetes-dashboard-85fd7f45cb-6t7xr 1/1 Running 0 146m
hostpath-provisioner-5c65fbdb4f-ff7cl 1/1 Running 0 113m
coredns-7f9c69c78c-dr5kt 1/1 Running 0 65m
calico-kube-controllers-f7868dd95-wtf8j 1/1 Running 0 150m
calico-node-knzc2 1/1 Running 0 150m
</code></pre>
<p>I have installed the cluster using this command:</p>
<pre><code>sudo snap install microk8s --classic --channel=1.21
</code></pre>
<p>Output of mongodb deployment:</p>
<pre><code>yyy@xxx:$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mongo-0 0/1 CrashLoopBackOff 5 4m18s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 109m
service/mongodb-service ClusterIP None <none> 27017/TCP 4m19s
NAME READY AGE
statefulset.apps/mongo 0/3 4m19s
</code></pre>
<p>Pod logs:</p>
<pre><code>yyy@xxx:$ kubectl logs pod/mongo-0
{"t":{"$date":"2021-09-07T16:21:13.191Z"},"s":"F", "c":"CONTROL", "id":20574, "ctx":"-","msg":"Error during global initialization","attr":{"error":{"code":2,"codeName":"BadValue","errmsg":"storage.wiredTiger.engineConfig.cacheSizeGB must be greater than or equal to 0.25"}}}
</code></pre>
<pre><code>yyy@xxx:$ kubectl describe pod/mongo-0
Name: mongo-0
Namespace: default
Priority: 0
Node: citest1/192.168.9.105
Start Time: Tue, 07 Sep 2021 16:17:38 +0000
Labels: controller-revision-hash=mongo-66bd776569
environment=test
replicaset=MainRepSet
role=mongo
statefulset.kubernetes.io/pod-name=mongo-0
Annotations: cni.projectcalico.org/podIP: 10.1.150.136/32
cni.projectcalico.org/podIPs: 10.1.150.136/32
Status: Running
IP: 10.1.150.136
IPs:
IP: 10.1.150.136
Controlled By: StatefulSet/mongo
Containers:
mongod-container:
Container ID: containerd://458e21fac3e87dcf304a9701da0eb827b2646efe94cabce7f283cd49f740c15d
Image: mongo
Image ID: docker.io/library/mongo@sha256:58ea1bc09f269a9b85b7e1fae83b7505952aaa521afaaca4131f558955743842
Port: 27017/TCP
Host Port: 0/TCP
Command:
numactl
--interleave=all
mongod
--wiredTigerCacheSizeGB
0.1
--bind_ip
0.0.0.0
--replSet
MainRepSet
--auth
--clusterAuthMode
keyFile
--keyFile
/etc/secrets-volume/internal-auth-mongodb-keyfile
--setParameter
authenticationMechanisms=SCRAM-SHA-1
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 07 Sep 2021 16:24:03 +0000
Finished: Tue, 07 Sep 2021 16:24:03 +0000
Ready: False
Restart Count: 6
Requests:
cpu: 200m
memory: 200Mi
Environment: <none>
Mounts:
/data/db from mongodb-persistent-storage-claim (rw)
/etc/secrets-volume from secrets-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7nf8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mongodb-persistent-storage-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mongodb-persistent-storage-claim-mongo-0
ReadOnly: false
secrets-volume:
Type: Secret (a volume populated by a Secret)
SecretName: shared-bootstrap-data
Optional: false
kube-api-access-b7nf8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m53s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 7m52s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 7m50s default-scheduler Successfully assigned default/mongo-0 to citest1
Normal Pulled 7m25s kubelet Successfully pulled image "mongo" in 25.215669443s
Normal Pulled 7m21s kubelet Successfully pulled image "mongo" in 1.192994197s
Normal Pulled 7m6s kubelet Successfully pulled image "mongo" in 1.203239709s
Normal Pulled 6m38s kubelet Successfully pulled image "mongo" in 1.213451175s
Normal Created 6m38s (x4 over 7m23s) kubelet Created container mongod-container
Normal Started 6m37s (x4 over 7m23s) kubelet Started container mongod-container
Normal Pulling 5m47s (x5 over 7m50s) kubelet Pulling image "mongo"
Warning BackOff 2m49s (x23 over 7m20s) kubelet Back-off restarting failed container
</code></pre>
| van | <p>The logs you provided show that you have an incorrectly set parameter <code>wiredTigerCacheSizeGB</code>. In your case it is 0.1, and according to the message</p>
<pre><code>"code":2,"codeName":"BadValue","errmsg":"storage.wiredTiger.engineConfig.cacheSizeGB must be greater than or equal to 0.25"
</code></pre>
<p>it should be at least 0.25.</p>
<p>In the section <code>containers</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: mongod-container
#image: pkdone/mongo-ent:3.4
image: mongo
command:
- "numactl"
- "--interleave=all"
- "mongod"
- "--wiredTigerCacheSizeGB"
- "0.1"
- "--bind_ip"
- "0.0.0.0"
- "--replSet"
- "MainRepSet"
- "--auth"
- "--clusterAuthMode"
- "keyFile"
- "--keyFile"
- "/etc/secrets-volume/internal-auth-mongodb-keyfile"
- "--setParameter"
- "authenticationMechanisms=SCRAM-SHA-1"
</code></pre>
<p>you should change in this place</p>
<pre><code>- "--wiredTigerCacheSizeGB"
- "0.1"
</code></pre>
<p>the value <code>"0.1"</code> to any other greather or equal <code>"0.25"</code>.</p>
<hr />
<p>Additionally I have seen another error:</p>
<pre class="lang-yaml prettyprint-override"><code>1 pod has unbound immediate PersistentVolumeClaims
</code></pre>
<p>It should be related to what I wrote earlier. However, you may find alternative ways to solve it <a href="https://stackoverflow.com/questions/52668938/pod-has-unbound-persistentvolumeclaims">here</a>, <a href="https://stackoverflow.com/questions/59506631/k8s-pod-has-unbound-immediate-persistentvolumeclaims-mongodb">here</a> and <a href="https://github.com/helm/charts/issues/12521" rel="nofollow noreferrer">here</a>.</p>
| Mikołaj Głodziak |
<p>I need to make use of PVC to specify the specs of the PV and I also need to make sure it uses a custom local storage path in the PV.</p>
<p><em>I am unable to figure out how to mention the hostpath in a PVC</em>?</p>
<p>This is the PVC config:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongo-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>And this is the mongodb deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
volumes:
- name: mongo-volume
persistentVolumeClaim:
claimName: mongo-pvc
containers:
- name: mongo
image: mongo
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-volume
mountPath: /data/db
</code></pre>
<p><strong>How</strong> and <strong>where</strong> do I mention the <strong><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">hostPath</a></strong> to be mounted in here?</p>
| Karan Kumar | <p>The docs say that you set <code>hostPath</code> when creating a PV (the step before creating the PVC).</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
<p>After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.</p>
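<p>A sketch of the matching claim, adapted from the linked task to the names in your question (the important part is that <code>storageClassName</code> matches the PV, here <code>manual</code>):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: manual   # must match the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>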
<p>Please see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</a></p>
| Roar S. |
<p>I deployed a cluster (on premise) as proof of concept, using this command:</p>
<p><code>sudo kubeadm init --upload-certs --pod-network-cidr=x.x.x.x/16 --control-plane-endpoint=x.x.x.x.nip.io</code></p>
<p>Now, i need to change the endpoint from <code>x.x.x.x.nip.io</code> to <code>somename.example.com</code>. How can i do this?</p>
<hr />
<p>Kubeadm version: <code>&version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:36:57Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}</code></p>
| v1d3rm3 | <p>Posting an answer as a community wiki out of comments, feel free to edit and expand.</p>
<hr />
<p>Based on the documentation and <a href="https://stackoverflow.com/questions/65505137/how-to-convert-a-kubernetes-non-ha-control-plane-into-an-ha-control-plane/65565377#65565377">very good answer</a> (which is about switching from simple to high availability cluster and it has steps about adding <code>--control-plane-endpoint</code>), there's no easy/straight-forward solution to make it.</p>
<p>Considering risks and difficulty it's easier to create another cluster with a correct setup and migrate all workflows there.</p>
| moonkotte |
<p>I try to run my web application with two backend containers.</p>
<ul>
<li>/ should be routed to the frontend container</li>
<li>everything starting with /backend/ should go to the backend container.</li>
</ul>
<p>So far, so good, but now the CSS & JS files from the backend are not loaded, because the files are referenced in the HTML like "/bundles/css/style.css", and the ingress controller routes these requests to the frontend container instead of to the backend.</p>
<p>How can I fix this issue?</p>
<ul>
<li>Can I fix that with a smart Ingress rule?</li>
<li>Do I need to update the app root of the backend container?</li>
</ul>
<p>Here my Ingress resource</p>
<pre><code>apiVersion: networking.k8s.io/v1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
name: example
namespace: example
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
tls:
- hosts:
- www.example.ch
secretName: tls-example.ch
rules:
- host: www.example.ch
http:
paths:
- path: /backend(/|$)(.*)
pathType: Prefix
backend:
service:
name: example-backend-svc
port:
number: 8081
- path: /
pathType: Prefix
backend:
service:
name: example-frontend-svc
port:
number: 8080
</code></pre>
| jonny172 | <p>You can add another path rule if all those files are located under the /bundles/* path.
An example manifest file is given below.</p>
<pre><code>apiVersion: networking.k8s.io/v1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
name: example
namespace: example
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
tls:
- hosts:
- www.example.ch
secretName: tls-example.ch
rules:
- host: www.example.ch
http:
paths:
- path: /backend(/|$)(.*)
pathType: Prefix
backend:
service:
name: example-backend-svc
port:
number: 8081
- path: /bundles
pathType: Prefix
backend:
service:
name: example-backend-svc
port:
number: 8081
- path: /
pathType: Prefix
backend:
service:
name: example-frontend-svc
port:
number: 8080
</code></pre>
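<p>One thing to double-check (an aside, not verified against your setup): the <code>rewrite-target: /$2</code> annotation applies to every path in an Ingress, so a plain <code>/bundles</code> rule inside that same Ingress may end up rewritten to <code>/</code>. If that happens, an alternative is to keep the <code>/bundles</code> rule in a separate Ingress for the same host without the rewrite annotation, for example:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-bundles
  namespace: example
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.example.ch
    http:
      paths:
      - path: /bundles
        pathType: Prefix
        backend:
          service:
            name: example-backend-svc
            port:
              number: 8081
</code></pre>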
| Pulak Kanti Bhowmick |
<p>I am working on a way to manipulate the ConfigMap for kubernetes with Mike Farah's yq.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: game-config
namespace: default
data:
game.properties: |
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
ui.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
</code></pre>
<p>I want to update the game.properties value - lives to 999.</p>
<p>However when I try below commands i get the error respectively.</p>
<pre><code>$ yq e '.data.[game.properties]="enemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30 \n"' test-configmap.yaml
Error: Parsing expression: Lexer error: could not match text starting at 1:8 failing at 1:9.
unmatched text: "g"
</code></pre>
<p>I think the problem is in accessing the data.</p>
<pre><code>$ yq e ".data[0]" test-configmap.yaml
null
$ yq e ".data.[0]" test-configmap.yaml
null
$ yq e ".data.[game.properties]" test-configmap.yaml
Error: Parsing expression: Lexer error: could not match text starting at 1:8 failing at 1:9.
unmatched text: "g"
</code></pre>
<p>But when I try below I get the values of the data:</p>
<pre><code>yq e ".data.[]" test-configmap.yaml
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
</code></pre>
<p>It is strange that it doesn't let me access the data keys by name, i.e. game.properties and ui.properties.</p>
| Chinmaya Biswal | <p>Looks like I found out how to do it.
You have to put the key in double quotes to access data fields whose names contain dots.</p>
<p>Adding my command for reference.</p>
<pre><code>yq e '.data."game.properties"="enemies=aliens\nlives=999\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase
=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30 \n"' test-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:52:05Z
name: game-config
namespace: default
resourceVersion: "516"
uid: b4952dc3-d670-11e5-8cd0-68f728db1985
data:
game.properties: |-
enemies=aliens\nlives=999\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.passphrase=UUDDLRLRBABAS\nsecret.code.allowed=true\nsecret.code.lives=30 \n
ui.properties: "color.good=purple\ncolor.bad=yellow\nallow.textmode=true\nhow.nice.to.look=fairlyNice \n"
</code></pre>
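<p>If you only want to change the <code>lives</code> value and keep the rest of the block (and its real newlines) untouched, an alternative sketch using the <code>sub()</code> string operator might look like this (assuming Mike Farah's yq v4):</p>
<pre><code># replace only the lives=... entry inside the game.properties string
yq e '.data."game.properties" |= sub("lives=3", "lives=999")' test-configmap.yaml
</code></pre>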
| Chinmaya Biswal |
<p>I found that creating a yaml object description using <code>--dry-run=client</code> and providing <code>--command</code> only works when the provided arguments are in a very specific order.</p>
<p>This works:</p>
<pre><code>k run nginx --image=nginx --restart=Never --dry-run=client -o yaml --command -- env > nginx.yaml
</code></pre>
<p>This does NOT:</p>
<pre><code>k run nginx --image=nginx --restart=Never --command -- env --dry-run=client -o yaml > nginx.yaml
</code></pre>
<p>I feel a bit confused because the version that does not work looks a lot more intuitive to me than the one that does work. Ideally both should work, in my opinion. Is this intended behavior? I can't find any documentation about it.</p>
| Maximilian Jesch | <blockquote>
<p>Ideally both should work in my opinion.</p>
</blockquote>
<p>Unfortunately, the commands you presented are not the same. They will never work the same either. This is correct behaviour. <a href="https://unix.stackexchange.com/questions/11376/what-does-double-dash-mean">Double dash</a> (<code>--</code>) is of special importance here:</p>
<blockquote>
<p>a double dash (<code>--</code>) is used in most Bash built-in commands and many other commands to signify the end of command options, after which only positional arguments are accepted.</p>
</blockquote>
<p>So you can't freely swap "parameters" places. Only these options can be freely set</p>
<pre class="lang-yaml prettyprint-override"><code>--image=nginx --restart=Never --dry-run=client -o yaml --command
</code></pre>
<p>Then you have <code>-- env</code> (double dash, space, and another command). After <code>--</code> (double dash and space) only positional arguments are accepted.</p>
<p>Additionally, <code>></code> is shell meta-character to set <a href="https://www.gnu.org/software/bash/manual/html_node/Redirections.html" rel="nofollow noreferrer">redirection</a>.</p>
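<p>To illustrate with the example from the question: in the non-working variant, <code>--dry-run=client -o yaml</code> comes after the <code>--</code>, so those flags are handed to <code>env</code> inside the pod instead of being parsed by kubectl, and a pod is actually created:</p>
<pre><code># everything after "--" is the container command and its arguments
kubectl run nginx --image=nginx --restart=Never --command -- env --dry-run=client -o yaml
</code></pre>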
| Mikołaj Głodziak |
<p>From "Extending kubectl with plugins":</p>
<blockquote>
<p>It is currently not possible to create plugins that overwrite existing
<code>kubectl</code> commands. [...] Due to this limitation, it is also not
possible to use plugins to add new subcommands to existing <code>kubectl</code>
commands. For example, adding a subcommand <code>kubectl create foo</code> by
naming your plugin <code>kubectl-create-foo</code> will cause that plugin to be
ignored.</p>
<p>-- <a href="https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/#limitations" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/#limitations</a></p>
</blockquote>
<p>Is there another way to extend <code>kubectl create</code>?</p>
| ciis0 | <p>It does not look like that in source code, all sub-commands are currently registered explicitly (<a href="https://github.com/kubernetes/kubernetes/blob/db8d77cfeb0593b0accc17804df43834cc7f9917/staging/src/k8s.io/kubectl/pkg/cmd/create/create.go#L142-L160" rel="nofollow noreferrer">cf.</a>):</p>
<pre class="lang-golang prettyprint-override"><code> // create subcommands
cmd.AddCommand(NewCmdCreateNamespace(f, ioStreams))
cmd.AddCommand(NewCmdCreateQuota(f, ioStreams))
cmd.AddCommand(NewCmdCreateSecret(f, ioStreams))
cmd.AddCommand(NewCmdCreateConfigMap(f, ioStreams))
cmd.AddCommand(NewCmdCreateServiceAccount(f, ioStreams))
cmd.AddCommand(NewCmdCreateService(f, ioStreams))
cmd.AddCommand(NewCmdCreateDeployment(f, ioStreams))
cmd.AddCommand(NewCmdCreateClusterRole(f, ioStreams))
cmd.AddCommand(NewCmdCreateClusterRoleBinding(f, ioStreams))
cmd.AddCommand(NewCmdCreateRole(f, ioStreams))
cmd.AddCommand(NewCmdCreateRoleBinding(f, ioStreams))
cmd.AddCommand(NewCmdCreatePodDisruptionBudget(f, ioStreams))
cmd.AddCommand(NewCmdCreatePriorityClass(f, ioStreams))
cmd.AddCommand(NewCmdCreateJob(f, ioStreams))
cmd.AddCommand(NewCmdCreateCronJob(f, ioStreams))
cmd.AddCommand(NewCmdCreateIngress(f, ioStreams))
cmd.AddCommand(NewCmdCreateToken(f, ioStreams))
return cmd
</code></pre>
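<p>If you really need the <code>kubectl create foo</code> spelling, one hedged workaround (outside of the plugin mechanism) is a shell wrapper that intercepts that sub-command and falls through to the real binary otherwise; the plugin binary name here is just an example:</p>
<pre><code># in your shell profile
kubectl() {
  if [ "$1" = "create" ] && [ "$2" = "foo" ]; then
    shift 2
    command kubectl-create-foo "$@"   # your own binary, named however you like
  else
    command kubectl "$@"
  fi
}
</code></pre>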
| ciis0 |
<p>I'm working on a system that spins up pods in k8s for users to work in for a while. They'll be running code, modifying files, etc. One thing I'd like to do is be able to effectively "export" their pod in its modified state. In Docker I'd just <code>docker commit && docker save</code> to bundle it all into a tar, but I can't see anything similar at all in the Kubernetes API, kubectl, or the client libs.</p>
| justincely | <p><strong>Short answer: No, Kubernetes doesn't have an equivalent to docker commit/save.</strong></p>
<p>As <a href="https://stackoverflow.com/users/9423721/markus-dresch" title="4,635 reputation">Markus Dresch</a> mentioned in the comment:</p>
<blockquote>
<p>kubernetes orchestrates containers, it does not create or modify them.</p>
</blockquote>
<p>Kubernetes and Docker are 2 different tools for different purposes.
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.</p>
<p>Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers.</p>
<p>You can find more information about Pull, Edit, and Push a Docker Image <a href="https://blog.macstadium.com/blog/how-to-k8s-pull-edit-and-push-a-docker-image" rel="nofollow noreferrer">here</a>.</p>
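<p>If you just need a one-off export and your nodes happen to run Docker as the container runtime (an assumption; on containerd you would need <code>nerdctl</code> or similar instead), you can fall back to Docker directly on the node where the pod runs:</p>
<pre><code># on the node that hosts the pod (names and tags are placeholders)
docker ps | grep <pod-name>                      # find the container ID
docker commit <container-id> myrepo/exported:tag
docker save myrepo/exported:tag -o exported.tar
</code></pre>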
| Mikołaj Głodziak |
<p>I need to set up NiFi in Kubernetes (microk8s), in a VM (Ubuntu, using VirtualBox) using a helm chart. The end goal is to have two-way communication with Kafka, which is also already deployed in Kubernetes.</p>
<p>I have found a helm chart for NiFi available through Cetic <a href="https://github.com/cetic/helm-nifi" rel="nofollow noreferrer">here</a>. Kafka is already set up to allow external access through a NodePort, so my assumption is that I should do the same for NiFi (at least for simplicity's sake), though any alternative solution is welcome.</p>
<p>From the documentation, there is NodePort access optionality:</p>
<blockquote>
<p>NodePort: Exposes the service on each Node’s IP at a static port (the
NodePort). You’ll be able to contact the NodePort service, from
outside the cluster, by requesting NodeIP:NodePort.</p>
</blockquote>
<p>Additionally, the documentation states (paraphrasing):</p>
<blockquote>
<p>service.type defaults to NodePort</p>
</blockquote>
<p>However, this does not appear to be true for the helm file, given that the default value in the chart's <a href="https://github.com/cetic/helm-nifi/blob/master/values.yaml" rel="nofollow noreferrer">values.yaml file</a> has <code>service.type=ClusterIP</code>.</p>
<p>I have very little experience with any of these technologies, so my question is, how do I actually set up the NiFi helm chart YAML file to allow two-way communication (presumably via NodePorts)? Is it as simple as "requesting NodeIP:NodePort", and if so, how do I do this?</p>
<p><strong>UPDATE</strong></p>
<p>I attempted <a href="https://jmrobles.medium.com/running-apache-nifi-on-kubernetes-5b7e95adebf3" rel="nofollow noreferrer">JM Robles's approach</a> (which does not use helm), but the API version used for Ingress is out-of-date and I haven't been able to figure out how to fix it.</p>
<p>I also tried <a href="https://github.com/getindata/apache-nifi-kubernetes" rel="nofollow noreferrer">GetInData's approach</a>, but the helm commands provided result in: <code>Error: unknown command "nifi" for "helm"</code>.</p>
| wb1210 | <p>I found an answer, for anyone faced with a similar problem. As of late January 2023, the following can be used to set up NiFi as described in the question:</p>
<pre><code>helm repo add cetic https://cetic.github.io/helm-charts
helm repo update
helm install -n <namespace> --set persistence.enabled=True --set service.type=NodePort --set properties.sensitiveKey=<key you want> --set auth.singleUser.username=<your username> --set auth.singleUser.password=<password you select, must be at least 12 characters> nifi cetic/nifi
</code></pre>
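<p>After the install, a sketch of how to find the assigned NodePort and reach the UI (the exact service name may differ, so list the services first; node IP and port are placeholders):</p>
<pre><code>kubectl get svc -n <namespace>
# then browse to https://<node-ip>:<nodePort>/nifi
</code></pre>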
| wb1210 |
<p>The following error is returned:</p>
<pre><code>error: you must specify two or three arguments: verb, resource, and optional resourceName
</code></pre>
<p>when I executed:</p>
<pre><code>kubectl auth --as=system:serviceaccount:mytest1:default can-i use psp 00-mytest1
</code></pre>
<p>I already have the following manifests for a <code>podsecuritypolicy</code> (psp.yaml), a <code>role</code> (role.yaml) and a <code>rolebinding</code> (rb.yaml), deployed in the namespace <code>mytest1</code>.</p>
<p>psp.yaml</p>
<pre><code> apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: 00-mytest1
labels: {}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
runAsUser:
rule: 'MustRunAsNonRoot'
runAsGroup:
rule: 'MustRunAs'
ranges:
- min: 1000
max: 1000
- min: 1
max: 65535
supplementalGroups:
rule: 'MayRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MayRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
hostNetwork: false
hostIPC: false
hostPID: false
hostPorts: []
volumes:
- configMap
- downwardAPI
- emptyDir
- projected
- secret
</code></pre>
<p>role.yaml</p>
<pre><code> apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: mytest1
namespace: "mytest1"
labels: {}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['00-mytest1']
</code></pre>
<p>and rb.yaml</p>
<pre><code> apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: mytest1
namespace: "mytest1"
labels: {}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: mytest1
subjects:
- kind: ServiceAccount
name: default
namespace: "mytest1"
</code></pre>
<p>I expect the <code>kubectl auth can-i ...</code> check to return <code>yes</code> or <code>no</code>, not the above-mentioned error. Is this use-case for the auth check correct? I appreciate the correction.</p>
| jack_t | <p>You are missing the flag <code>--subresource</code>. If I execute</p>
<pre><code>kubectl auth --as=system:serviceaccount:mytest1:default can-i use psp --subresource=00-mytest1
</code></pre>
<p>I get a clear answer. In my situation:</p>
<pre><code>no
</code></pre>
<p>You can also get a warning like this:</p>
<pre><code>Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
</code></pre>
<p>But it is related directly to your config.</p>
<p>For more information about kubectl auth can-i command check</p>
<pre><code>kubectl auth can-i --help
</code></pre>
<p>in your terminal.
You can also read <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">this doc</a>.</p>
| Mikołaj Głodziak |
<p>Getting this error on my Zabbix Server web. I have my server on a VM and the agent is running on the Kubernetes (GKE). The following image is the status of Zabbix agent. <a href="https://i.stack.imgur.com/Qgg0s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qgg0s.png" alt="enter image description here"></a></p>
| Jayanth | <p>Can you telnet to ZabbixServerIP 10050 from the agent? If not, then you need to open the port between the agent and the Zabbix server.</p>
<p>Also, in /etc/zabbix/zabbix_agentd.conf you need to set the Zabbix server IP address and the local Hostname as well (a sketch is shown below).</p>
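<p>A sketch of the relevant lines in <code>/etc/zabbix/zabbix_agentd.conf</code> (values are placeholders):</p>
<pre><code>Server=<zabbix-server-ip>
ServerActive=<zabbix-server-ip>
Hostname=<hostname-as-configured-in-the-zabbix-frontend>
</code></pre>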
| Syed Nayab |
<p>I'm new to EKS (but familiar with k8s), and I'm trying to run my project on EKS.</p>
<p>I ran my project's deployment and the db deployment, and both are running:</p>
<pre><code>kubectl get deploy -owide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
my-app 1/1 1 1 98m my-app me/my-app:latest app=my-app
db 1/1 1 1 16h db mariadb:10.4.12 app=db
</code></pre>
<p>And I created loadbalancer to reach the my-app pods:</p>
<pre><code>get svc -owide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
my-app LoadBalancer 10.102.XXX.XXX XXX-XXX.us-east-1.elb.amazonaws.com 8000:31722/TCP 114m app=my-app
kubernetes ClusterIP 10.102.0.1 <none> 443/TCP 39h <none>
</code></pre>
<p>I tried to reach the website via the external IP created by the load balancer (XXX-XXX.us-east-1.elb.amazonaws.com) with a port (I tried 80, 8000 and 31722), and I get "the site can't be reached".</p>
<p>Am I missing something?</p>
<p>This is the my-app service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
type: LoadBalancer
ports:
- name: "8000"
port: 8000
targetPort: 8000
selector:
app: my-app
</code></pre>
<p>This is the my-app yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deployment
labels:
app: my-app
name: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- env:
- name: REVERSE_PROXY
value: "true"
- name: UPLOAD_FOLDER
value: /var/uploads
- name: WORKERS
value: "1"
image: me/my-app:latest
imagePullPolicy: ""
name: my-app
ports:
- containerPort: 8000
resources: {}
restartPolicy: Always
serviceAccountName: ""
</code></pre>
| Yagel | <p>I would suggest the following steps to find the root cause (a few quick checks are sketched after this list):</p>
<ol>
<li><p>Check whether the load balancer's XXX-XXX.us-east-1.elb.amazonaws.com domain is reachable from the host/machine you are trying to access it from. You can use the ping command to check network reachability from your laptop/desktop.</p>
</li>
<li><p>If step 1 is successfully executed then, Check in default cloudwatch metrics for your load balancer. Please refer <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-cloudwatch-metrics.html#view-metric-data" rel="nofollow noreferrer">this</a> document about viewing cloudwatch metrics through AWS web console. You can check any unusable error counts in this metrics.</p>
</li>
<li><p>If you did not find any useful information in the Step2 and if you have enabled VPC Flow Logs, then you can trace vpc flow logs to check, what is the problem with network traffic to & from the AWS load balancer. For VPC flow logs information please refer <a href="https://aws.amazon.com/blogs/aws/vpc-flow-logs-log-and-view-network-traffic-flows/" rel="nofollow noreferrer">this</a> document from AWS.</p>
</li>
</ol>
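<p>A few quick checks for step 1 (a sketch; hostname and port are taken from the question):</p>
<pre><code>nslookup XXX-XXX.us-east-1.elb.amazonaws.com
curl -v http://XXX-XXX.us-east-1.elb.amazonaws.com:8000/
kubectl get svc my-app -o wide        # confirm the port the ELB actually listens on
kubectl get endpoints my-app          # confirm the service has healthy pod endpoints
</code></pre>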
| amitd |
<p>I enabled Istio on GKE using the Istio add-on. According to the images, the version of Istio is <code>1.6</code>. Deploying the application, which contains a <code>RequestAuthentication</code> resource, gives the following error:</p>
<pre><code> admission webhook "pilot.validation.istio.io" denied the request:
unrecognized type RequestAuthentication
</code></pre>
<p><code>RequestAuthentication</code> should be available in version <code>1.6</code>. Is there any way to check the compatibility?</p>
<p>Updated: On my on-premise installation everything works with Istio <code>1.9</code>. The configuration is the following:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: xxx-auth
spec:
selector:
matchLabels:
app: xxx-fe
jwtRules:
- issuer: "{{ .Values.idp.issuer }}"
jwksUri: "{{ .Values.idp.jwksUri }}"
</code></pre>
| Katya Gorshkova | <p>I have posted community wiki answer for better visibility.</p>
<p>As <a href="https://stackoverflow.com/users/9496448/katya-gorshkova">Katya Gorshkova</a> has mentioned in the comment:</p>
<blockquote>
<p>Finally, I turned off istio addon and installed the newest istio 1.11.1. It worked without any problems</p>
</blockquote>
<p>See also</p>
<ul>
<li><a href="https://istio.io/latest/news/releases/1.11.x/announcing-1.11.1/" rel="nofollow noreferrer">release notes for istio 1.11.1</a></li>
<li><a href="https://istio.io/latest/docs/setup/upgrade/" rel="nofollow noreferrer">how to upgrade istio</a></li>
<li><a href="https://istio.io/latest/news/releases/1.11.x/announcing-1.11/upgrade-notes/" rel="nofollow noreferrer">Important changes to consider when upgrading to Istio 1.11.0</a></li>
</ul>
| Mikołaj Głodziak |
<p>I have the following <code>kubectl</code> command to obtain the credentials for my Azure cluster:</p>
<pre><code>kubectl config set-credentials token --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --auth-provider=azure
</code></pre>
<p>However, this throws the following error:</p>
<pre><code>creating a new azure token source for device code authentication: client-id is empty
</code></pre>
<p>After doing some investigation, I found out that we need to supply additional information for <code>client id</code>, <code>tenant id</code>, and <code>apiserver id</code>:</p>
<pre><code>kubectl config \
set-credentials "<username>" \
--auth-provider=azure \
--auth-provider-arg=environment=AzurePublicCloud \
--auth-provider-arg=client-id=<kubectl-app-id> \
--auth-provider-arg=tenant-id=<tenant-id> \
--auth-provider-arg=apiserver-id=<apiserver-app-id>
</code></pre>
<p>How should we obtain the <code>client id</code>, <code>tenant id</code>, and <code>apiserver id</code> details?</p>
| jtee | <p>Command <code>kubectl config set-credentials</code> is used to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="nofollow noreferrer">set credentials</a> as the name implies. If you want to get some information from your cluster you have several ways to do. For example you can use Azure Portal. Everything is described <a href="https://www.inkoop.io/blog/how-to-get-azure-api-credentials/" rel="nofollow noreferrer">in this article</a>. For example to get Tenant ID you need to:</p>
<blockquote>
<ol>
<li><a href="https://portal.azure.com/" rel="nofollow noreferrer">Login into your azure account</a>.</li>
<li>Select azure active directory in the left sidebar.</li>
<li>Click properties.</li>
<li>Copy the directory ID.</li>
</ol>
</blockquote>
<p>To get Client ID:</p>
<blockquote>
<ol>
<li><a href="https://portal.azure.com/" rel="nofollow noreferrer">Login into your azure account</a>.</li>
<li>Select azure active directory in the left sidebar.</li>
<li>Click Enterprise applications.</li>
<li>Click All applications.</li>
<li>Select the application which you have created.</li>
<li>Click Properties.</li>
<li>Copy the Application ID .</li>
</ol>
</blockquote>
<p>To get Client Secret:</p>
<blockquote>
<ol>
<li><a href="https://portal.azure.com/" rel="nofollow noreferrer">Login into your azure account</a>.</li>
<li>Select azure active directory in the left sidebar.</li>
<li>Click App registrations.</li>
<li>Select the application which you have created.</li>
<li>Click on All settings.</li>
<li>Click on Keys.</li>
<li>Type Key description and select the Duration.</li>
<li>Click save.</li>
<li>Copy and store the key value. You won't be able to retrieve it after you leave this page.</li>
</ol>
</blockquote>
<p>You can also find these informations using cli based on <a href="https://learn.microsoft.com/en-us/cli/azure/ad/app/credential?view=azure-cli-latest#az_ad_app_credential_list" rel="nofollow noreferrer">oficial documentation</a>.</p>
<p>You can also find additional example for <a href="https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant" rel="nofollow noreferrer">Tenant ID</a> (example with Azure portal and cli options):</p>
<pre><code>az login
az account list
az account tenant list
</code></pre>
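<p>As a side note: if the cluster is AKS (an assumption), the simplest way to get working credentials is to let the Azure CLI write the kubeconfig entry for you, instead of setting them by hand:</p>
<pre><code>az aks get-credentials --resource-group <resource-group> --name <cluster-name>
</code></pre>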
| Mikołaj Głodziak |
<p>How can I check if some namespace is missing quota?</p>
<p>I expected the <code>absent()</code> function to return 1 when something doesn't exist and 0 when something exists.
So I tried to do the next query:</p>
<pre class="lang-yaml prettyprint-override"><code>absent(kube_namespace_labels) * on(namespace) group(kube_resourcequota) by(namespace)
</code></pre>
<p>But Prometheus returned <code>Empty query result</code>.</p>
<p>My final goal is to alert if some namespace is missing quota, how can I achieve this?</p>
| Mom Mam | <p>You can use a different query instead to have all namespaces where <code>resourcequota</code> is missing:</p>
<pre><code>count by (namespace)(kube_namespace_labels) unless sum by (namespace)(kube_resourcequota)
</code></pre>
| moonkotte |
<p>I have a RESTful service within a Spring Boot application. This Spring Boot app is deployed inside a Kubernetes cluster, and we have Istio as a service mesh attached as a sidecar to each container pod in the cluster. Every request to my service first hits the service mesh, i.e. Istio, and then gets routed accordingly.</p>
<p>I need to add validation for a request header, and if that header is not present then randomly generate a unique value and set it as a header on the request. I know that there is Headers.HeaderOperations which I can use in the destination rule, but how can I generate a unique value every time the header is missing? I don't want to write the logic inside my application, as this is a general rule to apply to all the applications inside the cluster.</p>
| codeninja | <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: enable-envoy-xrequestid-in-response
namespace: istio-system
spec:
configPatches:
- applyTo: NETWORK_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
patch:
operation: MERGE
value:
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
always_set_request_id_in_response: true
</code></pre>
| user14847574 |
<p>For <code>ReplicaSets</code> I see there is a way to use a Horizontal Pod Autoscaler (HPA) and set a max/min value for the number of replicas allowed. Is there a similar feature for <code>StatefulSets</code>? Since it also allows you to specify the number of replicas to deploy initially? For example, how would I tell Kubernetes to limit the number of pods it can deploy for a given <code>StatefulSets</code>?</p>
| Jim | <p>I have posted community wiki answer for better visibility.
<a href="https://stackoverflow.com/users/213269/jonas" title="102,968 reputation">Jonas</a> well mentioned in the comment:</p>
<blockquote>
<p>First sentence in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">documentation</a>:</p>
</blockquote>
<blockquote>
<p>"The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set"</p>
</blockquote>
<p>In summary, <strong>it is possible to set min / max replicas for a StatefulSet using HPA.</strong> In <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">this documentation</a> you will learn how HPA works, how to use it, what is supported, etc. The only objects HPA does not work with are those that can't be scaled, for example DaemonSets.</p>
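<p>As an illustration, a minimal HPA manifest targeting a StatefulSet could look like this (names and thresholds are placeholders; older clusters may need <code>autoscaling/v2beta2</code> instead of <code>autoscaling/v2</code>):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-statefulset-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: my-statefulset
  minReplicas: 2      # lower bound for the number of pods
  maxReplicas: 5      # upper bound for the number of pods
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
</code></pre>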
<p>See also this <a href="https://stackoverflow.com/questions/54663845/apply-hpa-for-statefulset-in-kubernetes">related question</a>.</p>
| Mikołaj Głodziak |
<p>I am trying to deploy a Docker container via Kubernetes. In my Dockerfile, I specify this:</p>
<pre><code>FROM docker/psh-base-centos-tomcat:7.7.1908.1
RUN groupadd -r mygroup && useradd --no-log-init -r -g mygroup mygroup
USER mygroup:mygroup
WORKDIR /home/mygroup
RUN chmod 755 -R /tomcat
RUN chown -R mygroup:mygroup /tomcat
COPY ./target/rest-*.war /tomcat/webapps/rest.war
ENTRYPOINT ["sh", "/tomcat/bin/startup.sh"]
</code></pre>
<p>However, when I deploy this service via AKS, logs say this:</p>
<pre><code>sh: /tomcat/bin/catalina.sh: Permission denied
</code></pre>
<p>I manually enabled permission of <code>catalina.sh</code> file specifically by adding <code>RUN chmod 755 -R /tomcat/bin/catalina.sh</code> to DockerFile, then I re-deployed it and now I get this:</p>
<pre><code>touch: cannot touch '/tomcat/logs/catalina.out': Permission denied
/tomcat/bin/catalina.sh: line 434: /tomcat/logs/catalina.out: Permission denied
</code></pre>
<p>It seems like <code>RUN chmod 755 -R /tomcat</code> is not working correctly, and I have no idea why. Is there anything I am doing wrong here to get permission for <code>/tomcat</code> folder?</p>
| Jonathan Hagen | <p>It's a Linux user-management issue in your Dockerfile. Dockerfiles are interpreted line by line in the build process (layer per layer): in your case you set "mygroup" as the current user in the third line, and only afterwards do you try to change the access permissions of directories that this user does not own.</p>
<p>This should work for you :</p>
<pre><code>FROM docker/psh-base-centos-tomcat:7.7.1908.1
USER root
RUN groupadd -r mygroup && useradd --no-log-init -r -g mygroup mygroup
RUN chmod 755 -R /tomcat
RUN chown -R mygroup:mygroup /tomcat
USER mygroup
COPY ./target/rest-*.war /tomcat/webapps/rest.war
ENTRYPOINT ["sh", "/tomcat/bin/startup.sh"]
</code></pre>
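<p>To double-check the result you can build the image and inspect the ownership inside it, for example (the image tag is arbitrary):</p>
<pre><code>docker build -t myapp:test .
docker run --rm --entrypoint sh myapp:test -c "id && ls -ld /tomcat /tomcat/bin /tomcat/logs"
</code></pre>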
| Hajed.Kh |
<p>I have several Windows servers available and would like to setup a Kubernetes cluster on them.
Is there some tool or a step by step instruction how to do so?</p>
<p>What I tried so far is to install DockerDesktop and enable its Kubernetes feature.
That gives me a single node Cluster. However, adding additional nodes to that Docker-Kubernetes Cluster (from different Windows hosts) does not seem to be possible:
<a href="https://stackoverflow.com/questions/54658194/docker-desktop-kubernetes-add-node">Docker desktop kubernetes add node</a></p>
<p>Should I first create a Docker Swarm and could then run Kubernetes on that Swarm? Or are there other strategies?</p>
<p>I guess that I need to open some ports in the Windows Firewall settings of the hosts? And map those ports to some Docker containers in which Kubernetes will be installed? Which ports?</p>
<p>Is there some program that I could install on each Windows host and that would help me with setting up a network with multiple hosts and connecting the Kubernetes nodes running inside Docker containers? Like a "kubeadm for Windows"?</p>
<p>Would be great if you could give me some hint on the right direction.</p>
<p><strong>Edit</strong>:<br>
Related info about installing <strong>kubeadm inside Docker</strong> container:<br>
<a href="https://github.com/kubernetes/kubernetes/issues/35712" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/35712</a><br>
<a href="https://github.com/kubernetes/kubeadm/issues/17" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/issues/17</a></p>
<p>Related question about <strong>Minikube</strong>:<br>
<a href="https://stackoverflow.com/questions/51821057/adding-nodes-to-a-windows-minikube-kubernetes-installation-how">Adding nodes to a Windows Minikube Kubernetes Installation - How?</a></p>
<p>Info on <strong>kind</strong> (kubernetes in docker) multi-node cluster:<br>
<a href="https://dotnetninja.net/2021/03/running-a-multi-node-kubernetes-cluster-on-windows-with-kind/" rel="nofollow noreferrer">https://dotnetninja.net/2021/03/running-a-multi-node-kubernetes-cluster-on-windows-with-kind/</a>
(Creates multi-node kubernetes cluster on <strong>single</strong> windows host)<br>
Also see:</p>
<ul>
<li><a href="https://github.com/kubernetes-sigs/kind/issues/2652" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kind/issues/2652</a></li>
<li><a href="https://hub.docker.com/r/kindest/node" rel="nofollow noreferrer">https://hub.docker.com/r/kindest/node</a></li>
</ul>
| Stefan | <p>You can always refer to the official Kubernetes documentation, which is the right source for this kind of information.</p>
<p>That is the best way to approach this question.</p>
<p>Based on <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/" rel="nofollow noreferrer">Adding Windows nodes</a>, you need to have two prerequisites:</p>
<blockquote>
<ul>
<li><p>Obtain a Windows Server 2019 license (or higher) in order to configure the Windows node that hosts Windows containers. If you are
using VXLAN/Overlay networking you must also have KB4489899
installed.</p>
</li>
<li><p>A <strong>Linux-based Kubernetes kubeadm cluster</strong> in which you have access to the control plane (<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">see Creating a single control-plane cluster with kubeadm</a>).</p>
</li>
</ul>
</blockquote>
<p>The second point is especially important, since all control plane components are supposed to run on Linux systems (I guess you can run a Linux VM on one of the servers to host the control plane components, but networking will be much more complicated).</p>
<p>And once you have a properly running control plane, there's a <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/#joining-a-windows-worker-node" rel="nofollow noreferrer"><code>kubeadm for Windows</code></a> to properly join Windows nodes to the Kubernetes cluster, as well as documentation on <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/" rel="nofollow noreferrer">how to upgrade Windows nodes</a>.</p>
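<p>Joining a Windows worker then looks essentially the same as joining a Linux worker: run <code>kubeadm join</code> on the Windows node with the values printed by the control plane (all values below are placeholders):</p>
<pre><code># on the Linux control plane, print the join command
kubeadm token create --print-join-command

# on the Windows node, run the printed command, e.g.:
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
</code></pre>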
<p>For firewall and which ports should be open check <a href="https://kubernetes.io/docs/reference/ports-and-protocols/" rel="nofollow noreferrer">ports and protocols</a>.</p>
<p>For worker node (which will be windows nodes):</p>
<pre><code>Protocol Direction Port Range Purpose Used By
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 30000-32767 NodePort Services All
</code></pre>
<hr />
<p>Another option can be running Windows nodes in a cloud-managed Kubernetes service, for example <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster-windows" rel="nofollow noreferrer">GKE with a Windows node pool</a> (yes, I understand that it's not your use case, but for further reference).</p>
| moonkotte |
<p>I have implemented a gRPC service, build it into a container, and deployed it using k8s, in particular AWS EKS, as a DaemonSet.</p>
<p>The Pod starts and reaches Running status very soon, but it takes very long, typically 300s, for the actual service to become accessible.</p>
<p>In fact, when I run <code>kubectl logs</code> to print the log of the Pod, it is empty for a long time.</p>
<p>I have logged something at the very starting of the service. In fact, my code looks like</p>
<pre class="lang-golang prettyprint-override"><code>package main
func init() {
log.Println("init")
}
func main() {
// ...
}
</code></pre>
<p>So I am pretty sure when there are no logs, the service is not started yet.</p>
<p>I understand that there may be a time gap between the Pod is running and the actual process inside it is running. However, 300s looks too long for me.</p>
<p>Furthermore, this happens randomly, sometimes the service is ready almost immediately. By the way, my runtime image is based on <a href="https://hub.docker.com/r/chromedp/headless-shell/" rel="nofollow noreferrer">chromedp headless-shell</a>, not sure if it is relevant.</p>
<p>Could anyone provide some advice for how to debug and locate the problem? Many thanks!</p>
<hr />
<p>Update</p>
<p>I did not set any readiness probes.</p>
<p>Running <code>kubectl get -o yaml</code> of my DaemonSet gives</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
deprecated.daemonset.template.generation: "1"
creationTimestamp: "2021-10-13T06:30:16Z"
generation: 1
labels:
app: worker
uuid: worker
name: worker
namespace: collection-14f45957-e268-4719-88c3-50b533b0ae66
resourceVersion: "47265945"
uid: 88e4671f-9e33-43ef-9c49-b491dcb578e4
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app: worker
uuid: worker
template:
metadata:
annotations:
prometheus.io/path: /metrics
prometheus.io/port: "2112"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: worker
uuid: worker
spec:
containers:
- env:
- name: GRPC_PORT
value: "22345"
- name: DEBUG
value: "false"
- name: TARGET
value: localhost:12345
- name: TRACKER
value: 10.100.255.31:12345
- name: MONITOR
value: 10.100.125.35:12345
- name: COLLECTABLE_METHODS
value: shopping.ShoppingService.GetShop
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: DISTRIBUTABLE_METHODS
value: collection.CollectionService.EnumerateShops
- name: PERFORM_TASK_INTERVAL
value: 0.000000s
image: xxx
imagePullPolicy: Always
name: worker
ports:
- containerPort: 22345
protocol: TCP
resources:
requests:
cpu: 1800m
memory: 1Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- env:
- name: CAPTCHA_PARALLEL
value: "32"
- name: HTTP_PROXY
value: http://10.100.215.25:8080
- name: HTTPS_PROXY
value: http://10.100.215.25:8080
- name: API
value: 10.100.111.11:12345
- name: NO_PROXY
value: 10.100.111.11:12345
- name: POD_IP
image: xxx
imagePullPolicy: Always
name: source
ports:
- containerPort: 12345
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/ssl/certs/api.crt
name: ca
readOnly: true
subPath: tls.crt
dnsPolicy: ClusterFirst
nodeSelector:
api/nodegroup-app: worker
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: ca
secret:
defaultMode: 420
secretName: ca
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
status:
currentNumberScheduled: 2
desiredNumberScheduled: 2
numberAvailable: 2
numberMisscheduled: 0
numberReady: 2
observedGeneration: 1
updatedNumberScheduled: 2
</code></pre>
<p>Furthermore, there are two containers in the Pod. Only one of them is exceptionally slow to start, and the other one is always fine.</p>
| HanXu | <p>I have posted community wiki answer to summarize the topic:</p>
<p>As <a href="https://stackoverflow.com/users/14704799/gohmc">gohm'c</a> has mentioned in the comment:</p>
<blockquote>
<p>Do connections made by container "source" always have to go thru HTTP_PROXY, even if it is connecting services in the cluster - do you think possible long time been taken because of proxy? Can try <code>kubectl exec -it <pod> -c <source> -- sh</code> and curl/wget external services.</p>
</blockquote>
<p>This is a good observation. Note that some connections can be made directly, and routing extra traffic through the proxy may result in delays; for example, a bottleneck may arise. You can read more about using an HTTP proxy to access the Kubernetes API in the <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/" rel="nofollow noreferrer">documentation</a>.</p>
<p>Additionally you can also create <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">readiness probes</a> to know when a container is ready to start accepting traffic.</p>
<blockquote>
<p>A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
<p>The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.</p>
</blockquote>
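<p>For example, since the slow container serves gRPC on port 22345, a probe on that port would mark the moment the process actually starts accepting connections. A minimal sketch (the thresholds are just an illustration and must be tuned; startup probes need Kubernetes 1.18+, so on 1.16 a readiness probe with a large <code>failureThreshold</code> is the fallback):</p>
<pre><code>      containers:
      - name: worker
        ports:
        - containerPort: 22345
        startupProbe:
          tcpSocket:
            port: 22345
          periodSeconds: 10
          failureThreshold: 60   # allows up to ~600s for a slow start
        readinessProbe:
          tcpSocket:
            port: 22345
          periodSeconds: 10
</code></pre>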
| Mikołaj Głodziak |
<p>currently busy learning kubernetes and running configs on the command line, and I'm using an M1 MacOS running on version 11.5.1, and one of the commands I wanted to run is <code>curl "http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy"</code> but I get the below error message <code>curl: (3) URL using bad/illegal format or missing URL</code>. Not sure if anyone has experienced this issue before, would appreciate the help.</p>
| msingathi majola | <p>First, the <code>curl</code> command should receive only 1 host, not multiple hosts.
Therefore the pod reference should resolve to a single pod.</p>
<p>Then, you need to save the pod's name to a variable without any special characters.</p>
<p>Last, when you're using <code>kubectl proxy</code>, you need to add the <code>-L</code> option to the <code>curl</code> command so it will follow the redirection.</p>
<p>Simple example will be:</p>
<pre><code># run pod with echo image
kubectl run echo --image=mendhak/http-https-echo
# start proxy
kubectl proxy
# export pod's name
export POD_NAME=echo
# curl with `-I` - headers and `-L` - follow redirects
curl -IL http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy
HTTP/1.1 301 Moved Permanently
Location: /api/v1/namespaces/default/pods/echo/proxy/
HTTP/1.1 200 OK
</code></pre>
| moonkotte |
<p>I'm trying to create an Istio ingress gateway (istio: 1.9.1, EKS: 1.18) with a duplicate <code>targetPort</code> like this:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
name: istio
spec:
components:
ingressGateways:
- name: ingressgateway
k8s:
service:
ports:
- port: 80
targetPort: 8080
name: http2
- port: 443
name: https
targetPort: 8080
</code></pre>
<p>but I get the error:</p>
<pre><code>- Processing resources for Ingress gateways.
✘ Ingress gateways encountered an error: failed to update resource with server-side apply for obj Deployment/istio-system/istio: failed to create typed patch object: errors:
.spec.template.spec.containers[name="istio-proxy"].ports: duplicate entries for key [containerPort=8080,protocol="TCP"]
</code></pre>
<p>I am running Istio in EKS so we terminate TLS at the NLB, so all traffic (<code>http</code> and <code>https</code>) should go to the pod on the same port (8080)</p>
<p>Any ideas how I can solve this issue?</p>
| SamMaj | <p>Until the istio 1.8 version, this type of configuration worked (although with initial problems) - <a href="https://github.com/kubernetes/kubernetes/issues/53526" rel="nofollow noreferrer">github #53526</a>.</p>
<p>As of version 1.9 an error is generated and you can no longer use the same port twice in one definition. Reports have also been created (or reopened) on GitHub:</p>
<ul>
<li><a href="https://github.com/istio/istio/issues/35349" rel="nofollow noreferrer">github #35349</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/pull/53576" rel="nofollow noreferrer">github #53576</a></li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/1622" rel="nofollow noreferrer">github #1622</a></li>
</ul>
<p>However, those reports concern versions of Kubernetes and Istio that are outdated at the time of writing this response.</p>
<p>Possible workarounds:</p>
<ul>
<li>Use a different <code>targetPort</code> for one of the ports (see the sketch after this list)</li>
<li>Upgrade Kubernetes and Istio to newest versions.</li>
</ul>
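<p>A minimal sketch of the first workaround, keeping two distinct container ports (8080/8443 are only example values; the gateway must of course listen on both), could be:</p>
<pre><code>    ingressGateways:
    - name: ingressgateway
      k8s:
        service:
          ports:
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https
</code></pre>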
| Mikołaj Głodziak |
<p>We have "kubectl describe nodes";
likewise, do we have any command, like the one mentioned below, to describe cluster information?</p>
<p>kubectl describe cluster</p>
| AkhilKarumanchi | <p>"Kubectl describe <api-resource_type> <api_resource_name> "command is used to describe a specific resources running in your kubernetes cluster, Actually you need to verify different components separately as a developer to check your pods, nodes services and other tools that you have applied/created.</p>
<p>If you are the cluster administrator and you are asking about useful command to check the actual kube-system configuration it depends on your k8s cluster type for example if you are using "kubeadm" package to initialize k8s cluster on premises you can check and change the default cluster configuration using this command :</p>
<pre><code>kubeadm config print init-defaults
</code></pre>
<p>After initializing your cluster, all main server configuration files, a.k.a. manifests, are located in /etc/kubernetes/manifests (and they are watched in real time: change anything and the cluster will redeploy it automatically).</p>
<p>Useful kubectl commands :</p>
<p>For cluster info (API server endpoint and DNS) run:</p>
<pre><code>kubectl cluster-info
</code></pre>
<p>Either way, you can list all API resources and check them one by one using these commands:</p>
<pre><code>kubectl api-resources (list all api-resources names and types)
kubectl get <api_resource_name> (specific to your cluster)
kubectl explain <api_resource_name> (explain the resource object with docs link)
</code></pre>
<p>For extra info you can add specific flags, for example:</p>
<pre><code>kubectl get nodes -o wide
kubectl get pods -n <specific-name-space> -o wide
kubectl describe pods <pod_name>
...
</code></pre>
<p>For more information about the kubectl command line, check the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">kubectl cheat sheet</a>.</p>
| Hajed.Kh |
<p>I have tried desperately to apply a simple pod specification without any luck, even with this previous answer: <a href="https://stackoverflow.com/questions/48534980/mount-local-directory-into-pod-in-minikube">Mount local directory into pod in minikube</a></p>
<p>The yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: hostpath-pod
spec:
containers:
- image: httpd
name: hostpath-pod
volumeMounts:
- mountPath: /data
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /tmp/data/
</code></pre>
<p>I started minikube cluster with: <code>minikube start --mount-string="/tmp:/tmp" --mount</code> and there are 3 files in <code>/tmp/data</code>:</p>
<pre class="lang-sh prettyprint-override"><code>ls /tmp/data/
file2.txt file3.txt hello
</code></pre>
<p>However, this is what I get when I do <code>kubectl describe pods</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m26s default-scheduler Successfully assigned default/hostpath-pod to minikube
Normal Pulled 113s kubelet, minikube Successfully pulled image "httpd" in 32.404370125s
Normal Pulled 108s kubelet, minikube Successfully pulled image "httpd" in 3.99427232s
Normal Pulled 85s kubelet, minikube Successfully pulled image "httpd" in 3.906807762s
Normal Pulling 58s (x4 over 2m25s) kubelet, minikube Pulling image "httpd"
Normal Created 54s (x4 over 112s) kubelet, minikube Created container hostpath-pod
Normal Pulled 54s kubelet, minikube Successfully pulled image "httpd" in 4.364295872s
Warning Failed 53s (x4 over 112s) kubelet, minikube Error: failed to start container "hostpath-pod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/tmp/data" to rootfs at "/data" caused: stat /tmp/data: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning BackOff 14s (x6 over 103s) kubelet, minikube Back-off restarting failed container
</code></pre>
<p>Not sure what I'm doing wrong here. If it helps I'm using minikube version <code>v1.23.2</code> and this was the output when I started minikube:</p>
<pre><code>😄 minikube v1.23.2 on Darwin 11.5.2
▪ KUBECONFIG=/Users/sachinthaka/.kube/config-ds-dev:/Users/sachinthaka/.kube/config-ds-prod:/Users/sachinthaka/.kube/config-ds-dev-cn:/Users/sachinthaka/.kube/config-ds-prod-cn
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperkit VM for "minikube" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
📁 Creating mount /tmp:/tmp ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
❗ /usr/local/bin/kubectl is version 1.18.0, which may have incompatibilites with Kubernetes 1.22.2.
▪ Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
</code></pre>
<p>Anything I can try? :'(</p>
<h2>Update 1</h2>
<ul>
<li>Changing from <code>minikube</code> to <code>microk8s</code> helped. But I'm still not seeing anything inside <code>/data/</code> in the pod.</li>
<li>Also changing from <code>/tmp/</code> to a different folder helped in <code>minikube</code>. Something to do with MacOs.</li>
</ul>
| sachinruk | <p><strong>OP has said, that problem is solved:</strong></p>
<blockquote>
<p>changing from /tmp/ to a different folder helped in minikube. Something to do with MacOs
For some reason minikube doesn't like <code>/tmp/</code></p>
</blockquote>
<p><strong>An explanation of this problem:</strong>
You cannot mount <code>/tmp</code> to <code>/tmp</code>. The problem isn't with macOS, but with the way you do it. I tried to recreate this problem in several ways. I used Docker and got a very interesting error:
<pre><code>docker: Error response from daemon: Duplicate mount point: /tmp.
</code></pre>
<p>This error makes it clear what the problem is. If you mount your directory elsewhere, everything should work (which was confirmed):</p>
<blockquote>
<p>Do I understand correctly, that when you changed the mount point to a different folder, does it work?</p>
</blockquote>
<blockquote>
<p>that is correct. For some reason minikube doesn't like <code>/tmp/</code></p>
</blockquote>
<p>I know you are using hyperkit, while in my case I used Docker, but the only difference will be in the message you get on the screen. In the case of Docker, it is very clear.</p>
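<p>Based on that, a working variant is to mount the host folder somewhere other than /tmp and point the hostPath there; as a sketch (the /mnt/data path is chosen arbitrarily):</p>
<pre><code># mount the host folder under a different path inside the minikube VM
minikube start --mount-string="/tmp/data:/mnt/data" --mount

# and in the pod spec reference that path
  volumes:
  - name: test-volume
    hostPath:
      path: /mnt/data
</code></pre>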
| Mikołaj Głodziak |
<p>I want to fix version difference between client (1.21) and server (1.24).<br />
1.21 version for client is desired version.<br />
So version for server shall be decreased.</p>
<pre><code>$ kubectl version --short
Client Version: v1.21.14
Server Version: v1.24.3
WARNING: version difference between client (1.21) and server (1.24) exceeds the supported minor version skew of +/-1
</code></pre>
| Ryo Matsuzaka | <p>I could solve the issue thanks to <a href="https://stackoverflow.com/users/147356/larsks">larsks</a>'s advice.</p>
<p>I uninstalled the latest version and installed <code>v1.23.2</code> of minikube.<br />
Then Kubernetes server version <code>v1.22.2</code> was installed.</p>
<p>Not only stopping minikube but also deleting it is needed to overwrite its version.</p>
<pre><code>$ minikube stop
$ minikube delete
$ curl -LO https://storage.googleapis.com/minikube/releases/v1.23.2/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
$ kubectl version --short
Client Version: v1.21.14
Server Version: v1.22.2
</code></pre>
<p>WARNING disappeared.</p>
<p>Reference</p>
<ul>
<li><a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">minikube start</a></li>
</ul>
| Ryo Matsuzaka |
<p>I'm following a tutorial on Kubernetes and got stuck: the logs look fine, but the exposed port doesn't work ("Connection Refused" using Chrome / curl).</p>
<p>Used a yaml file to power up the service via NodePort / ClusterIP.</p>
<p>posts-srv.yaml - Updated</p>
<pre class="lang-js prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: posts-srv
spec:
type: NodePort
selector:
app: posts
ports:
- name: posts
protocol: TCP
port: 4000
targetPort: 4000
nodePort: 32140
</code></pre>
<p>posts-depl.yaml - Updated</p>
<pre class="lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: posts-depl
spec:
replicas: 1
selector:
matchLabels:
app: posts
template:
metadata:
labels:
app: posts
spec:
containers:
- name: posts
image: suraniadi/posts
ports:
- containerPort: 4000
</code></pre>
<pre><code>$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
posts-depl 1/1 1 1 27m
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27h
posts-srv NodePort 10.111.64.122 <none> 4000:32140/TCP 21m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
posts-depl-79b6889f89-rxdv2 1/1 Running 0 26m
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| Adrian Surani | <p>For structural reasons, it's better to specify the nodePort in your service YAML configuration file, otherwise Kubernetes will allocate it randomly from the k8s port range (30000-32767).
The ports section is a list of ports; in your case there is no need to specify a name. Check the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort docs</a> for more info.
This should work for you:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: posts-srv
spec:
type: NodePort
selector:
app: posts
ports:
- port: 4000
targetPort: 4000
nodePort: 32140
protocol: TCP
</code></pre>
<p>To connect to the NodePort service, check whether a firewall service is running, and if so make sure this port is allowed on your VMs (CentOS example):</p>
<pre><code>sudo firewall-cmd --permanent --add-port=32140/tcp
</code></pre>
<p>Finally, connect to this service using any <strong>node IP</strong> address (not the ClusterIP, which is an internal IP not accessible outside the cluster) and the nodePort: <node_public_IP>:32140</p>
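<p>For example (assuming the firewall rule above has been applied):</p>
<pre><code>kubectl get nodes -o wide        # take the node's EXTERNAL-IP (or INTERNAL-IP if you are inside the same network)
curl http://<node_public_IP>:32140
</code></pre>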
| Hajed.Kh |
<p>AFAIK Ceph has 2 specific traffic paths:</p>
<ol>
<li>Traffic between client and ceph nodes,</li>
<li>Traffic between ceph nodes (Inter Ceph-node).</li>
</ol>
<p>So, let say my network is like this</p>
<p><a href="https://i.stack.imgur.com/wvDb7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wvDb7.jpg" alt="enter image description here" /></a></p>
<p>Note :</p>
<ol>
<li>Kube-node-4 is a kubernet worker that do not take part as rook node. Just a ceph-client</li>
<li>Red, Green and blue line is a seperate ethernet network.</li>
</ol>
<p>Can I do traffic separation like this using Rook?
Is there any documentation on how to do it?</p>
<p>Sincerely</p>
<p>-bino-</p>
| Bino Oetomo | <p>Check out the <a href="https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/" rel="nofollow noreferrer">ceph docs</a>, what you describe is the separation of public and cluster networks. Cluster network is used for OSD <--> OSD traffic only (replication of PGs) while the public network is for Ceph clients as well as the other Ceph daemons (MON, MGR, etc). I'm not familiar with rook but according to the <a href="https://www.rook.io/docs/rook/v1.9/Storage-Configuration/Advanced/ceph-configuration/#osd-dedicated-network" rel="nofollow noreferrer">guide</a> you have to override the config, to get the current config map run:</p>
<pre><code>kubectl -n rook-ceph get ConfigMap rook-config-override -o yaml
</code></pre>
<blockquote>
<p>Enable the hostNetwork setting in the <a href="https://www.rook.io/docs/rook/v1.9/CRDs/Cluster/ceph-cluster-crd/#samples" rel="nofollow noreferrer">Ceph Cluster CRD configuration</a>.
For example,</p>
</blockquote>
<pre><code> network:
provider: host
</code></pre>
<p>and then</p>
<blockquote>
<p>Define the subnets to use for public and private OSD networks. Edit
the rook-config-override configmap to define the custom network
configuration:</p>
</blockquote>
<pre><code>kubectl -n rook-ceph edit configmap rook-config-override
</code></pre>
<blockquote>
<p>In the editor, add a custom configuration to instruct ceph which
subnet is the public network and which subnet is the private network.
For example:</p>
</blockquote>
<pre><code>apiVersion: v1
data:
config: |
[global]
public network = 10.0.7.0/24
cluster network = 10.0.10.0/24
public addr = ""
cluster addr = ""
</code></pre>
<blockquote>
<p>After applying the updated rook-config-override configmap, it will be
necessary to restart the OSDs by deleting the OSD pods in order to
apply the change. Restart the OSD pods by deleting them, one at a
time, and running ceph -s between each restart to ensure the cluster
goes back to "active/clean" state.</p>
</blockquote>
| eblock |
<p>I am trying to configure Kubernetes on docker-for-desktops and I want to change the default network assigned to containers. </p>
<blockquote>
<p>Example: the default network is <code>10.1.0.0/16</code> but I want <code>172.16.0.0/16</code>. </p>
</blockquote>
<p>I changed the docker network section to <code>Subnet address: 172.16.0.0 and netmask 255.255.0.0</code> but the cluster keeps assigning the network 10.1.0.0/16.
<a href="https://i.stack.imgur.com/mdlFB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mdlFB.png" alt="Network configuration"></a></p>
<p>The problem I am facing here is that I am in a VPN which has the same network IP of kubernetes default network (<code>10.1.0.0/16</code>) so if I try to ping a host that is under the vpn, the container from which I am doing the ping keeps saying <code>Destination Host Unreachable</code>.</p>
<p>I am running Docker Desktop (under Windows Pro) Version 2.0.0.0-win81 (29211) Channel: stable Build: 4271b9e.</p>
<p>Kubernetes is provided from Docker desktop <a href="https://i.stack.imgur.com/xshra.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xshra.png" alt="Kuberbetes"></a></p>
<p>From the official <a href="https://docs.docker.com/docker-for-windows/kubernetes/" rel="noreferrer">documentation</a> I know that </p>
<blockquote>
<p>Kubernetes is available in Docker for Windows 18.02 CE Edge and higher, and 18.06 Stable and higher , this includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, <strong>is not configurable</strong>, and is a single-node cluster</p>
</blockquote>
<p>Said so, should Kubernetes use the underlying docker's configuration (like network, volumes etc.)?</p>
| Justin | <p>On Windows, edit this file for a permanent fix:</p>
<pre><code>%AppData%\Docker\cni\10-default.conflist
</code></pre>
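<p>The exact contents of that file can differ between Docker Desktop versions, but the relevant part is the <code>host-local</code> IPAM section of the bridge plugin; as a rough sketch (field names other than the ipam subnet/gateway may look different in your file), change the subnet to the range you want and then restart Docker Desktop / reset the Kubernetes cluster:</p>
<pre><code>{
  "cniVersion": "0.3.1",
  "name": "default",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "172.16.0.0/16",
        "gateway": "172.16.0.1"
      }
    }
  ]
}
</code></pre>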
| user14137974 |
<p>Hi, just a newbie question.
I managed(?) to implement PV and PVC for MongoDB. I'm using a local PV, not one in the cloud.
Is there a way to keep the data after a container restart when k8s runs on my PC?</p>
<p>I'm not sure I got this right, <strong>but what I need is to keep the mongo data after a restart</strong>. What is the best way? (no Mongo Atlas)</p>
<p><strong>UPDATE:
I managed to make the tickets service DB work great, but I have 2 other services where it just won't work! I updated the YAML files so you can see the current state. The auth-mongo is just the same as tickets-mongo, so why won't it work?</strong></p>
<p>the ticket-depl-mongo yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: tickets-mongo-depl
spec:
replicas: 1
selector:
matchLabels:
app: tickets-mongo
template:
metadata:
labels:
app: tickets-mongo
spec:
containers:
- name: tickets-mongo
image: mongo
args: ["--dbpath", "data/auth"]
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
volumeMounts:
- mountPath: /data/auth
name: tickets-data
volumes:
- name: tickets-data
persistentVolumeClaim:
claimName: tickets-pvc
---
apiVersion: v1
kind: Service
metadata:
name: tickets-mongo-srv
spec:
selector:
app: tickets-mongo
ports:
- name: db
protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>auth-mongo-depl.yaml :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-mongo-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth-mongo
template:
metadata:
labels:
app: auth-mongo
spec:
containers:
- name: auth-mongo
image: mongo
args: ["--dbpath", "data/db"]
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
volumeMounts:
- mountPath: /data/db
name: auth-data
volumes:
- name: auth-data
persistentVolumeClaim:
claimName: auth-pvc
---
apiVersion: v1
kind: Service
metadata:
name: auth-mongo-srv
spec:
selector:
app: auth-mongo
ports:
- name: db
protocol: TCP
port: 27017
targetPort: 27017
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-auth 1Gi RWO Retain Bound default/auth-pvc auth 78m
pv-orders 1Gi RWO Retain Bound default/orders-pvc orders 78m
pv-tickets 1Gi RWO Retain Bound default/tickets-pvc tickets 78m
</code></pre>
<p>I'm using mongo containers with tickets, orders, and auth services.
Just adding some info to make it clear.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
auth-depl-66c5d54988-ffhwc 1/1 Running 0 36m
auth-mongo-depl-594b98fcc5-k9hj8 1/1 Running 0 36m
client-depl-787cf6c7c6-xxks9 1/1 Running 0 36m
expiration-depl-864d846445-b95sh 1/1 Running 0 36m
expiration-redis-depl-64bd9fdb95-sg7fc 1/1 Running 0 36m
nats-depl-7d6c7dc46-m6mcg 1/1 Running 0 36m
orders-depl-5478cf4dfd-zmngj 1/1 Running 0 36m
orders-mongo-depl-5f974847d7-bz9s4 1/1 Running 0 36m
payments-depl-78f85d94fd-4zs55 1/1 Running 0 36m
payments-mongo-depl-5d5c47494b-7zjrl 1/1 Running 0 36m
tickets-depl-84d59fd47c-cs4k5 1/1 Running 0 36m
tickets-mongo-depl-66798d9874-cfbqb 1/1 Running 0 36m
</code></pre>
<p>example for pv:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-tickets
labels:
type: local
spec:
storageClassName: tickets
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp"
</code></pre>
| YanivCode | <p>All I had to do was change the path of hostPath in each PV. Using the same path will make the app fail.</p>
<p>pv1:</p>
<pre><code> hostPath:
path: "/path/x1"
</code></pre>
<p>pv2:</p>
<pre><code> hostPath:
path: "/path/x2"
</code></pre>
<p>Like so, just not the same path.</p>
| YanivCode |
<p>I am trying to deploy a pod with second interface using multus-cni. However, when I deploy my pod I only see just one interface the main one. The secondary interface is not created.</p>
<p>I followed the steps in the quick start guide to install multus.</p>
<p>Environment:
minikube v1.12.1 on Microsoft Windows 10 Enterprise
Kubernetes v1.18.3 on Docker 19.03.12</p>
<p><strong>Multus version</strong>
--cni-version=0.3.1</p>
<p><strong>$00-multus.conf</strong></p>
<pre><code>{ "cniVersion": "0.3.1", "name": "multus-cni-network", "type": "multus", "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig", "delegates": [ { "cniVersion": "0.3.1", "name":
"bridge", "type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true, "forceAddress": false, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local
", "subnet": "10.244.0.0/16" } } ] }
</code></pre>
<p><strong>$1-k8s.conf</strong></p>
<pre><code>{
"cniVersion": "0.3.1",
"name": "bridge",
"type": "bridge",
"bridge": "bridge",
"addIf": "true",
"isDefaultGateway": true,
"forceAddress": false,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"subnet": "10.244.0.0/16"
}
}
</code></pre>
<p><strong>$87-podman-bridge.conflist</strong></p>
<pre><code>{
"cniVersion": "0.4.0",
"name": "podman",
"plugins": [
{
"type": "bridge",
"bridge": "cni-podman0",
"isGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"routes": [{ "dst": "0.0.0.0/0" }],
"ranges": [
[
{
"subnet": "10.88.0.0/16",
"gateway": "10.88.0.1"
}
]
]
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
},
{
"type": "firewall"
},
{
"type": "tuning"
}
]
}
</code></pre>
<p><strong>$multus.kubeconfig</strong></p>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
server: https://[10.96.0.1]:443
certificate-authority-data: .....
users:
- name: multus
user:
token: .....
contexts:
- name: multus-context
context:
cluster: local
user: multus
current-context: multus-context
</code></pre>
<p>File of '/etc/cni/multus/net.d'</p>
<p><strong>NetworkAttachment info:</strong></p>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.1",
"type": "macvlan",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'
EOF
</code></pre>
<p><strong>Pod yaml info:</strong></p>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
</code></pre>
| Farhad | <p>I installed a new minikube version; now adding a secondary interface works fine.</p>
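<p>If it helps anyone else: once the pod is running, the extra interface can be verified from inside the pod, e.g.:</p>
<pre><code>kubectl exec -it samplepod -- ip a
# a second interface (usually net1) with an address from 192.168.1.200-216 should be listed
</code></pre>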
| Farhad |
<p>Hi there, I was reviewing the GKE Autopilot mode and noticed that in the cluster configuration Istio is disabled and I'm not able to change it. Also, installation via istioctl install fails with the following error:</p>
<pre><code> error installer failed to update resource with server-side apply for obj MutatingWebhookConfiguration//istio-sidecar-injector: mutatingwebhookconfigurations.admissionregistration.k8s.io "istio-sidecar-injector" is forbidden: User "something@example" cannot patch resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope: GKEAutopilot authz: cluster scoped resource "mutatingwebhookconfigurations/" is managed and access is denied
</code></pre>
<p>Am I correct or it's not possible to run istio in GKE autopilot mode?</p>
| Maciej Perliński | <p><strong>TL;DR</strong></p>
<p>It is not possible at this moment to run istio in GKE autopilot mode.</p>
<p><strong>Conclusion</strong></p>
<p>If you are using Autopilot, you don't need to manage your nodes. You don't have to worry about operations such as updating, scaling or changing the operating system. However, the autopilot has a number of <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#limits" rel="noreferrer">limitations</a>.</p>
<p>Even if you are trying to install istio with a command <code>istioctl install</code>, istio will not be installed. You will then see the following message:</p>
<p>This will install the Istio profile into the cluster. Proceed? (y/N) y</p>
<p>✔ Istio core installed<br />
✔ Istiod installed<br />
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway</p>
<ul>
<li>Pruning removed resources 2021-05-07T08:24:40.974253Z warn installer retrieving resources to prune type admissionregistration.k8s.io/v1beta1, Kind=MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "something@example" cannot list resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope: GKEAutopilot authz: cluster scoped resource "mutatingwebhookconfigurations/" is managed and access is denied not found
Error: failed to install manifests: errors occurred during operation</li>
</ul>
<p>This command failed because, for <a href="https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/" rel="nofollow noreferrer">sidecar injection</a>, the installer tries to create a <em>MutatingWebhookConfiguration</em> called <em>istio-sidecar-injector</em>. This limitation is <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#webhooks_limitations" rel="nofollow noreferrer">mentioned here</a>.</p>
<p>For more information you can also <a href="https://medium.com/sakajunlabs/running-istio-on-gke-autopilot-707f6de2d43b" rel="noreferrer">read this page</a>.</p>
| Mikołaj Głodziak |
<p>I'm trying to create a very simple Kubernetes project that includes communication between frontend client written in Reactjs +nginx and backend server written in Java + Spring boot.</p>
<p>I'm able to make this communication with docker-compose locally but when deploying it to gke I'm getting: failed (111: Connection refused)</p>
<p>on the backend I have:</p>
<p>controller:</p>
<pre><code>@RestController
@RequestMapping("/msgs")
@CrossOrigin
public class MsgController {
@GetMapping("/getMsg")
public String getMsg() {
return "hello from backend";
}
}
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM adoptopenjdk/openjdk11:alpine-jre
WORKDIR /opt/app
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
# java -jar /opt/app/app.jar
ENTRYPOINT ["java","-jar","app.jar"]
</code></pre>
<p>deployment yml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-demo
spec:
selector:
matchLabels:
app: server-demo
tier: backend
track: stable
replicas: 1
template:
metadata:
labels:
app: server-demo
tier: backend
track: stable
spec:
containers:
- name: hello
image: "gcr.io/gcp-kub-course/server-demo:latest"
ports:
- name: http
containerPort: 4420
---
apiVersion: v1
kind: Service
metadata:
name: server-demo
spec:
selector:
app: hello
tier: backend
ports:
- protocol: TCP
port: 4420
targetPort: 4420
</code></pre>
<p>on the frontend side, I have</p>
<pre><code>const [msg, setMsg] = useState('');
useEffect(() => {
fetch('/msgs/getMsg')
.then(response => response.text())
.then(m => {
// console.log(JSON.stringify(m))
setMsg(m)
});
});
return <div>{msg}</div>
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM node:10-alpine as builder
COPY package.json package-lock.json ./
RUN npm install && mkdir /react-ui && mv ./node_modules ./react-ui
WORKDIR /react-ui
COPY . .
# Build the project and copy the files
RUN npm run build
FROM nginx:alpine
#!/bin/sh
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /react-ui/build /usr/share/nginx/html
EXPOSE 3000 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
</code></pre>
<p>nginx.conf:</p>
<pre><code>worker_processes 5; ## Default: 1
worker_rlimit_nofile 8192;
events {
worker_connections 4096; ## Default: 1024
}
http {
upstream client {
}
server {
listen 80;
root /usr/share/nginx/html;
index index.html index.htm;
include /etc/nginx/mime.types;
gzip on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
location / {
try_files $uri $uri/ /index.html;
}
location /msgs {
proxy_pass http://server-demo:4420;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
}
}
</code></pre>
<p>and deployment yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontend-service
labels:
app: frontend-service
spec:
ports:
- name: http
port: 80
targetPort: 80
selector:
app: frontend-service
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-service
namespace: default
labels:
app: frontend-service
spec:
replicas: 1
selector:
matchLabels:
app: frontend-service
template:
metadata:
labels:
app: frontend-service
spec:
containers:
- name: frontend-service
image: gcr.io/gcp-kub-course/frontend-service:latest
imagePullPolicy: "Always"
ports:
- name: http
containerPort: 80
</code></pre>
<p>when looking at the services:</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend-service LoadBalancer 10.24.15.122 34.121.100.70 80:32506/TCP 57m
kubernetes ClusterIP 10.24.0.1 <none> 443/TCP 25h
server-demo ClusterIP 10.24.4.49 <none> 4420/TCP 57m
</code></pre>
<p>when looking at the pods:</p>
<pre><code>frontend-service-bf9b4ccfd-jcjvm 1/1 Running 0 58m
server-demo-84df7f57c6-blgxq 1/1 Running 0 58m
</code></pre>
<p>and finally when looking at the frontend-service logs I see:</p>
<pre><code>2020/08/23 16:05:11 [error] 6#6: *28 connect() failed (111: Connection refused) while connecting to upstream, client: 10.128.0.7, server: , request: "GET /msgs/getMsg HTTP/1.1", upstream: "http://10.24.4.49:4420/msgs/getMsg", host: "34.121.100.70", referrer: "http://34.121.100.70/"
10.128.0.7 - - [23/Aug/2020:16:05:11 +0000] "GET /msgs/getMsg HTTP/1.1" 502 559 "http://34.121.100.70/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
</code></pre>
<p>I can see that the nginx did its job since its proxying the GET /msgs/getMsg to 10.24.4.49 which is demo-server IP, they are both on default namespace and I'm lost</p>
| Lior Derei | <p>The second I posted it I saw my mistake,
at the end of the server-demo deployment YAML:</p>
<p>I had misconfigured:</p>
<pre><code>selector:
app: hello
</code></pre>
<p>which had to be:</p>
<pre><code>selector:
app: server-demo
</code></pre>
| Lior Derei |
<p>I have two deployment phases, <code>dev</code> and <code>prod</code>, and they can be distinguished via <code>.Values.profile</code>.</p>
<p>In dev, the below resource limit is enough.</p>
<pre><code>cpu: 4
memory: 4Gi
ephemeral-storage: 4Gi
</code></pre>
<p>However, I have to increase it as below while using <code>prod</code> phase.</p>
<pre><code>cpu: 8
memory: 16Gi
ephemeral-storage: 16Gi
</code></pre>
<p>I tried to write my <code>deployment.yaml</code> as follows, and it works.</p>
<pre><code>name : my-app
...
resources:
limits:
{{ if (eq .Values.profile "dev")}}
cpu: 4
memory: 4Gi
ephemeral-storage: 4Gi
{{ end }}
{{ if (eq .Values.profile "prod")}}
cpu: 8
memory: 16Gi
ephemeral-storage: 16Gi
{{end}}
</code></pre>
<p>However, I wondered what is the best practice to separate container resources between two phases.</p>
<p>Thank you!</p>
| mozzi | <p>Create a separate values.yaml file for each environment,
e.g. values_dev.yaml and values_prod.yaml.
Put the environment-specific values in the corresponding YAML files.
Based on the environment being deployed to, pass the corresponding values file as an argument to the helm install command.</p>
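<p>A minimal sketch of that approach, reusing the limits from the question (file names are only a convention), could be:</p>
<pre><code># values_dev.yaml
resources:
  limits:
    cpu: 4
    memory: 4Gi
    ephemeral-storage: 4Gi

# values_prod.yaml
resources:
  limits:
    cpu: 8
    memory: 16Gi
    ephemeral-storage: 16Gi

# deployment.yaml (template side; the nindent value depends on the nesting level)
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
</code></pre>
<p>Then install or upgrade per environment, e.g. <code>helm install my-app . -f values_dev.yaml</code> or <code>helm upgrade my-app . -f values_prod.yaml</code>.</p>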
| Prince Arora |
<p>Recently I got this warning message when I built a <code>quarkus maven</code> project. I have tested with several later versions and I think this has to be something local in my environment.</p>
<pre><code>[WARNING] Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring
</code></pre>
<p>And this result in a build failure ...</p>
<pre><code>[error]: Build step io.quarkus.kubernetes.deployment.KubernetesDeployer#deploy threw an exception: java.lang.RuntimeException:
</code></pre>
<p>Although Kubernetes deployment was requested, it however cannot take place, because there was an error during communication with the API Server at <code>https://kubernetes.default.svc/</code></p>
<p>Any ideas what could be wrong?</p>
| Tapani Rundgren | <p>Like <a href="https://stackoverflow.com/users/11023331/tapani-rundgren">Tapani Rundgren</a> mentioned in the comments, the solution is to export the variables:</p>
<pre><code>export KUBERNETES_MASTER=<your server here>
export KUBERNETES_NAMESPACE=<your namspace here>
</code></pre>
| Mikołaj Głodziak |
<p>Is there a possibility to configure all the unbound options listed <a href="https://linux.die.net/man/5/unbound.conf" rel="nofollow noreferrer">here</a> in a similar way in the Kubernetes CoreDNS 'Corefile' configuration? Only a few options are listed <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">here</a>. I am looking for the below server options from the unbound conf to be configured in the Kubernetes Corefile CoreDNS ConfigMap.</p>
<ol>
<li>do-ip6</li>
<li>verbosity</li>
<li>outgoing-port-avoid, outgoing-port-permit</li>
<li>domain-insecure</li>
<li>access-control</li>
<li>local-zone</li>
</ol>
<p>I need to do above configurations similarly in kubernetes Corefile configuration. As I am new to kubernetes coredns, I am not sure whether these configurations are possible in Coredns. Can someone direct me how to do that? Also I am looking for steps on how to configure this in Corefile configmap using helm. It would be really helpful if I get some information on this. Thanks in advance!!!</p>
| anonymous user | <p><code>CoreDNS</code> supports some requested features via <a href="https://coredns.io/plugins/" rel="nofollow noreferrer"><code>plugins</code></a>:</p>
<ul>
<li><code>do-ip6</code> - CoreDNS works with ipv6 by default (if cluster is dual-stack)</li>
<li><code>verbosity</code> - <a href="https://coredns.io/plugins/log/" rel="nofollow noreferrer"><code>log</code></a> plugin will show more details about queries, it can have different format and what it shows (success, denial, errors, everything)</li>
<li><code>outgoing-port-avoid, outgoing-port-permit</code> - did not find any support of this</li>
<li><code>domain-insecure</code> - please check if <a href="https://coredns.io/plugins/dnssec/" rel="nofollow noreferrer"><code>dnssec</code></a> can help (It looks similar to what <code>unbound</code> has, but I'm not really familiar with it).</li>
<li><code>access-control</code> - <a href="https://coredns.io/plugins/acl/" rel="nofollow noreferrer"><code>acl</code></a> plugin does it.</li>
<li><code>local-zone</code> - <a href="https://coredns.io/plugins/local/" rel="nofollow noreferrer"><code>local</code></a> plugin can be tried for this purpose, it doesn't have lots of options though.</li>
</ul>
<p>Bonus point:</p>
<ul>
<li>CoreDNS config's change - <a href="https://coredns.io/plugins/reload/" rel="nofollow noreferrer"><code>reload</code></a> allows automatic reload of a changed Corefile.</li>
</ul>
<p>All mentioned above plugins have syntax and examples on their pages.</p>
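<p>Just as an illustration (syntax taken from the plugin pages above, CIDRs are placeholders), a Corefile in the coredns ConfigMap combining the default plugins with <code>log</code> and <code>acl</code> could look roughly like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log                              # query logging (verbosity-like)
        acl {                            # access-control-like rules
            allow net 10.0.0.0/8 192.168.0.0/16
            block
        }
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        reload                           # pick up Corefile changes automatically
        loop
        loadbalance
    }
</code></pre>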
| moonkotte |
<p>What is the algorithm that a Kubernetes service uses to assign requests to the pods that it exposes? Can this algorithm be customized?</p>
<p>Thanks.</p>
| Mazen Ezzeddine | <p>You can use a component <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-proxy" rel="nofollow noreferrer"><code>kube-proxy</code></a>. What is it?</p>
<blockquote>
<p>kube-proxy is a network proxy that runs on each <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">node</a> in your cluster, implementing part of the Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> concept.
<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">kube-proxy</a> maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.</p>
</blockquote>
<p>But why use a proxy when there is a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#why-not-use-round-robin-dns" rel="nofollow noreferrer">round-robin DNS algorithm</a>? There are a few reasons for using proxying for Services:</p>
<blockquote>
<ul>
<li>There is a long history of DNS implementations not respecting record TTLs, and caching the results of name lookups after they should have expired.</li>
<li>Some apps do DNS lookups only once and cache the results indefinitely.</li>
<li>Even if apps and libraries did proper re-resolution, the low or zero TTLs on the DNS records could impose a high load on DNS that then becomes difficult to manage.</li>
</ul>
</blockquote>
<p><code>kube-proxy</code> has many modes:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace" rel="nofollow noreferrer"><strong>User space proxy mode</strong></a> - In the userspace mode, the iptables rule forwards to a local port where a go binary (kube-proxy) is listening for connections. The binary (running in userspace) terminates the connection, establishes a new connection to a backend for the service, and then forwards requests to the backend and responses back to the local process. An advantage of the userspace mode is that because the connections are created from an application, if the connection is refused, the application can retry to a different backend</li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer"><strong>Iptables proxy mode</strong></a> - In iptables mode, the iptables rules are installed to directly forward packets that are destined for a service to a backend for the service. This is more efficient than moving the packets from the kernel to kube-proxy and then back to the kernel so it results in higher throughput and better tail latency. The main downside is that it is more difficult to debug, because instead of a local binary that writes a log to <code>/var/log/kube-proxy</code> you have to inspect logs from the kernel processing iptables rules.</li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer"><strong>IPVS proxy mode</strong></a> - IPVS is a Linux kernel feature that is specifically designed for load balancing. In IPVS mode, kube-proxy programs the IPVS load balancer instead of using iptables. This works, it also uses a mature kernel feature and IPVS is <em>designed</em> for load balancing lots of services; it has an optimized API and an optimized look-up routine rather than a list of sequential rules.</li>
</ul>
<p>You can read more <a href="https://stackoverflow.com/questions/36088224/what-does-userspace-mode-means-in-kube-proxys-proxy-mode">here</a> - good question about proxy mode on StackOverflow, <a href="https://www.tigera.io/blog/comparing-kube-proxy-modes-iptables-or-ipvs/" rel="nofollow noreferrer">here</a> - comparing proxy modes and <a href="https://www.stackrox.com/post/2020/01/kubernetes-networking-demystified/" rel="nofollow noreferrer">here</a> - good article about proxy modes.</p>
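<p>The mode itself is selected in the kube-proxy configuration; for example, on a kubeadm-based cluster it can be switched to IPVS roughly like this (the kube-proxy pods have to be restarted afterwards):</p>
<pre><code>kubectl -n kube-system edit configmap kube-proxy
# in the KubeProxyConfiguration section set:
#   mode: "ipvs"

kubectl -n kube-system rollout restart daemonset kube-proxy
</code></pre>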
<hr />
<p>Like <a href="https://stackoverflow.com/users/4551228/rohatgisanat">rohatgisanat</a> mentioned in his answer you can also use <a href="https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh" rel="nofollow noreferrer">service mesh</a>. Here is also good article about Kubernetes service mesh <a href="https://www.toptal.com/kubernetes/service-mesh-comparison" rel="nofollow noreferrer">comparsion</a>.</p>
| Mikołaj Głodziak |
<p>How do I delete the <code>ingress</code> given below from the default namespace?</p>
<pre><code># kubectl get ingress
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
abc-ingress <none> app.company.com 80 24h
</code></pre>
<pre><code>kubectl describe ingress jenkins-ingress
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: abc-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
abc.company.com
/ app-svc:80 (<error: endpoints "app-svc" not found>)
Annotations: <none>
Events: <none>
</code></pre>
<p>I tried to delete it using the command below, but it doesn't work. Please suggest how to delete it.</p>
<pre><code># kubectl delete abc-ingress -n default
error: the server doesn't have a resource type "abc-ingress"
</code></pre>
| user4948798 | <p>If the default namespace for your user in your kubeconfig is set to <code>default</code>, you can use:</p>
<pre><code>kubectl delete ingress abc-ingress
</code></pre>
<p>but if your default namespace is different, you should pass the namespace name explicitly:</p>
<pre><code>kubectl delete ingress abc-ingress -n default
</code></pre>
| Ali Rezvani |
<p>I am quite new to kubernetes and when i check the kubernetes version with kubectl version I am getting the following error.</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: dial tcp 127.0.0.1:6443: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>Can you please help me with this?</p>
<p>Thanks in advance!</p>
| lse23 | <p>It seems you have set up Kubernetes via kubeadm. Please verify whether your control plane is actually running. Note that kubeadm itself is not a systemd service; the component that starts the control-plane containers is the kubelet, so restart that instead:</p>
<pre><code>systemctl restart kubelet
</code></pre>
<p>After that check if port is listening or not</p>
<pre><code>netstat -an | grep :6443
</code></pre>
| Geralt of Ravia |
<p>Usage of the same TCP port for Rabbitmq 5672 and transfer requests to different <code>namespaces/rabbitmq_service</code> based on the host-based routing.</p>
<p>What works:</p>
<pre class="lang-yaml prettyprint-override"><code>chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
5672: "cust1namespace/rabbitmq:5672"
</code></pre>
<p>Block reflected in nginx.conf:</p>
<pre><code>server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-cust1namespace-services-rabbitmq-5672";
}
listen :5672;
proxy_pass upstream_balancer;
}
</code></pre>
<p>Note: this will transfer all the requests coming to port 5672 to <code>cust1namespace/rabbitmq:5672</code>, irrespective of the client domain name and we want host-based routing based on domain name.</p>
<p>What is expected:</p>
<pre><code>chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
cust1domainname:5672: "cust1namespace/rabbitmq:5672"
cust2domainname:5672: "cust2namespace/rabbitmq:5672"
</code></pre>
<p>Error:</p>
<pre><code>Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Service.spec.ports[3].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer", ValidationError(Service.spec.ports[4].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer"]
</code></pre>
<p>The final nginx.conf should look like:</p>
<pre><code>server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-cust1namespace-services-rabbitmq-5672";
}
listen cust1domainname:5672;
proxy_pass upstream_balancer;
}
server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-cust2namespace-services-rabbitmq-5672";
}
listen cust2domainname:5672;
proxy_pass upstream_balancer;
}
</code></pre>
| mayank arora | <h2>A bit of theory</h2>
<p>The approach you're trying to implement is not possible because of how the network protocols involved are implemented and the differences between them.</p>
<p>The <code>TCP</code> protocol works on the transport layer; it has source and destination IPs and ports, but it does <strong>not</strong> carry any host information. In turn, the <code>HTTP</code> protocol works on the application layer, which sits on top of <code>TCP</code>, and it does carry information about the host the request is intended for.</p>
<p>Please get familiar with <a href="https://docs.oracle.com/cd/E19683-01/806-4075/ipov-10/index.html" rel="nofollow noreferrer">OSI model and protocols which works on these levels</a>. This will help to avoid any confusion why this works this way and no other.</p>
<p>Also there's a <a href="https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol/answer/Daniel-Miller-7?srid=nZLo" rel="nofollow noreferrer">good answer on quora about difference between HTTP and TCP protocols</a>.</p>
<h2>Answer</h2>
<p>At this point you have two options:</p>
<ol>
<li>Use ingress to work on the application layer and let it direct traffic to services based on the host presented in the request (the HTTP <code>Host</code> header). All traffic should go through the ingress endpoint (usually a load balancer which is exposed outside of the cluster); see the sketch after the example links below.</li>
</ol>
<p>Please find examples with</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">two paths and services behind them</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting" rel="nofollow noreferrer">two different hosts and services behind them</a></li>
</ul>
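<p>For illustration, a name-based virtual hosting ingress in the spirit of those examples could look like the sketch below (host names, service names and port are assumptions; note it only applies to HTTP traffic, and an ingress can only reference services in its own namespace):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: per-customer-ingress
  namespace: cust1namespace        # one ingress per customer namespace
spec:
  rules:
  - host: cust1domainname          # routing decision is taken from the HTTP Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rabbitmq-management   # an HTTP service, e.g. the management UI, not AMQP itself
            port:
              number: 15672
</code></pre>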
<ol start="2">
<li>Use ingress to work on transport layer and expose separate TCP ports for each service/customer. In this case traffic will be passed through ingress directly to services.</li>
</ol>
<p>Based on your example it will look like:</p>
<pre><code>chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
5672: "cust1namespace/rabbitmq:5672" # port 5672 for customer 1
5673: "cust2namespace/rabbitmq:5672" # port 5673 for customer 2
...
</code></pre>
| moonkotte |
<p>I have the ssl certificate zip file and the <code>privatekey.key</code> file. In total I have the certificate file <code>.crt</code> and another <code>.crt</code> with the name <code>bundle.crt</code> and a <code>.pem</code> file along with the private key with an extension <code>.key</code>.</p>
<p>Now I am trying to use it to create a secret in istio using these files. I am able to create a secret with these files (<code>thecertificate.cert</code> and the <code>privatekey.key</code>, not using the <code>.pem</code> and <code>bundle.cert</code> files), but then when I use it in my istio ingress gateway configuration and test it, I get an error in Postman:</p>
<pre><code>SSL Error: Unable to verify the first certificate.
</code></pre>
<p>Here are the details:</p>
<pre><code># kubectl create -n istio-system secret tls dibbler-certificate --key=privatekey.key --cert=thecertificate.crt
# kubectl get secrets -n istio-system
</code></pre>
<p>output: <strong>dibbler-certificate</strong></p>
<p>gateway:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: dibbler-gateway
spec:
selector:
istio: ingressgateway
servers:
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
# serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
# privateKey: /etc/istio/ingressgateway-certs/tls.key
credentialName: dibbler-certificate
hosts:
- "test.ht.io" # domain name goes here
</code></pre>
<p>Any help is appreciated. Thanks</p>
| test test | <p>Your config files look good. I have found a very similar problem on <a href="https://discuss.istio.io/t/postman-ssl-error-unable-to-verify-the-first-certificate/12471" rel="nofollow noreferrer">discuss.istio.io</a>. The problem is resolved by the following:</p>
<blockquote>
<p>Two servers was an error too but the important thing is I had to concatenate the godaddy ssl certificate.crt & the bundle.crt and then used the private key to create a secret. Now it’s workng fine.</p>
</blockquote>
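<p>For reference, the concatenation and secret re-creation could look roughly like this (file names are assumed based on the question; the server certificate must come first in the chain):</p>
<pre><code># build the full chain: server certificate first, then the intermediate/CA bundle
cat thecertificate.crt bundle.crt > fullchain.crt

# recreate the secret referenced by credentialName
kubectl delete -n istio-system secret dibbler-certificate
kubectl create -n istio-system secret tls dibbler-certificate \
  --key=privatekey.key --cert=fullchain.crt
</code></pre>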
<p>You can also see <a href="https://community.postman.com/t/unable-to-verify-first-cert-issue-enable-ssl-cert-verification-off/14951/5" rel="nofollow noreferrer">this postman page</a>.</p>
| Mikołaj Głodziak |
<p>I am trying to convert docker-compose.yaml Keycloak to Char values, I'm stuck with this a bit:</p>
<p>Docker-compose config looks like this:</p>
<pre><code> keycloak:
container_name: keycloak
image: jboss/keycloak:10.0.0
hostname: keycloak
command:
[
'-b',
'0.0.0.0',
'-Djboss.socket.binding.port-offset=1000',
'-Dkeycloak.migration.action=import',
'-Dkeycloak.migration.provider=dir',
'-Dkeycloak.migration.dir=/keycloak',
'-Dkeycloak.migration.strategy=IGNORE_EXISTING',
]
volumes:
- ./keycloak:/realm-config
environment:
KEYCLOAK_USER: [email protected]
KEYCLOAK_PASSWORD: password
networks:
keycloak:
aliases:
- keycloak.localtest.me
ports:
- 9080:9080/tcp
</code></pre>
<p>What I'm trying to do with Chart values:</p>
<pre><code>keycloak:
basepath: auth
username: admin
password: password
route:
tls:
enabled: false
extraEnv: |
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: KEYCLOAK_IMPORT
value: /keycloak/master-realm.json
- name: JAVA_OPTS
value: >-
-Djboss.socket.binding.port-offset=1000
extraVolumes: |
- name: realm-secret
secret:
secretName: realm-secret
extraVolumeMounts: |
- name: realm-secret
mountPath: "../keycloak/"
readOnly: true
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
path: /auth/?(.*)
hosts:
- keycloak.localtest.me
</code></pre>
<p>I don't quite understand where to put this from docker-compose.yaml:</p>
<pre><code> command:
[
'-b',
'0.0.0.0',
'-Djboss.socket.binding.port-offset=1000',
'-Dkeycloak.migration.action=import',
'-Dkeycloak.migration.provider=dir',
'-Dkeycloak.migration.dir=/realm-config',
'-Dkeycloak.migration.strategy=IGNORE_EXISTING',
]
</code></pre>
<p>P.S I'm trying to run a k8s example for <a href="https://github.com/oauth2-proxy/oauth2-proxy/tree/master/contrib/local-environment" rel="nofollow noreferrer">https://github.com/oauth2-proxy/oauth2-proxy/tree/master/contrib/local-environment</a>
There they have k8s demo with Dex, and I want to adapt it with Keycloak.</p>
| xeLL | <p>You can use the kompose tool to convert a docker-compose file directly into Kubernetes manifests. If you want to make a Helm chart, turn those manifests into templates and move the configurable parts into the chart values. Also, a Kubernetes Deployment has a <strong>command</strong> field in its container spec.</p>
<p>As you can see in github.com/codecentric/helm-charts/blob/master/charts/keycloak/… the <strong>command</strong> is set from <code>.Values.command</code>.</p>
<p>So in the file github.com/codecentric/helm-charts/blob/master/charts/keycloak/… replace <strong>command: []</strong> with your <strong>docker-compose command</strong>.</p>
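<p>A rough sketch of what that could look like in the chart's values file (treat this as an assumption: the exact key and whether the image entrypoint has to be repeated depend on the chart and image version):</p>
<pre><code># values.yaml for the Keycloak chart
command:
  - "/opt/jboss/tools/docker-entrypoint.sh"   # include the image entrypoint if `command` overrides it
  - "-b"
  - "0.0.0.0"
  - "-Djboss.socket.binding.port-offset=1000"
  - "-Dkeycloak.migration.action=import"
  - "-Dkeycloak.migration.provider=dir"
  - "-Dkeycloak.migration.dir=/realm-config"
  - "-Dkeycloak.migration.strategy=IGNORE_EXISTING"
</code></pre>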
| Saurabh Nigam |
<p>I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.</p>
<p>I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.</p>
<p>What is actually happening is that from my client's perspective, currently when the upstream server app closes the connection, my connection is closed and I have to reconnect.</p>
<p>The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:</p>
<pre><code>
server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-my-namespace-my-service-7550";
}
listen 7550;
proxy_timeout 600s;
proxy_next_upstream on;
proxy_next_upstream_timeout 600s;
proxy_next_upstream_tries 3;
proxy_pass upstream_balancer;
}
</code></pre>
<p>Any help at all is greatly appreciated and I'm happy to provide more info.</p>
| biscuit_cakes | <p>Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.</p>
<p>I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.</p>
| biscuit_cakes |
<p>Does the Kubernetes scheduler assign the pods to the nodes one by one in a queue (not in parallel)?</p>
<p>Based on <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/#how-the-scheduler-iterates-over-nodes" rel="nofollow noreferrer">this</a>, I guess that might be the case since it is mentioned that the nodes are iterated round robin.</p>
<p>I want to make sure that the pod scheduling is not being done in parallel.</p>
| Saeid Ghafouri | <h2>Short answer</h2>
<p>Taking into consideration all the processes <code>kube-scheduler</code> performs when it's scheduling the pod, the answer is <strong>yes</strong>.</p>
<h2>Scheduler and pods</h2>
<blockquote>
<p>For <strong>every newly created pod or other unscheduled pods</strong>, kube-scheduler
selects an optimal node for them to run on. However, <strong>every container</strong>
<strong>in pods</strong> has different requirements for resources and every pod also
has different requirements. Therefore, existing nodes need to be
filtered according to the specific scheduling requirements.</p>
<p>In a cluster, Nodes that meet the scheduling requirements for a Pod
are called feasible nodes. If none of the nodes are suitable, the pod
remains unscheduled until the scheduler is able to place it.</p>
<p>The scheduler finds feasible Nodes for a Pod and then runs a set of
functions to score the feasible Nodes and picks a Node with the
highest score among the feasible ones to run the Pod. The scheduler
then notifies the API server about this decision in a process called
binding.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler" rel="nofollow noreferrer">Reference - kube-scheduler</a>.</p>
<blockquote>
<p>The scheduler determines which Nodes are valid placements for <strong>each Pod
in the scheduling queue</strong> according to constraints and available
resources.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/#synopsis" rel="nofollow noreferrer">Reference - kube-scheduler - synopsis</a>.</p>
<p>In short, <code>kube-scheduler</code> picks up pods one by one, assesses them and their requests, and then finds appropriate <code>feasible</code> nodes to schedule the pods on.</p>
<h2>Scheduler and nodes</h2>
<p>The link you mentioned is about how the scheduler iterates over nodes, to give all <code>feasible</code> nodes a fair chance to run pods.</p>
<blockquote>
<p>Nodes in a cluster that meet the scheduling requirements of a Pod are
called feasible Nodes for the Pod</p>
</blockquote>
<p>The information here relates to the default <code>kube-scheduler</code>; there are alternative schedulers that can be used, and it is even possible to implement your own. It's also possible to run <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" rel="nofollow noreferrer">multiple schedulers in a cluster</a>.</p>
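<p>For completeness, a pod opts into a particular scheduler via <code>spec.schedulerName</code> (the scheduler name below is an assumption):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom-scheduler
spec:
  schedulerName: my-custom-scheduler   # omit this field to use the default kube-scheduler
  containers:
  - name: app
    image: nginx
</code></pre>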
<h2>Useful links:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation" rel="nofollow noreferrer">Node selection in kube-scheduler</a></li>
<li><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler" rel="nofollow noreferrer">Kubernetes scheduler</a></li>
</ul>
| moonkotte |
<p>Have successfully implemented Vault with Kubernetes and applications running in K8s are getting their environment variables from Hashicorp vault. Everything is great! But, want to take a step forward and want to restart the pod whenever a change is made to the secret in the Vault, as of now, we have to restart the pod manually to reset environment variables whenever we make changes to Vault secret. How this can be achieved? Have heard about confd but not sure how it can be implemented!</p>
| AshitAcharya | <p>Use Reloader <a href="https://github.com/stakater/Reloader" rel="noreferrer">https://github.com/stakater/Reloader</a>. We found it quite useful in our cluster. It does a rolling update, hence you can change config with zero downtime too. Also, if you made some errors in the configmap, you can easily roll back.</p>
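<p>A minimal sketch of wiring it up, assuming the Vault secret is materialised as a regular Kubernetes Secret that the Deployment consumes (names are assumptions):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # rolling-update this Deployment whenever the named Secret changes;
    # alternatively use reloader.stakater.com/auto: "true" to watch everything it references
    secret.reloader.stakater.com/reload: "my-vault-synced-secret"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        envFrom:
        - secretRef:
            name: my-vault-synced-secret
</code></pre>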
| Saurabh Nigam |
<p>I have a simple ingress configuration file-</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
name: tut-ingress
namespace: default
spec:
rules:
- host: tutorial.com
http:
paths:
- path: /link1/
pathType: Prefix
backend:
service:
name: nginx-ingress-tut-service
port:
number: 8080
</code></pre>
<p>in which requests coming to <code>/link1</code> or <code>/link1/</code> are rewritten to
<code>/link2/link3/</code>.
When I access it using <code>http://tutorial.com/link1/</code>
I am shown the correct result but when I access it using
<code>http://tutorial.com/link1</code>, I get a 404 not found.
The <code>nginx-ingress-tut-service</code> has the following endpoints-</p>
<ul>
<li><code>/</code></li>
<li><code>/link1</code></li>
<li><code>/link2/link3</code></li>
</ul>
<p>I am a beginner in the web domain, any help will be appreciated.</p>
<p>When I change it to-</p>
<pre class="lang-yaml prettyprint-override"><code>- path: /link1
</code></pre>
<p>it starts working fine, but can anybody tell why is it not working with <code>/link1/</code>.</p>
<p>Helpful resources -
<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#examples" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#examples</a></p>
<p><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p>
<p>Edit- Please also explain what happens when you write a full HTTP link in
<code>nginx.ingress.kubernetes.io/rewrite-target</code></p>
| BeastMaster64 | <p>The answer is posted in the comment:</p>
<blockquote>
<p>Well, <code>/link1/</code> is not a prefix of <code>/link1</code> because a prefix must be the same length as or shorter than the target string</p>
</blockquote>
<p>If you have</p>
<pre class="lang-yaml prettyprint-override"><code>- path: /link1/
</code></pre>
<p>the string to match must have a <code>/</code> character at that position in the path; this behaviour is correct. In this situation, if you try to access the link <code>http://tutorial.com/link1</code> you will get a 404 error, because the ingress was expecting <code>http://tutorial.com/link1/</code>.</p>
<p>For more you can see <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#examples" rel="nofollow noreferrer">examples of rewrite rule</a> and documentation about <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">path types</a>:</p>
<blockquote>
<p>Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit <code>pathType</code> will fail validation. There are three supported path types:</p>
<ul>
<li><p><code>ImplementationSpecific</code>: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate <code>pathType</code> or treat it identically to <code>Prefix</code> or <code>Exact</code> path types.</p>
</li>
<li><p><code>Exact</code>: Matches the URL path exactly and with case sensitivity.</p>
</li>
<li><p><strong><code>Prefix</code>: Matches based on a URL path prefix split by <code>/</code>. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the <code>/</code> separator. A request is a match for path <em>p</em> if every <em>p</em> is an element-wise prefix of <em>p</em> of the request path.</strong></p>
</li>
</ul>
</blockquote>
<p><strong>EDIT:</strong>
Based on the documentation this should work, but it looks like there is a <a href="https://github.com/kubernetes/ingress-nginx/issues/8047" rel="nofollow noreferrer">fresh problem with nginx ingress</a>. The problem is still unresolved. You can use the workaround posted in <a href="https://github.com/kubernetes/ingress-nginx/issues/646" rel="nofollow noreferrer">this topic</a> or try to change your path to something similar to this:</p>
<pre><code>- path: /link1(/|$)
</code></pre>
| Mikołaj Głodziak |
<p>In Kubernetes we can use a filter to limit the output to the resources we're interested in, and I wonder whether it is possible to list the top 5 most recently created pods using only a filter.</p>
<p>The current example mostly list all pods and pipe to another unix (<code>head</code> command)</p>
<pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp | head -n 5
</code></pre>
<p>But I guess it takes quite a long time to get everything from the server side first and then list the first 5.</p>
<p>Can I use a special filter to make it more efficient?</p>
| Larry Cai | <p>There are several aspects which prevent you to solve this question using only <code>filter</code>:</p>
<ol>
<li><p><code>Filter</code> itself:</p>
<blockquote>
<p>Field selectors are essentially resource filters. By default, no
selectors/filters are applied, meaning that all resources of the
specified type are selected. This makes the kubectl queries kubectl
get pods and kubectl get pods --field-selector "" equivalent.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">Reference - Field selectors</a>.</p>
<p>And its limitations on <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/#supported-operators" rel="nofollow noreferrer">supported operations</a>:</p>
<blockquote>
<p>You can use the =, ==, and != operators with field selectors (= and ==
mean the same thing). This kubectl command, for example, selects all
Kubernetes Services that aren't in the default namespace:</p>
<p>kubectl get services --all-namespaces --field-selector
metadata.namespace!=default</p>
</blockquote>
<p>It can't compare values with <code>></code> or <code><</code>, even if the total number of pods were available.</p>
</li>
<li><p>I compared requests with <code>--v=8</code> to see which exact response is performed when <code>kubectl get pods</code> is executed with different options:</p>
<pre><code>$ kubectl get pods -A --sort-by=.metadata.creationTimestamp --v=8
I1007 12:40:45.296727 562 round_trippers.go:432] GET https://10.186.0.2:6443/api/v1/pods?includeObject=Object
</code></pre>
<p>and</p>
<pre><code>$ kubectl get pods -A --field-selector=metadata.namespace=kube-system --v=8
I1007 12:41:42.873609 1067 round_trippers.go:432] GET https://10.186.0.2:6443/api/v1/pods?fieldSelector=metadata.namespace%3Dkube-system&limit=500
</code></pre>
<p>The difference is that when <code>--field-selector</code> is used, <code>kubectl</code> adds <code>&limit=500</code> to the request, which is a point where some data inconsistency can appear when using <code>kubectl</code> from the terminal, while <code>--sort-by</code> gets all data from the <code>api server</code> to the client.</p>
</li>
<li><p>Using <code>-o jsonpath</code> works the same way as regular <code>kubectl get pods</code> request and has again the limit of 500 results which may lead to data inconsistency:</p>
<pre><code>$ kubectl get pods -A --v=7 -o jsonpath='{range.items[*]}{.metadata.creationTimestamp}{"\t"}{.metadata.name}{"\n"}{end}'
I1007 12:52:25.505109 6943 round_trippers.go:432] GET https://10.186.0.2:6443/api/v1/pods?limit=500
</code></pre>
<p>Also, even the developers use other Linux commands (<code>jq</code>, <code>grep</code>, <code>sort</code>, <code>|</code>) to work with the initial results fetched from the Kubernetes API. See <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources" rel="nofollow noreferrer">examples of Viewing, finding resources</a>.</p>
</li>
</ol>
<p>So to confirm <a href="https://stackoverflow.com/questions/69408472/how-to-get-top-n-latest-created-pods-inside-kubernetes-based-on-filter#comment122686547_69408472">@P...'s comment</a>, you will need to get data first to the client and only then work on it.</p>
| moonkotte |
<p>I am taking a course on Udemy and I am new to the world of <strong>Kubernetes</strong>. I am trying to configure the <strong>ingress nginx controller</strong> in Kubernetes, but it returns <strong>404 not found</strong> when I send a request to the specified URL. I have been trying to fix this for 10 days; I've looked at similar questions, but none of their answers work for me. I am also using <strong>Skaffold</strong> to build/deploy the image to Docker Hub automatically when I change something in the files.</p>
<p><a href="https://i.stack.imgur.com/U0WyJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U0WyJ.png" alt="404 not found screenshot" /></a></p>
<p>My <strong>express app</strong> server:</p>
<pre><code>app.get('/api/users/currentuser', (req, res) => {
res.send('Hi there');
});
app.listen(3000, () => {
console.log('[Auth] - Listening on port 3000');
});
</code></pre>
<p><strong>ingress-srv.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticketing.com
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
</code></pre>
<p><strong>auth-depl.yaml</strong> (Auth deployment & srv)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: myusername/auth:latest
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
type: ClusterIP
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p><strong>skaffold.yaml</strong> file:</p>
<pre><code>apiVersion: skaffold/v2beta25
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: username/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
</code></pre>
<p><strong>Dockerfile</strong>:</p>
<pre><code>FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
<p>I also executed command from <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">NGINX Ingress Controller docs</a>:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>I also changed hosts.file in the system:
127.0.0.1 ticketing.com</p>
<p>Logs:</p>
<p><strong>kubectl get pods</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
auth-depl-5f89899d9f-wtc94 1/1 Running 0 6h33m
</code></pre>
<p><strong>kubectl get svc</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-srv ClusterIP 10.96.23.71 <none> 3000/TCP 23h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
</code></pre>
<p><strong>kubectl get pods --namespace=ingress-nginx</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-7fm56 0/1 Completed 0 23h
ingress-nginx-admission-patch-5vflr 0/1 Completed 1 23h
ingress-nginx-controller-5c8d66c76d-89zhp 1/1 Running 0 23h
</code></pre>
<p><strong>kubectl get ing</strong></p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-srv <none> ticketing.com localhost 80 23h
</code></pre>
<p><strong>kubectl describe ing ingress-srv</strong></p>
<pre><code>Name: ingress-srv
Namespace: default
Address: localhost
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
ticketing.com
/api/users/?(.*) auth-srv:3000 (10.1.0.10:3000)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 22m (x18 over 23h) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>Could there be a problem with the Windows IIS web server? since I previously configured something for another project, and in the screenshot above I see:</p>
<pre><code>Requested URL http://ticketing.com:80/api/users/currentuser
Physical Path C:\inetpub\wwwroot\api\users\currentuser
</code></pre>
<p>Also the screenshot shows the port <strong>:80</strong> at the requested URL but I have the server port 3000? + when i request at <strong>https</strong> it returns:</p>
<pre><code>502 Bad Gateway
nginx
</code></pre>
<p>also C:\inetpub\wwwroot is strange to me.</p>
<p>Any ideas would help me a lot with continuing the course.</p>
| sycamore55 | <p>After a few days of research I finally solved the problem. It was caused by the IIS web server, which I had enabled when I was working on an ASP.NET Core project; I uninstalled it and the problem was solved.</p>
<p>How to uninstall IIS from Windows 10:</p>
<ul>
<li>Go to Control Panel > Programs and Features</li>
<li>Click Turn Windows features on or off</li>
<li>Scroll down to Internet Information Services</li>
<li>Click on the square next to Internet Information Services so it becomes empty</li>
<li>Click OK and restart the PC (required).</li>
</ul>
| sycamore55 |
<p>We are running our EKS node groups on AWS Spot Instances. In order to make sure that the services don't go down at any point of time we would like to run 1 pod of each service on a separate node group which uses reserved instances. Is there any way we can configure the deployments so that 1 pod runs on reserved node group and rest on the spot instances node group? Currently we are using node selector to match the label to decide on which node group the service has to run. Is there any way we can use the labels of two separate node groups(reserved and spot) in node selector and specify the weights to divide the load?</p>
| Wizard | <p>I couldn't find a way to do that in a single deployment, but achieved the same result using two separate deployments: one deploys to the spot node group and the other to the on-demand node group, while the service uses the pods from both.</p>
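<p>A rough sketch of that setup (labels, node group label and image are assumptions; both Deployments carry the same <code>app</code> label, so a single Service selects pods from either capacity type):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-ondemand
spec:
  replicas: 1                       # the guaranteed pod on the reserved/on-demand node group
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      nodeSelector:
        capacity-type: on-demand    # replace with the label your reserved node group actually has
      containers:
      - name: my-service
        image: my-service:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-spot
spec:
  replicas: 3                       # the rest of the load on the spot node group
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      nodeSelector:
        capacity-type: spot         # replace with the label your spot node group actually has
      containers:
      - name: my-service
        image: my-service:latest
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service                 # matches pods from both Deployments
  ports:
  - port: 80
    targetPort: 8080
</code></pre>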
| Wizard |
<p>The <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions" rel="nofollow noreferrer">Kubernetes docs</a> state the following about the <code>metadata.resourceVersion</code> of a K8s object</p>
<blockquote>
<p>You must not assume resource versions are numeric or collatable. API clients may only compare two resource versions for equality (this means that you must not compare resource versions for greater-than or less-than relationships).</p>
</blockquote>
<p>I'm glad I read this because I was about to do exactly the thing it says not to do (comparison with <code>></code> or <code><</code> to check consistency).</p>
<p>My question is: Why not? Why are clients not allowed to do this?</p>
<p>Edits: To clarify, the resource version this question refers to is the <code>metadata.resourceVersion</code> of K8s objects, not the <code>apiVersion</code>.</p>
<p>In my experience, the resource version is an integer, increasing monotonically in time (incremented each time the resource is modified). It seems the docs say this is not a guarantee.</p>
| Dmitri Gekhtman | <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p><strong>resourceVersion</strong></p>
<p>I think most correct answer will be:</p>
<blockquote>
<p>Kubernetes leverages the concept of resource versions to achieve
optimistic concurrency. All Kubernetes resources have a
"resourceVersion" field as part of their metadata. This
resourceVersion is a string that identifies the internal version of an
object that can be used by clients to determine when objects have
changed. When a record is about to be updated, it's version is checked
against a pre-saved value, and if it doesn't match, the update fails
with a StatusConflict (HTTP status code 409).</p>
<p>The resourceVersion is changed by the server every time an object is
modified. If resourceVersion is included with the PUT operation the
system will verify that there have not been other successful mutations
to the resource during a read/modify/write cycle, by verifying that
the current value of resourceVersion matches the specified value.</p>
<p>The resourceVersion is currently backed by etcd's modifiedIndex.
However, it's important to note that the application should not rely
on the implementation details of the versioning system maintained by
Kubernetes. We may change the implementation of resourceVersion in the
future, such as to change it to a timestamp or per-object counter.</p>
<p>The only way for a client to know the expected value of
resourceVersion is to have received it from the server in response to
a prior operation, typically a GET. This value MUST be treated as
opaque by clients and passed unmodified back to the server. Clients
should not assume that the resource version has meaning across
namespaces, different kinds of resources, or different servers.
Currently, the value of resourceVersion is set to match etcd's
sequencer. You could think of it as a logical clock the API server can
use to order requests. However, we expect the implementation of
resourceVersion to change in the future, such as in the case we shard
the state by kind and/or namespace, or port to another storage system.</p>
</blockquote>
<p>Since there are chances that this logic can be changed, it's better to follow what kubernetes developers offer since they know further changes better than anyone else do.</p>
<p>Also in API documentation it's said that it's a <code>string</code> type, not <code>integer</code> or <code>date</code>, see <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#objectmeta-v1-meta" rel="nofollow noreferrer">here</a>.</p>
<p>Please find more details <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency" rel="nofollow noreferrer">about Concurrency Control and Consistency</a>.</p>
<hr />
<p><strong>apiVersion</strong></p>
<p>Simple and short answer: because versions are not numeric and sometimes they can change very significantly.</p>
<p>Below is a couple of examples that I know/used which will show the difference:</p>
<ul>
<li><p><code>ingress</code>:</p>
<p>used to be - <code>extensions/v1beta1</code></p>
<p>actual version - <code>networking.k8s.io/v1</code></p>
</li>
<li><p><code>istio</code> uses both versions at the moment:</p>
<p><code>networking.istio.io/v1alpha3</code> and <code>networking.istio.io/v1beta1</code></p>
</li>
</ul>
<p>So at this point a comparison with <code>></code> or <code><</code> won't work, and a much more reliable method is to check whether the API version is the same or not.</p>
| moonkotte |
<p>One of my namespaces is in the <code>Terminating</code> state.
While there are many posts that explain how to forcefully delete such namespaces, the ultimate result is that everything in the namespace will be gone, which is not what you might want, especially if that termination was the result of a mistake or bug (or may cause downtime of any kind).</p>
<p>Is it possible to tell kubernetes not to try to delete that namespace anymore. Where that state is kept?</p>
<p><code>Terminating</code> state blocks me from recreating the whole stack with gitops (helm chart installation in such namespace is not possible).</p>
<p>I simply wish to remove the <code>terminating</code> state and my fluxcd controller would fix everything else.</p>
| Piotr | <blockquote>
<p>Is there a way to cancel namespace termination in kubernetes?</p>
</blockquote>
<p>As far as I know, unfortunately not. Termination is a one-way process. Note how <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/abstractions/pod-termination/#:%7E:text=Kubernetes%20marks%20the%20Pod%20state,still%20running%20in%20the%20Pod." rel="nofollow noreferrer">pod termination takes place</a>:</p>
<blockquote>
<ol>
<li>You send a command or API call to terminate the Pod.</li>
<li>Kubernetes updates the Pod status to reflect the time after which the Pod is to be considered "dead" (the time of the termination request plus the grace period).</li>
<li><strong>Kubernetes marks the Pod state as "Terminating" and stops sending traffic to the Pod.</strong></li>
<li><strong>Kubernetes sends a <code>TERM</code> signal to the Pod, indicating that the Pod should shut down.</strong></li>
<li><strong>When the grace period expires, Kubernetes issues a <code>SIGKILL</code> to any processes still running in the Pod.</strong></li>
<li>Kubernetes removes the Pod from the API server on the Kubernetes Master.</li>
</ol>
</blockquote>
<p>So it is impossible to cancel termination process.</p>
<blockquote>
<p>Is it possible to tell kubernetes not to try to delete that namespace anymore.</p>
</blockquote>
<p>There is no dedicated solution, but you can try to automate this process with custom scripts. Look at <a href="https://gist.github.com/jossef/a563f8651ec52ad03a243dec539b333d" rel="nofollow noreferrer">this example in Python</a> and another one <a href="https://stackoverflow.com/a/62463004/15407542">in Bash</a>.</p>
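<p>Scripts of that kind typically boil down to clearing the namespace's finalizers through the <code>finalize</code> subresource; a rough bash equivalent is below (note this <em>completes</em> the deletion rather than cancelling it):</p>
<pre><code>NS=my-stuck-namespace
kubectl get namespace "$NS" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
</code></pre>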
<p>See also <a href="https://stackoverflow.com/questions/52369247/namespace-stuck-as-terminating-how-i-removed-it">this question</a>.</p>
| Mikołaj Głodziak |
<p>Hello I am trying to deploy a simple tomcat service. Below are the details:</p>
<p>1.minikube version: v1.8.1</p>
<p>2.OS: mac</p>
<p>3.The <strong>deployment.yaml</strong> file (I am in the directory of the yaml file)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-deployment
spec:
selector:
matchLabels:
app: tomcat
replicas: 1
template:
metadata:
labels:
app: tomcat
spec:
containers:
- name: tomcat
image: tomcat:9.0
ports:
- containerPort: 8080
</code></pre>
<p>4.Commands used to deploy and expose the service</p>
<pre><code>kubectl apply -f deployment.yaml
kubectl expose deployment tomcat-deployment --type=NodePort
minikube service tomcat-deployment --url
curl [URL]
</code></pre>
<p>I get a 404 when I curl the URL.
I am unsure if there's an issue with the deployment.yaml file or some minikube settings.</p>
| Nath | <p>The Tomcat image ships its default preinstalled apps (ROOT, manager...) inside the webapps.dist folder so that they are not loaded by default at container startup (https://github.com/docker-library/tomcat/issues/183). You can, for example, simply rename webapps.dist to webapps (via e.g. kubectl exec <pod_name> -- bash), and once the apps are deployed the 404 should no longer occur.</p>
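<p>For example, something along these lines (paths follow the official <code>tomcat</code> image layout; note the change is lost when the pod is recreated unless you bake it into your own image):</p>
<pre><code># copy the default apps into the live webapps directory of the running container
kubectl exec <pod_name> -- cp -r /usr/local/tomcat/webapps.dist/. /usr/local/tomcat/webapps/
</code></pre>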
| Sara |
<p>I have an application which stores logs in a file at a configurable location.
Let's say <code>/abc/pqr/application.log</code></p>
<p>Application is being migrated to Kubernetes where it will run in a single pod. If I run <code>kubectl log <pod-name></code>, I get anything that gets printed on <code>stdout</code> which I can redirect to a file. I want to do the other way around, I have a file containing logs at above location and I want <code>kubectl logs <pod-name></code> to print logs from that file.</p>
<p>For example, if run <code>kubectl logs kafka</code> for a kafka pod deployed using <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">bitnami/kafka</a>, I get logs from <code>/opt/bitnami/kafka/logs/server.log</code>. I want to mimic this behavior.</p>
| zweack | <p><code>kubectl logs</code> command takes everything from <code>stdout</code> and <code>stderr</code>, so you need to supply logs there.</p>
<p>It's a common practice when containerised applications write their logs to <code>stdout</code> and <code>stderr</code>.</p>
<p>This way there are two main options:</p>
<ol>
<li><p>Adjust the application so it writes logs to <code>stdout</code> and file as well.
E.g. using shell it can be done with <code>tee</code> command.</p>
<p>Please find a <a href="https://stackoverflow.com/questions/418896/how-to-redirect-output-to-a-file-and-stdout">good answers</a> with description of the command.</p>
</li>
<li><p>Use a sidecar container that reads the log file and streams it to its own <code>stdout</code> (see the sketch after this list).
Please find <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent" rel="nofollow noreferrer">Using a sidecar container with the logging agent</a></p>
</li>
</ol>
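<p>A minimal sketch of the second (sidecar) option, following the pattern from the linked docs (the application image and log path are assumptions based on the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-streamer
spec:
  containers:
  - name: app
    image: my-app:latest                 # your application, writing /abc/pqr/application.log
    volumeMounts:
    - name: logs
      mountPath: /abc/pqr
  - name: log-streamer                   # sidecar: tails the file to its own stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /abc/pqr/application.log']
    volumeMounts:
    - name: logs
      mountPath: /abc/pqr
  volumes:
  - name: logs
    emptyDir: {}
</code></pre>
<p>With this in place, <code>kubectl logs app-with-log-streamer -c log-streamer</code> shows the content of the file.</p>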
<p>Useful link about kubernetes logging (including containers):</p>
<ul>
<li><a href="https://sematext.com/guides/kubernetes-logging/#how-does-logging-in-kubernetes-work" rel="nofollow noreferrer">The Complete Guide to Kubernetes Logging</a></li>
</ul>
| moonkotte |
<p>I'm trying to setup a remote EJB call between 2 WebSphere Liberty servers deployed in k8s.
Yes, I'm aware that EJB is not something one would want to use when deploying in k8s, but I have to deal with it for now.</p>
<p>The problem I have is how to expose remote ORB IP:port in k8s. From what I understand, it's only possible to get it to work if both client and remote "listen" on the same IP. I'm not a network expert, and I'm quite fresh in k8s, so maybe I'm missing something here, that's why I need help.</p>
<p>The only way I got it to work is when I explicitly set host on remote server to it's own IP address and then accessed it from client on that same IP. This test was done on Docker host with macvlan0 network (each container had it's own IP address).</p>
<p>This is ORB setup for remote server.xml configuration:</p>
<pre><code><iiopEndpoint id="defaultIiopEndpoint" host="172.30.106.227" iiopPort="2809" />
<orb id="defaultOrb" iiopEndpointRef="defaultIiopEndpoint">
<serverPolicy.csiv2>
<layers>
<!-- don't care about security at this point -->
<authenticationLayer establishTrustInClient="Never"/>
<transportLayer sslEnabled="false"/>
</layers>
</serverPolicy.csiv2>
</orb>
</code></pre>
<p>And client server.xml configuration:</p>
<pre><code> <orb id="defaultOrb">
<clientPolicy.csiv2>
<layers>
<!-- really, I don't care about security -->
<authenticationLayer establishTrustInClient="Never"/>
<transportLayer sslEnabled="false"/>
</layers>
</clientPolicy.csiv2>
</orb>
</code></pre>
<p>From client, this is JNDI name I try to access it:</p>
<pre><code>corbaname::172.30.106.227:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
</code></pre>
<p>And this works.</p>
<p>Since one doesn't want to set fixed IP when exposing ORB port, I have to find a way to expose it dynamically, based on host IP.
Exposing on 0.0.0.0 does not work. Same goes for localhost. In both cases, client refuses to connect with this kind of error:</p>
<pre><code>Error connecting to host=0.0.0.0, port=2809: Connection refused (Connection refused)
</code></pre>
<p>In k8s, I've exposed port 2809 through LoadBalancer service for remote pods, and try to access remote server from client pod, where I've set remote's service IP address in corbaname definition.
This, of course, does not work. I can access remote ip:port by telnet, so it's not a network issue.</p>
<p>I've tried all combinations of setup on remote server. Exporting on host="0.0.0.0" results with same exception as above (Connection refused).</p>
<p>I'm not sure exporting on internal IP address would work either, but even if it would, I don't know the internal IP before pod is deployed in k8s. Or is there a way to know? There is no env. variable with it, I've checked.</p>
<p>Exposing on service IP address (with host="${REMOTE_APP_SERVICE_HOST}") fails with this error:</p>
<pre><code>The server socket could not be opened on 2,809. The exception message is Cannot assign requested address (Bind failed).
</code></pre>
<p>Again, I know replacing EJB with Rest is the way to go, but it's not an option for now (don't ask why).</p>
<p>Help, please!</p>
<hr />
<p>EDIT:</p>
<p>I've managed to get some progress. Actually, I believe I've successfully called remote EJB.
What I did was add <code>hostAliases</code> in pod definition, which added alias for my host, something like this:</p>
<pre><code>hostAliases:
- ip: 0.0.0.0
hostnames:
- my.host.name
</code></pre>
<p>Then I added this host name to remote server.xml:</p>
<pre><code><iiopEndpoint id="defaultIiopEndpoint" host="my.host.name" iiopPort="2809" />
</code></pre>
<p>I've also added host alias to my client pod:</p>
<pre><code>hostAliases:
- ip: {remote.server.service.ip.here}
hostnames:
- my.host.name
</code></pre>
<p>Finally, I've changed JNDI name to:</p>
<pre><code>corbaname::my.host.name:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
</code></pre>
<p>With this setup, remote server was successfully called!</p>
<p>However, now I have another problem which I didn't have while testing on Docker host. Lookup is done, but what I get is not what I expect.</p>
<p>Lookup code is pretty much what you'd expect:</p>
<pre><code>Object obj = new InitialContext().lookup(jndi);
BeanRemote remote = (BeanRemote) PortableRemoteObject.narrow(obj, BeanRemote.class);
</code></pre>
<p>Unfortunatelly, this narrow call fails with <code>ClassCastException</code>:</p>
<pre><code>Caused by: java.lang.ClassCastException: org.example.com.BeanRemote
at com.ibm.ws.transport.iiop.internal.WSPortableRemoteObjectImpl.narrow(WSPortableRemoteObjectImpl.java:50)
at [internal classes]
at javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:62)
</code></pre>
<p>Object I do receive is <code>org.omg.stub.java.rmi._Remote_Stub</code>. Any ideas?</p>
| nkuzman | <p>Solved it!</p>
<p>So, the first problem was resolving host mapping, which was resolved as mentioned in edit above, by adding host aliases id pod definitions:</p>
<p>Remote pod:</p>
<pre><code>hostAliases:
- ip: 0.0.0.0
hostnames:
- my.host.name
</code></pre>
<p>Client pod:</p>
<pre><code>hostAliases:
- ip: {remote.server.service.ip.here}
hostnames:
- my.host.name
</code></pre>
<p>Remote server then has to use that host name in iiop host definition:</p>
<pre><code><iiopEndpoint id="defaultIiopEndpoint" host="my.host.name" iiopPort="2809" />
</code></pre>
<p>Also, client has to reference that host name through JNDI lookup:</p>
<pre><code>corbaname::my.host.name:2809#ejb/global/some-app/ejb/BeanName!org\.example\.com\.BeanRemote
</code></pre>
<p>This setup resolves remote EJB call.</p>
<p>The other problem with <code>ClassCastException</code> was really unusual. I managed to reproduce the error on Docker host and then changed one thing at a time until the problem was resolved. It turns out that the problem was with <code>ldapRegistry-3.0</code> feature (!?). Adding this feature to client's feature list resolved my problem:</p>
<pre><code><feature>ldapRegistry-3.0</feature>
</code></pre>
<p>With this feature added, remote EJB was successfully called.</p>
| nkuzman |
<p>I have several k8s clusters and I would like to monitor pod metrics (cpu/memory mainly). For that, I already have one central instance of prometheus/grafana and I want to use it to monitor pod metrics from all my k8s clusters.</p>
<p>Sorry if this question has already been asked; I have read lots of tutorials, but they always install a dedicated prometheus/grafana instance on the cluster itself. I don't want that since I already have prometheus/grafana running somewhere else. I just want to "export" metrics.</p>
<p>I have metrics-servers installed on each clusters but I'm not sure if I need to deploy something else. Please advise me.</p>
<p>So, how can I export my pods metrics to my prometheus/grafana instance?</p>
<p>Thanks</p>
| iAmoric | <p>Posting the answer as a community wiki, feel free to edit and expand.</p>
<hr />
<p>You need to use <code>federation</code> for prometheus for this purpose.</p>
<blockquote>
<p>Federation allows a Prometheus server to scrape selected time series
from another Prometheus server.</p>
</blockquote>
<p>Main idea of using <code>federation</code> is:</p>
<blockquote>
<p>Prometheus is a very flexible monitoring solution wherein each
Prometheus server is able to act as a target for another Prometheus
server in a highly-available, secure way. By configuring and using
federation, Prometheus servers can scrape selected time series data
from other Prometheus servers</p>
</blockquote>
<p>See example <a href="https://banzaicloud.com/blog/prometheus-federation/" rel="nofollow noreferrer">here</a>.</p>
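<p>On the central Prometheus, the federation scrape job looks roughly like this (targets and the <code>match[]</code> selector are assumptions); note that federation still expects a small Prometheus in each cluster to pull from:</p>
<pre><code>scrape_configs:
  - job_name: 'federate'
    scrape_interval: 30s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{__name__=~"container_.*|kube_pod_.*"}'   # which series to pull from the remote Prometheus
    static_configs:
      - targets:
        - 'prometheus.cluster-a.example.com:9090'
        - 'prometheus.cluster-b.example.com:9090'
</code></pre>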
| moonkotte |
<p>I've a NextJS app which needs a .env file mounted. I usually do this with providing a configMap:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
apiVersion: v1
metadata:
name: frontend-configmap
namespace: default
data:
.env: |-
NEXT_PUBLIC_API_URL=http://my.domain.com
API_URL=http://my.domain.com
</code></pre>
<p>But how to do this with Kustomize?</p>
<p>I try it with <code>envs</code>, but how do I get the values inside?</p>
<pre class="lang-yaml prettyprint-override"><code>configMapGenerator:
- name: frontend-configmap
envs:
- .env
</code></pre>
<p>Thank you in advance</p>
| Jan | <p>You need to have <code>.env</code> file created first. And ideally even creating configmaps should be based on the existing file (below are examples for <code>kustomize</code> and <code>kubectl --from-file</code>).</p>
<p>Then there are two options how to create a configmap:</p>
<ul>
<li>create <code>.env</code> file with environment variables within (which is your example configmap)</li>
<li>create a configmap with environment variables from <code>.env</code> file (each variable is a separate key)</li>
</ul>
<p><strong>Test structure</strong>:</p>
<pre><code>$ tree -a
.
├── .env
└── kustomization.yaml
$ cat .env # same as your test data
NEXT_PUBLIC_API_URL=http://my.domain.com
API_URL=http://my.domain.com
</code></pre>
<hr />
<p><strong>configmap with <code>.env</code> file with envvars inside:</strong></p>
<p><code>kustomization.yaml</code> with an additional option :</p>
<pre><code>$ cat kustomization.yaml
configMapGenerator:
- name: frontend-configmap
files: # using files here as we want to create a whole file
- .env
generatorOptions:
disableNameSuffixHash: true # use a static name
</code></pre>
<p><code>disableNameSuffixHash</code> - disable appending a content hash suffix to the names of generated resources, see <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/generatorOptions.md#generator-options" rel="noreferrer">generator options</a>.</p>
<p>And all left is to run it:</p>
<pre><code>$ kustomize build .
apiVersion: v1
data:
.env: | # you can see it's a file with context within
NEXT_PUBLIC_API_URL=http://my.domain.com
API_URL=http://my.domain.com
kind: ConfigMap
metadata:
name: frontend-configmap
</code></pre>
<p>The same result can be achieved by running using <code>--from-file</code> option:</p>
<pre><code>$ kubectl create cm test-configmap --from-file=.env --dry-run=client -o yaml
apiVersion: v1
data:
.env: |
NEXT_PUBLIC_API_URL=http://my.domain.com
API_URL=http://my.domain.com
kind: ConfigMap
metadata:
creationTimestamp: null
name: test-configmap
</code></pre>
<hr />
<p><strong>configmap with envvars as keys within:</strong></p>
<pre><code>$ cat kustomization.yaml
configMapGenerator:
- name: frontend-configmap
envs: # now using envs to create a configmap with envvars as keys inside
- .env
generatorOptions:
disableNameSuffixHash: true # use a static name
</code></pre>
<p>Run it to see the output:</p>
<pre><code>$ kustomize build .
apiVersion: v1
data: # you can see there's no file and keys are created directly
API_URL: http://my.domain.com
NEXT_PUBLIC_API_URL: http://my.domain.com
kind: ConfigMap
metadata:
name: frontend-configmap
</code></pre>
<p>Same with <code>kubectl</code> and <code>--from-env-file</code> option:</p>
<pre><code>$ kubectl create cm test-configmap --from-env-file=.env --dry-run=client -o yaml
apiVersion: v1
data:
API_URL: http://my.domain.com
NEXT_PUBLIC_API_URL: http://my.domain.com
kind: ConfigMap
metadata:
creationTimestamp: null
name: test-configmap
</code></pre>
<hr />
<p><strong>More details:</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#configmapgenerator" rel="noreferrer">configMapGenerator</a></li>
</ul>
<hr />
<p><strong>Edit - use already existing configmap.yaml</strong></p>
<p>If the <code>configmap</code> already exists, then it's possible to reference it from <code>kustomization.yaml</code> (as mentioned in the comment, <code>kustomize</code> is a template engine, and using it only with this direct reference, without any transformations, doesn't really make sense. <a href="https://kubectl.docs.kubernetes.io/guides/example/multi_base/" rel="noreferrer">Here</a> is one of the examples of why you need to use <code>kustomize</code>).</p>
<pre><code>$ tree
.
├── cm.yaml
└── kustomization.yaml
</code></pre>
<p><code>cm.yaml</code> has exactly the same config from the question.</p>
<pre><code>$ cat kustomization.yaml
resources:
- cm.yaml
namePrefix: test- # used namePrefix for demo purpose (you can omit it as well)
</code></pre>
<p>Building this and getting the same <code>configmap</code> with <code>.env</code> file inside:</p>
<pre><code>$ kustomize build .
apiVersion: v1
data:
.env: |-
NEXT_PUBLIC_API_URL=http://my.domain.com
API_URL=http://my.domain.com
kind: ConfigMap
metadata:
name: test-frontend-configmap # name with prefix as it was setup for demo
namespace: default
</code></pre>
| moonkotte |
<p>I am running a one-node Kubernetes cluster in a VM for development and testing purposes. I used Rancher Kubernetes Engine (RKE, Kubernetes version 1.18) to deploy it and MetalLB to enable the LoadBalancer service type. Traefik is version 2.2, deployed via the official Helm chart (<a href="https://github.com/containous/traefik-helm-chart" rel="nofollow noreferrer">https://github.com/containous/traefik-helm-chart</a>). I have a few dummy containers deployed to test the setup (<a href="https://hub.docker.com/r/errm/cheese" rel="nofollow noreferrer">https://hub.docker.com/r/errm/cheese</a>).</p>
<p>I can access the Traefik dashboard just fine through the nodes IP (-> MetalLB seems to work). It registers the services and routes for the test containers. Everything is looking fine but when I try to access the test containers in my browser I get a 502 Bad Gateway error.</p>
<p>Some probing showed that there seems to be an issue with outbound traffic from the pods. When I SSH into the node I can reach all pods by their service or pod IP. DNS from node to pod works as well. However, if I start an interactive busybox pod I can't reach any other pod or host from there. When I <code>wget</code> to any other container (all in the default namespace) I only get <code>wget: can't connect to remote host (10.42.0.7): No route to host.</code> The same is true for servers on the internet.</p>
<p>I have not installed any network policies and there are none installed by default that I am aware of.</p>
<p>I have also gone through this: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service</a></p>
<p>Everything in the guide is working fine, except that the pods don't seem to have any network connectivity whatsoever.</p>
<p>My RKE config is standard, except that I turned off the standard Nginx ingress and enabled etcd encryption-at-rest.</p>
<p>Any ideas?</p>
| MadMonkey | <p>Maybe just double check that your node's ip forwarding is turned on: <code>sysctl net.ipv4.ip_forward</code></p>
<p>If for some reason it doesn't return:
<code>net.ipv4.ip_forward = 1</code></p>
<p>Then you can set it with:
<code>sudo sysctl -w net.ipv4.ip_forward=1</code></p>
<p>And to make it permanent (see the combined snippet after this list):</p>
<ul>
<li>edit <code>/etc/sysctl.conf</code></li>
<li>add or uncomment <code>net.ipv4.ip_forward = 1</code></li>
<li>and reload via <code>sysctl -p /etc/sysctl.conf</code></li>
</ul>
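<p>Putting the check and the fix together, a minimal shell sequence might look like this (assuming your distribution reads <code>/etc/sysctl.conf</code>; some manage drop-in files under <code>/etc/sysctl.d/</code> instead):</p>
<pre><code># check the current value; it should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward

# enable it immediately
sudo sysctl -w net.ipv4.ip_forward=1

# persist it across reboots (append only if the key is not already present)
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
</code></pre>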
| sfb103 |
<p>I am able to mount different directories to the same container at different mount points using <code>volumeMounts.subPath</code> attribute.</p>
<p>Is it OK to use this in a production environment? I am using <code>AWS EFS</code> as my persistent storage.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">This</a> doc says it is not recommended. What is the concern if this is used?</p>
| sandy | <h2>Short answer</h2>
<p>It's absolutely fine to use <code>subPath</code> in production.</p>
<h2>Detailed answer</h2>
<p><strong>The Kubernetes docs example that uses <code>subPath</code></strong></p>
<p>What the phrase "This sample subPath configuration is not recommended for production use." means is that exactly this sample is not recommended, not the use of <code>subPath</code> itself.</p>
<p>The example puts the frontend and backend application containers in a single pod, which is a fundamentally wrong approach for production use (for testing it's acceptable).</p>
<p>In production, frontend and backend applications should be separated into different deployments, which allows you to:</p>
<ul>
<li>manage frontend and backend applications separately</li>
<li>improve fault tolerance - in a single pod, if one of the apps crashes, the whole pod is affected</li>
<li>treat pods as disposable units - for databases a separate set of pods should be used (such as a StatefulSet), which allows maintaining sticky sessions and data persistence even if a pod crashes</li>
</ul>
<p><strong><code>subPath</code> vulnerabilities</strong></p>
<p>First it's a good idea to figure out <a href="https://kubernetes.io/blog/2018/04/04/fixing-subpath-volume-vulnerability/#kubernetes-background" rel="nofollow noreferrer">how <code>subPath</code> works</a> to understand what risks/vulnerabilities can be introduced.</p>
<p>I found at least two:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/issues/60813" rel="nofollow noreferrer">CVE-2017-1002101</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/104980" rel="nofollow noreferrer">CVE-2021-25741</a></li>
</ul>
<p>Both are fixed as of today. It's very important to use the latest available versions, which contain fixes for various issues (including both mentioned above).</p>
<p>Since Kubernetes developers fix vulnerabilities related to <code>subPath</code>, it can be safely used in production clusters.</p>
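<p>For completeness, here is a minimal sketch of the pattern from the question - one container mounting two directories of the same EFS-backed PVC at different mount points. The names (<code>efs-claim</code>, the paths) are hypothetical placeholders:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:latest
        volumeMounts:
        - name: data                       # same volume mounted twice
          mountPath: /usr/share/nginx/html
          subPath: html                    # <volume root>/html
        - name: data
          mountPath: /var/log/my-app
          subPath: logs                    # <volume root>/logs
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: efs-claim             # hypothetical EFS-backed PVC
</code></pre>
<p>Both mounts reference the same PVC, so different directories of one EFS filesystem end up at different mount points inside the container.</p>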
| moonkotte |
<p>After a pod crashes and restarts, is it possible to retrieve the IP address of the pod prior to the crash?</p>
| Chris Gonzalez | <p>This is a very broad question, as it's not possible to tell exactly where and why the pod crashes.
However, I'll show what's possible to do in different scenarios.</p>
<ul>
<li>Pod's container crashes and then restarts itself:</li>
</ul>
<p>In this case the pod keeps its IP address. The easiest way to check is to run
<code>kubectl get pods -o wide</code></p>
<p>The output will look like this:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-699f9b54d-g2w97 0/1 CrashLoopBackOff 2 55s 10.244.1.53 worker1 <none> <none>
</code></pre>
<p>As you can see, even if the container crashes, the pod still has an IP address assigned.</p>
<p>It's also possible to add <code>initContainers</code> with a command that will get the IP address of the pod (depending on the image you use, there are different options like <code>ip a</code>, <code>ifconfig -a</code>, etc.).</p>
<p>Here's a simple example of how it can be added:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
initContainers: # here is where to add initContainer section
- name: init-container
image: busybox
args: [/bin/sh, -c, "echo IP ADDRESS ; ifconfig -a"]
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
command: ["/sh", "-c", "nginx --version"] #this is to make nginx image failing
</code></pre>
<p>Before your main container starts, this <code>init-container</code> will run an <code>ifconfig -a</code> command and write the result to its logs.</p>
<p>You can check it with:</p>
<p><code>kubectl logs %pod_name% -c init-container</code></p>
<p>Output will be:</p>
<pre><code>IP ADDRESS
eth0      Link encap:Ethernet  HWaddr F6:CB:AD:D0:7E:7E
          inet addr:10.244.1.52  Bcast:10.244.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1410  Metric:1
          RX packets:5 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:398 (398.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
</code></pre>
<p>You can also check the logs of the previously crashed container instance by adding <code>--previous</code> to the command above.</p>
<ul>
<li>Pod crashes and then is recreated.
In this case a new pod is created, which means the local logs are gone. You will need to think about saving them separately from the pods. For this you can use <code>volumes</code>: e.g. a <code>hostPath</code> volume will store logs on the node where the pod runs, or an <code>nfs</code> volume can be attached to different pods and accessed from all of them (a minimal sketch follows this list).</li>
<li>Control plane crashes while pods are still running.
You can't access logs using the control plane and <code>kubectl</code>; however, your containers will still be running on the nodes. To get logs directly from the nodes where your containers run, use <code>docker</code> or <code>crictl</code>, depending on your container runtime.</li>
</ul>
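<p>A minimal sketch of the <code>hostPath</code> approach mentioned above (the image, paths and names are hypothetical placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-host-logs
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx        # the app writes its logs here
  volumes:
  - name: logs
    hostPath:
      path: /var/log/app-logs          # kept on the node even if the pod is recreated
      type: DirectoryOrCreate
</code></pre>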
<p>The ideal solution for such cases is to use monitoring and logging systems such as <code>prometheus</code> or <code>elasticsearch</code>.
This requires additional setup of <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="nofollow noreferrer">fluentd</a> or <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a>.</p>
| moonkotte |