<p>I am trying to deploy <code>Postgresql</code> through helm on <code>microk8s</code>, but the pod stays pending with a <code>pod has unbound immediate PersistentVolumeClaims</code> error.</p> <p>I tried creating a <code>pvc</code> and a <code>storageclass</code> for it, and editing them, but everything stays pending.</p> <p>Does anyone know what's keeping the <code>pvc</code> from claiming a <code>pv</code>?</p> <p><a href="https://i.stack.imgur.com/xOHJh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xOHJh.png" alt="enter image description here"></a></p>
Mohammad Hussein
<blockquote> <p>on the 'PVC' it shows 'no persistent volumes available for this claim and no storage class is set' Error</p> </blockquote> <p>This means that you have to prepare PersistentVolumes for your platform that can be used by your PersistentVolumeClaims (e.g. with correct StorageClass or other requirements)</p>
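<p>A minimal sketch of such a PersistentVolume for microk8s; the name, capacity and host path are hypothetical and must satisfy what your PVC requests. (Alternatively, recent microk8s versions ship a hostpath storage add-on that provisions volumes automatically, e.g. via <code>microk8s enable storage</code>, depending on your version.)</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv             # hypothetical name
spec:
  storageClassName: microk8s-hostpath
  capacity:
    storage: 8Gi                # must be at least what the PVC requests
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data/postgres    # hypothetical path on the node
</code></pre>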
Jonas
<p>I have a PersistentVolumeClaim defined by</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: &quot;standard&quot;
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre> <p>And the containers section of the deployment yaml looks like this</p> <pre><code>spec:
  containers:
  - name: my-container
    image: abc/xyz:1.2.3
    volumeMounts:
    - mountPath: /var/store
      name: mystore
  volumes:
  - name: mystore
    persistentVolumeClaim:
      claimName: my-pvc
</code></pre> <p>I have a few questions about this setup.</p> <ol> <li>Does each replica of my pod get 1GB storage space (assuming the PersistentVolume has enough space)?</li> <li>How would this behave if the pod replicas are on different kubernetes nodes?</li> </ol> <p><strong>Edit</strong></p> <p>I would like each replica of my pod to have its own storage (not a shared one). Is there a way to achieve this without creating a RWM volume?</p>
Jon
<blockquote> <p>Does each replica of my pod get 1GB storage space (assuming the PersistentVolume has enough space)?</p> </blockquote> <p>No. Since you use one <code>PersistentVolumeClaim</code>, you will get one <code>PersistentVolume</code>.</p> <blockquote> <p>How would this behave if the pod replicas are on different kubernetes nodes?</p> </blockquote> <p>It will not work, unless you use a volume type that can be used from multiple nodes at once, with <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a> <code>ReadWriteMany</code> or <code>ReadOnlyMany</code>. But you have declared <code>ReadWriteOnce</code> in your PersistentVolumeClaim, so it will likely not work.</p> <blockquote> <p>I would like each replica of my pod to have its own storage (not a shared one). Is there a way to achieve this without creating a RWM volume?</p> </blockquote> <p>Yes, you can use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> instead of a <code>Deployment</code>, and use the <code>volumeClaimTemplates:</code>-field, as sketched below.</p>
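<p>A minimal sketch of such a StatefulSet, reusing the names from your manifests (the <code>serviceName</code> assumes a matching headless Service):</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app                  # hypothetical name
spec:
  serviceName: my-app           # requires a matching headless Service
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: abc/xyz:1.2.3
        volumeMounts:
        - mountPath: /var/store
          name: mystore
  volumeClaimTemplates:
  - metadata:
      name: mystore
    spec:
      storageClassName: &quot;standard&quot;
      accessModes: [&quot;ReadWriteOnce&quot;]
      resources:
        requests:
          storage: 1Gi
</code></pre> <p>Each replica then gets its own PersistentVolumeClaim (<code>mystore-my-app-0</code>, <code>mystore-my-app-1</code>, ...), so <code>ReadWriteOnce</code> is sufficient.</p>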
Jonas
<p>I was going in depth through K8s volumes and one of the concepts confused me. It would be great if someone could explain this to me. These are the access modes in K8s volumes.</p> <p>ReadWriteOnce -- the volume can be mounted as read-write by a single node</p> <p>ReadOnlyMany -- the volume can be mounted read-only by many nodes</p> <p>ReadWriteMany -- the volume can be mounted as read-write by many nodes</p> <ol> <li><p>What if I run two mysql pods on the same node and the mode is ReadWriteOnce? How will both my pods write?</p> </li> <li><p>What if I run two mysql pods on two different nodes and the mode is ReadWriteOnce? How will both my pods write?</p> </li> <li><p>What if I run two mysql pods on the same node and the mode is ReadWriteMany? How will both my pods write?</p> </li> <li><p>What if I run two mysql pods on two different nodes and the mode is ReadWriteMany? How will both my pods write?</p> </li> </ol> <p>All have the same pod replica.</p>
Aditya Malviya
<blockquote> <p>What if I run two mysql pods on the same node and the mode is ReadWriteOnce? How will both my pods write?</p> </blockquote> <p>If you run two DBMS instances (e.g. MySQL or PostgreSQL) in Pods, they are both independent and they should have <strong>two independent PersistentVolumeClaims</strong>. Only run a single-node DBMS for test purposes. For production consider a <em>distributed database</em> that replicates the data.</p> <blockquote> <p>What if I run two mysql pods on two different nodes and the mode is ReadWriteOnce? How will both my pods write?</p> </blockquote> <p>As I recommended above, the two should be independent (unless you use a distributed database like <a href="https://www.cockroachlabs.com/product/kubernetes/" rel="nofollow noreferrer">CockroachDB</a>), so running on different nodes is perfectly fine.</p> <blockquote> <p>What if I run two mysql pods on the same node and the mode is ReadWriteMany? How will both my pods write?</p> </blockquote> <p>MySQL is a single-node DBMS; it should not share volumes/files with other instances unless you run a clustered system.</p> <blockquote> <p>What if I run two mysql pods on two different nodes and the mode is ReadWriteMany? How will both my pods write?</p> </blockquote> <p>Same as above: a single-node system should not share volumes/files.</p>
Jonas
<p>When I use the command below, it deletes the running POD after matching the pattern from the command line:</p> <pre><code>kubectl get pods -n bi-dev --no-headers=true | awk '/group-react/{print $1}' | xargs kubectl delete -n bi-dev pod
</code></pre> <p>However, when I use this command as an alias in .bash_profile it doesn't execute. This is how I defined it:</p> <pre><code>alias kdpgroup="kubectl get pods -n bi-dev --no-headers=true | awk '/group-react/{print $1}'| kubectl delete -n bi-dev pod"
</code></pre> <p>When I execute it as below, I get this error on the command line:</p> <pre><code>~ $ kdpgroup
error: resource(s) were provided, but no name, label selector, or --all flag specified
</code></pre> <p>When I define this in .bash_profile I get this:</p> <pre><code>~ $ . ./.bash_profile
-bash: alias: }| xargs kubectl delete -n bi-dev pod: not found
~ $
</code></pre> <p>Am I missing something to delete PODs using pattern match or with a wildcard?</p> <p>thanks</p>
pauldx
<blockquote> <p>Am I missing something to delete PODs using pattern match or with a wildcard?</p> </blockquote> <p>When using Kubernetes it is more common to use <em>labels</em> and <em>selectors</em>. E.g. if you deployed an application, you usually set a label on the pods, e.g. <code>app=my-app</code>, and you can then get the pods with e.g. <code>kubectl get pods -l app=my-app</code>.</p> <p>Using this approach, it is easier to delete the pods you are interested in, with e.g.</p> <pre><code>kubectl delete pods -l app=my-app
</code></pre> <p>or with namespaces</p> <pre><code>kubectl delete pods -l app=my-app -n default
</code></pre> <p>See more on <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">Kubernetes Labels and Selectors</a></p> <p><strong>Set-based selector</strong></p> <blockquote> <p>I have some pods running in the name of "superset-react" and "superset-graphql" and I want to search my wildcard superset and delete both of them in one command</p> </blockquote> <p>I suggest that those pods have labels <code>app=something-react</code> and <code>app=something-graphql</code>. If you want to classify those apps, e.g. if your "superset" varies, you could add a label <code>app-type=react</code> and <code>app-type=graphql</code> to all those types of apps.</p> <p>Then you can delete pods for both app types with this command:</p> <pre><code>kubectl delete pods -l 'app-type in (react, graphql)'
</code></pre>
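<p>As a side note on the alias itself: inside double quotes, <code>$1</code> is expanded by the shell already when the alias is defined, and the pasted alias has also lost its <code>xargs</code>. A shell function avoids both problems - a sketch:</p> <pre><code># a function instead of an alias; the single-quoted awk program keeps $1 intact
kdpgroup() {
  kubectl get pods -n bi-dev --no-headers=true \
    | awk '/group-react/{print $1}' \
    | xargs kubectl delete -n bi-dev pod
}
</code></pre>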
Jonas
<p>I'm working on a website based on Google's Kubernetes.</p> <p>Our system has a workload that is for our main website. Right now I have an uptime alert that will send me a text/email.</p> <p>Under some very rare circumstances, the site will go down and the only way to fix this is to delete the pod that the web service is running on. It'll then recreate itself and be back up and running.</p> <p>However, I have to do this manually. If I'm away from the computer, or not available for some reason, the site will stay down until I'm available to delete the pod and let it restart.</p> <p>I would like this to be automated. Is there a way to configure something like this? Some process that, if there is an uptime alert failure, it'll automatically delete the web service pod(s) so that they recreate themselves?</p>
Justin Lloyd
<blockquote> <p>I would like this to be automated. Is there a way to configure something like this? Some process that, if there is an uptime alert failure, it'll automatically delete the web service pod(s) so that they recreate themselves?</p> </blockquote> <p>If Kubernetes can detect the situation itself, you can use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">Liveness probe</a> for this, and the kubelet will restart the container if the probe fails.</p>
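<p>A minimal sketch of such a probe on the web container, assuming your service can answer on a health endpoint; the path and port are hypothetical:</p> <pre><code>livenessProbe:
  httpGet:
    path: /healthz      # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3   # restart after 3 consecutive failures
</code></pre>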
Jonas
<p>I have a single master cluster with 3 worker nodes. The master node has one network interface of 10Gb capacity and all worker nodes have a 40Gb interface. They are all connected via a switch.</p> <p>I'd like to know if this might create a bottleneck if the data between nodes have to pass through the master node?</p> <p>In general, I like to understand the communication flow between worker nodes. For instance, a pod in node1 sends data to a pod in node2, does the traffic go through the master node? I have seen the architecture diagram on the Kubernetes docs and it appears to be the case:</p> <p><a href="https://i.stack.imgur.com/RUasH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RUasH.png" alt="enter image description here" /></a> source: <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/components/</a></p> <p>If this is the case, it is possible to define a control plane network separate from the data plane by possibly adding another interface to worker nodes?</p> <p>Please note that this is a bare-metal on-prem installation with OSS Kubernetes v1.20.</p>
Muzammil
<blockquote> <p>For instance, a pod in node1 sends data to a pod in node2, does the traffic go through the master node?</p> </blockquote> <p>No. Kubernetes is designed with a flat network model. If a Pod on node A sends a request to a Pod on node B, the inter-node traffic goes directly from node A to node B, as they are on the same IP network.</p> <p>See also <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model" rel="nofollow noreferrer">The Kubernetes network model</a></p>
Jonas
<p>We are using secrets as environment variables in our pods, but every time we update the secrets we have to redeploy the pods for the change to take effect. We are looking for a mechanism where pods get restarted automatically whenever a secret gets updated. Any help on this?</p> <p>Thanks in advance.</p>
ramesh reddy
<p>There are many ways to handle this.</p> <p>First, use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> instead of &quot;naked&quot; Pods that are not managed. The Deployment will create new Pods for you when the Pod template is changed.</p> <p>Second, managing Secrets can be a bit tricky. It would be great if you can use a setup with <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#secretgenerator" rel="noreferrer">Kustomize SecretGenerator</a> - then each new <code>Secret</code> will get its unique name. In addition, that unique name is reflected to the <code>Deployment</code> automatically, and your pods will automatically be recreated when a <code>Secret</code> is changed - this matches your original problem. When <code>Secret</code> and <code>Deployment</code> are handled this way, you apply the changes with:</p> <pre><code>kubectl apply -k &lt;folder&gt;
</code></pre>
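<p>A minimal sketch of such a setup; the file and key names are hypothetical:</p> <pre><code># kustomization.yaml
resources:
- deployment.yaml
secretGenerator:
- name: my-app-secrets
  literals:
  - DB_PASSWORD=changeme    # hypothetical key/value
</code></pre> <p>The generated Secret gets a content-hash suffix (e.g. <code>my-app-secrets-5f8k...</code>), and Kustomize rewrites the reference in the Deployment, so every change to the Secret rolls out new Pods.</p>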
Jonas
<p>I just started researching Kubernetes. I have 2 Windows VMs on Hyper-V. One is a SQL server and one is an application server with a .net application. How can I migrate my SQL and APP servers to Kubernetes in an architecture where the applications keep running during failover or downtime?</p> <p>Thanks.</p>
ufukcam
<blockquote> <p>in an architecture where the applications keep running during failover or downtime</p> </blockquote> <p>To achieve this, you need to make sure that you don't have any single point of failure. Kubernetes is well architected for this kind of workload.</p> <blockquote> <p>an application server with a .net application</p> </blockquote> <p>For this to run well in a container, it would be good to rewrite your app so it runs as a single process. This can be done in many different ways, but see e.g. <a href="https://learn.microsoft.com/en-us/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/build-aspnet-core-applications-linux-containers-aks-kubernetes" rel="nofollow noreferrer">Build ASP.NET Core applications deployed to Kubernetes</a>.</p> <blockquote> <p>a SQL server</p> </blockquote> <p>This is more challenging, since older relational database systems are architected for <em>single-node systems</em>. It is easiest to run this <strong>outside</strong> your Kubernetes cluster, but with network access from your cluster. If you really want to run your relational database system in Kubernetes, you should use a <strong>distributed database system</strong>, e.g. <a href="https://www.cockroachlabs.com/" rel="nofollow noreferrer">CockroachDB</a>, which has a PostgreSQL-like SQL syntax and is designed to run on Kubernetes.</p>
Jonas
<p>We have micro-service Java applications, and whenever we have code changes we have to perform a Kubernetes deployment.</p> <p>How can I apply the latest changes to the deployment when the image name stays the same?</p> <p>We have a single replica, and when I execute <code>kubectl apply -f deployment.yaml</code> it says unchanged.</p> <p>We are on kubelet version v1.13.12.</p> <p>Please help.</p>
magic
<p>This has been discussed in <a href="https://github.com/kubernetes/kubernetes/issues/33664" rel="nofollow noreferrer">#33664</a></p> <blockquote> <p>using :latest tag IMO is not the best practice as it's hard to track what image is really in use in your pod. I think tagging images by versions or using the digests is strictly better than reusing the same tag. Is it really such a hassle to do that?</p> </blockquote> <p>The recommended way is to <strong>not</strong> use image tag <code>:latest</code> when using declarative deployment with <code>kubectl apply</code>.</p>
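<p>A sketch of the workflow with versioned tags; the registry and names are hypothetical:</p> <pre><code># build and push a uniquely tagged image
docker build -t registry.example.com/my-app:1.4.2 .
docker push registry.example.com/my-app:1.4.2

# update the tag in deployment.yaml, then apply declaratively
kubectl apply -f deployment.yaml

# or imperatively - this also triggers a rolling update
kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.4.2
</code></pre>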
Jonas
<p>When I check the definition of &quot;WebhookClientConfig&quot; in the Kubernetes API I find comments like this:</p> <pre><code>// `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate.
// If unspecified, system trust roots on the apiserver are used.
// +optional
CABundle []byte `json:&quot;caBundle,omitempty&quot; protobuf:&quot;bytes,2,opt,name=caBundle&quot;`
</code></pre> <p>in <a href="https://github.com/kubernetes/api/blob/508b64175e9264c2a4b42b1b81d2571bf036cf09/admissionregistration/v1beta1/types.go#L555" rel="nofollow noreferrer">WebhookClientConfig</a></p> <p>I wonder, what exactly are the &quot;system trust roots&quot;? And I'm afraid the internal signer for the CSR API of Kubernetes is not one of them.</p>
vincent pli
<p>It is a good practice to use secure network connections. A Webhook-endpoint in Kubernetes is typically an endpoint in a private network. A custom private CABundle can be used to generate the TLS certificate to achieve a secure <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#contacting-the-webhook" rel="nofollow noreferrer">connection</a> within the cluster. See e.g. <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#contacting-the-webhook" rel="nofollow noreferrer">contacting the webhook</a>.</p> <blockquote> <p>Webhooks can either be called via a URL or a service reference, and can optionally include a custom CA bundle to use to verify the TLS connection.</p> </blockquote> <p>This CABundle is optional. See also <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#service-reference" rel="nofollow noreferrer">service reference</a> for how to connect.</p> <blockquote> <p>If the webhook is running within the cluster, then you should use service instead of url. The service namespace and name are required. The port is optional and defaults to 443. The path is optional and defaults to &quot;/&quot;.</p> </blockquote> <blockquote> <p>Here is an example of a mutating webhook configured to call a service on port &quot;1234&quot; at the subpath &quot;/my-path&quot;, and to verify the TLS connection against the ServerName my-service-name.my-service-namespace.svc using a custom CA bundle</p> </blockquote>
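<p>A sketch of such a configuration matching the quoted description; the webhook name and rules are hypothetical and the <code>caBundle</code> value is elided:</p> <pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: my-webhook
webhooks:
- name: my-webhook.example.com
  clientConfig:
    caBundle: &lt;base64-encoded PEM bundle of your private CA&gt;
    service:
      namespace: my-service-namespace
      name: my-service-name
      path: /my-path
      port: 1234
  rules:
  - apiGroups: [&quot;&quot;]
    apiVersions: [&quot;v1&quot;]
    operations: [&quot;CREATE&quot;]
    resources: [&quot;pods&quot;]
  sideEffects: None
  admissionReviewVersions: [&quot;v1&quot;]
</code></pre>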
Jonas
<p>If I have a deployment with only a single replica defined, can I ensure that only ever one pod is running?</p> <p>I noticed that when I do something like <code>kubectl rollout</code> for a very short amount of time I will see two pods in my logs.</p>
trallnag
<blockquote> <p>If I have a deployment with only a single replica defined, can I ensure that only ever one pod is running?</p> </blockquote> <p>It sounds like you are asking for &quot;at most one Pod&quot; semantics. Also consider what happens when a Node becomes <em>unresponsive</em>.</p> <p>This is a point where <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSet</a> have different behavior.</p> <h4>Deployment</h4> <p>Has <strong>at least one</strong> Pod behavior, and may scale up new pods if it is unclear whether at least one is running.</p> <h4>StatefulSet</h4> <p>Has <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#statefulset-considerations" rel="noreferrer"><strong>at most one</strong></a> Pod behavior, and makes sure not to scale up more pods if it is unclear whether at most one is running.</p>
Jonas
<p>I am exploring different strategies for handling shutdown gracefully in case of deployment/crash. I am using the Spring Boot framework and Kubernetes. In a few of the services, we have tasks that can take around 10-20 minutes (data processing, large report generation). How do I handle pod termination in these cases, when the task is taking more time? For queuing I am using Kafka.</p>
Shivam Singh
<blockquote> <p>we have tasks that can take around 10-20 minutes (data processing, large report generation)</p> </blockquote> <p>First, this is more of a <em>Job/Task</em> than a <em>microservice</em>. But similar &quot;rules&quot; apply: the node where this job is executing might terminate for an upgrade or another reason, so your <em>Job/Task</em> must be <em>idempotent</em> and be able to be <em>re-run</em> if it crashes or is terminated.</p> <blockquote> <p>How do I handle pod termination in these cases, when the task is taking more time? For queuing I am using Kafka.</p> </blockquote> <p>Kafka is a good technology for this, because it lets the client Job/Task be <em>idempotent</em>. The job receives the data to process, and after processing it can &quot;commit&quot; that it has processed the data. If the Task/Job is terminated before it has processed the data, a new Task/Job will spawn and continue processing from the &quot;offset&quot; that is not yet committed.</p>
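<p>On the Kubernetes side, you can also raise the Pod's termination grace period, so an in-flight task gets time to finish or commit after <code>SIGTERM</code> before <code>SIGKILL</code> is sent. A minimal sketch; the default is 30 seconds and the value here is hypothetical:</p> <pre><code>spec:
  terminationGracePeriodSeconds: 1800   # 30 minutes
  containers:
  - name: worker
    image: my-app:1.0                   # hypothetical image
</code></pre> <p>Since Spring Boot 2.3 you can additionally enable graceful shutdown with <code>server.shutdown=graceful</code>, so in-flight HTTP requests are allowed to finish first.</p>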
Jonas
<p>I have 2 API backends Microservices in which each Microservice has MongoDB database. I want to deploy 2 or more instances of each of these in Kubernetes cluster on Cloud provider such as AWS.</p> <p>Each instance of Microservice runs as a container in a Pod. Is it possible to deploy MongoDB as another container in the same Pod? Or what is the best practice for this use case?</p> <p>If two or more instances of the same Microservice are running in different Pods, do I need to deploy 2 or more instances of MongoDB or single MongoDb is referenced by the multiple instances of the same Microservice? What is the best practice for this use case?</p> <p>Each Microservice is Spring Boot application. Do I need to do anything special in the Spring Boot application source code just because it will be run as Microservice as opposed to traditional Spring Boot application?</p>
ace
<blockquote> <p>Each instance of Microservice runs as a container in a Pod. Is it possible to deploy MongoDB as another container in the same Pod?</p> </blockquote> <p>Nope, then your data would be gone when you upgrade your application.</p> <blockquote> <p>Or what is the best practice for this use case?</p> </blockquote> <p>Databases in a modern production environment are run as <strong>clusters</strong> - for availability reasons. It is best practice to either use a managed service for the database or run it as a cluster with e.g. 3 instances on different nodes.</p> <blockquote> <p>If two or more instances of the same Microservice are running in different Pods, do I need to deploy 2 or more instances of MongoDB or single MongoDb is referenced by the multiple instances of the same Microservice?</p> </blockquote> <p>All your instances of the microservice should access the same database cluster, otherwise they would see different data.</p> <blockquote> <p>Each Microservice is Spring Boot application. Do I need to do anything special in the Spring Boot application source code just because it will be run as Microservice as opposed to traditional Spring Boot application?</p> </blockquote> <p>Spring Boot is designed according to <a href="https://12factor.net/" rel="nofollow noreferrer">The Twelve Factor App</a> and hence is designed to be run in a cloud environment like e.g. Kubernetes.</p>
Jonas
<p>I am trying to setup Kubernetes for my company. I have looked a good amount into Jenkins X and, while I really like the roadmap, I have come the realization that it is likely not mature enough for my company to use at this time. (<a href="https://docs.cloudbees.com/docs/cloudbees-jenkins-x-distribution/latest/user-interface/" rel="nofollow noreferrer">UI in preview</a>, <a href="https://github.com/jenkins-x/jx/issues/6352" rel="nofollow noreferrer">flaky command line</a>, <a href="https://github.com/jenkins-x/jx/issues/6398" rel="nofollow noreferrer">random IP address needs</a> and poor windows support are a few of the issues that have lead me to that conclusion.)</p> <p>But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.</p> <p>But I am not sure about gitops support. When I try to google it (<code>gitops jenkins</code>) I get a bunch of information that includes Jenkins X.</p> <p><strong>Is there an easy(ish) way for normal Jenkins to use GitOps? If so, how?</strong></p> <p><strong>Update:</strong><br> By GitOps, I mean something similar to what Jenkins X supports. (Meaning changes to the cluster stored in a Git repository. And merging causes a deployment.)</p>
Vaccano
<blockquote> <p>I mean something similar to what Jenkins X supports. (Meaning changes to the cluster stored in a Git repository. And merging causes a deployment.)</p> </blockquote> <p>Yes, this is what Jenkins (and other CI/CD tools) can do. You can declare a deployment pipeline in a <a href="https://jenkins.io/doc/book/pipeline/jenkinsfile/" rel="nofollow noreferrer">Jenkinsfile</a> that is triggered on merge (commit to master) and have other steps for other branches (if you want).</p> <p>I recommend deploying with <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kubectl using kustomize</a> and storing the config files in your Git repository. You <em>parameterize</em> different environments, e.g. staging and production, with <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#bases-and-overlays" rel="nofollow noreferrer">overlays</a>. You may e.g. deploy with only 2 replicas in staging but with 6 replicas and more memory resources in production.</p> <p>Using Jenkins for this, I would create a <a href="https://jenkins.io/doc/book/pipeline/docker/" rel="nofollow noreferrer">docker agent image</a> with <code>kubectl</code>, so your <em>steps</em> can use the <code>kubectl</code> command line tool, as sketched below.</p> <p><strong>Jenkins on Kubernetes</strong></p> <blockquote> <p>But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.</p> </blockquote> <p>I have not had the best experience with this. It may work - or it may not work so well. I currently host Jenkins outside the Kubernetes cluster. I think that <a href="https://jenkins-x.io/blog/2019/02/19/jenkins-x-next-gen-pipeline-engine/" rel="nofollow noreferrer">Jenkins X</a> together with <a href="https://tekton.dev/" rel="nofollow noreferrer">Tekton</a> may be an upcoming promising solution for this, but I have not tried that setup.</p>
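<p>A sketch of such a Jenkinsfile, assuming a docker agent image with <code>kubectl</code> and cluster credentials available; the image name and overlay path are hypothetical:</p> <pre><code>// deploy on merge to master
pipeline {
  agent { docker { image 'my-registry/kubectl-agent:latest' } }
  stages {
    stage('Deploy to production') {
      when { branch 'master' }
      steps {
        sh 'kubectl apply -k overlays/production'
      }
    }
  }
}
</code></pre>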
Jonas
<p>I am testing stateful sets with replicas. Is there a way to force a service on each replica? For example, if I refer to the following note:</p> <p><a href="https://itnext.io/introduction-to-stateful-services-kubernetes-6018fd99338d" rel="nofollow noreferrer">https://itnext.io/introduction-to-stateful-services-kubernetes-6018fd99338d</a></p> <p>It shows a headless service created on top of the pods. I do not have a way to force the connection to the first pod, i.e. pod-0, or the 2nd pod, i.e. pod-1.</p>
drifter
<p>You can access the pods directly, or you can create headless services as you mention. This headless service is not created automatically; it is up to you to create it.</p> <blockquote> <p>you are responsible for creating the Headless Service responsible for the network identity of the pods.</p> </blockquote> <p>From <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">StatefulSet - Stable Network Identity</a></p> <p>Also see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">StatefulSet Basics - Headless Services</a> on how to create headless services, by setting <code>clusterIP: &quot;None&quot;</code></p>
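<p>A minimal sketch of such a headless Service, assuming the StatefulSet's pods carry the label <code>app: my-app</code>; names and port are hypothetical:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-statefulset
spec:
  clusterIP: None      # this is what makes the Service headless
  selector:
    app: my-app
  ports:
  - port: 80
</code></pre> <p>Each pod then gets a stable DNS name like <code>pod-0.my-statefulset.&lt;namespace&gt;.svc.cluster.local</code>, so you can connect to pod-0 or pod-1 directly.</p>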
Jonas
<p>I've created a simple K8s deployment with the <code>kubectl create</code> command</p> <pre><code>kubectl create -f k8-deployment.yaml
</code></pre> <p>My <code>k8-deployment.yaml</code> file looks like this</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mage-di
  name: mage-di
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mage-di
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mage-di
    spec:
      containers:
      - image: astorm/mage-di
        name: mage-di
        imagePullPolicy: Never
        resources: {}
status: {}
</code></pre> <p>This results in a single pod being started.</p> <p>I want to tell my k8 cluster that more pods are needed to handle an expected traffic spike.</p> <p>How should I do this? If I look at <code>kubectl help</code> I see there's an <code>edit</code> command that allows me to edit a deployment object's configuration, but this requires an interactive editor. Also, I'm new to K8s and I'm unsure if editing a deployment in place and updating its replica count is enough to trigger the <em>proper</em> creation of new pods. If I look at other <code>kubectl</code> commands I see there's also a <code>rollout</code>, <code>apply</code> and <code>patch</code> command that might do what I want.</p> <p>Is there a canonically accepted way to do what I want, or is K8s the sort of tech where I just need to experiment and hope for the best?</p>
Alana Storm
<p>You can do this in two ways: either <em>imperative</em> (a quick command) or <em>declarative</em> (good for a production environment where you store your Deployment manifest in Git).</p> <p><strong>Imperative way</strong> (this will then diverge from what you have in your yaml-file):</p> <pre><code>kubectl scale deployment mage-di --replicas=2
</code></pre> <p><strong>Declarative way</strong>, edit this line in your Yaml file:</p> <pre><code>replicas: 2
</code></pre> <p>then apply it to the cluster with:</p> <pre><code>kubectl apply -f k8-deployment.yaml
</code></pre> <p>See also:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="noreferrer">Declarative config management</a></li> <li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/" rel="noreferrer">Imperative config management with commands</a></li> </ul>
Jonas
<p>I have a service with a single-threaded writer and multi-threaded readers of a file, deployed on k8s. Now I want to take advantage of k8s persistent storage to save the huge load time between pod restarts, by moving the file (1 writer, multiple readers) to k8s persistent storage with the local storage type. How would this affect my file lock?</p> <p>I researched a lot online, and there is not much mention of how multi-threaded access works on a persistent volume. I hope I can get some pointers on whether multi-threaded access would even work on a persistent volume.</p>
Yituo
<blockquote> <p>by moving the file (1 writer, multiple readers) to k8s persistent storage with the local storage type. How would this affect my file lock?</p> </blockquote> <p>In both cases the application interacts with the <strong><em>file on the filesystem</em></strong>. So there will be no logical difference for your application.</p>
Jonas
<p>I'm running a kubernetes cluster of 20+ nodes. And one pod in a namespace got restarted. The pod got killed due to OOM with exit code 137 and restarted again as expected. But would like to know the node in which the pod was running earlier. Any place we could check the logs for the info? Like tiller, kubelet, kubeproxy etc...</p>
vijay v
<blockquote> <p>But would like to know the node in which the pod was running earlier.</p> </blockquote> <p>If a pod is killed with <code>ExitCode: 137</code>, e.g. when it used more memory than its limit, it will be restarted on the same node - not re-scheduled. For this, check your metrics or container logs.</p> <p>But Pods can also be killed due to over-committing a node, see e.g. <a href="https://sysdig.com/blog/troubleshoot-kubernetes-oom/" rel="nofollow noreferrer">How to troubleshoot Kubernetes OOM and CPU Throttle</a>.</p>
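<p>For the current pod you can check directly with kubectl; a sketch, with a hypothetical pod name:</p> <pre><code># shows the node the pod is scheduled on
kubectl get pod my-pod -o wide

# shows the restart count and the last state, e.g. Reason: OOMKilled, Exit Code: 137
kubectl describe pod my-pod
</code></pre>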
Jonas
<p>Let's say I have a web application backend that I want to deploy with the help of Kubernetes. How exactly does scaling work in this case?</p> <p>I understand scaling in Kubernetes as: we have a master node that orchestrates multiple worker nodes, where each of the worker nodes runs 0-n different containers with the same image. My question is, if this is correct, how does Kubernetes deal with the fact that the same application uses the same port within one worker node? Does the request reach the master node, which then handles this problem internally?</p>
jonithani123
<blockquote> <p>Does the request reach the master node, which then handles this problem internally?</p> </blockquote> <p>No, the master nodes do not handle traffic for your apps. Typically traffic meant for your apps arrives at a load balancer or gateway, e.g. <a href="https://cloud.google.com/load-balancing" rel="nofollow noreferrer">Google Cloud Load Balancer</a> or <a href="https://aws.amazon.com/elasticloadbalancing/" rel="nofollow noreferrer">AWS Elastic Load Balancer</a>; the load balancer then forwards the request to a replica of a matching service - this is managed by the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress resource</a> in your cluster.</p> <p>The master nodes - the control plane - are only used for management, e.g. when you deploy a new image or service.</p> <blockquote> <p>how does Kubernetes deal with the fact that the same application uses the same port within one worker node?</p> </blockquote> <p>Kubernetes uses a container runtime for your containers. You can try this on your own machine, e.g. when you use <a href="https://www.docker.com/" rel="nofollow noreferrer">docker</a>: you can create multiple containers (instances) of your app, all listening on e.g. port 8080. This is a key feature of containers - they provide network isolation.</p> <p>On Kubernetes, all containers are tied together with a custom container networking. How this works depends on what <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">Container Networking Interface</a>-plugin you use in your cluster. Each Pod in your cluster will get its own IP address. All your containers can listen to the same port, if you want - this is an abstraction.</p>
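<p>For example, with docker on your own machine; the image name is hypothetical:</p> <pre><code># two instances of the same image; both listen on container port 8080,
# exposed on different host ports
docker run -d --name app-1 -p 8081:8080 my-app:1.0
docker run -d --name app-2 -p 8082:8080 my-app:1.0
</code></pre>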
Jonas
<p>I am looking to write to a file on a Kubernetes pod through Elixir code. The pod starts off when exec'ing in at /opt/app and I want to write a file to some path like /opt/app/file_tmp.</p> <p>Currently I am hard coding in the file writing the string &quot;/opt/app/&quot; and was wondering if there is a better way to get the Kubernetes home directory so I don't need to hard code this string within the application?</p> <p>Is there something env variable I can use like K8S_HOME or similar?</p>
William Ross
<blockquote> <p>wondering if there is a better way to get the Kubernetes home directory so I don't need to hard code this string within the application?</p> </blockquote> <p>Kubernetes is the container orchestrator. Your app runs in a container with its filesystem. You set the &quot;working directory&quot; in your Dockerfile with <a href="https://docs.docker.com/engine/reference/builder/#workdir" rel="nofollow noreferrer">WORKDIR</a>.</p>
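<p>A sketch of such a Dockerfile, assuming an Elixir base image; your app can then resolve paths relative to the working directory (e.g. with <code>File.cwd!/0</code>) instead of hard coding them:</p> <pre><code>FROM elixir:1.12
WORKDIR /opt/app    # the current working directory your code sees
COPY . .
</code></pre>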
Jonas
<p>This is just a general question about microservice architecture. Why do 2 or more internal services still need token auth like oauth2 to communicate with each other if the outside world doesn't have access to them? Couldn't their apis just filter internal IP addresses instead? What are the risks with that approach?</p>
u84six
<blockquote> <p>Why do 2 or more internal services still need token auth like oauth2 to communicate with each other if the outside world doesn't have access to them?</p> </blockquote> <p>You don't <em>need</em> OAuth2 or token authentication, but you <em>should</em> use it. It depends on how much you <em>trust</em> your traffic. Now in the "cloud age" it is common to not own your own datacenters, so there is another party that owns your server and network hardware. That party may make a misconfiguration, e.g. traffic from another customer is routed to your server. Or maybe you set up your own infrastructure and make a misconfiguration so that traffic from your test environment is unintentionally routed to your production service. There are new practices to handle this new landscape, described in <a href="https://cloud.google.com/beyondcorp" rel="noreferrer">Google BeyondCorp</a> and <a href="https://rads.stackoverflow.com/amzn/click/com/B072WD347M" rel="nofollow noreferrer">Zero Trust Networks</a>.</p> <p>Essentially, you should not trust the network traffic. Use <em>authentication</em> (e.g. OAuth2, OpenID Connect, JWT) on all requests, and encrypt all traffic with TLS or mTLS.</p> <blockquote> <p>Couldn't their apis just filter internal IP addresses instead? What are the risks with that approach?</p> </blockquote> <p>See above: maybe you should not trust the internal traffic either.</p> <p>In addition, it is now common that your end-users are authenticated using <em>OpenID Connect</em> (OAuth2-based authentication) - JWT-tokens sent in the <code>Authorization: Bearer</code> header. Most of your system will operate in a <em>user context</em> when handling the request; that context is carried within the JWT-token, and it is easy to pass that token in requests to all services that are involved in the operation requested by the user.</p>
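<p>For example, a service-to-service call that forwards the user's token; the URL is hypothetical:</p> <pre><code># TOKEN holds the JWT from the incoming request's Authorization header
curl -H &quot;Authorization: Bearer $TOKEN&quot; https://orders.internal.example.com/api/orders
</code></pre>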
Jonas
<p>I am quite new to Tekton.</p> <p>I am currently facing an issue - replication of pods using Tekton.</p> <p>What do I want to achieve?</p> <ul> <li>I want to create a pipeline with two tasks.</li> <li>First task creates an echo hello pod</li> <li>Second task creates an echo goodbye pod.</li> <li>Both pods need to have 2 replicas.</li> </ul> <p>Error - unknown field &quot;replicas&quot; while running the tasks or pipeline.</p> <p>I have tried to add replicas in the spec section for both tasks and the pipeline, but it does not work. <strong>Any idea where I went wrong?</strong></p> <p>Here are my scripts - First task:</p> <pre><code>kind: Task
metadata:
  name: hello
spec:
  replicas: 2
  steps:
  - name: hello
    image: ubuntu
    command:
    - echo
    args:
    - &quot;Hello World!&quot;
</code></pre> <p>Second task:</p> <pre><code>kind: Task
metadata:
  name: goodbye
spec:
  replicas: 2
  steps:
  - name: goodbye
    image: ubuntu
    script: |
      #!/bin/bash
      echo &quot;Goodbye World!&quot;
</code></pre> <p>Pipeline script:</p> <pre><code>kind: Pipeline
metadata:
  name: hello-goodbye
spec:
  replicas: 2
  tasks:
  - name: hello
    taskRef:
      name: hello
  - name: goodbye
    runAfter:
    - hello
    taskRef:
      name: goodbye
</code></pre>
Shresth Suman
<p>There is no such thing as &quot;replicas&quot; in Tekton Pipelines.</p> <p>A Tekton Pipeline is a pipeline of Tasks that execute in a <a href="https://en.wikipedia.org/wiki/Directed_acyclic_graph" rel="nofollow noreferrer"><em>directed acyclic graph</em></a>.</p>
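<p>The <code>replicas</code> field under <code>spec</code> is what triggers the <code>unknown field</code> error. A valid version of the first Task simply drops it - a sketch; to run a Task several times you create several TaskRuns (or PipelineRuns) instead:</p> <pre><code>apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
  - name: hello
    image: ubuntu
    command:
    - echo
    args:
    - &quot;Hello World!&quot;
</code></pre>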
Jonas
<p>What is the correct way of memory handling in OpenShift/Kubernetes?</p> <p>If I create a project in OKD, how can I determine optimal memory usage of pods? For example, if I use 1 deployment for 1-2 pods and each pod uses 300-500 MB of RAM - Spring Boot apps. So technically, 20 pods use around 6-10GB RAM, but as I see, sometimes each project could have around 100-150 containers, which need at least 30-50GB of RAM.</p> <p>I also tried horizontal scaling and/or requests/limits, but each micro-service still uses a lot of memory.</p> <p>However, starting a pod requires around 500-700MB RAM; after the Spring container has started, it can live with around 300MB as mentioned.</p> <p>So, I have 2 questions:</p> <ul> <li>Is it possible to give extra memory, but only for the first X minutes after each pod start?</li> <li>If not, then what is the best practice to handle memory shortage, if I have limited memory (16GB) and want to run 35-40 pods?</li> </ul> <p>Thanks for the answer in advance!</p>
tarcali
<blockquote> <p>Is it possible to give extra memory, but only for the first X minutes after each pod start?</p> </blockquote> <p>You do get this behavior when you set the <strong>limit</strong> to a higher value than the <strong>request</strong>. This allows pods to burst, unless they all need the memory at the same time.</p> <blockquote> <p>If not, then what is the best practice to handle memory shortage, if I have limited memory (16GB) and want to run 35-40 pods?</p> </blockquote> <p>It is common to use some form of cluster autoscaler to add more nodes to your cluster if it needs more capacity. This is easy if you run in the cloud.</p> <p>In general, Java and the JVM are memory hungry; consider some other technology if you want to use less memory. How much memory an application needs/uses totally depends on your application, e.g. what data structures are used.</p>
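<p>A sketch of such a burstable request/limit pair; the numbers are hypothetical and should come from your own measurements:</p> <pre><code>resources:
  requests:
    memory: &quot;350Mi&quot;   # what the scheduler reserves per pod
  limits:
    memory: &quot;700Mi&quot;   # hard cap; leaves room to burst during startup
</code></pre>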
Jonas
<p>I am writing an Ansible playbook right now that deploys a dockerized application to Kubernetes. However, for modularity purposes I would rather not hard code the files that need to be applied after doing <code>kompose convert -f docker-compose.yaml --volumes hostPath</code>. Is there a way to apply all the files in a directory?</p>
James Ukilin
<p>You can apply all files in a folder with</p> <pre><code>kubectl apply -f &lt;folder&gt; </code></pre> <p>You may also be interested in <em>parameterization</em> of your manifest files using <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="noreferrer">Kustomize</a> e.g. use more replicas in a prod-namespace than in a test-namespace. You can apply parameterized manifest files with</p> <pre><code>kubectl apply -k &lt;folder&gt; </code></pre>
Jonas
<p>I am trying to create a cli tool for kubernetes. I need to generate Bearer Token for communicating with kubernetes API. How can I generate the token from Kubeconfig File? I do not want to use external library or kubectl.<br><br> Here is example Kubeconfig File:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01USXhNREU1TVRReU0xb1hEVE13TVRJd09ERTVNVFF5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUE4CmhEcDBvRVUzNTFFTEVPTzZxd3dUQkZ2U2ZxWWlGOE0yR0VZMXNLRFZ0MUNyL3czOS83QkhCYi9NOE5XaW9vdlQKZ2hsZlN2TXhsaTBRUVRSQmd5NHp2ZkNveXdBWkg0dWphYTBIcW43R2tkbUdVUC94RlZoWEIveGhmdHY5RUFBNwpMSW1CT3dZVHJ6ajRtS0JxZ3RTenhmVm5hN2J2U2oxV203bElYaTNaSkZzQmloSFlwaXIwdFZEelMzSGtEK2d0Cno1RkhOU0dnSS9MTlczOWloTU1RQ0g0ZFhtQVVueGFmdFdwUlRQOXFvSHJDWTZxVlNlbEVBYm40UWZVZ2ZUaDEKMUNhdW01bllOUjlDZ3lPOStNY0hXMTdYV0c4NGdGV3p6VUxPczVXbUo0VVY4RjdpdkVhMVJlM2Q3VkpKUEF6VwpCME4rWFFmcXg5UTArRWlXWklVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZBV0p0Y2RLYjRRQWU2ekw4NzdvN3FQNVVWNWZNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCYWt3bE1LL2VmWUpyNlVlWEpkenBURmRaS0lYbWFEaWxhZ3ZNOGNkci9nVjJlWVlEdgpRY3FvcUwvNS95U3Y1T2ZpR0MrU25nUXhZMHp0a0VVQm04N1NOR1dYLzd1VlUwbytVV2tzZERLR2JhRmxIVE9PCmFBR3dndEZ4T1YzeTF1WnZJVm8vbW12WTNIMTBSd29uUE8yMU5HMEtRWkRFSStjRXFFb1JoeDFtaERCeGVSMUgKZzdmblBJWTFUczhWM2w0SFpGZ015anpwVWtHeUNjMVYxTDk5Vk55UHJISEg0L1FibVM5UWdkNUNWZXNlRm9HaApOVkQ4ZHRjUmpWM2tGYVVJelJ6a3lRMG1FMXk1RXRXMWVZZnF4QnAxNUN3NnlSenNWMzcrdlNab0pSS1FoNGw4CjB1b084cFhCMGQ4V1hMNml0UWp2ZjJOQnBnOU1nY0Q2QzEvZgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== server: https://192.168.1.18:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubernetes-admin name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJYldUcHpDV25zTVl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURFeU1UQXhPVEUwTWpOYUZ3MHlNVEV5TVRBeE9URTBNalZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBGT09JcnZiTGd1SWJmVXUKd29BaG5SaktEQVFCdkp3TlliSWZkSlNGSFBhY1ljbmVUcUVSVXFZeEs4azFHRytoR0FDTlFPb2VNV3Q1anNjRwpuN0FFdHhscUJQUzNQMzBpMVhLSmZnY2Q1OXBxaG1kOVFIdFNOVTlUTVlaM2dtY0x4RGl1cXZFRGI0Q042UTl6CkI3Yk5iUDE4Y3pZdHVwbUJrY2plMFF1ZEd2dktHcWhaY1NkVFZMT3ErcTE0akM4TTM5UmgzdDk1ZEM2aWRYaUsKbWE3WGs5YnJtalJnWDZRVUJJc0xwTnkvc3dJaUFiUTlXSm1YL2VkdHhYTGpENllNK1JzQ0JkbGc5MEhhcURqdgpKSlcwQ2g4cDJkV1ZwalQrWjBMd2ZnUENBN1YzS1o4YWdldHhwQ0xQcmxlOTdnRStVM1BKbXJVY0lBaVJlbzFoCmsvOXVqUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVCWW0xeDBwdmhBQjdyTXZ6dnVqdW8vbFJYbDh3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFDeXVKazdjdVppdzhmQW5teUdSa0trdFAzUE5LUnBCdDVnUVdjUzJuRFUrTmpIMjh1MmpGUDQ5Cm1xbjY1SGpmQU9iOVREUUlRcUtZaWdjYTViOXFYRXlDWHZEN1k1SXJ4RmN3VnEvekdZenFYWjVkR0srUnlBUlQKdm0rQzNaTDV0N2hJc1RIYWJ0SkhTYzhBeFFPWEdTd1h0YkJvdHczd2ZuSXB0alY1SG1VYjNmeG9KQUU4S1hpTgpHcXZ5alhpZHUwc1RtckszOHM5ZjZzTFdyN1lOQTlKNEh4ditkNk15ZFpSWDhjS3VRaFQzNDFRcTVEVnRCT1BoCjBpb1Mwa0JEUDF1UWlIK0tuUE9MUmtnYXAyeDhjMkZzcFVEY1hJQlBHUDBPR1VGNWFMNnhIa2NsZ0Q5eHFkU0cKMVlGVjJUamtjNHN2U1hMSkt1cmU1S2IrODcyQlZWWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMEZPT0lydmJMZ3VJYmZVdXdvQWhuUmpLREFRQnZKd05ZYklmZEpTRkhQYWNZY25lClRxRVJVcVl4SzhrMUdHK2hHQUNOUU9vZU1XdDVqc2NHbjdBRXR4bHFCUFMzUDMwaTFYS0pmZ2NkNTlwcWhtZDkKUUh0U05VOVRNWVozZ21jTHhEaXVxdkVEYjRDTjZROXpCN2JOYlAxOGN6WXR1cG1Ca2NqZTBRdWRHdnZLR3FoWgpjU2RUVkxPcStxMTRqQzhNMzlSaDN0OTVkQzZpZFhpS21hN1hrOWJybWpSZ1g2UVVCSXNMcE55L3N3SWlBYlE5CldKbVgvZWR0eFhMakQ2WU0rUnNDQmRsZzkwSGFxRGp2SkpXMENoOHAyZFdWcGpUK1owTHdmZ1BDQTdWM0taOGEKZ2V0eHBDTFBybGU5N2dFK1UzUEptclVjSUFpUmVvMWhrLzl1alFJREFRQUJBb0lCQUEvclVxRTAyYnJiQnNIZwpTb0p5YUI4cEZjZDFSdXl5d0JNSEdZQS9HU3p0YTJYTmx6OUs3NWZ4T3pDdFgzRk9sbkRQR2Z3cjU4Sy9BN3IxCldudzVaeUxXdmxOQ24vNHFBYzl0d1RQd04walFWL09OVlBUb2Q0KzdVQkFveGxrZ3ByV0gzMUVRdWNKN2dGeWUKNFp0bFRLMVhjWHNjV01JNW1MMGJMR3V0QjRSWU5meHAwZ1AxekJ6Z2FLYjVGK2xVcFdHZ2w1dHNHay9ncm9uSwpUVkVCQmtBT0lyU0pFemc5YUJ2emJMS0h3TnZlL1QrVEdJTGVZalpRYVkxL1lLN2JpbFVkaFlQOGI2OWhxbFZnClVxc0hpRjVXNzYzenMrdXl5azNtUU1yblJKQ2ZUWDNTRWhOVm1BdTl0TXh2eE1BRk9QT1lLb3FPb25LNHdrZWwKU21HUHBnRUNnWUVBNjJhMjdWdlgrMVRlellIWmZWSW8rSi8welVmZERqZ0MvWG1zTlBWdkhXdXlkOUVRQ1JXKwpOS1FpOGdMWmNUSEpWU3RidkpRVENSQUdCL0wzM09SUTI5Tm1KNnVVUWNNR0pBSzhwckdLKytwTXF3NHRPdzMvCkhDblVQZGVaSGFVVVFnODVJeWMrbmg5QnFQWndXclk3REZEbENUOXI5cVZJN1RvS0ptd2RjdlVDZ1lFQTRvNVUKZDZXdFpjUk5vV041UUorZVJkSDRkb2daQnRjQ0ExTGNWRDdxUzYrd0s2eTdRU05zem9wWTc1MnlLWU91N2FCWQo2RlhDQVRHaG0ranN6ZE14allrV2ROdGxwbDZ4ejZRZmN6ZWgydjVUQVdpRkZyMTlqU1RkLzNrRlFDNytpeUQyCnZRSHpacXZZSUhtQ3VleldHRFJrVVB2dzk1dTFranphcEZCRHZqa0NnWUJXZUpLMXVra3FiOUN3V1FTVmZuckMKYWErNVFLNjVMR1ljeW5jeHRQNnVKZ09XODlzYUd6eVZoYjI0Zk1kM1J6eVg1cWQ2TEVLWno2TUhoSDc4UzNwUQpaZVZlcVM1NndiTWR3MHVkU0JhdjF5OTJubXlMQnVjeFowUXB1MnJwY3R4d0w3dGphR1VlSElrNEVkN1AwNlQ1Ckx6WVRJWkw5TlZZR25vMWY4OU1WaVFLQmdRQ2RKQjNnYzNGSEloYTZkM1cxNWtEd3FzZ001eTk4dUF0MFpMZmcKVTFkTnNnbWU4WXRjamdhOVorWnlKVTViVHpRNUxEd2V3c1R5OFFyb1NuSmQvVHZrc1E1N2RXWVhOSjFlcWJjSwp3cTZvYURrSXhBZDBFM0VQUW1BZEFFTXRGcXVGc3hLUlhOWUlBKysvN3FoRzc4ZzhON0xSSFQ4eGI3Wk1QWnRsCjF5cDF1UUtCZ0VGemtmR3VzeGxJU2xOY1VDUGFTUUx6bTZqYmdjdUlXcjZaN053R01pVHM3b2x5TnQrdnpiR
nMKbnk5d1pnbHlsS0M2NjcreXpIa0tkbnZBdWRuS290bDhybzRCOVhjUHhGWDJ5NnpwZWIxWS91STZpVzl4Y2NSNQozbUlVS2QrOGdMczRrTUttL2dXYjZxTHdPZ3pjQWJIbTV6SVhBMXQ5TUJWYlE2ZHEvMlZDCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== </code></pre>
anilkuscu
<blockquote> <p>I need to generate Bearer Token for communicating with kubernetes API</p> </blockquote> <p>You cannot &quot;generate&quot; these tokens. They are <em>issued</em> by the control plane and signed with the <em>private key</em> that the control plane holds. It would be a security hole if you could generate these on the client side.</p>
Jonas
<p>For example, I want to list all the possible values I can put in the yaml file to create a <code>Pod</code>. I also want to look at the meanings of those values, e.g. in the containers section where you put in the details of the container.</p>
james pow
<p>You can see field documentation for a Pod with</p> <pre><code>kubectl explain Pod </code></pre> <p>then you can look deeper into the structure with e.g. (or any other field)</p> <pre><code>kubectl explain Pod.spec </code></pre>
Jonas
<p>I am trying to run my Intellij Scala project in the cloud using GCP. I am currently trying to use Kubernetes but I am following <a href="https://cloud.google.com/kubernetes-engine/docs/quickstarts/deploy-app-container-image" rel="nofollow noreferrer">this tutorial</a> and am not finding any support for Scala. How would I run my project in the cloud?</p>
Elias Fizesan
<blockquote> <p>I am currently trying to use Kubernetes</p> </blockquote> <p>You deploy docker containers to Kubernetes.</p> <blockquote> <p>I am following this tutorial and am not finding any support for Scala.</p> </blockquote> <p>You can use any language in a docker container. In the link, only some example apps are shown for some popular languages.</p> <p>What you have to do is figure out how to build a docker container with your app. Then you can test-run that container with docker on your local machine. When that works, you can deploy the container to Kubernetes on GCP.</p>
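<p>For a Scala app, a sketch of such a Dockerfile, assuming you build a fat jar with sbt-assembly; paths and versions are hypothetical:</p> <pre><code>FROM openjdk:11-jre-slim
COPY target/scala-2.13/my-app-assembly-0.1.jar /app.jar
ENTRYPOINT [&quot;java&quot;, &quot;-jar&quot;, &quot;/app.jar&quot;]
</code></pre>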
Jonas
<p>Currently I am using <code>Jenkins on kubernetes</code> and want to migrate to <code>tekton</code> because I am trying to achieve CI steps as code (similar to a Helm chart for CD steps). Just wondering about the Tekton architecture -</p> <p>why does every task create a different pod rather than creating different containers in a single pod? Creating multiple pods leads to resource locking, as every pod will hold CPU/memory (by default) until the pod receives SIGTERM.</p>
Ashish Kumar
<blockquote> <p>Just wondering about the Tekton architecture - why does every task create a different pod rather than creating different containers in a single pod?</p> </blockquote> <p>This is a design choice that was made early in the project. They want <em>Tasks</em> to be a reusable component with, e.g., parameters and results.</p> <p>But you are right that it leads to problems when it comes to resource allocation. There is ongoing work on executing a whole <em>Pipeline</em> within a single Pod, see <a href="https://github.com/tektoncd/community/blob/main/teps/0044-data-locality-and-pod-overhead-in-pipelines.md" rel="nofollow noreferrer">Tekton Enhancement Proposal 44</a></p>
Jonas
<p>I have a scenario where my cluster consists of two microservices.</p> <p>In service <strong>A</strong> I have a .CSV (15MB) file, which is also needed in service <strong>B</strong>.</p> <p>I don't want to place a copy of this file in each repo.</p> <p>During deployment of service <strong>A</strong> I want to place this .csv file in some kind of shared volume that the pod containing service <strong>B</strong> can consume and process. Any ideas and best practices how to do it?</p> <p>Best regards</p>
novja
<p>The easiest solution would be to build the file into the docker image.</p>
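<p>E.g. keep the .csv in one place (a shared build context or published artifact) and copy it into both images at build time - a sketch, with hypothetical paths:</p> <pre><code># in each service's Dockerfile
COPY shared/data.csv /opt/data/data.csv
</code></pre>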
Jonas
<p>I have 2 Deployments - A (1 replica) and B (4 replicas).</p> <p>I have a scheduled job in POD A, and on successful completion it hits an endpoint in one of the Pods from Deployment B through the service for Deployment B.</p> <p>Is there a way I can hit all the endpoints of 4 PODS from Deployment B on successful completion of job?</p> <p>Ordinarily only one of the pods is notified! But is this possible as I don't want to use pub-sub for this.</p>
Arpan Sharma
<blockquote> <p>Is there a way I can hit all the endpoints of 4 PODS from Deployment B on successful completion of job?</p> <p>But is this possible as I don't want to use pub-sub for this.</p> </blockquote> <p>As you say, a pub-sub solution is best for this problem. But you don't want to use it.</p> <p><strong>Use stable network identity for Service B</strong></p> <p>To solve this without pub-sub, you need a stable network identity for the pods in <em>Deployment B</em>. To get this, you need to change to a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> for your service B.</p> <blockquote> <p>StatefulSets are valuable for applications that require one or more of the following.</p> <ul> <li>Stable, unique network identifiers.</li> </ul> </blockquote> <p>When B is deployed with a StatefulSet, your job or other applications can reach the pods of B with a stable network identity that is the same for every version of service B that you deploy. Remember that you also need to deploy a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless Service</a> for your pods; see the sketch below.</p> <p><strong>Scatter pattern:</strong> You can have an application-aware proxy (e.g. aware of the number of pods of <em>Service B</em>), possibly as a sidecar. Your job sends the request to this proxy. The proxy then sends a request to all your replicas. As described in <a href="https://rads.stackoverflow.com/amzn/click/com/1491983647" rel="nofollow noreferrer">Designing Distributed Systems: Patterns and Paradigms</a></p> <p><strong>Pub-Sub or Request-Reply</strong></p> <p>If using <em>pub-sub</em>, the job only publishes an event. Each pod in B is responsible for subscribing.</p> <p>In a <em>request-reply</em> solution, the job or a proxy is responsible for watching what pods exist in service B (unless it is a fixed number of pods); in addition, it needs to send a request to all of them, and if a request to any pod fails (this will happen during deployments sometimes), it is responsible for retrying the request to those pods.</p> <p>So, yes, it is a much more complicated problem in a request-reply way.</p>
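<p>As an illustration of the StatefulSet approach: with a StatefulSet named <code>service-b</code> and a headless Service of the same name, the job can fan out over the stable DNS names - a sketch; names, namespace, port and path are hypothetical:</p> <pre><code>for i in 0 1 2 3; do
  curl http://service-b-$i.service-b.default.svc.cluster.local:8080/on-job-complete
done
</code></pre>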
Jonas
<p>I created the below StatefulSet on microk8s:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql13
spec:
  selector:
    matchLabels:
      app: postgresql13
  serviceName: postgresql13
  replicas: 1
  template:
    metadata:
      labels:
        app: postgresql13
    spec:
      containers:
      - name: postgresql13
        image: postgres:13
        imagePullPolicy: Always
        ports:
        - containerPort: 5432
          name: sql-tcp
        volumeMounts:
        - name: postgresql13
          mountPath: /data
        env:
        - name: POSTGRES_PASSWORD
          value: testpassword
        - name: PGDATA
          value: /data/pgdata
  volumeClaimTemplates:
  - metadata:
      name: postgresql13
    spec:
      storageClassName: &quot;microk8s-hostpath&quot;
      accessModes: [&quot;ReadWriteOnce&quot;]
      resources:
        requests:
          storage: 1Ki
</code></pre> <p>In the <code>volumeClaimTemplates</code> I gave it only 1Ki (this is one KiB, right?). But the DB started normally, and when I run <code>kubectl exec postgresql13-0 -- df -h</code> on the pod I get this</p> <pre><code>Filesystem                         Size  Used Avail Use% Mounted on
overlay                             73G   11G   59G  15% /
tmpfs                               64M     0   64M   0% /dev
/dev/mapper/ubuntu--vg-ubuntu--lv   73G   11G   59G  15% /data
shm                                 64M   16K   64M   1% /dev/shm
tmpfs                              7.7G   12K  7.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                              3.9G     0  3.9G   0% /proc/acpi
tmpfs                              3.9G     0  3.9G   0% /proc/scsi
tmpfs                              3.9G     0  3.9G   0% /sys/firmware
</code></pre> <p>Isn't it supposed to not use more than what the PVC has? I intentionally set the storage class <code>AllowVolumeExpansion: False</code>.</p> <p>What am I missing here?</p>
frisky5
<blockquote> <p>Isn't it supposed to not use more than what the PVC has?</p> </blockquote> <p>This is a misunderstanding. What you specify in a <strong>resource request</strong> is the resources your application <em>needs at least</em>. You might get more. You typically use <strong>resource limits</strong> to set hard limits.</p>
Jonas
<p>I'm playing around with kubernetes ConfigMaps. In the <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#configmaps-and-pods" rel="nofollow noreferrer">official documentation</a>, I see &quot;file-like keys&quot; in the <code>data</code> field:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
</code></pre> <p>Is it possible to break these &quot;file-like keys&quot; into different files and reference them in this ConfigMap resource?</p> <p>I see several benefits of this approach:</p> <ul> <li>Slimmed down ConfigMap</li> <li>Proper syntax highlighting for the &quot;file-like&quot; configurations</li> <li>Can run auto formatters against the &quot;file-like&quot; configurations</li> </ul>
Johnny Metz
<blockquote> <ul> <li>Proper syntax highlighting for the &quot;file-like&quot; configurations</li> <li>Can run auto formatters against the &quot;file-like&quot; configurations</li> </ul> </blockquote> <p>Yes, it is easier to save the files as proper files on your machine and in Git.</p> <p>I propose that you use the <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="noreferrer">kustomize feature of kubectl</a> and use <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#configmapgenerator" rel="noreferrer">configMapGenerator</a> to generate the ConfigMap instead.</p> <p>Example <code>kustomization.yaml</code> (saved in the same directory as your files, e.g. in <code>config/</code>)</p> <pre><code>configMapGenerator:
- name: game-demo
  files:
  - game.properties
  - user-interface.properties
</code></pre> <p>Then you can apply (and generate the configMap) with (if your config is in <code>config/</code>):</p> <pre><code>kubectl apply -k config/
</code></pre> <p>Or you can preview the &quot;generated&quot; configMap with:</p> <pre><code>kubectl kustomize config/
</code></pre>
Jonas
<p>I would like to run a single pod of Mongo db in my Kubernetes cluster. I would be using node selector to get the pod scheduled on a specific node.</p> <p>Since Mongo is a database and I am using node selector, is there any reason for me not to use Kubernetes Deployment over StatefulSet? Elaborate more on this if we should never use Deployment.</p>
RamPrakash
<blockquote> <p>Since mongo is a database and I am using node selector, Is there any reason for me not to use k8s deployment over StatefulSet? Elaborate more on this if we should never use Deployment.</p> </blockquote> <p>You should not run a database (or other stateful workload) as a <code>Deployment</code>; use a <code>StatefulSet</code> for those.</p> <p>They have different semantics while updating or when the pod becomes unreachable. <code>StatefulSets</code> use <strong>at-most-X</strong> semantics and <code>Deployments</code> use <strong>at-least-X</strong> semantics, where X is the number of replicas.</p> <p>E.g. if the node becomes unreachable (e.g. network issue), for a Deployment, a new Pod will be created on a different node (to follow your desired 1 replica), but for a StatefulSet it will make sure to terminate the existing Pod before creating a new one, so that there is never more than 1 (when you have 1 as the desired number of replicas).</p> <p>If you run a database, I assume that you want the data to be consistent, so you don't want duplicate instances with different data (though for availability you should probably run a distributed database instead).</p>
Jonas
<p>I have a pod that is defined by a deployment, and the yaml definition is stored in my codebase. There are times when I'd like to have a volume mount configured for the pod/container, so it would be great to have a script that could enable this. I know I can use <code>kubectl edit</code> to open up an editor and do this (then restart the pod), but it would be more applicable if our devs could simply do something like <code>./our_scripts/enable_mount.sh</code>.</p> <p>One option would be to simply have a copy of the YAML definition and create/apply that while deleting the other, but it would be nicer to modify the existing one in place.</p> <p>Is there a way to achieve this? Does <code>kubectl edit</code> have any flags that I'm missing to achieve this?</p>
s g
<p>Use <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">Declarative Management of Kubernetes Objects Using Kustomize</a>. You already have a <code>deployment.yaml</code> manifest in your codebase. Now, move that to <code>base/deployment.yaml</code> and also create an <code>overlays/with-mount/deployment-with-mount.yaml</code> patch that <em>overrides</em> the base and adds the mount when you want it.</p> <p>Note that <code>-k</code> takes a directory containing a <code>kustomization.yaml</code>, not a single file. To deploy the base, you use</p> <pre><code>kubectl apply -k base/
</code></pre> <p>and when you want to deploy and also override so you get a mount, you use</p> <pre><code>kubectl apply -k overlays/with-mount/
</code></pre>
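<p>A minimal sketch of the two <code>kustomization.yaml</code> files and the patch (the deployment, container and volume names are assumptions; adapt them to your manifest, and note that newer kustomize versions use <code>patches:</code> instead of <code>patchesStrategicMerge:</code>):</p> <pre><code># base/kustomization.yaml
resources:
  - deployment.yaml
</code></pre> <pre><code># overlays/with-mount/kustomization.yaml
resources:
  - ../../base
patchesStrategicMerge:
  - deployment-with-mount.yaml
</code></pre> <pre><code># overlays/with-mount/deployment-with-mount.yaml
# Strategic-merge patch: adds a volume and a mount on top of the base Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment        # must match the name in base/deployment.yaml
spec:
  template:
    spec:
      containers:
        - name: my-container # must match the container name in the base
          volumeMounts:
            - name: extra-data
              mountPath: /var/data
      volumes:
        - name: extra-data
          emptyDir: {}
</code></pre> <p>With this layout, devs can switch the mount on with a single <code>kubectl apply -k</code> command instead of editing the live object.</p>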
Jonas
<p>I have many services. In a day, a few services are busy for about ten hours, while most other services are idle or use a small amount of CPU.</p> <p>In the past, I put all services in a virtual machine with two CPUs and scaled by CPU usage; there were two virtual machines at the busiest time, but most of the time there was only one.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>services</th> <th>instances</th> <th>busy time in a day</th> <th>CPU when busy<br/>(cores/service)</th> <th>CPU when idle<br/>(cores/service)</th> </tr> </thead> <tbody> <tr> <td>busy services</td> <td>2</td> <td>8~12 hours</td> <td>0.5~1</td> <td>0.1~0.5</td> </tr> <tr> <td>busy services</td> <td>2</td> <td>8~12 hours</td> <td>0.3~0.8</td> <td>0.1~0.3</td> </tr> <tr> <td>inactive services</td> <td>30</td> <td>0~1 hours</td> <td>0.1~0.3</td> <td>&lt; 0.1</td> </tr> </tbody> </table> </div> <p>Now I want to put them in Kubernetes, where each node has two CPUs, and use node autoscaling and HPA. To make node autoscaling work, I must set CPU requests for all services, which is exactly the difficulty I encountered.</p> <p>This is my setting.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>services</th> <th>instances</th> <th>busy time</th> <th>CPU request<br/>(cores/service)</th> <th>total CPU requests</th> </tr> </thead> <tbody> <tr> <td>busy services</td> <td>2</td> <td>8~12 hours</td> <td>300m</td> <td>600m</td> </tr> <tr> <td>busy services</td> <td>2</td> <td>8~12 hours</td> <td>300m</td> <td>600m</td> </tr> <tr> <td>inactive services</td> <td>30</td> <td>0~1 hours</td> <td>100m</td> <td>3000m</td> </tr> </tbody> </table> </div> <p>Note: The inactive services' CPU request is set to 100m because they will not work well with less than 100m when they are busy.</p> <p>With this setting, the number of nodes will always be greater than three, which is too costly. I think the problem is that although these services require 100m of CPU to work properly, they are mostly idle.</p> <p>I really hope that all services can autoscale; I think this is the benefit of Kubernetes, which can help me assign pods more flexibly. Is my idea wrong? Shouldn't I set a CPU request for an inactive service?</p> <p>Even if I ignore inactive services, I find that Kubernetes often ends up with more than two nodes. If I have more active services, even in off-peak hours, the total CPU requests will exceed 2000m. Is there any solution?</p>
Fulo Lin
<blockquote> <p>I put all services in a virtual machine with two CPUs and scaled by CPU usage; there were two virtual machines at the busiest time, but most of the time there was only one.</p> </blockquote> <p>First, if you have any availability requirements, I would recommend always having at least <strong>two</strong> nodes. If you have only one node and that one crashes (e.g. hardware failure or kernel panic), it will take some minutes before this is detected and some more minutes before a new node is up.</p> <blockquote> <p>The inactive services' CPU request is set to 100m because they will not work well with less than 100m when they are busy.</p> </blockquote> <blockquote> <p>I think the problem is that although these services require 100m of CPU to work properly, they are mostly idle.</p> </blockquote> <p>The CPU <em>request</em> is a guaranteed reserved resource amount. Here you reserve too many resources for your mostly idle services. Set the CPU request lower, maybe as low as <code>20m</code> or even <code>5m</code>? But since these services will need more resources during busy periods, set a higher <em>limit</em> so that the container can &quot;burst&quot;, and also use the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> for these. When using the Horizontal Pod Autoscaler, more replicas will be created and the traffic will be load balanced across all replicas. Also see <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">Managing Resources for Containers</a>.</p> <p>This is also true for your &quot;busy services&quot;: reserve fewer CPU resources and use Horizontal Pod Autoscaling more actively, so that the traffic is spread to more nodes during high load but can scale down and save cost when the traffic is low.</p> <blockquote> <p>I really hope that all services can autoscale; I think this is the benefit of Kubernetes, which can help me assign pods more flexibly. Is my idea wrong?</p> </blockquote> <p>Yes, I agree with you.</p> <blockquote> <p>Shouldn't I set a CPU request for an inactive service?</p> </blockquote> <p>It is a good practice to always set some value for <em>request</em> and <em>limit</em>, at least for a production environment. The scheduling and autoscaling will not work well without <em>resource requests</em>.</p> <blockquote> <p>If I have more active services, even in off-peak hours, the total CPU requests will exceed 2000m. Is there any solution?</p> </blockquote> <p>In general, try to use lower <em>resource requests</em> and use Horizontal Pod Autoscaling more actively. This is true for both your &quot;busy services&quot; and your &quot;inactive services&quot;.</p> <blockquote> <p>I find that Kubernetes often ends up with more than two nodes.</p> </blockquote> <p>Yes, there are two aspects of this.</p> <p>If you only use two nodes, your environment is probably small, and the Kubernetes control plane probably consists of more nodes and makes up the majority of the cost. For very small environments, Kubernetes may be expensive and it would be more attractive to use e.g. a serverless alternative like <a href="https://cloud.google.com/run" rel="nofollow noreferrer">Google Cloud Run</a>.</p> <p>Second, for availability: it is good to have at least two nodes in case of an abrupt crash, e.g. a hardware failure or a kernel panic, so that your &quot;service&quot; is still available while the node autoscaler scales up a new node. 
This is also true for the number of <em>replicas</em> for a <code>Deployment</code>: if availability is important, use at least two replicas. When you e.g. <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">drain a node</a> for maintenance or a node upgrade, the pods will be evicted, but not created on a different node first. The control plane will detect that the <code>Deployment</code> (technically the ReplicaSet) has fewer than the desired number of replicas and create a new pod. But when a new Pod is created on a new node, the container image will first be pulled before the Pod is running. To avoid downtime during these events, use at least two replicas for your <code>Deployment</code> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">Pod Topology Spread Constraints</a> to make sure that those two replicas run on different nodes.</p> <hr /> <p>Note: You might run into the same problem as <a href="https://stackoverflow.com/questions/66879191/how-to-use-k8s-hpa-and-autoscaler-when-pods-normally-need-low-cpu-but-periodical">How to use K8S HPA and autoscaler when Pods normally need low CPU but periodically scale</a> and that should be mitigated by an upcoming Kubernetes feature: <a href="https://github.com/kubernetes-sigs/scheduler-plugins/tree/6b7e77af527d8db82afb5060e5474ed524bdc0d6/kep/61-Trimaran-real-load-aware-scheduling" rel="nofollow noreferrer">KEP - Trimaran: Real Load Aware Scheduling</a></p>
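<p>To make the low-request/higher-limit pattern combined with an HPA concrete, here is a minimal sketch (names and values are assumptions; tune them for your workloads):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: inactive-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inactive-service
  template:
    metadata:
      labels:
        app: inactive-service
    spec:
      containers:
        - name: app
          image: example/app:latest
          resources:
            requests:
              cpu: 20m     # small guaranteed reservation while idle
            limits:
              cpu: 500m    # allow bursting when busy
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inactive-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inactive-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # relative to the 20m request
</code></pre>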
Jonas
<p>When a Kubernetes Spring-Boot app is launched with 8 instances, the app running in each node needs to fetch the sequence number of the pod/container. There should be no repeating numbers for the pods/containers running the same app. Assume that a pod runs a single container, and a container runs only one instance of the app.</p> <p>There are a few unique identifiers the app can pull from the Kubernetes API for each pod, such as:</p> <ul> <li>MAC address (<code>networkInterface.getHardwareAddress()</code>)</li> <li>Hostname</li> <li>nodeName (<code>aks-default-12345677-3</code>)</li> <li>targetRef.name (<code>my-sample-service-sandbox-54k47696e9-abcde</code>)</li> <li>targetRef.uid (<code>aa7k6278-abcd-11ef-e531-kdk8jjkkllmm</code>)</li> <li>IP address (<code>12.34.56.78</code>)</li> </ul> <p>But the app getting this information from the API cannot safely generate and assign a unique number to itself within the specified range of pods [0 - Max Node Count-1]. Any reducer step (bitwise &amp;) running over these unique identifiers will eventually repeat the numbers. And communicating with the other pods is an anti-pattern, although there are approaches which use consensus/agreement patterns to accomplish this.</p> <p>My Question is: <strong>Is there a simple way for Kubernetes to assign a sequential number for each node/container/pod when it's created - possibly in an environment variable in the pod?</strong> The numbers can begin with 0 or 1 and should go up to the max count of the number of pods.</p> <p><em>Background info and some research:</em> Executing <code>UUID.randomUUID().hashCode() &amp; 7</code> eight times will get you repeats of numbers between 0 &amp; 7. Ref <a href="https://apoorvtyagi.tech/generating-unique-ids-in-a-large-scale-distributed-environment" rel="nofollow noreferrer">article</a> with this mistake in <code>createNodeId()</code>. Sample outputs on actual runs of the reducer step above:</p> <pre><code>{0=2, 1=1, 2=0, 3=3, 4=0, 5=1, 6=1, 7=0}
{0=1, 1=0, 2=0, 3=1, 4=3, 5=0, 6=2, 7=1}
{0=1, 1=0, 2=2, 3=1, 4=1, 5=2, 6=0, 7=1}
</code></pre> <p>I went ahead and <a href="https://gist.github.com/agoli-dd/51d8ce2a12be63c2c33a669db1d88a08" rel="nofollow noreferrer">executed 100 million runs</a> of the above code and found that only 0.24% of the cases have an even distribution.</p> <p><code>Uneven Reducers: 99760174 | Even Reducers: 239826</code></p>
Ashok Goli
<blockquote> <p>app is launched with 8 instances, the app running in each node needs to fetch sequence number of the pod</p> </blockquote> <p>It sounds like you are requesting a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity" rel="noreferrer">stable Pod identity</a>. If you deploy your Spring Boot app as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSet</a> instead of as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a>, then this identity is a &quot;provided feature&quot; from Kubernetes.</p>
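<p>For example, pods in a StatefulSet named <code>my-app</code> get the stable, ordinal names <code>my-app-0</code>, <code>my-app-1</code>, and so on. A sketch of exposing the pod name to the app so it can parse the trailing ordinal as its sequence number (the env var name is an assumption):</p> <pre><code># In the StatefulSet's pod template:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name   # e.g. &quot;my-app-3&quot;, so the sequence number is 3
</code></pre>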
Jonas
<p>What exactly are the practical consequences of missing consensus on a Kubernetes cluster? Or in other words: which functions on a Kubernetes cluster require consensus? What will work, what won't work? </p> <p>For example (and really only for example):</p> <ul> <li>will existing pods keep running?</li> <li>can pods still be scaled horizontally?</li> </ul> <p>Example scenario: A cluster with two nodes loses one node. No consensus possible.</p>
stefan.at.kotlin
<p>Consensus is fundamental to etcd, the distributed database that Kubernetes is built upon. Without consensus you can <em>read</em> from but not <em>write</em> to the database, e.g. if only 1 of 3 nodes is available.</p> <blockquote> <p>When you lose quorum etcd goes into a <strong>read only</strong> state where it can respond with data, but no new actions can take place since it will be unable to decide if the action is allowed.</p> </blockquote> <p><a href="https://blog.containership.io/etcd/" rel="nofollow noreferrer">Understanding Etcd Consensus and How to Recover from Failure</a></p> <p>Kubernetes is designed so that pods only need Kubernetes for changes, e.g. deployments. After that, they run independently of Kubernetes in a loosely coupled fashion.</p> <p>Kubernetes is constructed to keep the <em>desired state</em> in the etcd database. Controllers watch etcd for changes and act upon them. This means that you cannot scale or change any configuration of pods if etcd doesn't have consensus. Kubernetes does many self-healing operations, but they will not work if etcd is not available, since all operations are done through the API server and etcd.</p> <blockquote> <p>Losing quorum means that <strong>no new actions</strong> can take place. Everything that is running will continue to run until there is a failure.</p> </blockquote> <p><a href="https://www.youtube.com/watch?v=n9VKAKwBj_0" rel="nofollow noreferrer">Understanding Distributed Consensus in etcd and Kubernetes</a></p>
Jonas
<p>I am running an Argo workflow and getting the following error in the pod's log:</p> <pre><code>error: a container name must be specified for pod &lt;name&gt;, choose one of: [wait main] </code></pre> <p>This error only happens some of the time and only with some of my templates, but when it does, it is a template that is run later in the workflow (i.e. not the first template run). I have not yet been able to identify the parameters that will run successfully, so I will be happy with tips for debugging. I have pasted the output of describe below.</p> <p>Based on searches, I think the solution is simply that I need to attach &quot;-c main&quot; somewhere, but I do not know where and cannot find information in the Argo docs.</p> <p>Describe:</p> <pre><code>Name: message-passing-1-q8jgn-607612432 Namespace: argo Priority: 0 Node: REDACTED Start Time: Wed, 17 Mar 2021 17:16:37 +0000 Labels: workflows.argoproj.io/completed=false workflows.argoproj.io/workflow=message-passing-1-q8jgn Annotations: cni.projectcalico.org/podIP: 192.168.40.140/32 cni.projectcalico.org/podIPs: 192.168.40.140/32 workflows.argoproj.io/node-name: message-passing-1-q8jgn.e workflows.argoproj.io/outputs: {&quot;exitCode&quot;:&quot;6&quot;} workflows.argoproj.io/template: {&quot;name&quot;:&quot;egress&quot;,&quot;arguments&quot;:{},&quot;inputs&quot;:{... Status: Failed IP: 192.168.40.140 IPs: IP: 192.168.40.140 Controlled By: Workflow/message-passing-1-q8jgn Containers: wait: Container ID: docker://26d6c30440777add2af7ef3a55474d9ff36b8c562d7aecfb911ce62911e5fda3 Image: argoproj/argoexec:v2.12.10 Image ID: docker-pullable://argoproj/argoexec@sha256:6edb85a84d3e54881404d1113256a70fcc456ad49c6d168ab9dfc35e4d316a60 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: argoexec wait State: Terminated Reason: Completed Exit Code: 0 Started: Wed, 17 Mar 2021 17:16:43 +0000 Finished: Wed, 17 Mar 2021 17:17:03 +0000 Ready: False Restart Count: 0 Environment: ARGO_POD_NAME: message-passing-1-q8jgn-607612432 (v1:metadata.name) Mounts: /argo/podmetadata from podmetadata (rw) /mainctrfs/mnt/logs from log-p1-vol (rw) /mainctrfs/mnt/processed from processed-p1-vol (rw) /var/run/docker.sock from docker-sock (ro) /var/run/secrets/kubernetes.io/serviceaccount from argo-token-v2w56 (ro) main: Container ID: docker://67e6d6d3717ab1080f14cac6655c90d990f95525edba639a2d2c7b3170a7576e Image: REDACTED Image ID: REDACTED Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /bin/bash -c Args: State: Terminated Reason: Error Exit Code: 6 Started: Wed, 17 Mar 2021 17:16:43 +0000 Finished: Wed, 17 Mar 2021 17:17:03 +0000 Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /mnt/logs/ from log-p1-vol (rw) /mnt/processed/ from processed-p1-vol (rw) /var/run/secrets/kubernetes.io/serviceaccount from argo-token-v2w56 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: podmetadata: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.annotations -&gt; annotations docker-sock: Type: HostPath (bare host directory volume) Path: /var/run/docker.sock HostPathType: Socket processed-p1-vol: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: message-passing-1-q8jgn-processed-p1-vol ReadOnly: false log-p1-vol: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: message-passing-1-q8jgn-log-p1-vol ReadOnly: false argo-token-v2w56: Type: Secret (a volume populated by a 
Secret) SecretName: argo-token-v2w56 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 7m35s default-scheduler Successfully assigned argo/message-passing-1-q8jgn-607612432 to ack1 Normal Pulled 7m31s kubelet Container image &quot;argoproj/argoexec:v2.12.10&quot; already present on machine Normal Created 7m31s kubelet Created container wait Normal Started 7m30s kubelet Started container wait Normal Pulled 7m30s kubelet Container image already present on machine Normal Created 7m30s kubelet Created container main Normal Started 7m30s kubelet Started container main </code></pre>
user3877654
<p>This happens when you try to see logs for a pod with multiple containers and do not specify which container you want to see the logs from. The typical command to see logs:</p> <pre><code>kubectl logs &lt;podname&gt;
</code></pre> <p>But your Pod has two containers, one named &quot;wait&quot; and one named &quot;main&quot;. You can see the logs from the container named &quot;main&quot; with:</p> <pre><code>kubectl logs &lt;podname&gt; -c main
</code></pre> <p>or you can see the logs from all containers with</p> <pre><code>kubectl logs &lt;podname&gt; --all-containers
</code></pre>
Jonas
<p>I'm writing a Kubernetes Operator in Go and I would like to generate events in the same way Pods do, i.e. at each point of the reconciliation I want to write an event which can be examined using <code>kubectl describe myresource</code>.</p> <p>I found the package that would allow me to do that, but I don't understand how to use it: <a href="https://github.com/kubernetes/client-go/blob/master/tools/record/event.go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/master/tools/record/event.go</a></p> <p>Example skeleton code:</p> <pre class="lang-golang prettyprint-override"><code>type MyResourceReconciler struct {
    client.Client
    Log    logr.Logger
    Scheme *runtime.Scheme
}

var logger logr.Logger

func (r *MyResourceReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        Named(&quot;MyResource-controller&quot;).
        For(&amp;v1.MyResource{}).
        Complete(r)
}

func (r *MyResourceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    logger = r.Log.V(0).WithValues(&quot;MyResource&quot;, req.NamespacedName)
    logger.Info(&quot;reconcile called&quot;)
    // TODO: Record event for req.NamespacedName
    return reconcile.Result{}, nil
}
</code></pre>
Cris
<p>The Kubebuilder v1 book has a good example on how to <em>create</em> and <em>write</em> <code>Events</code> using an <a href="https://github.com/kubernetes/client-go/blob/master/tools/record/event.go#L88" rel="nofollow noreferrer">EventRecorder</a> from <code>client-go</code>.</p> <p>See <a href="https://book-v1.book.kubebuilder.io/beyond_basics/creating_events.html" rel="nofollow noreferrer">Kubebuilder v1 book - Create Events and Write Events</a></p>
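<p>A minimal sketch of how this could look with controller-runtime, based on the skeleton in the question (the <code>Recorder</code> field name and the event reason are assumptions):</p> <pre class="lang-golang prettyprint-override"><code>import (
    &quot;context&quot;

    corev1 &quot;k8s.io/api/core/v1&quot;
    &quot;k8s.io/apimachinery/pkg/runtime&quot;
    &quot;k8s.io/client-go/tools/record&quot;
    ctrl &quot;sigs.k8s.io/controller-runtime&quot;
    &quot;sigs.k8s.io/controller-runtime/pkg/client&quot;
    // v1 is your own API package, as in the question's skeleton.
)

type MyResourceReconciler struct {
    client.Client
    Scheme   *runtime.Scheme
    Recorder record.EventRecorder // the EventRecorder from client-go
}

func (r *MyResourceReconciler) SetupWithManager(mgr ctrl.Manager) error {
    // The Manager can hand out a ready-to-use recorder.
    r.Recorder = mgr.GetEventRecorderFor(&quot;MyResource-controller&quot;)
    return ctrl.NewControllerManagedBy(mgr).
        Named(&quot;MyResource-controller&quot;).
        For(&amp;v1.MyResource{}).
        Complete(r)
}

func (r *MyResourceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var obj v1.MyResource
    if err := r.Get(ctx, req.NamespacedName, &amp;obj); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    // This shows up in the Events section of `kubectl describe myresource &lt;name&gt;`.
    r.Recorder.Event(&amp;obj, corev1.EventTypeNormal, &quot;Reconciling&quot;, &quot;reconcile called&quot;)
    return ctrl.Result{}, nil
}
</code></pre>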
Jonas
<p>I'm trying to set a grace shutdown period for my pods. I found out you can add a field called <code>terminationGracePeriodSeconds</code> to the helm charts to set the period. I then looked for examples and came across these:</p> <p><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace</a></p> <p>In the above link they define the value in a <code>kind: pod</code> template.</p> <p><a href="https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html" rel="nofollow noreferrer">https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html</a></p> <p>In the above link they define the value in a <code>kind: deployment</code> template.</p> <p>Is there a difference between the 2 kinds in regard to where I define this value?</p>
CodeMonkey
<blockquote> <p>Is there a difference between the 2 kinds in regard to where I define this value?</p> </blockquote> <p>A <code>Deployment</code> has a field <code>template:</code> and that is actually a <code>PodTemplate</code> (most of the structure of a <code>Pod</code>) that includes the <code>terminationGracePeriodSeconds</code> property.</p> <p>A good way to check documentation for fields is to use <code>kubectl explain</code>.</p> <p>E.g.</p> <pre><code>kubectl explain Deployment.spec.template.spec
</code></pre> <p>and</p> <pre><code>kubectl explain Pod.spec
</code></pre>
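<p>So in both cases the field ends up in the Pod spec; in a Deployment it sits under <code>spec.template.spec</code>. A minimal sketch (names are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 60   # same field as in a bare Pod spec
      containers:
        - name: my-app
          image: example/app:latest
</code></pre>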
Jonas
<p>I am trying to determine a reliable setup to use with K8S to scale one of my deployments using an HPA and an autoscaler. I want to minimize the amount of resources overcommitted but allow it to scale up as needed.</p> <p>I have a deployment that is managing a REST API service. Most of the time the service will have very low usage (0m-5m cpu). But periodically through the day or week it will spike to much higher usage on the order of 5-10 CPUs (5000m-10000m).</p> <p>My initial pass at configuring this is:</p> <ul> <li>Deployment: 1 replica</li> </ul> <pre><code>&quot;resources&quot;: {
  &quot;requests&quot;: {
    &quot;cpu&quot;: 0.05
  },
  &quot;limits&quot;: {
    &quot;cpu&quot;: 1.0
  }
}
</code></pre> <ul> <li>HPA:</li> </ul> <pre><code>&quot;spec&quot;: {
  &quot;maxReplicas&quot;: 25,
  &quot;metrics&quot;: [
    {
      &quot;resource&quot;: {
        &quot;name&quot;: &quot;cpu&quot;,
        &quot;target&quot;: {
          &quot;averageValue&quot;: 0.75,
          &quot;type&quot;: &quot;AverageValue&quot;
        }
      },
      &quot;type&quot;: &quot;Resource&quot;
    }
  ],
  &quot;minReplicas&quot;: 1,
  ...
}
</code></pre> <p>This is running on an AWS EKS cluster with autoscaler running. All instances have 2 CPUs. The goal is that as the CPU usage goes up the HPA will allocate a new pod that will be unschedulable and then the autoscaler will allocate a new node. As I add load on the service, the CPU usage for the first pod spikes up to approximately 90-95% at max.</p> <p>I am running into two related problems:</p> <ol> <li>Small request size</li> </ol> <p>By using such a small request value (cpu: 0.05), the newly requested pods can be easily scheduled on the current node even when it is under high load. Thus the autoscaler never finds a pod that can't be scheduled and doesn't allocate a new node. I could increase the small request size and overcommit, but this then means that for the vast majority of the time when there is no load I will be wasting resources I don't need.</p> <ol start="2"> <li>Average CPU reduces as more pods are allocated</li> </ol> <p>Because the pods all get allocated on the same node, once a new pod is allocated it starts sharing the node's available 2 CPUs. This in turn reduces the amount of CPU used by the pod and thus keeps the average value below the 75% target.</p> <p>(ex: 3 pods, 2 CPUs ==&gt; max 66% Average CPU usage per pod)</p> <p>I am looking for guidance here on how I should be thinking about this problem. I think I am missing something simple.</p> <p>My current thought is that what I am looking for is a way for the Pod resource request value to increase under heavier load and then decrease back down when the system doesn't need it. That would point me toward using something like a VPA, but everything I have read says that using HPA and VPA at the same time leads to <em>very bad things</em>.</p> <p>I think increasing the request from 0.05 to something like 0.20 would probably let me handle the case of scaling up. But this will in turn waste a lot of resources and could suffer issues if the scheduler finds space on an existing node. My example is about one service but there are many more services in the production deployment. I don't want to have nodes sitting empty with committed resources but no usage.</p> <p>What is the best path forward here?</p>
Allen
<p>Sounds like you need a Scheduler that takes actual CPU utilization into account. This is not supported yet.</p> <p>There seems to be work on this feature: <strong><a href="https://github.com/kubernetes-sigs/scheduler-plugins/tree/6b7e77af527d8db82afb5060e5474ed524bdc0d6/kep/61-Trimaran-real-load-aware-scheduling" rel="nofollow noreferrer">KEP - Trimaran: Real Load Aware Scheduling</a></strong> using the <a href="https://github.com/kubernetes-sigs/scheduler-plugins/pull/115" rel="nofollow noreferrer">TargetLoadPackin plugin</a>. Also see <a href="https://github.com/kubernetes/kubernetes/issues/73269" rel="nofollow noreferrer">New scheduler priority for real load average and free memory</a>.</p> <p>In the meantime, if the CPU limit is 1 core and the nodes autoscale under high CPU utilization, it sounds like it should work <em>if the nodes are substantially bigger than the CPU limits for the pods</em>. E.g. try nodes that have 4 cores or more, and possibly a slightly larger <em>CPU request</em> for the Pod?</p>
Jonas
<p>I have a PVC mounted on a pod. someone from my team has deleted files from it. Is there a way to recover those files? Also, can we find who deleted the files?</p>
Tad
<blockquote> <p>Is there a way to recover those files?</p> </blockquote> <p>Persistent Volumes and Persistent Volume Claims are only APIs provided by Kubernetes. Below these APIs, there is a storage system that <strong>may</strong> or <strong>may not</strong> implement backup features or mirroring. Check the documentation for your cloud provider or your on-prem storage systems.</p>
Jonas
<p>I noticed that we can create a node by the <a href="https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/node-v1/" rel="nofollow noreferrer">Kubernetes API</a>.</p> <ol> <li>What is the difference between <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/" rel="nofollow noreferrer">Kubeadm join</a> and this API?</li> <li>Is it possible to create a new worker node only by the Kubernetes API (without kubeadm)?</li> </ol>
Justin
<h2>Node object in Kubernetes API</h2> <p>You <em>can</em> create <em>Node objects</em> via the Kubernetes API - these Node objects are just <em>representations</em> of nodes in the cluster; the actual node must also exist, e.g. a machine running a <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">Kubelet</a>.</p> <p>See <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Nodes</a> in the Kubernetes documentation.</p> <h2>Join a machine to a cluster with kubeadm join</h2> <p><code>kubeadm join</code> is a tool and command to join a machine to the cluster as a node. This involves <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/#join-workflow" rel="nofollow noreferrer">many steps</a>, including bootstrapping the node using cryptographic certificates.</p>
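<p>For illustration, creating only the API representation takes a minimal manifest like this (the node name is an assumption); without a kubelet on a real machine registering and reporting status, such a Node will just sit there as not ready:</p> <pre><code>apiVersion: v1
kind: Node
metadata:
  name: my-worker-1
  labels:
    kubernetes.io/hostname: my-worker-1
</code></pre>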
Jonas
<p><code>kubectl apply -f &lt;file.yaml&gt; --save-config</code> creates or updates a deployment and saves the applied configuration as metadata.</p> <p>In the documentation it says</p> <blockquote> <p>--save-config[=false]: If true, the configuration of current object will be saved in its annotation. This is useful when you want to perform kubectl apply on this object in the future.</p> </blockquote> <p>Why do I need <code>save-config</code>? I can still update my deployment using <code>kubectl apply</code> if I do not use <code>--save-config</code>.</p>
User12547645
<h2>kubectl apply</h2> <p><code>kubectl apply</code> uses the data in the annotation <code>kubectl.kubernetes.io/last-applied-configuration</code> to see e.g. if any fields have been removed since the last apply. This is needed because some fields or annotations may have been added live in the cluster by e.g. a controller or mutating webhook.</p> <p>See e.g. <a href="https://luispreciado.blog/posts/kubernetes/core-concepts/kubectl-apply" rel="noreferrer">Understanding the Kubectl Apply Command</a></p> <blockquote> <p>I can still update my deployment using kubectl apply if I do not --save-config</p> </blockquote> <p>Yes, <code>--save-config</code> is only used when migrating from an <em>imperative</em> workflow. See more details below. The following <code>kubectl apply</code> commands do not need the <code>--save-config</code> flag because the annotation is already there.</p> <h2>Workflows with kubectl</h2> <p>When working with configurations for Kubernetes, this can be done in multiple ways, either <em>imperative</em> or <em>declarative</em>:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/" rel="noreferrer">Managing Kubernetes Objects Using Imperative Commands</a></li> <li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/" rel="noreferrer">Imperative Management of Kubernetes Objects Using Configuration Files</a></li> <li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="noreferrer">Declarative Management of Kubernetes Objects Using Configuration Files</a></li> </ul> <p><code>kubectl apply</code> is used for <em>declarative</em> configuration management.</p> <h2>Migrating from imperative to declarative config management</h2> <p>Using <code>kubectl</code> with the <code>--save-config</code> flag is a way to write config to the annotation <code>kubectl.kubernetes.io/last-applied-configuration</code> that <code>kubectl apply</code> uses. This is useful when migrating from an <em>imperative</em> to a <em>declarative</em> workflow.</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#migrating-from-imperative-command-management-to-declarative-object-configuration" rel="noreferrer">Migrating from imperative command management to declarative object configuration</a></li> <li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#migrating-from-imperative-object-configuration-to-declarative-object-configuration" rel="noreferrer">Migrating from imperative object configuration to declarative object configuration</a></li> </ul>
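<p>You can inspect the annotation that <code>kubectl apply</code> compares against (the deployment name is a placeholder):</p> <pre><code>kubectl apply view-last-applied deployment/my-app
</code></pre>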
Jonas
<p>I am creating kubernetes secrets using the below command</p> <pre class="lang-sh prettyprint-override"><code>kubectl create secret generic test-secret --save-config --dry-run=client --from-literal=a=data1 --from-literal=b=data2 -o yaml | kubectl apply -f -
</code></pre> <p>Now I need to add new literals using a kubectl imperative command. How can I do that? Say, e.g.:</p> <pre class="lang-sh prettyprint-override"><code>kubectl apply secret generic test-secret --from-literal=c=data3 -o yaml | kubectl apply -f -
</code></pre> <p>but it gave the below error:</p> <p>Error: unknown flag: --from-literal See 'kubectl apply --help' for usage. error: no objects passed to apply</p> <p>Any quick help is appreciated</p>
magic
<blockquote> <p>add new literals using kubectl imperative command</p> </blockquote> <p>When working with <em>imperative commands</em> it typically means that you don't save the change in a place outside the cluster. You can <strong>edit</strong> a Secret in the cluster directly:</p> <pre><code>kubectl edit secret test-secret
</code></pre> <p>But if you want to <em>automate</em> your &quot;addition&quot;, then you most likely save your Secret in another place before applying it to the cluster. How to do this depends on how you manage Secrets. One way of doing it is by adding it to e.g. <a href="https://www.vaultproject.io/" rel="nofollow noreferrer">Vault</a> and then having it automatically injected. When working in an automated way, it is easier to practice <em>immutable</em> Secrets, and create new ones instead of mutating - because you typically need to redeploy your app as well, to make sure it uses the new one. Using <a href="https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/" rel="nofollow noreferrer">Kustomize with secretGenerator</a> might be a good option if you work with immutable Secrets.</p>
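<p>A sketch of the Kustomize approach, using the keys from your example with <code>c=data3</code> as the addition:</p> <pre><code># kustomization.yaml
secretGenerator:
  - name: test-secret
    literals:
      - a=data1
      - b=data2
      - c=data3   # adding a literal = editing this file and re-applying
</code></pre> <p>Applying with <code>kubectl apply -k .</code> generates a Secret whose name gets a content-hash suffix, so each change effectively creates a new, immutable-style Secret.</p>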
Jonas
<ul> <li><p>This is more of a conceptual question and I could not find any information on the web or the documentation. I am trying to learn about the networking of kubernetes, so my question is more focuses on the conceptual part and not on the convention of deploying applications. From what I learnt so far, I can use Minikube to run kubernetes locally but let's say I have a pod running on my computer and another one running on my colleague's computer or in other words a pod running on an external computer on an external network. Is there a possible way to make these pods communicate with each other? Or the only way I can achieve this is via a cloud service provider?</p> </li> <li><p>From what I understand is that, in this scenario, there are two pods in two different clusters and the thing I am trying to achieve is the networking of the clusters ultimately making it possible for pods in these different clusters to communicate with each other.</p> <p>Thanks in advance.</p> </li> </ul>
Mark R. Chandar
<blockquote> <p>Is there a possible way to make these pods communicate with each other?</p> </blockquote> <p>They are indeed on different clusters, but networking and nodes are managed outside of Kubernetes. Depending on how you configure your networking (e.g. routing and subnets) this is possible. You can e.g. install Kubernetes on nodes that are public on the Internet, or within a company network. It is common that companies that set up Kubernetes (e.g. OpenShift) locally also let the services communicate with other non-Kubernetes services, e.g. databases outside of the cluster; reaching another Kubernetes cluster is no different. It all depends on how you configure your networking, and that is independent of Kubernetes.</p> <p>You can also expose services in a cluster to the external world by creating LoadBalancers or Proxies that are reachable from other networks. This is the typical way Kubernetes is set up, with cluster-local nodes.</p>
Jonas
<p>I have multiple secrets created from different files. I'd like to store all of them in a common directory, <code>/var/secrets/</code>. Unfortunately, I'm unable to do that because Kubernetes throws an <strong>'Invalid value: "/var/secret": must be unique'</strong> error during the pod validation step. Below is an example of my pod definition.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    run: alpine-secret
  name: alpine-secret
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: alpine
    name: alpine-secret
    volumeMounts:
    - name: xfile
      mountPath: "/var/secrets/"
      readOnly: true
    - name: yfile
      mountPath: "/var/secrets/"
      readOnly: true
  volumes:
  - name: xfile
    secret:
      secretName: my-secret-one
  - name: yfile
    secret:
      secretName: my-secret-two
</code></pre> <p>How can I store files from multiple secrets in the same directory?</p>
Lukasz Dynowski
<h2>Projected Volume</h2> <p>You can use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/" rel="noreferrer">projected volume</a> to have two secrets in the same directory</p> <p><strong>Example</strong></p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    run: alpine-secret
  name: alpine-secret
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: alpine
    name: alpine-secret
    volumeMounts:
    - name: xyfiles
      mountPath: "/var/secrets/"
      readOnly: true
  volumes:
  - name: xyfiles
    projected:
      sources:
      - secret:
          name: my-secret-one
      - secret:
          name: my-secret-two
</code></pre>
Jonas
<p>I have two apps: one is a Java-based REST application (“A”) and the other is a Go-based Rego policy framework (“B”).</p> <p>I run these two apps as containers in a single pod in K8s. However, I am not sure how I can get the incoming HTTP requests to first hit the “B” Rego policy framework and, based on the policy decision, have the request forwarded to “A”. Is there a way this can be achieved?</p>
N Deepak Prasath
<blockquote> <p>I am not sure how I can get the incoming HTTP requests to first hit the “B” Rego policy framework</p> </blockquote> <p>A &quot;Rego policy framework&quot;, e.g. OpenPolicyAgent, is typically used as an assisting (sidecar) container.</p> <p>In this setup, your application receives the request, then <strong>asks</strong> the &quot;Rego policy framework&quot; container, &quot;is this request allowed?&quot;, and then your application continues to process the request.</p> <p>See e.g. the <a href="https://www.openpolicyagent.org/docs/latest/http-api-authorization/" rel="nofollow noreferrer">OpenPolicyAgent example - HTTP API Authorization</a>, with this part to <strong>ask if the request is allowed</strong>:</p> <pre><code># ask OPA for a policy decision
# (in reality OPA URL would be constructed from environment)
rsp = requests.post(&quot;http://127.0.0.1:8181/v1/data/httpapi/authz&quot;, json=input_dict)
if rsp.json()[&quot;allow&quot;]:
  # HTTP API allowed
else:
  # HTTP API denied
</code></pre>
Jonas
<p>We have a kops-based k8s cluster running on AWS with deployments using EFS as a Persistent Volume; now we would like to migrate to EKS with PVC deployments.</p> <p>Could someone help me with migrating deployments that use Persistent Volume Claims to an EKS cluster in AWS?</p>
Balu Virigineni
<p>You cannot <em>move</em> <code>PersistentVolumeClaims</code> to another cluster; you need to re-create them in the new cluster. Back up the data and restore it from the backup in the new cluster.</p>
Jonas
<p>My requirement is to start a pod when a particular batch job is completed.</p> <p>Batch job yaml file:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: topics
spec:
  ttlSecondsAfterFinished: 100
  template:
    metadata:
      labels:
        app: topics
    spec:
      restartPolicy: Never
      containers:
      - name: topics
        image: confluentinc/cp-kafka:5.3.0
        command:
        - sh
        - -c
        - {{.Values.KafkaTopics}}
</code></pre> <p>Deployment yaml:</p> <pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: opp
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: opp
    spec:
      initContainers:
      - name: init
        image: busybox
        command: ['sh', '-c', 'until nc -z someservice:8093; do echo waiting for init; sleep 2; done;']
</code></pre> <p>The init container works fine when I am checking for some service to be up. I am not able to figure it out for a batch job.</p>
pythonhmmm
<blockquote> <p>start a pod</p> </blockquote> <p>What you have described with a <code>Deployment</code> is a <strong>deployment of a service</strong>, not only <em>starting</em> a pod.</p> <h2>Watch status of Kubernetes objects</h2> <blockquote> <p>when a particular batch job is completed.</p> </blockquote> <p>If you want to watch Kubernetes objects and do actions depending on status changes of a particular object, you need to interact with the Kubernetes API server.</p> <p><strong>Use a Kubernetes client</strong></p> <p>The easiest way to interact with the Kubernetes API, especially for <em>watch</em>, is to use a pre-built client, e.g. <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> or <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">kubernetes-client Java</a>.</p> <p><strong>Use the Kubernetes REST API</strong></p> <p>Alternatively you can use the <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="nofollow noreferrer">Kubernetes REST API</a> directly.</p> <p><strong>API Authentication and Authorization</strong></p> <p>Beware that you should use a <em>Service Account</em> for authentication and set proper RBAC rules for authorization.</p> <h2>Kafka Consumer</h2> <p>An alternative solution, since your <code>Job</code> hints that you are using Kafka: your <code>Job</code> could publish an event on Kafka, and you can have a <em>Kafka Consumer</em> that <em>subscribes</em> to and acts upon those events. But if the consumer should <em>deploy</em> a service on an event, it also needs a <em>Service Account</em> to interact with the Kubernetes API server.</p>
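<p>For completeness, the same watch can also be done from the command line with <code>kubectl wait</code>, which works inside your init container as well. A sketch (assumes an image that ships <code>kubectl</code> and a ServiceAccount with RBAC permission to get/watch Jobs):</p> <pre><code>initContainers:
- name: wait-for-topics
  image: bitnami/kubectl:latest   # any image containing kubectl; an assumption
  command:
  - sh
  - -c
  - kubectl wait --for=condition=complete --timeout=600s job/topics
</code></pre>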
Jonas
<p>Looking to understand the order in which Kubernetes examines the pods using the 3 types of probes: startup, readiness and liveness.</p> <p>How do I understand or design these 3 probes correctly for normal applications? What is the chance of getting a conflict or breaking the application if the startup probe has wrong entries?</p>
Vowneee
<h2>Startup probe</h2> <p><strong>This runs first.</strong> When it succeeds, the Readiness Probe and Liveness Probe are run continuously. If this fails, the container is killed.</p> <p>Use this for &quot;slow staring apps&quot;, you can use the same command as Liveness if you want.</p> <blockquote> <p>The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.</p> </blockquote> <p>From <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noreferrer">configuring probes</a></p> <h2>Liveness probe</h2> <p>This is used to kill the container, in case of a <em>deadlock</em> in the application.</p> <h2>Readiness probe</h2> <p>This is used to check that the container can receive traffic.</p>
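<p>A minimal sketch of all three together (paths, port and timings are assumptions):</p> <pre><code>containers:
- name: app
  image: example/app:latest
  startupProbe:            # runs first; liveness/readiness wait until it succeeds
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30   # allow up to 30 * 10s = 300s for a slow start
    periodSeconds: 10
  livenessProbe:           # kills the container on a deadlock
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
  readinessProbe:          # gates traffic to the container
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
</code></pre>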
Jonas
<p>I have been shifting a project from Kube to OpenShift. In minikube the project was working fine, but in minishift it gives the error</p> <pre><code> — Crash loop back off
</code></pre> <p>This is from the minishift logs:</p> <pre><code>[WARN] $TIMEZONE not set.
[INFO] Docker date set to: Tue Apr 20 17:39:02 UTC 2021
[INFO] $PHP_FPM_ENABLE not set. PHP-FPM support disabled.
[INFO] $CUSTOM_HTTPD_CONF_DIR not set. No custom include directory added.
[INFO] Starting Server version: Apache/2.2.15 (Unix)
whoami: cannot find name for user ID 1000140000
</code></pre> <p>Here is the relevant deployment.yaml:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: occtool
  name: occtool
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: occtool
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.22.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.network/backend: &quot;true&quot;
        io.kompose.network/frontend: &quot;true&quot;
        io.kompose.service: occtool
    spec:
      containers:
        - image: private.registry.com/image:tag
          imagePullPolicy: IfNotPresent
          name: occtool
          ports:
            - containerPort: 80
            - containerPort: 443
          resources: {}
      restartPolicy: Always
status: {}
</code></pre> <p>Here is the Dockerfile:</p> <pre><code>FROM cytopia/apache-2.2:0.9
# lines that copied files were omitted for convenience
USER root
</code></pre> <p>I haven't found much relevant information. <code>USER root</code> had been omitted originally, so the user was apache. Using minishift ssh and docker exec, I noticed the user apache doesn't exist in the pod; but when building the image, I am unable to run a command to create the user, because the user already exists in the image. I believe this is the basis of the problem, but I haven't found a way to create the user in OpenShift, nor do I know why the user is removed when the pod is built.</p>
Brandon Kauffman
<p>OpenShift ignores the <code>USER</code>-directive from Dockerfiles and instead generates a random UID for the user in the container. There is a security rationale behind this.</p> <p>From the OpenShift <a href="https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p><strong>Support Arbitrary User IDs</strong></p> </blockquote> <blockquote> <p>By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node.</p> </blockquote> <blockquote> <p>For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.</p> </blockquote>
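<p>In practice, the fix is usually to make the directories your server writes to owned by the root <em>group</em> and group-writable, so the arbitrary UID (which always belongs to GID 0) can use them. A sketch (the exact paths depend on your Apache layout):</p> <pre><code>FROM cytopia/apache-2.2:0.9
# Group 0 owns the writable dirs; g=u gives the group the same rights as the owner.
RUN chgrp -R 0 /var/log /var/run &amp;&amp; \
    chmod -R g=u /var/log /var/run
</code></pre>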
Jonas
<p>I restarted a pod in my Kubernetes cluster, but I feel like someone rolled it back, because when I check, I see the last restart date does not match the date I restarted it.</p> <p>What I want is to have the full restart date history. Assume I restarted my pod yesterday and today; I want to have yesterday's date and today's date in my restart history. In this way I can be sure that someone restarted it after my restart.</p> <p>So my question is: is there a way to achieve this?</p> <p>Thanks in advance.</p>
kasko
<p>There is no easy built-in way to get the info that you want.</p> <p>The best would probably be to build a specific service to provide this info, if it is important for you. You could listen for Pod changes and Events in such a service to collect the data that you need.</p>
Jonas
<p>I am <a href="https://piotrminkowski.com/2021/02/18/blue-green-deployment-with-a-database-on-kubernetes/" rel="nofollow noreferrer">reading about blue green deployment with database changes on Kubernetes.</a> It explains very clearly and in detail how the process works:</p> <ol> <li>deploy new containers with the new versions while still directing traffic to the old containers</li> <li>migrate database changes and have the services point to the new database</li> <li>redirect traffic to the new containers and remove the old containers when there are no issues</li> </ol> <p>I have some questions particularly about the moment we switch from the old database to the new one.</p> <p>In step 3 of the article, we have <code>person-v1</code> and <code>person-v2</code> services that both still point to the unmodified version of the database (postgres v1):</p> <p><a href="https://i.stack.imgur.com/pCvg1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pCvg1.png" alt="before database migration" /></a></p> <p>From this picture, having <code>person-v2</code> point to the database is probably needed to establish a TCP connection, but it would likely fail due to incompatibility between the code and DB schema. But since all incoming traffic is still directed to <code>person-v1</code> this is not a problem.</p> <p>We now modify the database (to postgres v2) and switch the traffic to <code>person-v2</code> (step 4 in the article). <strong>I assume that both the DB migration and traffic switch happen at the same time?</strong> That means it is impossible for <code>person-v1</code> to communicate with postgres v2 or <code>person-v2</code> to communicate with postgres v1 at any point during this transition? Because this can obviously cause errors (i.e. inserting data in a column that doesn't exist yet/anymore).</p> <p><a href="https://i.stack.imgur.com/PcJO5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PcJO5.png" alt="after database migration" /></a></p> <p>If the above assumption is correct, then <strong>what happens if during the DB migration new data is inserted in postgres v1</strong>? Is it possible for data to become lost with unlucky timing? Just because the traffic switch happens at the same time as the DB switch, does not mean that any ongoing processes in <code>person-v1</code> can not still execute DB statements. It would seem to me that any new inserts/deletes/updates would need to propagate to postgres v2 as well for as long as the migration is still in progress.</p>
Babyburger
<blockquote> <p>I am reading about blue green deployment with database changes on Kubernetes. It explains very clearly and in detail how the process works</p> </blockquote> <p>It's an interesting article, but I would not do database migration as described there. Blue-green deployment does not make this much easier: you cannot atomically swap the traffic, since replicas may still be processing requests on the old version, and you don't want to cut ongoing requests.</p> <p>The DB change must be done in a way that does not break the first version of the code. This may need to happen in multiple steps.</p> <p>Considering the same example, there are multiple different solutions. E.g. first add a <em>view</em> with the new column-names, then deploy a version of the code that uses the view, then change the column-names and finally deploy a newer version of the code that uses the new column-names. Alternatively you can <strong>add</strong> columns with the new column-names <em>alongside</em> the old column-names and let the old version of the code still use the old column-names and the new version of the code use the new column-names, and finally remove the old column-names when there is <em>no running replica of the old code</em>.</p> <p>With this approach, either rolling upgrades or blue-green deployments can be used.</p>
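<p>A sketch of the first step in SQL terms (table and column names are assumptions):</p> <pre><code>-- v1 code keeps reading the person table unchanged, while v2 code reads
-- the view that already exposes the new column names.
CREATE VIEW person_new AS
  SELECT id,
         first_name AS given_name,
         last_name  AS family_name
  FROM person;
</code></pre>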
Jonas
<p>In my OpenShift cluster, I noticed that all my pods have a port that's open without me specifying it. It's port 443, which is apparently used for the k8s API, as mentioned in <a href="https://stackoverflow.com/questions/47523136/whats-the-purpose-of-the-default-kubernetes-service#:%7E:text=AFAIK%20the%20kubernetes%20service%20in,(%20Typically%20kubernetes%20API%20server).&amp;text=Please%20note%20that%20the%20kubernetes,the%20Endpoints%20IP%20of%20kubernetes.">this post</a>.</p> <p>Even after reading, I still don't understand something.</p> <p>I understand that the service exists and forwards to all pods. But for the pods to receive and send requests using this service, the port must be open in the containers. Somehow, even without specifying a port on my pod's container, that default 443 port is open, which allows me to do something like this:</p> <ol> <li>Create a service with the target port set to 443</li> <li>Set up a pod with no container port open.</li> <li>Successfully use the service to communicate with the container.</li> </ol> <p>Is this safe? What opens the container port without me specifying it? Is there a way to prevent this from happening?</p>
Daniel Karapishchenko
<blockquote> <p>I noticed that all my pods have a port that's open without me specifying it.</p> </blockquote> <p>Yes, the <code>containerPort:</code> is just metadata; the container might listen on other ports as well.</p> <blockquote> <p>Is this safe? What opens the container port without me specifying it? Is there a way to prevent this from happening?</p> </blockquote> <p>Yes, this is what <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Kubernetes Network Policies</a> are for.</p>
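<p>A minimal sketch of a <code>NetworkPolicy</code> restricting ingress (labels and port are assumptions; note that a network plugin that enforces NetworkPolicies, e.g. Calico, must be installed for it to have any effect):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: limit-ingress
spec:
  podSelector:
    matchLabels:
      app: my-app       # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080           # and only on this port
</code></pre>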
Jonas
<p>I am using the below yaml file, in which I am running 3 containers inside a pod using a Tomcat image. I have given the service type as LoadBalancer, but I am not able to access Tomcat in the external browser.</p> <pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: tomcat
  labels:
    name: tomcat
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: tomcat
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: jpetstores1
        image: petstoremysql2:latest
        imagePullPolicy: Never
        env:
        - name: sname
          value: petstore1
      - name: jpetstores2
        image: petstoremysql2:latest
        imagePullPolicy: Never
        env:
        - name: sname
          value: petstore2
      - name: jpetstores3
        image: petstoremysql2:latest
        imagePullPolicy: Never
        env:
        - name: sname
          value: petstore3
---
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: tomcat-service
spec:
  selector:
    app: tomcat
  ports:
    - protocol: TCP
      port: 81
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tomcat
  namespace: tomcat
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tomcat
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 97
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: tomcat-pdb1
  namespace: tomcat
spec:
  minAvailable: 1
  selector:
    matchLabels:
      run: tomcat
</code></pre> <p><strong>Note: If only one container is running in it, I can access it in the browser.</strong></p> <p>I need to run all three containers in the same pod with the same image, and access Tomcat in the browser through the port given in the LoadBalancer.</p>
AnandArc
<blockquote> <p>I want a simple configuration where I am creating multiple instances of Tomcat and load balancing to them. Say 3 instances to start with and then can go up to 5.</p> </blockquote> <p>You achieve this by using a single <code>Deployment</code> manifest, with <strong>one</strong> instance of your container.</p> <p>You can adjust the number of replicas to 3, by changing (from <code>1</code> to <code>3</code>) this line in your <code>Deployment</code>:</p> <pre><code>spec:
  replicas: 3
</code></pre> <p>If you want to use the <code>HorizontalPodAutoscaler</code> and scale from 3 to 5, you can use these values in the manifest:</p> <pre><code>spec:
  maxReplicas: 5
  minReplicas: 3
</code></pre>
Jonas
<p>I want to architect a message application using Websockets running on Kubernetes and want to know how to solve some problems...</p> <p><strong>Context</strong></p> <p>So, say you are building a chat application... Your chat application needs to frequently communicate with the back-end to work (e.g. receive sent messages, send messages etc.), so assuming the back-end will be built using Node &amp; the front-end is built using Electron, I think it would make sense to use Web-sockets in this scenario.</p> <p><strong>Problem 1 - Load Balancing</strong></p> <p>Your Web-socket server is suffering from bad performance, so you want to fix that. In this scenario I think it would make sense to make several instances of the Web-socket server &amp; balance the incoming traffic among them equally (load-balancer). For HTTP/HTTPS requests this makes sense; which server instance the request is redirected to does not matter as it's a "one time" request, but Web-sockets are different: if the client connected to instance 3, it would make sense if the rest of the incoming requests came into instance 3 (as the server might keep client state (like whether or not the client is authenticated))</p> <p><strong>Problem 2 - Division by Concerns</strong></p> <p>As the chat application gets bigger &amp; bigger, more &amp; more things need to be handled by the Web-socket servers... So it would make sense to split it into different concerns... (e.g. messaging, user authentication etc.) But assuming client state has to be kept, how can these different concerns know that state? (Shared state among concerns)</p> <p><strong>Problem 3 - Event Emitting</strong></p> <p>You implemented an event which fires, for each client, every time a user sends a message. How can this be achieved when there are several instances? (e.g. Client 1 is connected to Web-socket server instance 1, client 1 sends a message... Client 2 is connected to Web-socket server instance 2 &amp; the event needs to be fired for the client...)</p>
VimHax
<h2>Websockets: One request - long running connection</h2> <p>Your <strong>load balancing</strong> problem is handled by the nature of the protocol: clients will be load balanced to different instances, but when using Websockets, clients do <strong>one request</strong> to connect and then keep that TCP connection to the backend, sending multiple messages on the <em>same connection</em>.</p> <h2>Separation of concerns</h2> <blockquote> <p>more things need to be handled by the Web-socket servers... So it would make sense to split it into different concerns... (e.g. messaging, user authentication etc.)</p> </blockquote> <p>Yes, you should do <a href="https://en.wikipedia.org/wiki/Separation_of_concerns" rel="nofollow noreferrer">separation of concerns</a>. E.g. you could have one authentication service do an <a href="https://openid.net/connect/" rel="nofollow noreferrer">OpenID Connect</a> authentication, and the user can use the <em>access token</em> when connecting with e.g. Websockets or sending other API requests.</p> <p>A web client usually allows up to two Websocket connections to the same domain, so it is better to only have one Websocket service. But you could use some kind of message broker, e.g. <a href="https://www.hivemq.com/blog/mqtt-essentials-special-mqtt-over-websockets/" rel="nofollow noreferrer">MQTT over Websocket</a>, and route messages to different services.</p> <h2>Emitting messages</h2> <blockquote> <p>You implemented an event which fires, for each client, every time a user sends a message.</p> </blockquote> <p>If you use a message broker as described above, all clients can <em>subscribe</em> to <em>channels</em>, and when you <em>publish</em> a message, it will be routed to all subscribers.</p>
Jonas
<p>I have a Kubernetes project managed by Kustomize. This project deploys two deployments in the same namespace.</p> <p>Basically, I have the following directory structure:</p> <pre><code>kustomize -&gt; app1 -&gt; kustomization.yaml
kustomize -&gt; app1 -&gt; namespace.yaml
kustomize -&gt; app1 -&gt; app1.yaml
kustomize -&gt; app2 -&gt; kustomization.yaml
kustomize -&gt; app2 -&gt; namespace.yaml
kustomize -&gt; app2 -&gt; app2.yaml
</code></pre> <p>The <code>namespace.yaml</code> files create in both cases the same namespace, so that the first application deployed creates the namespace and the second reuses it. Obviously, the problem is when I try to remove only one of these applications:</p> <pre><code>kubectl delete -k kustomize/app1
</code></pre> <p>removes both applications, because the namespace is removed and app2 with it. An easy solution to this problem is to move <code>namespace.yaml</code> outside the folders and just apply it standalone. However, this approach requires the user to remember to run:</p> <pre><code>kubectl apply -f namespace.yaml
</code></pre> <p>before:</p> <pre><code>kubectl apply -k kustomize/app1
kubectl apply -k kustomize/app2
</code></pre> <p>I know another possible solution is via script. My question is whether there exists a way to better manage namespace removal with Kustomize, so that the namespace is removed only if it is empty.</p>
Salvatore D'angelo
<p>You can have this directory structure:</p> <pre><code>kustomize -&gt; ns -&gt; namespace.yaml
kustomize -&gt; app1 -&gt; kustomization.yaml
kustomize -&gt; app1 -&gt; app1.yaml
kustomize -&gt; app2 -&gt; kustomization.yaml
kustomize -&gt; app2 -&gt; app2.yaml
</code></pre> <p>Also you can add a <code>kustomization.yaml</code> at the root, so that you only need this to apply all:</p> <pre><code>kubectl apply -k kustomize/
</code></pre> <p>That will create the namespace and both apps.</p> <p>And you can still delete only one app if you want:</p> <pre><code>kubectl delete -k kustomize/app1
</code></pre> <p>And since you don't have a <code>namespace.yaml</code> in that directory, it does not delete the namespace.</p>
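<p>For completeness, a minimal sketch of what the root <code>kustomization.yaml</code> could look like - the <code>ns</code>, <code>app1</code> and <code>app2</code> directory names are the ones assumed above, and each referenced subdirectory needs its own <code>kustomization.yaml</code> listing its resources:</p> <pre><code># kustomize/kustomization.yaml
resources:
- ns
- app1
- app2
</code></pre>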
Jonas
<p>We have a docker container which is a CLI application; it runs, does its thing and exits.</p> <p>I got the assignment to put this into Kubernetes, but that container can not be deployed as a Deployment, since it exits and is then considered a crash loop.</p> <p>So the next question is if it can be put in a job. The job runs and gets restarted every time a request comes in over the proxy. Is that possible? Can a job be restarted externally with different parameters in Kubernetes?</p>
Serve Laurijssen
<blockquote> <p>So the next question is if it can be put in a job.</p> </blockquote> <p>If it is supposed to just run once, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Kubernetes Job</a> is a good fit.</p> <blockquote> <p>The job runs and gets restarted every time a request comes in over the proxy. Is that possible?</p> </blockquote> <p>This can not easily be done without external add-ons. Consider using <a href="https://knative.dev/" rel="nofollow noreferrer">Knative</a> for this.</p> <blockquote> <p>Can job be restarted externally with different parameters in kubernetes?</p> </blockquote> <p>Not easily; you need to interact with the Kubernetes API to create a new Job for this, if I understand you correctly. One way to do this is to have a Job with a <code>kubectl</code> image and proper RBAC permissions on the ServiceAccount to create new Jobs - but this will involve some latency since it is two jobs.</p>
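<p>As a reference, a minimal <code>Job</code> manifest for a run-once CLI container could look like the sketch below (image name, command arguments and retry count are placeholders for your own container):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-cli-job
spec:
  backoffLimit: 2          # retry a failed run at most twice
  template:
    spec:
      restartPolicy: Never # do not restart the container when it exits
      containers:
      - name: my-cli
        image: registry.example.com/my-cli:latest
        args: ["--some-parameter", "some-value"]
</code></pre>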
Jonas
<p>I am surprised that nobody has yet asked this, but what exactly is <code>deployment.apps</code>?</p> <p>I see it often in commands, e.g.</p> <pre><code>kubectl rollout pause deployment.apps/nginx-deployment
</code></pre> <p>or even used interchangeably with the <code>deployments</code> keyword: <code>kubectl get deployments</code> = <code>kubectl get deployment.apps</code></p> <p>I do not understand what it indicates though. Even in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">K8s official docs</a>, they just take for granted that the reader understands the term.</p> <p>Could someone please explain it to me?</p>
Dimi
<p>Kubernetes API has its different resources (e.g. Pods, Deployments, Ingress) grouped in what they call &quot;<a href="https://kubernetes.io/docs/reference/using-api/#api-groups" rel="nofollow noreferrer">api groups</a>&quot; and in the notation <code>deployment.apps</code> - &quot;<a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/" rel="nofollow noreferrer">deployment</a>&quot; is the resource name and the &quot;apps&quot; is the api group name.</p> <p>Also see the motivation for <a href="https://github.com/kubernetes/community/blob/a8d041d470d8b72f8a7fb4e8661ccda16b6b4c0f/contributors/design-proposals/api-machinery/api-group.md" rel="nofollow noreferrer">API groups</a></p>
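<p>You can use the group in <code>kubectl</code> too; the fully qualified <code>name.version.group</code> form is useful when two API groups define a resource with the same name:</p> <pre><code># these all refer to the same resource kind
kubectl get deployments
kubectl get deployments.apps
kubectl get deployments.v1.apps
</code></pre>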
Jonas
<p>We currently have a VM environment set up with an internal network and a DMZ network. Historically we had no open ports between these environments, but needs arose for communication between the internet and services/APIs running on our internal servers.</p> <p>We decided to use our DMZ network as a proxy/gateway, where we specifically use Kong Gateway, exposing ports 80/443 to the internet, and then proxying/forwarding requests through a different port opened up between the DMZ server and the specific internal server that needs to handle this communication. A random, non-standard, high port is being used for <em>all</em> requests between the DMZ server and our internal network, and we then use a reverse proxy on our internal server to route specific request via hostnames to specific APIs/services on the internal server.</p> <p>Now, we're in the process of converting our internal environment to a k8s cluster, and I'm interested in knowing if there'd be any &quot;real&quot; difference to security, if we were to forego the DMZ proxy, and exposing ports 80/443 directly from the internet to our internal k8s cluster, and handle all the security/authentication/authorization through the ingress controller on our cluster.</p> <p>It would simplify our infrastructure a decent bit, to not have this DMZ proxy running.</p> <p>From my understanding the purpose of the DMZ proxy was that if a breach were to happen in the chain, it would be much harder to further penetrate our internal network, if the breach was only on the DMZ server. But my networking and security knowledge is not good enough to say if this is actually true, and it just provides a false sense of extra security, in which case, we'd have the exact same level of security with exposing those same ports directly on our internal k8s cluster, while simplifying the overall infrastructure.</p>
Dynde
<blockquote> <p>if there'd be any &quot;real&quot; difference to security, if we were to forego the DMZ proxy, and exposing ports 80/443 directly from the internet to our internal k8s cluster, and handle all the security/authentication/authorization through the ingress controller on our cluster.</p> </blockquote> <blockquote> <p>It would simplify our infrastructure a decent bit, to not have this DMZ proxy running.</p> </blockquote> <p>You probably want a &quot;Gateway&quot; outside the cluster, with a <strong>static</strong> IP-address. The nodes in the cluster are more <em>dynamic</em>; you want to throw away the old ones and create new ones when upgrading e.g. the Linux kernel.</p> <blockquote> <p>From my understanding the purpose of the DMZ proxy was that if a breach were to happen in the chain, it would be much harder to further penetrate our internal network, if the breach was only on the DMZ server.</p> </blockquote> <p>The book <a href="https://rads.stackoverflow.com/amzn/click/com/1491962194" rel="nofollow noreferrer">Zero Trust Networks</a> covers this well. Things have changed: the older way of using a &quot;DMZ&quot; to protect internal networks, called &quot;perimeter security&quot;, is now being replaced with a &quot;Zero Trust Networking&quot; model, where every host (or Pod) is responsible for its own security. On Kubernetes, to get this hardened, you can use a &quot;Service Mesh&quot; to implement mutual TLS between all services, see e.g. <a href="https://istio.io/" rel="nofollow noreferrer">istio</a>.</p>
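<p>As an illustration of the service-mesh approach: with Istio, enforcing mutual TLS for all workloads in the mesh can be done with a single <code>PeerAuthentication</code> resource. This is a sketch, assuming Istio is already installed and sidecars are injected into your workloads:</p> <pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when applied in the Istio root namespace
spec:
  mtls:
    mode: STRICT            # only accept mutual-TLS traffic between workloads
</code></pre>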
Jonas
<p>We have a Kubernetes cluster that has several deployments, each of which can have multiple pods running at a time (so far so standard). We need to do some database migrations (not hosted on the cluster), and can't have any of our code potentially altering values while that is happening - as such we need to take offline all pods of a few of the deployments for a short while, before spinning them back up again.</p> <p>What we would like to do is find a reasonable way to have traffic that would be routed to those pods instead routed to just simple HTML error pages (where appropriate) while we're working, but without having to manually touch each pod as they can always be restarted or scaled while we're working.</p> <p>Some relevant information that may help answer our query:</p> <ul> <li>We have a load balancer for the cluster as a whole which sits on top of our SSL terminator/reverse proxy (currently running on multiple pods as well).</li> <li>We have load balancers sitting in front of each deployment (i.e. the load balancer is responsible for routing subdomain traffic between the pods for a given deployment)</li> <li>We are hosted on Azure Kubernetes Service (if that makes a difference)</li> <li>The pods we want to take offline are running linux containers with Nginx</li> </ul>
Jake Conkerton-Darby
<blockquote> <p>How can I take all pods of a Kubernetes deployment offline?</p> </blockquote> <p>I would recommend scaling the Deployment to <strong>0 replicas</strong> in this case.</p> <p>Use e.g.</p> <pre><code>kubectl scale deployment &lt;my-app&gt; --replicas=0
</code></pre> <p>You can easily restore this by scaling up to the number of replicas that you want.</p> <p>You can also scale multiple deployments at the same time, if they have a common <em>label</em>:</p> <pre><code>kubectl scale deployment -l my-label=my-value --replicas=0
</code></pre> <p>You can add labels to a Deployment with</p> <pre><code>kubectl label deployment &lt;my-app&gt; my-label=my-value
</code></pre>
Jonas
<p>I made a demo with kubernetes/go-client where I tried to list pods from my cluster.</p> <pre><code> config, err := rest.InClusterConfig()
 if err != nil {
  panic(err.Error())
 }
 clientset, err := kubernetes.NewForConfig(config)
 if err != nil {
  panic(err.Error())
 }
 pods, err := clientset.CoreV1().Pods(&quot;&quot;).List(context.TODO(), metav1.ListOptions{})
 fmt.Fprint(w, &quot;There are d pods in the cluster\n&quot;, len(pods.Items))
</code></pre> <p>I created a serviceaccount token to assign to the pod where this code is running.</p> <p>But when the code is executed, pods.Items has no pods.</p> <p>I deployed this pod inside minikube. When I launch some kubectl command for listing pods, I do get the resources, so it is not a permissions problem.</p> <p>I wonder what is happening and how I can fix it.</p> <hr /> <p>Repository <a href="https://github.com/srpepperoni/inframanager.git" rel="nofollow noreferrer">https://github.com/srpepperoni/inframanager.git</a></p> <p>Image is pushed into: <a href="https://hub.docker.com/r/jaimeyh/inframanager" rel="nofollow noreferrer">https://hub.docker.com/r/jaimeyh/inframanager</a></p> <p>The endpoint I have problems with is this one:</p> <pre><code>mux.HandleFunc(&quot;/getPods&quot;, GetPodsFromNamespace)
</code></pre>
Jaime Yera
<p>You need to check if the <code>err</code> on the last line is non-nil.</p> <pre><code>pods, err := clientset.CoreV1().Pods(&quot;&quot;).List(context.TODO(), metav1.ListOptions{})
</code></pre> <blockquote> <p>OK, there is the problem. pods is forbidden: User &quot;system:serviceaccount:mis-pruebas:sa-prueba-go&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope</p> </blockquote> <p>As the error message indicates, the ServiceAccount does not have permission to list pods at the cluster scope. You need to create a Role (or, for cluster scope, a ClusterRole) and bind it to the ServiceAccount.</p> <p>The article <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Using RBAC Authorization</a> even has a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-example" rel="nofollow noreferrer">role example</a> for how to create such a role.</p>
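<p>A sketch of what that could look like for the ServiceAccount in the error message (namespace and ServiceAccount name taken from the question; adjust to your setup):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: mis-pruebas
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: mis-pruebas
subjects:
- kind: ServiceAccount
  name: sa-prueba-go
  namespace: mis-pruebas
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>Note that a namespaced <code>Role</code> only grants access within that namespace, so the code would then need to list pods in that namespace, e.g. <code>clientset.CoreV1().Pods(&quot;mis-pruebas&quot;)</code>, instead of the cluster scope.</p>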
Jonas
<p><strong>Context:</strong> I am using Linux and Windows nodes in a Kubernetes cluster. Depending on the OS where a pod is deployed, I need to use a specific image.</p> <p><strong>Question:</strong> Is there a way to express this in a Kubernetes yaml files: &quot;if this label exist on the pod you are deploying, then use this image. Otherwise, use this other image.&quot;.</p> <p><strong>Other options considered:</strong></p> <ul> <li>Have two copies of the same yaml but each configured with a OS-specific image with a nodeSelector in each yaml targeting either Linux or Windows nodes. This is not ideal as we need to keep both yaml files in sync if we need to change something in one.</li> <li>Helm charts. I guess that would solve the issue of having to maintain two similar yaml files by using templates. But still, it seems overkill for what I need if there is an easy way to do it in yaml.</li> </ul>
Absolom
<p>The proper way to do this is to build &quot;multi-arch&quot; images, e.g. so that your container image contains binaries for multiple architectures. See e.g. <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/building-windows-multi-arch-images" rel="nofollow noreferrer">Building Windows Server multi-arch images</a> and <a href="https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/" rel="nofollow noreferrer">Docker: Multi-arch build and images, the simple way</a> - but it still seems to be an &quot;experimental feature&quot;. A drawback with this is that the images will end up being bigger, which is not so welcome if you want good elasticity (e.g. being able to quickly scale up with more pods) - this is especially true for Windows images.</p> <p>Alternatively, you need to use a separate <code>Deployment</code> for each architecture and use different <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and Tolerations</a> on the nodes and the pods.</p> <p>You can keep this relatively clean by using <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kubectl kustomize</a> and only overriding a small part of the manifests.</p>
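<p>A sketch of a per-OS <code>Deployment</code> variant: besides taints and tolerations, a simple way to pin pods to the matching node OS is the well-known <code>kubernetes.io/os</code> node label with a <code>nodeSelector</code> (image and app names here are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-windows
spec:
  selector:
    matchLabels:
      app: my-app
      os: windows
  template:
    metadata:
      labels:
        app: my-app
        os: windows
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # schedule only to Windows nodes
      containers:
      - name: my-app
        image: example.com/my-app:1.0.0-windows
</code></pre>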
Jonas
<p>I have just created a GKE cluster on Google Cloud platform. I have installed in the cloud console <code>helm</code> :</p> <pre><code>$ helm version version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"} </code></pre> <p>I have also created the necessary <code>serviceaccount</code> and <code>clusterrolebinding</code> objects:</p> <pre><code>$ cat helm-rbac.yaml apiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: tiller namespace: kube-system $ kubectl apply -f helm-rbac.yaml serviceaccount/tiller created clusterrolebinding.rbac.authorization.k8s.io/tiller created </code></pre> <p>However trying to initialise <code>tiller</code> gives me the following error:</p> <pre><code>$ helm init --service-account tiller --history-max 300 Error: unknown flag: --service-account </code></pre> <p>Why is that?</p>
pkaramol
<blockquote> <p>However trying to initialise tiller gives me the following error:</p> <p>Error: unknown flag: --service-account</p> <p>Why is that?</p> </blockquote> <p><a href="https://helm.sh/blog/helm-3-released/" rel="noreferrer">Helm <strong>3</strong> is a major upgrade</a>. The <strong>Tiller</strong> component is now obsolete.</p> <p>There is no command <code>helm init</code> therefore also the flag <code>--service-account</code> is removed.</p> <blockquote> <p>The internal implementation of Helm 3 has changed considerably from Helm 2. The most apparent change is the <strong>removal of Tiller</strong>.</p> </blockquote>
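<p>In practice this means there is nothing to initialize with Helm 3; with a working kubeconfig you can install charts directly. For example (repository, chart and release names here are just illustrative):</p> <pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/nginx
</code></pre>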
Jonas
<pre><code>initContainers:
  - name: git-clone-openg2p
    image: bitnami/odoo
    command: [&quot;/bin/sh&quot;,&quot;-c&quot;]
    args: ['apt-get git &amp;&amp; git clone https://github.com/repo.git &amp;&amp; git clone https://github.com/repo.git /bitnami/odoo']
    volumeMounts:
      - name: odoo-data
        mountPath: /bitnami/odoo
</code></pre> <p>I need to add add-ons by cloning a git repository into <code>/bitnami/odoo</code>. This is my init container configuration in the yaml file. When I run <code>helm install</code> and the pod is created, it says &quot;Invalid operation git&quot; in the pod logs.</p>
Ricardo1998
<p>As far as I know, there is no <code>apt-get git</code> command - <code>git</code> is not an <code>apt-get</code> subcommand, which is why you get &quot;Invalid operation git&quot;. You probably want:</p> <pre><code>apt-get install -y git
</code></pre>
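<p>A sketch of the corrected <code>args</code> line for the init container in the question (repository URLs kept as in the question; <code>apt-get update</code> is added because package lists are often not present in container images):</p> <pre><code>args: ['apt-get update &amp;&amp; apt-get install -y git &amp;&amp; git clone https://github.com/repo.git /bitnami/odoo']
</code></pre> <p>Note that <code>git clone</code> may also fail if <code>/bitnami/odoo</code> is not empty, and installing packages requires the container to run with sufficient privileges.</p>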
Jonas
<p>How do I ping my API which is running in a Kubernetes environment in another namespace rather than default? Let's say I have pods running in 3 namespaces - default, dev, prod. I have an ingress load balancer installed and configured the routing. I have no problem in accessing the default namespace - <a href="https://localhost/myendpoint" rel="nofollow noreferrer">https://localhost/myendpoint</a>.... But how do I access the APIs that are running different image versions in other namespaces, e.g. dev or prod? Do I need to add additional configuration in the service or ingress-service files?</p> <p>EDIT: my pods are RESTful APIs that communicate over HTTP requests. All I'm asking is how to access my pod which runs in another namespace rather than default. The deployments communicate between each other with no problem. Let's say I have a front end application running and want to access it from the browser, how is it done? I can access it if the pods are in the default namespace by hitting <a href="http://localhost/path" rel="nofollow noreferrer">http://localhost/path</a>... but if I delete all the pods from the default namespace and move all the services and deployments into the dev namespace, I cannot access it anymore from the browser with the same url. Does it have a specific path for different namespaces, like <a href="http://localhost/dev/path" rel="nofollow noreferrer">http://localhost/dev/path</a>? Do I need to configure it?</p> <p>Hopefully it's clear enough. Thank you</p>
Vinod T Kumar
<h2>Route traffic with Ingress to Service</h2> <p>When you want to route requests from external clients, via <code>Ingress</code>, to a <code>Service</code>, you should put the <code>Ingress</code> and <code>Service</code> objects in the <em>same namespace</em>. I recommend using different <em>domains</em> in your <code>Ingress</code> for the environments.</p> <h2>Route traffic from Service to Service</h2> <p>When you want to route traffic from a pod in your cluster to a <code>Service</code>, possibly in another namespace, it is easiest to use <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Service Discovery with DNS</a>, e.g. send the request to:</p> <pre><code>&lt;service-name&gt;.&lt;namespace&gt;.svc.&lt;cluster-domain&gt;
</code></pre> <p>which in most clusters is</p> <pre><code>&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local
</code></pre>
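<p>For example, to reach a Service named <code>my-api</code> in the <code>dev</code> namespace from a pod in any other namespace (names here are placeholders):</p> <pre><code>curl http://my-api.dev.svc.cluster.local:8080/path
# the short form &lt;service-name&gt;.&lt;namespace&gt; usually works too:
curl http://my-api.dev:8080/path
</code></pre>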
Jonas
<p>I tried to use K8s to setup spark cluster (I use standalone deployment mode, and I cannot use k8s deployment mode for some reason)</p> <p>I didn't set any cpu related arguments.</p> <p>for spark, that means:</p> <blockquote> <p>Total CPU cores to allow Spark applications to use on the machine (default: all available); only on worker</p> <p><a href="http://spark.apache.org/docs/latest/spark-standalone.html" rel="nofollow noreferrer">http://spark.apache.org/docs/latest/spark-standalone.html</a></p> </blockquote> <p>for k8s pods, that means:</p> <blockquote> <p>If you do not specify a CPU limit for a Container, then one of these situations applies:</p> <ul> <li><p>The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.</p></li> <li><p>The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the CPU limit.</p></li> </ul> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/</a></p> </blockquote> <pre><code>... Addresses: InternalIP: 172.16.197.133 Hostname: ubuntu Capacity: cpu: 4 memory: 3922Mi pods: 110 Allocatable: cpu: 4 memory: 3822Mi pods: 110 ... </code></pre> <p>But my spark worker only use 1 core (I have 4 cores on the worker node and the namespace has no resource limits).</p> <p>That means the spark worker pod only used 1 core of the node (which should be 4).</p> <p>How can I write yaml file to set the pod to use all available cpu cores?</p> <p>Here is my yaml file:</p> <pre><code>--- apiVersion: v1 kind: Namespace metadata: name: spark-standalone --- kind: DaemonSet apiVersion: apps/v1 metadata: name: spark-slave namespace: spark-standalone labels: k8s-app: spark-slave spec: selector: matchLabels: k8s-app: spark-slave updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 template: metadata: name: spark-slave namespace: spark-standalone labels: k8s-app: spark-slave spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/edge operator: Exists hostNetwork: true containers: - name: spark-slave image: spark:2.4.3 command: ["/bin/sh","-c"] args: - " ${SPARK_HOME}/sbin/start-slave.sh spark://$(SPARK_MASTER_IP):$(SPARK_MASTER_PORT) --webui-port $(SPARK_SLAVE_WEBUI_PORT) &amp;&amp; tail -f ${SPARK_HOME}/logs/* " env: - name: SPARK_MASTER_IP value: "10.4.20.34" - name: SPARK_MASTER_PORT value: "7077" - name: SPARK_SLAVE_WEBUI_PORT value: "8081" --- </code></pre>
Green
<h2>Kubernetes - No upper bound</h2> <blockquote> <p>The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running</p> </blockquote> <p>Unless you configure a <code>limit</code> on CPU for your pod, it <em>can</em> use all available CPU resources on the node.</p> <p><strong>Consider dedicated nodes</strong></p> <p>If you are running other workloads on the same node, they also consume CPU resources, and may be guaranteed CPU resources if they have configured a <code>request</code> for CPU. Consider using a dedicated node for your workload using <code>NodeSelector</code> and <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">Taints and Tolerations</a>.</p> <h2>Spark - No upper bound</h2> <p>You <a href="https://spark.apache.org/docs/latest/spark-standalone.html" rel="nofollow noreferrer">configure the slave</a> with parameters to <code>start-slave.sh</code>, e.g. <code>--cores X</code> to <em>limit</em> CPU core usage.</p> <blockquote> <p>Total CPU cores to allow Spark applications to use on the machine (<strong>default: all available</strong>); only on worker</p> </blockquote> <h2>Multithreaded workload</h2> <p>In the end, whether a pod can use multiple CPU cores depends on how your application uses threads. Some things only use a single thread, so the application must be designed for <strong>multithreading</strong> and have something <strong>parallelized</strong> to do.</p>
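<p>For example, to pin the Spark worker to all 4 cores explicitly, you could pass <code>--cores</code> in the <code>args</code> of the container from the question - a sketch, with only the relevant line shown:</p> <pre><code>args:
  - "${SPARK_HOME}/sbin/start-slave.sh spark://$(SPARK_MASTER_IP):$(SPARK_MASTER_PORT) --cores 4 --webui-port $(SPARK_SLAVE_WEBUI_PORT) &amp;&amp; tail -f ${SPARK_HOME}/logs/*"
</code></pre>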
Jonas
<p>I have a GKE cluster running on Google Cloud. I created a persistent volume and mounted it in my deployments, so the connectivity between my application and the persistent storage is established successfully.</p> <p>I also have Filebeat running on the same cluster using the below link <a href="https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml" rel="nofollow noreferrer">https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml</a></p> <p>Both the application and Filebeat volumes mounted successfully. The PV volumes are created using <strong>access modes: ReadWriteOnce</strong>, which is what GCE disks support. But my cluster has many nodes running, and my application volume is not mounted for all running pods. In Google Cloud, PV volumes do not support <strong>access modes: ReadWriteMany</strong>. So my Filebeat fails too, because the application volume is not mounted properly, while Filebeat is capable of running on many nodes using a DaemonSet. Is there a way to resolve the above issue?</p>
klee
<p>Filebeat should consume logs a bit differently than via Persistent Volumes. Typically applications log to <em>stdout</em> and then the <em>container runtime</em> (e.g. Docker daemon or containerd) persists the logs on the local node.</p> <p>Filebeat needs to run on every node, so it should be deployed using a <code>DaemonSet</code>, as you say. But it should also mount the log directories from the node using <code>hostPath</code> volumes.</p> <p>See this part of the <code>DaemonSet</code> that you linked (no Persistent Volumes are used here):</p> <pre><code>      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
</code></pre>
Jonas
<p>I have 2 pods, one that is writing files to a persistent volume and the other one supposedly reads those files to make some calculations.</p> <p>The first pod writes the files successfully and when I display the content of the persistent volume using <code>print(os.listdir(persistent_volume_path))</code> I get all the expected files. However, the same command on the second pod shows an empty directory. (The mountPath directory <code>/data</code> is created but empty.)</p> <p>This is the TFJob yaml file:</p> <pre><code>apiVersion: kubeflow.org/v1 kind: TFJob metadata: name: pod1 namespace: my-namespace spec: cleanPodPolicy: None tfReplicaSpecs: Worker: replicas: 1 restartPolicy: Never template: spec: containers: - name: tensorflow image: my-image:latest imagePullPolicy: Always command: - &quot;python&quot; - &quot;./program1.py&quot; - &quot;--data_path=./dataset.csv&quot; - &quot;--persistent_volume_path=/data&quot; volumeMounts: - mountPath: &quot;/data&quot; name: my-pv volumes: - name: my-pv persistentVolumeClaim: claimName: my-pvc </code></pre> <p>(respectively pod2 and program2.py for the second pod)</p> <p>And this is the volume configuration:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc namespace: my-namespace labels: type: local app: tfjob spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <hr /> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: my-pv namespace: my-namespace labels: type: local app: tfjob spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/data&quot; </code></pre> <p>Does anyone have any idea where's the problem exactly and how to fix it?</p>
camelia
<p>When two pods should access a shared Persistent Volume with <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a> <code>ReadWriteOnce</code>, concurrently - then the two pods must be running on the <strong>same node</strong> since the volume can only be mounted on a single node at a time with this access mode.</p> <p>To achieve this, some form of <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Pod Affinity</a> must be applied, such that they are scheduled to the same node.</p>
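<p>A sketch of how such co-scheduling could be expressed in the pod template of the second workload - the <code>app: writer</code> label is an assumption and must match the labels of the first workload's pods:</p> <pre><code>affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: writer                         # pods we must be co-located with
      topologyKey: kubernetes.io/hostname     # "same node"
</code></pre>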
Jonas
<p>I'm kinda new to Kubernetes and I would like to know which kind of configuration/state/metadata files etcd cluster holds? didn't find any examples on that. just general explanation.</p> <p>Thanks :)</p>
monsal
<blockquote> <p>which kind of configuration/state/metadata files etcd cluster holds?</p> </blockquote> <p>Etcd is the database for Kubernetes, so it essentially stores all configuration/state/metadata that you have in your cluster.</p> <h3>How Kubernetes works</h3> <p>Kubernetes is an eventual consistency system.</p> <ol> <li>When you want to create something in the cluster, you save the <em>configuration</em> for your <em>desired state</em> - the state that you want to achieve.</li> <li>Kubernetes has controllers that regularly (or on change) check what <em>desired state</em> is stored, then check the <em>current state</em> - if there is a mismatch, the controller will try to move it to the <em>desired state</em>.</li> </ol> <p><strong>Example</strong></p> <p>There is no app in the cluster. Now you want to deploy an app that you have created.</p> <ol> <li>You create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>-manifest in the cluster with the image for your app.</li> <li>Controllers in the cluster will detect what you want; they also see that the app is not in the cluster. The controllers now have to achieve the state you asked for, by e.g. scheduling the instances to nodes; the nodes then need to pull the image from a registry and then start the app (Pod).</li> <li>Controllers continuously maintain the <em>desired state</em>. If your app crashes, the controllers again need to work to achieve the <em>desired state</em>, and create a new instance (Pod).</li> </ol> <h2>Resources in etcd</h2> <p>In the above example, you created a resource, a <code>Deployment</code>, that is stored in etcd. But the controllers also create resources, e.g. a <code>ReplicaSet</code> and <code>Pod</code>s, when you create your <code>Deployment</code>.</p> <p><strong>Separation of Concern</strong></p> <p>When you created the <code>Deployment</code>-manifest, you wrote some <em>metadata</em> and then a <code>spec:</code> - this is the <em>desired state</em>. The controllers write their state in <code>status:</code>, and you can inspect this with e.g. <code>kubectl get deployment my-app -o yaml</code>; you will see if the controllers have written any issues or that the <em>condition</em> is <em>Running</em>.</p>
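<p>If you want to see this for yourself, you can list the keys Kubernetes stores - a sketch, assuming you run a cluster where you control etcd (e.g. kubeadm-based) and have <code>etcdctl</code> v3 configured with the right certificates:</p> <pre><code># list all keys - everything lives under the /registry prefix
etcdctl get /registry --prefix --keys-only

# fetch the stored Deployment object (stored in a binary protobuf encoding)
etcdctl get /registry/deployments/default/my-app
</code></pre>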
Jonas
<p>I've been looking for documentation for a long time and still couldn't find any clear connection procedure. I came up with this code sample:</p> <pre><code>package aws

import (
	&quot;fmt&quot;
	&quot;net/http&quot;

	&quot;github.com/aws/aws-sdk-go/aws/session&quot;
	&quot;github.com/aws/aws-sdk-go/service/eks&quot;
	&quot;github.com/joho/godotenv&quot;
)

func Connect() {
	godotenv.Load(&quot;.env&quot;)
	session := session.Must(session.NewSession())
	svc := eks.New(session)
	clusters, err := svc.ListClusters(&amp;eks.ListClustersInput{})
	if err != nil {
		fmt.Println(err.Error())
	}
	fmt.Println(clusters)
}
</code></pre> <p>I mean, this still returns a 403 Forbidden error because of env variable mess, but the code is valid, I guess. My question is: having this connection established, how do I convert this <code>svc</code> variable into the <code>*kubernetes.Clientset</code> one from the Go driver?</p>
raphael.oester
<p>Have you had a look at the <a href="https://github.com/kubernetes/client-go/blob/master/examples/in-cluster-client-configuration/main.go" rel="nofollow noreferrer">client-go example</a> on how to authenticate in-cluster?</p> <p>Code that authenticates to the Kubernetes API typically starts like this:</p> <pre><code>	// creates the in-cluster config
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}
	// creates the clientset
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}
</code></pre>
Jonas
<p>What type of edits will change a ReplicaSet and StatefulSet AGE(CreationTimeStamp)?</p> <p>I'm asking this because I noticed that</p> <ol> <li>If I change a Deployment image, a new ReplicaSet will be created.</li> <li>The old ReplicaSet continues to exist with DESIRED set to 0.</li> <li>If I change back to the previous container image, the 2 ReplicaSets don't change their age nor are recreated.</li> </ol> <p>So, what is the best way to verify if there were recent updates to a Deployment/ReplicaSet and StatefulSet?</p> <p>So far, I'm using client-go to check these resources ages:</p> <pre><code>func statefulsetCheck(namespace string, clientset *kubernetes.Clientset) bool { // get the statefulsets in the namespace statefulsets, err := clientset.AppsV1().StatefulSets(namespace).List(context.TODO(), metav1.ListOptions{}) if errors.IsNotFound(err) { log.Fatal(&quot;\nNo statefulsets in the namespace&quot;, err) } else if err != nil { log.Fatal(&quot;\nFailed to fetch statefulsets in the namespace: &quot;, err) } var stsNames []string for _, sts := range statefulsets.Items { stsNames = append(stsNames, sts.Name) } fmt.Printf(&quot;\nStatefulsets in the namespace: %v&quot;, stsNames) // check if the statefulsets are older than the 9 days for _, sts := range statefulsets.Items { stsAge := time.Since(sts.CreationTimestamp.Time) fmt.Printf(&quot;\nStatefulset %v age: %v&quot;, sts.Name, stsAge) if stsAge.Minutes() &lt; 5 { fmt.Printf(&quot;\nStatefulset %v had recent updates. Skipping...&quot;, sts.Name) return true } } return false } func replicasetCheck(namespace string, clientset *kubernetes.Clientset) bool { // get the replicasets in the namespace replicasets, err := clientset.AppsV1().ReplicaSets(namespace).List(context.TODO(), metav1.ListOptions{}) if errors.IsNotFound(err) { log.Fatal(&quot;\nNo replicasets in the namespace&quot;, err) } else if err != nil { log.Fatal(&quot;\nFailed to fetch replicasets in the namespace&quot;, err) } var rpsNames []string for _, rps := range replicasets.Items { rpsNames = append(rpsNames, rps.Name) } fmt.Printf(&quot;\nReplicasets in the namespace: %v&quot;, rpsNames) // check if the replicasets have recent updates for _, rps := range replicasets.Items { rpsAge := time.Since(rps.CreationTimestamp.Time) fmt.Printf(&quot;\nReplicaset %v age: %v&quot;, rps.Name, rpsAge) if rpsAge.Minutes() &lt; 5 { fmt.Printf(&quot;\nReplicaset %v had recent updates...&quot;, rps.Name) return true } } return false } </code></pre>
Kaio H. Cunha
<blockquote> <p>AGE(CreationTimeStamp)</p> </blockquote> <p>A resource's <code>CreationTimeStamp</code> (and thereby its age) is set when a resource is <strong>created</strong>. To change it, you must <em>delete</em> the resource and create it again.</p>
Jonas
<p>When deciding on update strategy for a kubernetes application, there is an option to use <code>Recreate</code> strategy.</p> <p>How would this be different from just uninstalling and installing the app?</p>
as.tek
<p><code>Recreate</code> strategy will delete your Pods and then add new Pods - you will get a short downtime, but on the other hand you will not use much extra resources during the upgrade.</p> <p>You typically want <code>RollingUpdate</code>, since that replaces a few Pods at a time and you can deploy stateless applications without downtime.</p>
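<p>For reference, the strategy is declared in the <code>Deployment</code> spec - a minimal sketch:</p> <pre><code>spec:
  strategy:
    type: Recreate   # the default is RollingUpdate
</code></pre> <p>This also shows the difference from uninstalling and reinstalling the app: with <code>Recreate</code>, the Deployment object itself, its labels, Services and rollout history all stay in place - only the Pods are deleted and recreated.</p>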
Jonas
<p>I mean is there a one to one or many to one relationship between pod and PVC? Can I connect two or more pods to the same PVC(persistent volume claims) without deleting or disconnecting the earlier created pods?</p>
Aniket
<blockquote> <p>Can I connect two or more pods to the same PVC(persistent volume claims) without deleting or disconnecting the earlier created pods?</p> </blockquote> <p>Yes, this works. But in practice this is a bit more complicated.</p> <p>Persistent Volumes can be created with different <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">access modes</a>. Your storage system may limit what <em>access modes</em> you can use. E.g. the access mode <code>ReadWriteMany</code> is only available in some storage systems. The access mode <code>ReadWriteOnce</code> is most commonly available.</p> <p>For multiple Pods accessing a Persistent Volume mounted with access mode <code>ReadWriteOnce</code>, they must be scheduled to the <strong>same node</strong> to concurrently access the volume.</p> <p>For multiple Pods accessing a Persistent Volume mounted with access mode <code>ReadWriteMany</code> or <code>ReadOnlyMany</code>, the Pods can be scheduled to different nodes. But in a &quot;cloud provider&quot; environment, where you use multiple Availability Zones in a Region, your Persistent Volume is typically only accessible within <strong>one Availability Zone</strong>, so you must make sure that your Pods are scheduled to the same Availability Zone. Cloud providers typically offer Regional volumes as well, but they are more expensive and you need to use a specific <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">storage class</a> for this.</p>
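<p>For reference, the access mode is requested in the <code>PersistentVolumeClaim</code>; a sketch of a claim that many pods on different nodes could share - whether it can be bound depends on your storage system supporting <code>ReadWriteMany</code>:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteMany      # e.g. NFS-backed storage or a cloud file store
  resources:
    requests:
      storage: 5Gi
</code></pre>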
Jonas
<p>For <code>kubectl describe</code> I can abbreviate a few classes of resources, for example:</p> <pre><code>po/xxx -&gt; pods/xxx rs/xxx -&gt; replicasets/xxx </code></pre> <p>Where can I find the full list?</p> <p>I'm trying to find the abbreviation for deployments.</p>
gabriel
<p>To get a full list of your resources, including their <em>shortname</em>, use:</p> <pre><code>kubectl api-resources
</code></pre> <p>e.g. Deployment has the shortname <code>deploy</code>.</p> <hr /> <p>Example output from <code>kubectl api-resources</code>:</p> <pre><code>NAME           SHORTNAMES   APIVERSION   NAMESPACED   KIND
daemonsets     ds           apps/v1      true         DaemonSet
deployments    deploy       apps/v1      true         Deployment
replicasets    rs           apps/v1      true         ReplicaSet
statefulsets   sts          apps/v1      true         StatefulSet
...
</code></pre>
Jonas
<p>If I have two services ServiceA and ServiceB. Both are of ServiceType ClusterIP, so if I understand correctly both services are not accessible from outside of the cluster.</p> <p>Do I then need to setup encryption for these services or is in-cluster-communication considered as secure?</p>
Matthias Gilch
<blockquote> <p>Do I then need to setup encryption for these services or is in-cluster-communication considered as secure?</p> </blockquote> <p>The level of security you want to use is up to you. In regulated industries, e.g. in banks, it is popular to apply a <a href="https://rads.stackoverflow.com/amzn/click/com/1491962194" rel="nofollow noreferrer">zero trust</a> security architecture, where no <strong>network</strong> is considered secure - in this case, it is common to use <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/" rel="nofollow noreferrer">mutual TLS</a> between applications within the cluster - with both <em>authentication</em>, <em>authorization</em> and <em>encryption</em>. On Kubernetes it's common to use a <em>service mesh</em> like e.g. Istio to implement this.</p> <p>In-cluster networking is typically its own local network; it is up to you to consider that secure enough for your use-case.</p> <blockquote> <p>If I have two services ServiceA and ServiceB. Both are of ServiceType ClusterIP, so if I understand correctly both services are not accessible from outside of the cluster.</p> </blockquote> <p>Commonly, yes. But it is now common to have load balancers that can route traffic to applications with Service type ClusterIP. This depends on what load balancer / Gateway you use.</p>
Jonas
<p>In kubernetes POD I have an option to mount a secret or a configmap as a volume mounted to the POD. It would be difficult to access these files as environment variables. So why should I be doing it instead of using them as environment variables?</p>
Aditya Bhuyan
<p>This depends on how the application expects to load the secret.</p> <p>E.g. if the application expects to load an <a href="https://www.tutorialsteacher.com/https/ssl-certificate-format" rel="nofollow noreferrer">SSL certificate file</a>, it is possible to have the certificate as a file in a Secret and mount the Secret so that the application can read it as a file.</p>
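<p>A sketch of mounting a Secret as files into a pod (names are placeholders; each key in the Secret becomes a file under the mount path):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: example.com/my-app:1.0.0
    volumeMounts:
    - name: tls-cert
      mountPath: /etc/tls     # files appear as /etc/tls/tls.crt etc.
      readOnly: true
  volumes:
  - name: tls-cert
    secret:
      secretName: my-tls-secret   # e.g. contains keys tls.crt and tls.key
</code></pre> <p>Another advantage of volume-mounted Secrets is that they are refreshed in the running pod when the Secret changes, whereas environment variables are fixed at container start.</p>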
Jonas
<p>Below is kubernetes POD definition</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: static-web labels: role: myrole spec: containers: - name: web image: nginx ports: - name: web containerPort: 80 protocol: TCP </code></pre> <p>as I have not specified the resources, how much Memory &amp; CPU will be allocated? Is there a kubectl to find what is allocated for the POD?</p>
One Developer
<p>If resources are not specified for the Pod, the Pod will be scheduled to any node and resources are not considered when choosing a node.</p> <p>The Pod might be &quot;terminated&quot; if it uses more memory than available or get little CPU time as Pods with specified resources will be prioritized. It is a good practice to set resources for your Pods.</p> <p>See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">Configure Quality of Service for Pods</a> - your Pod will be classified as &quot;Best Effort&quot;:</p> <blockquote> <p>For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.</p> </blockquote>
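<p>You can verify the assigned QoS class for the pod from the question like this:</p> <pre><code>kubectl get pod static-web -o jsonpath='{.status.qosClass}'
# prints: BestEffort
</code></pre>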
Jonas
<p>I have a Cassandra StatefulSet in my Kubernetes cluster with a high terminationGracePeriod to handle data handover currently.</p> <p>The problem is that when a host machine goes down, K8s waits the whole terminationGracePeriod in the termination phase before rescheduling my pod on another node.</p> <p>How can I make K8s ignore terminationGracePeriod when the node is down and reschedule pods immediately?</p>
Aref Riant
<blockquote> <p>the problem is when a host machine goes down, K8s waits the whole terminationGracePeriod in the termination phase before rescheduling my pod on another node.</p> </blockquote> <p>I think this is a wrong assumption. When a host machine goes down, the node health check is used to detect this. Typically this takes e.g. 5 minutes. Only after that are the pods scheduled to other nodes.</p> <p>See <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#condition" rel="nofollow noreferrer">Node Condition and pod eviction</a>:</p> <blockquote> <p>If the status of the Ready condition remains Unknown or False for longer than the pod-eviction-timeout (an argument passed to the kube-controller-manager), then the node controller triggers API-initiated eviction for all Pods assigned to that node. The default eviction timeout duration is <strong>five minutes</strong>.</p> </blockquote> <blockquote> <p>How can I make K8s ignore terminationGracePeriod when the node is down and reschedule pods immediately?</p> </blockquote> <p>I don't think <code>terminationGracePeriod</code> is related to this. A pod gets a <code>SIGTERM</code> to shut down; only if it hasn't successfully shut down within the whole <em>terminationGracePeriod</em> will it be killed with <code>SIGKILL</code>.</p>
Jonas
<p>I have a deployment.yaml which has a readiness probe on the container. (The readiness probe is intended to fail here)</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: nginx name: my-nginx-deployment spec: replicas: 2 selector: matchLabels: app: nginx strategy: {} template: metadata: creationTimestamp: null labels: app: nginx spec: containers: - image: nginx name: nginx readinessProbe: exec: command: - cat - /server/xyz.txt initialDelaySeconds: 50 periodSeconds: 10 resources: {} status: {} </code></pre> <p>The pods in the deployment are served using a service of type ClusterIP.</p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: nginx-service name: nginx-service spec: ports: - port: 8080 protocol: TCP targetPort: 80 selector: app: nginx type: ClusterIP status: loadBalancer: {} </code></pre> <p>after applying these yamls using <code>kubectl apply</code>, container in the pods is never ready as the readiness probe is failing, which is expected.</p> <pre><code>NAME READY STATUS RESTARTS AGE my-nginx-deployment-6b788b89c6-f69j7 0/1 Running 0 9m50s my-nginx-deployment-6b788b89c6-m5qf6 0/1 Running 0 9m50s </code></pre> <p>So since these pods are not ready, they should not serve the traffic but when I do</p> <pre><code>kubectl port-forward services/nginx-service 8086:8080 </code></pre> <p>I am able to get 200 response and nginx home page on <code>http://127.0.0.1:8086/</code> and I can also see pods logs about serving traffic.</p> <p>question is, why pods are serving traffic when readiness probe is failing.</p> <p>PS: I have created cluster on my machine using Kind</p>
saurabh_garg
<p>The <code>port-forward</code> <a href="https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#create-connect-portforward-pod-v1-core" rel="nofollow noreferrer">api</a> is for a Pod. The <code>kubectl port-forward</code> command just uses the <em>service</em> to make it easy to use, but your port is actually forwarded to a Pod - so the <em>readiness</em> status is not applicable.</p>
Jonas
<p>I have created a simple flask api with swagger integration using flask_restplus library. It is working fine in localhost. But when I use it in gcp kubernetes ingress, it is giving results for endpoints but not able to show the documentation or swagger ui. Here are the browser console errors <a href="https://i.stack.imgur.com/DPBeu.jpg" rel="nofollow noreferrer">browser console errors</a></p> <p>Here is <code>ingress.yml</code> file</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-restplustest annotations: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" kubernetes.io/ingress.global-static-ip-name: "web-static-ip" spec: rules: - http: paths: - path: /rt backend: serviceName: restplustest servicePort: 5000</code></pre> </div> </div> In local system localhost:5000/rt shows the swagger-ui</p>
HarshR
<p>Your endpoint returns a script that references other scripts located under <code>/swaggerui/*</code>, but that path is not defined in your Ingress.</p> <p>It may be solved if you add that path to your Ingress as well:</p> <pre><code> - path: /swaggerui/*
   backend:
     serviceName: restplustest
     servicePort: 5000
</code></pre>
Jonas
<p>Github repo: <a href="https://github.com/oussamabouchikhi/udagram-microservices" rel="nofollow noreferrer">https://github.com/oussamabouchikhi/udagram-microservices</a></p> <p>After I configured the kubectl with the AWS EKS cluster, I deployed the services using these commands</p> <pre><code>kubectl apply -f env-configmap.yaml kubectl apply -f env-secret.yaml kubectl apply -f aws-secret.yaml # this is repeated for all services kubectl apply -f svcname-deploymant.yaml kubectl apply -f svcname-service.yaml </code></pre> <p><a href="https://i.stack.imgur.com/Qpe14.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qpe14.png" alt="enter image description here" /></a></p> <p>But the pods took hours and still in PENDING state, and when I run the command <code>kubectl describe pod &lt;POD_NAME&gt;</code> I get the follwing info</p> <p>reverseproxy-667b78569b-2c6hv pod: <a href="https://pastebin.com/3xF04SEx" rel="nofollow noreferrer">https://pastebin.com/3xF04SEx</a> <br/> udagram-api-feed-856bbc5c45-jcgtk pod: <a href="https://pastebin.com/5UqB79tU" rel="nofollow noreferrer">https://pastebin.com/5UqB79tU</a> <br/> udagram-api-users-6fbd5cbf4f-qbmdd pod: <a href="https://pastebin.com/Hiqe1LAM" rel="nofollow noreferrer">https://pastebin.com/Hiqe1LAM</a></p>
Oussama Bouchikhi
<p>From your <code>kubectl describe pod &lt;podname&gt;</code></p> <blockquote> <p>Warning FailedScheduling 2m19s (x136 over 158m) default-scheduler 0/2 nodes are available: 2 Too many pods.</p> </blockquote> <p>When you see this, it means that the nodes in your AWS EKS cluster are full.</p> <p><strong>To solve this, you need to add more (or bigger) nodes.</strong></p> <p>You can also investigate your nodes, e.g. list your nodes with:</p> <pre><code>kubectl get nodes
</code></pre> <p>and investigate a specific node (check how many pods it has <em>capacity</em> for - and how many pods run on the node) with:</p> <pre><code>kubectl describe node &lt;node-name&gt;
</code></pre>
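<p>If you manage the cluster with <code>eksctl</code>, scaling out a nodegroup looks like this (cluster and nodegroup names are placeholders):</p> <pre><code>eksctl scale nodegroup --cluster=my-cluster --name=my-nodegroup --nodes=4
</code></pre> <p>Also note that on EKS with the default VPC CNI, the pod capacity per node is bounded by the instance type's ENI/IP limits, so bigger instance types allow more pods per node.</p>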
Jonas
<p>I've worked through <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="nofollow noreferrer">my first Kubernetes tutorial</a>.</p> <p>Near the end, this tutorial introduced the concept of &quot;add-ons&quot; -- for example, you can install the <code>metrics-server</code> add-on by running</p> <pre><code>$ minikube addons enable metrics-server </code></pre> <p>What are these add-ons? Are they a feature that's specifically related to minikube, or can they be used with any kubernetes cluster and <code>minikube addons</code> is just some syntactic sugar that points at some shell scripts that can do anything?</p> <p>Put another way -- what is happening behind the scenes when I run</p> <pre><code>$ minikube addons enable some-add-on </code></pre> <p>Are all add-ons enabled the same way (like, maybe they create a deployment?) -- or will different add-ons be installed in different ways depending on their functionality?</p> <p>I'm basically trying to understand <em>how</em> a programmer could extend kubernetes themselves. When I go looking for documentation on this I find either lists of add-ons <a href="https://kubernetes.io/docs/concepts/cluster-administration/addons/" rel="nofollow noreferrer">I can use</a> (which point to add-ons being more than a <code>minikube</code> thing), or <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/" rel="nofollow noreferrer">very broad documentation on ways to extend kubernetes</a> that don't make any mention of &quot;add-ons&quot; by name.</p>
Alana Storm
<blockquote> <p>What are these add-ons? Are they a feature that's specifically related to minikube</p> </blockquote> <p>Yes, this is specific to Minikube.</p> <p>Kubernetes is mainly a container orchestrator. It is typically installed in an environment with lots of servers, e.g. a Cloud Provider like AWS or GCP. Kubernetes does not work in isolation; it has abstractions and needs <em>real</em> infrastructure from outside.</p> <p>Some examples:</p> <ul> <li>Load Balancers where your app traffic arrives</li> <li>Virtual or physical machines to e.g. scale out your cluster with more Nodes when you want autoscaling.</li> <li>Disk volumes, either local volumes on the node, or storage via a network protocol.</li> </ul> <p>In a cloud environment like e.g. Amazon Web Services, these things would be provided with other AWS services like e.g. <a href="https://aws.amazon.com/elasticloadbalancing/" rel="nofollow noreferrer">Elastic Load Balancer</a>, <a href="https://aws.amazon.com/ec2/" rel="nofollow noreferrer">EC2</a> virtual machines or <a href="https://aws.amazon.com/ebs/" rel="nofollow noreferrer">Elastic Block Storage</a>. Other providers, like e.g. RedHat OpenShift, specialized in Kubernetes for on-prem environments, have other ways to provide these resources, e.g. via <a href="https://www.vmware.com/products/vsphere.html" rel="nofollow noreferrer">VMWare vSphere</a></p> <p><strong>Minikube</strong> is specialized for running Kubernetes on your local machine, and to allow you to use Kubernetes as it would appear in an environment with many servers, it needs to mimic those features, e.g. use your local machine for <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volumes</a>.</p> <p>You can see the Minikube add-ons with this command:</p> <pre><code>minikube addons list
</code></pre>
Jonas
<p>I have a kubernetes HPA set up in my cluster, and it works as expected scaling up and down instances of pods as the cpu/memory increases and decreases.</p> <p>The only thing is that my pods handle web requests, so it occasionally scales down a pod that's in the process of handling a web request. The web server never gets a response back from the pod that was scaled down and thus the caller of the web api gets an error back.</p> <p>This all makes sense theoretically. My question is does anyone know of a best practice way to handle this? Is there some way I can wait until all requests are processed before scaling down? Or some other way to ensure that requests complete before HPA scales down the pod?</p> <p>I can think of a few solutions, none of which I like:</p> <ol> <li>Add retry mechanism to the caller and just leave the cluster as is.</li> <li>Don't use HPA for web request pods (seems like it defeats the purpose).</li> <li>Try to create some sort of custom metric and see if I can get that metric into Kubernetes (e.x <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics</a>) </li> </ol> <p>Any suggestions would be appreciated. Thanks in advance!</p>
harbinja
<h1>Graceful shutdown of pods</h1> <p>You must design your apps to support <em>graceful shutdown</em>. First your pod will receive a <code>SIGTERM</code> signal, and after 30 seconds (can be configured) your pod will receive a <code>SIGKILL</code> signal and be removed. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">Termination of pods</a></p> <p><strong>SIGTERM</strong>: When your app receives the termination signal, your pod will stop receiving <strong>new requests</strong>, but you should try to fulfill responses for already received requests.</p> <h2>Design for idempotency</h2> <p>Your apps should also be designed for <strong>idempotency</strong> so you can safely <strong>retry</strong> failed requests.</p>
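<p>A common complement (a sketch; the sleep duration is an assumption you should tune) is a <code>preStop</code> hook that delays the <code>SIGTERM</code> slightly, giving the endpoint controllers and load balancer time to stop sending new traffic to the pod before the app starts shutting down:</p> <pre><code>containers:
- name: my-app
  image: example.com/my-app:1.0.0
  lifecycle:
    preStop:
      exec:
        command: ["sleep", "10"]   # requires a sleep binary in the image
</code></pre>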
Jonas
<p>I have an OpenShift/Tekton pipeline which in <code>Task A</code> deploys an application to a test environment. In <code>Task B</code>, the application's test suite is run. If all tests pass, then the application is deployed to another environment in <code>Task C</code>.</p> <p>The problem is that <code>Task A</code>'s pod is deployed (with <code>oc apply -f &lt;deployment&gt;</code>), and before the pod is actually ready to receive requests, <code>Task B</code> starts running the test suite, and all the tests fail (because it can't reach the endpoints defined in the test cases).</p> <p>Is there an elegant way to make sure the pod from <code>Task A</code> is ready to receive requests, before starting the execution of <code>Task B</code>? One solution I have seen is to do HTTP GET requests against a health endpoint until you get a HTTP 200 response. We have quite a few applications which do not expose HTTP endpoints, so is there a more &quot;generic&quot; way to make sure the pod is ready? Can I for example query for a specific record in <code>Task A</code>'s log? There is a log statement which always shows when the pod is ready to receive traffic.</p> <p>If it's of any interest, here is the definition for <code>Task A</code>:</p> <pre><code>apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: create-and-deploy-integration-server spec: params: - name: release-name type: string description: The release name of the application - name: image-name type: string description: The name of the container image to use - name: namespace type: string description: The namespace (OCP project) the application resides in - name: commit-id type: string description: The commit hash identifier for the current HEAD commit steps: - name: create-is-manifest image: image-registry.openshift-image-registry.svc:5000/openshift/origin-cli script: | echo &quot;Creating IntegrationServer manifest&quot; cat &lt;&lt; EOF &gt; integrationserver.yaml apiVersion: appconnect.ibm.com/v1beta1 kind: IntegrationServer metadata: name: $(params.release-name) namespace: $(params.namespace) spec: license: accept: true license: L-KSBM-BZWEAT use: CloudPakForIntegrationNonProduction pod: containers: runtime: image: image-registry.openshift-image-registry.svc:5000/$(params.namespace)/$(params.image-name)-$(params.commit-id) imagePullPolicy: Always resources: limits: cpu: 500m memory: 500Mi requests: cpu: 300m memory: 300Mi adminServerSecure: true router: timeout: 120s designerFlowsOperationMode: disabled service: endpointType: http version: 11.0.0.11-r2 replicas: 1 barURL: '' EOF - name: deploy-is-manifest image: image-registry.openshift-image-registry.svc:5000/openshift/origin-cli script: | echo &quot;Applying IntegrationServer manifest to OpenShift cluster&quot; oc apply -f integrationserver.yaml </code></pre>
Andreas Bradahl
<p>After your <em>step</em> that does <code>oc apply</code>, you can add a step to wait for the deployment to become &quot;available&quot;. This is for <code>kubectl</code> but should work the same way with <code>oc</code>:</p> <pre><code>kubectl wait --for=condition=available --timeout=60s deployment/myapp </code></pre> <p>Then the next Task can depend on this Task with <code>runAfter: [&quot;create-and-deploy-integration-server&quot;]</code>.</p>
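<p>As an extra step in your Task, this could look like the sketch below. The resource name is an assumption: the operator creates the workload from the IntegrationServer CR, so verify what the resulting Deployment is actually called in your cluster (or wait on the CR's own status conditions instead):</p> <pre><code>- name: wait-for-deployment
  image: image-registry.openshift-image-registry.svc:5000/openshift/origin-cli
  script: |
    echo &quot;Waiting for the deployment to become available&quot;
    # Assumes the operator names the Deployment after the release
    oc wait --for=condition=available --timeout=120s \
      deployment/$(params.release-name) -n $(params.namespace)
</code></pre>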
Jonas
<p>I have built a self-service platform based on Kubernetes, where we create namespaces for each team and allow them to 'do whatever they want within the namespace' (we set resource limits so no one can kill the whole cluster).</p> <p>However, now I want to implement some kind of standard across the organization. For example, I want every PodSpec to define its own resource limits, and I want every resource to have a label that specifies what application it belongs to.</p> <p>Is there a mechanism that will allow the API server to check the manifests being applied against a set of rules, and reject the manifest if it fails the check?</p> <p>For example, the following manifest would be rejected because it has neither a label nor resource limits set.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
</code></pre> <p>But the following manifest would succeed because it satisfies all the rules:</p> <pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  <b>labels:</b>
    <b>app: foobar</b>
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        <b>resources:</b>
          <b>limits:</b>
            <b>cpu: "1"</b>
          <b>requests:</b>
            <b>cpu: "0.5"</b>
</pre>
dayuloli
<blockquote> <p>Is there a mechanism that will allow the API server to check the manifests being applied against a set of rules, and reject the manifest if it fails the check?</p> </blockquote> <p>In general, this may be solved by a <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">custom admission controller</a>, or alternatively by a custom proxy. It depends on your needs and may not be so easy.</p> <h2>Resource limits by namespace</h2> <blockquote> <p>we create namespaces for each team and allow them to 'do whatever they want within the namespace' (we set resource limits so no one can kill the whole cluster).</p> <p>I want every PodSpec to define its own resource limits</p> </blockquote> <p>What you are looking for here is probably <a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">Limit Ranges</a> per namespace, and possibly default values.</p> <blockquote> <p>With Resource quotas, cluster administrators can restrict the resource consumption and creation on a namespace basis. Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace’s resource quota. There is a concern that one Pod or Container could monopolize all of the resources. Limit Range is a policy to constrain resource by Pod or Container in a namespace.</p> </blockquote> <h2>Mandatory labels</h2> <p>As far as I know, this is not possible <a href="https://github.com/kubernetes/kubernetes/issues/15390" rel="nofollow noreferrer">yet</a>.</p>
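<p>A minimal sketch of a LimitRange that gives containers default CPU limits and requests when they don't declare their own (the namespace name and the values are illustrative assumptions):</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults
  namespace: team-a
spec:
  limits:
  - type: Container
    # Applied as the limit when a container declares none
    default:
      cpu: &quot;1&quot;
    # Applied as the request when a container declares none
    defaultRequest:
      cpu: 500m
</code></pre>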
Jonas
<p>I have a pod in Kubernetes with a CPU request of 100m and a CPU limit of 4000m.</p> <p>The application spins up 500+ threads and works fine. The application becomes unresponsive during heavy load, which could be because of a thread limit issue.</p> <p><strong>Question:</strong></p> <p>The number of threads is related to the number of CPUs.</p> <p>Since the CPU request is 100m, will there be any problem with thread limits, or can the pod still spin up more threads since the limit is 4000m?</p>
user1578872
<blockquote> <p>Since the CPU request is 100m, will there be any problem with thread limits, or can the pod still spin up more threads since the limit is 4000m?</p> </blockquote> <p>The CPU limit is 4000m (4 cores), so the pod can spin up as many threads as it wants and drive CPU utilization up to 4 cores.</p> <p>The CPU request of 100m is mostly used for Pod scheduling, so your pod might end up on a node with few resources compared to your limit, and it might be evicted if available CPU resources on the node become scarce.</p>
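<p>For reference, this is what that combination looks like in a container spec, a sketch using the values from your question (pod name and image are illustrative):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:1.0   # illustrative image
    resources:
      requests:
        cpu: 100m      # used by the scheduler to place the pod
      limits:
        cpu: &quot;4&quot;       # 4000m; hard ceiling enforced at runtime
</code></pre>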
Jonas
<p>Does Kubernetes have a way of reusing manifests without copying and pasting them? Something akin to Terraform templates.</p> <p>Is there a way of passing values between manifests?</p> <p>I am looking to deploy the same service to multiple environments and wanted a way to call the necessary manifest and pass in the environment-specific values.</p> <p>I'd also like to do something like:</p> <p><strong>Generic-service.yaml</strong></p> <pre><code>Name={variablename}
</code></pre> <p><strong>Foo-service.yaml</strong></p> <pre><code>Use=Generic-service.yaml
variablename=foo-service-api
</code></pre> <p>Any guidance is appreciated.</p>
Confounder
<p><a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a>, now part of <code>kubectl apply -k</code> is a way to <em>parameterize</em> your Kubernetes manifests files.</p> <p>With Kustomize, you have a <em>base manifest</em> file (e.g. of <code>Deployment</code>) and then multiple <em>overlay</em> directories for parameters e.g. for <em>test</em>, <em>qa</em> and <em>prod</em> environment.</p> <p>I would recommend to have a look at <a href="https://speakerdeck.com/spesnova/introduction-to-kustomize" rel="nofollow noreferrer">Introduction to kustomize</a>.</p> <p>Before Kustomize it was common to use Helm for this.</p>
Jonas
<p>I'm new to Kubernetes (K8s). It's my understanding that in order to &quot;do things&quot; in a Kubernetes cluster, we interact with a Kubernetes REST API endpoint and create/update/delete objects. When these objects are created/updated/deleted, K8s will see those changes and take steps to bring the system in line with the state of your objects.</p> <p>In other words, you tell K8s you want a &quot;deployment object&quot; with container image <code>foo/bar</code> and 10 replicas and K8s will create 10 running pods with the <code>foo/bar</code> image. If you update the deployment to say you want 20 replicas, K8s will start more pods.</p> <p>My Question: Is there a canonical description of all the possible configuration fields for these objects? That is -- tutorials like <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">this one</a> do a good job of describing the simplest possible configuration to get an object like a deployment working, but now I'm curious what else it's possible to do with deployments that go beyond these hello world examples.</p>
Alana Storm
<blockquote> <p>Is there a canonical description of all the possible configuration fields for these objects?</p> </blockquote> <p>Yes, there is the <a href="https://kubernetes.io/docs/reference/kubernetes-api/" rel="noreferrer">Kubernetes API reference</a> e.g. for <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/" rel="noreferrer">Deployment</a>.</p> <p>But when developing, the easiest way is to use <code>kubectl explain &lt;resource&gt;</code> and navigate deeper, e.g:</p> <pre><code>kubectl explain Deployment.spec </code></pre> <p>and then deeper, e.g:</p> <pre><code>kubectl explain Deployment.spec.template </code></pre>
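<p>Tip: to print the whole field tree for a resource at once, <code>kubectl explain</code> also takes a <code>--recursive</code> flag:</p> <pre><code>kubectl explain deployment --recursive
</code></pre>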
Jonas
<p>I have a main pod that makes Kubernetes API calls to deploy other pods (similar to the code below). It works fine. Now, I don't want to use the config file. I know it's possible with a service account. <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/</a>. How do I configure a service account (e.g. the default service account) that allows my pod to access the APIs?</p> <pre><code>public class KubeConfigFileClientExample {
  public static void main(String[] args) throws IOException, ApiException {

    // file path to your KubeConfig
    String kubeConfigPath = &quot;~/.kube/config&quot;;

    // loading the out-of-cluster config, a kubeconfig from file-system
    ApiClient client = ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();

    // set the global default api-client to the in-cluster one from above
    Configuration.setDefaultApiClient(client);

    // the CoreV1Api loads default api-client from global configuration.
    CoreV1Api api = new CoreV1Api();

    // invokes the CoreV1Api client
    V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
    System.out.println(&quot;Listing all pods: &quot;);
    for (V1Pod item : list.getItems()) {
      System.out.println(item.getMetadata().getName());
    }
  }
}
</code></pre>
Kevin N
<p>The official Java client has an <a href="https://github.com/kubernetes-client/java/blob/master/examples/examples-release-13/src/main/java/io/kubernetes/client/examples/InClusterClientExample.java" rel="nofollow noreferrer">in-cluster client example</a>.</p> <p>It is quite similar to your code; you need to use a different <em>ClientBuilder</em>:</p> <pre><code>ApiClient client = ClientBuilder.cluster().build();
</code></pre> <p>and use it like this:</p> <pre><code>// loading the in-cluster config, including:
//   1. service-account CA
//   2. service-account bearer-token
//   3. service-account namespace
//   4. master endpoints(ip, port) from pre-set environment variables
ApiClient client = ClientBuilder.cluster().build();

// set the global default api-client to the in-cluster one from above
Configuration.setDefaultApiClient(client);

// the CoreV1Api loads default api-client from global configuration.
CoreV1Api api = new CoreV1Api();
</code></pre>
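<p>The service account the pod runs as also needs RBAC permissions for the calls you make. A minimal sketch granting the <code>default</code> service account in the <code>default</code> namespace permission to list pods cluster-wide (the role names are illustrative; scope it down to a namespaced <code>Role</code>/<code>RoleBinding</code> if you can):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;]
  verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>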
Jonas
<p>I am currently switching from Service Fabric to Kubernetes and was wondering how to do custom and more complex load balancing.</p> <p>So far I have read about Kubernetes offering "Services" which do load balancing for pods hidden behind them, but this only covers fairly basic scenarios.</p> <p>What I want to rewrite right now looks like the following in Service Fabric:</p> <p>I have this interface: </p> <pre><code>public interface IEndpointSelector {
    int HashableIdentifier { get; }
}
</code></pre> <p>A context keeping track of the account in my ASP.Net application e.g. inherits this. Then, I wrote some code which currently does service discovery through the Service Fabric cluster API, keeping track of all services and updating them when any instances die or are respawned.</p> <p>Then, based on the deterministic nature of this identifier (due to the context being cached etc.) and given multiple replicas of the target service of a frontend -> backend call, I can reliably route traffic for a certain account to a certain endpoint instance.</p> <p>Now, how would I go about doing this in Kubernetes?</p> <p>As I already mentioned, I found "Services", but it seems like their load balancing does not support custom logic and is rather only useful when working with stateless instances.</p> <p>Is there also a way to have service discovery in Kubernetes which I could use here to replace my existing code at some points?</p>
Sossenbinder
<h1>StatefulSet</h1> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSet</a> is a <em>building block</em> for stateful workloads on Kubernetes with certain guarantees.</p> <h2>Stable and unique network identity</h2> <blockquote> <p>StatefulSet Pods have a unique identity that is comprised of an ordinal, a stable network identity, and stable storage.</p> </blockquote> <p>As an example, if your StatefulSet has the name <code>sharded-svc</code></p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sharded-svc
</code></pre> <p>And you have e.g. 3 replicas, those will be named by <code>&lt;name&gt;-&lt;ordinal&gt;</code> where <em>ordinal</em> starts from 0 up to replicas-1.</p> <p>The name of your pods will be:</p> <pre><code>sharded-svc-0
sharded-svc-1
sharded-svc-2
</code></pre> <p>and those pods can be reached with a dns-name:</p> <pre><code>sharded-svc-0.sharded-svc.your-namespace.svc.cluster.local
sharded-svc-1.sharded-svc.your-namespace.svc.cluster.local
sharded-svc-2.sharded-svc.your-namespace.svc.cluster.local
</code></pre> <p>given that your <em>Headless Service</em> is named <code>sharded-svc</code> and you deploy it in namespace <code>your-namespace</code>.</p> <h1>Sharding or Partitioning</h1> <blockquote> <p>given multiple replicas of the target service of a frontend -> backend call, I can reliably route traffic for a certain account to a certain endpoint instance.</p> </blockquote> <p>What you describe here is that your stateful service is what is called <em>sharded</em> or <em>partitioned</em>. This does not come out of the box from Kubernetes, but you have all the needed <em>building blocks</em> for this kind of service. <em>A 3rd-party service providing this feature may already exist for you to deploy, or it can be developed.</em></p> <h2>Sharding Proxy</h2> <p>You can create a service <code>sharding-proxy</code> consisting of one or more pods (possibly from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> since it can be stateless). This app needs to watch the pods/service/<a href="https://stackoverflow.com/questions/52857825/what-is-an-endpoint-in-kubernetes">endpoints</a> in your <code>sharded-svc</code> to know where it can route traffic. This can be developed using <a href="https://github.com/kubernetes/client-go" rel="noreferrer">client-go</a> or other alternatives.</p> <p>This service implements the logic you want in your sharding, e.g. <em>account-nr</em> modulus 3 is routed to the corresponding pod <em>ordinal</em>.</p> <p><strong>Update:</strong> There are 3rd-party proxies with <strong>sharding</strong> functionality, e.g. <a href="https://github.com/gojek/weaver" rel="noreferrer">Weaver Proxy</a></p> <blockquote> <p>Sharding request based on headers/path/body fields</p> </blockquote> <p>Recommended reading: <a href="https://medium.com/@rbshetty/weaver-proxying-at-scale-b3b8b425a58e" rel="noreferrer">Weaver: Sharding with simplicity</a></p> <h1>Consuming sharded service</h1> <p>To consume your sharded service, the clients send requests to your <code>sharding-proxy</code>, which then applies your <em>routing</em> or <em>sharding logic</em> (e.g. a request with <em>account-nr</em> modulus 3 is routed to the corresponding pod <em>ordinal</em>) and forwards the request to <em>the replica</em> of <code>sharded-svc</code> that matches your logic.</p> <h1>Alternative Solutions</h1> <p><strong>Directory Service:</strong> It is probably easier to implement <code>sharding-proxy</code> as a <em>directory service</em>, but it depends on your requirements. The clients can ask your <em>directory service</em> which StatefulSet replica should receive <em>account-nr X</em>, and your service replies with e.g. <code>sharded-svc-2</code>.</p> <p><strong>Routing logic in client:</strong> Probably the easiest solution is to have the <em>routing logic</em> in the client, and let this logic calculate which StatefulSet replica to send the request to.</p>
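<p>For completeness, the <em>Headless Service</em> referenced above is an ordinary Service with <code>clusterIP: None</code>, and the StatefulSet points to it via <code>serviceName</code>. A minimal sketch (port, labels and image are illustrative assumptions):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: sharded-svc
spec:
  clusterIP: None   # headless: gives each pod its own DNS record
  selector:
    app: sharded-svc
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sharded-svc
spec:
  serviceName: sharded-svc
  replicas: 3
  selector:
    matchLabels:
      app: sharded-svc
  template:
    metadata:
      labels:
        app: sharded-svc
    spec:
      containers:
      - name: app
        image: your-app:1.0   # placeholder image
</code></pre>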
Jonas
<p>I am using the ECK <a href="https://github.com/elastic/cloud-on-k8s" rel="nofollow noreferrer">operator</a> to create an <code>Elasticsearch</code> instance.</p> <p>The instance uses a <code>StorageClass</code> that has <code>Retain</code> (instead of <code>Delete</code>) as its reclaim policy.</p> <p>Here are my <code>PVC</code>s <strong>before</strong> deleting the <code>Elasticsearch</code> instance</p> <pre><code>▶ k get pvc
NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
elasticsearch-data--es-multirolenodes1-0   Bound    pvc-ba157213-67cf-4b81-8fe2-6211b771e62c   20Gi       RWO            balanced-retain-csi   8m15s
elasticsearch-data--es-multirolenodes1-1   Bound    pvc-e77dbb00-7cad-419f-953e-f3398e3860f4   20Gi       RWO            balanced-retain-csi   7m11s
elasticsearch-data--es-multirolenodes1-2   Bound    pvc-b258821b-0d93-4ea3-8bf1-db590b93adfd   20Gi       RWO            balanced-retain-csi   6m5s
</code></pre> <p>I deleted and re-created the <code>Elasticsearch</code> instance with the hope that, due to the <code>Retain</code> policy, the new pods' <code>PVC</code>s would bind to the existing <code>PV</code>s (and data wouldn't get lost).</p> <p>However, now my pods of the <code>nodeSet</code> are all in a pending state with this error</p> <pre><code>Events:
  Type     Reason             Age                  From                Message
  ----     ------             ----                 ----                -------
  Warning  FailedScheduling   2m37s                default-scheduler   persistentvolumeclaim &quot;elasticsearch-data--es-multirolenodes1-0&quot; is being deleted
  Normal   NotTriggerScaleUp  2m32s                cluster-autoscaler  pod didn't trigger scale-up: 2 persistentvolumeclaim &quot;elasticsearch-data--es-multirolenodes1-0&quot; not found
  Warning  FailedScheduling   12s (x7 over 2m37s)  default-scheduler   persistentvolumeclaim &quot;elasticsearch-data--es-multirolenodes1-0&quot; not found
</code></pre> <p>Why is this happening?</p> <p><strong>edit</strong>: Here are the corresponding <code>PV</code>s</p> <pre><code>▶ k get pv
pvc-b258821b-0d93-4ea3-8bf1-db590b93adfd   20Gi   RWO   Retain   Released   elastic/elasticsearch-data--es-multirolenodes1-2   balanced-retain-csi   20m
pvc-ba157213-67cf-4b81-8fe2-6211b771e62c   20Gi   RWO   Retain   Released   elastic/elasticsearch-data--es-multirolenodes1-0   balanced-retain-csi   22m
pvc-e77dbb00-7cad-419f-953e-f3398e3860f4   20Gi   RWO   Retain   Released   elastic/elasticsearch-data--es-multirolenodes1-1   balanced-retain-csi   21m
</code></pre> <p>There is of course no <code>PVC</code> now</p> <pre><code>▶ k get pvc
No resources found in elastic namespace.
</code></pre> <p>The <code>StorageClass</code> under consideration is using the <code>csi</code> driver for the GCP persistent disk, fwiw</p>
pkaramol
<blockquote> <p>with the hope that, due to the Retain policy, the new pods' PVCs would bind to the existing PVs (and data wouldn't get lost)</p> </blockquote> <p>It is explicitly written in the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain" rel="nofollow noreferrer">documentation</a> that this is not what happens: the PVs are <strong>not</strong> available for another PVC after the old PVC is deleted.</p> <blockquote> <p>the PersistentVolume still exists and the volume is considered &quot;released&quot;. But it is not yet available for another claim because the previous claimant's data remains on the volume.</p> </blockquote>
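<p>If you are sure you want to re-bind such a <em>Released</em> volume (and accept the risk around the old data), one common approach is to clear the volume's <code>claimRef</code> so the PV becomes <em>Available</em> again. A sketch, using one of the PV names from your output:</p> <pre><code>kubectl patch pv pvc-ba157213-67cf-4b81-8fe2-6211b771e62c \
  --type json -p '[{&quot;op&quot;: &quot;remove&quot;, &quot;path&quot;: &quot;/spec/claimRef&quot;}]'
</code></pre> <p>After that, a new PVC matching the PV's size, access mode and StorageClass can bind to it. Verify this against your data-safety requirements before using it in anger.</p>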
Jonas
<p>In kubernetes we can set limits and requests for cpu. If the container exceeds the limit, from my understanding it will be throttled. However if the container exceeds the requested but is still under the limit what would happen?</p> <p>Would I see performance issues on my application?</p>
user2962698
<blockquote> <p>However if the container exceeds the requested but is still under the limit what would happen?</p> </blockquote> <p>Nothing happens. The resource request is used for scheduling your pod to a node with enough capacity.</p> <p>If resources on the node become scarce, pods using more than they requested are the first candidates for eviction. Note that this mostly concerns incompressible resources like memory; a container exceeding its CPU request simply gets a smaller share of CPU under contention instead of being evicted.</p>
Jonas
<p>According to <a href="https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/" rel="nofollow noreferrer">Control Plane-Node Communication</a>,</p> <p>the only way to communicate securely over an insecure network would be apiserver-to-kubelet communication using the certificate-authority parameter or SSH tunnels; the other ways mentioned could not be used in production without risk. Is this correct?</p>
Sergio Barrientos
<p>As I read it, it is possible to use the <code>--kubelet-certificate-authority</code> flag to provide a root certificate bundle.</p> <p>But this is a narrow use case that you could perhaps avoid in a production environment.</p> <blockquote> <p>To verify this connection, use the --kubelet-certificate-authority flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate.</p> </blockquote> <p>From <a href="https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/" rel="nofollow noreferrer">apiserver to kubelet docs</a></p>
Jonas
<p>I'm completely new to Kubernetes and a bit lost as to where to search. I would like to have blue-green deployment for a web application solution. I've been told that the blue pods are destroyed when there is no user session associated with them anymore. Is this right? On some web pages I'm reading that there is a flip between one and the other. Is it mandatory to use sessions? In my case I've got a stateless application.</p>
Elena
<p><strong>Blue Green Deployment</strong></p> <p>Blue-green deployment is not a standard feature in Kubernetes. That means there are many different 3rd-party products and patterns for this, and they all differ in <strong>how</strong> they do it.</p> <p><strong>Example:</strong> <a href="https://kubernetes.io/blog/2018/04/30/zero-downtime-deployment-kubernetes-jenkins/" rel="nofollow noreferrer">Zero-downtime Deployment in Kubernetes with Jenkins</a> uses two <code>Deployment</code>s with different <code>labels</code> and <em>updates</em> the <code>Service</code> to point to the other one for <em>switching</em>. It is not the easiest strategy to get right.</p> <p><strong>Stateless</strong></p> <blockquote> <p>In my case I've got a stateless application</p> </blockquote> <p>This is great! With a <strong>stateless</strong> app, it is much easier to get the deployment strategy you want.</p> <p><strong>Default Deployment Strategy</strong></p> <p>The default deployment strategy for <code>Deployment</code> (stateless workload) is <em>Rolling Deployment</em>, and if that fits your needs, it is the easiest deployment strategy to use.</p>
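<p>To make the label-switching pattern concrete, here is a minimal sketch (names, labels and image tags are illustrative assumptions). You run two Deployments, e.g. <code>myapp-blue</code> and <code>myapp-green</code>, and the Service's selector decides which one receives traffic:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    color: blue    # flip to &quot;green&quot; to switch traffic
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      color: green
  template:
    metadata:
      labels:
        app: myapp
        color: green
    spec:
      containers:
      - name: myapp
        image: myapp:2.0   # the new version
</code></pre> <p>The blue Deployment is identical apart from <code>color: blue</code> and the old image tag. Switching is then a single patch of the Service selector, e.g. <code>kubectl patch service myapp -p '{&quot;spec&quot;:{&quot;selector&quot;:{&quot;app&quot;:&quot;myapp&quot;,&quot;color&quot;:&quot;green&quot;}}}'</code>.</p>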
Jonas
<p>I learnt that Kubernetes running via Minikube or kind (that's what I'm using) does not have a Load Balancer. That functionality comes from cloud providers. However, when I created a simple deployment with 3 replicas and a service:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create deployment kiada --image=luksa/kiada:0.1
kubectl scale deployment kiada --replicas=3
kubectl expose deployment kiada --type=LoadBalancer --port 8080
</code></pre> <p>I am able to reach different pods via :8080.</p> <p>My local cluster has 2 worker nodes. When I hit :8080 I sometimes get a response from a pod running on worker-1, and sometimes I get a response from another pod running on node worker-2. Isn't that load balancing?</p> <p>With that, I do not understand why it is said that Kubernetes does not provide load balancing by itself, since I can see that it clearly does.</p>
mnj
<p>A Kubernetes Service will load balance requests to any of the Pods matching the labels specified in the Service. This in-cluster balancing is implemented by kube-proxy and works across nodes, which is exactly what you are observing.</p> <p>Don't mix this up with <code>type: LoadBalancer</code>, which is a way to <strong>expose</strong> your Service using a <em>Cloud Load Balancer</em>, typically with an <em>external IP address</em>. On a local cluster like kind, that external load balancer is never provisioned, but the Service's internal load balancing still works.</p>
Jonas
<p>I'm new to Kubernetes and wondering, if there's a <code>kubectl</code> command to figure out what namespace I'm currently working in?</p> <p>Running the <code>kubectl get ns</code> command prints out all the namespaces but doesn't show which one I'm in at present.</p>
Metro
<p>You want to inspect the local config for <code>kubectl</code> and see the current context. This shows your current context with its namespace.</p> <pre><code>kubectl config get-contexts </code></pre> <p>Example output - there can also be multiple clusters, but only one &quot;current&quot;:</p> <pre><code>$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube   default
</code></pre>
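<p>If you only want the namespace name itself, this one-liner prints it (empty output means the <code>default</code> namespace, since no namespace is set in the context):</p> <pre><code>kubectl config view --minify --output 'jsonpath={..namespace}'
</code></pre>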
Jonas
<p>I tried running RabbitMQ following the book Kubernetes for Developers (page 180): <strong>rabbitmq.yml</strong></p> <pre><code>---
# EXPORT SERVICE INTERFACE
kind: Service
apiVersion: v1
metadata:
  name: message-queue
  labels:
    app: rabbitmq
    role: master
    tier: queue
spec:
  ports:
  - port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq
    role: master
    tier: queue
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-pv-claim
  labels:
    app: rabbitmq
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
      role: master
      tier: queue
  template:
    metadata:
      labels:
        app: rabbitmq
        role: master
        tier: queue
    spec:
      containers:
      - name: rabbitmq
        image: bitnami/rabbitmq:3.7
        envFrom:
        - configMapRef:
            name: bitnami-rabbitmq-config
        ports:
        - name: queue
          containerPort: 5672
        - name: queue-mgmt
          containerPort: 15672
        livenessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 120
          timeoutSeconds: 5
          failureThreshold: 6
        readinessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 10
          timeoutSeconds: 3
          periodSeconds: 5
        volumeMounts:
        - name: rabbitmq-storage
          mountPath: /bitnami
      volumes:
      - name: rabbitmq-storage
        persistentVolumeClaim:
          claimName: rabbitmq-pv-claim
</code></pre> <p>The pod stays Pending; here is the describe output:</p> <pre><code># kubectl describe pod rabbitmq-5499d4b67d-cdlb8
Name:           rabbitmq-5499d4b67d-cdlb8
Namespace:      default
Priority:       0
Node:           &lt;none&gt;
Labels:         app=rabbitmq
                pod-template-hash=5499d4b67d
                role=master
                tier=queue
Annotations:    &lt;none&gt;
Status:         Pending
IP:
IPs:            &lt;none&gt;
Controlled By:  ReplicaSet/rabbitmq-5499d4b67d
Containers:
  rabbitmq:
    Image:       bitnami/rabbitmq:3.7
    Ports:       5672/TCP, 15672/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    exec [rabbitmqctl status] delay=120s timeout=5s period=10s #success=1 #failure=6
    Readiness:   exec [rabbitmqctl status] delay=10s timeout=3s period=5s #success=1 #failure=3
    Environment Variables from:
      bitnami-rabbitmq-config  ConfigMap  Optional: false
    Environment:  &lt;none&gt;
    Mounts:
      /bitnami from rabbitmq-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xh899 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  rabbitmq-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  rabbitmq-pv-claim
    ReadOnly:   false
  default-token-xh899:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xh899
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  28s (x13 over 12m)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
</code></pre>
Ciasto piekarz
<blockquote> <p>0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.</p> </blockquote> <p>The pod cannot be scheduled because the PVC that it is using is not &quot;bound&quot;. You need to investigate why the PVC is not bound; it is likely something related to the storage system that you use.</p>
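<p>To investigate, start with the PVC's events and check that the cluster actually has a StorageClass (ideally a default one) that can provision volumes:</p> <pre><code>kubectl describe pvc rabbitmq-pv-claim
kubectl get storageclass
kubectl get pv
</code></pre> <p>The events from <code>describe pvc</code> usually state directly why no volume is bound, e.g. that no default StorageClass is set or that provisioning failed.</p>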
Jonas
<p>I'm currently writing the manifests for a few services in my home server that require persistent storage. I want to use PVs and PVCs. Do I create one single big PV and share that among all services? Or is it a 1:1 relation between PVCs and PVs?</p> <p>I'm not asking about the different between PVs and PVCs. This has already been answered on Stack Overflow. For example <a href="https://stackoverflow.com/questions/48956049/what-is-the-difference-between-persistent-volume-pv-and-persistent-volume-clai">here</a>.</p>
trallnag
<p>It is a one-to-one relationship.</p> <p>You can have many PVs in your environment. A specific PVC is a <strong>claim</strong> for a specific instance that matches your requested criteria, e.g. size and type. The volume will be claimed and hold your data as long as your PVC resource exists in your cluster, but if you delete your PVC, the data might be lost.</p> <p>From <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a <strong>one-to-one mapping</strong>, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.</p> </blockquote>
Jonas