Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have two APIs on the same cluster, and when I run <code>kubectl get services</code> I get the following:</p>
<pre><code>dh-service ClusterIP 10.233.48.45 <none> 15012/TCP 70d
api-service ClusterIP 10.233.54.208 <none> 15012/TCP
</code></pre>
<p>Now I want to make an API call from one API to the other. When I do it using the Ingress address for the two images, I get 404 Not Found.</p>
<p>What address should I use for my POST calls? Will the cluster IP work?</p>
| Mr.Gomer | <blockquote>
<p>I want to make an API call from one API to the other</p>
</blockquote>
<p>If they are in the same namespace and you use http, you can use:</p>
<pre><code>http://dh-service
http://api-service
</code></pre>
<p>to access them.</p>
<p>If the <code>api-service</code> is located in a different namespace, e.g. <code>blue-namespace</code>, you can access it with:</p>
<pre><code>http://api-service.blue-namespace
</code></pre>
<p>See more on <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a></p>
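<p>For completeness (assuming the default <code>cluster.local</code> cluster domain and the port shown in the question), the fully-qualified form would be:</p>
<pre><code>http://api-service.blue-namespace.svc.cluster.local:15012
</code></pre>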
| Jonas |
<p>I am configuring a statefulset deploying 2 Jira DataCenter nodes. The statefulset results in 2 pods. Everything seems fine until the 2 pods try to connect to each other. They do this with their <strong>short hostnames</strong>, <em>jira-0</em> and <em>jira-1</em>.</p>
<p>The jira-1 pod reports <em>UnknownHostException</em> when connecting to jira-0. The hostname can not be resolved.</p>
<p>I read about adding a <strong>headless service</strong> which I didn't have yet. After adding that I can resolve the FQDN but still no luck for the short name.</p>
<p>Then I read this page: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a> and added:</p>
<pre><code>  dnsConfig:
    searches:
      - jira.default.svc.cluster.local
</code></pre>
<p>That solves my issue but I think it shouldn't be necessary to add this?</p>
<p>Some extra info:</p>
<ul>
<li>Cluster on AKS with CoreDNS</li>
<li>Kubernetes v1.19.9</li>
<li>Network plugin: Kubenet</li>
<li>Network policy: none</li>
</ul>
<p>My full yaml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: jira
  labels:
    app: jira
spec:
  clusterIP: None
  selector:
    app: jira
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jira
spec:
  serviceName: jira
  replicas: 2
  selector:
    matchLabels:
      app: jira
  template:
    metadata:
      labels:
        app: jira
    spec:
      containers:
        - name: jira
          image: atlassian/jira-software:8.12.2-jdk11
          readinessProbe:
            httpGet:
              path: /jira/status
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /jira/
              port: 8080
            initialDelaySeconds: 600
            periodSeconds: 10
          envFrom:
            - configMapRef:
                name: jira-config
          ports:
            - containerPort: 8080
      dnsConfig:
        searches:
          - jira.default.svc.cluster.local
</code></pre>
| Charlie | <blockquote>
<p>That solves my issue but I think it shouldn't be necessary to add this?</p>
</blockquote>
<p>From the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet documentation</a>:</p>
<blockquote>
<p>StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. <strong>You are responsible for creating this Service.</strong></p>
</blockquote>
<blockquote>
<p>The example above will create three Pods named web-0,web-1,web-2. A StatefulSet can use a Headless Service to control the domain of its Pods.</p>
</blockquote>
<p>The pod identity will be a subdomain of the <em>governing service</em>, e.g. in your case:</p>
<pre><code>jira-0.jira.default.svc.cluster.local
jira-1.jira.default.svc.cluster.local
</code></pre>
| Jonas |
<p>We have defined our YAML with:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      name: mypod
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: aksshare
        readOnly: false
</code></pre>
<p>and before the deployment we will create the secret with a <code>kubectl</code> command:</p>
<pre><code>$AKS_PERS_STORAGE_ACCOUNT_NAME
$STORAGE_KEY
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
--from-literal=azurestorageaccountkey=$STORAGE_KEY
</code></pre>
<p>We already have that existing file share as Azure File Share resource and we have file stored in it.</p>
<p>I am confused about whether we also need to manage and define YAMLs for
<code>kind: PersistentVolume</code>
and
<code>kind: PersistentVolumeClaim</code>,</p>
<p>or whether the above YAML is completely enough.
Are PV and PVC required only if we do not have our file share already created on Azure?
I've read the docs <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a> but still feel confused about when they need to be defined and when it is OK not to use them at all during the overall deployment process.</p>
| vel | <p>Your Pod YAML is OK.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Kubernetes Persistent Volumes</a> are a newer abstraction. If your application instead uses a <code>PersistentVolumeClaim</code>, it is <strong>decoupled</strong> from the type of storage you use (in your case Azure File Share), so your app can be deployed to e.g. AWS or Google Cloud or Minikube on your desktop without any changes. Your cluster needs to have some support for <code>PersistentVolumes</code>, and that part can be tied to a specific storage system.</p>
<p>So, to decouple your app yaml from specific infrastructure, it is better to use <code>PersistentVolumeClaims</code>.</p>
<h2>Persistent Volume Example</h2>
<p><em>I don't know about Azure File Share, but there is good documentation on</em> <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv" rel="nofollow noreferrer">Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS)</a>.</p>
<h3>Application config</h3>
<p><strong>Persistent Volume Claim</strong></p>
<p>Your app, e.g. a <code>Deployment</code> or <code>StatefulSet</code>, can have this PVC resource:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-azurefile
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>Then you need to create a <code>StorageClass</code> resource that probably is unique for each type of environment, but needs to have the <strong>same name</strong> and support the <strong>same access modes</strong>. If the environment does not support <em>dynamic volume provisioning</em>, you may have to manually create a <code>PersistentVolume</code> resource as well.</p>
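<p>As an illustration only, a minimal <code>StorageClass</code> sketch for AKS (assuming the in-tree <code>azure-file</code> provisioner; the name must match the <code>storageClassName</code> in the PVC above) could look like:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-azurefile          # must match storageClassName in the PVC
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS       # assumption: standard locally-redundant storage
mountOptions:
  - dir_mode=0777
  - file_mode=0777
</code></pre>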
<p>Examples in different environments:</p>
<ul>
<li>The linked doc <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv" rel="nofollow noreferrer">Dynamically create and use a persistent volume with Azure Files in AKS</a> describes this for Azure.</li>
<li>See <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">AWS EFS doc</a> for creating <code>ReadWriteMany</code> volumes in AWS.</li>
<li><a href="http://pietervogelaar.nl/minikube-nfs-mounts" rel="nofollow noreferrer">Blog about ReadWriteMany storage in Minikube</a></li>
</ul>
<p><strong>Pod using Persistent Volume Claim</strong></p>
<p>You typically deploy apps using a <code>Deployment</code> or a <code>StatefulSet</code>, but the part declaring the Pod template is similar, except that you probably want to use <code>volumeClaimTemplates</code> instead of a <code>PersistentVolumeClaim</code> for a <code>StatefulSet</code>.</p>
<p>See full example on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-pod" rel="nofollow noreferrer">Create a Pod using a PersistentVolumeClaim</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: file-share
      persistentVolumeClaim:
        claimName: my-azurefile  # this must match your name of PVC
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: file-share
</code></pre>
| Jonas |
<p>I have to create a readiness and liveness probe for a Node.js container (Docker) in Kubernetes. My problem is that the container is NOT a server, so I cannot use an HTTP request to see if it is live.</p>
<p>My container runs a <a href="https://www.npmjs.com/package/cron" rel="nofollow noreferrer">node-cron</a> process that downloads some CSV files every 12 h, parses them and inserts the result into Elasticsearch.</p>
<p>I know I could add Express.js, but I would rather not do that just for a probe.</p>
<p>My question is:</p>
<ol>
<li>Is there a way to use some kind of <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">liveness command probe</a>? If it is possible, what command can I use?</li>
<li>Inside the container, I have <a href="https://www.npmjs.com/package/pm2" rel="nofollow noreferrer">pm2</a> running the process. Can I use it in any way for my probe and, if so, how?</li>
</ol>
| OLIVIER | <p><strong>Liveness command</strong></p>
<p>You <em>can</em> use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">Liveness command</a> as you describe. However, I would recommend to design your job/task for Kubernetes.</p>
<p><strong>Design for Kubernetes</strong></p>
<blockquote>
<p>My container runs a node-cron process that download some csv file <strong>every 12 h</strong>, parse them and insert the result in elasticsearch.</p>
</blockquote>
<p>Your job does not execute very often; if you deploy it as a service, it will <strong>take up resources all the time</strong>. And when you write that you want to use <a href="https://www.npmjs.com/package/pm2" rel="nofollow noreferrer">pm2</a> for your process, I would recommend another design. As I understand it, PM2 is a process manager, but Kubernetes is also a process manager <em>in a way</em>.</p>
<p><strong>Kubernetes native CronJob</strong></p>
<p>Instead of handling a <em>process</em> with pm2, implement your <strong>process</strong> as a <strong>container image</strong> and schedule your job/task with a <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob</a>, where you specify your <strong>image</strong> in the <code>jobTemplate</code>. With this design, you don't have any <em>livenessProbe</em>, but your task will be restarted if it fails, e.g. if it fails to insert the result into Elasticsearch due to a network problem.</p>
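<p>A minimal sketch of such a CronJob (the image name is hypothetical, and on clusters older than 1.21 the apiVersion would be <code>batch/v1beta1</code> instead of <code>batch/v1</code>):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: csv-import
spec:
  schedule: "0 */12 * * *"             # every 12 hours
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure     # re-run the task if it fails
          containers:
            - name: csv-import
              image: registry.example.com/csv-import:latest   # hypothetical image with the node script
</code></pre>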
| Jonas |
<p>After upgrading to a new version of Kubernetes, I found that it adds <code>annotation</code> and <code>managedFields</code> config. Now the YAML file is so long that 96% of the content is these two fields, but I do not think I will ever use or change them. Is it possible to hide this content and make the YAML config simpler and cleaner?</p>
| Dolphin | <p>If you upgrade your <code>kubectl</code> client to 1.21 or newer, these fields are hidden.</p>
| Jonas |
<p>When I run <code>skaffold init</code> in my app directory it shows me:</p>
<pre><code>one or more valid Kubernetes manifests are required to run skaffold
</code></pre>
<p>The content of the directory:</p>
<p><a href="https://i.stack.imgur.com/LnTop.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LnTop.png" alt="enter image description here" /></a></p>
<p>Do I have to provide Kubernetes manifests file with for example Pod, Service, etc?</p>
| softshipper | <p>Yes, you need Kubernetes manifests in the same project. Typically a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a>-manifest and perhaps Service and Ingress as well if you want.</p>
<p>A Deployment-manifest can be generated with (using <code>></code> to direct output to a file):</p>
<pre><code>kubectl create deployment my-app --image=my-image --dry-run -o yaml > deployment.yaml
</code></pre>
<p>Note: There is an alpha feature flag <a href="https://skaffold.dev/docs/pipeline-stages/init/#--generate-manifests-flag" rel="noreferrer">--generate-manifests</a> that might do this for you.</p>
<p>E.g. with</p>
<pre><code>skaffold init --generate-manifests
</code></pre>
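<p>For reference, the generated <code>deployment.yaml</code> would look roughly like this trimmed sketch (name and image are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-image
</code></pre>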
| Jonas |
<p>A simple question about scalability. I have been studying scalability and I think I understand the basic concept behind it. You use an orchestrator like Kubernetes to manage the automatic scalability of a system. So in that way, as a particular microservice gets an increased demand of calls, the orchestrator will create new instances of it to deal with the demand. Now, in our case, we are building a microservice structure similar to the example one at Microsoft's "eShop On Containers":</p>
<p><a href="https://i.stack.imgur.com/iP0TE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iP0TE.png" alt="eShop On Containers" /></a></p>
<p>Now, here each microservice has its own database to manage, just like in our application. My question is: When upscaling this system by creating new instances of a certain microservice, let's say the "Ordering microservice" in the example above, wouldn't that create a new set of databases? In the case of our application, we are using SQLite, so each microservice has its own copy of the database. I would assume that being able to upscale such a system would require that each microservice connects to an external SQL Server. But if that was the case, wouldn't that be a bottleneck? I mean, having multiple instances of a microservice to handle more demand of a particular service, BUT with all those instances still accessing a single database server?</p>
| Luca Prodan | <blockquote>
<p>In the case of our application, we are using SQLite, so each microservice has its own copy of the database.</p>
</blockquote>
<p>One of the most important aspects of services that scale-out is that they are <a href="https://12factor.net/processes" rel="nofollow noreferrer">stateless</a> - services on Kubernetes should be designed according to the 12-factor principles. This means that service-instances cannot have its own copy of the database, unless it is a cache.</p>
<blockquote>
<p>I would asume that in order to be able to upscale such a system would require that each microservice connects to an external SQL Server.</p>
</blockquote>
<p>Yes, if you want to be able to scale out, you need to use a database that is outside the instances and shared between them.</p>
<blockquote>
<p>But if that was the case, wouldn't that be a bottle neck?</p>
</blockquote>
<p>This depends very much on how you design your system. Comparing microservices to monoliths: when using a monolith, the whole thing typically uses one big database, but with microservices it is easier to use multiple different databases, so it should be much easier to scale out the database this way.</p>
<blockquote>
<p>I mean, having multiple instances of a microservice to attend more demand of a particular service BUT with all those instances still accessing a single database server?</p>
</blockquote>
<p>There are many ways to scale a database system as well, e.g. caching read-operations (but be careful). But this is a large topic in itself and depends very much on what and how you do things.</p>
| Jonas |
<p>As the title suggests, I can view Kubernetes bearer tokens in the Jenkins logs (/logs/all endpoints). Isn't this a security concern? Is there a way to stop it without having to meddle with the Kubernetes plugin source code? </p>
<p>Edit:</p>
<p>Example log:</p>
<pre><code>Aug 29, 2020 7:39:41 PM okhttp3.internal.platform.Platform log
INFO: Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3Nlcn
</code></pre>
| user5056973 | <p>See the documentation for <a href="https://github.com/square/okhttp/tree/master/okhttp-logging-interceptor" rel="nofollow noreferrer">okhttp</a></p>
<blockquote>
<p><strong>Warning:</strong> The logs generated by this interceptor when using the HEADERS or BODY levels have the potential to leak sensitive information such as "Authorization" or "Cookie" headers and the contents of request and response bodies. This data should only be logged in a controlled way or in a non-production environment.</p>
</blockquote>
<p>So you should probably not activate that logging in an environment where you have <strong>sensitive tokens</strong>.</p>
| Jonas |
<p>Let's say I have deployed a full stack application in my minikube cluster (with a frontend and several backend APIs), and I want the authentication API pod to scale from 0 replicas to n only when I click "login" in the frontend UI. Is it possible to achieve this through service discovery? If so, how? Thanks!</p>
| efgdh | <p>It's unclear what you mean by "pod to speed up only when..." but if you mean that you want to have your app deployed, but scaled down to <strong>0 replicas</strong>, and only scale up to <strong>n replicas</strong> when there is traffic to the service, then <a href="https://knative.dev/docs/serving/" rel="nofollow noreferrer">Knative Serving</a> is an option.</p>
| Jonas |
<p>I have installed the latest Minishift release <code>1.34.3</code> on Windows 10 Hyper-V. The OpenShift client version is <code>4.6.16</code> as expected however the Kubernetes version is <code>1.11</code>.</p>
<pre><code>PS C:\Tools> minishift version
minishift v1.34.3+4b58f89
PS C:\Tools> oc version
Client Version: 4.6.16
Kubernetes Version: v1.11.0+d4cacc0
</code></pre>
<p>From what I understand, OpenShift 4.6 <em>should</em> be <a href="https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html#ocp-4-6-about-this-release" rel="nofollow noreferrer">running Kubernetes v.1.19</a> under the hood. How can I upgrade my OpenShift cluster to run a later version of Kubernetes?</p>
| pirateofebay | <p><a href="https://github.com/minishift/minishift" rel="nofollow noreferrer">minishift</a> is based on OpenShift 3, not the newer OpenShift 4.</p>
<blockquote>
<p><strong>Note:</strong> Minishift runs OpenShift 3.x clusters. Due to different installation methods, OpenShift 4.x clusters are not supported.</p>
</blockquote>
<p>The client, <code>oc</code> you are using is a newer version.</p>
| Jonas |
<p>Is there a way to prevent a Pod from deploying onto Kubernetes if it does not have memory resource requests & limits set?</p>
| DarVar | <p>Yes, you can apply <a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">Limit Ranges</a>. See e.g. <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/" rel="nofollow noreferrer">Configure Minimum and Maximum CPU Constraints for a Namespace</a> for an example for CPU resources, but it can be applied for e.g. memory and storage as well.</p>
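<p>A hedged example of such a <code>LimitRange</code> for memory (the values are placeholders to adjust for your namespace):</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
    - type: Container
      min:
        memory: 64Mi          # containers asking for less than this are rejected
      max:
        memory: 512Mi         # containers asking for more than this are rejected
      defaultRequest:
        memory: 128Mi         # applied when a container sets no request
      default:
        memory: 256Mi         # applied when a container sets no limit
</code></pre>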
| Jonas |
<p>According to the Kubernetes docs a cluster can have up to 5000 nodes. Imagine I have then node0 with 1CPU, node1 with 2CPU, etc.</p>
<p>Let's say I have a single-core, synchronous process which I want to run instances of. Am I then limited by the smallest node in a heterogeneous cluster of nodes? If not, can I enforce choosing the node with the largest CPU (which would then limit me to the single node) or the smallest CPU (which would then allow seamless and consistent parallel execution), or will Kubernetes do it in the background?</p>
| Peter Badida | <blockquote>
<p>Am I then limited by the smallest node in a heterogeneous cluster of nodes? If not, can I enforce choosing the node largest CPU (which would then limit me to the single node) or smallest CPU (which would then allow seamless and consistent parallel execution) or will Kubernetes do it in the background?</p>
</blockquote>
<p>The scheduler does not have insight about what instances you use. So it will schedule the app to any node.</p>
<p>You can use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#example-use-cases" rel="nofollow noreferrer">Taints and Tolerations</a> to classify the nodes and then you can declare to what kind of nodes you want your app to run on by setting <em>tolerations</em>. Or alternatively use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">nodeSelector</a> and labels on nodes.</p>
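<p>A minimal sketch of the <code>nodeSelector</code> approach (the label <code>cpu-class: large</code> is a hypothetical label you would add to the large-CPU nodes yourself):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: single-core-task
spec:
  nodeSelector:
    cpu-class: large          # hypothetical node label
  containers:
    - name: worker
      image: example/worker   # placeholder image
      resources:
        requests:
          cpu: "1"            # reserve one full core
        limits:
          cpu: "1"
</code></pre>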
| Jonas |
<p>In Kubernetes, I have a statefulset with a number of replicas.
I've set the updateStrategy to RollingUpdate.
I've set podManagementPolicy to Parallel.
My statefulset instances do not have a persistent volume claim -- I use the statefulset as a way to allocate ordinals 0..(N-1) to pods in a deterministic manner.</p>
<p>The main reason for this, is to keep availability for new requests while rolling out software updates (freshly built containers) while still allowing each container, and other services in the cluster, to "know" its ordinal.</p>
<p>The behavior I want, when doing a rolling update, is for the previous statefulset pods to linger while there are still long-running requests processing on them, but I want new traffic to go to the new pods in the statefulset (mapped by the ordinal) without a temporary outage.</p>
<p>Unfortunately, I don't see a way of doing this -- what am I missing?</p>
<p>Because I don't use volume claims, you might think I could use deployments instead, but I really do need each of the pods to have a deterministic ordinal, that:</p>
<ul>
<li>is unique at the point of dispatching new service requests (incoming HTTP requests, including public ingresses)</li>
<li>is discoverable by the pod itself</li>
<li>is persistent for the duration of the pod lifetime</li>
<li>is contiguous from 0 .. (N-1)</li>
</ul>
<p>The second-best option I can think of is using something like zookeeper or etcd to separately manage this property, using some of the traditional long-poll or leader-election mechanisms, but given that kubernetes already knows (or can know) about all the necessary bits, AND kubernetes service mapping knows how to steer incoming requests from old instances to new instances, that seems more redundant and complicated than necessary, so I'd like to avoid that.</p>
| Jon Watte | <p>I assume that you need this for a <em>stateful workload</em>, a workload that e.g. requires writes. Otherwise you can use Deployments with multiple pods online for your shards. A key feature with StatefulSet is that they provide <strong>unique stable network identities</strong> for the instances.</p>
<blockquote>
<p>The behavior I want, when doing a rolling update, is for the previous statefulset pods to linger while there are still long-running requests processing on them, but I want new traffic to go to the new pods in the statefulset.</p>
</blockquote>
<p>This behavior is supported by Kubernetes pods. But you also need to implement support for it in your application.</p>
<ul>
<li>New traffic will not be sent to your "old" pods.</li>
<li>A <code>SIGTERM</code> signal will be sent to the pod - your application may want to listen to this and do some action.</li>
<li>After a <em>configurable</em> "termination grace period", your pod will get killed.</li>
</ul>
<p>See <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">Kubernetes best practices: terminating with grace</a> for more info about pod termination.</p>
<p>Be aware that you should connect to <em>services</em> instead of directly to <em>pods</em> for this to work. E.g. you need to create <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless services</a> for the replicas in a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.</p>
<p>If your clients are connecting to a specific <em>headless service</em>, e.g. <code>N</code>, this means that it will not be available for some time during upgrades. You need to decide if your clients should <strong>retry</strong> their connections during this time period <em>or</em> if they should connect to another <em>headless service</em> if <code>N</code> is not available.</p>
<p>If you are in a case where you need:</p>
<ul>
<li>stateful workload (e.g. support for write operations)</li>
<li>want high availability for your instances</li>
</ul>
<p>then you need a form of distributed system that does some form of replication/synchronization, e.g. using <a href="https://raft.github.io/" rel="nofollow noreferrer">raft</a> or a product that implements this. Such a system is most easily deployed as a StatefulSet.</p>
| Jonas |
<p>I have 2 services deployed in Kubernetes both should be ssl end to end. </p>
<ol>
<li>Web based applications</li>
<li>Business service</li>
</ol>
<p>Web based Application needs sticky session, so its been exposed using Ingress.</p>
<pre><code>Web Based Application ---> Ingress(HTTPS) --> Service(ClusterIP) --> Pods(Enabled SSL)
Business Service --> Service(Load Balancer/Cluster IP) --> Pods(Enables SSL)
</code></pre>
<p>Here the requirement is that the Business Service should be accessible only by the Web App and not by anyone else. With just HTTP, I can use a ClusterIP and restrict Business Service access using network policies. But I need SSL from the Web App to the Business Service. It throws an error ("Domain name not matching") if I access it using HTTPS. Is there a better way to do this?</p>
| user1578872 | <p>It depends on how you access the <em>Business Service</em> from your <em>Web Based App</em>. You should use DNS service discovery here, see <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a>, and your HTTPS certificate must reflect this <strong>URL</strong>; you could use a <strong>self-signed certificate</strong> here.</p>
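<p>For example (assuming the Business Service is named <code>business-service</code> in the <code>default</code> namespace and the default cluster domain), the certificate would have to cover the in-cluster name:</p>
<pre><code>https://business-service.default.svc.cluster.local
</code></pre>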
| Jonas |
<ul>
<li><p>I am trying to figure out the networking in Kubernetes, and especially the handling of multicontainer pods. In my simple scenario, I have a total of 3 pods. One has two containers in it and the other one has only one container which wants to communicate with a specific container in that multicontainer pod. I want to figure out how Kubernetes handles the communication between such containers.</p>
<p>For this purpose I have a simple multicontainer pod in a "sidecar architecture"; the YAML file is as follows:</p>
</li>
</ul>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
    - name: sidecar
      image: curlimages/curl
      command: ["/bin/sh"]
      args: ["-c", "echo Hello from the sidecar container; sleep 300"]
      ports:
        - containerPort: 5000
</code></pre>
<ul>
<li><p>What I want to achieve with this YAML file is to have, in the pod "nginx", two containers: one running nginx and listening on port 80 of that pod, the other running a simple curl image (anything different from nginx, to not violate the one-container-per-pod convention of Kubernetes) that can listen for communication on the pod's port 5000.</p>
<p>Then I have another YAML file, again running an nginx image. This container is going to try to communicate with the nginx and curl images on the other pod. The YAML file is as follows:</p>
</li>
</ul>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-simple
  labels:
    app: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
</code></pre>
<ul>
<li><p>After deploying the pods I expose the nginx pod simply using the following command:</p>
<p><code>kubectl expose pods/nginx</code></p>
</li>
<li><p>Then I start a terminal inside the nginx-simple container (the pod with one container). When I curl the IP address I get from <code>kubectl get svc</code>, which is the service generated from my previous expose command, with port 80, I can easily get the welcome message of nginx. However, the problem starts when I try to communicate with the curl container. When I curl the same IP address but this time with port 5000 (the containerPort I set in the YAML file), I get a connection refused error. What am I doing wrong here?</p>
</li>
</ul>
<p>Thanks in advance.</p>
<p>P.S: I would also be more than happy to hear your learning material suggestions for this topic. Thank you so much.</p>
| Mark R. Chandar | <p><a href="https://curl.se/" rel="nofollow noreferrer">curl</a> is a command line tool. It is not a server that is listening to a port, but a client tool that can be used to access servers.</p>
<p>This container does not contain a <em>server</em> that listen to a port:</p>
<pre><code>- name: sidecar
  image: curlimages/curl
  command: ["/bin/sh"]
  args: ["-c", "echo Hello from the sidecar container; sleep 300"]
  ports:
    - containerPort: 5000
</code></pre>
<p>Services deployed on Kubernetes are typically containers containing some form of webserver, but they might be other kinds of services as well.</p>
<hr />
<blockquote>
<p>shouldn't I at least be able to ping the curl container?</p>
</blockquote>
<p>Nope, containers are not Virtual Machines. Containers typically only contain a single <strong>process</strong> and a container can only do what that process do. On Kubernetes these processes are typically webservers listening e.g. on port 8080, so commonly you can only check if they are alive by sending them an HTTP-request. See e.g. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">Configure Liveness, Readiness and Startup Probes</a>.</p>
<blockquote>
<p>When I run telnet pod-ip 5000 I cannot ping this curl container.</p>
</blockquote>
<p>The <a href="https://curl.se/" rel="nofollow noreferrer">curl</a> binary is not a process that listens to any port. E.g. it cannot respond to <a href="https://en.wikipedia.org/wiki/Internet_Control_Message_Protocol" rel="nofollow noreferrer">ICMP</a>. You can typically ping nodes but not containers. Curl is an HTTP client that is typically used to <strong>send an HTTP request</strong>, wait for the HTTP response, and then the process terminates. You can probably see this by inspecting the Pod: the curl container has terminated.</p>
<blockquote>
<p>I am trying to figure out how communication is handled in a multicontainer pod. Each pod has their own unique ip address and containers in the pod can use localhost. I get it but how a container in a different pod can target a specific container in a multicontainer pod?</p>
</blockquote>
<p>I suggest that you add <strong>two</strong> webservers (e.g. two nginx containers) to a pod. But they have to listen to different ports, e.g. port 8080 and port 8081. A client can choose what container it wants to interact with by using the Pod IP and the container port, <code><Pod IP>:<containerPort></code>. E.g. add two nginx containers, configure them to listen to different ports and let them serve different content.</p>
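<p>A sketch of such a two-webserver Pod (note that nginx listens on port 80 by default, so each container would need its own nginx config, e.g. mounted from a ConfigMap not shown here, to listen on 8080 and 8081):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-webservers
spec:
  containers:
    - name: web-a
      image: nginx
      ports:
        - containerPort: 8080   # assumes nginx.conf changed to listen on 8080
    - name: web-b
      image: nginx
      ports:
        - containerPort: 8081   # assumes nginx.conf changed to listen on 8081
</code></pre>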
| Jonas |
<p>I'm still a Kubernetes newbie but I am already confronted with what I think is a mammoth task. My company is hosting a Kubernetes cluster. Because of internal policies, we are obliged to have everything georedundant. This means we are supposed to build a second cluster (identical to the already existing one) in a different data center. Only one of them is ever active and the remaining one acts as a backup.</p>
<p>My task is to come up with different approaches on how we can synchronize those two clusters in real time, so that in case of failover operations can continue without much interruption.</p>
<p>From what I've heard, such a replication can happen on different levels (hypervisor, storage, application...). I want to find out more about these different approaches but I lack the proper lingo to find any good online sources for my problem.
Do you guys know any sources that cover my question? Is there any literature on this? Or any personal experience you might want to share? Thanks in advance!</p>
| Macus Smith | <p>Kubernetes is already a distributed system, by design.</p>
<p>It is more common to run a Kubernetes cluster by using <strong>3</strong> data centers - since it is built upon consensus algorithms like <a href="https://raft.github.io/" rel="nofollow noreferrer">raft</a>. This replaces older ways to run systems by using <strong>2</strong> data centers in an <a href="https://www.jscape.com/blog/active-active-vs-active-passive-high-availability-cluster" rel="nofollow noreferrer">active-passive</a> fashion.</p>
| Jonas |
<p>I am unable to identify what the exact issue with the permissions with my setup as shown below. I've looked into all the similar QAs but still unable to solve the issue. The aim is to deploy Prometheus and let it <strong>scrape</strong> <code>/metrics</code> endpoints that my other applications in the cluster expose fine.</p>
<pre class="lang-sh prettyprint-override"><code>Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"endpoints\" in API group \"\" at the cluster scope"
Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"services\" in API group \"\" at the cluster scope"
...
...
</code></pre>
<p>The command below returns <code>no</code> to all services, nodes, pods etc.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl auth can-i get services --as=system:serviceaccount:default:default -n default
</code></pre>
<p><strong>Minikube</strong></p>
<pre class="lang-sh prettyprint-override"><code>$ minikube start --vm-driver=virtualbox --extra-config=apiserver.Authorization.Mode=RBAC
😄 minikube v1.14.2 on Darwin 11.2
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube" ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
▪ apiserver.Authorization.Mode=RBAC
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard
🏄 Done! kubectl is now configured to use "minikube" by default
</code></pre>
<p><strong>Roles</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: monitoring-cluster-role
rules:
- apiGroups: [""]
resources: ["nodes", "services", "pods", "endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get", "list", "watch"]
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: monitoring-service-account
namespace: default
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: monitoring-cluster-role-binding
roleRef:
kind: ClusterRole
name: monitoring-cluster-role
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: monitoring-service-account
namespace: default
</code></pre>
<p><strong>Prometheus</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config-map
namespace: default
data:
prometheus.yml: |
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus-deployment
namespace: default
labels:
app: prometheus
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus
image: prom/prometheus:latest
ports:
- name: http
protocol: TCP
containerPort: 9090
volumeMounts:
- name: config
mountPath: /etc/prometheus/
- name: storage
mountPath: /prometheus/
volumes:
- name: config
configMap:
name: prometheus-config-map
- name: storage
emptyDir: {}
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: prometheus-service
namespace: default
spec:
type: NodePort
selector:
app: prometheus
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9090
</code></pre>
| BentCoder | <blockquote>
<p>User "system:serviceaccount:default:default" cannot list resource "endpoints" in API group "" at the cluster scope"</p>
</blockquote>
<blockquote>
<p>User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" at the cluster scope"</p>
</blockquote>
<blockquote>
<p>User "system:serviceaccount:default:default" cannot list resource "services" in API group "" at the cluster scope"</p>
</blockquote>
<p>Something running with ServiceAccount <code>default</code> in namespace <code>default</code> is doing things it does not have permissions for.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-service-account
</code></pre>
<p>Here you create a specific ServiceAccount. You also give it some Cluster-wide permissions.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: default
</code></pre>
<p>You run Prometheus in namespace <code>default</code> but do not specify a specific ServiceAccount, so it will run with ServiceAccount <code>default</code>.</p>
<p>I think your problem is that you are supposed to set the ServiceAccount that you create in the Deployment-manifest for Prometheus.</p>
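<p>A sketch of the relevant part of the Deployment, pointing the Pod template at the ServiceAccount you created:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: default
spec:
  template:
    spec:
      serviceAccountName: monitoring-service-account   # instead of the implicit "default"
      containers:
        - name: prometheus
          image: prom/prometheus:latest
</code></pre>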
| Jonas |
<p>I have a really strange and annoying Kubernetes issue. I developed a sign-in service (like <a href="https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-core-webapp" rel="nofollow noreferrer">this</a>) and it is working without errors when running it on my Windows laptop. In addition, it is also working fine when running it on my local Kubernetes single-node cluster, activated with Docker Desktop. Docker Desktop, in my situation, uses Linux containers with WSL (2) integration. I want the behaviour to be the same on my EKS cluster, which is simply not happening. Let me first describe the relevant files.</p>
<p>This is my deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxxxxxx
spec:
  selector:
    matchLabels:
      app: xxxxxxxx
  replicas: 3
  template:
    metadata:
      labels:
        app: xxxxxxxx
    spec:
      containers:
        - name: xxxxxxxx
          image: yyyyyy.dkr.ecr.qqqqq.amazonaws.com/xxxxxxxx:2676
          livenessProbe:
            httpGet:
              path: /health/live
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 80
          env:
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "KubernetesDevelopment"
          volumeMounts:
            - name: secrets
              mountPath: /secret
              readOnly: true
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
      volumes:
        - name: secrets
          secret:
            secretName: secret-appsettings
      imagePullSecrets:
        - name: awspull
</code></pre>
<p>This is my Dockerfile:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:5.0.4-alpine3.13 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["QQQQQ.API/QQQQQ.API.csproj", "QQQQQ.API/"]
RUN dotnet restore "QQQQQ.API/QQQQQ.API.csproj"
COPY . .
WORKDIR "/src/QQQQQ.API"
RUN dotnet build "QQQQQ.API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "QQQQQ.API.csproj" -c Release -o /app/publish
FROM base AS final
RUN addgroup -S lirantal && adduser -S lirantal -G lirantal
WORKDIR /app
COPY --from=publish /app/publish .
RUN chown -R lirantal:lirantal /app
USER lirantal
CMD cp /secret/*.* /app && dotnet QQQQQ.API.dll
</code></pre>
<p>This is what has been logged (on my own local Kubernetes cluster):</p>
<blockquote>
<p>warn:
Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]</p>
<p>Storing keys in a directory
'/home/lirantal/.aspnet/DataProtection-Keys' that may not be persisted
outside of the container. Protected data will be unavailable when
container is destroyed.</p>
<p>warn:
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]</p>
<p>No XML encryptor configured. Key
{e13ce6cb-c64d-4aaf-ad4f-1a345c73f5bc} may be persisted to storage in
unencrypted form.</p>
<p>info: Microsoft.Hosting.Lifetime[0]</p>
<p>Now listening on: http://[::]:80</p>
<p>info: Microsoft.Hosting.Lifetime[0]</p>
<p>Application started. Press Ctrl+C to shut down.</p>
<p>info: Microsoft.Hosting.Lifetime[0]
Hosting environment: KubernetesDevelopment
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3]</p>
<p>Failed to determine the https port for redirect.</p>
</blockquote>
<p>And this is what has been logged when running it in AWS EKS:</p>
<blockquote>
<p>Loading... warn:
Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
Storing keys in a directory
'/home/lirantal/.aspnet/DataProtection-Keys' that may not be persisted
outside of the container. Protected data will be unavailable when
container is destroyed. warn:
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35] No
XML encryptor configured. Key {04076fdc-f191-4253-9dc4-dbc77981d9b3}
may be persisted to storage in unencrypted form.
crit:
<strong>> Microsoft.AspNetCore.Server.Kestrel[0] Unable to start Kestrel.</strong>
System.Net.Sockets.SocketException (13): Permission denied at
System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError
error, String callerName) at System.Net.Sockets.Socket.DoBind(EndPoint
endPointSnapshot, SocketAddress socketAddress) at
System.Net.Sockets.Socket.Bind(EndPoint localEP) at
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.g__BindSocket|13_0(<>c__DisplayClass13_0&
) at
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint
endpoint, CancellationToken cancellationToken) at
Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint
endPoint, ConnectionDelegate connectionDelegate, EndpointConfig
endpointConfig) at
Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.<>c__DisplayClass29_0<code>1.<<StartAsync>g__OnBind|0>d.MoveNext() --- End of stack trace from previous location --- at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindEndpointAsync(ListenOptions endpoint, AddressBindContext context) at Microsoft.AspNetCore.Server.Kestrel.Core.ListenOptions.BindAsync(AddressBindContext context) at Microsoft.AspNetCore.Server.Kestrel.Core.AnyIPListenOptions.BindAsync(AddressBindContext context) at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context) at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IEnumerable</code>1
listenOptions, AddressBindContext context) at
Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.BindAsync(CancellationToken
cancellationToken) at
Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.StartAsync[TContext](IHttpApplication<code>1 application, CancellationToken cancellationToken) Unhandled exception. System.Net.Sockets.SocketException (13): Permission denied at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName) at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress) at System.Net.Sockets.Socket.Bind(EndPoint localEP) at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& ) at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind() at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken) at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint endPoint, ConnectionDelegate connectionDelegate, EndpointConfig endpointConfig) at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.<>c__DisplayClass29_0</code>1.<g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location --- at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindEndpointAsync(ListenOptions
endpoint, AddressBindContext context) at
Microsoft.AspNetCore.Server.Kestrel.Core.ListenOptions.BindAsync(AddressBindContext
context) at
Microsoft.AspNetCore.Server.Kestrel.Core.AnyIPListenOptions.BindAsync(AddressBindContext
context) at
Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext
context) at
Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IEnumerable<code>1 listenOptions, AddressBindContext context) at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.BindAsync(CancellationToken cancellationToken) at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.StartAsync[TContext](IHttpApplication</code>1
application, CancellationToken cancellationToken) at
Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken
cancellationToken) at
Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken
cancellationToken) at
Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost
host, CancellationToken token) at
Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost
host, CancellationToken token) at
Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost
host) at
QQQQ.API.Program.Main(String[] args)
in /src/Kinly.SMPD.XXXXXXX.API/Program.cs:line 10</p>
</blockquote>
<p>This is really strange. What I do is just entering:</p>
<pre><code>kubectl apply -f deployment.yml
</code></pre>
<p>The pods then start logging things, which is normal. However, Kestrel should work, and does work, but only on my "own" Kubernetes cluster, not when using the EKS Kubernetes cluster. How come? And how do I fix it? I find this so strange, as this is an EKS-only problem. My service simply does not log any errors, except when running it on my EKS cluster. So please tell me how this is possible and how to fix it.</p>
| Daan | <blockquote>
<p>Microsoft.AspNetCore.Server.Kestrel[0] Unable to start Kestrel. System.Net.Sockets.SocketException (13): Permission denied at</p>
</blockquote>
<p>Looks like your container does not have enough permissions to listen on the port that you want.</p>
<p>On Linux, you typically need privileged permissions to listen on ports 1024 and lower. I suggest that you change your app to listen on ports 8080 and 8443 instead. You can have a Service that exposes port 80 but maps that to targetPort 8080.</p>
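<p>A sketch of that change (the service and selector names are placeholders; adjust to your app):</p>
<pre><code># In the container spec: make ASP.NET Core listen on an unprivileged port
env:
  - name: ASPNETCORE_URLS
    value: http://+:8080
---
# Service: expose port 80, forward to the container's 8080
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app               # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
</code></pre>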
| Jonas |
<p>We have microservices running in an AWS EKS cluster, and many of the microservices have more than 10 pod replicas; for monitoring we are using Grafana. Unfortunately, some of the pods in the same microservice are showing very high CPU usage, say 80%, and some are like 0.35%. Our understanding is that Kubernetes will do the load balancing equally to distribute load. What are we missing here?</p>
| Vishwanath.M | <p>How traffic is distributed from outside the cluster to your pods depends on the Load Balancer Controller, e.g. <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/how-it-works/" rel="nofollow noreferrer">AWS Load Balancer Controller</a>.</p>
<p>But the Load Balancer Controller typically does not take CPU usage into consideration; it only spreads traffic evenly across your replicas.</p>
<p>Typically, CPU load depends heavily on what your replicas are doing, e.g. some HTTP paths may use more CPU and other paths are easier to handle. You need more insight to decide what to do, e.g. add some caching.</p>
| Jonas |
<p><strong>What I'm trying to solve :</strong> Have a Java microservice be aware of total number of Replicas. This replica count is dynamic in nature</p>
<p><strong>Problem:</strong> Kubernetes downward API has limited metadata that doesn't include this information. Is there a way to quasi-query a kubectl-like command natively from a container?</p>
<p><strong>Why am I trying to do this:</strong> In relation to Kafka, new replica will cause a rebalance. Looking to mitigate rebalancing when new containers come online/offline with additional business logic.</p>
<p><strong>Goal:</strong> Create an arbiter java-service that detects replica count for deployment <em>xyz</em> and orchestrates corresponding pods to yield on Kafka connection until criteria is met</p>
<p><em>also if this is crazy, I wont take offense. In that case I'm asking for a friend</em></p>
| stackoverflow | <blockquote>
<p>Kubernetes downward API has limited metadata that doesn't include this information. Is there a way to quasi-query a kubectl-like command natively from a container?</p>
</blockquote>
<p>You need to query the Kubernetes API server for info about the number of replicas for a specific Deployment. You can do this with <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">kubernetes-client java</a>.</p>
<blockquote>
<p>Why am I trying to do this: In relation to Kafka, new replica will cause a rebalance. Looking to mitigate rebalancing when new containers come online/offline with additional business logic.</p>
</blockquote>
<p>Sounds like you want a <em>consistent</em> number of Pods all the time, e.g. <strong>avoiding Rolling Deployment</strong>? In your Deployment, you can set <code>strategy:</code> to <code>type: Recreate</code> - then the current Pod will be removed first, and then the new one will be created - so at most 1 is running at the same time (or the same number as replicas).</p>
<h2>StatefulSet</h2>
<p>When you want <strong>at most X replicas</strong> you should consider using a <code>StatefulSet</code>, as its behavior differs from a <code>Deployment</code> when e.g. a Node becomes unreachable. A <code>Deployment</code> has the behavior of <strong>at least X replicas</strong>.</p>
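<p>A sketch of the <code>Recreate</code> strategy mentioned above (everything except <code>strategy</code> is a placeholder):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-consumer
spec:
  replicas: 3
  strategy:
    type: Recreate            # old Pods are terminated before new ones are created
  selector:
    matchLabels:
      app: kafka-consumer
  template:
    metadata:
      labels:
        app: kafka-consumer
    spec:
      containers:
        - name: consumer
          image: example/consumer:latest   # placeholder image
</code></pre>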
| Jonas |
<p>Currently our k8s project has the following use case, where the namespaces are hardcoded into values.yaml and the application source code:</p>
<pre><code>(apps) namespace - NS1
> micro-service-A1
> micro-service-A2
> micro-service-A3
(database) namespace - DB1
> mongo-service
(messaging) namespace - MB1
> kafka-zk-service
</code></pre>
<p>We want to run multiple sets of the above services (apps, database, messaging) in unique namespaces defined by each engineer (developer),
such that each developer can safely bring down or play around with the complete set belonging to them, without worrying about impacting other developers' namespaces.</p>
<p><strong># Developer1 (set)</strong></p>
<hr>
<pre><code>(apps) namespace - Dev1
> micro-service-A1
> micro-service-A2
> micro-service-A3
(database) namespace - Dev1_DB
> mongo-service
(messaging) namespace - Dev1_MB
> kafka-zk-service
</code></pre>
<p><strong># Developer2 (set)</strong></p>
<hr>
<pre><code>(apps) namespace - Dev2
> micro-service-A1
> micro-service-A2
> micro-service-A3
(database) namespace - Dev2_DB
> mongo-service
(messaging) namespace - Dev2_MB
> kafka-zk-service
</code></pre>
<p>What should the configuration of the YAMLs & application source code be such that dynamic deployment is feasible in any namespace of the developer's choice?</p>
| Bhavani Prasad | <h2>Externalize configuration</h2>
<p>It is good to <em>externalize your configuration</em> so that you can use a different configuration without building a new image.</p>
<p>Use a ConfigMap for configuration such as the addresses of other services or databases. See <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a> for addressing.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  SERVICE_A: service-a.a-namespace.svc.cluster.local
  SERVICE_B: service-b.b-namespace.svc.cluster.local
  DB: db.local
</code></pre>
<p>Use the values from your ConfigMap as <em>environment variables</em> in your app by mapping them in your <code>Pod</code>, or in the Pod template of your <code>Deployment</code>:</p>
<pre><code>containers:
  - name: app-container
    image: k8s.gcr.io/app-image
    env:
      - name: SERVICE_A_ADDRESS
        valueFrom:
          configMapKeyRef:
            name: config
            key: SERVICE_A
      - name: SERVICE_B_ADDRESS
        valueFrom:
          configMapKeyRef:
            name: config
            key: SERVICE_B
</code></pre>
<h2>Service with External Name</h2>
<p>If you want to move a service to a new namespace but keep the addressing, you can leave a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">Service with External Name</a></p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: service-a
  namespace: namespace-a
spec:
  type: ExternalName
  externalName: service-c.namespace-c.svc.cluster.local
  ports:
    - port: 80
</code></pre>
| Jonas |
<p>Is it possible to specify or change the service account to be used when accessing the kube API from within the cluster using rest.InClusterConfig in Golang?
It seems to use the default service account (or the service account the pod is running under), but I want to use another service account.
I am aware that I can use BuildConfigFromFlags and use the configs from a config file that may be tied to a service account, but I wanted to see if it is possible to override the service account with rest.InClusterConfig.</p>
| NonoPa Naka | <p>In Kubernetes, a Pod (or multiple for the same service) has a ServiceAccount. That is the way it is designed.</p>
<p>This ServiceAccount can be a specific one that you create; you don't have to use the default ServiceAccount in a Namespace.</p>
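<p>A minimal sketch of assigning such a ServiceAccount to a Pod (the names are examples):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-custom-sa   # rest.InClusterConfig then uses this account's mounted token
  containers:
    - name: app
      image: example/app:latest      # placeholder image
</code></pre>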
| Jonas |
<p>I am using the kubernetes operator to create a custom resource in the cluster, the CR has the <code>Status</code> field populated, but when the object gets created the <code>Status</code> field is empty.</p>
<p>This is how I am creating the CR:</p>
<pre><code>reconcile.Create(ctx, &object)
</code></pre>
<p>This is what I am trying to accomplish with k8s operator:</p>
<p><a href="https://i.stack.imgur.com/IiurP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IiurP.png" alt="enter image description here" /></a></p>
| Vishrant | <p>The architecture of Kubernetes API and resources follows a pattern.</p>
<ol>
<li><p>Clients may create resources, by specifying a <em>desired state</em> (This is the <code>spec:</code> part of a resource). This is a "create" request sent to the API Server.</p>
</li>
<li><p>Controllers, subscribe/watch to changes of resources, while doing actions in a <em>reconciliation loop</em>, they might update the Status of the resource (this is the <code>status:</code> part of the resource).</p>
</li>
</ol>
<p>For an example of how a controller is implemented and updates the status, see the <a href="https://book.kubebuilder.io/cronjob-tutorial/controller-implementation.html#2-list-all-active-jobs-and-update-the-status" rel="nofollow noreferrer">Kubebuilder book: Implementing a Controller - Update the Status</a>.</p>
<p>The client in the example is a "controller runtime client":</p>
<pre><code>"sigs.k8s.io/controller-runtime/pkg/client"
</code></pre>
<p>Example code, where the <em>reconciler</em> updates the <code>status</code> sub-resource:</p>
<pre><code>if err := r.Status().Update(ctx, &cronJob); err != nil {
    log.Error(err, "unable to update CronJob status")
    return ctrl.Result{}, err
}
</code></pre>
| Jonas |
<p>I have an ASP.NET Core app that I want to configure with HTTPS in my local kubernetes cluster using minikube.</p>
<p>The deployment yaml file is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-volume
labels:
app: kube-volume-app
spec:
replicas: 1
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
containers:
- name: ckubevolume
image: kubevolume
imagePullPolicy: Never
ports:
- containerPort: 80
- containerPort: 443
env:
- name: ASPNETCORE_ENVIRONMENT
value: Development
- name: ASPNETCORE_URLS
value: https://+:443;http://+:80
- name: ASPNETCORE_HTTPS_PORT
value: '443'
- name: ASPNETCORE_Kestrel__Certificates__Default__Password
value: mypass123
- name: ASPNETCORE_Kestrel__Certificates__Default__Path
value: /app/https/aspnetapp.pfx
volumeMounts:
- name: ssl
mountPath: "/app/https"
volumes:
- name: ssl
configMap:
name: game-config
</code></pre>
<p>You can see I have added <strong>environment variables</strong> for https in the yaml file.</p>
<p>I also created a service for this deployment. The yaml file of the service is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-1
spec:
type: NodePort
selector:
component: web
ports:
- name: http
protocol: TCP
port: 100
targetPort: 80
- name: https
protocol: TCP
port: 200
targetPort: 443
</code></pre>
<p>But unfortunately the app does not open via the service when I run the <strong>minikube service service-1</strong> command.</p>
<p>However, when I remove the env variables for https, the app does open via the service. These are the lines which, when removed, make the app open:</p>
<pre><code>- name: ASPNETCORE_URLS
value: https://+:443;http://+:80
- name: ASPNETCORE_HTTPS_PORT
value: '443'
- name: ASPNETCORE_Kestrel__Certificates__Default__Password
value: mypass123
- name: ASPNETCORE_Kestrel__Certificates__Default__Path
value: /app/https/aspnetapp.pfx
</code></pre>
<p>I also confirmed with the shell that the certificate is present in the /app/https folder.</p>
<p>What am I doing wrong?</p>
| yogihosting | <p>I think your approach does not fit well with the architecture of Kubernetes. A TLS certificate (for https) is coupled to a hostname.</p>
<p>I would recommend one of two different approaches:</p>
<ul>
<li>Expose your app with a Service of <code>type: LoadBalancer</code></li>
<li>Expose your app with an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress resource</a></li>
</ul>
<h2>Expose your app with a Service of type LoadBalancer</h2>
<p>This is typically called a Network LoadBalancer as it exposes your app for TCP or UDP directly.</p>
<p>See <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#loadbalancer-access" rel="nofollow noreferrer">LoadBalancer access</a> in the Minikube documentation. But beware that your app get an external address from your LoadBalancer, and your TLS certificate probably has to match that.</p>
<h2>Expose your app with an Ingress resource</h2>
<p>This is the most common approach for Microservices in Kubernetes. In addition to your Service of <code>type: NodePort</code> you also need to create an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress resource</a> for your app.</p>
<p>The cluster needs an Ingress controller, and the gateway will handle your TLS certificate instead of your app.</p>
<p>See <a href="https://minikube.sigs.k8s.io/docs/tutorials/custom_cert_ingress/" rel="nofollow noreferrer">How to use custom TLS certificate with ingress addon</a> for how to configure both Ingress and TLS certificate in Minikube.</p>
<p>I would recommend going this route.</p>
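<p>As a rough sketch of that route (the hostname and TLS Secret name are placeholders, and an Ingress controller is assumed to be enabled in Minikube), TLS terminates at the Ingress and your app only needs to serve plain HTTP on port 80:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-volume-ingress
spec:
  tls:
  - hosts:
    - myapp.example.com         # hypothetical hostname
    secretName: my-tls-secret   # assumed TLS Secret with your certificate
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              number: 100       # the http port of your existing Service
</code></pre>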
| Jonas |
<p>My requirement is to copy large files from Kubernetes Prod PVC to Non-Prod PVC? This has to happen by a scheduled job. What options do I have to achieve this? Any suggestions, please.</p>
| Visweswara Sriadibhatla | <p>In order to do that, please remember that the Prod PVC has to be on a filesystem/volume which supports multiple access modes (for example, NFS can support multiple read/write clients).</p>
<p>Filesystems like ext4 are not clustered and you cannot have two different systems accessing the same ext4 filesystem (unless you involve clustered software ...).
This web page <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes</a> shows which k8s volumes can be used with "ReadWriteMany" mode, i.e.:</p>
<ol>
<li>NFS</li>
<li>CephFS</li>
<li>Glusterfs</li>
<li>Portworx Volumes</li>
</ol>
<p>Another option is to have a sidecar container (part of the Prod pod) which has access to the Prod PVC (this is supported) and periodically copies the file to the Non-Prod PVC (which also has to be mounted in the pod).</p>
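<p>Since you mentioned a scheduled job, a rough sketch of a CronJob that mounts both PVCs and copies the data could look like the following (the claim names, image and schedule are placeholders, and both PVCs are assumed to be in the same namespace, since a Pod can only mount PVCs from its own namespace):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: pvc-copy
spec:
  schedule: "0 2 * * *"            # run once a day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: copy
            image: busybox                                 # any image with a shell and cp
            command: ["sh", "-c", "cp -r /src/. /dst/"]    # copy everything from Prod to Non-Prod
            volumeMounts:
            - name: src
              mountPath: /src
              readOnly: true
            - name: dst
              mountPath: /dst
          volumes:
          - name: src
            persistentVolumeClaim:
              claimName: prod-pvc       # placeholder name of the Prod PVC
          - name: dst
            persistentVolumeClaim:
              claimName: nonprod-pvc    # placeholder name of the Non-Prod PVC
</code></pre>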
| pb100 |
<p>I'm not quite sure which of the following approaches is the better one to create a controller in kubernetes, however I know that:</p>
<ul>
<li>I don't want to create a custom resource by any means.</li>
<li>I only want to fetch information about k8s native resources (pods, ...), given that there might be a lot of pods in each namespace</li>
</ul>
<p>I have seen some patterns like:</p>
<pre><code>ctrl, err := controller.New("name-here", mgr, controller.Options{
Reconciler: &ReconcilePod{Client: mgr.GetClient(), Logger: log},
})
</code></pre>
<p>where <code>ReconcilePod</code> is a struct that has a function <code>Reconcile</code> that keeps the whole business logic.</p>
<p>Another approach I have seen is like the following:</p>
<pre><code>type Controller struct {
indexer cache.Indexer
queue workqueue.RateLimitingInterface
informer cache.Controller
}
</code></pre>
<p>and then defining <code>shared informer</code> and <code>watcher</code> etc.
And the third pattern that I have seen is using <code>operators</code></p>
<p>what I don't get perhaps is what is the main differences between mentioned approaches above and which one fits my need at scale.</p>
| Elaheh | <p>If you don't want to "control" anything, there is no need to create a <em>controller</em>.</p>
<p>If you just want to "read" and "watch" resources, you can use <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> and see e.g. <a href="https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/" rel="nofollow noreferrer">Extend Kubernetes via a shared informer</a> for inspiration about how to <em>read</em> and <em>watch</em> resources.</p>
<blockquote>
<p>To stay informed about when these events get triggered you can use a primitive exposed by Kubernetes and the client-go called SharedInformer, inside the cache package. Let’s see how it works in practice.</p>
</blockquote>
<p>Controllers are more complex and contain a <em>reconciliation loop</em>, since they should realize/manage a <em>desired state</em>.</p>
<p>An "operator" is a <em>controller</em> as well.</p>
| Jonas |
<p>Every time I try to access a NodePort on my machine, it says "Error Connection Refused." I don't understand, since the examples I read online imply that I can run Docker Desktop on my laptop, connect to the cluster, and access services via their nodeport.</p>
<p>My machine:</p>
<ul>
<li>Windows 10</li>
<li>Docker Desktop (tested additionally with <code>k3s</code> and <code>minikube</code> with similar results)</li>
<li>Kubernetes 1.19+</li>
</ul>
<p>Kubernetes Configuration:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: ngnix-service
spec:
selector:
app: nginx
type: NodePort
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30007
</code></pre>
<p>Output and cURL test:</p>
<pre><code>PS C:\Users\ME\nginx> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 169m
ngnix-service NodePort 10.108.214.243 <none> 80:30007/TCP 7m19s
PS C:\Users\ME\nginx> curl.exe http://localhost:30007
curl: (7) Failed to connect to localhost port 30007: Connection refused
</code></pre>
<p>I've also tried with the node ip:</p>
<pre><code>PS C:\Users\ME\nginx> kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
docker-desktop Ready master 6d v1.19.7 192.168.65.4 <none> Docker Desktop 5.10.25-linuxkit docker://20.10.5
PS C:\Users\ME\nginx> curl.exe http://192.168.65.4:30007
curl: (7) Failed to connect to 192.168.65.4 port 30007: Timed out
</code></pre>
<p>I get the same response when trying to access a NodePort from my browser (Chrome). <code>ERR_CONNECTION_REFUSED</code></p>
<p>Is there something I'm missing? Why are all NodePorts inaccessible?</p>
| Veridian Dynamics | <p>Kubernetes, even when run locally, still runs on its internal network.</p>
<blockquote>
<p>curl.exe <a href="http://192.168.65.4:30007" rel="nofollow noreferrer">http://192.168.65.4:30007</a></p>
</blockquote>
<p>Here you use an IP address that is internal to the Kubernetes network. You must expose your Kubernetes service so that it gets a cluster-external address.</p>
<p>See this part:</p>
<pre><code>EXTERNAL-IP
<none>
</code></pre>
<p>You typically expose the service outside the cluster with a Service of <code>type: Loadbalancer</code> or use an Ingress-gateway.</p>
<p>See this <a href="https://stackoverflow.com/a/50178697/213269">answer</a> on how you can change your Service from <code>type:NodePort</code> to <code>type: LoadBalancer</code> to expose it to your localhost.</p>
<p>The easiest way to access your service is to use <code>kubectl port-forward</code>, e.g.</p>
<pre><code>kubectl port-forward service/ngnix-service 8080:80
</code></pre>
<p>Then you can access it on <code>localhost:8080</code>.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">Use Port Forwarding to Access Applications in a Cluster</a></p>
| Jonas |
<p>I'm having trouble finding a solution that allows terminating only certain pods in a deployment.</p>
<p>The application running inside the pods does some processing which can take a lot of time to finish.</p>
<p>Let's say I have 10 tasks that are stored in a database and I issue a command to scale the deployment to 10 pods.</p>
<p>Let's say that after some time 3 of the pods have finished their tasks and are no longer required.
How can I scale down the deployment from 10 to 7 while terminating only the pods that have finished their tasks and not the pods that are still processing them?</p>
<p>I don't know if more details are needed, but I will happily edit the question if more details are needed to give an answer for this kind of problem.</p>
| Aurel Drejta | <p>In this case a Kubernetes Job might be better suited for this kind of task.</p>
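<p>A rough sketch of such a Job (the worker image is a placeholder; each pod is assumed to pick one task from the database and exit with code 0 when it is done):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: task-worker
spec:
  completions: 10    # total number of tasks that must finish
  parallelism: 10    # how many worker pods run at the same time
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: my-worker:latest   # hypothetical worker image
</code></pre>
<p>A pod that finishes its task simply exits and is counted as a completion; Kubernetes does not terminate the pods that are still working, which avoids the problem of scaling down the wrong pods.</p>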
| pb100 |
<p>I am looking for how to have a spare/cold replica/pod in my Kubernetes configuration. I assume it would go in my Kubernetes deployment or HPA configuration. Any idea how I would make it so I have 2 spare/cold instances of my app always ready, which only get put into the active pods once HPA requests another instance? My goal is to have basically zero startup time on a new pod when HPA says it needs another instance.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: someName
namespace: someNamespace
labels:
app: someName
version: "someVersion"
spec:
replicas: $REPLICAS
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: someMaxSurge
maxUnavailable: someMaxUnavailable
selector:
matchLabels:
app: someName
version: someVersion
template:
metadata:
labels:
app: someName
version: "someVersion"
spec:
containers:
- name: someName
image: someImage:someVersion
imagePullPolicy: Always
resources:
limits:
memory: someMemory
cpu: someCPU
requests:
memory: someMemory
cpu: someCPU
readinessProbe:
failureThreshold: someFailureThreshold
initialDelaySeconds: someInitialDelaySeconds
periodSeconds: somePeriodSeconds
timeoutSeconds: someTimeoutSeconds
livenessProbe:
httpGet:
path: somePath
port: somePort
failureThreshold: someFailureThreshold
initialDelaySeconds: someInitialDelay
periodSeconds: somePeriodSeconds
timeoutSeconds: someTimeoutSeocnds
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: someName
namespace: someNamespace
spec:
minAvailable: someMinAvailable
selector:
matchLabels:
app: someName
version: "someVersion"
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: someName-hpa
namespace: someNamespace
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: someName
minReplicas: someMinReplicas
maxReplicas: someMaxReplicas
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: someAverageUtilization
</code></pre>
| Brian | <blockquote>
<p>I am just wanting to always have 2 spare for scaling, or if one becomes unavailable or any reason</p>
</blockquote>
<p>It is a good practice to have at least two replicas for services on Kubernetes. This helps if e.g. a node goes down or you need to do <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">maintenance of the node</a>. Also set <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">Pod Topology Spread Constraints</a> so that those pods are scheduled to run on different nodes.</p>
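<p>A minimal sketch of such a constraint inside the PodTemplate spec (assuming the standard <code>kubernetes.io/hostname</code> node label and your existing <code>app: someName</code> label):</p>
<pre><code>spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname   # spread the replicas across nodes
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: someName
</code></pre>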
<p>Set the number of replicas that you minimum want as desired state. In Kubernetes, traffic will be load balanced to the replicas. Also use <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> to define when you want to autoscale to more replicas. You can set the requirements low for autoscaling, if you want to scale up early.</p>
| Jonas |
<p>I have a problem mounting 2 files in a pod, one is being treated as a directory for some reason (maybe stupid one, but I looked and looked, couldn't find a solution).</p>
<p>in my <code>config</code> folder there's 2 files:</p>
<pre><code>config
|- log4j.properties
|- server.main.properties
</code></pre>
<p>Running <code>StatefulSet</code>, here's the Volumes part of the manifest file:</p>
<pre><code>containers:
...
volumeMounts:
- mountPath: /usr/local/confluent/config/log4j.properties
name: log4j-properties
subPath: log4j.properties
- mountPath: /usr/local/confluent/config/server.properties
name: server-properties
subPath: server.properties
restartPolicy: Always
volumes:
- name: log4j-properties
configMap:
name: log4j-properties
defaultMode: 0777
- name: server-properties
configMap:
name: server-properties
defaultMode: 0777
volumeClaimTemplates:
- metadata:
name: confluent-persistent-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>Created ConfigMaps:</p>
<pre><code>kubectl create configmap server-properties --from-file=config/server.main.properties
kubectl create configmap log4j-properties --from-file=config/log4j.properties
</code></pre>
<p><code>kubectl describe pod</code> gives mounted volumes as:</p>
<pre><code> Volumes:
confluent-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: confluent-persistent-storage-confluent-0
ReadOnly: false
server-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: server-properties
Optional: false
log4j-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: log4j-properties
Optional: false
</code></pre>
<p>It's being initialized for 5-6 minutes, and in the logs I can see that server.properties is not a file but a folder; when I exec into the pod, I can see that an actual folder has been created instead of a file. What am I doing wrong here?</p>
| dejanmarich | <blockquote>
<p>subPath: server.properties</p>
</blockquote>
<p>Wouldn't you want to use it as below?</p>
<blockquote>
<p>subPath: server.main.properties</p>
</blockquote>
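<p>A minimal sketch of the corrected mount; since the ConfigMap was created with <code>--from-file=config/server.main.properties</code>, the key inside the volume is <code>server.main.properties</code>, and that is what <code>subPath</code> must reference:</p>
<pre><code>volumeMounts:
- mountPath: /usr/local/confluent/config/server.properties
  name: server-properties
  subPath: server.main.properties   # must match the key in the ConfigMap
</code></pre>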
| Jonas |
<p>I have a 7.4.0 ES cluster using ECK 1.0, and after my 3 dedicated master nodes ran out of disk space, I deleted them along with the volumes to test a critical scenario.</p>
<p>Once the new eligible masters were created, they couldn't elect a new member. Now the cluster is stuck forever although it sees the new master eligible servers (pods in k8s).</p>
<p>Is there a way to force ES to elect a new master even though the previous ones are out of the picture?</p>
<p>Be aware that the masters had no data. All the data resides on data only nodes. Unfortunately, I cannot access them as long as a master is not elected.</p>
| gmolaire | <blockquote>
<p>Be aware that the masters had no data.</p>
</blockquote>
<p>This is not really true. The master nodes hold the cluster metadata which Elasticsearch needs to correctly understand the data stored on the data nodes. Since you've deleted the metadata, the data on the data nodes is effectively meaningless.</p>
<p>At this point your best option is to start again with a new cluster and restore your data from a recent snapshot.</p>
| Dave Turner |
<p>In most resource managers, we can set the container's CPU usage and memory usage.
But I'm curious about the technical reasons for not supporting disk I/O resource allocation for containers.</p>
| 김민우 | <p>According to <a href="https://andrestc.com/post/cgroups-io/" rel="nofollow noreferrer">Using cgroups to limit I/O</a>, this requires <strong>cgroups v2</strong>, and that is a quite recent feature for container runtimes.</p>
<p>For Kubernetes support, you should probably follow <a href="https://github.com/kubernetes/enhancements/pull/1907" rel="nofollow noreferrer">initial kep for qos of storage v0.1</a> and <a href="https://github.com/kubernetes/enhancements/issues/2254" rel="nofollow noreferrer">cgroups v2 was added as alpha feature in Kubernetes 1.22</a>.</p>
<p>So it looks like there is work in progress on this.</p>
| Jonas |
<p>I'm having a hard time getting EKS to expose an IP address to the public internet. Do I need to set up the ALB myself or do you get that for free as part of the EKS cluster? If I have to do it myself, do I need to define it in the terraform template file or in the kubernetes object yaml?</p>
<p>Here's my EKS cluster defined in Terraform along with what I think are the required permissions.</p>
<pre class="lang-golang prettyprint-override"><code>// eks.tf
resource "aws_iam_role" "eks_cluster_role" {
name = "${local.env_name}-eks-cluster-role"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Service = "eks.amazonaws.com"
},
Action = "sts:AssumeRole"
}
]
})
}
resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_role.name
}
resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
role = aws_iam_role.eks_cluster_role.name
}
resource "aws_kms_key" "eks_key" {
description = "EKS KMS Key"
deletion_window_in_days = 7
enable_key_rotation = true
tags = {
Environment = local.env_name
Service = "EKS"
}
}
resource "aws_kms_alias" "eks_key_alias" {
target_key_id = aws_kms_key.eks_key.id
name = "alias/eks-kms-key-${local.env_name}"
}
resource "aws_eks_cluster" "eks_cluster" {
name = "${local.env_name}-eks-cluster"
role_arn = aws_iam_role.eks_cluster_role.arn
enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
vpc_config {
subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}
encryption_config {
resources = ["secrets"]
provider {
key_arn = aws_kms_key.eks_key.arn
}
}
tags = {
Environment = local.env_name
}
}
resource "aws_iam_role" "eks_node_group_role" {
name = "${local.env_name}-eks-node-group"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Service = "ec2.amazonaws.com"
},
Action = "sts:AssumeRole"
}
]
})
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_node_group_role.name
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_node_group_role.name
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_node_group_role.name
}
resource "aws_eks_node_group" "eks_node_group" {
instance_types = var.node_group_instance_types
node_group_name = "${local.env_name}-eks-node-group"
node_role_arn = aws_iam_role.eks_node_group_role.arn
cluster_name = aws_eks_cluster.eks_cluster.name
subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
// Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
// Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.eks-node-group-AmazonEC2ContainerRegistryReadOnly,
aws_iam_role_policy_attachment.eks-node-group-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.eks-node-group-AmazonEKSWorkerNodePolicy,
]
</code></pre>
<p>And here's my kubernetes object yaml:</p>
<pre class="lang-yaml prettyprint-override"><code># hello-kubernetes.yaml
apiVersion: v1
kind: Service
metadata:
name: hello-kubernetes
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
selector:
app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes
spec:
replicas: 3
selector:
matchLabels:
app: hello-kubernetes
template:
metadata:
labels:
app: hello-kubernetes
spec:
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.9
ports:
- containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: hello-ingress
spec:
backend:
serviceName: hello-kubernetes
servicePort: 80
</code></pre>
<p>I've run <code>terraform apply</code> and the cluster is up and running. I've installed <code>eksctl</code> and <code>kubectl</code> and run <code>kubectl apply -f hello-kubernetes.yaml</code>. The pods, service, and ingress appear to be running fine.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-kubernetes-6cb7cd595b-25bd9 1/1 Running 0 6h13m
hello-kubernetes-6cb7cd595b-lccdj 1/1 Running 0 6h13m
hello-kubernetes-6cb7cd595b-snwvr 1/1 Running 0 6h13m
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kubernetes LoadBalancer 172.20.102.37 <pending> 80:32086/TCP 6h15m
$ kubectl get ingresses
NAME CLASS HOSTS ADDRESS PORTS AGE
hello-ingress <none> * 80 3h45m
</code></pre>
<p>What am I missing and which file does it belong in?</p>
| williamcodes | <p>You need to install the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/" rel="nofollow noreferrer">AWS Load Balancer Controller</a> by following the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/deploy/installation/" rel="nofollow noreferrer">installation instructions</a>; first you need to create IAM Role and permissions, this can be done with Terraform; then you need to apply Kubernetes Yaml for installing the controller into your cluster, this can be done with Helm or Kubectl.</p>
<p>You also need to be aware of the <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">subnet tagging</a> that is needed for e.g. creating a public or private facing load balancer.</p>
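<p>As a rough sketch (assuming the controller is installed and the subnets are tagged), your Ingress would then select the ALB through annotations, for example:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing   # create a public-facing ALB
    alb.ingress.kubernetes.io/target-type: ip           # route directly to pod IPs
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80
</code></pre>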
| Jonas |
<p>I'm new to Kubernetes and to supporting a particular website hosted in Kubernetes. I'm trying to figure out why cert-manager did not renew the certificate in the QA environment a few weeks back.</p>
<p>Looking at the details of various certificate-related resources, the problem seems to be that the challenge failed:</p>
<pre><code>State: invalid, Reason: Error accepting authorization: acme: authorization error for [DOMAIN]: 400 urn:ietf:params:acme:error:connection: Fetching http://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING]: Timeout during connect (likely firewall problem)
</code></pre>
<p>I assume that error means Let's Encrypt wasn't able to access the challenge file at http://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING]</p>
<p>(Domain and challenge token string redacted)</p>
<p>I've tried connecting to the URL via PowerShell:</p>
<p><code>PS C:\Users\Simon> invoke-webrequest -uri http://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING] -SkipCertificateCheck</code></p>
<p>and it returns a 200 OK.</p>
<p>However, PowerShell follows redirects automatically and checking with WireShark the Nginx web server is performing a 308 permanent redirect to https://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING]</p>
<p>(same URL but just redirecting HTTP to HTTPS)</p>
<p>I understand that Let's Encrypt should be able to handle HTTP to HTTPS redirects.</p>
<p>Given that the URL Let's Encrypt was trying to reach is accessible from the internet I'm at a loss as to what the next step should be in investigating this issue. Could anyone provide any advice?</p>
<p>Here is the full output of the kubectl cert-manager plugin, checking the status of the certificate and associated resources:</p>
<pre><code>PS C:\Users\Simon> kubectl cert-manager status certificate -n qa containers-tls-secret
Name: containers-tls-secret
Namespace: qa
Created at: 2020-10-16T08:40:14+13:00
Conditions:
Ready: False, Reason: Expired, Message: Certificate expired on Sun, 14 Mar 2021 17:41:12 UTC
Issuing: False, Reason: Failed, Message: The certificate request has failed to complete and will be retried: Failed to wait for order resource "containers-tls-secret-q2cwr-3223066309" to become ready: order is in "invalid" state:
DNS Names:
- [DOMAIN]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 31s (x236 over 9d) cert-manager Renewing certificate as renewal was scheduled at 2021-02-12 17:41:12 +0000 UTC
Normal Reused 31s (x236 over 9d) cert-manager Reusing private key stored in existing Secret resource "containers-tls-secret"
Warning Failed 31s (x236 over 9d) cert-manager The certificate request has failed to complete and will be retried: Failed to wait for order resource "containers-tls-secret-q2cwr-3223066309" to become ready: order is in "invalid" state:
Issuer:
Name: letsencrypt
Kind: ClusterIssuer
Conditions:
Ready: True, Reason: ACMEAccountRegistered, Message: The ACME account was registered with the ACME server
Events: <none>
Secret:
Name: containers-tls-secret
Issuer Country: US
Issuer Organisation: Let's Encrypt
Issuer Common Name: R3
Key Usage: Digital Signature, Key Encipherment
Extended Key Usages: Server Authentication, Client Authentication
Public Key Algorithm: RSA
Signature Algorithm: SHA256-RSA
Subject Key ID: dadf29869b58d05e980c390fdc8783f52369228d
Authority Key ID: 142eb317b75856cbae500940e61faf9d8b14c2c6
Serial Number: 04f7356add94a7909afab94f0847a3457765
Events: <none>
Not Before: 2020-12-15T06:41:12+13:00
Not After: 2021-03-15T06:41:12+13:00
Renewal Time: 2021-02-13T06:41:12+13:00
CertificateRequest:
Name: containers-tls-secret-q2cwr
Namespace: qa
Conditions:
Ready: False, Reason: Failed, Message: Failed to wait for order resource "containers-tls-secret-q2cwr-3223066309" to become ready: order is in "invalid" state:
Events: <none>
Order:
Name: containers-tls-secret-q2cwr-3223066309
State: invalid, Reason:
Authorizations:
URL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/10810339315, Identifier: [DOMAIN], Initial State: pending, Wildcard: false
FailureTime: 2021-02-13T06:41:59+13:00
Challenges:
- Name: containers-tls-secret-q2cwr-3223066309-2302286353, Type: HTTP-01, Token: [CHALLENGE TOKEN STRING], Key: [CHALLENGE TOKEN STRING].8b00cc-ysOWGQ8vtmpOJobWOFa2cEQUe4Sun5NUKCws, State: invalid, Reason: Error accepting authorization: acme: authorization error for [DOMAIN]: 400 urn:ietf:params:acme:error:connection: Fetching http://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING]: Timeout during connect (likely firewall problem), Processing: false, Presented: false
</code></pre>
<p>By the way, the invoke-webrequest results show an HTML page was returned:</p>
<pre><code><!doctype html><html lang="en"><head><meta charset="utf-8"><title>Containers</title><base href="./"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" href="favicon.ico…
</code></pre>
<p>Could that be the issue? I don't know what Let's Encrypt expects to find at the URL of the HTTP01 challenge. Is a web page allowed or is it expecting something different?</p>
<p><strong>EDIT:</strong> I now suspect the HTML page returned by invoke-webrequest is not normal, since I understand the file should include the Let's Encrypt token and a key. Here is the full HTML page:</p>
<pre><code><!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Wineworks</title>
<base href="./">
<meta name="viewport" content="width=device-width,initial-scale=1">
<link rel="icon" href="favicon.ico">
<link rel="apple-touch-icon-precomposed" href="favicon-152.png">
<meta name="msapplication-TileColor" content="#FFFFFF">
<meta name="msapplication-TileImage" content="favicon-152.png">
<script src="https://secure.aadcdn.microsoftonline-p.com/lib/1.0.16/js/adal.min.js"/>
<link href="styles.025a840d59ecfcfe427e.bundle.css" rel="stylesheet"/>
</head>
<body>
<app-root/>
<script type="text/javascript" src="inline.ce954cfcbe723b5986e6.bundle.js"/>
<script type="text/javascript" src="polyfills.7edc676f7558876c179d.bundle.js"/>
<script type="text/javascript" src="main.da3590aac44ee76e7b3a.bundle.js"/>
</body>
</html>
</code></pre>
<p>Any idea what might cause cert-manager to drop the wrong kind of file at the challenge location?</p>
| Simon Elms | <p>In the end I was unable to determine the cause of the certificate renewal failure. However, events on one of the certificate-related resources suggested previous renewals had worked. So I thought it was possible whatever the problem was might have been transient or a one-off, and that trying again to renew the certificate may work.</p>
<p>Reading various articles and blog posts it appeared that deleting the CertificateRequest object would prompt cert-manager to create a new one, which should result in a certificate renewal. Also, deleting the CertificateRequest object would automatically delete the associated ACME Order and Challenge objects as well, so it wouldn't be necessary to delete them manually.</p>
<p>Deleting the CertificateRequest object did work: The certificate was renewed successfully. However, it didn't renew straight away. Further reading suggests it may take an hour for the certificate renewal (I didn't check the exact time it took so can't verify this).</p>
<p>To delete a CertificateRequest:</p>
<pre><code>kubectl delete certificaterequest <certificateRequest name>
</code></pre>
<p>For example:</p>
<pre><code>kubectl delete certificaterequest my-certificate-zrt6p -n qa
</code></pre>
<p>If you wish to force an immediate renewal, rather than waiting an hour, after deleting the CertificateRequest object and cert-manager creating a new one run the following kubectl command, if you have the <strong>kubectl cert-manager plugin</strong> installed:</p>
<pre><code>kubectl cert-manager renew <certificate name>
</code></pre>
<p>For example, to renew certificate my-certificate in namespace qa:</p>
<pre><code>kubectl cert-manager renew my-certificate -n qa
</code></pre>
<p><strong>NOTE:</strong> The easiest way to install the kubectl cert-manager plugin is via the <strong>Krew</strong> plugin manager:</p>
<pre><code>kubectl krew install cert-manager
</code></pre>
<p>See <a href="https://krew.sigs.k8s.io/docs/user-guide/setup/install/" rel="nofollow noreferrer">https://krew.sigs.k8s.io/docs/user-guide/setup/install/</a> for details of how to install Krew (which is useful for all kubectl plugins, not just cert-manager).</p>
<p>One further thing I found from researching this is that sometimes the old certificate secret can get "stuck", preventing a new secret from being created. You can delete the certificate secret to avoid this problem. For example:</p>
<pre><code>kubectl delete secret my-certificate -n qa
</code></pre>
<p>I assume, however, that without a certificate secret your website will have no certificate, which may prevent browsers from accessing it. So I would only delete the existing secret as a last resort.</p>
| Simon Elms |
<p><strong>Setup</strong>:</p>
<ul>
<li>Azure Kubernetes Service</li>
<li>Azure Application Gateway</li>
</ul>
<p>We have a kubernetes cluster in Azure which uses Application Gateway for managing network traffic. We are using appgw over a Load Balancer because we need to handle traffic at layer 7, hence path-based http rules. We use the kubernetes ingress controller for configuring appgw. See the config below.</p>
<p>Now I want a service that accepts requests on both HTTP (layer 7) and TCP (layer 4).</p>
<p>How do I do that? The exposed port should not be public on the internet, but reachable on the Azure network. Do I need to add another Ingress Controller that is not configured to use appgw?</p>
<p>This is what I want to accomplish:
<a href="https://i.stack.imgur.com/oDJVd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oDJVd.png" alt="enter image description here" /></a></p>
<p>This is the config for the ingress controller using appgw:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: service1
labels:
app: service1
annotations:
appgw.ingress.kubernetes.io/backend-path-prefix: /
appgw.ingress.kubernetes.io/use-private-ip: "false"
kubernetes.io/ingress.class: azure/application-gateway
spec:
tls:
- hosts:
secretName: <somesecret>
rules:
- host: <somehost>
http:
paths:
- path: /service1/*
backend:
serviceName: service1
servicePort: http
</code></pre>
<p>Current setup:
<a href="https://i.stack.imgur.com/nn3xt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nn3xt.png" alt="enter image description here" /></a></p>
| Michael | <blockquote>
<p>The exposed port should not be public, but public in the kubernetes cluster.</p>
</blockquote>
<p>I assume that you mean that your application should expose a port for clients within the Kubernetes cluster. You don't have to do anything special in Kubernetes for Pods to do this, they can accept TCP connections on any port. But you may want to create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> of <code>type: ClusterIP</code> for this, so it will be easier for clients, as sketched below.</p>
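<p>A minimal sketch of such a Service (the name, label and port are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
spec:
  type: ClusterIP
  selector:
    app: my-tcp-app     # must match the labels of your pods
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
</code></pre>
<p>Other pods in the cluster can then reach it at <code>my-tcp-service.<namespace>:9000</code>.</p>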
<p>Nothing more than that should be needed.</p>
| Jonas |
<p>I have a cluster in AWS EKS and 1 node group which has 1 node. How do I display nodes and pods using the AWS API? I have credentials for a service account; how do I use these credentials in an API call to get the list of available nodes and pods?
When I try to execute the command <code>kubectl get pods</code> it shows an error:</p>
<blockquote>
<p>An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam:xxxx:user/xx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xx:user/xx</p>
</blockquote>
| dev | <p>You need to do two things before accessing your cluster.</p>
<ol>
<li><p>Add your IAM Roles or Users to the <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">aws-auth ConfigMap</a> to configure who can access the cluster (see the sketch after this list). The IAM role that was used for creating the cluster already has access.</p>
</li>
<li><p>When accessing the cluster, you must authenticate and populate your <code>kubeconfig</code>. This can be done with <a href="https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html#examples" rel="nofollow noreferrer">aws eks update-kubeconfig command</a>:</p>
<p><code>aws eks update-kubeconfig --name <my-cluster-name></code></p>
</li>
</ol>
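<p>A rough sketch of an <code>aws-auth</code> entry that grants an IAM user access (the ARN, username and group below are placeholders; adjust them to your setup):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/my-user   # hypothetical user ARN
      username: my-user
      groups:
        - system:masters                                 # full cluster-admin rights
</code></pre>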
| Jonas |
<p>I am trying to install the <code>aws-encryption-provider</code> following the steps at <a href="https://github.com/kubernetes-sigs/aws-encryption-provider" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/aws-encryption-provider</a>. After I added the <code>--encryption-provider-config=/etc/kubernetes/aws-encryption-provider-config.yaml</code> parameter to <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> the apiserver process did not restart. Nor do I see any error messages.</p>
<p>What technique can I use to see errors created when <code>apiserver</code> starts?</p>
| David Medinets | <p>Realizing that the apiserver is running inside a docker container, I connected to one of my controller nodes using SSH. Then I started a container using the following command to get a shell prompt using the same docker image that apiserver is using.</p>
<pre class="lang-sh prettyprint-override"><code>docker run \
-it \
--rm \
--entrypoint /bin/sh \
--volume /etc/kubernetes:/etc/kubernetes:ro \
--volume /etc/ssl/certs:/etc/ssl/certs:ro \
--volume /etc/pki:/etc/pki:ro \
--volume /etc/pki/ca-trust:/etc/pki/ca-trust:ro \
--volume /etc/pki/tls:/etc/pki/tls:ro \
--volume /etc/ssl/etcd/ssl:/etc/ssl/etcd/ssl:ro \
--volume /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro \
--volume /var/run/kmsplugin:/var/run/kmsplugin \
k8s.gcr.io/kube-apiserver:v1.18.5
</code></pre>
<p>Once inside that container, I could run the same command that is setup in <code>kube-apiserver.yaml</code>. This command was:</p>
<pre class="lang-sh prettyprint-override"><code>kube-apiserver \
--encryption-provider-config=/etc/kubernetes/aws-encryption-provider-config.yaml \
--advertise-address=10.250.203.201 \
...
--service-node-port-range=30000-32767 \
--storage-backend=etcd3 \
</code></pre>
<p>I elided the bulk of the command since you'll need to get specific values from your own <code>kube-apiserver.yaml</code> file.</p>
<p>Using this technique showed me the error message:</p>
<pre><code>Error: error while parsing encryption provider configuration file
"/etc/kubernetes/aws-encryption-provider-config.yaml": error while parsing
file: resources[0].providers[0]: Invalid value:
config.ProviderConfiguration{AESGCM:(*config.AESConfiguration)(nil),
AESCBC:(*config.AESConfiguration)(nil), Secretbox:(*config.SecretboxConfiguration)
(nil), Identity:(*config.IdentityConfiguration)(nil), KMS:(*config.KMSConfiguration)
(nil)}: provider does not contain any of the expected providers: KMS, AESGCM,
AESCBC, Secretbox, Identity
</code></pre>
| David Medinets |
<p>I am trying to create a deployment using its deployment yaml file in minikube.
I have saved the deployment file locally. Please share the minikube kubectl command to create the deployment from the yaml file.</p>
| Sailee Das | <p>Using native <code>kubectl</code> client you do this with the <code>kubectl apply</code> command and pass the <code>--filename</code> flag followed by the name of your yaml-file.</p>
<p>Example:</p>
<pre><code>kubectl apply --filename my-deployment.yaml
</code></pre>
<p>When using <a href="https://minikube.sigs.k8s.io/docs/handbook/kubectl/" rel="nofollow noreferrer">minikube kubectl</a> you prepend kubectl commands with <code>minikube </code> and pass the command name after <code>--</code>, e.g.</p>
<pre><code>minikube kubectl -- apply --filename my-deployment.yaml
</code></pre>
| Jonas |
<p>I see that Kubernetes <code>Job</code> & <code>Deployment</code> provide very similar configuration. Both can deploy one or more pods with certain configuration. So I have few queries around these:</p>
<ul>
<li>Is the pod specification <code>.spec.template</code> different in <code>Job</code> & <code>Deployment</code>?</li>
<li>What is difference in <code>Job</code>'s <code>completions</code> & <code>Deployment</code>'s <code>replicas</code>?</li>
<li>If a command is run in a <code>Deployment</code>'s only container and it completes (no server or daemon process containers), the pod would terminate. The same is applicable in a <code>Job</code> as well. So how is the pod lifecycle different in either of the resources?</li>
</ul>
| Mukund Jalan | <p>Many resources in Kubernetes use a <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates" rel="noreferrer">Pod template</a>. Both <code>Deployments</code> and <code>Jobs</code> use it, because they manage Pods.</p>
<blockquote>
<p>Controllers for workload resources create Pods from a pod template and manage those Pods on your behalf.</p>
</blockquote>
<blockquote>
<p>PodTemplates are specifications for creating Pods, and are included in workload resources such as Deployments, Jobs, and DaemonSets.</p>
</blockquote>
<p>The main difference between <code>Deployments</code> and <code>Jobs</code> is <strong>how they handle a Pod that is terminated</strong>. A Deployment is intended to be a "service", e.g. it should be up-and-running, so it will try to restart the Pods it manage, to match the desired number of replicas. While a Job is intended to execute and successfully terminate.</p>
| Jonas |
<p>In my application I am using <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">c# kubernetes client</a>. I need to use Task.WhenAll but everytime namespaces is different and it fails compiling.</p>
<pre><code> var configMap = await client.ListConfigMapForAllNamespacesAsync(cancellationToken: cancellationToken);
var secret = await client.ListSecretForAllNamespacesAsync(cancellationToken: cancellationToken);
var service = await client.ListServiceForAllNamespacesAsync(cancellationToken: cancellationToken);
var pvc = await client.ListPersistentVolumeClaimForAllNamespacesAsync(
cancellationToken: cancellationToken);
var deployment = await client.ListDeploymentForAllNamespacesAsync(cancellationToken: cancellationToken);
var sts = await client.ListStatefulSetForAllNamespacesAsync(cancellationToken: cancellationToken); // var is V1StatefulSetList
var daemonSet = await client.ListDaemonSetForAllNamespacesAsync(cancellationToken: cancellationToken); // var is V1DaemonSetList (and it is same for appropriate namespaces)
Task.WhenAll(configMap, secret,service,pvc,deployment,sts,daemonSet); // I would like use like this
</code></pre>
<p>Simply put, I would like to call all async actions in parallel, or call them using Task.WhenAll.</p>
| Arzu Suleymanov | <p>By using <code>await</code>, every call is made and completed before the next one runs. So, just remove the awaits and then you have a bunch of <code>Task</code>s which you can await all at once:</p>
<pre><code>var configMap = client.ListConfigMapForAllNamespacesAsync(cancellationToken: cancellationToken);
var secret = client.ListSecretForAllNamespacesAsync(cancellationToken: cancellationToken);
var service = client.ListServiceForAllNamespacesAsync(cancellationToken: cancellationToken);
var pvc = client.ListPersistentVolumeClaimForAllNamespacesAsync( cancellationToken: cancellationToken);
var deployment = client.ListDeploymentForAllNamespacesAsync(cancellationToken: cancellationToken);
var sts = client.ListStatefulSetForAllNamespacesAsync(cancellationToken: cancellationToken); // var is V1StatefulSetList
var daemonSet = client.ListDaemonSetForAllNamespacesAsync(cancellationToken: cancellationToken); // var is V1DaemonSetList (and it is same for appropriate namespaces)
await Task.WhenAll(configMap, secret, service, pvc, deployment, sts, daemonSet);
</code></pre>
<p>by <code>await</code>ing the <code>WhenAll</code> you can wait until all the tasks have completed.</p>
| Jamiec |
<p>For any example, the client-go connects to the kubernetes cluster with the kubeconfig file, but I don't want to do that. I've created a service account and now I have a ServiceAccount token; how do I connect to the kubernetes cluster with this token from outside the kubernetes cluster?</p>
<pre><code>package main
import (
"flag"
"k8s.io/client-go/tools/clientcmd"
"log"
"k8s.io/client-go/kubernetes"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"fmt"
)
var clientset *kubernetes.Clientset
func main() {
k8sconfig := flag.String("k8sconfig","./k8sconfig","kubernetes config file path")
flag.Parse()
config , err := clientcmd.BuildConfigFromFlags("",*k8sconfig)
if err != nil {
log.Println(err)
}
clientset , err = kubernetes.NewForConfig(config)
if err != nil {
log.Fatalln(err)
} else {
fmt.Println("connect k8s success")
}
pods,err := clientset.CoreV1().Pods("").List(metav1.ListOptions{})
if err != nil {
log.Println(err.Error())
}
}
</code></pre>
| yzhengwei | <p>The client-go already has built-in authentication both <strong>In Cluster Authentication</strong> (to be used from a Pod with a ServiceAccount) and also <strong>Out of Cluster Authentication</strong> (to be used from outside the cluster, e.g. for local development)</p>
<p>The client-go has examples of both:</p>
<ul>
<li><a href="https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go#L44-L62" rel="nofollow noreferrer">out of cluster authentication example</a></li>
<li><a href="https://github.com/kubernetes/client-go/blob/master/examples/in-cluster-client-configuration/main.go#L41-L50" rel="nofollow noreferrer">in cluster authentication example</a> - it is using ServiceAccount token</li>
</ul>
<p>The in-cluster example is quite short:</p>
<pre><code> // creates the in-cluster config
config, err := rest.InClusterConfig()
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
</code></pre>
<p>You need to import <code>"k8s.io/client-go/rest"</code></p>
| Jonas |
<p>I'm new to k8s, so this question might be kind of weird, please correct me as necessary.</p>
<p>I have an application which requires a <code>redis</code> database. I know that I should configure it to connect to <code><redis service name>.<namespace></code> and the cluster DNS will get me to the right place, <em>if it exists</em>.</p>
<p>It feels to me like I want to express the relationship between the application and the database. Like I want to say that the application shouldn't be deployable until the database is there and working, and maybe that it's in an error state if the DB goes away. Is that something you'd normally do, and if so - how? I can think of other instances: like with an SQL database you might need to create the tables your app wants to use at init time.</p>
<p>Is the alternative to try to connect early and <code>exit 1</code>, so that the cluster keeps on retrying? Feels like that would work but it's not very declarative.</p>
| Iain Lane | <h2>Design for resiliency</h2>
<p>Modern applications and Kubernetes are (or should be) designed for resiliency. The applications should be designed without <em>single point of failure</em> and be resilient to changes in e.g. network topology. Also see <a href="https://12factor.net/backing-services" rel="nofollow noreferrer">Twelve factor-app: IV. Backing services</a>.</p>
<p>This means that your Redis typically should be a cluster of e.g. 3 instances. It also means that your app should <strong>retry connections</strong> if a connection fails; this can also happen some time after startup, since upgrades of a cluster (or a rolling upgrade of an app) are done by terminating one instance at a time while a new instance is launched. E.g. the instance (of a cluster) that your app <strong>currently is connected to might go away</strong> and your app needs to reconnect, perhaps establishing a connection to a different instance in the same cluster.</p>
<h2>SQL Databases and schemas</h2>
<blockquote>
<p>I can think of other instances: like with an SQL database you might need to create the tables your app wants to use at init time.</p>
</blockquote>
<p>Yes, this is a different case. On Kubernetes your app is typically deployed with at least 2 replicas, or more (for high-availability reasons). You need to consider that when managing schema changes for your app. Common tools to manage the schema are <a href="https://flywaydb.org/" rel="nofollow noreferrer">Flyway</a> or <a href="https://www.liquibase.org/" rel="nofollow noreferrer">Liquibase</a> and they can be run as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Jobs</a>. E.g. first launch a Job to create your DB-tables and after that deploy your app. And after some weeks you might want to change some tables and launch a new Job for this schema migration.</p>
| Jonas |
<p>What api endpoint can I call to get a pod or service's yaml?</p>
<p>The kubectl command to get a pod's yaml is</p>
<blockquote>
<p>kubectl get pod my-pod -o yaml</p>
</blockquote>
<p>but what endpoint does kubectl use to get it?</p>
| Cinnabams | <blockquote>
<p>kubectl get pod my-pod -o yaml</p>
</blockquote>
<blockquote>
<p>but what endpoint does kubectl use to get it?</p>
</blockquote>
<p>If you add <code>-v 7</code> or <code>-v 6</code> to the command, you get verbose logs that show you all the <strong>API requests</strong></p>
<p>Example:</p>
<pre><code>kubectl get pods -v 6
I0816 22:59:03.047132 11794 loader.go:372] Config loaded from file: /Users/jonas/.kube/config
I0816 22:59:03.060115 11794 round_trippers.go:454] GET https://127.0.0.1:52900/api/v1/namespaces/default/pods?limit=500 200 OK in 9 milliseconds
</code></pre>
<p>So you see that it does this API request:</p>
<pre><code>/api/v1/namespaces/default/pods?limit=500
</code></pre>
<p>The API only returns the response in Json and the client can transform to Yaml when using <code>-o yaml</code>.</p>
| Jonas |
<p>I wanted to understand if a sidecar container can send a unix signal to the main container process.</p>
<p>The use-case is I have Nginx running as the main content serving app container and I want the sidecar container to receive Nginx config updates and reload Nginx by sending a signal. These two containers would be running in a single pod.</p>
<p>PS: I don't have an environment to try this out but wanted to check if people have used such a pattern?</p>
| Bhakta Raghavan | <p>You can share process namespace by setting <code>shareProcessNamespace: true</code>.</p>
<p>The Kubernetes documentation has an example where a sidecar sends a <code>SIGHUP</code> to an nginx container in the same pod: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/#configure-a-pod" rel="noreferrer">Share Process Namespace between Containers in a Pod</a>. As shown in the example, you might need to add some capabilities to the container.</p>
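<p>A minimal sketch of such a Pod, closely following the documentation example (the sidecar image and command are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-reloader
spec:
  shareProcessNamespace: true    # containers in this pod see each other's processes
  containers:
  - name: nginx
    image: nginx
  - name: config-reloader
    image: busybox               # hypothetical sidecar image
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE             # allows sending signals to processes of other containers
</code></pre>
<p>From the sidecar you can then find the nginx master process with <code>ps</code> and send it the signal, e.g. <code>kill -HUP <pid></code>, after writing the new config.</p>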
| Jonas |
<p>I’m getting started with Kubernetes, Knative and Gloo. My goal is to deploy a simple http service to a gke cluster. I’ve managed to setup knative, gloo and deploy a healthy service there named <code>backend</code>. Next step is to setup routing <code>/api/v1</code> -> <code>backend</code>. I’ve created a virtualservice named <code>public-api</code>, now I need to add a route. According to docs, I need to run</p>
<pre><code>glooctl add route \
--path-exact /api/v1 \
--dest-name ???dest-name??? \
--prefix-rewrite /
</code></pre>
<p>And I’m confused. I suppose this would be easier if I just installed plain gloo on plain gke. But with Knative I see <em>four</em> upstreams:</p>
<pre><code>| mb-backend-bdtr2-4tdfq-9090 | Kubernetes | Accepted | svc name: |
| | | | backend-bdtr2-4tdfq |
| | | | svc namespace: mb |
| | | | port: 9090 |
| | | | |
| mb-backend-bdtr2-4tdfq-9091 | Kubernetes | Accepted | svc name: |
| | | | backend-bdtr2-4tdfq |
| | | | svc namespace: mb |
| | | | port: 9091 |
| | | | |
| mb-backend-bdtr2-80 | Kubernetes | Accepted | svc name: backend-bdtr2 |
| | | | svc namespace: mb |
| | | | port: 80 |
| | | | |
| mb-backend-bdtr2-zz6t9-80 | Kubernetes | Accepted | svc name: |
| | | | backend-bdtr2-zz6t9 |
| | | | svc namespace: mb |
| | | | port: 80 |
</code></pre>
<p>I have four questions:</p>
<ol>
<li>which one to use? mb-backend-bdtr2-80 or mb-backend-bdtr2-zz6t9-80</li>
<li>why do I have two upstreams with port 80?</li>
<li>what are these upstreams with ports 9090 and 9091?</li>
<li>how can I define more descriptive names? Gloo’s system upstreams are named nicer without any postfix.</li>
</ol>
| Andrey Kuznetsov | <p>Thanks to great community help on solo.io slack, I've got answers.</p>
<ol>
<li>I should route to <code>backend-bdtr2</code>. This value can be obtained by running <code>kubectl get proxy -n gloo-system knative-external-proxy -oyaml</code>.</li>
<li>Two upstreams with 80 port are Knative's placeholder services routing to the original <code>backend</code> service. They will dynamically route to the knative activator when the service needs to be scaled up. Apparently the one upstream is for external and the second is for internal routing (but not sure for now).</li>
<li>Upstreams with ports 9090 and 9091 are knative sidecars.</li>
<li>Names are generated by knative and apparently there is no solution to have descriptive postfix for them right now.</li>
</ol>
| Andrey Kuznetsov |
<p>I have a web application (e.g. <code>"india"</code>) that depends on postgres and redis (e.g. a typical Rails application).</p>
<p>I have a <code>docker-compose.yml</code> file that composes the containers to start this application.</p>
<pre><code>version: '3'
services:
redis-india:
image: redis:5.0.6-alpine
# .....
postgres-india:
image: postgres:11.5-alpine
# ....
india:
depends_on:
- postgres-india
- redis-india
image: india/india:local
# ....
</code></pre>
<p>I'd like to run this application deployment with Kubernetes. I'm trying to figure out how to build the k8s resource objects correctly, and I'm weighing two options:</p>
<ol>
<li><p>I can build <code>india</code>, <code>postgres-india</code>, and <code>redis-india</code> as separate deployments (and therefore separate Services) in k8s</p>
</li>
<li><p>I can build <code>india</code>, <code>postgres-india</code>, and <code>redis-india</code> as a single deployment (and therfore a single <code>pod</code> / <code>service</code>)</p>
</li>
</ol>
<p>#2 makes more sense to me personally - all 3 items here comprise the entire "application service" that should be exposed as a single service URL (i.e. the frontend for the web application).</p>
<p>However, if I use an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/" rel="nofollow noreferrer">automated tool like <code>kompose</code></a> to translate my <code>docker-compose.yml</code> file into k8s resources, it follows approach #1 and creates three individual k8s Services.</p>
<p>Is there a "right way" or standard I should follow?</p>
<p>Thanks!</p>
| user2490003 | <h1>Independent components</h1>
<p>Your three components should run as separate deployments on Kubernetes. You want these three components to be:</p>
<ul>
<li>Independently upgradable and deployable (e.g. you deploy a new version of Redis but not your app or database)</li>
<li>Independently scalable - e.g. you might get many users and want to scale up to multiple instances (e.g. 5 replicas) of your app.</li>
</ul>
<h3>State</h3>
<p>Your app should be designed to be <a href="https://12factor.net/processes" rel="nofollow noreferrer">stateless</a>, and can be deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>. But Redis and PostgreSQL are <em>stateful</em> components and should each be deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.</p>
<h3>Availability</h3>
<p>In a production environment, you typically want to:</p>
<ul>
<li>Avoid downtime when upgrading any application or database</li>
<li>Avoid downtime if you or the cloud provider upgrade the node</li>
<li>Avoid downtime if the node crash, e.g. due to hardware failure or kernel crash.</li>
</ul>
<p>With a <em>stateless</em> app deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>, this is trivial to solve - run at least two instances (replicas) of it - and make sure they are deployed on different nodes in the cluster. You can do this using <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">Topology Spread Constraints</a>.</p>
<p>With a <em>stateful</em> component as e.g. Redis or PostgreSQL, this is more difficult. You typically need to run it as a cluster. See e.g. <a href="https://redis.io/topics/cluster-tutorial" rel="nofollow noreferrer">Redis Cluster</a>. But it is more difficult for PostgreSQL, you could consider a PostgreSQL-compatible db that has a distributed design, e.g. <a href="https://www.cockroachlabs.com/product/sql/" rel="nofollow noreferrer">CockroachDB</a> that is designed to be <a href="https://www.cockroachlabs.com/docs/stable/deploy-cockroachdb-with-kubernetes.html" rel="nofollow noreferrer">run on Kubernetes</a> or perhaps consider <a href="https://access.crunchydata.com/documentation/postgres-operator/v5/" rel="nofollow noreferrer">CrunchyData PostgreSQL Operator</a>.</p>
<h1>Pod with multiple containers</h1>
<p>When you deploy a Pod with <a href="https://kubernetes.io/docs/concepts/workloads/pods/#how-pods-manage-multiple-containers" rel="nofollow noreferrer">multiple containers</a>, one container is the "main" application and the other containers are supposed to be "helper" / "utility" containers to fix a problem for the "main container" - e.g. if your app logs to two different files - you could have helper containers to tail those files and output it to stdout, as is the recommended log output in <a href="https://12factor.net/logs" rel="nofollow noreferrer">Twelve-Factor App</a>. You typically only use "multiple containers" for apps that are not designed to be run on Kubernetes, or if you want to extend with some functionality like e.g. a Service Mesh.</p>
| Jonas |
<p><strong>Current state:</strong> From local command line, after authenticating to the cluster and setting the right context, I am using Kubectl to get list of "Completed" pods and then deleting them using a simple one liner. This works, but we want to automate it.</p>
<p>These are pods, NOT jobs, which are in the "Completed" state. I am aware of TTL settings for Jobs, but I cannot find similar settings for Pods.</p>
<p><strong>Future state:</strong> We want to be able to deploy a pod/cronjob inside a namespace which will just look for "Completed" pods and delete them, without using Kubectl. My understanding is that this would be a security risk if we allow a Pod to have Kubectl access. Correct me if I am wrong. That being said, how could we do it if there is a way?</p>
| shan | <blockquote>
<p>We want to be able to deploy a pod/cronjob inside a namespace which will just look for "Completed" pods and delete them, without using Kubectl.</p>
</blockquote>
<p>This should work perfectly fine. Just make sure that the ServiceAccount has RBAC permissions to delete those pods.</p>
<blockquote>
<p>My understanding is that this would be a security risk if we allow a Pod to have Kubectl access. Correct me if I am wrong.</p>
</blockquote>
<p>This should be fine. But you should practice <em>least privilege</em> and configure the RBAC permissions to only allow just this operation, e.g. only delete Pods, and only within the same namespace.</p>
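<p>A minimal sketch of such least-privilege RBAC (the namespace and ServiceAccount name are placeholders):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cleaner
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cleaner
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cleaner
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-cleaner
subjects:
- kind: ServiceAccount
  name: pod-cleaner
  namespace: my-namespace
</code></pre>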
<blockquote>
<p>That being said, how could we do it if there is a way?</p>
</blockquote>
<p>This should be possible with an image containing <code>kubectl</code> and proper RBAC permissions for the ServiceAccount.</p>
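<p>For example, a CronJob sketch that assumes the <code>pod-cleaner</code> ServiceAccount above and any image that ships <code>kubectl</code> (e.g. <code>bitnami/kubectl</code>); "Completed" pods have the phase <code>Succeeded</code>:</p>
<pre><code>apiVersion: batch/v1   # batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: completed-pod-cleaner
  namespace: my-namespace
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl delete pods --field-selector=status.phase=Succeeded
</code></pre>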
| Jonas |
<p>I am designing a chat like application where I am running 2 pods of same service for scalability.</p>
<p>Now assume user 1 is connected to pod1 through a WebSocket connection and user 2 is connected to pod2. User1 wants to interact with user2, but they are connected to different pods. How do I establish inter-pod communication on K8S? Are there any options to connect using the pod name, or is there another good mechanism for inter-pod communication to exchange messages between different pods of the same service?</p>
| veer vignesh | <blockquote>
<p>How do I establish inter-pod communication on K8S?</p>
</blockquote>
<p>You can do this using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> - it provides a "stable network identity".</p>
<p>But the easiest way to handle this for you is to use some form of "communication hub" using a Pub-Sub protocol. An example is to use <a href="https://redis.io/" rel="nofollow noreferrer">Redis</a>, and both your pods can publish messages to Redis and also subscribe on messages.</p>
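<p>As an illustration with <code>redis-cli</code> (the Service name <code>redis-service</code> and the channel name are made up):</p>
<pre><code># in pod2: subscribe to the channel and receive messages for its users
redis-cli -h redis-service SUBSCRIBE chat

# in pod1: publish a message addressed to user2
redis-cli -h redis-service PUBLISH chat '{"to":"user2","msg":"hello"}'
</code></pre>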
| Jonas |
<p>I'm new to Kubernetes and I'm trying to get a deploy running.</p>
<p>After I push the deploy config, the ReplicaSet is created and it creates the pod. But the pod stays in the <code>Pending</code> state.</p>
<p>The pod has an event listed that it can't be scheduled because there are no nodes available. Output <code>kubectl describe pod foo-qa-1616599440</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 25m default-scheduler 0/6 nodes are available: 6 Insufficient pods.
Warning FailedScheduling 18m default-scheduler 0/6 nodes are available: 6 Insufficient pods.
Warning FailedScheduling 11m default-scheduler 0/6 nodes are available: 6 Insufficient pods.
Warning FailedScheduling 5m18s default-scheduler 0/6 nodes are available: 6 Insufficient pods.
</code></pre>
<p>But there are Nodes available. Output of <code>kubectl get nodes</code>:</p>
<pre><code>NAME STATUS ROLES AGE VERSION
ip-xxx-xx-xx-xx.eu-central-1.compute.internal Ready <none> 64d v1.17.12-eks-7684af
ip-xxx-xx-xx-xx.eu-central-1.compute.internal Ready <none> 64d v1.17.12-eks-7684af
ip-xxx-xx-xx-xx.eu-central-1.compute.internal Ready <none> 54d v1.17.12-eks-7684af
ip-xxx-xx-xx-xx.eu-central-1.compute.internal Ready <none> 64d v1.17.12-eks-7684af
ip-xxx-xx-xx-xx.eu-central-1.compute.internal Ready <none> 54d v1.17.12-eks-7684af
ip-xxx-xx-xx-xx.eu-central-1.compute.internal Ready <none> 64d v1.17.12-eks-7684af
</code></pre>
<p>Another thing that I noticed is that a lot of the same jobs are being created, all with the status <code>Pending</code>. I don't know if this is normal behavior or not, but there are more than 200 of them and counting. Output of <code>kubectl get jobs</code>:</p>
<pre><code>...
cron-foo-qa-1616598720 0/1 17m 17m
cron-foo-qa-1616598780 0/1 16m 16m
cron-foo-qa-1616598840 0/1 15m 15m
cron-foo-qa-1616598900 0/1 14m 14m
cron-foo-qa-1616598960 0/1 13m 13m
cron-foo-qa-1616599020 0/1 12m 12m
cron-foo-qa-1616599080 0/1 11m 11m
cron-foo-qa-1616599200 0/1 9m2s 9m2s
cron-foo-qa-1616599260 0/1 8m4s 8m4s
cron-foo-qa-1616599320 0/1 7m7s 7m7s
cron-foo-qa-1616599380 0/1 6m11s 6m12s
cron-foo-qa-1616599440 0/1 5m1s 5m1s
cron-foo-qa-1616599500 0/1 4m4s 4m4s
cron-foo-qa-1616599560 0/1 3m6s 3m6s
cron-foo-qa-1616599620 0/1 2m10s 2m10s
cron-foo-qa-1616599680 0/1 74s 74s
cron-foo-qa-1616599740 0/1 2s
</code></pre>
<p>If I'm reading it correctly, I do see some scheduling happening when I inspect the events list. Output of <code>kubectl get events --sort-by='.metadata.creationTimestamp'</code>:</p>
<pre><code>...
3s Warning FailedScheduling pod/cron-foobar-prod-1616590260-vwqsk 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-acc-1616590260-j29vx 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-prod-1616569560-g8mn2 0/6 nodes are available: 6 Insufficient pods.
3s Normal Scheduled pod/cron-foobar-acc-1616560380-6x88z Successfully assigned middleware/cron-foobar-acc-1616560380-6x88z to ip-xxx-xxx-xxx-xxx.eu-central-1.compute.internal
3s Warning FailedScheduling pod/cron-foobar-prod-1616596560-hx895 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-prod-1616598180-vwls2 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-qa-1616536260-vh7bl 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-acc-1616571840-68l54 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-qa-1616564760-4wg7l 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-prod-1616571840-7wmlc 0/6 nodes are available: 6 Insufficient pods.
3s Normal Started pod/cron-foobar-prod-1616564700-6gk58 Started container cron
3s Warning FailedScheduling pod/cron-foobar-acc-1616587260-hrcmq 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-qa-1616595720-x5njq 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-acc-1616525820-x5vhr 0/6 nodes are available: 6 Insufficient pods.
3s Warning FailedScheduling pod/cron-foobar-qa-1616558100-x4p96 0/6 nodes are available: 6 Insufficient pods.
</code></pre>
<p>Can someone help me in the right direction?</p>
| Sven van Zoelen | <blockquote>
<p>But the pod stays in the Pending state.</p>
</blockquote>
<blockquote>
<p>The pod has an event listed that it can't be scheduled because there are no nodes available.</p>
</blockquote>
<p>This is as expected if you have reached your capacity. You can check the capacity of any node with:</p>
<pre><code>kubectl describe node <node_name>
</code></pre>
<p>And to get a node name, use:</p>
<pre><code>kubectl get nodes
</code></pre>
<p>To mitigate this, use more nodes, or fewer pods or configure so that the cluster can autoscale when this happens.</p>
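<p>For example, to compare a node's pod capacity with the number of pods already scheduled on it (replace the node name placeholder):</p>
<pre><code># maximum number of pods the node accepts
kubectl get node <node_name> -o jsonpath='{.status.capacity.pods}'

# pods currently scheduled on that node (all namespaces)
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node_name> --no-headers | wc -l
</code></pre>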
| Jonas |
<p>I have the following <code>pv.yaml</code> Kubernetes/Kustomization file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: myapp-common-pv
namespace: myapp
labels:
app.kubernetes.io/name: myapp-common-pv
app.kubernetes.io/component: common-pv
app.kubernetes.io/part-of: myapp
spec:
capacity:
storage: 30Gi
accessModes:
- ReadWriteMany
nfs:
path: /myapp_nfs_share
server: <omitted for security purposes>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myapp-common-pvc
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: myapp-common-pv
resources:
requests:
storage: 30gi
</code></pre>
<p>When I run this I get:</p>
<pre><code>persistentvolume/myapp-common-pv unchanged
Error from server (BadRequest): error when creating "/Users/myuser/workspace/myapp/k8s/pv": PersistentVolumeClaim in version "v1" cannot be handled as a PersistentVolumeClaim: v1.PersistentVolumeClaim.Spec: v1.PersistentVolumeClaimSpec.StorageClassName: Resources: v1.ResourceRequirements.Requests: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|ge":"30gi"}},"storag|..., bigger context ...|teMany"],"resources":{"requests":{"storage":"30gi"}},"storageClassName":"","volumeName":"myapp-common|...
</code></pre>
<p>Above, <code><omitted for security purposes></code> <em>is</em> a valid IP address, I just removed it for...security purposes.</p>
<p>I'm setting <code>storageClassName: ""</code> due to <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">this article explaining why it's necessary</a>.</p>
<p><strong>Can anyone spot what's wrong with my <code>pv.yaml</code> file?</strong> And what I need to do (<em>specifically!</em>) to fix it?</p>
| simplezarg | <blockquote>
<p>quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]<em>[-+]?[0-9]</em>)$', error found in #10 byte of ...|ge":"30gi"}}</p>
</blockquote>
<p>Change</p>
<pre><code>storage: 30gi
</code></pre>
<p>to</p>
<pre><code>storage: 30Gi
</code></pre>
<p>The <code>Gi</code> part must follow the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage" rel="noreferrer">predefined units</a>.</p>
| Jonas |
<p>I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.</p>
<p>I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.</p>
<p>What is actually happening is that from my client's perspective, currently when the upstream server app closes the connection, my connection is closed and I have to reconnect.</p>
<p>The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:</p>
<pre><code>
server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-my-namespace-my-service-7550";
}
listen 7550;
proxy_timeout 600s;
proxy_next_upstream on;
proxy_next_upstream_timeout 600s;
proxy_next_upstream_tries 3;
proxy_pass upstream_balancer;
}
</code></pre>
<p>Any help at all is greatly appreciated and I'm happy to provide more info.</p>
| biscuit_cakes | <p>What you describe is how nginx works out of the box with HTTP. However:</p>
<ol>
<li>Nginx has a detailed understanding of http</li>
<li>HTTP is a message-based protocol, i.e. it uses requests and replies</li>
</ol>
<p>Since nginx knows nothing about the protocol you are using, even if it uses a request/reply mechanism with no implied state, nginx does not know whether it has received a request, nor how to replay it elsewhere.</p>
<p>You need to implement a protocol-aware MITM (man-in-the-middle) proxy.</p>
| symcbean |
<p>I have what I would consider a common use case but I am really struggling to find a solution:</p>
<p>I want to reuse a variable in <code>Kustomize</code> patches in our deployments. Specifically, we are using commit IDs to reference image tags (Use Case A) and k8s Jobs related to the deployments (Use Case B).</p>
<p>We use a setup where for each ArgoCD app we have a <code>/base/</code> folder and <code>/overlays/[environment-name]</code>, this base is patched with a <code>kustomization.yaml</code>.</p>
<h3>Use Case A:</h3>
<p>A very straightforward usage - in <code>/overlays/[environment-name]</code> we have a <code>kustomization.yaml</code> which uses:</p>
<pre><code>images:
- name: our-aws-repo-url
newName: our-aws-repo-url
newTag: commit-id
</code></pre>
<p>Works like a charm since we can re-use this both for the Deployment itself as well as its related Jobs all with one commit reference.</p>
<h3>Use Case B:</h3>
<p>The problem:</p>
<p>We use N Jobs to e.g. do migrations for zero-downtime deployments: we run alembic containers that perform the migration, and we have a <code>waitforit</code> <code>initContainer</code> that waits for the Job to complete, i.e. for the migration to succeed, before the deployment proceeds.</p>
<p>The problem is now that I need to touch 4 files in one service's overlay to patch the id everywhere (which we use to recognize the Job):</p>
<ul>
<li>deployment.yaml like so:</li>
</ul>
<pre><code>- image: groundnuty/k8s-wait-for:v1.4
imagePullPolicy: IfNotPresent
args:
- "job"
- "job-commit-id"
</code></pre>
<ul>
<li>job.yaml itself to change re-trigger of Job for new deployment/potential migration:</li>
</ul>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: job-commit-id
</code></pre>
<ul>
<li>kustomization.yaml as described in Use Case A.</li>
</ul>
<p>What I think should be possible instead is to:</p>
<ol>
<li>define variable <code>commit-id</code> somehow in kustomization.yaml and</li>
<li>for Use Case A & B do something like:</li>
</ol>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: job-${commit-id}
</code></pre>
<pre><code>- image: groundnuty/k8s-wait-for:v1.4
imagePullPolicy: IfNotPresent
args:
- "job"
- "job-${commit-id}"
</code></pre>
<pre><code>images:
- name: our-aws-repo-url
newName: our-aws-repo-url
newTag: ${commit-id}
</code></pre>
<p>Goal: when developers do PRs for releases, they should only touch one reference to the commit ID to prevent typos etc. (also easier to review instead of checking commit ID in N places)</p>
<p>Caveat: I am sure there is also another way to do migrations instead of Jobs but this is generally a common thing: how to re-use a variable inside kustomize.</p>
<p>I know I can reference ENV variable in kustomize but I want to reuse a variable within the manifests.</p>
| tech4242 | <blockquote>
<p>but I want to reuse a variable within the manifests.</p>
</blockquote>
<p>This is not how you typically work with Kustomize. It's a good thing that things are <em>declarative</em> and <em>explicit</em> when working with Kustomize.</p>
<blockquote>
<p>when developers do PRs for releases, they should only touch one reference to the commit ID to prevent typos etc. (also easier to review instead of checking commit ID in N places)</p>
</blockquote>
<p>yes and no.</p>
<p>That there is a change in four places should not be seen as a problem, in my opinion. That there is <em>human toil</em> to update four places <strong>is the problem</strong>.</p>
<p>The solution to human toil is typically <strong>automation</strong>. By using <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer">yq</a> in an automated pipeline (e.g. Jenkins, or a shell script) you can automate your manifest update to take a single parameter - this can optionally be run directly for each build, once you have a git "commit id" available. The pipeline needs to run four <code>yq</code> commands to update the four YAML fields. See e.g. <a href="https://mikefarah.gitbook.io/yq/operators/assign-update" rel="nofollow noreferrer">assign operation</a> and <a href="https://mikefarah.gitbook.io/yq/usage/github-action" rel="nofollow noreferrer">github action - pipeline example</a>. No other variables are needed.</p>
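<p>A sketch of such a pipeline step, assuming yq v4 and illustrative file paths/selectors (adjust them to your overlay layout):</p>
<pre><code># e.g. COMMIT_ID=$(git rev-parse --short HEAD)
export COMMIT_ID=abc1234

# update the image tag in the overlay
yq -i '.images[0].newTag = strenv(COMMIT_ID)' overlays/prod/kustomization.yaml

# update the Job name
yq -i '.metadata.name = "job-" + strenv(COMMIT_ID)' overlays/prod/job.yaml

# update the wait-for argument in the Deployment
yq -i '.spec.template.spec.initContainers[0].args[1] = "job-" + strenv(COMMIT_ID)' overlays/prod/deployment.yaml
</code></pre>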
| Jonas |
<p>I have an app that gets data from a third-party data source; it sends data to my app automatically and I can't filter it, I can only receive it all. When data arrives, my app transmits this data to a rocketmq topic.</p>
<p>Now I have to make this app a container and deploy it in a k8s deployment with 3 replicas. But these pods will all get the same data and send it to the same rocketmq topic.</p>
<p>How do I make this app horizontally scalable without sending duplicate messages to the same rocketmq topic?</p>
| Wayne Chang | <blockquote>
<p>Now I have to make this app a container and deploy it in a k8s deployment with 3 replicas. But these pods will all get the same data and send it to the same rocketmq topic.</p>
</blockquote>
<blockquote>
<p>There is no request. My app connects to a server and it will send data to the app by TCP. Every Pod will connect to that server.</p>
</blockquote>
<p>If you want to do this with more than one instance, they need to coordinate in some way.</p>
<p><a href="https://medium.com/hybrid-cloud-hobbyist/leader-election-architecture-kubernetes-32600da81e3c" rel="nofollow noreferrer">Leader Election pattern</a> is a way to run multiple instances, but only one can be active (e.g. when you read from the same queue). This is a pattern to coordinate - only one instance is active at a time. So this pattern only uses your replicas for <em>higher availability</em>.</p>
<p>If you want all your replicas to actively work, this can be done with techniques like <a href="https://stackoverflow.com/a/58678622/213269">sharding or partitioning</a>. This is also how e.g. <a href="https://www.instaclustr.com/the-power-of-kafka-partitions-how-to-get-the-most-out-of-your-kafka-cluster/" rel="nofollow noreferrer">Kafka</a> (which is similar to a queue) enables <strong>concurrent work</strong> on queues.</p>
<p>There are other ways to solve this problem as well, e.g. to implement some form of locks to coordinate - but partitioning or sharding as in Kafka is probably the most "cloud native" solution.</p>
| Jonas |
<p>I am trying to create a secret using JSON file content and stringData like below, but it gives an error which I am not able to identify after multiple tries.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: image-secret
type: Opaque
stringData:
creds: _json_key:{"type": "service_account","project_id": "xyz","private_key_id": "9b0eb25b41ae9161123dbfh56mgj","private_key": "-----BEGIN PRIVATE KEY-----\nmch0iiFz1DAdM8vQTXiETI+3gvSnknXQ0M5WmkA1dkiJgyhe3r8tpeb42jo4FCd\nbHLf9eeIql8TKEm9BAk+qnQZq8FykWEnQLuU7APrFNZ0qtYP8t1Y7HSGpdVmmCyK\nykJAGznKaiEf9SJiNy8HqJy1kOhajn1fL3CdcShWcY793qRLyeFyrIZ\n6lfnjSE9IW5iEOBmxEpXf5Q=\n-----END PRIVATE KEY-----\n","client_email": "[email protected].","client_id": "113522222222222222222222222","auth_uri": "https://accounts.google.com,"token_uri": "https://oauth.googleap,"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth/v1/certs","client_x509_cert_url": "https://www.googleapis.com/v1"}
</code></pre>
<p>username as <strong>_json_key</strong> and password is <strong>"json file content"</strong></p>
<p>The error which I am getting is as below:</p>
<pre><code>error: error parsing argocd-image-updater-secret.yaml: error converting YAML to JSON: yaml: line 7: mapping values are not allowed in this context
</code></pre>
| gaurav agnihotri | <p>You're getting bitten by a yaml-ism, as <code>yaml2json</code> or <code>yamllint</code> would inform you</p>
<pre><code>Error: Cannot parse as YAML (mapping values are not allowed here
in "<byte string>", line 5, column 28:
creds: _json_key:{"type": "service_account","project_id" ...
^)
</code></pre>
<p>what you'll want is to fold that scalar so the <code>: </code> is clearly character data and not parsed as a yaml key</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
name: image-secret
type: Opaque
stringData:
creds: >-
_json_key:{"type": "service_account","project_id": "xyz","private_key_id": "9b0eb25b41ae9161123dbfh56mgj","private_key": "-----BEGIN PRIVATE KEY-----\nmch0iiFz1DAdM8vQTXiETI+3gvSnknXQ0M5WmkA1dkiJgyhe3r8tpeb42jo4FCd\nbHLf9eeIql8TKEm9BAk+qnQZq8FykWEnQLuU7APrFNZ0qtYP8t1Y7HSGpdVmmCyK\nykJAGznKaiEf9SJiNy8HqJy1kOhajn1fL3CdcShWcY793qRLyeFyrIZ\n6lfnjSE9IW5iEOBmxEpXf5Q=\n-----END PRIVATE KEY-----\n","client_email": "[email protected].","client_id": "113522222222222222222222222","auth_uri": "https://accounts.google.com,"token_uri": "https://oauth.googleap,"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth/v1/certs","client_x509_cert_url": "https://www.googleapis.com/v1"}
</code></pre>
| mdaniel |
<p>I accidentally did: <code>k delete service/kubernetes</code>. It sounds like an essential service... so I would think deleting it would break the Kubernetes cluster, but somehow the service just came back.</p>
<h3>will deleting the service "service/kubernetes" break my kubernetes cluster? if no why?</h3>
<p>related question: what causes the service <code>service/kubernetes</code> to come back automatically?</p>
| Trevor Boyd Smith | <blockquote>
<p>what causes the service service/kubernetes to come back automatically?</p>
</blockquote>
<p>A part of the control plane run controllers, and there is a controller that is responsible for the <code>kubernetes</code> Service. See <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controlplane/controller.go" rel="nofollow noreferrer">controlplane controller</a></p>
| Jonas |
<p>The following command is messy, but works just fine:</p>
<pre class="lang-yaml prettyprint-override"><code>dict(
"resources" (ternary $args.resources (ternary $.Values.machines.micro (ternary $.Values.machines.small (ternary $.Values.machines.medium $.Values.machines.large (eq $args.machine "medium")) (eq $args.machine "small")) (eq $args.machine "micro")) (eq $args.machine nil))
)
</code></pre>
<p>The following 2 fail:</p>
<pre class="lang-yaml prettyprint-override"><code>"resources" (ternary $args.resources (index $.Values.machines $args.machine) (eq $args.machine nil))
"resources" (ternary $args.resources (get $.Values.machines $args.machine) (eq $args.machine nil))
</code></pre>
<p>With 2 respective errors:</p>
<pre><code># index
Error: template: tensor-etl-sol-chart/templates/template_deploys.yaml:65:40: executing "tensor-etl-sol-chart/templates/template_deploys.yaml" at <index $.Values.machines $args.machine>: error calling index: value is nil; should be of type string
# get
Error: template: tensor-etl-sol-chart/templates/template_deploys.yaml:65:67: executing "tensor-etl-sol-chart/templates/template_deploys.yaml" at <$args.machine>: wrong type for value; expected string; got interface {}
</code></pre>
<p>Why? And how to make it work?</p>
<p>Values.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>machines:
micro:
cpu: "250m"
memory: "500Mi"
small:
cpu: "500m"
memory: "1400Mi"
medium:
cpu: "1000m"
memory: "3200Mi"
large:
cpu: "1500m"
memory: "6900Mi"
...
- name: 'inflate-bad-tmeta'
  # machine: small #<--- want to be able to uncomment this line, and should fall back to "resources" on the next line
resources:
cpu: '888m'
memory: '888Mi'
</code></pre>
| ilmoi | <p>As best I can tell, you'd want <code>|default</code> to make the <code>index</code> structure intact when they don't specify a <code>machine:</code> key at all, and then a separate <code>| default</code> for falling back to the inlined <code>resources:</code> block on bogus machine reference (as will be the case for both a missing <code>machine:</code> key as well as one that doesn't exist)</p>
<pre class="lang-yaml prettyprint-override"><code># templates/debug.yaml
data:
example: |
{{ range .Values.things }}
machine {{ .name }} is
{{ (index
$.Values.machines
(.machine | default "nope")
) | default .resources }}
{{ end }}
</code></pre>
<p>then, given</p>
<pre class="lang-yaml prettyprint-override"><code>machines:
micro:
cpu: "250m"
memory: "500Mi"
small:
cpu: "500m"
memory: "1400Mi"
medium:
cpu: "1000m"
memory: "3200Mi"
large:
cpu: "1500m"
memory: "6900Mi"
things:
- name: 'inflate-bad-tmeta'
    # machine: small #<--- want to be able to uncomment this line, and should fall back to "resources" on the next line
resources:
cpu: '888m'
memory: '888Mi'
</code></pre>
<p>it says</p>
<pre><code> machine inflate-bad-tmeta is
map[cpu:888m memory:888Mi]
</code></pre>
<p>but given</p>
<pre class="lang-yaml prettyprint-override"><code>things:
- name: 'inflate-bad-tmeta'
machine: small
resources:
cpu: '888m'
memory: '888Mi'
</code></pre>
<p>it says</p>
<pre><code> machine inflate-bad-tmeta is
map[cpu:500m memory:1400Mi]
</code></pre>
<p>at your discretion whether you'd want to actually <code>fail</code> if <em>both</em> things are set, since it can cause the consumer to think their <code>resources:</code> win</p>
| mdaniel |
<p>Following is a basic k8s setup deployed using the kubeadm tool. When I delete pods like etcd, api-server, scheduler and controller, they are re-created immediately. I am wondering who is really monitoring these pods, as they are not part of a ReplicaSet or Deployment and are just standalone pods.</p>
<pre><code>root@kmaster:~# oc get all -n kube-system <br/>
NAME READY STATUS RESTARTS AGE
pod/calico-kube-controllers-7659fb8886-jfnfq 1/1 Running 7 (120m ago) 3d18h
pod/calico-node-7xkvm 1/1 Running 1 (3d7h ago) 3d18h
pod/calico-node-q8l4d 1/1 Running 54 (120m ago) 3d18h
pod/calico-node-v698m 1/1 Running 51 (119m ago) 3d18h
pod/coredns-78fcd69978-ftmwz 1/1 Running 7 (120m ago) 3d18h
pod/coredns-78fcd69978-kg9r5 1/1 Running 7 (120m ago) 3d18h
pod/etcd-kmaster 1/1 Running 7 (120m ago) 3d18h
pod/kube-apiserver-kmaster 1/1 Running 7 (120m ago) 3d18h
pod/kube-controller-manager-kmaster 1/1 Running 7 (120m ago) 44m
pod/kube-proxy-jcl8n 1/1 Running 1 (3d7h ago) 3d18h
pod/kube-proxy-tg8x9 1/1 Running 7 (120m ago) 3d18h
pod/kube-proxy-x58b8 1/1 Running 7 (119m ago) 3d18h
pod/kube-scheduler-kmaster 1/1 Running 7 (120m ago) 3d18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3d18h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/calico-node 3 3 2 3 2 kubernetes.io/os=linux 3d18h
daemonset.apps/kube-proxy 3 3 2 3 2 kubernetes.io/os=linux 3d18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/calico-kube-controllers 1/1 1 1 3d18h
deployment.apps/coredns 2/2 2 2 3d18h
NAME DESIRED CURRENT READY AGE
replicaset.apps/calico-kube-controllers-7659fb8886 1 1 1 3d18h
replicaset.apps/coredns-78fcd69978 2 2 2 3d18h
root@kmaster:~#
</code></pre>
| Nag Devineni | <p>These pods are typically supervised by the Kubelet, directly on the node.</p>
<blockquote>
<p>Static Pods are always bound to one Kubelet on a specific node. The main use for static Pods is to run a self-hosted control plane: in other words, using the kubelet to supervise the individual control plane components.</p>
</blockquote>
<p>See <a href="https://kubernetes.io/docs/concepts/workloads/pods/#static-pods" rel="nofollow noreferrer">Static Pods</a>.</p>
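<p>On a kubeadm-provisioned control-plane node you can typically see the corresponding manifests on disk; the kubelet watches this directory and re-creates the pods from it whenever they are deleted:</p>
<pre><code>ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
</code></pre>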
| Jonas |
<p>Trying to deploy <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/" rel="nofollow noreferrer">aws-load-balancer-controller</a> on Kubernetes.</p>
<p>I have the following TF code:</p>
<pre><code>resource "kubernetes_deployment" "ingress" {
metadata {
name = "alb-ingress-controller"
namespace = "kube-system"
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
app.kubernetes.io/managed-by = "terraform"
}
}
spec {
replicas = 1
selector {
match_labels = {
app.kubernetes.io/name = "alb-ingress-controller"
}
}
strategy {
type = "Recreate"
}
template {
metadata {
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
}
}
spec {
dns_policy = "ClusterFirst"
restart_policy = "Always"
service_account_name = kubernetes_service_account.ingress.metadata[0].name
termination_grace_period_seconds = 60
container {
name = "alb-ingress-controller"
image = "docker.io/amazon/aws-alb-ingress-controller:v2.2.3"
image_pull_policy = "Always"
args = [
"--ingress-class=alb",
"--cluster-name=${local.k8s[var.env].esk_cluster_name}",
"--aws-vpc-id=${local.k8s[var.env].cluster_vpc}",
"--aws-region=${local.k8s[var.env].region}"
]
volume_mount {
mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
name = kubernetes_service_account.ingress.default_secret_name
read_only = true
}
}
volume {
name = kubernetes_service_account.ingress.default_secret_name
secret {
secret_name = kubernetes_service_account.ingress.default_secret_name
}
}
}
}
}
depends_on = [kubernetes_cluster_role_binding.ingress]
}
resource "kubernetes_ingress" "app" {
metadata {
name = "owncloud-lb"
namespace = "fargate-node"
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "ip"
}
labels = {
"app" = "owncloud"
}
}
spec {
backend {
service_name = "owncloud-service"
service_port = 80
}
rule {
http {
path {
path = "/"
backend {
service_name = "owncloud-service"
service_port = 80
}
}
}
}
}
depends_on = [kubernetes_service.app]
}
</code></pre>
<p>This works up to version <code>1.9</code> as required. As soon as I upgrade to version <code>2.2.3</code> the pod fails to update and on the pod get the following error:<code>{"level":"error","ts":1629207071.4385357,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}</code></p>
<p>I have read the upgrade docs and have amended the IAM policy as they state, but they also mention:</p>
<blockquote>
<p>updating the TargetGroupBinding CRDs</p>
</blockquote>
<p>And that is where I am not sure how to do it using Terraform.</p>
<p>If I try to deploy on a new cluster (i.e. not an upgrade from 1.9), I get the same error.</p>
| alexis | <p>With your Terraform code, you apply an <code>Deployment</code> and an <code>Ingress</code> resource, but you must also add the <code>CustomResourceDefinitions</code> for the <code>TargetGroupBinding</code> custom resource.</p>
<p>This is described under "Add Controller to Cluster" in the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/" rel="noreferrer">Load Balancer Controller installation documentation</a> - with examples for Helm and Kubernetes Yaml provided.</p>
<p>Terraform has <a href="https://www.hashicorp.com/blog/beta-support-for-crds-in-the-terraform-provider-for-kubernetes" rel="noreferrer">beta support for applying CRDs</a> including an <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/manifest#example-create-a-kubernetes-custom-resource-definition" rel="noreferrer">example of deploying CustomResourceDefinition</a>.</p>
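<p>As a sketch based on the installation documentation linked above (verify the exact path and ref for your controller version), the CRDs can be applied from the upstream eks-charts repository before the controller is created:</p>
<pre><code>kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
</code></pre>
<p>If you want to keep everything in Terraform, the same CRD manifests can instead be applied with the provider's <code>kubernetes_manifest</code> resource mentioned above.</p>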
| Jonas |
<p>Consider the scenario where all of the nodes are fully utilized, there is a user's pod in the scheduler queue that has a higher priority, and there are no more Best Effort or Burstable pods left, only Guaranteed pods with lower priority. Can that Guaranteed pod be evicted to make space for the higher-priority one?</p>
| woj.sierak | <p>From <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#interactions-of-pod-priority-and-qos" rel="nofollow noreferrer">Interactions between Pod priority and quality of service</a></p>
<blockquote>
<p>The scheduler's preemption logic does not consider QoS when choosing preemption targets. Preemption considers Pod priority and attempts to choose a set of targets with the lowest priority.</p>
</blockquote>
<p>So a Pod with QoS "Guaranteed" may be evicted if a higher priority Pod gets scheduled.</p>
| Jonas |
<p>I am creating a Helm Chart and I am having problems when it comes to importing files:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: vcl-template
namespace: {{.Release.Namespace}}
data:
{{- (.Files.Glob "config/varnish/default.vcl.tmpl").AsConfig | nindent 2 }}
{{- (.Files.Glob "config/varnish/nginx.conf").AsConfig | nindent 2 }}
</code></pre>
<p>This imports the file <code>config/varnish/nginx.conf</code> just fine, but the file <code>config/varnish/default.vcl.tmpl</code> is imported with <code>\n</code> instead of newlines, so the data in the ConfigMap gets garbled:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: vcl-template
namespace: default
data:
default.vcl.tmpl: "vcl 4.0;\n\nimport std;\nimport directors;\n\n{{ range .Frontends
}}\nbackend {{ .Name }} {\n .host = \"{{ .Host }}\";\n .port = \"{{ .Port
}}\";\n}\n{{- end }}\n\n{{ range .Backends }}\nbackend be-{{ .Name }} {\n .host
= \"{{ .Host }}\";\n .port = \"{{ .Port }}\";\n}\n{{- end }}\n\nacl purge {\n
\ \"127.0.0.1\";\n \"localhost\";\n \"::1\";\n {{- range .Frontends }}\n
\ \"{{ .Host }}\";\n {{- end }}\n {{- range .Backends }}\n \"{{ .Host
}}\";\n {{- end }}\n}\n\nsub vcl_init {\n new cluster = directors.hash();\n\n
\ {{ range .Frontends -}}\n cluster.add_backend({{ .Name }}, 1);\n {{ end
}}\n\n new lb = directors.round_robin();\n\n {{ range .Backends -}}\n lb.add_backend(be-{{
.Name }});\n {{ end }}\n}\n\nsub vcl_recv {\n\n unset req.http.x-cache;\n
\ set req.backend_hint = cluster.backend(req.url);\n set req.http.x-shard =
req.backend_hint;\n if (req.http.x-shard != server.identity) {\n return(pass);\n
\ }\n set req.backend_hint = lb.backend();\n\n if (req.method == \"PURGE\")
{\n if (client.ip !~ purge) {\n return (synth(405, \"Method not
allowed\"));\n }\n # To use the X-Pool header for purging varnish
during automated deployments, make sure the X-Pool header\n # has been added
to the response in your backend server config. This is used, for example, by the\n
\ # capistrano-magento2 gem for purging old content from varnish during it's
deploy routine.\n if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool)
{\n return (synth(400, \"X-Magento-Tags-Pattern or X-Pool header required\"));\n
\ }\n if (req.http.X-Magento-Tags-Pattern) {\n ban(\"obj.http.X-Magento-Tags
~ \" + req.http.X-Magento-Tags-Pattern);\n }\n if (req.http.X-Pool)
{\n ban(\"obj.http.X-Pool ~ \" + req.http.X-Pool);\n }\n return
(synth(200, \"Purged\"));\n }\n\n if (req.method != \"GET\" &&\n req.method
!= \"HEAD\" &&\n req.method != \"PUT\" &&\n req.method != \"POST\"
&&\n req.method != \"TRACE\" &&\n req.method != \"OPTIONS\" &&\n req.method
!= \"DELETE\") {\n /* Non-RFC2616 or CONNECT which is weird. */\n return
(pipe);\n }\n\n # We only deal with GET and HEAD by default\n if (req.method
!= \"GET\" && req.method != \"HEAD\") {\n return (pass);\n }\n\n #
Bypass shopping cart, checkout and search requests\n if (req.url ~ \"/checkout\"
|| req.url ~ \"/catalogsearch\") {\n return (pass);\n }\n\n # Bypass
admin\n if (req.url ~ \"^/admin($|/.*)\") {\n return (pass);\n }\n\n
\ # Bypass health check requests\n if (req.url ~ \"/pub/health_check.php\")
{\n return (pass);\n }\n\n # Set initial grace period usage status\n
\ set req.http.grace = \"none\";\n\n # normalize url in case of leading HTTP
scheme and domain\n set req.url = regsub(req.url, \"^http[s]?://\", \"\");\n\n
\ # collect all cookies\n std.collect(req.http.Cookie);\n\n # Compression
filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression\n if (req.http.Accept-Encoding)
{\n if (req.url ~ \"\\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$\")
{\n # No point in compressing these\n unset req.http.Accept-Encoding;\n
\ } elsif (req.http.Accept-Encoding ~ \"gzip\") {\n set req.http.Accept-Encoding
= \"gzip\";\n } elsif (req.http.Accept-Encoding ~ \"deflate\" && req.http.user-agent
!~ \"MSIE\") {\n set req.http.Accept-Encoding = \"deflate\";\n }
else {\n # unknown algorithm\n unset req.http.Accept-Encoding;\n
\ }\n }\n\n # Remove all marketing get parameters to minimize the cache
objects\n if (req.url ~ \"(\\?|&)(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=\")
{\n set req.url = regsuball(req.url, \"(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&?\",
\"\");\n set req.url = regsub(req.url, \"[?|&]+$\", \"\");\n }\n\n #
Static files caching\n if (req.url ~ \"^/(pub/)?(media|static)/\") {\n return
(pass);\n }\n\n return (hash);\n}\n\nsub vcl_hash {\n if (req.http.cookie
~ \"X-Magento-Vary=\") {\n hash_data(regsub(req.http.cookie, \"^.*?X-Magento-Vary=([^;]+);*.*$\",
\"\\1\"));\n }\n\n # For multi site configurations to not cache each other's
content\n if (req.http.host) {\n hash_data(req.http.host);\n } else
{\n hash_data(server.ip);\n }\n\n if (req.url ~ \"/graphql\") {\n call
process_graphql_headers;\n }\n\n # To make sure http users don't see ssl warning\n
\ if (req.http.X-Forwarded-Proto) {\n hash_data(req.http.X-Forwarded-Proto);\n
\ }\n \n}\n\nsub process_graphql_headers {\n if (req.http.Store) {\n hash_data(req.http.Store);\n
\ }\n if (req.http.Content-Currency) {\n hash_data(req.http.Content-Currency);\n
\ }\n}\n\nsub vcl_backend_response {\n\n set beresp.grace = 3d;\n\n if (beresp.http.content-type
~ \"text\") {\n set beresp.do_esi = true;\n }\n\n if (bereq.url ~ \"\\.js$\"
|| beresp.http.content-type ~ \"text\") {\n set beresp.do_gzip = true;\n
\ }\n\n if (beresp.http.X-Magento-Debug) {\n set beresp.http.X-Magento-Cache-Control
= beresp.http.Cache-Control;\n }\n\n # cache only successfully responses and
404s\n if (beresp.status != 200 && beresp.status != 404) {\n set beresp.ttl
= 0s;\n set beresp.uncacheable = true;\n return (deliver);\n }
elsif (beresp.http.Cache-Control ~ \"private\") {\n set beresp.uncacheable
= true;\n set beresp.ttl = 86400s;\n return (deliver);\n }\n\n
\ # validate if we need to cache it and prevent from setting cookie\n if (beresp.ttl
> 0s && (bereq.method == \"GET\" || bereq.method == \"HEAD\")) {\n unset
beresp.http.set-cookie;\n }\n\n # If page is not cacheable then bypass varnish
for 2 minutes as Hit-For-Pass\n if (beresp.ttl <= 0s ||\n beresp.http.Surrogate-control
~ \"no-store\" ||\n (!beresp.http.Surrogate-Control &&\n beresp.http.Cache-Control
~ \"no-cache|no-store\") ||\n beresp.http.Vary == \"*\") {\n # Mark
as Hit-For-Pass for the next 2 minutes\n set beresp.ttl = 120s;\n set
beresp.uncacheable = true;\n }\n\n return (deliver);\n}\n\nsub vcl_deliver
{\n if (resp.http.X-Magento-Debug) {\n if (resp.http.x-varnish ~ \" \")
{\n set resp.http.X-Magento-Cache-Debug = \"HIT\";\n set resp.http.Grace
= req.http.grace;\n } else {\n set resp.http.X-Magento-Cache-Debug
= \"MISS\";\n }\n } else {\n unset resp.http.Age;\n }\n\n #
Not letting browser to cache non-static files.\n if (resp.http.Cache-Control
!~ \"private\" && req.url !~ \"^/(pub/)?(media|static)/\") {\n set resp.http.Pragma
= \"no-cache\";\n set resp.http.Expires = \"-1\";\n set resp.http.Cache-Control
= \"no-store, no-cache, must-revalidate, max-age=0\";\n }\n\n unset resp.http.X-Magento-Debug;\n
\ unset resp.http.X-Magento-Tags;\n unset resp.http.X-Powered-By;\n unset
resp.http.Server;\n unset resp.http.X-Varnish;\n unset resp.http.Via;\n unset
resp.http.Link;\n}\n\nsub vcl_hit {\n if (obj.ttl >= 0s) {\n # Hit within
TTL period\n return (deliver);\n }\n if (std.healthy(req.backend_hint))
{\n if (obj.ttl + 300s > 0s) {\n # Hit after TTL expiration, but
within grace period\n set req.http.grace = \"normal (healthy server)\";\n
\ return (deliver);\n } else {\n # Hit after TTL and
grace expiration\n return (miss);\n }\n } else {\n #
server is not healthy, retrieve from cache\n set req.http.grace = \"unlimited
(unhealthy server)\";\n return (deliver);\n }\n}\n"
nginx.conf: |
worker_processes auto;
events {
worker_connections 1024;
}
pcre_jit on;
error_log /var/log/nginx/error.log warn;
include /etc/nginx/modules/*.conf;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
client_max_body_size 15m;
keepalive_timeout 30;
sendfile on;
tcp_nodelay on;
gzip_vary on;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
include /etc/nginx/conf.d/*.conf;
}
</code></pre>
<p><code>nginx.conf</code>:</p>
<pre><code>worker_processes auto;
events {
worker_connections 1024;
}
pcre_jit on;
error_log /var/log/nginx/error.log warn;
include /etc/nginx/modules/*.conf;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
client_max_body_size 15m;
keepalive_timeout 30;
sendfile on;
tcp_nodelay on;
gzip_vary on;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
include /etc/nginx/conf.d/*.conf;
}
</code></pre>
<p><code>default.vcl.tmpl</code>:</p>
<pre><code>vcl 4.0;
import std;
import directors;
{{ range .Frontends }}
backend {{ .Name }} {
.host = "{{ .Host }}";
.port = "{{ .Port }}";
}
{{- end }}
{{ range .Backends }}
backend be-{{ .Name }} {
.host = "{{ .Host }}";
.port = "{{ .Port }}";
}
{{- end }}
acl purge {
"127.0.0.1";
"localhost";
"::1";
{{- range .Frontends }}
"{{ .Host }}";
{{- end }}
{{- range .Backends }}
"{{ .Host }}";
{{- end }}
}
sub vcl_init {
new cluster = directors.hash();
{{ range .Frontends -}}
cluster.add_backend({{ .Name }}, 1);
{{ end }}
new lb = directors.round_robin();
{{ range .Backends -}}
lb.add_backend(be-{{ .Name }});
{{ end }}
}
sub vcl_recv {
unset req.http.x-cache;
set req.backend_hint = cluster.backend(req.url);
set req.http.x-shard = req.backend_hint;
if (req.http.x-shard != server.identity) {
return(pass);
}
set req.backend_hint = lb.backend();
if (req.method == "PURGE") {
if (client.ip !~ purge) {
return (synth(405, "Method not allowed"));
}
# To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
# has been added to the response in your backend server config. This is used, for example, by the
# capistrano-magento2 gem for purging old content from varnish during it's deploy routine.
if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
}
if (req.http.X-Magento-Tags-Pattern) {
ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
}
if (req.http.X-Pool) {
ban("obj.http.X-Pool ~ " + req.http.X-Pool);
}
return (synth(200, "Purged"));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
# We only deal with GET and HEAD by default
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Bypass shopping cart, checkout and search requests
if (req.url ~ "/checkout" || req.url ~ "/catalogsearch") {
return (pass);
}
# Bypass admin
if (req.url ~ "^/admin($|/.*)") {
return (pass);
}
# Bypass health check requests
if (req.url ~ "/pub/health_check.php") {
return (pass);
}
# Set initial grace period usage status
set req.http.grace = "none";
# normalize url in case of leading HTTP scheme and domain
set req.url = regsub(req.url, "^http[s]?://", "");
# collect all cookies
std.collect(req.http.Cookie);
# Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
# No point in compressing these
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm
unset req.http.Accept-Encoding;
}
}
# Remove all marketing get parameters to minimize the cache objects
if (req.url ~ "(\?|&)(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=") {
set req.url = regsuball(req.url, "(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&?", "");
set req.url = regsub(req.url, "[?|&]+$", "");
}
# Static files caching
if (req.url ~ "^/(pub/)?(media|static)/") {
return (pass);
}
return (hash);
}
sub vcl_hash {
if (req.http.cookie ~ "X-Magento-Vary=") {
hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
}
# For multi site configurations to not cache each other's content
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
if (req.url ~ "/graphql") {
call process_graphql_headers;
}
# To make sure http users don't see ssl warning
if (req.http.X-Forwarded-Proto) {
hash_data(req.http.X-Forwarded-Proto);
}
}
sub process_graphql_headers {
if (req.http.Store) {
hash_data(req.http.Store);
}
if (req.http.Content-Currency) {
hash_data(req.http.Content-Currency);
}
}
sub vcl_backend_response {
set beresp.grace = 3d;
if (beresp.http.content-type ~ "text") {
set beresp.do_esi = true;
}
if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
set beresp.do_gzip = true;
}
if (beresp.http.X-Magento-Debug) {
set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
}
# cache only successfully responses and 404s
if (beresp.status != 200 && beresp.status != 404) {
set beresp.ttl = 0s;
set beresp.uncacheable = true;
return (deliver);
} elsif (beresp.http.Cache-Control ~ "private") {
set beresp.uncacheable = true;
set beresp.ttl = 86400s;
return (deliver);
}
# validate if we need to cache it and prevent from setting cookie
if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
unset beresp.http.set-cookie;
}
# If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
if (beresp.ttl <= 0s ||
beresp.http.Surrogate-control ~ "no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "no-cache|no-store") ||
beresp.http.Vary == "*") {
# Mark as Hit-For-Pass for the next 2 minutes
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
sub vcl_deliver {
if (resp.http.X-Magento-Debug) {
if (resp.http.x-varnish ~ " ") {
set resp.http.X-Magento-Cache-Debug = "HIT";
set resp.http.Grace = req.http.grace;
} else {
set resp.http.X-Magento-Cache-Debug = "MISS";
}
} else {
unset resp.http.Age;
}
# Not letting browser to cache non-static files.
if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(pub/)?(media|static)/") {
set resp.http.Pragma = "no-cache";
set resp.http.Expires = "-1";
set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
}
unset resp.http.X-Magento-Debug;
unset resp.http.X-Magento-Tags;
unset resp.http.X-Powered-By;
unset resp.http.Server;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
}
sub vcl_hit {
if (obj.ttl >= 0s) {
# Hit within TTL period
return (deliver);
}
if (std.healthy(req.backend_hint)) {
if (obj.ttl + 300s > 0s) {
# Hit after TTL expiration, but within grace period
set req.http.grace = "normal (healthy server)";
return (deliver);
} else {
# Hit after TTL and grace expiration
return (miss);
}
} else {
# server is not healthy, retrieve from cache
set req.http.grace = "unlimited (unhealthy server)";
return (deliver);
}
}
</code></pre>
<p>How come the second file is not imported correctly? I am using the latest Helm and Go versions.</p>
<p>Does anyone have any ideas? The encoding of both files in VS Code shows as UTF-8.</p>
| Rafael Moreira | <p>They're actually equivalent from YAML's PoV, just not as pretty, but most important for your specific case it's because yaml cannot represent <strong>trailing</strong> whitespace without quoting it, which is what it did due to line 164 of your .tmpl file, as seen by the <code>\n \n</code> in:</p>
<pre class="lang-yaml prettyprint-override"><code> \ }\n \n}\n\nsub process_graphql_headers {\n if (req.http.Store) {\n hash_data(req.http.Store);\n
</code></pre>
<pre><code>$ sed -ne 164p default.vcl.tmpl | xxd
00000000: 2020 2020 0a .
</code></pre>
<p>turning on "strip trailing whitespace" in your editor will help that, or for this specific case you can just fix line 164</p>
| mdaniel |
<p>From this page: <a href="https://www.pingidentity.com/en/company/blog/posts/2019/jwt-security-nobody-talks-about.html" rel="noreferrer">https://www.pingidentity.com/en/company/blog/posts/2019/jwt-security-nobody-talks-about.html</a>:</p>
<blockquote>
<p>The fourth security-relevant reserved claim is "iss." This claim indicates the identity > of the party that issued the JWT. The claim holds a simple string, of which the value is > at the discretion of the issuer. The consumer of a JWT should always check that the > "iss" claim matches the expected issuer (e.g., sso.example.com).</p>
</blockquote>
<p>As an example, in Kubernetes when I configure the kubernetes auth like this for using a JWT for a vault service account (from helm), I no longer get an ISS error when accessing the vault:</p>
<pre class="lang-sh prettyprint-override"><code>vault write auth/kubernetes/config \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
issuer="https://kubernetes.default.svc.cluster.local"
</code></pre>
<p>But what does this URL mean? Is it a somewhat arbitrary string that was set when the JWT was generated?</p>
| Aaron | <p>The <em>JWT token issuer</em> is the <strong>party</strong> that "created" the token and signed it with its private key.</p>
<p>Anyone can create tokens; make sure that the tokens you receive are created by a party that you trust.</p>
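<p>As a sketch (assuming a cluster with ServiceAccount issuer discovery enabled and <code>jq</code> installed), you can check which issuer your cluster puts into its tokens with:</p>
<pre><code>kubectl get --raw /.well-known/openid-configuration | jq -r .issuer
</code></pre>
<p>The value returned here is typically what should go into the <code>issuer=</code> parameter of the Vault Kubernetes auth config.</p>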
| Jonas |
<p>On page 67 of <a href="https://www.oreilly.com/library/view/kubernetes-up-and/9781492046523/" rel="nofollow noreferrer">Kubernetes: Up and Running, 2nd Edition</a>, the author uses the command below in order to create a <code>Deployment</code>:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run alpaca-prod \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--replicas=2 \
--labels="ver=1,app=alpaca,env=prod"
</code></pre>
<p>However this command is deprecated with kubectl 1.19+, and it now creates a pod:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl run alpaca-prod \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--replicas=2 \
--labels="ver=1,app=alpaca,env=prod"
Flag --replicas has been deprecated, has no effect and will be removed in the future.
pod/alpaca-prod created
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Is there a way to use <code>kubectl run</code> to create a deployment with replicas and custom label with <code>kubectl</code> 1.19+?</p>
| Fabrice Jammes | <p>It is now preferred to use <code>kubectl create</code> to create a new <code>Deployment</code>, instead of <code>kubectl run</code>.</p>
<p>This is the corresponsing command to your <code>kubectl run</code></p>
<pre><code>kubectl create deployment alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=2
</code></pre>
<h3>Labels</h3>
<p>By default, from <code>kubectl create deployment alpaca-prod</code> you will get the label <code>app=alpaca</code>.</p>
<p>To get the other labels, you need to add them later. Use <code>kubectl label</code> to add labels to the <code>Deployment</code>, e.g.</p>
<pre><code>kubectl label deployment alpaca-prod ver=1
</code></pre>
<p><strong>Note:</strong> this only adds the label to the <code>Deployment</code> and <strong>not</strong> to the Pod-template, e.g. the Pods will not get the label. To also add the label to the pods, you need to edit the <code>template:</code> part of the Deployment-yaml.</p>
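<p>As a sketch, the Deployment yaml with the labels added to both the Deployment and the Pod template could look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpaca-prod
  labels:
    app: alpaca
    ver: "1"
    env: prod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: alpaca
  template:
    metadata:
      labels:
        app: alpaca
        ver: "1"
        env: prod
    spec:
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:blue
</code></pre>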
| Jonas |
<p>I am currently attempting to use the lookup function via Helm 3.1 to load a variable during installation.</p>
<pre><code>{{ $ingress := (lookup "v1" "Ingress" "mynamespace" "ingressname").status.loadBalancer.ingress[0].hostname }}
</code></pre>
<p>Of course, this returns, "bad character [." If I remove it, it returns "nil pointer evaluating interface {}.loadBalancer".</p>
<p>Is what I am attempting to do even possible?</p>
<p>Thanks</p>
| CriticalFoxes | <p>You are attempting to use "normal" array indexing syntax, but helm charts use "golang templates" and thus array indexing is done via <a href="https://helm.sh/docs/chart_template_guide/function_list/#index" rel="nofollow noreferrer">the <code>index</code> function</a></p>
<pre class="lang-yaml prettyprint-override"><code>{{ $ingress := (index (lookup "v1" "Ingress" "mynamespace" "ingressname").status.loadBalancer.ingress 0).hostname }}
</code></pre>
<hr />
<p>after further thought, I can easily imagine that <code>nil</code> pointer error happening during <code>helm template</code> runs, since <a href="https://github.com/helm/helm/issues/9309#issuecomment-771579215" rel="nofollow noreferrer"><code>lookup</code> returns <code>map[]</code> when running offline</a></p>
<p>In that case, you'd want to use the <code>index</code> function for <strong>every</strong> path navigation:</p>
<pre class="lang-yaml prettyprint-override"><code>{{ $ingress := (index (index (index (index (index (lookup "v1" "Ingress" "mynamespace" "ingressname") "status") "loadBalancer") "ingress") 0) "hostname") }}
</code></pre>
<p>or, assert the lookup is in "offline" mode and work around it:</p>
<pre class="lang-yaml prettyprint-override"><code> {{ $ingress := "fake.example.com" }}
{{ $maybeLookup := (lookup "v1" "Ingress" "mynamespace" "ingressname") }}
{{ if $maybeLookup }}
{{ $ingress = (index $maybeLookup.status.loadBalancer.ingress 0).hostname }}
{{ end }}
</code></pre>
| mdaniel |
<p>I am getting <code>unknown image flag</code> when creating a deployment using <code>minikube</code> on <code>windows 10</code> <code>cmd</code>. Why?</p>
<pre><code>C:\WINDOWS\system32>minikube kubectl create deployment nginxdepl --image=nginx
Error: unknown flag: --image
See 'minikube kubectl --help' for usage.
C:\WINDOWS\system32>
</code></pre>
| Manu Chadha | <p>When using <a href="https://minikube.sigs.k8s.io/docs/handbook/kubectl/" rel="noreferrer">kubectl bundled with minikube</a> the command is little different.</p>
<p>From the <a href="https://minikube.sigs.k8s.io/docs/handbook/kubectl/" rel="noreferrer">documentation</a>, your command should be:</p>
<pre><code>minikube kubectl -- create deployment nginxdepl --image=nginx
</code></pre>
<p>The difference is the <code>--</code> right after <code>kubectl</code></p>
| Jonas |
<p>I have a project where we are consuming data from kafka and publishing to mongo. In fact the codebase does only one task; it may be a mongo-to-kafka migration, a kafka-to-mongo migration, or something else.</p>
<p>We have to consume from different kafka topics and publish to different mongo collections. These are parallel streams of work.</p>
<p>The current design is to have one codebase which can consume from any topic and publish to any mongo collection, configurable using environment variables. So we created one Kubernetes Pod with multiple containers inside it; each container has different environment variables.</p>
<p>My questions:</p>
<ol>
<li>Is it wise to use multiple containers in one pod? It is easy to distinguish them, but as they are tightly coupled, I am guessing there is a high chance of failure and it is not really a proper microservice design.</li>
<li>Should I create multiple deployments, one for each of these pipelines? That would be very difficult to maintain, as each will have different deployment configs.</li>
<li>Is there any better way to address this?</li>
</ol>
<p>Sample of step 1:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
name: test-raw-mongodb-sink-apps
namespace: test-apps
spec:
selector:
matchLabels:
app: test-raw-mongodb-sink-apps
template:
metadata:
labels:
app: test-raw-mongodb-sink-apps
spec:
containers:
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-alchemy
- name: INPUT_TOPIC
value: test.raw.ptv.alchemy
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8081"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/dpl/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-alchemy
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-bloomberg
- name: INPUT_TOPIC
value: test.raw.pretrade.bloomberg
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8082"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-bloomberg
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-calypso
- name: INPUT_TOPIC
value: test.raw.ptv.calypso
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8083"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-calypso
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-dtres
- name: INPUT_TOPIC
value: test.raw.ptv.dtres
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8084"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-dtres
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-feds
- name: INPUT_TOPIC
value: test.raw.ptv.feds
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8085"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-feds
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-hoops
- name: INPUT_TOPIC
value: test.raw.ptv.hoops
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8086"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-hoops
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxcore
- name: INPUT_TOPIC
value: test.raw.ptv.murex_core
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8087"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxcore
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxeqd
- name: INPUT_TOPIC
value: test.raw.ptv.murex_eqd_sa
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8088"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxeqd
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxgts
- name: INPUT_TOPIC
value: test.raw.ptv.murex_gts_sa
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8089"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxgts
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxmr
- name: INPUT_TOPIC
value: test.raw.ptv.murex_mr
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8090"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxmr
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxgtscf
- name: INPUT_TOPIC
value: test.raw.cashflow.murex_gts_sa
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8091"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxgtscf
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxcoll
- name: INPUT_TOPIC
value: test.raw.collateral.mxcoll
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8092"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxcoll
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxcoll-link
- name: INPUT_TOPIC
value: test.raw.collateral.mxcoll_link
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8093"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxcoll-link
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-ost
- name: INPUT_TOPIC
value: test.raw.ptv.ost
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8094"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-ost
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-posmon
- name: INPUT_TOPIC
value: test.raw.ptp.posmon
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8095"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-posmon
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
</code></pre>
<p>Thanks</p>
| spattanaik75 | <blockquote>
<p>Is it wise to use multiple containers in one pod? It is easy to distinguish them, but as they are tightly coupled, I am guessing there is a high chance of failure and it is not really a proper microservice design.</p>
</blockquote>
<p>You most likely want to deploy them as separate services, so that you can update or re-configure them independently of each other.</p>
<blockquote>
<p>Should I create multiple deployments, one for each of these pipelines? That would be very difficult to maintain, as each will have different deployment configs.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">Kustomize</a> is a built-in tool in <strong>kubectl</strong> that is a good choice when you want to deploy the same manifest in multiple environments with different configurations. This solution require no additional tool other than <code>kubectl</code>.</p>
<h2>Deploying to multiple environments with Kustomize</h2>
<p>Directory structure:</p>
<pre><code>base/
- deployment.yaml # fully deployable manifest - no templating
- kustomization.yaml # default values e.g. for dev environment
app1/
- kustomization.yaml # specific values for app1
app2/
- kustomization.yaml # specific values for app2
</code></pre>
<h3>Example Deployment manifest with Kustomization</h3>
<p>Here, the environment variables are loaded from a <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMap</a> such that we can use a <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/" rel="nofollow noreferrer">configMapGenerator</a>. This file is <code>base/deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-sink
namespace: test-apps
spec:
  template: # some fields, e.g. labels, are omitted in this example
spec:
containers:
- name: mongodb-sink
image: mongodb-map:0.0.7.0-SNAPSHOT
env:
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: GROUP_ID
valueFrom:
configMapKeyRef:
name: my-values
key: GROUP_ID
- name: INPUT_TOPIC
valueFrom:
configMapKeyRef:
name: my-values
key: INPUT_TOPIC
...
</code></pre>
<p>Also add a <code>base/kustomization.yaml</code> file to describe the <em>configMapGenerator</em> and related files.</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: my-values
    behavior: create # default; "replace" only makes sense in an overlay that has a base
literals:
- GROUP_ID=test-mongodb-sink-calypso
- INPUT_TOPIC=test.raw.ptv.calypso
... # also add your other values
</code></pre>
<p><strong>Preview Manifests</strong></p>
<pre><code>kubectl kustomize base/
</code></pre>
<p><strong>Apply Manifests</strong></p>
<pre><code>kubectl apply -k base/
</code></pre>
<h3>Add config for app1 and app2</h3>
<p>With <strong>app1</strong> we now want to use the manifest we have in <code>base/</code> and just overlay what is different for <strong>app1</strong>. This file is <code>app1/kustomization.yaml</code> and similar for <code>app2/kustomization.yaml</code>.</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
namePrefix: bloomberg-sink- # this gives your Deployment a prefixed name
configMapGenerator:
- name: my-values
behavior: replace
literals:
- GROUP_ID=test-mongodb-sink-bloomberg
- INPUT_TOPIC=test.raw.pretrade.bloomberg
... # also add your other values
</code></pre>
<p><strong>Preview Manifests</strong></p>
<pre><code>kubectl kustomize app1/
</code></pre>
<p><strong>Apply Manifests</strong></p>
<pre><code>kubectl apply -k app1/
</code></pre>
<h2>Documentation</h2>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">Kubernetes: Declarative Management of Kubernetes Objects Using Kustomize</a></li>
<li><a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/" rel="nofollow noreferrer">SIG CLI: Kustomization file</a></li>
</ul>
| Jonas |
<p>I have been using kubebuilder for writing a custom controller, and I am aware of the Get(), Update() and Delete() methods that it provides. But now I am looking for a method which mimics the behaviour of <code>kubectl rollout restart deployment</code>. If there is no such direct method, then I am looking for the correct way to mimic it.</p>
<pre class="lang-golang prettyprint-override"><code>type CustomReconciler struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
}
func (r *CustomReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
configMap := &v1.ConfigMap{}
  err := r.Get(ctx, req.NamespacedName, configMap)
if err != nil {
logger.Error(err, "Failed to GET configMap")
return ctrl.Result{}, err
}
</code></pre>
<p>Say in the above code I read a deployment name from the ConfigMap and want to rollout-restart that deployment, as follows:</p>
<pre class="lang-golang prettyprint-override"><code> val := configMap.Data["config.yml"]
config := Config{}
if err := yaml.Unmarshal([]byte(val), &config); err != nil {
logger.Error(err, "failed to unmarshal config data")
return ctrl.Result{}, err
}
// Need equivalent of following
// r.RolloutRestart(config.DeploymentName)
</code></pre>
| hrishi | <p>In all cases where you wish to replicate <code>kubectl</code> behavior, the answer is always to increase its verbosity and it'll show you exactly -- sometimes down to the wire payloads -- what it is doing.</p>
<p>For rollout restart, one will find that it just <a href="https://sourcegraph.com/github.com/kubernetes/[email protected]/-/blob/staging/src/k8s.io/kubectl/pkg/polymorphichelpers/objectrestarter.go?L41" rel="nofollow noreferrer">bumps an annotation</a> on the Deployment/StatefulSet/whatever; that makes the outer object "different" and triggers a reconciliation run</p>
<p>You can squat on their annotation, or you can make up your own, or you can use a label change -- practically any "meaningless" change will do</p>
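<p>As a minimal sketch in your reconciler (the helper name is mine, and I am assuming the Deployment lives in <code>req.Namespace</code>; the annotation key is the one the linked kubectl code writes), you could bump that annotation with a patch:</p>
<pre class="lang-golang prettyprint-override"><code>import (
    "context"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/types"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// rolloutRestart mimics `kubectl rollout restart deployment/<name>` by bumping the
// pod-template annotation, which makes the Deployment controller roll new Pods.
func (r *CustomReconciler) rolloutRestart(ctx context.Context, namespace, name string) error {
    deploy := &appsv1.Deployment{}
    if err := r.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, deploy); err != nil {
        return err
    }
    patch := client.MergeFrom(deploy.DeepCopy())
    if deploy.Spec.Template.Annotations == nil {
        deploy.Spec.Template.Annotations = map[string]string{}
    }
    deploy.Spec.Template.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339)
    return r.Patch(ctx, deploy, patch)
}
</code></pre>
<p>and then call it from <code>Reconcile</code> with something like <code>r.rolloutRestart(ctx, req.Namespace, config.DeploymentName)</code>.</p>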
| mdaniel |
<p>How can I speedup the rollout of new images in Kubernetes?</p>
<p>Currently, we have an automated build job that modifies a yaml file to point to a new revision and then runs <code>kubectl apply</code> on it.</p>
<p>It works, but it takes long delays (up to 20 minutes PER POD) before all pods with the previous revision are replaced with the latest.</p>
<p>Also, the deployment is configured for 3 replicas. We see one pod at a time is started with the new revision. (Is this the Kubernetes "surge" ?) But that is too slow, I would rather kill all 3 pods and have 3 new ones with the new image.</p>
| Leonel | <blockquote>
<p>I would rather kill all 3 pods and have 3 new ones with the new image.</p>
</blockquote>
<p>You can do that. Set <code>strategy.type:</code> to <code>Recreate</code> instead of the default <code>RollingUpdate</code> in your <code>Deployment</code>. See <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">strategy</a>.</p>
<p>But you probably get some downtime during deployment.</p>
| Jonas |
<p>As mentioned in this <a href="https://stackoverflow.com/a/37423281/3317808">answer</a>: allow for easy updating of a Replica Set as well as the ability to roll back to a previous deployment.</p>
<p>So, <code>kind: Deployment</code> scales ReplicaSets, which in turn scale Pods, and supports zero-downtime updates by creating and destroying ReplicaSets.</p>
<hr />
<p>What is the purpose of <code>HorizontalPodAutoscaler</code> resource type?</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: xyz
spec:
maxReplicas: 4
minReplicas: 2
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: xyz
targetCPUUtilizationPercentage: 70
</code></pre>
| overexchange | <p>As you write, with a <code>Deployment</code> it is easy to <em>manually</em> scale an app horizontally, by changing the number of replicas.</p>
<p>By using a <code>HorizontalPodAutoscaler</code>, you can <em>automate</em> the horizontal scaling by e.g. configuring some metric thresholds, hence the name <strong>autoscaler</strong>.</p>
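<p>For example, an HPA roughly equivalent to the one in your question can also be created imperatively:</p>
<pre><code>kubectl autoscale deployment xyz --cpu-percent=70 --min=2 --max=4
</code></pre>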
| Jonas |
<p>In Helm's v3 documentation: <a href="https://helm.sh/docs/chart_template_guide/accessing_files/" rel="nofollow noreferrer">Accessing Files Inside Templates</a>, the author gives an example of 3 properties (toml) files; where each file has only one key/value pair.</p>
<p>The configmap.yaml looks like this. I'm only adding one <em>config.toml</em> for simplicity.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-config
data:
{{- $files := .Files }}
{{- range tuple "config.toml" }}
{{ . }}: |-
{{ $files.Get . }}
{{- end }}
</code></pre>
<p>This works fine, until I add a <em>second</em> line to the config.toml file.</p>
<p>config.toml</p>
<pre><code>replicaCount=1
foo=bar
</code></pre>
<p>Then I get an Error: <code>INSTALLATION FAILED: YAML parse error on deploy/templates/configmap.yaml: error converting YAML to JSON: yaml: line 9: could not find expected ':'</code></p>
<p>Any thoughts will be appreciated.
Thanks</p>
| paiego | <p>Helm will read in that file, but it is (for good or bad) a <strong>text</strong> templating engine. It does not understand that you are trying to compose a YAML file and thus it will not help you. That's actually why you will see so many, many templates in the wild with <code>{{ .thing | indent 8 }}</code> or <code>{{ .otherThing | toYaml }}</code> -- because <a href="https://helm.sh/docs/chart_template_guide/control_structures/#controlling-whitespace" rel="nofollow noreferrer">you need to help Helm</a> know in what context it is emitting the <em>text</em></p>
<p>Thus, in your specific case, you'll want the <a href="https://masterminds.github.io/sprig/strings.html#indent" rel="nofollow noreferrer"><code>indent</code> filter</a> with a value of 4 because your current template has two spaces for the key indent level, and two more spaces for the value block scalar</p>
<pre><code>data:
{{- $files := .Files }}
{{- range tuple "config.toml" }}
{{ . }}: |-
{{ $files.Get . | indent 4 }}
{{/* notice this ^^^ template expression is flush left,
because the 'indent' is handling whitespace, not the golang template itself */}}
{{- end }}
</code></pre>
<hr />
<p>Also, while this is the specific answer to your question, don't overlook the <a href="https://helm.sh/docs/chart_template_guide/accessing_files/#configmap-and-secrets-utility-functions" rel="nofollow noreferrer"><code>.AsConfig</code> section on that page</a> which seems much more likely to be what you really want to happen, and requires less <code>indent</code> math</p>
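<p>For your single file that would look roughly like this (adapted from the Helm docs; <code>.AsConfig</code> emits the <code>filename: |-</code> blocks for you, so only a constant <code>indent 2</code> is needed under <code>data:</code>):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
{{ (.Files.Glob "config.toml").AsConfig | indent 2 }}
</code></pre>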
| mdaniel |
<p>I have a very simple kustomization.yaml:</p>
<pre><code>configMapGenerator:
- name: icecast-conifg
files:
- icecast.xml
</code></pre>
<p>When I run <code>kubectl kustomize .</code> it spits out a generated configMap properly, but how do I actually load it into my cluster? I'm missing some fundamental step.</p>
| user3056541 | <p>With Kustomize you can use the <code>-k</code> (or <code>--kustomize</code>) flag instead of <code>-f</code> when using <code>kubectl apply</code>. Example:</p>
<pre><code>kubectl apply -k <my-folder-or-file>
</code></pre>
<p>See <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">Declarative Management of Kubernetes Objects Using Kustomize</a></p>
| Jonas |
<p>A Service abstracts Pod IP addresses from consumers, load-balances between Pods, relies on labels to associate a Service with a Pod, holds a virtual IP provided by the Node's kube-proxy, and is non-ephemeral.</p>
<p>Given below services:</p>
<pre><code>$ kubectl -n mynamespace get services | more
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-app1 NodePort 192.168.112.249 <none> 80:32082/TCP,2121:30581/TCP 50d
my-app2 NodePort 192.168.113.154 <none> 80:30704/TCP,2121:30822/TCP 50d
my-app3 NodePort 192.168.114.232 <none> 80:32541/TCP,2121:32733/TCP 5d2h
my-app4 NodePort 192.168.115.182 <none> 80:30231/TCP,2121:30992/TCP 5d2h
</code></pre>
<hr />
<p>Is "service" type kubernetes object launched as a separate Pod container in data plane?</p>
| overexchange | <blockquote>
<p>Is "service" type kubernetes object launched as a separate Pod container in data plane?</p>
</blockquote>
<p>Nope, a <code>Service</code> is an <strong><em>abstract</em></strong> resource in Kubernetes.</p>
<p>From the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> documentation:</p>
<blockquote>
<p>An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.</p>
</blockquote>
| Jonas |
<p><strong>Getting error:</strong>
the postgres deployment for the service is failing. I checked the YAML with yamllint and it is valid, but I still get the error. The deployment file contains a ServiceAccount, a Service and a StatefulSet.</p>
<pre><code>install.go:158: [debug] Original chart version: ""
install.go:175: [debug] CHART PATH: /builds/xxx/xyxy/xyxyxy/xx/xxyy/src/main/helm
Error: YAML parse error on postgresdeployment.yaml: error converting YAML to JSON: yaml: line 24: did not find expected key
helm.go:75: [debug] error converting YAML to JSON: yaml: line 24: did not find expected key
YAML parse error on postgresdeployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
/home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
/home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
/home/circleci/helm.sh/helm/pkg/action/install.go:489
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/circleci/helm.sh/helm/pkg/action/install.go:230
main.runInstall
/home/circleci/helm.sh/helm/cmd/helm/install.go:223
main.newUpgradeCmd.func1
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:113
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:74
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
</code></pre>
<p><strong>postgresdeployment.yaml</strong></p>
<ol>
<li>Is there any invalid YAML syntax?</li>
<li>Is any indentation missing?</li>
<li>Which node is missing here?</li>
</ol>
<pre><code>{{- if contains "-dev" .Values.istio.suffix }}
# Postgre ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: postgres
---
# PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
---
# Postgre StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: postgres
spec:
serviceAccountName: postgres
securityContext:
{{- toYaml .Values.securityContext | nindent 8 }}
terminationGracePeriodSeconds: {{ default 60 .Values.terminationGracePeriodSeconds }}
volumes:
{{ include "xxx.volumes.logs.spec" . | indent 8 }}
- emptyDir: { }
name: postgres-disk
containers:
- name: postgres
image: "{{ template "xxx.dockerRegistry.hostport" . }}/postgres:latest"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: postgres
containerPort: 5432
livenessProbe:
tcpSocket:
port: 5432
failureThreshold: 3
initialDelaySeconds: 240
periodSeconds: 45
timeoutSeconds: 5
readinessProbe:
tcpSocket:
port: 5432
failureThreshold: 2
initialDelaySeconds: 180
periodSeconds: 5
timeoutSeconds: 20
resources:
{{ if .Values.lowResourceMode }}
{{- toYaml .Values.resources.low | nindent 12 }}
{{ else }}
{{- toYaml .Values.resources.high | nindent 12 }}
{{ end }}
env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
volumeMounts:
- name: postgres-disk
mountPath: /var/lib/postgresql/data
{{- end }}
</code></pre>
| Yashika Chandra | <p>The templating mustaches in helm (and its golang text/template peer) must be one token, otherwise yaml believes that <code>{</code> opens a dict, and then <code>{</code> tries to open a <em>child</em> dict and just like in JSON that's not a valid structure</p>
<p>So you'll want:</p>
<pre class="lang-yaml prettyprint-override"><code> serviceAccountName: postgres
securityContext:
{{- toYaml .Values.securityContext | nindent 8 }}
</code></pre>
| mdaniel |
<p>In the below yaml syntax:</p>
<pre><code> readinessProbe:
httpGet:
path: /index.html
port: 80
initialDelaySeconds: 3
timeoutSeconds: 3
periodSeconds: 10
failureThreshold: 3
</code></pre>
<hr />
<p>Readiness probe is used during initial deployments of Pod.</p>
<ol>
<li><p>When rolling out a new version of the application using the rolling deployment strategy,
is the readiness probe used for the rolling deployment?</p>
</li>
<li><p>The <code>path</code> & <code>port</code> fields allow me to enter the URL & port number of a specific service, but not of a dependent service. How can I verify that a dependent service is also ready?</p>
</li>
</ol>
| overexchange | <blockquote>
<p>using the rolling deployment strategy, is the readiness probe used for the rolling deployment?</p>
</blockquote>
<p>Yes, the new version of Pods is rolled out and older Pods are not terminated until the new version has Pods in <em>ready</em> state.</p>
<p>E.g. if you roll out a new version, that has a bug so that the Pods does not become ready - the old Pods will still be running and the traffic is only routed to the <em>ready</em> old Pods.</p>
<p>Also, if you don't specify a <em>readinessProbe</em>, the <em>process</em> status is used, e.g. a <em>process</em> that terminates will not be seen as <em>ready</em>.</p>
<blockquote>
<p>How can I verify that a dependent service is also ready?</p>
</blockquote>
<p>You can configure a custom <em>readinessProbe</em>, e.g. an http endpoint on <code>/healthz</code>, and it is up to you what logic you want to use in the implementation of that endpoint. An http response code of 2xx is seen as <em>ready</em>.</p>
| Jonas |
<p>I was trying to debug some mount problems and the mount logs led me to paths under <code>/var/lib/kubelet/pods</code>, i.e</p>
<p><code>/var/lib/kubelet/pods/f6affad1-941d-4df1-a0b7-38e3f2ab99d5/volumes/kubernetes.io~nfs/my-pv-e0dbe341a6fe475c9029fb372e</code></p>
<p>How can I map the guid of the root directory under <code>pods</code> to the actual running pod or container?</p>
<p>(<code>f6affad1-941d-4df1-a0b7-38e3f2ab99d5</code> in the example above)</p>
<p>I don't see any correlation to the values returned by <code>kubectl</code> or <code>crictl</code>.</p>
| Mugen | <p>They're the <code>.metadata.uid</code> of the Pod; one can map them back by using your favorite mechanism for querying all pods and filtering on its <code>.metadata.uid</code>, and optionally restricting to just those pods scheduled on that Node if you have a so many Pods as to make the <code>-A</code> infeasible</p>
<pre class="lang-sh prettyprint-override"><code>for d in /var/lib/kubelet/pods/*; do
p_u=$(basename "$d")
kubectl get po -A -o json | \
jq --arg pod_uuid "$p_u" -r '.items[]
| select(.metadata.uid == $pod_uuid)
| "uuid \($pod_uuid) is \(.metadata.name)"'
done
</code></pre>
<p>I'm sure there is a <code>-o jsonpath=</code> or <code>-o go-template=</code> form that removes the need for <code>jq</code> but that'd be a lot more work to type out in a textarea</p>
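<p>For the record, something along these lines should work as a pure <code>-o jsonpath=</code> filter (the uid is the one from the question; I have not battle-tested this expression against every kubectl version, so treat it as a sketch):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get pods -A -o jsonpath='{range .items[?(@.metadata.uid=="f6affad1-941d-4df1-a0b7-38e3f2ab99d5")]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'
</code></pre>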
<p>with regard to your <code>crictl</code> question, I don't this second have access to my containerd cluster, but the docker based one labels the local containers with <code>io.kubernetes.pod.uid</code> so I would guess containerd does something similar:</p>
<pre class="lang-json prettyprint-override"><code> "Labels": {
"annotation.io.kubernetes.container.hash": "e44bee94",
"annotation.io.kubernetes.container.restartCount": "4",
"annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
"annotation.io.kubernetes.container.terminationMessagePolicy": "File",
"annotation.io.kubernetes.pod.terminationGracePeriod": "30",
"io.kubernetes.container.logpath": "/var/log/pods/kube-system_storage-provisioner_b4aa3b1c-62c1-4661-a302-4c06b305b7c0/storage-provisioner/4.log",
"io.kubernetes.container.name": "storage-provisioner",
"io.kubernetes.docker.type": "container",
"io.kubernetes.pod.name": "storage-provisioner",
"io.kubernetes.pod.namespace": "kube-system",
"io.kubernetes.pod.uid": "b4aa3b1c-62c1-4661-a302-4c06b305b7c0",
"io.kubernetes.sandbox.id": "3950ec60121fd13116230cad388a4c6c4e417c660b7da475436f9ad5c9cf6738"
}
</code></pre>
| mdaniel |
<p>I'm making a realtime multiplayer game and I have an idea on how to structure the backend, but I don't even know if it's possible, let alone how to build and deploy it. Basically I want to deploy my backend as a container to Cloud Run, but instead of syncing data through a database common to all instances I want to store the game data locally in each instance and just connect players in the same game to the same instance. Essentially I would need a custom load balancer and scaling logic. I haven't been able to find any useful material on this topic so any help/input is greatly appreciated.</p>
| Ben Baldwin | <p>This is possible to do on Kubernetes, but I doubt that this can be easily done on Google Cloud Run. Your game-pod is essentially <em>stateful</em> workload and does not fit well on Google Cloud Run, but you can run this on Kubernetes as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.</p>
<blockquote>
<p>Essentially I would need a custom load balancer and scaling logic.</p>
</blockquote>
<p>Yes. For this you need to use an <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters" rel="nofollow noreferrer">UDP/TCP load balancer</a> and you most likely need to deploy <a href="https://stackoverflow.com/questions/44601191/kubernetes-on-gce-ingress-timeout-configuration/59671993#59671993">custom configuration to allow longer connections</a> than default. This network load balancer will forward traffic to your <em>custom load balancer</em> - probably running as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>.</p>
<h2>Custom Load Balancer - Sharding Proxy</h2>
<p>What you need as <em>custom load balancer</em> is a <a href="https://stackoverflow.com/questions/58675990/sharded-load-balancing-for-stateful-services-in-kubernetes/58678622#58678622">Sharding Proxy</a> that can forward traffic to correct Game Session Pod and potentially also to scale up to more Game Session Pods. See the answer to <a href="https://stackoverflow.com/questions/58675990/sharded-load-balancing-for-stateful-services-in-kubernetes/58678622#58678622">Sharded load balancing for stateful services in Kubernetes</a> for more info on this.</p>
<h2>Game Session App</h2>
<p>The app that handles the Game Session needs to be deployed as a <code>StatefulSet</code> so that the Pods get a <em>unique network identity</em>, e.g. <code>game-session-0</code>, <code>game-session-1</code> and <code>game-session-2</code> - so that your <em>custom load balancer</em> can use logic to direct the traffic to the correct Game Session Pod.</p>
<blockquote>
<p>I want to store the game data locally in each instance and just connect players in the same game to the same instance.</p>
</blockquote>
<p>This is possible, use a volume of type <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a>.</p>
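<p>A minimal sketch of such a StatefulSet (the names and the image are made up for illustration) with a local <code>emptyDir</code> volume per Pod:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: game-session
spec:
  serviceName: game-session
  replicas: 3
  selector:
    matchLabels:
      app: game-session
  template:
    metadata:
      labels:
        app: game-session
    spec:
      containers:
      - name: game-session
        image: example.com/game-session:1.0   # hypothetical image
        volumeMounts:
        - name: game-data
          mountPath: /data
      volumes:
      - name: game-data
        emptyDir: {}            # local, per-Pod storage for the game state
</code></pre>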
<h2>Game Server Example on GKE</h2>
<p>For an extensive example of this type of deployment, see <a href="https://cloud.google.com/solutions/gaming/running-dedicated-game-servers-in-kubernetes-engine" rel="nofollow noreferrer">Running dedicated game servers in GKE</a></p>
| Jonas |
<p>I would like to understand if through PVC/PV a pod that is using a volume after a failure will be always re-attached to the same volume or not. Essentially I know that this can be a case for Statefulset but I am trying to understand if this can be also achieved with PVC and PV. Essentially assuming that a Pod_A is attached to Volume_X, then Pod_A fails but in the meantime a Volume_Y was added to the cluster that can potentially fulfil the PVC requirements. So what does it happen when Pod_A is re-created, does it get always mounted to Volume_X or is there any chance that it gets mounted to the new Volume_Y?</p>
| toto' | <blockquote>
<p>a pod that is using a volume after a failure will be always re-attached to the same volume or not</p>
</blockquote>
<p>yes, the Pod will be re-attached to the same volume, because it still has the same PVC declared in its manifest.</p>
<blockquote>
<p>Essentially assuming that a Pod_A is attached to Volume_X, then Pod_A fails but in the meantime a Volume_Y was added to the cluster that can potentially fulfil the PVC requirements.</p>
</blockquote>
<p>The Pod still has the same PVC in its manifest, so it will use the same volume. But if you create a new PVC, it might be bound to the new volume.</p>
<blockquote>
<p>So what does it happen when Pod_A is re-created, does it get always mounted to Volume_X or is there any chance that it gets mounted to the new Volume_Y?</p>
</blockquote>
<p>The Pod still has the same PVC in its manifest, so it will use the volume that is bound by that PVC. Only when you create a new PVC can that claim be bound to the new volume.</p>
| Jonas |
<p>I am new to this platform and this is my second question. For one month, I have been trying to set up a Kubernetes cluster using AWS unsuccessfully. But every day, I get a new error, but this time, I could not solve this error.</p>
<p>I am using Kali Linux in Virtual Box with Windows as a host. I am following a tutorial from Udemy for the setup.</p>
<ol>
<li><p>I have installed Kops, Kubectl, and AWSCli successfully.</p>
</li>
<li><p>I have configured the keys correctly, using AWS configure (For learning purpose, I have given my user full administrator rights)</p>
</li>
<li><p>I created the S3 bucket (Gave it public access)</p>
</li>
<li><p>Now to create the hosted zone, I used AWS Route 53.
<a href="https://i.stack.imgur.com/sPXvz.png" rel="nofollow noreferrer">Here are specs of my hosted zone</a></p>
</li>
<li><p>Since, I do not have a domain, I bought a free subdomain from freenom.com and configured the nameservers correctly.
<a href="https://i.stack.imgur.com/tt8Nf.png" rel="nofollow noreferrer">Free domain configuration</a></p>
</li>
<li><p>After that, I created a pair of keys using ssh-keygen for logging in to the cluster.</p>
</li>
<li><p>In the end, I am running this command,</p>
</li>
</ol>
<pre><code>kops create cluster --name=kubernetes.hellaswell.ml --state=s3://kops-state-crap --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=kubernetes.hellaswell.ml
I0418 22:49:10.855151 12216 new_cluster.go:238] Inferred "aws" cloud provider from zone "eu-west-1a"
I0418 22:49:10.855313 12216 new_cluster.go:962] Cloud Provider ID = aws
I0418 22:49:12.604015 12216 subnets.go:180] Assigned CIDR 172.20.32.0/19 to subnet eu-west-1a
unable to determine machine architecture for InstanceGroup "master-eu-west-1a": unsupported architecture for instance type "t2.micro": i386
</code></pre>
| yousuf | <blockquote>
<p>unsupported architecture for instance type "t2.micro": i386</p>
</blockquote>
<p>The t2.micro instance type still advertises support for 32-bit (i386) AMIs, and that i386 architecture is what kops is tripping over here. See <a href="https://stackoverflow.com/a/11565748/213269">How to find if my Amazon EC2 instance is 32 bit or 64 bit?</a>.</p>
<p>Kubernetes needs 64-bit nodes, so I suggest that you choose a different, 64-bit-only EC2 instance type, e.g. t3.small.</p>
| Jonas |
<p>I like the working methodology of Kubernetes: use a self-contained image and pass the configuration in a ConfigMap, as a volume.</p>
<p>Now this worked great until I tried to do the same thing with a Liquibase container. The SQL is very long (~1.5K lines), and Kubernetes rejects it as too long.</p>
<p>Error from Kubernetes:</p>
<blockquote>
<p>The ConfigMap "liquibase-test-content" is invalid: metadata.annotations: Too long: must have at most 262144 characters</p>
</blockquote>
<p>I thought of passing the <code>.sql</code> files as a <code>hostPath</code>, but as I understand these <code>hostPath</code>'s content is probably not going to be there</p>
<p>Is there any other way to pass configuration from the K8s directory to pods? Thanks.</p>
| aclowkay | <p>The error you are seeing is not about the size of the actual ConfigMap contents, but about the size of the <code>last-applied-configuration</code> annotation that <code>kubectl apply</code> automatically <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#how-to-create-objects" rel="noreferrer">creates</a> on each <code>apply</code>. If you use <code>kubectl create -f foo.yaml</code> instead of <code>kubectl apply -f foo.yaml</code>, it should work. </p>
<p>Please note that in doing this you will lose the ability to use <code>kubectl diff</code> and do incremental updates (without replacing the whole object) with <code>kubectl apply</code>.</p>
| David |
<p>I'm trying to deploy a Quarkus app to a Kubernetes cluster, but I got the following stacktrace:</p>
<pre><code>exec java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -XX:+ExitOnOutOfMemoryError -cp . -jar /deployments/quarkus-run.jar
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2021-05-11 16:47:19,455 ERROR [io.qua.run.Application] (main) Failed to start application (with profile prod): java.lang.NumberFormatException: SRCFG00029: Expected an integer value, got "tcp://10.233.12.82:80"
at io.smallrye.config.Converters.lambda$static$60db1e39$1(Converters.java:104)
at io.smallrye.config.Converters$EmptyValueConverter.convert(Converters.java:949)
at io.smallrye.config.Converters$TrimmingConverter.convert(Converters.java:970)
at io.smallrye.config.Converters$BuiltInConverter.convert(Converters.java:872)
at io.smallrye.config.Converters$OptionalConverter.convert(Converters.java:790)
at io.smallrye.config.Converters$OptionalConverter.convert(Converters.java:771)
at io.smallrye.config.SmallRyeConfig.getValue(SmallRyeConfig.java:225)
at io.smallrye.config.SmallRyeConfig.getOptionalValue(SmallRyeConfig.java:270)
at io.quarkus.arc.runtime.ConfigRecorder.validateConfigProperties(ConfigRecorder.java:37)
at io.quarkus.deployment.steps.ConfigBuildStep$validateConfigProperties1249763973.deploy_0(ConfigBuildStep$validateConfigProperties1249763973.zig:328)
at io.quarkus.deployment.steps.ConfigBuildStep$validateConfigProperties1249763973.deploy(ConfigBuildStep$validateConfigProperties1249763973.zig:40)
at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:576)
at io.quarkus.runtime.Application.start(Application.java:90)
at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:100)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:66)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:42)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:119)
at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.quarkus.bootstrap.runner.QuarkusEntryPoint.doRun(QuarkusEntryPoint.java:48)
at io.quarkus.bootstrap.runner.QuarkusEntryPoint.main(QuarkusEntryPoint.java:25)
</code></pre>
<p>I build the Docker image with the <a href="https://github.com/quarkusio/quarkus-quickstarts/blob/main/getting-started/src/main/docker/Dockerfile.jvm" rel="nofollow noreferrer">default dockerfile</a>, and my quarkus-related dependencies are the following:</p>
<pre class="lang-kotlin prettyprint-override"><code>dependencies {
implementation(enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}"))
implementation("org.optaplanner:optaplanner-quarkus")
implementation("io.quarkus:quarkus-resteasy")
implementation("io.quarkus:quarkus-vertx")
implementation("io.quarkus:quarkus-resteasy-jackson")
implementation("io.quarkus:quarkus-undertow-websockets")
implementation("io.quarkus:quarkus-smallrye-health")
}
</code></pre>
<p>I'm using Quarkus 1.13.3.Final, and I've written a helm chart for my deployment by hand. The deployed dockerfile runs fine on my machine, and the kubernetes deployment descriptor does not have that IP address in it. I think that IP is a ClusterIP of the cluster.</p>
<p>Any idea? Thanks</p>
| Nagy Vilmos | <p>It's due to the <a href="https://github.com/kubernetes/kubernetes/blob/v1.20.0/pkg/kubelet/envvars/envvars.go#L87-L90" rel="noreferrer">docker link variables</a> that kubernetes mimics for <code>Service</code> names in scope; it bites people a lot when they have generically named services such as <code>{ apiVersion: v1, kind: Service, metadata: { name: http }, ...</code> as it will cheerfully produce environment variables of the form <code>HTTP_PORT=tcp://10.233.12.82:80</code> in the Pod, and things such as Spring boot or evidently Quarkus which coerce env-vars into configuration overrides can cause the exact outcome you're experiencing</p>
<p>The solution is (a) don't name <code>Services</code> with bland names (b) "mask off" the offensive env-vars for the Pod:</p>
<pre class="lang-yaml prettyprint-override"><code>...
containers:
- ...
env:
- name: HTTP_PORT
# it doesn't need a value:, it just needs the name to be specified
# so it hides the injected version
- ... any remaining env-vars you really want
</code></pre>
| mdaniel |
<p>I've split out the initial <code>azure-pipelines.yml</code> to use templates, iteration, etc... For whatever reason, the new images are not being deployed despite using <code>latest</code> tag and/or <code>imagePullPolicy: Always</code>.</p>
<p>Also, I basically have two pipelines <code>PR</code> and <code>Release</code>:</p>
<ul>
<li><code>PR</code> is triggered when a PR request is submitted to merge to <code>production</code>. It automatically triggers this pipeline to run unit tests, build the Docker image, do integration tests, etc. and then pushes the image to ACR if everything passed.</li>
<li>When the <code>PR</code> pipeline is passing, and the PR is approved, it is merged into <code>production</code> which then triggers the <code>Release</code> pipeline.</li>
</ul>
<p>Here is an example of one of my <code>k8s</code> deployment manifests (the pipeline says <code>unchanged</code> when these are applied):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: admin-v2-deployment-prod
namespace: prod
spec:
replicas: 3
selector:
matchLabels:
component: admin-v2
template:
metadata:
labels:
component: admin-v2
spec:
containers:
- name: admin-v2
imagePullPolicy: Always
image: appacr.azurecr.io/app-admin-v2:latest
ports:
- containerPort: 4001
---
apiVersion: v1
kind: Service
metadata:
name: admin-v2-cluster-ip-service-prod
namespace: prod
spec:
type: ClusterIP
selector:
component: admin-v2
ports:
- port: 4001
targetPort: 4001
</code></pre>
<p>And here are the various pipeline related <code>.yamls</code> I've been splitting out:</p>
<h3>Both PR and Release:</h3>
<pre><code># templates/variables.yaml
variables:
dockerRegistryServiceConnection: '<GUID>'
imageRepository: 'app'
containerRegistry: 'appacr.azurecr.io'
dockerfilePath: '$(Build.SourcesDirectory)'
tag: '$(Build.BuildId)'
imagePullSecret: 'appacr1c5a-auth'
vmImageName: 'ubuntu-latest'
</code></pre>
<hr />
<h3>PR:</h3>
<pre><code># pr.yaml
trigger: none
resources:
- repo: self
pool:
vmIMage: $(vmImageName)
variables:
- template: templates/variables.yaml
stages:
- template: templates/changed.yaml
- template: templates/unitTests.yaml
- template: templates/build.yaml
parameters:
services:
- api
- admin
- admin-v2
- client
- template: templates/integrationTests.yaml
</code></pre>
<pre><code># templates/build.yaml
parameters:
- name: services
type: object
default: []
stages:
- stage: Build
displayName: Build stage
jobs:
- job: Build
displayName: Build
steps:
- ${{ each service in parameters.services }}:
- task: Docker@2
displayName: Build and push an ${{ service }} image to container registry
inputs:
command: buildAndPush
repository: $(imageRepository)-${{ service }}
dockerfile: $(dockerfilePath)/${{ service }}/Dockerfile
containerRegistry: $(dockerRegistryServiceConnection)
tags: |
$(tag)
</code></pre>
<hr />
<h3>Release:</h3>
<pre><code># release.yaml
trigger:
branches:
include:
- production
resources:
- repo: self
variables:
- template: templates/variables.yaml
stages:
- template: templates/publish.yaml
- template: templates/deploy.yaml
parameters:
services:
- api
- admin
- admin-v2
- client
</code></pre>
<pre><code># templates/deploy.yaml
parameters:
- name: services
type: object
default: []
stages:
- stage: Deploy
displayName: Deploy stage
dependsOn: Publish
jobs:
- deployment: Deploy
displayName: Deploy
pool:
vmImage: $(vmImageName)
environment: 'App Production AKS'
strategy:
runOnce:
deploy:
steps:
- task: KubernetesManifest@0
displayName: Create imagePullSecret
inputs:
action: createSecret
secretName: $(imagePullSecret)
kubernetesServiceConnection: 'App Production AKS'
dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
- ${{ each service in parameters.services }}:
- task: KubernetesManifest@0
displayName: Deploy to ${{ service }} Kubernetes cluster
inputs:
action: deploy
kubernetesServiceConnection: 'App Production AKS'
manifests: |
$(Pipeline.Workspace)/k8s/aks/${{ service }}.yaml
imagePullSecrets: |
$(imagePullSecret)
containers: |
$(containerRegistry)/$(imageRepository)-${{ service }}:$(tag)
</code></pre>
<ul>
<li>Both <code>PR</code> and <code>Release</code> pass...</li>
<li>The new images are in ACR...</li>
<li>I've pulled the images to verify they have the latest changes...</li>
<li>They just aren't getting deployed to AKS.</li>
</ul>
<p>Any suggestions for what I am doing wrong here?</p>
| cjones | <blockquote>
<p>For whatever reason, the new images are not being deployed despite using latest tag</p>
</blockquote>
<p>How should Kubernetes know that there is a new image? Kubernetes config is <em>declarative</em>. Kubernetes is already running what once was "latest" image.</p>
<blockquote>
<p>Here is an example of one of my k8s deployment manifests (the pipeline says unchanged when these are applied)</p>
</blockquote>
<p>Yeah, it is <em>unchanged</em> because the <em>declarative disered state</em> has <strong>not</strong> changed. The Deployment-manifest states <em>what</em> should be deployed, it is not a command.</p>
<h2>Proposed solution</h2>
<p>Whenever you build an image, always give it a unique name (tag). And whenever you want to deploy something, always declare that unique name of <em>what</em> should be running - then Kubernetes will manage this in an elegant zero-downtime way using rolling deployments - unless you configure it to behave differently.</p>
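<p>As a hedged sketch only (the tag value and how it gets injected - e.g. a token-replacement step, a kustomize edit, or the <code>containers:</code> input of the <code>KubernetesManifest@0</code> task matching the image repository - depend on your pipeline), the manifest would then reference an immutable tag instead of <code>latest</code>:</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: admin-v2
        image: appacr.azurecr.io/app-admin-v2:12345   # e.g. the $(Build.BuildId) used as the tag
</code></pre>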
| Jonas |
<p>I have a StatefulSet with several replicas, for example 3, and multiple Kubernetes Nodes in two zones.</p>
<p>I want to place the <strong>FIRST</strong> replica in the selected zone and the rest in the other zones.</p>
<ul>
<li>mysts-0-0 < <strong>main_zone</strong></li>
<li>mysts-0-1 < back_zone</li>
<li>mysts-0-2 < back_zone</li>
</ul>
<p>Why the first one? Each pod is a database instance. Replication is configured between them. And by default the first node is master.</p>
<p>So I want to be able to influence where the first node is installed.</p>
| jesmart | <p>This is not supported in <code>StatefulSet</code>, as the Pod-template is identical for each replica.</p>
<p>You might be able to achieve this by writing a custom scheduler. For inspiration, see this article for a custom scheduler for CockroachDB: <a href="https://kubernetes.io/blog/2020/12/21/writing-crl-scheduler/" rel="nofollow noreferrer">A Custom Kubernetes Scheduler to Orchestrate Highly Available Applications</a></p>
| Jonas |
<p>Generally in Kubernetes we can create definition files for non-running and running pods, namespaces, deployments etc. If we generate a yaml file for a non-running, non-existing pod, it creates the required definition file. However, if we generate the definition file from a running pod, it also includes lots of tags from the live environment.</p>
<p>How can we get only the required elements when we generate the yaml definition from a running pod?</p>
<p>Is there any way to avoid getting the below details when we generate the pod yaml file from a running pod?</p>
<p>For example, after running the below command, it also generates a lot of elements that are not required.</p>
<pre><code>k get po nginxs14 -n=devs14 -o yaml>pod1.yaml
like:
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"run":"nginx"},"name":"nginxs14","namespace":"devs14"},"
creationTimestamp: "2021-04-24T11:09:56Z"
labels:
run: nginx
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:labels:
.: {}
f:run: {}
f:spec:
f:containers:
k:{"name":"nginx"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":9080,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:protocol: {}
f:readinessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:enableServiceLinks: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kubectl-client-side-apply
operation: Update
time: "2021-04-24T11:09:56Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ContainersReady"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Initialized"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
k:{"type":"Ready"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:containerStatuses: {}
f:hostIP: {}
f:phase: {}
f:podIP: {}
f:podIPs:
.: {}
</code></pre>
| user190245 | <blockquote>
<p>after running below command it also generates lot of not required elements</p>
</blockquote>
<p>A large part of the problem you describe is the <code>managedFields:</code>. I agree this is very verbose and mostly annoying output.</p>
<p>However, this is now fixed. If you upgrade to <code>kubectl</code> version 1.21+ this is <a href="https://github.com/kubernetes/kubernetes/issues/90066#issuecomment-814063884" rel="nofollow noreferrer">not shown by default</a>. Now you need to add <code>--show-managed-fields</code> if you want to see these fields.</p>
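<p>For example (shown as a sketch, using the pod from the question), the same export comes out without the noise on a 1.21+ client:</p>
<pre><code># managedFields are omitted by default with kubectl 1.21+
kubectl get po nginxs14 -n devs14 -o yaml > pod1.yaml

# opt back in if you ever need them
kubectl get po nginxs14 -n devs14 -o yaml --show-managed-fields
</code></pre>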
| Jonas |
<p>I'm refactoring a helm chart, and wanted to put some values from <code>deployment.yaml</code> to <code>values.yaml</code> and that value is</p>
<pre><code>hosts:
- {{ include "myApp.externalHostName" . | quote }}
</code></pre>
<p>but it gives me the error</p>
<pre><code>[ERROR] values.yaml: unable to parse YAML: error converting YAML to
JSON: yaml: invalid map key: map[interface {}]interface {}{"toJson
include \"myApp.externalHostName\" . | quote":interface {}(nil)}
[ERROR] templates/: cannot load values.yaml: error converting YAML to
JSON: yaml: invalid map key: map[interface {}]interface {}{"toJson
include \"myApp.externalHostName\" . | quote":interface {}(nil)}
</code></pre>
<p>it would work if I just used</p>
<pre><code>hosts:
- myExternalHostname.something
</code></pre>
<p>but is it possible to run include in values.yaml?</p>
| CptDolphin | <p>The <code>values.yaml</code> files are not subject to golang interpolation. If you need dynamic content, you'll need to update files inside the <code>templates</code> directory (which are subject to golang interpolation), or generate the <code>values.yaml</code> content using another mechanism</p>
<p>In this specific case, you may find yaml anchors to be helpful:</p>
<pre class="lang-yaml prettyprint-override"><code>myApp:
externalHostName: &externalHostName myapp.example.com
theIngressOrWhatever:
hosts:
- *externalHostName
</code></pre>
| mdaniel |
<p>I am running on prem kubernetes. I have a release that is running with 3 pods. At one time (I assume) I deployed the helm chart with 3 replicas. But I have since deployed an update that has 2 replicas.</p>
<p>When I run <code>helm get manifest my-release-name -n my-namespace</code>, it shows that the deployment yaml has replicas set to 2.</p>
<p>But it still has 3 pods when I run <code>kubectl get pods -n my-namespace</code>.</p>
<p><strong>What is needed (from a helm point of view) to get the number of replicas down to the limit I set?</strong></p>
<p><strong>Update</strong><br />
I noticed this when I was debugging a crash loop backoff for the release.</p>
<p>This is an example of what a <code>kubectl describe pod</code> looks like on one of the three pods.</p>
<pre>
Name: my-helm-release-7679dc8c79-knd9x
Namespace: my-namespace
Priority: 0
Node: my-kube-cluster-b178d4-k8s-worker-1/10.1.2.3
Start Time: Wed, 05 May 2021 21:27:36 -0600
Labels: app.kubernetes.io/instance=my-helm-release
app.kubernetes.io/name=my-helm-release
pod-template-hash=7679dc8c79
Annotations:
Status: Running
IP: 10.1.2.4
IPs:
IP: 10.1.2.4
Controlled By: ReplicaSet/my-helm-release-7679dc8c79
Containers:
my-helm-release:
Container ID: docker://9a9f213efa63ba8fd5a9e0fad84eb0615996c768c236ae0045d1e7bec012eb02
Image: dockerrespository.mydomain.com/repository/runtime/my-helm-release:1.9.0-build.166
Image ID: docker-pullable://dockerrespository.mydomain.com/repository/runtime/my-helm-release@sha256:a11179795e7ebe3b9e57a35b0b27ec9577c5c3cd473cc0ecc393a874f03eed92
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Tue, 11 May 2021 12:24:04 -0600
Finished: Tue, 11 May 2021 12:24:15 -0600
Ready: False
Restart Count: 2509
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-82gnm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-82gnm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-82gnm
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 10m (x3758 over 5d15h) kubelet Readiness probe failed: Get http://10.1.2.4:80/: dial tcp 10.1.2.4:80: connect: connection refused
Warning BackOff 15s (x35328 over 5d14h) kubelet Back-off restarting failed container</pre>
| Vaccano | <blockquote>
<p>What is needed (from a helm point of view) to get the number of replicas down to the limit I set?</p>
</blockquote>
<p>Your new pods need to reach a "healthy" state. Only then does the rollout complete and the pod count settle at your desired number of replicas.</p>
<p>First, you deployed 3 replicas. This is managed by a ReplicaSet.</p>
<p>Then you deployed a new revision with 2 replicas. A "rolling deployment" will be performed: pods with your new revision are created first, and the replicas of your old ReplicaSet are only scaled down once there are healthy instances of the new revision.</p>
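<p>To see where the rollout stands, you can (as a sketch, substituting your own names) inspect the rollout and the ReplicaSets behind the Deployment:</p>
<pre><code># shows whether the rollout is still waiting for new pods to become ready
kubectl rollout status deployment/my-helm-release -n my-namespace

# lists both the old and the new ReplicaSet with their replica counts
kubectl get replicasets -n my-namespace
</code></pre>
<p>If the pod shown above, which is stuck in <code>CrashLoopBackOff</code>, belongs to the new ReplicaSet, the rollout can never complete - which would explain why the old replicas are never scaled down.</p>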
| Jonas |
<p>I have my Pod manifest as below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod-nginx-container
spec:
containers:
- name: nginx-alpine-container-1
image: nginx:alpine
ports:
- containerPort: 80
</code></pre>
<p>And I can get a shell to the Container running my Nginx using <code>kubectl exec --stdin --tty pod-nginx-container -- /bin/sh</code></p>
<p>My question is: does Kubernetes always give a shell to the running container? I mean, suppose I have created my own image of a Tomcat webserver; when I use that image, will I still get a shell to log in to the container running Tomcat?</p>
| pjj | <h2>Kubernetes</h2>
<p>Kubernetes schedules <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="noreferrer">Pods</a> to nodes. A Pod consists of one or more containers - that are instantiated from container images.</p>
<h2>Container image</h2>
<p>A container image contains a command that will run as the main process, but it can also contain other binaries and even a full Linux "userland", e.g. Ubuntu with a shell and lots of tools.</p>
<p>Container images <em>can</em> be built from "scratch" with no software other than your app, but they typically contain some more software for your app to be runnable, e.g. <code>glibc</code>. See <a href="https://github.com/GoogleContainerTools/distroless" rel="noreferrer">distroless</a> for minimal base images that do not contain a <em>shell</em>.</p>
<h2>Conclusion</h2>
<blockquote>
<p>My question is: does Kubernetes always give a shell to the running container? I mean, suppose I have created my own image of a Tomcat webserver; when I use that image, will I still get a shell to log in to the container running Tomcat?</p>
</blockquote>
<p>Your container contains a shell only if you have built a shell into it - most likely by using a base image that contains a shell, e.g. alpine or ubuntu.</p>
<p>It depends on what you do in your <code>Dockerfile</code> before building the container image with <code>docker build</code>.</p>
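<p>As a small illustration (a sketch, the base images are just examples), it is the base image in the <code>Dockerfile</code> that decides whether a shell is present:</p>
<pre><code># nginx:alpine is based on Alpine Linux, which ships /bin/sh,
# so "kubectl exec ... -- /bin/sh" works
FROM nginx:alpine

# a distroless base image contains no shell at all,
# so an exec into /bin/sh would fail
# FROM gcr.io/distroless/static-debian11
</code></pre>
<p>The same applies to a custom Tomcat image: if its base image includes a shell (the official <code>tomcat</code> images, for example, are Debian based and do), you will still be able to get a shell into the running container.</p>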
| Jonas |
<p>I'm new to Kubeflow and k8s. I have set up a single-node k8s cluster and installed Kubeflow on it. I'm now trying the simple 'conditional pipeline' example from the "Kubeflow for Machine Learning" book, but I am getting a "cannot post /apis/v1beta1/experiments" error ...</p>
<pre><code>Reason: Not Found
HTTP response headers: HTTPHeaderDict({'x-powered-by': 'Express', 'content-security-policy': "default-src 'none'", 'x-content-type-options': 'nosniff', 'content-type': 'text/html; charset=utf-8', 'content-length': '164', 'date': 'Fri, 11 Jun 2021 20:47:13 GMT', 'x-envoy-upstream-service-time': '2', 'server': 'envoy'})
HTTP response body: <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot POST /apis/v1beta1/experiments</pre>
</body>
</html>
</code></pre>
<p>Any pointers on what could be going wrong?</p>
<p>If I do "kubectl get -A svc | grep dashboard", I only see Kubeflow central dashboard. Could this error be related to k8s dashboard not running?</p>
<p>This is the example I am trying:</p>
<p><a href="https://github.com/intro-to-ml-with-kubeflow/intro-to-ml-with-kubeflow-examples/blob/master/pipelines/ControlStructures.ipynb" rel="nofollow noreferrer">https://github.com/intro-to-ml-with-kubeflow/intro-to-ml-with-kubeflow-examples/blob/master/pipelines/ControlStructures.ipynb</a></p>
<p>Before this, I also tried the MNIST example below and faced the exact same issue -
<a href="https://github.com/anjuls/fashion-mnist-kfp-lab/blob/master/KF_Fashion_MNIST.ipynb" rel="nofollow noreferrer">https://github.com/anjuls/fashion-mnist-kfp-lab/blob/master/KF_Fashion_MNIST.ipynb</a></p>
<p>Finally, I tried modifying the kfp.Client() line to the following:
kfp.Client(host='http://127.0.0.1:8001').create_run_from_pipeline_func(conditional_pipeline, arguments={})</p>
<p>After this I'm getting an error related to 'healthz' -</p>
<pre><code>MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=8001): Max retries exceeded with url: /apis/v1beta1/healthz (Caused by ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')))
</code></pre>
<p>I ran the following:
kubectl logs ml-pipeline-ui-7ddcd74489-xrss8 -c ml-pipeline-ui -n kubeflow</p>
<p>It seems ml-pipeline is running on http://localhost:3000, so I modified the client call to the following:
client = kfp.Client(host='http://localhost:3000')</p>
<p>I still get an error - this time "connection refused".</p>
<pre><code>MaxRetryError: HTTPConnectionPool(host='localhost', port=3000): Max retries exceeded with url: /apis/v1beta1/healthz (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fffd1ca0f28>: Failed to establish a new connection: [Errno 111] Connection refused',))
</code></pre>
| soumeng78 | <p>I've made some progress. The issues are not fully resolved, but I can at least proceed with client creation, do a health check, and list existing pipelines.</p>
<p>I found that the ml-pipeline service is running on the following internal IP:</p>
<p>kubeflow service/ml-pipeline ClusterIP 172.19.31.229</p>
<p>I then used this IP in the kfp.Client() API - this resulted in an RBAC access issue. I then patched my k8s with the following, based on a hint from another issue -</p>
<pre><code>apiVersion: rbac.istio.io/v1alpha1
kind: ClusterRbacConfig
metadata:
name: default
spec:
mode: "OFF"
</code></pre>
<p>This resolved the issues I was facing with kfp.Client(). But now I'm facing the below error when I try to call create_experiment():</p>
<pre><code>ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Wed, 16 Jun 2021 22:40:54 GMT', 'x-envoy-upstream-service-time': '2', 'server': 'envoy', 'transfer-encoding': 'chunked'})
HTTP response body: {"error":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace.","message":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace.","code":3,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Invalid resource references for experiment. ListExperiment requires filtering by namespace.","error_details":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace."}]}
</code></pre>
| soumeng78 |
<p>I am learning about highly available distributed systems and some of the concepts that keep coming up are load balancing (Nginx) and container orchestration (Kubernetes). Right now my simplified understanding of them is as so:</p>
<h3>Nginx</h3>
<ul>
<li>Web server that handles Http requests</li>
<li>Performs load balancing via reverse proxy to other servers (usually done in a round robin manner)</li>
<li>Maps a single IP (the IP of the Nginx server) to many IPs (nodes which we are load balancing over).</li>
</ul>
<h3>Kubernetes</h3>
<ul>
<li>Container orchestration tool which keeps a defined state of a container cluster.</li>
<li>Maps a single IP (the IP of the control plane?) to many IPs (nodes which have a container instance running on them).</li>
</ul>
<p>So my question is, do we use both of these tools in conjunction? It seems like there is some overlap?</p>
<p>For example, if I was creating a NodeJS app to act as a microservice which exposes a REST API, would I just simply deploy my app in a Docker container, then let Kubernetes manage it? I would not need a load balancer like Nginx in front of my Kubernetes cluster?</p>
| nick2225 | <blockquote>
<p>So my question is, do we use both of these tools in conjunction? It seems like there is some overlap?</p>
</blockquote>
<p>You seem to have mixed a few concepts. Don't look too much at the number of IP addresses, but more at the <strong>role</strong> of the different components.</p>
<h2>Load Balancer / Gateway / Nginx</h2>
<p>You probably want some form of Gateway or reverse proxy with a <strong>static known IP address</strong> (and DNS name) so that traffic from the Internet can find its way to your services in the cluster. When using Kubernetes, your services commonly run in a local network, and the <strong>Gateway</strong> or reverse proxy is typically the way into your cluster.</p>
<h2>Kubernetes API / Control Plane</h2>
<p>This is an API for <strong>managing</strong> Kubernetes resources, e.g. deploying a new version of your apps. This API is only for management / administration; your customer traffic does not use it. You want strong authentication for this, usable only by you and your team. Pods in your cluster <em>can</em> use this API, but they need a <em>Service Account</em> and proper <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC Authorization</a>.</p>
| Jonas |
<p>This is my Pod manifest:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod-nginx-container
spec:
containers:
- name: nginx-alpine-container-1
image: nginx:alpine
ports:
- containerPort: 80
</code></pre>
<p>Below is output of my "kubectl describe pod" command:</p>
<pre><code>C:\Users\so.user\Desktop\>kubectl describe pod pod-nginx-container
Name: pod-nginx-container
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 15 Feb 2021 23:44:22 +0530
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.0.29
IPs:
IP: 10.244.0.29
Containers:
nginx-alpine-container-1:
Container ID: cri-o://01715e35d3d809bdfe70badd53698d6e26c0022d16ae74f7053134bb03fa73d2
Image: nginx:alpine
Image ID: docker.io/library/nginx@sha256:01747306a7247dbe928db991eab42e4002118bf636dd85b4ffea05dd907e5b66
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 15 Feb 2021 23:44:24 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sxlc9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-sxlc9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sxlc9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m52s default-scheduler Successfully assigned default/pod-nginx-container to minikube
Normal Pulled 7m51s kubelet Container image "nginx:alpine" already present on machine
Normal Created 7m50s kubelet Created container nginx-alpine-container-1
Normal Started 7m50s kubelet Started container nginx-alpine-container-1
</code></pre>
<p>I couldn't understand what the IP address mentioned in the "IPs:" field of this output is. I am sure this is not my Node's IP, so I am wondering what IP this is. And please note that I have not exposed a Service; in fact there is no Service in my Kubernetes cluster, so I am not able to figure this out.</p>
<p>Also, how are "Port" and "Host Port" different? From Googling I could understand a little bit, but if someone could explain with an example that would be great.</p>
<p><strong>NOTE:</strong> I have already Googled "explanation of kubectl describe pod command" and tried searching a lot, but I can't find my answers, so posting this question.</p>
| pjj | <h2>Pods</h2>
<p>A <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">pod</a> in Kubernetes is the smallest deployment unit. A pod is a group of one or more containers. The containers in a pod share storage and network resources.</p>
<h2>Pod networking</h2>
<p>In Kubernetes, each pod is assigned a <strong>unique IP address</strong>; this IP address is local to the cluster. Containers within the same pod use <code>localhost</code> to communicate with each other. Networking with other pods or services is done with <em>IP networking</em>.</p>
<p>When doing <code>kubectl describe pod <podname></code> you see the IP address for the pod.</p>
<p>See <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking" rel="nofollow noreferrer">Pod networking</a></p>
<h2>Application networking in a cluster</h2>
<p>A pod is a single instance of an application. You typically run an application as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> with one or more replicas (instances). When upgrading a Deployment with a new version of your container image, <strong>new pods</strong> are created - this means that all your instances get new IP addresses.</p>
<p>To keep a stable network address for your application, create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> - and always use the service name when sending traffic to other applications within the cluster. The traffic addressed to a service is load balanced to the replicas (instances).</p>
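<p>A minimal sketch of such a Service (the name is a placeholder, and it assumes your pods carry an <code>app: nginx</code> label - the pod in the question currently has no labels) could look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # must match the labels on your pods
  ports:
    - port: 80        # port exposed by the Service
      targetPort: 80  # containerPort of the pod
</code></pre>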
<h2>Exposing an application outside the cluster</h2>
<p>To expose an application to clients outside the cluster, you typically use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource - it typically represents a load balancer (e.g. cloud load balancer) with <em>reverse proxy</em> functionality - and route traffic for some specific paths to your services.</p>
| Jonas |
<p>I am unable to connect to our Kubernetes cluster. The <code>kubectl</code> command does not seem to take the configuration into account...</p>
<p>When I issue a <code>kubectl cluster-info</code> (or <code>kubectl get pods</code>)
I get the following error message:</p>
<blockquote>
<p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
</blockquote>
<p>I was suspecting that the <code>~/.kube/config</code> was pointing to my minikube but it is not the case:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS...==
server: https://10.95.xx.yy:6443
name: cluster.local
contexts:
- context:
cluster: cluster.local
namespace: xxx-cluster-xxx-xx-username
user: username
name: username-context
current-context: ""
kind: Config
preferences: {}
users:
- name: username
user:
client-certificate: .certs/username.crt
client-key: .certs/username.key
</code></pre>
<p>Surprisingly, the <code>$KUBECONFIG</code> environment variable is set to the correct path:</p>
<pre><code>KUBECONFIG=/Users/username/.kube/config
</code></pre>
<p>and the <code>kubectl config view</code> works fine (a.k.a. is not pointing to <code>localhost</code> but to <code>https://10.95.xx.yy:6443</code>)</p>
<p>Finally, I also try to specify the path to the config file when invoking <code>kubectl</code> (<code>kubectl get pods --kubeconfig=/Users/username/.kube/config</code>), but the error remains the same...</p>
| E. Jaep | <p>Your current context is unset, as seen with <code>current-context: ""</code>; if you were to run <code>kubectl --context username-context get pods</code> I would expect it to do more what you want. If that turns out to be the case, one can run <code>kubectl config use-context username-context</code> to set the <code>current-context</code> going forward</p>
| mdaniel |
<p>I am trying to list all the workloads/deployments running on our AKS clusters. I don't see an endpoint for this in the <a href="https://learn.microsoft.com/en-us/rest/api/aks/" rel="nofollow noreferrer">AKS API REST reference</a>, so how do I get the deployments etc.?</p>
| Sameer Mhaisekar | <p>AKS API is for managing clusters.</p>
<p>See <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="nofollow noreferrer">Kubernetes API</a> if you want to access anything <em>within</em> a cluster. E.g. the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#-strong-workloads-apis-strong-" rel="nofollow noreferrer">workloads</a>.</p>
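<p>As a sketch, once you have cluster credentials (e.g. via <code>az aks get-credentials</code>), the deployments are listed through the Kubernetes API / kubectl rather than the AKS management API:</p>
<pre><code># list all deployments in all namespaces
kubectl get deployments --all-namespaces

# or query the Kubernetes REST API directly through the local proxy
kubectl proxy &
curl http://127.0.0.1:8001/apis/apps/v1/deployments
</code></pre>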
| Jonas |
<p>Currently I do this:</p>
<pre><code>configMapGenerator:
- name: sql-config-map
files:
- "someDirectory/one.sql"
- "someDirectory/two.sql"
- "someDirectory/three.sql"
</code></pre>
<p>and I would like to do sth. like this:</p>
<pre><code>configMapGenerator:
- name: sql-config-map
files:
- "someDirectory/*.sql"
</code></pre>
<p>Is this somehow possible?</p>
| eventhorizon | <p>Nope.</p>
<p>See discussion around that feature in <a href="https://github.com/kubernetes-sigs/kustomize/issues/189#issuecomment-409042317" rel="nofollow noreferrer">comment on "configMapGenerator should allow directories as input"</a></p>
<p>The main reason:</p>
<blockquote>
<p>To move towards explicit dependency declaration, we're moving away from allowing globs in the kustomization file</p>
</blockquote>
| Jonas |
<p>In a GitOps setting, there are usually two repositories - a code repo and an environment repo. My understanding is that there are some security benefits in separating the repos so developers only need to be given access to the code repo, and environment repo's write access can be limited to only the CI/CD tools. As the environment repo is the source-of-truth in GitOps, this is claimed to be more secure as it minimizes human involvement in the process.</p>
<p>My questions are:</p>
<ol>
<li><p>If the assumption above is correct, what CI/CD tools should be given access to the environment repo? Is it just the pipeline tools such as Tekton (CI) and Flux (CD), or can other tools invoked by the pipelines be also included in this "trusted circle"? What are the best practices around securing the environment repo in GitOps?</p>
</li>
<li><p>What is the thought process around sync'ing intermediate / dynamic states of the cluster back to the environment repo, e.g., number of replicas in a deployment controlled by an HPA, network routing controlled by a service mesh provider (e.g., Istio), etc.? From what I have seen, most of the CD pipelines are only doing uni-directional sync from the environment repo to the cluster, and never the other way around. But there could be benefit in keeping some intermediate states, e.g., in case one needs to re-create other clusters from the environment repo.</p>
</li>
</ol>
| hai huang | <blockquote>
<p>there are usually two repositories - a code repo and an environment repo. My understanding is that there are some security benefits in separating the repos so developers only need to be given access to the code repo, and environment repo's write access can be limited to only the CI/CD tools.</p>
</blockquote>
<p>It is a good practice to have a separate <em>code repo</em> and <em>configuration repo</em> when practicing any form of <em>Continuous Delivery</em>. This is described in the "classical" <a href="https://rads.stackoverflow.com/amzn/click/com/0321601912" rel="nofollow noreferrer">Continuous Delivery</a> book. The reason is that the two repos change in different cycles, e.g. first the code is changed, and after a pipeline has verified the changes, an update to the config repo can be made, with e.g. the image digest.</p>
<p>The developer team should have access to both repos. They need to be able to change the code, and they need to be able to change the app configuration for different environments. A build tool, e.g. in a Tekton pipeline, may only need write access to the config repo, but read access to both repos.</p>
<blockquote>
<p>What is the thought process around sync'ing intermediate / dynamic states of the cluster back to the environment repo, e.g., number of replicas in a deployment controlled by an HPA, network routing controlled by a service mesh provider (e.g., Istio), etc.? From what I have seen, most of the CD pipelines are only doing uni-directional sync from the environment repo to the cluster, and never the other way around.</p>
</blockquote>
<p>Try to avoid sync'ing "current state" back to a Git repo; it will only complicate things. For you, it is only valuable to keep the "desired state" in a repo - it is useful to see e.g. who changed what and when - but also for disaster recovery or to create a new identical cluster.</p>
| Jonas |
<p>The duty of the replication controller in K8S/OpenShift is to ensure the actual state is the same as the desired state. So if the desired state is 2 Pods, it ensures that exactly 2 pods are created/running. If a pod fails for some reason, the replication controller starts a new pod to compensate for the failed pod.</p>
<p>A thing I want to confirm: if the Pod/Container exits with an error, will the replication controller care about the error code, find that the pod is failing due to an error, and hence decide not to start the pod any further? Please answer.</p>
| joven | <blockquote>
<p>A thing I want to confirm: if the Pod/Container exits with an error, will the replication controller care about the error code, find that the pod is failing due to an error, and hence decide not to start the pod any further?</p>
</blockquote>
<p>Errors can be shown in many different ways:</p>
<ul>
<li>Error code in the application log</li>
<li>Error code in message payload</li>
<li>Error code in http status code in response</li>
<li>Process exit code</li>
</ul>
<p>Only the <em>last</em> one - the process exit code - is helpful for the ReplicationController (or in newer Kubernetes, the ReplicaSet controller). If the process exits, the Pod is terminated and a new one will be created by the controller.</p>
<p>In addition, to mitigate the other cases, you can implement a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">LivenessProbe</a>, so that the Pod will be killed in the presence of another kind of error.</p>
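<p>A minimal sketch of such a probe (the path and port are assumptions about your app) added to the container spec:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz   # an endpoint your app must expose
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3   # restart the container after 3 consecutive failures
</code></pre>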
| Jonas |
<p>I have the following configuration for rewrite for user-service and it is supposed to remove either 'sso' or 'user' and use the rest of the path to redirect to the user-service</p>
<p>nginx-ingress-controller:0.32.0</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: user-service
namespace: pre-prod
annotations:
cert-manager.io/cluster-issuer: clusterissuer-selfsigned-default
ingress.kubernetes.io/rewrite-target: /$2/$4
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"
labels:
app: pre-prod
component: user-service
spec:
rules:
- host: example.com
http:
paths:
- path: /(user|sso)/(api|saml)(/|$)(.*)
backend:
serviceName: user-service
servicePort: 80
</code></pre>
<p>The path /sso/api/auth/sso/signin is supposed to be converted into /api/auth/sso/signin but the backend sends the response</p>
<pre><code>error: "Not Found"
message: "No message available"
path: "/sso/api/auth/sso/signin"
status: 404
timestamp: "2021-01-20T09:47:04.544+0000"
</code></pre>
<p><code>172.18.240.0 - - [20/Jan/2021:09:47:04 +0000] "GET /sso/api/auth/sso/signin HTTP/1.1" 404 161 "https://example.com/login" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" 693 0.023 [pre-prod-user-service-80] [] 172.18.64.24:9090 166 0.022 404 a5844078b332b882056516aa50b0eb2b</code></p>
<p>What could be the problem?</p>
<p>Thanks</p>
| Andrey Yaroshenko | <p>The annotation has a typo, in that it omitted the leading <code>nginx.</code> from <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="nofollow noreferrer"><code>nginx.ingress.kubernetes.io/rewrite-target:</code></a></p>
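<p>So, with the rest of the resource unchanged, the annotation should read:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$2/$4
</code></pre>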
| mdaniel |
<p>I am deploying pyspark in my AKS Kubernetes cluster using these guides:</p>
<ul>
<li><a href="https://towardsdatascience.com/ignite-the-spark-68f3f988f642" rel="nofollow noreferrer">https://towardsdatascience.com/ignite-the-spark-68f3f988f642</a></li>
<li><a href="http://blog.brainlounge.de/memoryleaks/getting-started-with-spark-on-kubernetes/" rel="nofollow noreferrer">http://blog.brainlounge.de/memoryleaks/getting-started-with-spark-on-kubernetes/</a></li>
</ul>
<p>I have deployed my driver pod as is explained in the links above:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
namespace: spark
name: my-notebook-deployment
labels:
app: my-notebook
spec:
replicas: 1
selector:
matchLabels:
app: my-notebook
template:
metadata:
labels:
app: my-notebook
spec:
serviceAccountName: spark
containers:
- name: my-notebook
image: pidocker-docker-registry.default.svc.cluster.local:5000/my-notebook:latest
ports:
- containerPort: 8888
volumeMounts:
- mountPath: /root/data
name: my-notebook-pv
workingDir: /root
resources:
limits:
memory: 2Gi
volumes:
- name: my-notebook-pv
persistentVolumeClaim:
claimName: my-notebook-pvc
---
apiVersion: v1
kind: Service
metadata:
namespace: spark
name: my-notebook-deployment
spec:
selector:
app: my-notebook
ports:
- protocol: TCP
port: 29413
clusterIP: None
</code></pre>
<p>Then I can create the spark cluster using the following code:</p>
<pre class="lang-py prettyprint-override"><code>import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
# Create Spark config for our Kubernetes based cluster manager
sparkConf = SparkConf()
sparkConf.setMaster("k8s://https://kubernetes.default.svc.cluster.local:443")
sparkConf.setAppName("spark")
sparkConf.set("spark.kubernetes.container.image", "<MYIMAGE>")
sparkConf.set("spark.kubernetes.namespace", "spark")
sparkConf.set("spark.executor.instances", "7")
sparkConf.set("spark.executor.cores", "2")
sparkConf.set("spark.driver.memory", "512m")
sparkConf.set("spark.executor.memory", "512m")
sparkConf.set("spark.kubernetes.pyspark.pythonVersion", "3")
sparkConf.set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
sparkConf.set("spark.kubernetes.authenticate.serviceAccountName", "spark")
sparkConf.set("spark.driver.port", "29413")
sparkConf.set("spark.driver.host", "my-notebook-deployment.spark.svc.cluster.local")
# Initialize our Spark cluster, this will actually
# generate the worker nodes.
spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
sc = spark.sparkContext
</code></pre>
<p>It works.</p>
<p>How can I create an external pod that can execute a python script that lives in my my-notebook-deployment? I can do it from my terminal:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec my-notebook-deployment-7669bb6fc-29stw python3 myscript.py
</code></pre>
<p>But I want to be able to automate this by executing the command from inside another pod.</p>
| J.C Guzman | <p>In general you can spin up a new pod with a specified command running in it, i.e.:</p>
<pre><code>kubectl run mypod --image=python3 --command -- <cmd> <arg1> ... <argN>
</code></pre>
<p>In your case you would need to provide the code of myscript.py to the pod (e.g. by mounting a ConfigMap with the script content) or build a new container image based on the python image and add the script to it.</p>
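<p>A rough sketch of the ConfigMap approach (names are placeholders, and it assumes the script has no other file dependencies): first package the script with <code>kubectl create configmap myscript --from-file=myscript.py -n spark</code>, then run a pod that mounts it:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: run-myscript
  namespace: spark
spec:
  restartPolicy: Never          # run the script once, then stop
  containers:
    - name: runner
      image: python:3
      command: ["python3", "/scripts/myscript.py"]
      volumeMounts:
        - name: script
          mountPath: /scripts   # the ConfigMap is mounted here
  volumes:
    - name: script
      configMap:
        name: myscript
</code></pre>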
| pb100 |
<p>As I understand it, most databases enable the use of replicas that can take over from a leader in case the leader is unavailable.</p>
<p>I'm wondering about the necessity of having these replicas in a Kubernetes environment, when using, say, a StatefulSet. Once the pod becomes unresponsive, Kubernetes will restart it, right? And the PVC will make sure the data isn't lost.</p>
<p>Is it that leader election is a faster process than bringing up a new application?</p>
<p>Or is it that the only advantage of the replicas is to provide load balancing for read queries?</p>
| vmayer | <blockquote>
<p>As I understand it, most databases enable the use of replicas that can take over from a leader in case the leader is unavailable.</p>
</blockquote>
<blockquote>
<p>I'm wondering about the necessity of having these replicas in a Kubernetes environment, when using, say, a StatefulSet.</p>
</blockquote>
<p>There has been a move to <em>distributed databases</em> from previous <em>single node databases</em>. Distributed databases typically run using 3 or 5 replicas / instances in a cluster. The primary purpose for this is High Availability and fault tolerance to e.g. node or disk failure. This is the same if the database is run on Kubernetes.</p>
<blockquote>
<p>the PVC will make sure the data isn't lost.</p>
</blockquote>
<p>The purpose of PVCs is to decouple the application configuration from the selection of storage system. This allows you to deploy the same application on Google Cloud, AWS and Minikube without any configuration differences, although you will use different storage systems. This does not change how the storage systems work.</p>
<blockquote>
<p>Is it that leader election is a faster process than bringing up a new application?</p>
</blockquote>
<p>Many different things can fail, the node, the storage system or the network can be partitioned so that you cannot reach a certain node.</p>
<p>Leader election is just one piece of the mitigation against these problems in a clustered setup; you also need replication of all data in a consistent way. The <a href="https://raft.github.io/" rel="nofollow noreferrer">Raft consensus algorithm</a> is a common solution for this in modern distributed databases.</p>
<blockquote>
<p>Or is it that the only advantage of the replicas is to provide load balancing for read queries?</p>
</blockquote>
<p>This might be an advantage in distributed databases, yes. But this is seldom the primary reason for using them, in my experience.</p>
| Jonas |
<p>From the documentation I see</p>
<blockquote>
<p>NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting :.</p>
</blockquote>
<blockquote>
<p>LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created</p>
</blockquote>
<p>But it doesn't mention when to choose one over the other. One disadvantage of NodePort I can think of is security (opening a port in firewall rules); I was wondering if there are any additional considerations for choosing one over the other.</p>
| Always_Beginner | <p>They're the same underlying risk (with respect to what you wrote as "disadvantages") because a <code>type: LoadBalancer</code> <strong>is</strong> a <code>type: NodePort</code> which just additionally works with the cluster's configured cloud-provider to provision a cloud load balancer which points to the allocated NodePort and keeps the Node membership in sync with the cloud load balancer's target hosts.</p>
<p>So, to answer your question: use <code>type: LoadBalancer</code> when you want the cloud-provider to provision a cloud resource for you and manage its lifecycle; use a <code>type: NodePort</code> when you have other means of getting external traffic to the allocated port(s) on the Nodes.</p>
| mdaniel |
<p>I am using multiple Ingress resources on my GKE cluster, say 2 Ingresses in different namespaces. I create the Ingress resource as shown in the yaml below. With the annotations used in the below yaml, I clearly state that I am using the GCE controller that comes with GKE (<a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce</a>). But every time I create an Ingress I get a different IP. For instance, sometimes I get 133.133.133.<strong><em>133</em></strong> and other times I get 133.133.133.<strong><em>134</em></strong>, and it alternates between only these two IPs (probably only two because of a quota limit). This is a problem because I just want to reserve one IP and load balance/terminate multiple apps on this IP only.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: gce
name: http-ingress
spec:
backend:
serviceName: http-svc
servicePort: 80
</code></pre>
| Suhas Chikkanna | <p>In your Ingress resource you can specify that you need the Load Balancer to use a specific IP address with the <code>kubernetes.io/ingress.global-static-ip-name</code> annotation, like so:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.global-static-ip-name: static-ip-name
name: http-ingress
spec:
backend:
serviceName: http-svc
servicePort: 80
</code></pre>
<p>You will need to create a global static IP first using the gcloud tool. See step 2(b) here: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip</a>.</p>
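<p>For reference, reserving the address is a one-liner (the name is whatever you then put in the annotation):</p>
<pre><code>gcloud compute addresses create static-ip-name --global
</code></pre>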
| Carlos Gomez |
<p>Let's say you are using either <em><strong>ServiceFabric</strong></em> or <em><strong>Kubernetes</strong></em>, and you are hosting a transaction data warehouse microservice (maybe a bad example, but suppose all it does is a simple CQRS architecture consisting of the Id of the sender, the receiver, the date and the payment amount, with writes and reads into the DB).</p>
<p>For the sake of the argument, say that this microservice needs to be replicated among different geographic locations to ensure that the data will be recoverable if one database goes down.</p>
<p>Now the naïve approach that I'm thinking of is to have an event which gets fired when the transaction is received, and the orchestrator microservice will expect to receive an event-processed acknowledgment within a specific timeframe.
But the question remains: what about the database? What will happen when we scale out the microservices and new microservice instances are raised up?
Will they write to the same database?</p>
<p>One solution could be to put the database within the docker container and let it be owned by each replica - is this a good solution?</p>
<p>Please share your thoughts and best practices.</p>
| Alphas Supremum | <blockquote>
<p>What will happen when we scale out the microservices and new microservice instances are raised up? Will they write to the same database?</p>
</blockquote>
<p>Yes, the instances of your service all share the same logical database. To achieve high availability, you typically run a distributed database cluster, but it appears as a single database system to your service.</p>
<blockquote>
<p>One solution could be to put the database within the docker container and let it be owned by each replica - is this a good solution?</p>
</blockquote>
<p>No, you typically want all the instances of your service to <strong>see the same consistent data</strong>. E.g. a read-request sent to two different instances of your service should respond with the same data.</p>
<p>If the database becomes your bottleneck, you can mitigate that by implementing caching, sharding your data, or serving read-requests from dedicated read-instances.</p>
| Jonas |
<p>I'm using Helm on a Kubernetes cluster and have installed the stable <a href="https://github.com/helm/charts/tree/master/stable/rabbitmq-ha" rel="nofollow noreferrer">rabbitmq-ha chart</a>. I would like to push data to an exchange in rabbitmq from Logstash. I am trying to use the <a href="https://github.com/helm/charts/tree/master/stable/logstash" rel="nofollow noreferrer">logstash stable chart</a>.</p>
<p>The rabbitmq-ha chart has created a secret that contains the password to connect to it. I'd like to be able to get that password and include it in the logstash configuration so that logstash can connect to it.</p>
<p>The ConfigMap for logstash is templated using items from the values file.</p>
<pre><code> outputs:
main: |-
output {
rabbitmq {
exchange => "exchange_name"
exchange_type => "fanout"
host => "rabbitmq-ha.default.svc.cluster.local"
password => "????"
}
}
</code></pre>
<p>I don't want to hard-code the password in the values file because that's not great for security and it would mean duplicating the configuration for each environment. I can't see a way to get logstash to read the password from an environment variable.</p>
<p>How do people normally do this?</p>
<p>I could use <a href="https://github.com/futuresimple/helm-secrets" rel="nofollow noreferrer">helm secrets</a> to store the whole <code>outputs</code> configuration and include hard-coded passwords. That would avoid having plain-text passwords in my repository but still doesn't feel like the best way.</p>
| Stephen Paulger | <p>It turns out that it is possible to get logstash to read values from environment variables since at least logstash version 5.0: <a href="https://www.elastic.co/guide/en/logstash/current/environment-variables.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/logstash/current/environment-variables.html</a></p>
<p>So my values file can look like</p>
<pre><code> outputs:
main: |-
output {
rabbitmq {
exchange => "exchange_name"
exchange_type => "fanout"
host => "rabbitmq-ha.default.svc.cluster.local"
password => "${RMQ_PASSWORD}"
}
}
</code></pre>
<p>The logstash chart allows environment variables to be added to the statefulset using an <code>extraEnv</code> value. The extraEnv allows values to come from secrets.</p>
<pre><code> extraEnv:
- name: RMQ_PASSWORD
valueFrom:
secretKeyRef:
name: rabbitmq-ha
key: rabbitmq-password
</code></pre>
| Stephen Paulger |
Subsets and Splits