Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I'm using GKE. For a single zone it is free of charge (only instance cost). We are using it for DEV, but for other environments like QA/DEV/Production do we need a separate GKE cluster, or will a different namespace inside the current GKE work? Which is the best way in terms of cost and security?</p>
| Jithin Kumar S | <p>This depends on your requirements. Both options are viable.</p>
<blockquote>
<p>for other environments like QA/DEV/Production do we need a separate GKE cluster</p>
</blockquote>
<p>In a professional environment for a large enterprise, this would be different clusters in different <a href="https://cloud.google.com/vpc" rel="nofollow noreferrer">VPCs</a> (one <a href="https://cloud.google.com/vpc" rel="nofollow noreferrer">VPC</a> per environment), and you might use separate projects as well, though that is not strictly needed.</p>
<blockquote>
<p>will a different namespace inside the current GKE work</p>
</blockquote>
<p>This is cheaper, but you get less separation. Only you know what you need.</p>
| Jonas |
<p>What is the best approach to create the ingress resource that interacts with an ELB in the target deployment environment that runs on Kubernetes?</p>
<p>As we all know, there are different cloud providers and many types of settings related to the deployment of your ingress resource, depending on your target environment: AWS, OpenShift, plain vanilla K8s, Google Cloud, Azure.</p>
<p>On cloud deployments like Amazon, Google, etc., ingresses also need special annotations, most of which are common to all microservices in need of an ingress.</p>
<p>If we also deploy a mesh like Istio on top of K8s, then we need to use an Istio gateway with the ingress. If we use OCP, then it has a special kind called “routes”.</p>
<p>I'm looking for the best solution that uses more standard options, decreasing the differences between platforms when deploying ingress resources.</p>
<p><strong>So maybe the best approach is to create an operator to deploy the Ingress resource because of the many different setups here?</strong></p>
<p>Is it important to create some generic component to deploy the Ingress while staying cloud agnostic?</p>
<p>How do other companies deploy their ingress resources to the k8s cluster?</p>
| Tal Avissar | <blockquote>
<p>What is the best approach to create the ingress resource that interacts with an ELB in the target deployment environment that runs on Kubernetes?</p>
</blockquote>
<p>On AWS the common approach is to use ALB, and the <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller" rel="nofollow noreferrer">AWS ALB Ingress Controller</a>, but it has its own drawbacks in that it creates <em>one ALB per Ingress resource</em>.</p>
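<p>For illustration, a minimal sketch of an <code>Ingress</code> handled by that controller could look like the following (the annotations and names are assumptions and depend on the controller version and your setup):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: alb              # picked up by the AWS ALB Ingress Controller
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip     # route directly to Pod IPs
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
</code></pre>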
<blockquote>
<p>If we also deploy a mesh like Istio, then we need to use an Istio gateway with the ingress.</p>
</blockquote>
<p>Yes, then the situation is different, since you will use <code>VirtualService</code> from Istio or use <a href="https://aws.amazon.com/app-mesh/" rel="nofollow noreferrer">AWS App Mesh</a> - that approach looks better, and you will not have an <code>Ingress</code> resource for your apps.</p>
<blockquote>
<p>I'm looking for the best solution that uses more standard options, decreasing the differences between platforms when deploying ingress resources.</p>
</blockquote>
<p>Yes, this is in the intersection between the cloud provider infrastructure and your cluster, so there are unfortunately many different setups here. It also depends on whether your ingress gateway is within the cluster or outside of the cluster.</p>
<p>In addition, the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource just became GA (stable) in the most recent Kubernetes release, 1.19.</p>
| Jonas |
<p>I am planning to deploy an ASP.NET application that uses sessions on Azure Kubernetes. How do I make sure that an incoming request goes to the same pod where the session was created?</p>
| One Developer | <p>It is recommended that apps deployed on Kubernetes follow the design of <a href="https://12factor.net/" rel="nofollow noreferrer">The Twelve-Factor App</a>.</p>
<p>Everything will be easier if the app is <em>stateless</em> and <em>shares nothing</em> with other instances. See <a href="https://12factor.net/processes" rel="nofollow noreferrer">Twelve-Factor App - Processes</a>.</p>
<blockquote>
<p>Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database.</p>
</blockquote>
<blockquote>
<p>Some web systems rely on “sticky sessions” – that is, caching user session data in memory of the app’s process and expecting future requests from the same visitor to be routed to the same process. Sticky sessions are a violation of twelve-factor and should never be used or relied upon. Session state data is a good candidate for a datastore that offers time-expiration, such as Memcached or Redis.</p>
</blockquote>
<p>Using <a href="https://redis.io/" rel="nofollow noreferrer">Redis</a> is one way to store temporary data belonging to the user.</p>
<p>For authentication, I would recommend using OpenID Connect with JWT tokens in the <code>Authorization: Bearer <token></code> header. See e.g. <a href="https://azure.microsoft.com/en-us/services/active-directory/external-identities/b2c/" rel="nofollow noreferrer">Azure Active Directory B2C</a> for an example of an OpenID Connect provider.</p>
| Jonas |
<p>I'm working with microservice architecture using Azure AKS with Istio.</p>
<p>I configure all, and developers work with microservices to create the web platform, apis, etc.</p>
<p>But with this, I have a doubt. There is a lot of YAML to configure for Istio and Kubernetes, e.g. <code>Ingress</code>, <code>VirtualService</code>, <code>Gateway</code> etc.</p>
<p>Is this configuration part of the developers' responsibility? Should they create and configure it? Or are these configuration files the responsibility of the DevOps team, so that developers are only responsible for creating the Node.js project, and the DevOps team configures it to run in the K8s architecture?</p>
| mpanichella | <p>This is a good but difficult question.</p>
<p>Kubernetes has changed what the DevOps role means, as described in the article <a href="https://thenewstack.io/devops-before-and-after-kubernetes/" rel="nofollow noreferrer">DevOps Before and After Kubernetes</a>.</p>
<p>As you say, there is a lot of YAML to handle with Kubernetes and Istio. Now, DevOps teams need to help automate the process of delivering apps to Kubernetes:</p>
<blockquote>
<p>For an app team, containerizing a typical medium-sized, microservices-based app would require several thousands of lines of K8s manifest files to be written and managed. Each new deployment would need a rebuild of container images and potential modifications of several manifest files. Clearly, DevOps in today’s world will be different from DevOps in the pre-Kubernetes era.</p>
</blockquote>
<blockquote>
<p>These new-world DevOps teams may do well with an automation process for delivery to Kubernetes so that efficiency gains and economic benefits can be realized sooner while also maintaining reliability and speed. Such automation along with a standardized process will further enable a clean hand-off interface between the IT teams managing the infrastructure and the app teams delivering apps to K8s. For enterprises pursuing agility and frictionless delivery at scale, finding the shortest path to Kubernetes will be at the heart of DevOps in times to come.</p>
</blockquote>
<p>This can be done in different ways, e.g. building abstractions or setting up CI/CD automation. In the end, how you do this depends on how much your organization invests in this automation.</p>
<p>The presentation <a href="https://www.infoq.com/presentations/kubernetes-adoption-foundation/" rel="nofollow noreferrer">Kubernetes is Not Your Platform, It's Just the Foundation</a> is very interesting about creating abstractions on-top of Kubernetes to be an effective platform for app developers.</p>
<p>In an organization with <em>little</em> automation, the developers will get a Namespace and do all the YAML themselves. But in an organization with a high degree of automation and investment in the Kubernetes platform, a platform team typically creates a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">Kubernetes CRD</a>, e.g. <code>kind: Application</code>, and a controller that configures the Istio <code>VirtualService</code> and <code>Deployment</code> in an <strong>opinionated</strong> way to reduce the <em>cognitive load</em> for the developers - so they have very few YAML fields to manage. An example of such a solution is the <a href="https://doc.nais.io/nais-application/nais.yaml/min-example" rel="nofollow noreferrer">NAV application Yaml</a> - they even have fields for provisioning PostgreSQL databases or Redis caches.</p>
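<p>As a sketch of what such an opinionated abstraction might look like (the API group, kind and fields below are hypothetical, just for illustration):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: platform.example.com/v1   # hypothetical CRD owned by the platform team
kind: Application
metadata:
  name: myapp
spec:
  image: registry.example.com/myapp:1.2.3
  port: 8080
  replicas: 2
  ingress:
    host: myapp.example.com
</code></pre>
<p>A controller written by the platform team would then expand this single resource into the underlying <code>Deployment</code>, <code>Service</code> and Istio <code>VirtualService</code> in its opinionated way.</p>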
| Jonas |
<p>I am learning Kubernetes and quite new to it.</p>
<p>let us say a pod A needs to use some persistent volume in node A (i.e. meaning the container in the pod A will write some data into some path in node A). Then after some time, the pod A dies and a new pod B is scheduled to node B. Then, can pod B somehow remotely access that persistent volume in node A so that it can still work properly?</p>
<p>In other words, can Kubernetes provide some local persistent volume in a particular node that can be used by a pod and can still be accessed although the pod can be rescheduled to another node?</p>
| laventy | <p>It is recommended to use <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volumes</a> and <code>Persistent Volume Claims</code> when an application deployed on Kubernetes needs storage.</p>
<p>PV and PVC are abstractions and can be backed by several different storage systems, each with its own properties/capabilities.</p>
<h2>Volumes located off-Node</h2>
<p>The most common backing of Kubernetes PV at cloud providers is <a href="https://aws.amazon.com/ebs" rel="nofollow noreferrer">AWS Elastic Block Storage</a> and <a href="https://cloud.google.com/persistent-disk" rel="nofollow noreferrer">Google Persistent Disk</a>. These systems are not local volumes on the Kubernetes Nodes, but are accessed over the network. From the application's view, the volume is accessed through the filesystem, like a local volume. This has the advantage that these volumes <strong>are accessible from any Node</strong> within the Availability Zone.</p>
<h2>Volumes located on-Node</h2>
<p>However, the cloud providers also offer Local Disks that are physical disks on the Node. Those are much more expensive; you allocate larger volumes but you also get much better disk performance. The typical usage for those is <em>distributed databases</em>, e.g. deployed as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a>, that actively replicate the data between each other, typically using the <a href="https://raft.github.io/" rel="nofollow noreferrer">Raft Consensus Algorithm</a> - this means that they can tolerate losing an instance (including the disk) and recover from that state by creating a new instance that starts to catch up with the data replication.</p>
| Jonas |
<p>We have defined K8s liveness and readiness probes in our <strong>deployment</strong> resource (the liveness probe is defined there), and we need to access this liveness probe with the <code>client-go</code> lib. How can we do that?</p>
<p>I've tried it with the <code>client-go</code> lib</p>
<p><a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go</a></p>
<p>as follows:</p>
<pre><code>client.Discovery().RESTClient().Get()
</code></pre>
<p>I've also tried to play around with the Go library but did not find any <strong>deployment</strong> property on <code>client.CoreV1()</code>. However, I do find <code>service</code>, <code>pod</code> etc. What am I missing here?</p>
<p><code>PodList, err := client.CoreV1().Pods("mynamespace").List(metav1.ListOptions{LabelSelector: "run=liveness-app"})</code></p>
<p>In the end, I need to get the pod liveness status according to the liveness probe which is defined in the deployment, i.e. <strong>live or dead</strong>.</p>
| PJEM | <p>How to do this depends on what you want to do.</p>
<h2>Deployment</h2>
<p>The <code>Deployment</code> contains a PodTemplate that is used for creating each replica.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp
name: myapp
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myimage
name: myapp
livenessProbe:
# your desired liveness check
</code></pre>
<p>You can get the desired PodTemplate from deployments using client-go</p>
<p>For example:</p>
<pre class="lang-golang prettyprint-override"><code>clientset := kubernetes.NewForConfigOrDie(config)
deploymentClient := clientset.AppsV1().Deployments("mynamespace")
deployment, err := deploymentClient.Get("myapp", metav1.GetOptions{})
for _, container := range deployment.Spec.Template.Spec.Containers {
container.LivenessProbe // add your logic
}
</code></pre>
<p><strong>Note:</strong> The <code>Deployment</code> only contains the desired PodTemplate, so to look at any status, you have to look at the created Pods.</p>
<h2>Pods</h2>
<p>You can list the Pods created from the deployment by using the same labels as in the <em>selector</em> of the <code>Deployment</code>.</p>
<p>Example list of Pods:</p>
<pre class="lang-golang prettyprint-override"><code>pods, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{
LabelSelector: "app=myapp",
})
// check the status for the pods - to see Probe status
for _, pod := range pods.Items {
pod.Status.Conditions // use your custom logic here
for _, container := range pod.Status.ContainerStatuses {
container.RestartCount // use this number in your logic
}
}
</code></pre>
<p>The <code>Status</code> part of a <code>Pod</code> contains <code>conditions:</code> with some <code>Probe</code>-information and <code>containerStatuses:</code> with <code>restartCount:</code>, also illustrated in the Go example above. Use your custom logic to use this information.</p>
<p>A Pod is restarted whenever the <em>livenessProbe</em> fails.</p>
<p>Example of a <code>Pod Status</code></p>
<pre class="lang-yaml prettyprint-override"><code>status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-09-15T07:17:25Z"
status: "True"
type: Initialized
containerStatuses:
- containerID: docker://25b28170c8cec18ca3af0e9c792620a3edaf36aed02849d08c56b78610dec31b
image: myimage
imageID: docker-pullable://myimage@sha256:a432251b2674d24858f72b1392033e0d7a79786425555714d8e9a656505fa08c
name: myapp
restartCount: 0
</code></pre>
| Jonas |
<p>I have 2 pods running. They are;</p>
<ol>
<li>mail-services-pod</li>
<li>redis-pod</li>
</ol>
<p>I need to make sure that the Redis server is up and running (via redis-pod) before creating the mail-services-pod, as it is dependent on redis-pod.</p>
<p>I am new to kubernetes and would like to know what are the best ways to implement this check.</p>
<p>Cheers</p>
| Shanka Somasiri | <blockquote>
<p>I am new to kubernetes and would like to know what are the best ways to implement this check</p>
</blockquote>
<p>Kubernetes is a distributed environment and instances will change, e.g. their addresses change when new versions of your apps are deployed.</p>
<p>It is important that your app is <em>resilient</em>, e.g. to network issues, and that your app properly retries if a connection fails.</p>
<p>When resilient connections are properly handled by your app, the start order of your apps is no longer an issue.</p>
| Jonas |
<p>I am trying to implement Prometheus for my application hosted on Azure kubernetes. Currently the application does not have any authentication enabled, Prometheus is working fine.</p>
<p>However I would be enabling the Azure AD authentication to protect the application. In this case, would it break the Prometheus metric collection?</p>
| One Developer | <blockquote>
<p>However I would be enabling the Azure AD authentication to protect the application. In this case, would it break the Prometheus metric collection?</p>
</blockquote>
<p>This depends on how the application is implemented.</p>
<p>It is not common to also add authentication to the metrics endpoint. But sometimes the metrics endpoint is served on another port, e.g. the <a href="https://docs.spring.io/spring-boot/docs/1.5.3.RELEASE/reference/html/production-ready-monitoring.html#production-ready-customizing-management-server-port" rel="nofollow noreferrer">Management Server Port</a> for Spring Boot.</p>
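<p>As an illustration, with an annotation-based Prometheus scrape configuration (a common convention, not a Kubernetes standard - the port and path below are assumptions), the Pod can point Prometheus at the separate management port so that only the application port is protected by Azure AD:</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8081"                  # separate management port, not behind the app's auth
    prometheus.io/path: "/actuator/prometheus"  # assumed metrics path
</code></pre>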
| Jonas |
<p>I've got my K8s cluster which I need to update with new deployments. If I had my Jenkins container inside the cluster itself, would this be bad practice? The other option is to have a separate server that SSHs into my remote K8s cluster and handles new deployments from there.</p>
<p>I've looked at this jenkins plugin <a href="https://plugins.jenkins.io/kubernetes-cd/" rel="nofollow noreferrer">https://plugins.jenkins.io/kubernetes-cd/</a> to handle the CI/CD process.</p>
| thatguyjono | <p>It is a good practice to use CI/CD - good start. I wouldn't say it is a "bad practice" to run Jenkins as a container on Kubernetes - but my experience is that it does not work very well, mostly because Jenkins is not designed for being run as a container on Kubernetes.</p>
<p>There are more modern alternatives that are designed for containers and Kubernetes. <a href="https://jenkins-x.io/" rel="nofollow noreferrer">Jenkins X</a> is the next-gen version of Jenkins that is designed to be run on Kubernetes, see <a href="https://medium.com/@jdrawlings/serverless-jenkins-with-jenkins-x-9134cbfe6870" rel="nofollow noreferrer">Serverless Jenkins with Jenkins X</a> on how it is different from Jenkins.</p>
<p>Jenkins X is <a href="https://jenkins-x.io/blog/2020/03/11/tekton/" rel="nofollow noreferrer">built on top of Tekton</a>, another Kubernetes-native CI/CD <a href="https://tekton.dev/" rel="nofollow noreferrer">project</a>, and Tekton can be run standalone as well, using <a href="https://github.com/tektoncd/pipeline" rel="nofollow noreferrer">Tekton Pipelines</a>, <a href="https://github.com/tektoncd/triggers" rel="nofollow noreferrer">Tekton Triggers</a> and <a href="https://github.com/tektoncd/dashboard" rel="nofollow noreferrer">Tekton Dashboard</a>. Tekton has a very active community, backed by <a href="https://cloud.google.com/tekton" rel="nofollow noreferrer">Google</a>, Red Hat and more companies, providing a great CI/CD solution designed to work on Kubernetes.</p>
| Jonas |
<p>I am learning concepts in Kubernetes. When I was going through the Deployment and ReplicaSet concepts, I got one doubt: <code>can ReplicaSets and Deployments be independent of each other?</code></p>
| vishwa | <p>You <em>can</em> create a <code>ReplicaSet</code> without creating a <code>Deployment</code>, but nowadays it does not make much sense. You will almost always use only <code>Deployment</code> for deploying an application, and for every change, e.g. updating the <code>image:</code>, it will manage the creation of a new <code>ReplicaSet</code> for you.</p>
| Jonas |
<p>I have written a custom resource as part of the deployment. As part of this in the reconcileKind function, I have written the logic to create pod as shown below using the Kubernetes APIs in Go itself.</p>
<p><a href="https://i.stack.imgur.com/bEQUu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bEQUu.png" alt="Image" /></a></p>
<p>I would like to convert this to knative serving (instead of creating the POD which will be always running) so that I can make of KPA feature. I am aware of creating knative serving using the .yaml way. But I would like to create it by using the Kubernetes API itself. I did search in the official documentation, but everything explained was using the .yaml way.</p>
<p>So I'm curious whether can we achieve knative serving by directly using Kubernetes APIs?</p>
| coders | <blockquote>
<p>How to create the knative serving using golang?</p>
</blockquote>
<p>You need to use the Go client for Knative Serving, i.e. the client type corresponding to the <code>corev1.Pod</code> client that you used in your code.</p>
<p>The Go client for <a href="https://github.com/knative/serving/blob/master/pkg/client/clientset/versioned/clientset.go" rel="nofollow noreferrer">Knative v1.Serving</a> is in the Knative repository.</p>
<p>Instead of <code>CoreV1()</code> in your code, you can use <code>ServingV1()</code> from the Knative Go client.</p>
<p>But I would recommend using YAML manifests unless you have custom needs.</p>
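<p>For reference, a minimal Knative Service manifest looks roughly like this (name and image are placeholders); the Go client creates the same object programmatically:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - image: registry.example.com/myapp:latest
</code></pre>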
| Jonas |
<h3>Scenario</h3>
<p>I have a PersistentVolume with <code>volumeMode</code> as <code>Block</code>. It is defined as:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: block-vol
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
local:
path: /dev/sdb # this path on the host specified below is used as a device mount
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <my-host>
persistentVolumeReclaimPolicy: Retain
storageClassName: block-storage
volumeMode: Block
</code></pre>
<p>When I mount this on a <code>statefulset</code> with a <code>VolumeClaimTemplate</code>, I specify its <code>storage</code> field as <code>1Gi</code>. However, when exec'd into the deployed pod, I see that the block size is more than <code>1Gi</code> (it is the actual size of that device on the physical machine).</p>
<p><code>StatefulSet</code> YAML:</p>
<pre class="lang-yaml prettyprint-override"><code>
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeDevices:
- name: rawdev0
devicePath: /dev/kdb0
volumeClaimTemplates:
- metadata:
name: rawdev0
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: block-storage
volumeMode: Block
resources:
requests:
storage: 1Gi
</code></pre>
<p>I have used <code>blockdev</code> to find the size of block in bytes:</p>
<pre><code>root@nginx-0:/# ls -lhrt /dev/kdb0
brw-rw----. 1 root disk 8, 16 Jan 13 19:49 /dev/kdb0
root@nginx-0:/# blockdev --getsize64 /dev/kdb0
536870912000 #size of block in bytes
</code></pre>
<h3>Question</h3>
<p>What does the <code>storage</code> field signify in this case?</p>
| S.Au.Ra.B.H | <p>Kubernetes can't do much about the storage size for <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local volumes</a>. The admin that created the <code>PersistentVolume</code> must set a proper size; for granular sizing, he/she should probably create a dedicated partition instead of mapping the local volume to a directory.</p>
<p>The storage size in the <code>PersistentVolumeClaim</code> is a <em>request</em>, so that the app at least gets a volume of that size.</p>
| Jonas |
<p>I am new to YAML and I would like to understand the following piece of a .yaml file:</p>
<pre><code>version: "3.7"
services:
influxdb:
image: influxdb:alpine
environment:
INFLUXDB_DB: ft_services
INFLUXDB_ADMIN_USER: admin
INFLUXDB_ADMIN_PASSWORD: admin
volumes:
- datainfluxdb:/var/lib/influxdb
deploy:
restart_policy:
condition: on-failure
</code></pre>
<p>As far as I know, there are 3 types of data that can be used in a .yaml file: scalars, sequences and mappings. For example, <code>version: "3.7"</code> is a scalar. But I am not sure what the following are:</p>
<pre><code>volumes:
  - datainfluxdb:/var/lib/influxdb
</code></pre>
<pre><code>environment:
  INFLUXDB_DB: ft_services
  INFLUXDB_ADMIN_USER: admin
  INFLUXDB_ADMIN_PASSWORD: admin
</code></pre>
<p>I don't really understand what type of data are these and how do they work, can someone give me a hint?</p>
| kubo | <h2>Lists</h2>
<p>example</p>
<pre><code>volumes:
  - data: /var/lib
    other-field: "example"
  - data: /etc
</code></pre>
<p>Each indented line beginning with a <code>-</code> above is the beginning of a <em>List Item</em>. There are two items in the list in the example, and the whole list is named <code>volumes</code>. The example is a List of Maps, but a List of Scalars is also valid.</p>
<h2>Maps</h2>
<p>example</p>
<pre><code>environment:
  INFLUXDB_DB: ft_services
  INFLUXDB_ADMIN_USER: admin
  INFLUXDB_ADMIN_PASSWORD: admin
</code></pre>
<p>As you wrote, this is a Map with Key-Value pairs, and the whole Map is named <code>environment</code>.</p>
<h2>Scalars</h2>
<p>As you wrote, there are also scalars of various types. A value within quotes like <code>"3.7"</code> is a <code>string</code>.</p>
| Jonas |
<p>I have the following scenario,</p>
<p>I have two deployments on Kubernetes; my first deployment needs to be shut down due to some issue,
and user requests need to be routed to the second deployment. The first deployment will then shut
down once the second is up and running.</p>
<p>How would I route user requests from the first to the second?
I know that there are readiness and liveness checks, but how would I specifically
specify in the script to send the requests to the second deployment?</p>
<p>Based on my limited knowledge I believe there might be some other ways to re-route the traffic from the first deployment to second deployment.
Also, my user request is a continuous video image being sent from the user to the Kubernetes system.</p>
<p>Thanks, help is highly appreciated.</p>
| abaair davis | <p>With a "second deployment", I assume that you mean a new version of your app, in Kubernetes this would be a change of the <code>Deployment</code> resource to use a different <code>image:</code>.</p>
<p>In Kubernetes, you usually have a few instances ("replicas") of your app running. And when a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer"><code>Deployment</code> is updated</a> to contain a new version (e.g. a new image), then Kubernetes automatically does a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rollover-aka-multiple-updates-in-flight" rel="nofollow noreferrer">Rolling Update</a>. This means that it will create new instances from the new image - one by one - and at the same time terminate instances from the old version of the app. This means that you, during a short period of time, will have both versions of the app running at the same time. User requests will be routed to any running instance. This can be changed by using a different <em>Deployment Strategy</em>, e.g. "Recreate", or you can create a more advanced setup.</p>
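<p>For reference, the strategy is declared on the <code>Deployment</code> itself; a minimal sketch (the values are only an illustration):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  strategy:
    type: RollingUpdate      # default: new Pods are rolled out gradually
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  # alternatively:
  # strategy:
  #   type: Recreate         # terminate all old Pods before creating the new ones
</code></pre>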
<blockquote>
<p>my user request is a continuous video image being sent from the user to the Kubernetes system</p>
</blockquote>
<p>Requests to apps running on Kubernetes should be designed in an <em>idempotent</em> way - such that the client can retry the request if the connection is interrupted.</p>
| Jonas |
<p>I'm at the finish line of this tutorial on my Linux machine with minikube: <a href="https://tekton.dev/docs/getting-started/" rel="nofollow noreferrer">https://tekton.dev/docs/getting-started/</a>. But something went wrong and I don't get the expected <code>echo</code> result.</p>
<p>In order to track the TaskRun progress run:</p>
<pre><code>➜ TWOC tkn task start hello && sleep 5 && kubectl get pods && tkn taskrun list
TaskRun started: hello-run-rjd2l
In order to track the TaskRun progress run:
tkn taskrun logs hello-run-rjd2l -f -n default
NAME READY STATUS RESTARTS AGE
twoc-backend-local-deployment-55b494d4cb-fjz6v 3/3 Running 12 7d22h
twoc-backend-local-deployment-55b494d4cb-vdtv5 3/3 Running 12 7d22h
NAME STARTED DURATION STATUS
hello-run-5f4qc --- --- ---
hello-run-5zck9 --- --- ---
hello-run-8sdmx --- --- ---
hello-run-bvhdg --- --- ---
hello-run-cdhz8 --- --- ---
hello-run-frbwf --- --- ---
hello-run-pzvbz --- --- ---
hello-run-q57p9 --- --- ---
hello-run-rjd2l --- --- ---
hello-run-tpnt7 --- --- ---
➜ TWOC kubectl describe taskrun hello-run-5zck9
Name: hello-run-5zck9
Namespace: default
Labels: <none>
Annotations: <none>
API Version: tekton.dev/v1beta1
Kind: TaskRun
Metadata:
Creation Timestamp: 2021-01-06T17:34:43Z
Generate Name: hello-run-
Generation: 1
Managed Fields:
API Version: tekton.dev/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:generateName:
f:spec:
.:
f:resources:
f:serviceAccountName:
f:taskRef:
.:
f:name:
f:status:
.:
f:podName:
Manager: kubectl-create
Operation: Update
Time: 2021-01-06T17:34:43Z
Resource Version: 180093
Self Link: /apis/tekton.dev/v1beta1/namespaces/default/taskruns/hello-run-5zck9
UID: a9353809-44c0-4864-b131-f1ab52ac080d
Spec:
Resources:
Service Account Name:
Task Ref:
Name: hello
Events: <none>
➜ TWOC tkn taskrun logs --last -f
Error: task hello create has not started yet or pod for task not yet available
➜ TWOC kubectl describe task hello
Name: hello
Namespace: default
Labels: <none>
Annotations: <none>
API Version: tekton.dev/v1beta1
Kind: Task
Metadata:
Creation Timestamp: 2021-01-06T16:28:46Z
Generation: 1
Managed Fields:
API Version: tekton.dev/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:spec:
.:
f:steps:
Manager: kubectl-create
Operation: Update
Time: 2021-01-06T16:28:46Z
API Version: tekton.dev/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2021-01-06T17:34:07Z
Resource Version: 180053
Self Link: /apis/tekton.dev/v1beta1/namespaces/default/tasks/hello
UID: 4dc3e52e-4407-4921-8365-7e8845eb8c6b
Spec:
Steps:
Args:
Hello World!
Command:
echo
Image: ubuntu
Name: hello
Events: <none>
➜ TWOC git:(master) ✗ kubectl get pods --namespace tekton-pipelines
NAME READY STATUS RESTARTS AGE
tekton-dashboard-6884b7b896-qtx4t 1/1 Running 3 8d
tekton-pipelines-controller-7c5494d584-d6gkn 1/1 Running 5 8d
tekton-pipelines-webhook-59c94c5c6d-nh8wc 1/1 Running 3 8d
➜ TWOC git:(master) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
twoc-backend-local-deployment-55b494d4cb-fjz6v 3/3 Running 9 7d20h
twoc-backend-local-deployment-55b494d4cb-vdtv5 3/3 Running 9 7d20h
</code></pre>
| Vassily | <p>This listing of the TaskRuns:</p>
<pre><code>NAME STARTED DURATION STATUS
hello-run-5f4qc --- --- ---
hello-run-5zck9 --- --- ---
hello-run-8sdmx --- --- ---
</code></pre>
<p>and no corresponding created Pods indicates that your Pipeline Controller does not work properly. Inspect the logs of your controller to see if there are any related issues, e.g. with <code>kubectl logs tekton-pipelines-controller-7c5494d584-d6gkn</code>.</p>
<p>This error from the logs:</p>
<blockquote>
<p>Kind=Task failed: Post "https://tekton-pipelines-webhook.tekton-pipelines.svc:443/?timeout=30s": dial tcp 10.101.106.201:443: connect: connection refused</p>
</blockquote>
<p>indicates that there are some connectivity problems.</p>
<p>When I followed the guide with Minikube on my machine, it worked without problems.</p>
| Jonas |
<p>The <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/" rel="nofollow noreferrer">helm documentation</a> suggests to <strong>recreate a pod</strong> by setting variable metadata values.</p>
<p>For example:</p>
<pre><code>kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  [...]
</code></pre>
<p>But there are <strong>situations</strong> when a pod is <strong>not</strong> recreated:</p>
<ul>
<li>A pod is erroneous in state <code>CrashLoopBackOff</code></li>
<li>Only Deployment Metadata has changed</li>
</ul>
<p>I would like to know <strong>what events</strong> trigger a pod recreation:</p>
<ul>
<li>Why is the pod in state <code>CrashLoopBackOff</code> not restarted?</li>
<li>Why are not all parts of the spec considered to recreate the pod?</li>
</ul>
<p><strong>Edit</strong></p>
<p>The <code>CrashLoopBackOff</code> is an application problem.
But if a new image (containing the bugfix) is provided, the pod should be restarted without the need to kill it explicitly.</p>
<p>Is there a reason not to restart the <code>CrashLoopBackOff</code> pod?</p>
| Matthias M | <p>The <em>template</em> in a <code>Deployment</code> is a <code>PodTemplate</code>. Every time the PodTemplate is changed, a new ReplicaSet is created, and it creates new Pods according to the number of replicas using the PodTemplate.</p>
<pre><code>kind: Deployment
spec:
  template:
    # any change here will lead to new Pods
</code></pre>
<p>Every time a new Pod is created from a template, it will be identical to the previous Pods.</p>
<p>A <code>CrashLoopBackOff</code> is a Pod-level problem, e.g. it may be a problem with the application.</p>
<blockquote>
<p>But if a new image (containing the bugfix) is provided, the pod should be restarted without the need to kill it explicitly.</p>
</blockquote>
<p>If a new image is provided, it should have its own unique name. That means that whenever you change the image, you have to change the image name. A change of the image name is a change in the PodTemplate, so it will always create new Pods - and delete but not reuse old Pods.</p>
| Jonas |
<p>I had a broken mongo container that was not initializing, and I suspected that I could fix it by cleaning all the storage. So, I deleted the persistent storage being used, but now every new storage that I create is in the "Lost" phase, and it's not found during the mongo POD creation.</p>
<p>This is the .yml file that I'm using to create the PV:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: standard
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
  creationTimestamp: "2018-08-11T09:19:29Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    environment: test
    role: mongo
  name: mongo-persistent-storage-mongo-db-0
  namespace: default
  resourceVersion: "122299922"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/mongo-persistent-storage-mongo-db-0
  uid: a68de459-9d47-11e8-84b1-42010a800138
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  volumeMode: Filesystem
  volumeName: pvc-a68de459-9d47-11e8-84b1-42010a800135
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  phase: Bound
</code></pre>
<p>This is the error that I get when restarting the mongo POD:</p>
<pre><code>error while running "VolumeBinding" filter plugin for pod "mongo-db-0": could not find v1.PersistentVolume "pvc-a68de459-9d47-11e8-84b1-42010a800135"
</code></pre>
<p>I already tried changing the PV name and ids, but it didn't work.</p>
| Hugo Sartori | <blockquote>
<p>This is the .yml file that I'm using to create the PV</p>
</blockquote>
<p>It looks like you use a manifest that binds to a specific PV.</p>
<p>How about if you remove the unique fields and let the cluster dynamically provision a new PV for you?</p>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    environment: test
    role: mongo
  name: mongo-persistent-storage-mongo-db-0
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  volumeMode: Filesystem
</code></pre>
| Jonas |
<p>Hello Kubernetes Experts,</p>
<p>Trying to get a better understanding here.</p>
<p>I have created a deployment with a regular deployment YAML and a service YAML.
The service is NodePort; I then created an Ingress and pointed it at the service.</p>
<p>Tried to access the service and it works as expected on the default port of 80 on the nginx ingress.</p>
<p>Next, I created the same deployment and service file. The only exception here was that instead of NodePort I chose ClusterIP. Created an Ingress and pointed it at the service.</p>
<p>Tried to access the service and it simply fails with the nginx home page and does not do any routing to my application.</p>
<p>I understand that nodeport is what exposes the application to the external world.
But then I'm using Ingress to attain the same functionality.</p>
<p>Do we really need to set the service as NodePort even if we use Ingress?</p>
<p>Or is something terribly wrong with my yaml files. I tried reading about it and could not get any relevant explanation.</p>
<p>Thank you,
Anish</p>
| anish anil | <p>First, the <code>Service</code> and <code>Ingress</code> resources work a bit differently across cloud providers. E.g. on Google Cloud Platform and <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">AWS</a>, you need to use a <code>NodePort</code> service when using <code>Ingress</code>, but on e.g. OpenShift <code>ClusterIP</code> works.</p>
<p>Mostly, the reason is that the <em>Load Balancer</em> is <strong>located outside</strong> of your cluster (this is not the case on the OpenShift environment where I work).</p>
<p>From <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">Google Cloud documentation</a>, use <code>NodePort</code> for load balancing but <code>ClusterIP</code> if your load balancer is "container native".</p>
<blockquote>
<p>In the Service manifest, you must use type: NodePort unless you're using container native load balancing. If using container native load balancing, use the type: ClusterIP.</p>
</blockquote>
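<p>A minimal sketch of such a Service (names and ports are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort      # use ClusterIP instead for container-native load balancing or an in-cluster ingress controller
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
</code></pre>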
| Jonas |
<p>I am trying to mount a local folder as a PersistentVolume and use it in one of the pods, but it seems there is a problem with the process and the pod stays in the "Pending" status.</p>
<p>The following is my pv yaml file:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-web
  labels:
    type: local
spec:
  storageClassName: mlo-web
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  local:
    path: ${MLO_REPO_DIR}/web/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - mlo-node
</code></pre>
<p>and pvc yaml file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
  namespace: mlo-dev
  labels:
    type: local
spec:
  storageClassName: mlo-web
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>and the deployment yaml file :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  namespace: mlo-dev
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: xxxxxx/web:latest
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: webdir
          mountPath: /service
        ...
      volumes:
      - name: webdir
        persistentVolumeClaim:
          claimName: pvc-web
</code></pre>
<p>I found that the pod is always in "Pending" status:</p>
<pre><code>web-deployment-d498c7f57-4cfbg 0/1 Pending 0 26m
</code></pre>
<p>and when I check the pod status using "kubectl describe", the following is the result:</p>
<pre><code>Name: web-deployment-d498c7f57-4cfbg
Namespace: mlo-dev
Priority: 0
Node: <none>
Labels: app=web
pod-template-hash=d498c7f57
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/web-deployment-d498c7f57
Containers:
web:
Image: xxxxxx/web:latest
Port: 3000/TCP
Host Port: 0/TCP
Command:
npm
run
mlo-start
Environment:
NODE_ENV: <set to the key 'NODE_ENV' of config map 'env-config'> Optional: false
WEBPACK_DEV_SERVER: <set to the key 'webpack_dev_server' of config map 'env-config'> Optional: false
REDIS_URL_SESSION: <set to the key 'REDIS_URL' of config map 'env-config'> Optional: false
WORKSHOP_ADDRESS: <set to the key 'WORKSHOP_ADDRESS' of config map 'env-config'> Optional: false
USER_API_ADDRESS: <set to the key 'USER_API_ADDRESS' of config map 'env-config'> Optional: false
ENVCUR_API_ADDRESS: <set to the key 'ENVCUR_API_ADDRESS' of config map 'env-config'> Optional: false
WIDGETS_API_ADDRESS: <set to the key 'WIDGETS_API_ADDRESS' of config map 'env-config'> Optional: false
PROGRAM_BULL_URL: <set to the key 'REDIS_URL' of config map 'env-config'> Optional: false
PROGRAM_PUBSUB: <set to the key 'REDIS_URL' of config map 'env-config'> Optional: false
PROGRAM_API_ADDRESS: <set to the key 'PROGRAM_API_ADDRESS' of config map 'env-config'> Optional: false
MARATHON_BULL_URL: <set to the key 'REDIS_URL' of config map 'env-config'> Optional: false
MARATHON_API_ADDRESS: <set to the key 'MARATHON_API_ADDRESS' of config map 'env-config'> Optional: false
GIT_API_ADDRESS: <set to the key 'GIT_API_ADDRESS' of config map 'env-config'> Optional: false
GIT_HTTP_ADDRESS: <set to the key 'GIT_HTTP_ADDRESS' of config map 'env-config'> Optional: false
LOG_URL: <set to the key 'LOG_URL' of config map 'env-config'> Optional: false
LOGGER_PUBSUB: <set to the key 'REDIS_URL' of config map 'env-config'> Optional: false
AUTH0_CLIENT_ID: <set to the key 'AUTH0_CLIENT_ID' of config map 'env-config'> Optional: false
AUTH0_DOMAIN: <set to the key 'AUTH0_DOMAIN' of config map 'env-config'> Optional: false
AUTH0_CALLBACK_URL: <set to the key 'AUTH0_CALLBACK_URL' of config map 'env-config'> Optional: false
AUTH0_LOGOOUT_RETURN: <set to the key 'AUTH0_LOGOOUT_RETURN' of config map 'env-config'> Optional: false
AUTH0_CLIENT_SECRET: <set to the key 'auth0-client-secret' in secret 'env-secret'> Optional: false
SESSION_SECRET: <set to the key 'session-secret' in secret 'env-secret'> Optional: false
Mounts:
/service from webdir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-w9v7j (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
webdir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-web
ReadOnly: false
default-token-w9v7j:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-w9v7j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 30s (x2 over 30s) default-scheduler 0/1 nodes are available: 1 node(s) had volume node affinity conflict.
</code></pre>
<p>The error message I found is :</p>
<pre><code> Warning FailedScheduling 30s (x2 over 30s) default-scheduler 0/1 nodes are available: 1 node(s) had volume node affinity conflict.
</code></pre>
<p>Do you know where my problem is? Many Thanks!</p>
| Ken Tsoi | <p>You don't seem to have a Node that matches your affinity requirement.</p>
<p>Remove the affinity requirement on your <code>PersistentVolume</code>:</p>
<p>Remove this part:</p>
<pre><code>nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - mlo-node
</code></pre>
<p>and only use (and change <code>local</code> to <code>hostPath</code>):</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-web
  labels:
    type: local
spec:
  storageClassName: mlo-web
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /absolute-path/web/
</code></pre>
<p>This is similar to the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a> example, also using Minikube.</p>
| Jonas |
<p>I want to set a boolean variable in configMap (or secret):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: mlo-stage
data:
  webpack_dev_server: false
</code></pre>
<p>But when I apply it, I get the following error:</p>
<pre><code>The request is invalid: patch: Invalid value: "map[data:map[webpack_dev_server:false] metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{ blah blah blah}]]]": unrecognized type: string
</code></pre>
<p>I have tried to change the value to Off/No/False, all having the same problem.</p>
<p>It seems that the values of the keys in the data map can only be strings. I have tried changing the value to "false"; the YAML file is then OK, but the variable becomes a string and not a boolean.</p>
<p>what should I do if I want to pass a boolean as value?</p>
| Ken Tsoi | <p>Values in a <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="noreferrer">ConfigMap</a> must be key-value string values or files.</p>
<p>Change:</p>
<pre><code>data:
  webpack_dev_server: false
</code></pre>
<p>To:</p>
<pre><code>data:
  webpack_dev_server: "false"
</code></pre>
<p>To your question:</p>
<blockquote>
<p>what should I do if I want to pass a boolean as value?</p>
</blockquote>
<p>You may handle this in the application, transforming from <code>string</code> to <code>bool</code>.</p>
| Jonas |
<p>Assume I have a Python application or a JBoss application.
I can set up my SSL certificate at the server level.</p>
<p>For instance in Python 3 using <code>SimpleHTTPServer</code>:</p>
<pre class="lang-py prettyprint-override"><code>def main():
key_file = "/etc/letsencrypt/live/mydomain.fr/privkey.pem"
cert_file = "/etc/letsencrypt/live/mydomain.fr/fullchain.pem"
port = 9443
httpd = HTTPServer(('0.0.0.0', port), SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket,
keyfile=key_file,
certfile=cert_file, server_side=True)
httpd.serve_forever()
</code></pre>
<p>However, if our webapp is deployed in Kubernetes or OpenShift, we can keep our application on HTTP and use a Kubernetes ingress (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">spec.tls</a>) or an OpenShift route (<a href="https://docs.openshift.com/container-platform/4.6/rest_api/network_apis/route-route-openshift-io-v1.html#specification" rel="nofollow noreferrer">spec.tls</a>) to expose the app over HTTP<strong>s</strong> and route to a service and then a pod over HTTP.</p>
<p>What would be the pros and cons of the 2 solutions?</p>
<p>Can I consider the traffic encrypted inside the cluster (like pod to pod) when using Kubernetes service IPs/pod IPs, even though we are on HTTP?</p>
| scoulomb | <p>Kubernetes is often used for apps that are composed of multiple services, e.g. a <em>microservice architecture</em>. In such an architecture, the HTTPS connection to the "app" is terminated at the gateway, typically the Kubernetes Ingress load balancer, and then, depending on what URL the request has, it is forwarded to the appropriate Pod within the cluster.</p>
<p>The case you describe is only a single Pod / Deployment. For such small use cases, it is probably not worth using a full Kubernetes cluster. Kubernetes is typically used for larger environments where you have many nodes to handle your workload.</p>
<p>A benefit of using Kubernetes in a larger environment is that it is very good at separating <em>infrastructure</em> from the <em>applications</em>. See e.g. <a href="https://thenewstack.io/devops-before-and-after-kubernetes/" rel="nofollow noreferrer">DevOps Before and After Kubernetes</a>. As an example, the developers of your app can focus on implementing features and let the infrastructure handle the rest. In your case, this means that the developers do not need to handle SSL certificates like <code>privkey.pem</code> and <code>fullchain.pem</code> - this can instead be handled outside the app and changed independently.</p>
<blockquote>
<p>Can I consider the traffic encrypted inside the cluster (like pod to pod) when using Kubernetes service IPs/pod IPs, even though we are on HTTP?</p>
</blockquote>
<p>Pod to Pod traffic is not encrypted unless you or your cluster has configuration for SSL / HTTPS. But Pod to Pod traffic is internal traffic within the Kubernetes cluster and it is typically within a private IP-subnet. That said, you can add a <em>service mesh</em> product like e.g. <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> to get strong encryption for Pod to Pod using <em>mTLS</em>, also including authentication with certificates - but this is still managed by the <em>infrastructure</em> outside the app-container.</p>
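<p>For illustration, with Istio installed, mesh-wide mTLS between Pods can be enforced with a <code>PeerAuthentication</code> resource; a minimal sketch:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # placing it in the root namespace applies it mesh-wide
spec:
  mtls:
    mode: STRICT            # only accept mutual-TLS traffic between sidecars
</code></pre>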
| Jonas |
<p>I want to pass a list object in the data section of configMap in application YAML file.</p>
<p>I have the following list in the properties file:</p>
<pre><code>abc.management.provider[0].url=https://example.com
abc.management.provider[0].username=John
abc.management.provider[1].url=https://example2.com
abc.management.provider[1].username=Targerian
</code></pre>
<p>YAML file:</p>
<pre><code>data:
  abc_management:
    provider:
      - url: "https://example.com"
        username: "John"
      - url: "https://example2.com"
        username: "Targerian"
</code></pre>
<p>I'm getting this error: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap: Data: ReadString: expects " or n,.</p>
<p>what should I do?</p>
| Manoj Singh | <blockquote>
<p>what should I do?</p>
</blockquote>
<p>This mostly depends on how your application reads the configuration.</p>
<p>If it works for you, you can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">create the <code>ConfigMap</code> directly</a> from your properties file:</p>
<pre><code>kubectl create configmap app-config --from-file=app.properties
</code></pre>
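<p>The application can then read the file by mounting the ConfigMap as a volume; a minimal sketch of the Pod spec (names and paths are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:latest
    volumeMounts:
    - name: config
      mountPath: /config     # app.properties appears as /config/app.properties
  volumes:
  - name: config
    configMap:
      name: app-config
</code></pre>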
| Jonas |
<p>I was using AWS ECS Fargate for running my application. I am migrating to AWS EKS. When I used ECS, I deployed an ALB to route requests to my service in the ECS cluster.</p>
<p>In Kubernetes, I read this doc <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer</a>, and it seems that Kubernetes itself has a <code>LoadBalancer</code> service type. And it seems that it creates an external hostname and IP address.</p>
<p>So my question is: do I need to deploy an AWS ALB? If not, how can I put this auto-generated hostname in Route53? Does it change if I redeploy the service?</p>
| Joey Yi Zhao | <p>You don't strictly need an AWS ALB for apps in your EKS cluster, but you probably want it.</p>
<p>When adopting Kubernetes, it is handy to manage some infrastructure parts from the Kubernetes cluster in a similar way to how you manage apps, and in some cases there is a tight coupling between the app and the configuration of your load balancer; therefore it makes sense to manage the infrastructure the same way.</p>
<p>A Kubernetes <code>Service</code> of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">type LoadBalancer</a> corresponds to a network load balancer (also known as L4 load balancer). There is also Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> that corresponds to an application load balancer (also known as L7 load balancer).</p>
<p>To use an ALB or <code>Ingress</code> in Kubernetes, you also need to install an Ingress Controller. For AWS you should install <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller" rel="nofollow noreferrer">AWS Load Balancer Controller</a>, this controller now also provides features in case you want to use a network load balancer, e.g. by using IP-mode or expose services using an Elastic IP. Using a pre-configured IP should help with using Route53.</p>
<p>See the EKS docs about <a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html" rel="nofollow noreferrer">EKS network load balancing</a> and <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">EKS application load balancing</a></p>
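<p>As an example sketch, a <code>Service</code> handled by the AWS Load Balancer Controller in IP-mode might look like this (the annotations depend on the controller version, so treat them as an assumption and check the controller docs):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
</code></pre>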
| Jonas |
<p>A Kubernetes cluster should not contain stateful services like databases because of scalability, recovery and operation of the storage. Would the use of Ceph be an alternative to this problem?</p>
| moinster | <p>A Kubernetes cluster can contain stateful services. Some examples are a Redis cache or <a href="https://www.cockroachlabs.com/product/" rel="nofollow noreferrer">CockroachDB</a> - but they should be a <em>distributed service</em>.</p>
<p>Ceph is a storage solution alternative.</p>
| Jonas |
<p>I am new to Kubernetes, just doing a little R&D on K8s.
I was checking out different deployment strategies like rolling update, recreate, blue-green and canary. If I am correct, the idea behind canary deployment is rolling out a new version to a set of users. Here is my question: let's say my team has developers and a testing team. Whenever the testing team tries to access the application, it should redirect to the new version of the application; is this possible? Or is canary only used for having two versions of the application running at the same time behind one service?</p>
| Sugatur Deekshith S N | <h2>Canary Deployment</h2>
<blockquote>
<p>If I am correct, the idea behind canary deployment is rolling out a new version to a set of users. Here is my question: let's say my team has developers and a testing team.</p>
</blockquote>
<p>The term <em>Canary Deployment</em> does not have a precise definition as far as I know. But it usually means that you deploy a new version of your app and only let a small fraction of your traffic hit the new version, e.g. 5% or 15% - and then have a <em>Canary Analyzer</em> (e.g. <a href="https://netflixtechblog.com/automated-canary-analysis-at-netflix-with-kayenta-3260bc7acc69" rel="noreferrer">Kayenta</a>) analyze the metrics for the new and the old version - and then make an <em>automatic decision</em> to route all your traffic to the new version or to roll back the deployment.</p>
<p>The good thing with this is the high degree of automation - humans does not have to monitor the metrics after the deployment. And if there was a bug in the new version, only a small fraction of your customers were affected. But this is also challenging since you need a certain amount of traffic for a good statistical ground.</p>
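<p>With a service mesh, the traffic split itself can be expressed declaratively. A sketch using an Istio <code>VirtualService</code> (it assumes a <code>DestinationRule</code> that defines the <code>v1</code> and <code>v2</code> subsets):</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 95
    - destination:
        host: myapp
        subset: v2    # the canary version
      weight: 5
</code></pre>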
<h2>Route based on User property</h2>
<blockquote>
<p>Whenever the testing team tries to access the application, it should redirect to the new version of the application; is this possible?</p>
</blockquote>
<p>What you want here is to route the traffic to a specific version, based on a property of the user, e.g. from the authentication token.</p>
<p>You need a <em>service mesh</em> like e.g. <a href="https://istio.io/" rel="noreferrer">Istio</a> and base your authentication on <a href="https://en.wikipedia.org/wiki/JSON_Web_Token" rel="noreferrer">JWT</a> e.g. <a href="https://openid.net/connect/" rel="noreferrer">OpenID Connect</a> to do this in Kubernetes.</p>
<p>From the <a href="https://istio.io/latest/docs/reference/config/security/request_authentication/" rel="noreferrer">Istio documentation</a>, you need to create a <code>RequestAuthentication</code> and an <code>AuthorizationPolicy</code> for your app.</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "issuer-foo"
  - issuer: "issuer-bar"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  rules:
  - from:
    - source:
        requestPrincipals: ["issuer-foo/*"]
    to:
    - operation:
        hosts: ["example.com"]
  - from:
    - source:
        requestPrincipals: ["issuer-bar/*"]
    to:
    - operation:
        hosts: ["another-host.com"]
</code></pre>
<p>The <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/#allow-requests-with-valid-jwt-and-list-typed-claims" rel="noreferrer">JWT and list-typed claims</a> part of the documentation describes how to specify rules for a specific user name (<code>sub</code>) or a property / group of users with <code>claims</code>. E.g.</p>
<pre><code>  when:
  - key: request.auth.claims[groups]
    values: ["test-group"]
</code></pre>
| Jonas |
<p>If desperate, one can consider Argo Workflows as a programming language implemented in YAML and using Kubernetes as a back-end.</p>
<ul>
<li>A procedure can be defined using <code>steps:</code></li>
<li>Functions are Templates with arguments coming in two flavours:
<ul>
<li>Parameters, which are strings</li>
<li>Artifacts, which are files shared by some tool, such as S3 or NFS</li>
</ul>
</li>
<li>There is flow control
<ul>
<li>Conditionals are implemented by <code>when:</code></li>
<li>Iterators are implemented by <code>withSequence:</code> and <code>withItems:</code></li>
<li>Recursion is possible by Templates calling themselves</li>
</ul>
</li>
</ul>
<p>The templates map somewhat directly onto Kubernetes YAML specs. Parameters appear to be shared via annotations and artifacts are shared via native Kubernetes functionality.</p>
<p>How is the flow-control implemented? What features of Kubernetes does Argo use to accomplish this? Does it have something to do with the Kubernetes Control Plane?</p>
| Seanny123 | <p>Argo Workflows is implemented with custom <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">Kubernetes Custom Resources</a>, e.g. its own yaml manifest types. For every <em>custom resource</em> there is an associated custom pod that acts as a <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">Kubernetes Controller</a> with the logic.</p>
<p>The custom controller may create other resources or Pods, watch their execution status in the status fields, and then implement its workflow logic accordingly, e.g. evaluate the results and follow the declared <code>when:</code> expressions depending on them.</p>
<p><em>I have more experience using <a href="https://github.com/tektoncd/pipeline" rel="nofollow noreferrer">Tekton Pipelines</a>, but it works the same way as Argo Workflows. If you are interested in implementing similar things, I recommend starting with <a href="https://github.com/kubernetes-sigs/kubebuilder" rel="nofollow noreferrer">Kubebuilder</a> and reading <a href="https://book.kubebuilder.io/" rel="nofollow noreferrer">The Kubebuilder book</a>.</em></p>
| Jonas |
<p>Within same stateful set afaik you can interact between particular pods just by referencing it directly, like this - <code>pod-{0..N-1}.my_service.my_namespace.svc.cluster.local</code>.
(some more info here: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id</a>).</p>
<p>However in my case I have 2 different stateful sets, and I want to be able <code>statefullset1-pod-0</code> from 1st stateful set to interact with <code>statefullset2-pod-0</code> from 2nd stateful set(and also <code>statefullset1-pod-1</code> with <code>statefullset2-pod-1</code>, and so on). Is it possible? If yes, can you please provide example configuration?</p>
| Ruslan Akhundov | <blockquote>
<p>However in my case I have 2 different stateful sets, and I want to be able statefullset1-pod-0 from 1st stateful set to interact with statefullset2-pod-0 from 2nd stateful set(and also statefullset1-pod-1 with statefullset2-pod-1, and so on). Is it possible? If yes, can you please provide example configuration?</p>
</blockquote>
<p>Yes, your apps can access the other <code>StatefulSet</code> the same way they access any other Service in the cluster: use the DNS name of the Service. E.g. if you have created a Service <code>statefullset2-pod-0</code> in the same namespace, you can access it with <code>http://statefullset2-pod-0</code> if it is an HTTP service.</p>
<p>Remember, for a StatefulSet you are responsible for creating the Pod-identity Services yourself.</p>
<p>From the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet documentation</a>:</p>
<blockquote>
<p>StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.</p>
</blockquote>
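<p>As a minimal sketch - all names are hypothetical - a headless Service for the second StatefulSet could look like this; each of its pods then gets a stable DNS record of its own:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: statefulset2-headless      # referenced as .spec.serviceName in the StatefulSet
spec:
  clusterIP: None                  # headless: one DNS record per pod
  selector:
    app: statefulset2              # must match the pod labels of the StatefulSet
  ports:
  - port: 80
# each pod resolves as <pod-name>.statefulset2-headless.<namespace>.svc.cluster.local
</code></pre>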
| Jonas |
<p>I've investigated a couple of helm charts for apache spark deployment and found that most of them use statefulset for deployment instead of normal k8s deployment resource. </p>
<p>E.g. the <a href="https://hub.helm.sh/charts/microsoft/spark" rel="nofollow noreferrer">microsoft/spark</a> uses normal deployment while <a href="https://hub.helm.sh/charts/bitnami/spark" rel="nofollow noreferrer">bitnami/spark</a> prefers statefulset.</p>
<p>I am just wondering is there any specific reason to do that? </p>
| Russell Bie | <p>Apache Spark is a <em>stateful</em> service, and such services should be deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.</p>
<p>Only <em>stateless</em> services should be deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>. Applications that are <em>stateless</em> follow the <a href="https://12factor.net/" rel="nofollow noreferrer">Twelve Factor App</a> principles. Making an app stateless makes it much easier to run as a distributed system, e.g. as multiple instances in Kubernetes. But not everything can be stateless, and a StatefulSet is an option for <em>stateful</em> services.</p>
| Jonas |
<p>I have one pod which requires a persistent disk. I have 1 pod running on us-central1-a and if that zone goes down I want to migrate to another zone without data loss to another zone (us-central1-*).</p>
<p>Is it possible to migrate a pod to another zone(where i know the disks exists) and use the regional disk for the pod in the new zone?</p>
<p><strong>Approach 1</strong></p>
<p>Using the below <code>StorageClass</code> my pod is always unable to claim any of these and my pod never starts. I had the understanding this regional disk with all zones configured would make the disk available to all zones in case of zone failure. I do not understand why I cannot claim any of these.</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: regionalpd-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
- key: topology.kubernetes.io/zone
values:
- us-central1-a
- us-central1-b
- us-central1-c
- us-central1-f
</code></pre>
<p>Error: My PVC status is always pending</p>
<pre><code> Normal NotTriggerScaleUp 106s cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added):
Warning FailedScheduling 62s (x2 over 108s) default-scheduler 0/8 nodes are available: 8 node(s) didn't find available persistent volumes to bind.
</code></pre>
<p><strong>Attempt 2</strong></p>
<p>This storage config will allow me to run my pod in 2/4 zones with 1 zone being the initial zone and 1 being random. When I intentionally reduce and move out of my initial pods zone I will get the below error unless i'm lucky enough to have chosen the other randomly provisioned zone. Is this functionality intentional because Google assumes a very low chance of 2 zone failures? If one does fail wouldn't i have to provision another disk in another zone just in case?</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: regionalpd-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Errors:</p>
<pre><code>Normal NotTriggerScaleUp 4m49s cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added):
Warning FailedScheduling 103s (x13 over 4m51s) default-scheduler 0/4 nodes are available: 2 node(s) had volume node affinity conflict, 2 node(s) were unschedulable.
Warning FailedScheduling 43s (x2 over 43s) default-scheduler 0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) had volume node affinity conflict.
Warning FailedScheduling 18s (x3 over 41s) default-scheduler 0/2 nodes are available: 2 node(s) had volume node affinity conflict.
</code></pre>
<p>My pvc</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: my-pvc
namespace: mynamespace
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Gi
storageClassName: regionalpd-storageclass
</code></pre>
<p>My Pod volume</p>
<p>volumes:</p>
<pre><code> - name: console-persistent-volume
persistentVolumeClaim:
claimName: my-pvc
</code></pre>
| rubio | <p>A regional Persistent Disk on Google Cloud is replicated in only <strong>two zones</strong>, so you must restrict your <code>StorageClass</code> to exactly two zones.</p>
<p>See example StorageClass on <a href="https://cloud.google.com/solutions/using-kubernetes-engine-to-deploy-apps-with-regional-persistent-disks" rel="nofollow noreferrer">Using Kubernetes Engine to Deploy Apps with Regional Persistent Disks</a>
and more details on <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd" rel="nofollow noreferrer">GKE: Provisioning regional persistent disks</a></p>
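<p>As a sketch based on your first attempt, restrict <code>allowedTopologies</code> to exactly two zones (pick the two that fit your cluster):</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
</code></pre>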
| Jonas |
<p>I have a PHP container, I am getting my media from an S3 bucket but resizing them locally and then the container uses them from local. I don't care about persistence or about sharing between containers, but there is a fair amount of I/O. Do I need an emptydir volume or am I ok just creating the files inside the container... basically I'm asking does a volume do anything apart from adding persistence and shareability.</p>
| Wayne Theisinger | <p>A Persistent Volume is a way to have data persisted even when the container is disposed, e.g. terminated and replaced by a new version.</p>
<p>An <code>emptyDir</code> volume might work for you; it can also be configured to use the Pod's memory instead of disk.</p>
<p>Which choice you make depends on your requirements.</p>
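<p>A minimal sketch of both variants in a pod spec (the volume names are hypothetical):</p>
<pre><code>  volumes:
  - name: scratch                # backed by the node's disk, lives as long as the Pod
    emptyDir: {}
  - name: scratch-in-memory     # backed by RAM (tmpfs), counts against the container's memory limit
    emptyDir:
      medium: Memory
</code></pre>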
| Jonas |
<p>I'm planning to deploy RabbitMQ on Kubernetes Engine Cluster. I see there are two kinds of location types i.e. 1. Region 2. Zone
Could someone help me understand what kind of benefits I can think of respective to each location types? I believe having multi-zone set up
could help enhancing the network throughout. While multi-region set up can ensure an undisputed service even if in case of regional failure events. Is this understanding correct? I'm looking at relevant justifications to choose a location type. Please help. </p>
| Balajee Venkatesh | <blockquote>
<p>I'm planning to deploy RabbitMQ on Kubernetes Engine Cluster. I see there are two kinds of location types:</p>
<ol>
<li>Region </li>
<li>Zone </li>
</ol>
<p>Could someone help me understand what kind of benefits I can think of respective to each location types?</p>
</blockquote>
<p>A <em>zone</em> (Availability Zone) is typically a Datacenter.</p>
<p>A <em>region</em> is multiple zones located in the same geographical region. When deploying a "cluster" to a region, you typically have a VPC (Virtual Private Cloud) network spanning three datacenters, and you spread your components across those zones/datacenters. The idea is that you should be <em>fault tolerant</em> to the failure of a whole datacenter, while still having relatively low latency within your system.</p>
<blockquote>
<p>While multi-region set up can ensure an undisputed service even if in case of regional failure events. Is this understanding correct? I'm looking at relevant justifications to choose a location type.</p>
</blockquote>
<p>When using multiple regions, e.g. in different parts of the world, this is typically done to be <em>near the customer</em>, e.g. to provide lower latency. CDN services are distributed to multiple geographical locations for the same reason. When deploying a service to multiple regions, communication between regions is typically done with asynchronous protocols, e.g. message queues, since the latency may be too large for synchronous communication.</p>
| Jonas |
<p>Sometimes k8s nodes are labelled as <code>k8s.infra/postgres=</code> . Is this a valid label for a node ?</p>
<p>How do we use this kind of label whilst adding node affinities in our Deployment manifests ?</p>
<pre><code> spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: k8s.infra/postgres
operator: Exists
values:
-
-
</code></pre>
| nevosial | <blockquote>
<p>Sometimes k8s nodes are labelled as k8s.infra/postgres= . Is this a valid label for a node?</p>
</blockquote>
<p>Yes, it is a valid label. The type of the key is a <code>string</code> and the value is also a <code>string</code>, but the value can be the empty <code>string</code>: <code>""</code>.</p>
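<p>Such a label is applied to a node with an empty value, for example (the node name is a placeholder):</p>
<pre><code>kubectl label node <node-name> k8s.infra/postgres=
</code></pre>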
<blockquote>
<p>How do we use this kind of label whilst adding node affinities in our Deployment manifests?</p>
</blockquote>
<p>The operators <code>Exists</code> and <code>DoesNotExist</code> only use <code>key:</code> and not <code>values:</code> so you can write:</p>
<pre><code> - matchExpressions:
- key: k8s.infra/postgres
operator: Exists
</code></pre>
| Jonas |
<p>I have created a <strong>Django-Python application with a postgres database</strong>. Its working fine in my PC as well as in any other windows based systems.
I am trying to use K8s to host the application.
I have setup the postgres container successfully.</p>
<p><em><strong>But when I am trying to create the Django-Python container and trying to start it, it shows me this kind of error:</strong></em></p>
<blockquote>
<p>Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?</p>
</blockquote>
<p>The Deployment and service yaml for the postgres container:</p>
<pre><code>---
# Deployment for the PostgreSQL container
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgresql
namespace: trojanwall
labels:
app: postgres-db
spec:
replicas: 1
selector:
matchLabels:
app: postgres-db
strategy:
type: Recreate
template:
metadata:
labels:
app: postgres-db
tier: postgreSQL
spec:
containers:
- name: postgresql
image: postgres:10.3
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-db-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-db-credentials
key: password
- name: POSTGRES_DB
value: 'postgres'
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgresql-volume-mount
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumes:
- name: postgresql-volume-mount
persistentVolumeClaim:
claimName: postgres-pv-claim
---
# Service for the PostgreSQL container
apiVersion: v1
kind: Service
metadata:
name: postgresql
namespace: trojanwall
labels:
app: postgres-db
spec:
type: ClusterIP
ports:
- port: 5432
targetPort: 5432
protocol: TCP
selector:
app: postgres-db
tier: postgreSQL
</code></pre>
<p><strong>The log of the Postgres container:</strong></p>
<pre><code>2020-09-23 15:39:58.034 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-09-23 15:39:58.034 UTC [1] LOG: listening on IPv6 address "::", port 5432
2020-09-23 15:39:58.038 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-09-23 15:39:58.049 UTC [23] LOG: database system was shut down at 2020-09-23 15:37:17 UTC
2020-09-23 15:39:58.053 UTC [1] LOG: database system is ready to accept connections
2020-09-23 15:47:12.845 UTC [1] LOG: received smart shutdown request
2020-09-23 15:47:12.846 UTC [1] LOG: worker process: logical replication launcher (PID 29) exited with exit code 1
2020-09-23 15:47:12.846 UTC [24] LOG: shutting down
2020-09-23 15:47:12.851 UTC [1] LOG: database system is shut down
2020-09-23 15:47:13.123 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-09-23 15:47:13.123 UTC [1] LOG: listening on IPv6 address "::", port 5432
2020-09-23 15:47:13.126 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-09-23 15:47:13.134 UTC [24] LOG: database system was shut down at 2020-09-23 15:47:12 UTC
2020-09-23 15:47:13.138 UTC [1] LOG: database system is ready to accept connections
2020-09-23 15:47:25.722 UTC [1] LOG: received smart shutdown request
2020-09-23 15:47:25.724 UTC [1] LOG: worker process: logical replication launcher (PID 30) exited with exit code 1
2020-09-23 15:47:25.725 UTC [25] LOG: shutting down
2020-09-23 15:47:25.730 UTC [1] LOG: database system is shut down
2020-09-23 15:47:25.925 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-09-23 15:47:25.925 UTC [1] LOG: listening on IPv6 address "::", port 5432
2020-09-23 15:47:25.927 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-09-23 15:47:25.937 UTC [23] LOG: database system was shut down at 2020-09-23 15:47:25 UTC
2020-09-23 15:47:25.941 UTC [1] LOG: database system is ready to accept connections
</code></pre>
<p>Now when I am trying to deploy the Django-Python container, it just wont connect to the database container.</p>
<p>Django-Python application deployment and service YAML file:</p>
<pre><code>---
# Deployment for the Django-Python application container
apiVersion: apps/v1
kind: Deployment
metadata:
name: trojanwall-django
namespace: trojanwall
labels:
app: django
spec:
replicas: 1
selector:
matchLabels:
app: django
template:
metadata:
labels:
app: django
spec:
containers:
- name: trojanwall-django
image: arbabu/trojan-wall:v3.0
imagePullPolicy: Always
ports:
- containerPort: 8000
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-db-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-db-credentials
key: password
- name: POSTGRES_DB
value: 'postgres'
- name: DATABASE_URL
value: postgres://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@postgresql:5432/$(POSTGRES_DB)
- name: DJANGO_SETTINGS_MODULE
value: 'TestProject.settings'
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: django-secret-key
key: secret_key
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgresql-volume-mount
volumes:
- name: postgresql-volume-mount
persistentVolumeClaim:
claimName: postgres-pv-claim
---
# Service for the Django-Python application container
apiVersion: v1
kind: Service
metadata:
name: trojanwall-django
namespace: trojanwall
labels:
app: django
spec:
ports:
- port: 8000
targetPort: 8000
protocol: TCP
type: NodePort
selector:
app: django
</code></pre>
<p>After this step, the pods do start running, but once I bash into the Django container and run the command:</p>
<blockquote>
<p><strong>python3 manage.py migrate</strong></p>
</blockquote>
<p>It shows me this error:</p>
<pre><code>root@trojanwall-django-7df4bc7759-89bgv:/TestProject# python3 manage.py migrate
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 21, in <module>
main()
File "manage.py", line 17, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 85, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 92, in handle
executor = MigrationExecutor(connection, self.migration_progress_callback)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 18, in __init__
self.loader = MigrationLoader(self.connection)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 53, in __init__
self.build_graph()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 216, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/recorder.py", line 77, in applied_migrations
if self.has_table():
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/recorder.py", line 55, in has_table
with self.connection.cursor() as cursor:
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 259, in cursor
return self._cursor()
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 235, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
</code></pre>
<blockquote>
<p><strong>Does anyone know how to resolve this?</strong></p>
</blockquote>
<p>Here's a reference to the settings.py file's database configurations.</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': os.environ.get('POSTGRES_NAME', 'postgres'),
'USER': os.environ.get('POSTGRES_USER', 'postgres'),
'PASSWORD': os.environ.get('POSTGRES_PASSWORD', 'postgres'),
'HOST': os.getenv('POSTGRES_SERVICE_HOST','127.0.0.1'),
'PORT': os.getenv('POSTGRES_SERVICE_PORT',5432)
}
}
</code></pre>
<p>The secrets yaml file:</p>
<pre><code>---
# Secrets for the Database Credential Management
apiVersion: v1
kind: Secret
metadata:
name: postgres-db-credentials
namespace: trojanwall
labels:
app: postgres-db
type: opaque
data:
user: cG9zdGdyZXM=
password: cG9zdGdyZXM=
</code></pre>
| arjunbnair | <pre><code>kind: Service
metadata:
name: postgresql
namespace: trojanwall
labels:
app: postgres-db
spec:
type: ClusterIP
</code></pre>
<p>Your Service for the PostgreSQL instance will allocate a new IP address from the Kubernetes cluster, since your Service has type <code>ClusterIP</code>. This is fine.</p>
<p>But your Python app has to connect to PostgreSQL on <em>that IP address</em> and <strong>not</strong> to <code>127.0.0.1</code>.</p>
<p>From the line below in your <code>settings.py</code> it looks like the host for your PostgreSQL instance can be overridden; it must be changed to point at the PostgreSQL Service in the Kubernetes cluster.</p>
<pre><code> 'HOST': os.getenv('POSTGRES_SERVICE_HOST','127.0.0.1'),
</code></pre>
<p>Update the <code>Deployment</code> for your app to contain an environment variable <code>POSTGRES_SERVICE_HOST</code>.</p>
<p>Example:</p>
<pre><code> spec:
containers:
- name: trojanwall-django
image: arbabu/trojan-wall:v3.0
imagePullPolicy: Always
ports:
- containerPort: 8000
env:
- name: POSTGRES_SERVICE_HOST
value: "<INSERT YOUR IP ADDRESS>" # update this to reflect your IP
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-db-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-db-credentials
key: password
</code></pre>
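<p>Alternatively, instead of a hard-coded IP you can use the DNS name of the Service - the Service above is named <code>postgresql</code> and lives in the same namespace, so it should resolve via the cluster DNS:</p>
<pre><code>          env:
            - name: POSTGRES_SERVICE_HOST
              value: "postgresql"   # the Service name, resolved by cluster DNS
</code></pre>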
| Jonas |
<p>(using kubernetes v1.15.7 in minikube and matching client version and minikube 1.9.0)</p>
<p>If I <code>kubectl apply</code> a secret like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
MY_KEY: dmFsdWUK
MY_SECRET: c3VwZXJzZWNyZXQK
kind: Secret
metadata:
name: my-secret
type: Opaque
</code></pre>
<p>then subsequently <code>kubectl apply</code> a secret removing the MY_SECRET field, like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
MY_KEY: dmFsdWUK
kind: Secret
metadata:
name: my-secret
type: Opaque
</code></pre>
<p>The <code>data</code> field in the result is what I expect when I <code>kubectl get</code> the secret:</p>
<pre class="lang-yaml prettyprint-override"><code>data:
MY_KEY: dmFsdWUK
</code></pre>
<hr>
<p>However, if I do the same thing using <code>stringData</code> instead <strong>for the first kubectl apply</strong>, it does not remove the missing key on the second one:</p>
<p>First <code>kubectl apply</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
stringData:
MY_KEY: value
MY_SECRET: supersecret
kind: Secret
metadata:
name: my-secret
type: Opaque
</code></pre>
<p>Second <code>kubectl apply</code> (stays the same, except replacing <code>MY_KEY</code>'s value with <code>b2hubyEK</code> to show the configuration DID change)</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
MY_KEY: b2hubyEK
kind: Secret
metadata:
name: my-secret
type: Opaque
</code></pre>
<p><code>kubectl get</code> result after applying the second case:</p>
<pre><code>data:
MY_KEY: b2hubyEK
MY_SECRET: c3VwZXJzZWNyZXQ=
</code></pre>
<p>The field also does not get removed if the second case uses <code>stringData</code> instead. So it seems that once <code>stringData</code> is used once, it's impossible to remove a field without deleting the secret. Is this a bug? Or should I be doing something differently when using <code>stringData</code>?</p>
| Andrew DiNunzio | <p>kubectl apply needs to merge / patch the changes here. How this works is described in <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#how-apply-calculates-differences-and-merges-changes" rel="nofollow noreferrer">How apply calculates differences and merges changes</a>.</p>
<p>I would recommend using <em>kustomize</em> with <code>kubectl apply -k</code> and using the <strong><a href="https://kubectl.docs.kubernetes.io/pages/reference/kustomize.html#secretgenerator" rel="nofollow noreferrer">secretGenerator</a></strong> to create a unique secret name for every change. Then you are practicing <em>Immutable Infrastructure</em> and do not get this kind of problem.</p>
<p>A brand new tool for config management is <a href="https://googlecontainertools.github.io/kpt/guides/consumer/apply/" rel="nofollow noreferrer">kpt</a>, and <code>kpt live apply</code> may also be an interesting solution for this.</p>
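<p>A minimal sketch of such a <code>kustomization.yaml</code>, using the keys from your example (the referenced <code>deployment.yaml</code> is hypothetical):</p>
<pre><code># kustomization.yaml
secretGenerator:
- name: my-secret
  literals:
  - MY_KEY=value
  - MY_SECRET=supersecret
resources:
- deployment.yaml   # manifests that reference the Secret by name
</code></pre>
<p>Applying it with <code>kubectl apply -k .</code> creates a Secret whose name carries a content-hash suffix, and kustomize rewrites references to it in the listed resources - removing a key produces a new name, so no stale data is left behind.</p>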
| Jonas |
<p>We have a service which queries database records periodically. For HA we want to have replicas. But with replicas all of then queries the database records.</p>
<p>Following <code>Deployment</code> manifest is used to deploy. But in this configuration one pod is receiving the traffic. But all of them queries db and performing actions.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: db-queries
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: db-queries
template:
metadata:
labels:
app: db-queries
version: v1
spec:
serviceAccountName: leader-election-test
containers:
- name: db-queries
image: our-registry:5000/db-queries
imagePullPolicy: Always
ports:
- containerPort: 8090
volumeMounts:
- readOnly: true
mountPath: /path/to/config
name: config-files
- name: mako
image: gcr.io/google_containers/leader-elector:0.4
imagePullPolicy: Always
args:
- --election=sample
</code></pre>
<p>Here the <code>mako</code> container shows that only one pod is working as leader, holding the lock. We simply want only one pod to query database records and the other two to stay idle.</p>
| Sachith Muhandiram | <h1>Availability</h1>
<p>Different levels of availability can be achieved in Kubernetes, it all depends on your requirements.</p>
<p>Your use case seems to be that only one replica at a time should have an active connection to the database.</p>
<h2>Single Replica</h2>
<p>Even if you use a single replica in a Kubernetes Deployment or StatefulSet, it is regularly probed, using your declared LivenessProbe and ReadinessProbe.</p>
<p>If your app stops responding to the LivenessProbe, the container is restarted immediately.</p>
<h2>Multiple replicas using Leader election</h2>
<p>Since only one replica at a time should have an active connection to your database, a <em>leader election</em> solution is viable.</p>
<p>The passive replicas, which currently do not hold the lock, should regularly try to acquire it - so that one of them becomes active in case the old active pod has died. How this is done depends on the implementation and configuration.</p>
<p>If only the active Pod in a multi-replica solution should query the database, the app must first check whether it holds the lock (i.e. is the active instance).</p>
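<p>As a rough sketch - this assumes the <code>leader-elector</code> sidecar is also given an <code>--http</code> flag (e.g. <code>--http=localhost:4040</code>) so it serves the current leader name over HTTP - the application container could compare that name with its own hostname before querying the database:</p>
<pre><code># hypothetical check inside the application container
LEADER=$(wget -qO- http://localhost:4040)
case "$LEADER" in
  *"$(hostname)"*) echo "holding the lock - run the database queries" ;;
  *)               echo "passive replica - skip this round" ;;
esac
</code></pre>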
<h2>Conclusion</h2>
<p>There is not much difference between a <em>single replica</em> Deployment and a <em>multi replica</em> Deployment using <em>leader election</em>. There might be small differences in the time a failover takes.</p>
<p>For a <em>single replica</em> solution, you might consider using a StatefulSet instead of a Deployment due to different behavior when a node becomes unreachable.</p>
| Jonas |
<p>I have a kubernetes cluster of 3 worker nodes where I need to deploy a <code>statefulset</code> app having 6 replicas.
My requirement is to make sure in every case, each node should get exactly 2 pods out of 6 replicas. Basically,</p>
<pre><code>node1 - 2 pods of app
node2 - 2 pods of app
node3 - 2 pods of app
========================
Total 6 pods of app
</code></pre>
<p>Any help would be appreciated!</p>
| Nish | <p>You should use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">Pod Anti-Affinity</a> to make sure that the pods are <strong>spread to different nodes</strong>.</p>
<p>Since you will have more than one pod on the nodes, use <code>preferredDuringSchedulingIgnoredDuringExecution</code></p>
<p>example when the app has the label <code>app: mydb</code> (use what fits your case):</p>
<pre><code> podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mydb
topologyKey: "kubernetes.io/hostname"
</code></pre>
<blockquote>
<p>each node should get exactly 2 pods out of 6 replicas</p>
</blockquote>
<p>Try not to think of the pods as pinned to certain nodes. The idea with Kubernetes workloads is that they are independent of the underlying infrastructure, such as nodes. What you really want - I assume - is to spread the pods to increase availability - e.g. if one node goes down, your system should still be available.</p>
<p>If you are running at a cloud provider, you should probably design the anti-affinity such that the pods are scheduled to different Availability Zones and not only to different Nodes - but it requires that your cluster is deployed in a Region (consisting of multiple Availability Zones).</p>
<h2>Spread pods across Availability Zones</h2>
<blockquote>
<p>After even distribution, all 3 nodes (scattered over three zones ) will have 2 pods. That is ok. The hard requirement is if 1 node ( Say node-1) goes down, then it's 2 pods, need not be re-scheduled again on other nodes. When the node-1 is restored, then those 2 pods now will be scheduled back on it. So, we can say, all 3 pair of pods have different node/zone affinity. Any idea around this?</p>
</blockquote>
<p>This <em>can</em> be done with <code>PodAffinity</code>, but is more likely done using <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">TopologySpreadConstraints</a>, and you will probably use <code>topologyKey: topology.kubernetes.io/zone</code> - but this depends on what labels your nodes have, as sketched below.</p>
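<p>A minimal sketch of such a constraint in the pod template, again assuming the label <code>app: mydb</code>:</p>
<pre><code>      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: mydb
</code></pre>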
| Jonas |
<p>Recently we were getting the following exception in one of our Containers which was running a Java application in Openshift 4.2. This container used to run perfectly on Openshift 3.11.</p>
<pre><code>Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717
</code></pre>
<p>Within the containers, the ulimits looks perfectly fine. See the below image.</p>
<p><a href="https://i.stack.imgur.com/Rytje.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rytje.png" alt="Ulimits"></a></p>
<p>In Openshift 3.11 the same container could create 4096 Threads. But in Openshift 4.2 it just can create 1024 threads. Please see the Below Images,</p>
<p>OCP 3.11
<a href="https://i.stack.imgur.com/EyE8b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EyE8b.png" alt="Openshift 3.11"></a></p>
<p>OCP 4.2
<a href="https://i.stack.imgur.com/dTXeU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dTXeU.png" alt="enter image description here"></a></p>
<p>From the above ulimits it's evident that docker agent level configurations are done. Also, I have allocated enough memory for OS to create native threads. But I have no clue where this limit is set. How can I increase this global limit? Thanks in advance. </p>
| ycr | <p>By default, OpenShift 3 uses <em>docker</em> as container runtime whereas OpenShift 4 uses <a href="https://cri-o.io/" rel="nofollow noreferrer">cri-o</a> as container runtime.</p>
<p>According to <a href="https://github.com/cri-o/cri-o/issues/1921" rel="nofollow noreferrer">Default pids_limit too low</a>, there is by default a limit of 1024 pids - and therefore threads - per container when using cri-o.</p>
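<p>On OpenShift 4 the CRI-O pids limit can be raised with a <code>ContainerRuntimeConfig</code> resource. The sketch below is an untested outline - the pool selector label must match your MachineConfigPool - so verify the exact fields against the OpenShift documentation for your version:</p>
<pre><code>apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: increase-pids-limit
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    pidsLimit: 4096
</code></pre>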
| Jonas |
<p>I'm looking for a lightweight way to access the Kubernetes API from a Pod in a C# app.</p>
<p>Kubernetes docs mention two ways of <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">accessing the API from a Pod</a>:</p>
<blockquote>
<ol>
<li>Run kubectl proxy in a sidecar container in the pod, or as a background process within the container</li>
</ol>
</blockquote>
<p>This generally works, and allows for easily hitting an API endpoint with just a line of code or two - example:</p>
<pre><code>using System;
using System.Net;
namespace GetIngresses
{
class Program
{
static void Main(string[] args)
{
var apiBaseUrl = "http://127.0.0.1:8001"; // requires kubectl proxy
Console.WriteLine((new WebClient()).
DownloadString($"{apiBaseUrl}/apis/networking.k8s.io/v1/ingresses"));
}
}
}
</code></pre>
<p>However, now there's a running <code>kubectl proxy</code> process to monitor, maintain etc. - this doesn't seem ideal for production.</p>
<blockquote>
<ol start="2">
<li>Use the Go client library, and create a client using the rest.InClusterConfig() and kubernetes.NewForConfig() functions. They handle locating and authenticating to the apiserver.</li>
</ol>
</blockquote>
<p>My app is written in C#, not Go. There's a <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">C# client library</a> which presumably might be able to achieve the same thing. But do I really have to bring a whole client library on board just for a simple GET to a single endpoint?</p>
<p>Ideally, I'd like to just use <code>WebClient</code>, like in the example above. Documentation mentions that</p>
<blockquote>
<p>The recommended way to locate the apiserver within the pod is with the kubernetes.default.svc DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.</p>
</blockquote>
<p>So, in the example above, can I just do this...</p>
<pre><code>var apiBaseUrl = "http://kubernetes.default.svc"
</code></pre>
<p>... and get <code>WebClient</code> to pass the required service account credentials? If yes, how?</p>
| Max | <blockquote>
<p>Ideally, I'd like to just use WebClient</p>
</blockquote>
<p>The Kubernetes API is a REST API, so this would work. As shown in <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#using-kubectl-proxy" rel="nofollow noreferrer">Directly accessing the REST API using kubectl proxy</a>, it is easy to explore the API using e.g. <code>curl</code>.</p>
<p>Example with <code>curl</code> and <code>kubectl proxy</code> - response is in json format.</p>
<pre><code>curl http://localhost:8080/api/v1/pods
</code></pre>
<p>The complicating factor is that the API server uses a certificate signed by the cluster's own CA, and it is good practice to properly validate it for security reasons. When <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#directly-accessing-the-rest-api-1" rel="nofollow noreferrer">accessing</a> the API from a Pod, the CA certificate is located at <code>/var/run/secrets/kubernetes.io/serviceaccount/ca.crt</code> and, in addition, you need to authenticate using the token located at <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code>.</p>
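<p>Put together, a request from inside a Pod - without any client library - can look like this, using the Ingress endpoint from your example:</p>
<pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/apis/networking.k8s.io/v1/ingresses
</code></pre>
<p>In C# the same approach means adding the <code>Authorization: Bearer</code> header to your request and trusting the cluster CA certificate when validating the TLS connection.</p>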
<blockquote>
<p>But do I really have to bring a whole client library on board just for a simple GET to a single endpoint?</p>
</blockquote>
<p>What you get from a client library is:</p>
<ul>
<li>Implemented authentication using certificates and tokens</li>
<li>Typed client access - instead of hand-coding URLs and requests</li>
</ul>
<p>The <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#dotnet-client" rel="nofollow noreferrer">dotnet-client</a> example shows what "typed client access" looks like, for "listing Pods in the default namespace" (see <a href="https://github.com/kubernetes-client/csharp#creating-the-client" rel="nofollow noreferrer">authentication alternatives</a>):</p>
<pre><code>var config = KubernetesClientConfiguration.InClusterConfig() // auth from Pod
IKubernetes client = new Kubernetes(config);
Console.WriteLine("Starting Request!");
var list = client.ListNamespacedPod("default");
foreach (var item in list.Items)
{
Console.WriteLine(item.Metadata.Name);
}
</code></pre>
| Jonas |
<p>I am new to Kubernetes, and I struggle to understand the whole idea behind Persistent Storage in Kubernetes.</p>
<p>So is this enough or I have to create Persistent Volume and what will happen if I deploy only these two object without creating PV?</p>
<p>Storage should be on local machine.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
name: nginx-logs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
</code></pre>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app-web
name: app-web
spec:
selector:
matchLabels:
app: app-web
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: app-web
spec:
containers:
image: nginx:1.14.2
imagePullPolicy: Always
name: app-web
volumeMounts:
- mountPath: /var/log/nginx
name: nginx-logs
restartPolicy: Always
volumes:
- name: nginx-logs
persistentVolumeClaim:
claimName: nginx-logs
</code></pre>
| Most31 | <blockquote>
<p>I struggle to understand whole idea behind Persistent Storage in Kubernetes</p>
</blockquote>
<p>The idea is to separate the <em>storage request</em> that the app needs from the physical storage - such that an app can be moved to e.g. another cloud provider with a different storage system, without needing any changes in the app. It also separates the responsibility for "requesting storage" from managing the underlying storage, e.g. developers vs. operations.</p>
<blockquote>
<p>So is this enough or I have to create Persistent Volume and what will happen if I deploy only these two object without creating PV?</p>
</blockquote>
<p>This depends on your environment. Most environments have <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Volume Provisioning</a> - e.g. the big cloud providers, and now also Minikube, support this.</p>
<p>When using dynamic volume provisioning, the developer only has to create a <code>PersistentVolumeClaim</code> - and no <code>PersistentVolume</code>; it is instead dynamically provisioned.</p>
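<p>You can check whether dynamic provisioning is available by listing the storage classes:</p>
<pre><code>kubectl get storageclass
# a class marked "(default)" is used for PersistentVolumeClaims that do not set storageClassName
</code></pre>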
| Jonas |
<p>I'm using kubernetes, and I want to get podname from the app name. Here is the command I use.</p>
<pre><code>POD=$(kubectl get pods -l app=influxdb-local -o custom-columns=:metadata.name -n influx)
</code></pre>
<p>Then I would like to use the result in:</p>
<pre><code>kubectl exec -it -n influx $POD -- influx -username user -password "user" -execute "CREATE DATABASE db"
</code></pre>
<p>but the first line is empty, and the second line contains pod name, so the second command doesn't work.</p>
<p>How should I remove the first white line ?</p>
| Juliatzin | <p>Add <code>--no-headers</code> to skip the header line</p>
<pre><code>kubectl get pods -l app=influxdb-local -o custom-columns=:metadata.name -n influx --no-headers
</code></pre>
<h1>Custom Columns</h1>
<p>Using the flag <code>-o custom-columns=<header-name>:<field></code> will let you customize the output.</p>
<p>Example with resource name, under header <code>NAME</code></p>
<pre><code>kubectl get pods -o custom-columns=NAME:metadata.name
</code></pre>
<p>output</p>
<pre><code>NAME
myapp-5b77df6c48-dbvnh
myapp-64d5985fdb-mcgcn
httpd-9497b648f-9vtbl
</code></pre>
<p><strong>Empty header name:</strong> you used the flag as <code>-o custom-columns=:metadata.name</code> - the first line is the header line, but with an empty header.</p>
<h2>Omit headers</h2>
<p>the proper solution to omit the header line is by using the flag <code>--no-headers</code></p>
<pre><code>kubectl get pods -o custom-columns=NAME:metadata.name --no-headers
</code></pre>
<p>Example output</p>
<pre><code>myapp-5b77df6c48-dbvnh
myapp-64d5985fdb-mcgcn
httpd-9497b648f-9vtbl
</code></pre>
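<p>With that, your original two commands work as intended:</p>
<pre><code>POD=$(kubectl get pods -l app=influxdb-local -n influx -o custom-columns=:metadata.name --no-headers)
kubectl exec -it -n influx "$POD" -- influx -username user -password "user" -execute "CREATE DATABASE db"
</code></pre>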
| Jonas |
<p>I need my Go app to <strong>monitor some resources</strong> in a Kubernetes cluster and react to their changes. Based on numerous articles and examples, I seem to have found a few ways to do it; however, I'm relatively new to Kubernetes, and they're described in terms much too complex to me, such that I'm still <strong>unable to grasp the difference</strong> between them — and thus, to know which one to use, so that I don't get some unexpected behaviors... Specifically:</p>
<ol>
<li><a href="https://godoc.org/k8s.io/apimachinery/pkg/watch#Interface" rel="noreferrer"><code>watch.Interface.ResultChan()</code></a> — (acquired through e.g. <a href="https://godoc.org/k8s.io/client-go/rest#Request.Watch" rel="noreferrer"><code>rest.Request.Watch()</code></a>) — this already seems to let me react to changes happening to a resource, by providing <code>Added</code>/<code>Modified</code>/<code>Deleted</code> events;</li>
<li><p><a href="https://godoc.org/k8s.io/client-go/tools/cache#NewInformer" rel="noreferrer"><code>cache.NewInformer()</code></a> — when I implement a <a href="https://godoc.org/k8s.io/client-go/tools/cache#ResourceEventHandler" rel="noreferrer"><code>cache.ResourceEventHandler</code></a>, I can pass it as last argument in:</p>
<pre><code>cache.NewInformer(
cache.NewListWatchFromClient(clientset.Batch().RESTClient(), "jobs", ...),
&batchv1.Job{},
0,
myHandler)
</code></pre>
<p>— then, the <code>myHandler</code> object will receive <code>OnAdd()</code>/<code>OnUpdate()</code>/<code>OnDelete()</code> calls.</p>
<p>To me, this seems more or less equivalent to the <code>ResultChan</code> I got in (1.) above; one difference is that apparently now I get the "before" state of the resource as a bonus, whereas with <code>ResultChan</code> I would only get its "after" state.</p>
<p>Also, IIUC, this is actually somehow built on the <code>watch.Interface</code> mentioned above (through <code>NewListWatchFromClient</code>) — so I guess it brings some value over it, and/or fixes some (what?) deficiencies of a raw <code>watch.Interface</code>?</p></li>
<li><a href="https://godoc.org/k8s.io/client-go/tools/cache#NewSharedInformer" rel="noreferrer"><code>cache.NewSharedInformer()</code></a> and <a href="https://godoc.org/k8s.io/client-go/tools/cache#NewSharedIndexInformer" rel="noreferrer"><code>cache.NewSharedIndexInformer()</code></a> — <sub><sup>(uh wow, now <em>those</em> are a mouthful...)</sup></sub> I tried to dig through the godocs, but I feel completely overloaded with terminology I don't understand, such that I don't seem to be able to grasp the subtle (?) differences between a "regular" <code>NewInformer</code> vs. <code>NewSharedInformer</code> vs. <code>NewSharedIndexInformer</code>... 😞</li>
</ol>
<p>Could someone please help me <strong>understand the differences</strong> between above APIs in the Kubernetes client-go package?</p>
| akavel | <p>These methods differ in the <strong>level of abstraction</strong>. If a higher level abstraction fits your need, you should use it, as many lower level problems are solved for you.</p>
<p><strong>Informers</strong> are a higher level of abstraction than <em>watch</em> and also include <em>listers</em>. In most use cases you should use some kind of Informer instead of the lower level abstractions. An Informer internally consists of a <em>watcher</em>, a <em>lister</em> and an <em>in-memory cache</em>.</p>
<p><strong>SharedInformers</strong> share the connection with the API server and other resources between your informers.</p>
<p><strong>SharedIndexInformers</strong> add an index to your data cache, in case you work with a larger dataset.</p>
<p>It is recommended to use SharedInformers instead of the lower level abstractions. Instantiate new SharedInformers from the same <strong>SharedInformerFactory</strong>. There is an example in the <a href="https://github.com/feiskyer/kubernetes-handbook/blob/master/examples/client/informer/informer.go" rel="noreferrer">Kubernetes Handbook</a>:</p>
<pre><code>informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30)
podInformer := informerFactory.Core().V1().Pods()
serviceInformer := informerFactory.Core().V1().Services()
podInformer.Informer().AddEventHandler(
// add your event handling
)
// add event handling for serviceInformer
informerFactory.Start(wait.NeverStop)
informerFactory.WaitForCacheSync(wait.NeverStop)
</code></pre>
| Jonas |
<p>I have a Kubernetes EKS cluster on AWS, an my goal is to be able to watch particular config maps in my Spring Boot application.
On my local environment everything works correctly, but when I use this setup inside AWS I get forbidden state and my application fails to run.
I've created a Service Account but don't understand how to create Terraform script which can assign the needed IAM Role.
Any help would be appreciated.</p>
| xeLL | <p>This depends on several things.</p>
<p>An AWS IAM Role can be provided to Pods in different ways, but the recommended way now is to use <a href="https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="noreferrer">IAM Roles for Service Accounts, IRSA</a>.</p>
<p>Depending on how you provision the Kubernetes cluster with Terraform, this is also done in different ways. If you use <a href="https://aws.amazon.com/eks/" rel="noreferrer">AWS EKS</a> and provision the cluster using the <a href="https://github.com/terraform-aws-modules/terraform-aws-eks" rel="noreferrer">Terraform AWS EKS module</a>, then you should set <a href="https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/variables.tf#L326" rel="noreferrer">enable_irsa</a> to <code>true</code>.</p>
<p>You then need to create an IAM Role for your application (Pods), and you need to return the ARN for the IAM Role. This can be done using the <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role" rel="noreferrer">aws_iam_role</a> resource.</p>
<p>You need to create a Kubernetes ServiceAccount for your pod; it can be created with Terraform, but many prefer to use YAML for Kubernetes resources. The ServiceAccount needs to be annotated with the IAM Role ARN, like:</p>
<pre><code>annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::14xxx84:role/my-iam-role
</code></pre>
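<p>A complete ServiceAccount manifest for this could look as follows (the account name and namespace are hypothetical); remember to set <code>serviceAccountName</code> in your pod spec so the pods actually use it:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::14xxx84:role/my-iam-role
</code></pre>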
<p>See the <a href="https://www.eksworkshop.com/beginner/110_irsa/" rel="noreferrer">EKS workshop for IAM Roles for Service Accounts</a> lesson for a guide through this. However, it does not use Terraform.</p>
| Jonas |
<p>Im new to Kubernetes and i try to run Laravel application from persistent volume. I got it working and i will have now the code running from persistent volume and i can scale it.</p>
<p>I have nginx running in own pods and share the same persistent volume.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
tier: backend
app: nginx
spec:
replicas: 4
selector:
matchLabels:
app: nginx
tier: backend
template:
metadata:
labels:
app: nginx
tier: backend
spec:
volumes:
- name: laravel-pv-volume
persistentVolumeClaim:
claimName: laravel-pv-claim
- name: config
configMap:
name: nginx-config
items:
- key: config
path: site.conf
containers:
- name: nginx
image: nginx
volumeMounts:
- name: laravel-pv-volume
mountPath: /code
- name: config
mountPath: /etc/nginx/conf.d
ports:
- containerPort: 80
name: http
protocol: TCP
</code></pre>
<p>My Laravel deployment includes initContainers which has command to copy Laravel source code from /var/www to /code which is the persistent volume path</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: php
labels:
tier: backend
spec:
replicas: 1
selector:
matchLabels:
app: php
tier: backend
template:
metadata:
labels:
app: php
tier: backend
spec:
volumes:
- name: laravel-pv-volume
persistentVolumeClaim:
claimName: laravel-pv-claim
containers:
- name: php
image: registry.digitalocean.com/xxx/laravel-test:3.0
volumeMounts:
- name: laravel-pv-volume
mountPath: /code
initContainers:
- name: install
imagePullPolicy: Always
image: registry.digitalocean.com/xxx/laravel-test:3.0
command: ["/bin/sh", "-c", "cp -R /var/www/. /code && chown -R www-data:www-data /code"]
volumeMounts:
- name: laravel-pv-volume
mountPath: /code
</code></pre>
<p>How can I create new pods with new code after the Laravel image changes and the code is updated? Somehow I think that after this I should make a new persistent volume path to mount, which will hold the code for the new pods, and when the old pods are terminated, the last one will delete the "old" code from the persistent volume. But I also don't know how the pods would know which one is the last?</p>
<p>My workaround could be that if my Laravel image updates to the next version, I add a command to clear the /code folder, but that is not best practice and will cause downtime.</p>
<p><code>command: ["/bin/sh", "-c", "rm -rf /code/*" && "cp -R /var/www/. /code && chown -R www-data:www-data /code"]</code></p>
| Janiko | <p><em>I understand where these practices come from, but you should do this differently on Kubernetes.</em></p>
<h2>Code in Docker images</h2>
<p>When you update your PHP code, don't put it on a PersistentVolume; instead, build a new Docker image, e.g. with <code>docker build -t <myapp>:<new version> .</code>, and when you have pushed the image to a registry, you can update the <code>image: </code> field in the <code>Deployment</code>. Avoid putting code on Persistent Volumes - most likely you will not need a PersistentVolumeClaim in your <code>Deployment</code> at all.</p>
<p>Most likely you want a <code>COPY</code> command in your <code>Dockerfile</code> to put your code in the container, something like (don't know your paths):</p>
<pre><code>COPY code/ opt/app/myapp
</code></pre>
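<p>Rolling out a new version is then just a matter of pointing the Deployment at the new tag, for example (the tag <code>3.1</code> is hypothetical):</p>
<pre><code>kubectl set image deployment/php php=registry.digitalocean.com/xxx/laravel-test:3.1
</code></pre>
<p>Kubernetes then performs a rolling update: pods with the new image are started before the old ones are terminated, so there is no shared <code>/code</code> directory to clean up.</p>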
| Jonas |
<p>I want to get a list of just the pod names and the Result should not include the status, number of instances etc. </p>
<p>I am using the command </p>
<p><code>oc get pods</code></p>
<p>It prints</p>
<pre><code>Pod1-qawer Running 1/1 2d
Pod2g-bvch Running 1/1 3h
</code></pre>
<p>Expected result</p>
<pre><code>Pod1-qawer
Pod2g-bvch
</code></pre>
<p>How do i avoid the extra details from getting printed </p>
| lr-pal | <p>You can omit the headers with <code>--no-headers</code> and you can use <code>-o custom-columns=</code> to customize the output.</p>
<pre><code>oc get pods -o custom-columns=POD:.metadata.name --no-headers
</code></pre>
<p>Example output</p>
<pre><code>$ oc get pods -o custom-columns=POD:.metadata.name --no-headers
goapp-75d9b6bfbf-b5fdh
httpd-58c5c54fff-b97h8
app-proxy-6c8dfb4899-8vdkb
app-64d5985fdb-xjp58
httpd-dd5976fc-rsnhz
</code></pre>
| Jonas |
<p>I'm looking to see if it's currently possible to run Kubernetes locally on a 2020 M1 MacBook air.</p>
<p>The environment I need is relatively simple, just for going through some tutorials. As an example, this <a href="https://sdk.operatorframework.io/docs/building-operators/golang/tutorial/" rel="noreferrer">operator-sdk guide</a>.</p>
<p>So far I've tried <code>microk8s</code> and <code>minikube</code>, as they're tools I've used before on other machines.</p>
<p>For both of these, I've installed them using <code>brew</code> after opening the terminal app "with Rosetta 2"
(i.e like <a href="https://doesitarm.com/app/homebrew/benchmarks/#rU-fa0sbCGs" rel="noreferrer">this</a>). My progress is then:</p>
<p><strong>Minikube</strong></p>
<p>When I run <code>minikube start --driver=docker</code> (having installed the <a href="https://www.docker.com/blog/download-and-try-the-tech-preview-of-docker-desktop-for-m1/" rel="noreferrer">tech preview of Docker Desktop for M1</a>), an initialization error occurs. It seems to me that this is being tracked here <a href="https://github.com/kubernetes/minikube/issues/9224" rel="noreferrer">https://github.com/kubernetes/minikube/issues/9224</a>.</p>
<p><strong>Microk8s</strong></p>
<p><code>microk8s install</code> asks to install <code>multipass</code>, which then errors with <code>An error occurred with the instance when trying to start with 'multipass': returned exit code 2. Ensure that 'multipass' is setup correctly and try again.</code>. Multipass shows a <code>microk8s-vm</code> stuck in starting. I think this may relate to this issue <a href="https://github.com/canonical/multipass/issues/1857" rel="noreferrer">https://github.com/canonical/multipass/issues/1857</a>.</p>
<p>I'm aware I'd probably be better chasing up those issues for help on these particular errors. What would be great is any general advice on if it's currently possible/advisable to setup a basic Kubernetes env for playing with on an M1 mac. I'm not experienced with the underlying technologies here, so any additional context is welcome. :)</p>
<p>If anyone has suggestions for practising Kubernetes, alternative to setting up a local cluster, I'd also appreciate them. Thanks!</p>
| James Cockbain | <p>First, it is usually good to have Docker when working with containers. Docker now has a <a href="https://www.docker.com/blog/download-and-try-the-tech-preview-of-docker-desktop-for-m1/" rel="noreferrer">Tech Preview of Docker for Apple M1 based macs</a>.</p>
<p>When you have a working Docker on your machine, it should also work to use <a href="https://kind.sigs.k8s.io/" rel="noreferrer">Kind</a> - a way to run Kubernetes in Docker containers.</p>
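<p>Getting a small cluster with Kind is then only a couple of commands (assuming Homebrew is installed):</p>
<pre><code>brew install kind
kind create cluster
kubectl cluster-info --context kind-kind
</code></pre>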
| Jonas |
<p>I'm migrating <code>helm2</code> releases to <code>helm3</code>. One of my resources is <code>redis</code> and it's protected from migration. I have to remove it using</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete statefulsets.apps --cascade=false -nkube-system testme-redis-master
</code></pre>
<p>I wanna use the <code>Kubernetes</code> python lib, only that I cannot find the matching function.
I'm using <code>CoreV1API</code>.</p>
| OLS | <blockquote>
<p>I wanna use the Kubernetes python lib, only that I cannot find the matching function.</p>
</blockquote>
<p>You must look in the right API Group.</p>
<blockquote>
<p>I'm using CoreV1API.</p>
</blockquote>
<p><code>StatefulSets</code> is in <code>AppsV1</code> and not in <code>CoreV1</code>, so check that API Group instead.</p>
<p>See the Python Kubernetes client <a href="https://github.com/kubernetes-client/python/blob/master/examples/deployment_create.py" rel="nofollow noreferrer">example for Deployment in AppsV1 API Group</a>, it is very similar to <code>StatefulSet</code></p>
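<p>A minimal sketch with the official Python client, using the name and namespace from your command; <code>--cascade=false</code> corresponds to the <code>Orphan</code> propagation policy:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster

apps_v1 = client.AppsV1Api()

# Equivalent of: kubectl delete statefulsets.apps --cascade=false -nkube-system testme-redis-master
apps_v1.delete_namespaced_stateful_set(
    name="testme-redis-master",
    namespace="kube-system",
    body=client.V1DeleteOptions(propagation_policy="Orphan"),
)
</code></pre>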
| Jonas |
<p>I have been studying how kubernetes pod communication works across nodes and here is my intake so far:</p>
<p>Basically, the following figure describes how each pod has a network interface eth0 that is linked to a veth pair and then bridged to the host's eth0 interface.</p>
<p>One way to make cross node communication between pods is by configuring routing tables accordingly.</p>
<p>let's say Node A has address domain 10.1.1.0/24 and Node B has address domain 10.1.2.0/24.</p>
<p>I can configure routing tables on node A to forward traffic for 10.1.2.0/24 to 10.100.0.2(eth0 of node B), and similar for node B to forward traffic for 10.1.1.0/24 to 10.100.0.1 (eth0 of node A)</p>
<p>This can work if my nodes aren't separated by routers, or if the routers are configured accordingly, because they will otherwise drop packets that have a private IP address as destination. This isn't practical!</p>
<p><a href="https://i.stack.imgur.com/Cwd7c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cwd7c.png" alt="enter image description here" /></a></p>
<p>And here we get to talk about SDN, which I am not clear about and which is apparently the solution.
As far as I know the SDN encapsulates packets to set a routable source and destination IP address.</p>
<p>So basically to deploy A Container network plugin on kubernetes which creates an SDN, you basically create daemon sets and other assisting kubernetes objects.</p>
<p>My question is:</p>
<p>How do those daemon sets replace the routing tables modifications and make sure pods can communicate across nodes?</p>
<p>How do daemon sets which are also pods, influence the network and other pods which have different namespaces?</p>
| Ezwig | <blockquote>
<p>How do those daemon sets replace the routing tables modifications and make sure pods can communicate across nodes?</p>
</blockquote>
<p>Networking can be customized with a <em>kubenet-plugin</em> or a <em>CNI-plugin</em> as described in <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">Network Plugins</a> to the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> that runs on every node. The Network Plugin is responsible for handling the routing, possibly by using <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">kube-proxy</a>. E.g. Cilium CNI plugin is a <a href="https://cilium.io/blog/2019/08/20/cilium-16/" rel="nofollow noreferrer">complete replacement of kube-proxy</a> and is using <a href="https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables/" rel="nofollow noreferrer">eBPF instead of iptables</a>.</p>
<blockquote>
<p>How do daemon sets which are also pods, influence the network and other pods which have different namespaces?</p>
</blockquote>
<p>Yes, the pods of a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> are normal pods. The kubelet is a special <a href="https://kubernetes.io/docs/concepts/overview/components/#node-components" rel="nofollow noreferrer">node component</a> that manages pods (except containers not created by Kubernetes).</p>
<p><strong><a href="https://www.youtube.com/watch?v=0Omvgd7Hg1I" rel="nofollow noreferrer">Life of a packet</a></strong> is a recommended presentation about Kubernetes Networking</p>
| Jonas |
<p>I’m proposing a project to my school supervisor, which is to improve our current server to be more fault tolerant, easily scaling and able to handle high traffic.
I have a plan to build a distributed system starting from deploying our server to different PCs and implement caching and load balancing, etc.
But I want to know whether Kubernetes already can satisfy my objective? what are the tradeoff between using Kubernetes and building own distributed system to deploy applications?</p>
<p>Our applications are built with Django and most are likely used by students such course planner or search/recommend systems.</p>
| NYL | <blockquote>
<p>I’m proposing a project to my school supervisor, which is to improve our current server to be more fault tolerant, easily scaling and able to handle high traffic.</p>
</blockquote>
<p>This is a great idea. What you really want to do here is to use as many existing tools as you can, to let you <strong>focus</strong> on improving the <em>core functionality</em> of your <strong>current server</strong> - e.g. serving your users with your business logic and data and increase availability of this.</p>
<p>Focusing on your <em>core functionality</em> means that you should NOT do, e.g.</p>
<ul>
<li>NOT write your own memory allocation algorithm or garbage collection</li>
<li>NOT write your own operating system</li>
<li>NOT write your own container scheduler (e.g. what Kubernetes can do for you)</li>
</ul>
<blockquote>
<p>I have a plan to build a distributed system starting from deploying our server to different PCs and implement caching and load balancing</p>
</blockquote>
<p>Most applications deployed on Kubernetes or that have your <em>availability</em> requirements actually <em>should be a <strong>distributed system</strong></em> - e.g. be composed of more than one instance, designed for elasticity and resiliency.</p>
<blockquote>
<p>what are the tradeoff between using Kubernetes and building own distributed system to deploy applications?</p>
</blockquote>
<p>Kubernetes is a tool, almost a distributed operating system, that e.g. schedules containerized apps onto a server farm. It is a tool that can help you a lot when developing and designing your distributed application, which should follow the <a href="https://12factor.net/" rel="nofollow noreferrer">Twelve Factor principles</a>.</p>
| Jonas |
<h2>Objective</h2>
<p>I have deployed <a href="https://airflow.apache.org/" rel="nofollow noreferrer">Apache Airflow</a> on AWS' <a href="https://aws.amazon.com/eks/" rel="nofollow noreferrer">Elastic Kubernetes Service</a> using Airflow's Stable Helm chart. My goal is to create an Ingress to allow others to access the airflow webserver UI via their browser. It's worth mentioning that I am deploying on EKS using AWS Fargate. My experience with Kubernetes is somewhat limited, and I have not set up an Ingress myself before.</p>
<h2>What I have tried to do</h2>
<p>I am currently able to connect to the airflow web-server pod via port-forwarding (like <code>kubectl port-forward airflow-web-pod 8080:8080</code>). I have tried setting the Ingress through the Helm chart (documented <a href="https://github.com/airflow-helm/charts/tree/main/charts/airflow" rel="nofollow noreferrer">here</a>). After which:</p>
<p>Running <code>kubectl get ingress -n dp-airflow</code> I got:</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
airflow-flower <none> foo.bar.com 80 3m46s
airflow-web <none> foo.bar.com 80 3m46s
</code></pre>
<p>Then running <code>kubectl describe ingress airflow-web -n dp-airflow</code> I get:</p>
<pre><code>Name: airflow-web
Namespace: dp-airflow
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
/airflow airflow-web:web (<redacted_ip>:8080)
Annotations: meta.helm.sh/release-name: airflow
meta.helm.sh/release-namespace: dp-airflow
</code></pre>
<p>I am not sure what did I need to put into the browser, so I have tried using <code>http://foo.bar.com/airflow</code> as well as the cluster endpoint/ip without success.</p>
<p>This is how the airflow webservice service looks like:
Running <code>kubectl get services -n dp-airflow</code>, I get:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
airflow-web ClusterIP <redacted_ip> <none> 8080/TCP 28m
</code></pre>
<h3>Other things I have tried</h3>
<p>I have tried creating an Ingress without the Helm chart (I am using Terraform), like:</p>
<pre class="lang-sh prettyprint-override"><code>resource "kubernetes_ingress" "airflow_ingress" {
metadata {
name = "ingress"
}
spec {
backend {
service_name = "airflow-web"
service_port = 8080
}
rule {
http {
path {
backend {
service_name = "airflow-web"
service_port = 8080
}
path = "/airflow"
}
}
}
}
}
</code></pre>
<p>However I was still not able to connect to the web UI. What are the steps that I need to take to set up an Ingress? Which address do I need to use in my browser to connect to the web UI?</p>
<p>I am happy to provide further details if needed.</p>
| alt-f4 | <p>It sounds like you have created <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a> resources. That is a good step. But for those Ingress resources to have any effect, you also need an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="noreferrer">Ingress Controller</a> that can <em>realize</em> your Ingress as an actual <em>load balancer</em>.</p>
<p>In an AWS environment, you should look at <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller" rel="noreferrer">AWS Load Balancer Controller</a> that creates an <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="noreferrer">AWS Application Load Balancer</a> that is configured according your Ingress resources.</p>
<blockquote>
<p>Ingress to connect to a ClusterIP service?</p>
</blockquote>
<p>First, the default load balancer is the <em>classic load balancer</em>, but you probably want the newer <em>Application Load Balancer</em> for your Ingress resources, so add this annotation on your Ingress resources:</p>
<pre><code>annotations:
kubernetes.io/ingress.class: alb
</code></pre>
<p>By default, your services should be of type <code>NodePort</code>, but as you request, it is possible to use <code>ClusterIP</code> services as well, when you also add this annotation on your Ingress resource (for traffic mode):</p>
<pre><code>alb.ingress.kubernetes.io/target-type: ip
</code></pre>
<p>See the <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="noreferrer">ALB Ingress documentation</a> for more on this.</p>
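<p>Putting the two annotations together, a hedged sketch of an Ingress for the webserver could look like this; the service name <code>airflow-web</code>, port <code>8080</code> and namespace <code>dp-airflow</code> are taken from your output, the scheme is an assumption:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: airflow-web
  namespace: dp-airflow
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing   # assumption: public access
    alb.ingress.kubernetes.io/target-type: ip           # required for ClusterIP services and Fargate pods
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: airflow-web
              servicePort: 8080
</code></pre>
<p>Once the controller has reconciled it, <code>kubectl get ingress -n dp-airflow</code> should show the DNS name of the provisioned ALB in the ADDRESS column, and that is the address you put into the browser.</p>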
| Jonas |
<p>I want to create PV using the <code>kubectl</code> command line, Is it possible in Kubernetes or not. If it is possible then how?</p>
| user15003460 | <p>For infrastructure, it has become a good practice to use <em>declarative</em> files to provision new infrastructure. Therefore an imperative variant only using <code>kubectl</code> commands for creating <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">Persistent Volumes</a> has not been implemented.</p>
<p>Persistent Volumes are typically created in one of two ways:</p>
<ul>
<li>Automatically created by a <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="noreferrer">dynamic volume provisioning</a> service. What service is used depends on your environment, e.g. on-prem or a cloud provider like Google Cloud Platform or Amazon Web Services.</li>
<li>In some cases, created declaratively using <code>kubectl</code> and yaml manifests, typically by an operator (see the sketch below the list).</li>
</ul>
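<p>As a sketch of the second option, a statically provisioned volume can be declared in a manifest and applied with <code>kubectl</code> (the name, size and host path here are just examples):</p>
<pre class="lang-yaml prettyprint-override"><code># pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data   # only suitable for single-node or test clusters
</code></pre>
<p>Then run <code>kubectl apply -f pv.yaml</code> - declarative, but still driven from the command line.</p>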
| Jonas |
<p>What is the difference between using <a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a> and <a href="https://cloud.google.com/tekton/" rel="nofollow noreferrer">Tekton</a> for deployment?</p>
<p>To me it looks like Kustomize is a lightweight CI/CD client developer tool where you manually go in and do your CI/CD, whereas Tekton is automated CI/CD running within Kubernetes?</p>
| Chris G. | <p>Kustomize is a tool for overriding (instead of templating) your Kubernetes manifest files. It is now built into kubectl with <code>kubectl apply -k</code>.</p>
<p>Tekton is a project that provides Kubernetes Custom Resources for building CI/CD pipelines of tasks on Kubernetes. One of the tasks in a pipeline can be an image with <code>kubectl</code> that applies the changes using Kustomize (<code>kubectl apply -k</code>).</p>
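<p>As a small illustration of the Kustomize side (file names and the image name are just examples), a <code>kustomization.yaml</code> references plain manifests and overrides e.g. the image tag:</p>
<pre class="lang-yaml prettyprint-override"><code># kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: myapp        # image name used in deployment.yaml
    newTag: "1.2.3"    # tag to deploy
</code></pre>
<p>Running <code>kubectl apply -k .</code> against this directory is exactly the kind of step a Tekton task can execute for you in a pipeline.</p>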
| Jonas |
<p>Let's say we're using EKS on AWS, would we need to manually manage the underlying Node's OS, installing patches and updates? </p>
<p>I would imagine that the pods and containers running inside the Node could be updated by simply version bumping the containers OS in your Dockerfile, but I'm unsure about how that would work for the Node's OS. Would the provider (AWS) in this case manage that? </p>
<p>Would be great to get an explanation for both Windows and Linux nodes. Are they different? Thank you!</p>
| alejandro | <p>Yes, you need to keep the nodes updated. But this has recently became easier with the new <a href="https://aws.amazon.com/bottlerocket/" rel="noreferrer">Bottlerocket</a> - container optimized OS for nodes in EKS.</p>
<blockquote>
<p>Updates to Bottlerocket can be automated using container orchestration services such as Amazon EKS, which lowers management overhead and reduces operational costs.</p>
</blockquote>
<p>See also the blog post <a href="https://aws.amazon.com/blogs/aws/bottlerocket-open-source-os-for-container-hosting/" rel="noreferrer">Bottlerocket – Open Source OS for Container Hosting</a></p>
| Jonas |
<p>Following the GoQuorum Official documentation, I was able to setup quorum nodes using Kubernetes and also bare metal raft setup with the help of the following links respectively <a href="https://docs.goquorum.consensys.net/en/stable/HowTo/GetStarted/Getting-Started-Qubernetes/" rel="nofollow noreferrer">Qubernetes Setup</a> and <a href="https://docs.goquorum.consensys.net/en/stable/Tutorials/Create-a-Raft-network/" rel="nofollow noreferrer">Raft Setup Bare Metal</a></p>
<p>If I would like to have my quorum nodes deployed on a Kubernetes cluster, can I use the replica set feature effectively to replicate a quorum node for high availability? If not, on Kubernetes, what is the best way to maintain a replica of a node for load balancing a high number of gRPC requests? When I am trying to replicate it, I am facing issues and my pod is crashing.</p>
| Aryama I | <p>If you want to deploy a Raft-based application on Kubernetes, you want your instances to talk to the other instances.</p>
<p>In this case, you want "Stable, unique network identifiers" so that your instances can effectively address requests to the other instances using a known instance address.</p>
<p>Deploy your app as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> to get this feature.</p>
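<p>A minimal sketch of that shape (the names, port and image are placeholders, not a working Quorum configuration); the headless Service gives each pod a stable DNS name such as <code>quorum-node-0.quorum</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: quorum
spec:
  clusterIP: None          # headless Service for stable per-pod DNS
  selector:
    app: quorum-node
  ports:
    - port: 8545
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: quorum-node
spec:
  serviceName: quorum      # must match the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: quorum-node
  template:
    metadata:
      labels:
        app: quorum-node
    spec:
      containers:
        - name: node
          image: example/quorum-node:latest   # placeholder image
          ports:
            - containerPort: 8545
</code></pre>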
| Jonas |
<p>I would like to deploy a RESTService in kubernetes behind a gateway and a service discovery. There is a moment where I will have my RestService version 1 and my RestService version 2. </p>
<p>Both will have the exact same URLs, but I might deploy them in pods where I label the version. When I make a call to the RESTService I would like to add something in the HTTP header indicating that I want to use my V2.</p>
<p>Is there any way I can route the traffic properly to the set of pods? (I'm not sure if using label is the right way). I also have to keep in mind that in the future I will have a V3 with new services and my urls will change, it cannot be something configured statically. I will also have serviceA with v1
and servicesB with v3. Both behind the same service discovery, both must be routed properly using the header parameter (or similar).</p>
<p>I'm not sure if Envoy is the right component for this, or is there anything else? and I'm not sure in which moment I should place this component.
I'm missing something, I'm still quite confused with kubernetes. Does anybody have and example from something similar?</p>
| Elena | <p>Yes, you can have <strong>two</strong> <code>Deployments</code>, with different labels, e.g.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: rest-service-v1
labels:
app: rest-service
spec:
selector:
matchLabels:
app: rest-service
version: v1
template:
metadata:
labels:
app: rest-service
version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rest-service-v3
labels:
app: rest-service
spec:
selector:
matchLabels:
app: rest-service
version: v3
template:
metadata:
labels:
app: rest-service
version: v3
</code></pre>
<p>Then you create an <code>Service</code> for each:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: rest-service-v1
spec:
selector:
app: rest-service
version: v1
---
apiVersion: v1
kind: Service
metadata:
name: rest-service-v3
spec:
selector:
app: rest-service
version: v3
</code></pre>
<p>and finally an <code>Ingress</code> object. However, the default Ingress can only route by <em>path</em>. You may find a 3rd party Ingress Controller that can route by <em>Header Value</em></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rest-service   # example name
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /v1/*
backend:
serviceName: rest-service-v1
servicePort: 8080
- path: /v3/*
backend:
serviceName: rest-service-v3
servicePort: 8080
</code></pre>
| Jonas |
<br>
Maybe this question is very wrong but my research so far hasn't been very helpful.
<br> My plan is to deploy a server app to multiple pods, as replicas (same code running in multiple pods), and I want each pod to be able to communicate with the rest of the pods. <br>
More specifically, I need to broadcast a message to all the other pods every x minutes.
<p>I cannot find examples of how I could do that with Python code or anything helpful related to the communication internally between the pods. I can see some instructions for the yaml configurations that I should use to make that possible , but no practical examples , which makes me think that maybe using Kubernetes is not the best technology service for what I am trying to do (?).</p>
<p>Any advice/suggestion/documentation is more than needed.
Thank you</p>
| Flora Biletsiou | <p>Applications are typically deployed as a <code>Deployment</code> to Kubernetes; however, in use-cases where you want a <strong>stable network identity</strong> for your Pods, it is easier to deploy your app as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.</p>
<p>When your app is deployed as <code>StatefulSet</code> the pods will be named e.g.: <code>appname-0</code>, <code>appname-1</code>, <code>appname-2</code> if your <code>StatefulSet</code> is named <code>appname</code> and your replicas is <code>replicas: 3</code></p>
<blockquote>
<p>I cannot find examples of how I could do that with Python code</p>
</blockquote>
<p>This is just plain network programming between the pods. You can use any UDP or TCP protocol, e.g. you can use http for this. The network address is based on the pod name and the StatefulSet's headless Service (since your replicas are Pods within the same namespace), e.g. <code>http://appname-0.appname</code> or <code>http://appname-1.appname</code>.</p>
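<p>A minimal Python sketch of such a periodic broadcast, assuming a StatefulSet named <code>appname</code> with 3 replicas governed by a headless Service that is also named <code>appname</code>, and that each pod exposes an HTTP endpoint <code>/broadcast</code> on port 8080 (all of those names and numbers are assumptions):</p>
<pre class="lang-py prettyprint-override"><code>import socket
import requests

STATEFULSET_NAME = "appname"   # assumed StatefulSet / headless Service name
REPLICAS = 3                   # assumed replica count
PORT = 8080                    # assumed container port

# In a pod, the hostname equals the pod name, e.g. "appname-1"
MY_NAME = socket.gethostname()


def broadcast(message: str) -> None:
    """Send the message to every peer pod except ourselves."""
    for i in range(REPLICAS):
        peer = f"{STATEFULSET_NAME}-{i}"
        if peer == MY_NAME:
            continue
        url = f"http://{peer}.{STATEFULSET_NAME}:{PORT}/broadcast"
        try:
            requests.post(url, json={"message": message}, timeout=5)
        except requests.RequestException as exc:
            # A peer may be restarting or rescheduled - log and carry on
            print(f"could not reach {peer}: {exc}")
</code></pre>
<p>Calling <code>broadcast()</code> from a simple timer loop (or a scheduler library) every x minutes gives you the behaviour you describe.</p>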
| Jonas |
<p>We are migrating our infrastructure to kubernetes. I am talking about a part of it, that contains of an api for let's say customers (we have this case for many other resources). Let's consider we have a billion customers, each with some data etc. and we decided they deserve a specialized api just for them, with its own db, server, domain, etc.</p>
<p>Kubernetes has the notion of nodes and pods. So we said "ok, we dedicate node X with all its resources to this particular api". And now the question:</p>
<p>Why would I use multiple pods each of them containing the same nginx + fpm and code, and limit it to a part of traffic and resources, and add an internal lb, autoscale, etc., instead of having a single pod, with all node resources?</p>
<p>Since each pod adds a bit of extra memory consumption this seems like a waste to me. The only upside being the fact that if something fails only part of it goes down (so maybe 2 pods would be optimal in this case?).</p>
<p>Obviously, would scale the nodes when needed.</p>
<p>Note: I'm not talking about a case where you have multiple pods with different stuff, I'm talking about that particular case.</p>
<p>Note 2: The db already is outside this node, on it's own pod.</p>
<p>Google fails me on this topic. I find hundreds of post with "how to configure things, but 0 with WHY?".</p>
| zozo | <blockquote>
<p>Why would I use multiple pods each of them containing the same nginx + fpm and code, and limit it to a part of traffic and resources, and add an internal lb, autoscale, etc., instead of having a single pod, with all node resources?</p>
</blockquote>
<blockquote>
<p>Since each pod adds a bit of extra memory consumption this seems like a waste to me. The only upside being the fact that if something fails only part of it goes down (so maybe 2 pods would be optimal in this case?).</p>
</blockquote>
<p>This comes down to the question, should I scale my app <strong>vertically (larger instance)</strong> or <strong>horizontally (more instances)</strong>.</p>
<p>First, try to avoid using only a <em>single instance</em> since you probably want more redundancy if you e.g. upgrade a Node. A single instance may be a good option if you are OK with some downtime sometimes.</p>
<h2>Scale app vertically</h2>
<p>Scaling an app vertically, by changing to a bigger instance, is a viable alternative that sometimes is a good option - especially when the app can not be scaled horizontally, e.g. an app that uses the <em>leader election</em> pattern - typically listening for a specific event and reacting to it. There is however a limit to how much you can scale an app vertically.</p>
<h2>Scale app horizontally</h2>
<p>For a stateless app, it is usually <strong>much easier</strong> and cheaper to scale an app horizontally by adding more instances. You typically want more than one instance anyway, since you want to tolerate that a Node goes down for maintenance. This is also possible to do for a large scale app, up to very many instances - and the cost scales linearly. However, not every app can scale horizontally, e.g. a distributed database (replicated) can typically not scale well horizontally unless you shard the data. You can even use <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> to automatically adjust the number of instances depending on how busy the app is.</p>
<h2>Trade offs</h2>
<p>As described above, horizontal scaling is usually easier and preferred. But there are trade offs - you would probably not want to run thousands of instances when you have low traffic - an instance has some resource overhead costs, also in maintainability. For <em>availability</em> you should run at least 2 pods and make sure that they do not run on the same node; if you have a <em>regional</em> cluster, you also want to make sure that they do not run in the same <em>Availability Zone</em> - for availability reasons. Consider 2-3 pods when your traffic is low, and use <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> to automatically scale up to more instances when you need them. In the end, this is a numbers game - resources cost money - but you want to provide a good service for your customers as well.</p>
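<p>A minimal sketch of a Horizontal Pod Autoscaler for a Deployment (the name and the numbers are just examples):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api          # the Deployment to scale
  minReplicas: 2          # keep at least two pods for availability
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
</code></pre>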
| Jonas |
<p>So, I am very new to using EKS with NLB ingress and managing my own worker nodes using nodegroup (ASG).
If I create a NLB ingress for the cluster and deploy multiple services inside the node group, how does NLB know that it has to load balance across service separately?
Generally, when I have not used EKS and created by own k8s cluster, I have spun one NLB per service. Not sure how would it work in case of EKS with one NLB ingress for the whole cluster with multiple service inside.
Or, do I need to create multiple NLBs somehow?
Any help would be highly appreciated</p>
| Hary | <blockquote>
<p>when I have not used EKS and created by own k8s cluster, I have spun one NLB per service</p>
</blockquote>
<p>AWS EKS is no different on this point. For a Network Load Balancer, NLB, e.g. on TCP/UDP level, you use a Kubernetes <code>Service</code> of <code>type: LoadBalancer</code>. But there are options, configured by the annotations on the <code>Service</code>. The most recent <a href="https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/" rel="nofollow noreferrer">feature is IP mode</a>. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html" rel="nofollow noreferrer">EKS Network Load Balancing doc</a> for more configuration alternatives.</p>
<p>Example:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: nlb-ip-svc
annotations:
# route traffic directly to pod IPs
service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
type: LoadBalancer
selector:
app: nginx
</code></pre>
<blockquote>
<p>If I create a NLB ingress for the cluster and deploy multiple services inside the node group, how does NLB know that it has to load balance across service separately?</p>
</blockquote>
<p>The load balancer uses the target pods that is matched by the <code>selector:</code> in your <code>Service</code>.</p>
<p>The alternative is to use an Application Load Balancer, ALB that is working on the HTTP/HTTPS level using the Kubernetes <code>Ingress</code> resources. The ALB requires an Ingress controller installed in the cluster and the controller for the ALB is recently updated, see <a href="https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/" rel="nofollow noreferrer">AWS Load Balancer Controller</a></p>
| Jonas |
<p>I have a pod that uses 2 persistent volumes. The persistent volumes are in different zones. While deploying I get the following error:</p>
<pre><code>node(s) had volume node affinity conflict
</code></pre>
<p>Any solution to the above problem?</p>
| lokesh mani deep | <blockquote>
<p>I have a pod that uses 2 persistent volumes. The persistent volumes are in different zones.</p>
</blockquote>
<p>The volumes that your Pod mounts must be in the same Availability Zone, so that they can be mounted on the Node where the Pod is scheduled.</p>
<p>You <em>can</em> also use a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd" rel="nofollow noreferrer">Regional Persistent Volume</a> by setting the StorageClass to <code>regionalpd-storageclass</code>, but this is more expensive and slower and makes your volume mirrored across <strong>two</strong> zones. This is a bit more complicated and probably not what you want to do.</p>
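<p>If you do decide to try a regional volume on GKE, a sketch of such a StorageClass looks like this (the two zones are examples and must be zones in your cluster's region):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd   # mirror the disk across two zones
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-central1-a
          - us-central1-b
</code></pre>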
| Jonas |
<h1>Motive</h1>
<p>I want to fully automate the deployment of many <strong>services</strong> with the help of <a href="https://cloud.google.com/cloud-build/" rel="noreferrer">Google Cloud Build</a> and <a href="https://cloud.google.com/kubernetes-engine/" rel="noreferrer">Google Kubernetes Engine</a>. Those services are located inside a <strong>monorepo</strong>, which has a folder called <code>services</code>.</p>
<p>So I created a <code>cloudbuild.yaml</code> for every service and created a build trigger. The <code>cloudbuild.yaml</code> does:</p>
<ol>
<li>run tests</li>
<li>build new version of Docker image</li>
<li>push new Docker image</li>
<li>apply changes to Kubernetes cluster</li>
</ol>
<h1>Issue</h1>
<p>As the number of services increases, the number of build triggers increases, too. There are also more and more services that are built even though they haven't changed.</p>
<p>Thus I want a mechanism, which has only <strong>one</strong> build trigger and automatically determines which services need to be rebuild.</p>
<h1>Example</h1>
<p>Suppose I have a monorepo with this file structure:</p>
<pre><code>├── packages
│ ├── enums
│ ├── components
└── services
├── backend
├── frontend
├── admin-dashboard
</code></pre>
<p>Then I make some changes in the <code>frontend</code> service. Since the <code>frontend</code> and the <code>admin-dashboard</code> service depend on the <code>components</code> package multiple services need to be rebuild:</p>
<ul>
<li>frontend</li>
<li>admin-dashboard</li>
</ul>
<p>But <strong>not</strong> backend!</p>
<h1>What I've Tried</h1>
<h3>(1) Multiple build triggers</h3>
<p>Setting up multiple build triggers for <strong>every</strong> service. But 80% of those builds are redundant, since most changes in the code are only related to individuals services. It's also increasingly complex to manage many build triggers, which look almost identical. A single <code>cloudbuild.yaml</code> file looks like this:</p>
<pre><code>steps:
- name: "gcr.io/cloud-builders/docker"
args:
[
"build",
"-f",
"./services/frontend/prod.Dockerfile",
"-t",
"gcr.io/$PROJECT_ID/frontend:$REVISION_ID",
"-t",
"gcr.io/$PROJECT_ID/frontend:latest",
".",
]
- name: "gcr.io/cloud-builders/docker"
args: ["push", "gcr.io/$PROJECT_ID/frontend"]
- name: "gcr.io/cloud-builders/kubectl"
args: ["apply", "-f", "kubernetes/gcp/frontend.yaml"]
env:
- "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
- "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
</code></pre>
<h3>(2) Looping through cloudbuild files</h3>
<p><a href="https://stackoverflow.com/questions/51861870">This</a> question is about a very similar issue. So I've tried to set up one "entry-point" <code>cloudbuild.yaml</code> file in the root of the project and looped through all services:</p>
<pre><code>steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args:
- '-c'
- |
for d in ./services/*/; do
config="${d}cloudbuild.yaml"
if [[ ! -f "${config}" ]]; then
continue
fi
echo "Building $d ... "
(
gcloud builds submit $d --config=${config}
) &
done
wait
</code></pre>
<p>This would eliminate the need for having multiple build triggers. But I also ran into issues with this method:</p>
<p>Every service is sent into it's own build process with the file scope of this particular service. This means, that I can only access files inside <code>/services/specific-service</code> during the build. Which is a total bummer for me (I need access to files in parent directories like <code>packages</code> and config files in the root).</p>
<h3>(3) Build only changed services</h3>
<p>Since I want a mechanism to only build changed services, I've tried to determine the services that need to be rebuilt. It seems quite easy to do this with the help of <a href="https://github.com/lerna/lerna" rel="noreferrer">lerna</a>. Running</p>
<pre><code>lerna changed --all --parseable
</code></pre>
<p>will return a list file paths to the changed packages like this:</p>
<pre><code>/home/username/Desktop/project/packages/components
/home/username/Desktop/project/services/frontend
/home/username/Desktop/project/services/admin-dashboard
</code></pre>
<p>However, the list also includes <code>packages</code> and I have no idea how I would be able to use this list in a script to loop through affected services. Also: when I trigger a build (e.g. through tagging a commit), lerna wouldn't be able to recognize changed packages during the build process as the changes have already been committed.</p>
<hr>
<p>I know this is a long one. But I think it's an important topic, so I really appreciate any help! </p>
<p>P.S.: <a href="https://github.com/flolude/cents-ideas/tree/develop" rel="noreferrer">This</a> is how my actual project looks like, if you want to take a close look at the specific use-case.</p>
| Florian Ludewig | <p>To build from a <strong>monorepo</strong> you really want to build incrementally (what has changed, plus the parts that depend on the changed parts). To achieve this, your build tool needs to handle a dependency graph in some way.</p>
<p>Lerna, which you describe, is designed for monorepos. But so is <a href="https://bazel.build/" rel="nofollow noreferrer">Bazel</a>, and it is available as an option in Google Cloud Build, <a href="https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/bazel" rel="nofollow noreferrer">cloud-builders/bazel</a>, with documentation for using it in combination with a docker builder.</p>
<p>However, build tools designed for monorepos are usually more complex to set up.</p>
| Jonas |
<p>I am using a GCE cluster with 2 nodes, which I set up using kubeadm. Now I want to set up a persistent volume for postgresql to be deployed. I created a PVC and PV with a storageClass and also created a disk space with 10G in name <strong>postgres</strong> in the same project.Iam attaching the scripts for the PVC,PV,and Deployment below.Also I am using a service account that have the access to the disks.</p>
<p>1.Deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kyc-postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: "postgres:9.6.2"
name: postgres
ports:
- containerPort: 5432
name: postgres
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/db-data
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: kyc-postgres-pvc
</code></pre>
<p>2.PersistentVolumeClaim.yml</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: kyc-postgres-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: standard
</code></pre>
<p>3.PersistentVolume.yml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: kyc-postgres-pv
annotations:
kubernetes.io/createdby: gce-pd-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: "yes"
pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
finalizers:
- kubernetes.io/pv-protection
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 5Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: kyc-postgres-pvc
namespace: default
gcePersistentDisk:
fsType: NTFS
pdName: postgres
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: failure-domain.beta.kubernetes.io/zone
operator: In
values:
- us-central1-a
- key: failure-domain.beta.kubernetes.io/region
operator: In
values:
- us-central1-a
persistentVolumeReclaimPolicy: Delete
storageClassName: standard
volumeMode: Filesystem
status:
phase: Bound
</code></pre>
<ol start="4">
<li>StorageClass.yml</li>
</ol>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
zone: us-central1-a
</code></pre>
<p>Now when I create these volumes and deployments, the pod is not getting started properly.Iam getting the following errors when I tired creating deployments.</p>
<pre><code>Failed to get GCE GCECloudProvider with error <nil>
</code></pre>
<p>Also Iam attaching my output for <code>kubectl get sc</code></p>
<pre><code>NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard kubernetes.io/gce-pd Delete Immediate false 10m
</code></pre>
<p>Can someone help me with this.Thanks in advance for your time – if I’ve missed out anything, over- or under-emphasised a specific point let me know in the comments.</p>
| Shilpa | <p>Your <code>PersistentVolumeClaim</code> does not specify a <code>storageClassName</code>, so I suppose you may want to use the default <em>StorageClass</em>. When using a default StorageClass, you don't need to create a <code>PersistentVolume</code> resource; one will be provisioned dynamically by Google Cloud Platform. (Or is there any specific reason you don't want to use the default StorageClass?)</p>
| Jonas |
<p>Each <code>t2.micro</code> node should be able to run 4 pods according to this <a href="https://dev.to/wingkwong/how-to-fix-insufficient-pods-issue-when-deploying-to-amazon-eks-d35" rel="noreferrer">article</a> and the command <code>kubectl get nodes -o yaml | grep pods</code> output.</p>
<p>But I have two nodes and I can launch only 2 pods. 3rd pod gets stuck with the following error message.</p>
<p>Could it be the application using too much resource and as a result its not launching more pods? If that was the case it could indicate <code>Insufficient CPU or memory</code>.</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 33s (x2 over 33s) default-scheduler 0/2 nodes are available: 2 Too many pods.
</code></pre>
| user630702 | <p>According to the AWS documentation <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI" rel="noreferrer">IP addresses per network interface per instance type</a> the <code>t2.micro</code> only has <code>2</code> Network Interfaces and <code>2</code> IPv4 addresses <strong>per interface</strong>. So you are right, only 4 IP addresses.</p>
<p>But EKS also runs system pods such as CoreDNS and the kube-proxy and aws-node <code>DaemonSets</code>, so some pod slots and IP addresses <strong>on each node</strong> are already allocated.</p>
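<p>As a rough sketch of the arithmetic (this is the formula behind the <code>eni-max-pods.txt</code> values in the EKS-optimized AMI):</p>
<pre><code>max pods = ENIs × (IPv4 addresses per ENI − 1) + 2
         = 2 × (2 − 1) + 2
         = 4
</code></pre>
<p>With <code>aws-node</code> and <code>kube-proxy</code> taking a slot on every node and the two CoreDNS replicas scheduled somewhere, two t2.micro nodes typically leave only about 2 slots for your own pods - which matches what you are seeing.</p>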
| Jonas |
<p>We are currently using 2 Nodes, but we may need more in the future.</p>
<p>The StatefulSet is a <a href="https://hub.kubeapps.com/charts/bitnami/mariadb-galera" rel="nofollow noreferrer">mariadb-galera</a>; its current replica count is 2.</p>
<p>When we add a new Node we want the replicas to be 3; if we don't need it anymore and we delete it or another Node, we want the replicas to be 2.</p>
<p>In fact, if we have 3 Nodes we want 3 replicas, one on each Node.</p>
<p>I could use <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods" rel="nofollow noreferrer">Pod Topology Spread Constraints</a> but we'll have a bunch of "notScheduled" pods.</p>
<p>Is there a way to adapt the number of Replica automatically, every time a nodes is add or remove?</p>
| destroyed | <blockquote>
<p>When we add a new Node we want the replicas to be 3; if we don't need it anymore and we delete it or another Node, we want the replicas to be 2.</p>
</blockquote>
<p>I would recommend to do it the other way around. Manage the replicas of your <em>container workload</em> and let the number of nodes be adjusted after that.</p>
<p>See e.g. <a href="https://github.com/kubernetes/autoscaler" rel="nofollow noreferrer">Cluster Autoscaler</a> for how this can be done, it depends on what cloud provider or environment your cluster is using.</p>
<p>It is also important to specify your <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">CPU and Memory requests</a> so that they occupy the whole node.</p>
<p>For MariaDB and similar workloads, you should use a <code>StatefulSet</code> and not a <code>DaemonSet</code>.</p>
| Jonas |
<p>I have a local Docker setup consisting of four containers: a flask web app, MySQL, Redis, and an RQ worker.</p>
<p>The setup is essentially the same as Miguel Grinberg's Flask Mega-Tutorial. Here are links for his <a href="https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xxii-background-jobs" rel="nofollow noreferrer">tutorial</a> and his <a href="https://github.com/miguelgrinberg/microblog/tree/v0.22" rel="nofollow noreferrer">code</a>.</p>
<p>The only difference in my case is that I've replaced his export blog post function, which runs on the rq-worker, with another that is incredibly computationally intensive and long running (30 minutes).</p>
<p><strong>What is the best way for me to deploy this application for production?</strong> </p>
<p>I only expect it to be accessed by a one or two people at a time and for them to visit only once or twice a week.</p>
<p>I've been looking into Kubernetes examples but I'm having difficulty translating them to my setup and figuring out how to deploy to GCP. I'm open to other deployment options.</p>
<p>Here are the docker run commands from the tutorial:</p>
<p><code>docker run --name redis -d -p 6379:6379 redis:3-alpine</code></p>
<pre><code>docker run --name mysql -d -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
-e MYSQL_DATABASE=flaskapp -e MYSQL_USER=flaskapp \
-e MYSQL_PASSWORD=mysqlpassword \
mysql/mysql-server:5.7
</code></pre>
<pre><code>docker run --name rq-worker -d --rm -e SECRET_KEY=my-secret-key \
-e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
-e [email protected] -e MAIL_PASSWORD=mysqlpassword \
--link mysql:dbserver --link redis:redis-server \
-e DATABASE_URL=mysql+pymysql://flaskapp:mypassword@dbserver/flaskapp \
-e REDIS_URL=redis://redis-server:6379/0 \
--entrypoint venv/bin/rq \
flaskapp:latest worker -u redis://redis-server:6379/0 dyson-tasks
</code></pre>
<pre><code>docker run --name flaskapp -d -p 8000:5000 --rm -e SECRET_KEY=my_secret_key \
-e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
-e [email protected] -e MAIL_PASSWORD=mypassword \
--link mysql:dbserver --link redis:redis-server \
-e DATABASE_URL=mysql+pymysql://flaskapp:mysqlpassword@dbserver/flaskapp \
-e REDIS_URL=redis://redis-server:6379/0 \
flaskapp:latest
</code></pre>
| Scott Guthart | <p>Since you tag the question with Kubernetes and Google Cloud Platform, I expect that is the direction that you want.</p>
<p>When deploying to a cloud platform, consider using a cloud-ready storage / database solution. A single-node MySQL is not cloud-ready storage out of the box. Consider using e.g. <a href="https://cloud.google.com/sql/" rel="nofollow noreferrer">Google Cloud SQL</a> instead.</p>
<p>Your "flask web app" can perfectly be deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> to <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">Google Kubernetes Engine</a> - but this require that your app is <em>stateless</em> and follow <a href="https://12factor.net/" rel="nofollow noreferrer">the twelve-factor app principles</a>.</p>
<p>Your Redis <em>can</em> also be deployed to Kubernetes, but you need to think about how important your availability requirements are. If you don't want to think about this, you can also use Google managed Redis, e.g. <a href="https://cloud.google.com/memorystore/" rel="nofollow noreferrer">Google memorystore</a> - a fully-managed in-memory data store service for Redis.</p>
<p>If you decide to use a fully managed cache, you could potentially deploy your "flask web app" as a container using <a href="https://cloud.google.com/run/" rel="nofollow noreferrer">Google Cloud Run</a> - this is a more managed solution than a full Kubernetes cluster, but also more limited. The good thing here is that you only pay for <em>requests</em>.</p>
| Jonas |
<p>I am learning Kubernetes at the moment. I have built a simple python application that uses Flask to expose rest APIs. Flask by default uses port 5000 to run the server. My API look like -</p>
<pre><code>http://0.0.0.0:5000/api
</code></pre>
<p>Application is built into a docker image</p>
<pre><code>FROM python:3.8.6-alpine
COPY . /app
WORKDIR /app
RUN \
apk add --no-cache python3 postgresql-libs && \
apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev postgresql-dev && \
python3 -m pip install -r requirements.txt --no-cache-dir && \
apk --purge del .build-deps
ENTRYPOINT ["python3"]
CMD ["app.py"]
</code></pre>
<p>I deploy this in a Kubernetes pod with pod definition</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: python-webapp
labels:
type: web
use: internal
spec:
containers:
- name: python-webapp
image: repo/python-webapp:latest
</code></pre>
<p>Everything works fine and I am able to access the api on the pod directly and through Kubernetes service. I am boggled how does the POD know that the application in the container is running on port 5000? Where is the mapping for a port on the container to port on the pod?</p>
| Jaspreet | <blockquote>
<p>I am boggled how does the POD know that the application in the container is running on port 5000?</p>
</blockquote>
<p>The pod does not know that. The app in the container, in the pod, can respond to requests on any port it listens on.</p>
<p>But to expose this to outside the cluster, you likely will forward traffic from a specific port to a specific port on your app, via a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> (that can map to different ports) and a <a href="https://kubernetes.io/docs/concepts/services-networking/" rel="nofollow noreferrer">load balancer</a>.</p>
<p>You can use <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policies</a> to restrict traffic in your cluster, e.g. to specific ports or services.</p>
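<p>A minimal sketch of such a Service for your pod; the selector reuses the <code>type: web</code> label from your pod definition, and the port numbers map an outward-facing port 80 to the Flask port 5000:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: python-webapp
spec:
  selector:
    type: web
  ports:
    - port: 80          # port the Service listens on
      targetPort: 5000  # port the Flask app listens on inside the container
</code></pre>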
| Jonas |
<p>I have an AWS EKS Kubernetes cluster. I have a set of EC2 nodes and also have configured a Fargate profile to launch pods in Fargate compute. Say I am running an App in namespace <code>alex-ns</code>. Can I somehow deploy 1 set of pods in Fargate and 1 set of different pods in my other EC2 nodegroup if they all reside in <code>alex-ns</code>? It appears that if I set the Fargate profile to match with namespace <code>alex-ns</code>, everything is launched in Fargate. However, I would like to split it up specifically based on labels or something. The reason I'm asking is I have to run a Pod that requires more than the 32 GB of RAM that's available in Fargate, so that pod must run in my EC2 node group.</p>
| alex | <blockquote>
<p>Can I somehow deploy 1 set of pods in Fargate and 1 set of different pods in my other EC2 nodegroup if they all reside in alex-ns?</p>
</blockquote>
<p>Yes, you can also add <strong>labels</strong> to the <a href="https://www.eksworkshop.com/beginner/180_fargate/creating-profile/" rel="nofollow noreferrer">Fargate Profile selector</a> in addition to the namespace <code>alex-ns</code>.</p>
<p>The EKS documentation has more info about <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html" rel="nofollow noreferrer">Fargate Profiles</a>, an example selector for your case:</p>
<pre><code> "selectors": [
{
"namespace": "alex-ns",
"labels": {
"fargate": "yes"
}
}
</code></pre>
| Jonas |
<p>I am trying to understand Kubernetes and how it works under the hood. As I understand it each pod gets its own IP address. What I am not sure about is what kind of IP address that is. </p>
<p>Is it something that the network admins at my company need to pass out? Or is an internal kind of IP address that is not addressable on the full network?</p>
<p>I have read about network overlays (like Project Calico) and I assume they play a role in this, but I can't seem to find a page that explains the connection. (I think my question is too remedial for the internet.)</p>
<p><strong>Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?</strong></p>
| Vaccano | <h2>Kubernetes clusters</h2>
<blockquote>
<p>Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?</p>
</blockquote>
<p>The thing with Kubernetes is that it is not a <em>service</em> like e.g. a Virtual Machine, but a <strong>cluster</strong> that has it's own networking functionality and management, including <strong>IP address allocation</strong> and <strong>network routing</strong>.</p>
<p>Your nodes may be virtual or physical machines, but they are registered in the NodeController, e.g. for health check and most commonly for IP address management.</p>
<blockquote>
<p>The node controller is a Kubernetes master component which manages various aspects of nodes.</p>
<p>The node controller has multiple roles in a node’s life. The first is assigning a CIDR block to the node when it is registered (if CIDR assignment is turned on).</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Cluster Architecture - Nodes</a></p>
<h2>IP address management</h2>
<p>Kubernetes Networking depends on the Container Network Interface (<a href="https://github.com/containernetworking/cni/blob/master/SPEC.md" rel="nofollow noreferrer">CNI</a>) plugin your cluster is using.</p>
<blockquote>
<p>A CNI plugin is responsible for ... It should then assign the IP to the interface and setup the routes consistent with the IP Address Management section by invoking appropriate IPAM plugin.</p>
</blockquote>
<p>It is common that each node is assigned a CIDR range of IP addresses that the node then assigns to pods that are scheduled on the node.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview" rel="nofollow noreferrer">GKE network overview</a> describes well how this works on GKE.</p>
<blockquote>
<p>Each node has an IP address assigned from the cluster's Virtual Private Cloud (VPC) network.</p>
<p>Each node has a pool of IP addresses that GKE assigns Pods running on that node (a /24 CIDR block by default).</p>
<p>Each Pod has a single IP address assigned from the Pod CIDR range of its node. This IP address is shared by all containers running within the Pod, and connects them to other Pods running in the cluster.</p>
<p>Each Service has an IP address, called the ClusterIP, assigned from the cluster's VPC network.</p>
</blockquote>
| Jonas |
<p>I am building a CI/CD pipeline using <strong>Tekton</strong> on a bare metal Kubernetes Cluster. I have managed to cache the necessary images (Node & Nginx) and the layers, but how can I cache the .cache / public folders created by <strong>Gatsby build</strong>? These folders are not present in the repo. If the build step does not find these folders in takes longer because it needs to create all images using Sharp.</p>
<p>The pipeline has a PVC attached. In the task it is called <em>source</em> (workspaces). To be more clear, how can I copy the Gatsby folders to this PVC after the build has finished and to the Kaniko container before the next build?</p>
<p><strong>The Tekton task has the following steps:</strong></p>
<ol>
<li>Use Kaniko warmer to cache Docker Images used in the Docker build</li>
<li>Create a timestamp so that "RUN build" is executed every time even if the files don't change because it runs a GraphQL query</li>
<li>Build and push image using Kaniko</li>
<li>& 5. Export image digest used by next step in the pipeline</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-docker-image
spec:
params:
- name: pathToDockerFile
type: string
description: The path to the dockerfile to build
default: $(resources.inputs.source-repo.path)/Dockerfile
- name: pathToContext
type: string
description: |
The build context used by Kaniko
(https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts)
default: $(resources.inputs.source-repo.path)
resources:
inputs:
- name: source-repo
type: git
outputs:
- name: builtImage
type: image
- name: event-to-sink
type: cloudEvent
workspaces:
# PVC
- name: source
description: |
Folder to write docker image digest
results:
- name: IMAGE-DIGEST
description: Digest of the image just built.
steps:
- name: kaniko-warmer
image: gcr.io/kaniko-project/warmer
workingDir: $(workspaces.source.path)
args:
- --cache-dir=$(workspaces.source.path)/cache
- --image=node:14-alpine
- --image=nginx:1.19.5
- name: print-date-unix-timestamp
image: bash:latest
script: |
#!/usr/bin/env bash
date | tee $(params.pathToContext)/date
- name: build-and-push
workingDir: $(workspaces.source.path)
image: gcr.io/kaniko-project/executor:v1.3.0
env:
- name: 'DOCKER_CONFIG'
value: '/tekton/home/.docker/'
command:
- /kaniko/executor
args:
- --build-arg=CACHEBUST=$(params.pathToContext)/date
- --dockerfile=$(params.pathToDockerFile)
- --destination=$(resources.outputs.builtImage.url)
- --context=$(params.pathToContext)
- --cache=true
- --cache-ttl=144h
- --cache-dir=$(workspaces.source.path)/cache
- --use-new-run
- --snapshotMode=redo
- --cache-repo=<repo>/kaniko-cache
- --log-timestamp
securityContext:
runAsUser: 0
- name: write-digest
workingDir: $(workspaces.source.path)
image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.16.2
command: ['/ko-app/imagedigestexporter']
args:
- -images=[{"name":"$(resources.outputs.builtImage.url)","type":"image","url":"$(resources.outputs.builtImage.url)","digest":"","OutputImageDir":"$(workspaces.source.path)/$(params.pathToContext)/image-digest"}]
- -terminationMessagePath=$(params.pathToContext)/image-digested
securityContext:
runAsUser: 0
- name: digest-to-result
workingDir: $(workspaces.source.path)
image: docker.io/stedolan/jq@sha256:a61ed0bca213081b64be94c5e1b402ea58bc549f457c2682a86704dd55231e09
script: |
cat $(params.pathToContext)/image-digested | jq '.[0].value' -rj | tee /$(results.IMAGE-DIGEST.path)
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM node:14-alpine as build
ARG CACHEBUST=1
RUN apk update \
&& apk add \
build-base \
libtool \
autoconf \
automake \
pkgconfig \
nasm \
yarn \
libpng-dev libjpeg-turbo-dev giflib-dev tiff-dev \
zlib-dev \
python \
&& rm -rf /var/cache/apk/*
EXPOSE 8000 9000
RUN yarn global add gatsby-cli
WORKDIR /usr/src/app
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build && echo $CACHEBUST
CMD ["yarn", "serve"]
FROM nginx:1.19.5 as serve
EXPOSE 80
COPY --from=build /usr/src/app/public /usr/share/nginx/html
</code></pre>
| Espen Finnesand | <blockquote>
<p>how can I cache the .cache / public folders created by Gatsby build? These folders are not present in the repo.</p>
</blockquote>
<p>If Persistent Volumes are available on your cluster and these volumes are accessible from all nodes, you can use a PVC-backed workspace for the cache.</p>
<p>A more generic solution that also works in a regional cluster (e.g. in the cloud) is to upload the cached folder to something like a bucket (<a href="https://min.io/" rel="nofollow noreferrer">Minio</a>?) or potentially Redis. You then also need a Task that downloads this folder - potentially in parallel with <code>git clone</code> when starting a new <code>PipelineRun</code>. GitHub Actions has a similar solution with the <a href="https://github.com/actions/cache" rel="nofollow noreferrer">cache action</a>.</p>
<p>Example of a Task with two workspaces that copy a file from one workspace to the other:</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: copy-between-workspaces
spec:
workspaces:
- name: ws-a
- name: ws-b
steps:
- name: copy
image: ubuntu
script: cp $(workspaces.ws-a.path)/myfile $(workspaces.ws-b.path)/myfile
</code></pre>
| Jonas |
<p>How can I mount a hostPath into each pod in a statefulset when I don't know the names of the nodes in advance (so can't pre-create a PV on each node)?</p>
<p>I want to set up an elasticsearch cluster on a number of nodes, mounting each elasticsearch data directory onto the SSD of the host node...</p>
<p>How can I accomplish this with a statefulset?</p>
| GDev | <p>Instead of a <em>HostPath Volume</em>, you should use a <a href="https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/" rel="nofollow noreferrer">Local Persistent Volume</a> for this kind of use case.</p>
<blockquote>
<p>The biggest difference is that the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. With HostPath volumes, a pod referencing a HostPath volume may be moved by the scheduler to a different node resulting in data loss. But with Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node.</p>
</blockquote>
<p>Consider using the <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner" rel="nofollow noreferrer">local static provisioner</a> for this; it has instructions for <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/getting-started.md#option-3-baremetal-environments" rel="nofollow noreferrer">Baremetal environments</a>.</p>
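<p>For reference, this is roughly what the provisioner creates for you - one PersistentVolume per disk per node, bound to that node (the storage class name, path, size and node name are examples):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # bind only when a pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-node-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-ssd
  local:
    path: /mnt/ssd/es-data                # path on the node's SSD
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1                  # example node name
</code></pre>
<p>Your Elasticsearch StatefulSet then requests these through <code>volumeClaimTemplates</code> with <code>storageClassName: local-ssd</code>, and the scheduler keeps each pod pinned to the node that holds its data.</p>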
| Jonas |
<p>Im looking for an inter-service communication solution.</p>
<p>I have 1 service and multiple pods with an incoming gRPC stream. The initial request calls out to an external resource which eventually triggers a request back to this service with a status message. This is on a separate thread and for this example ends up going to Pod B. I would like PodA to respond with this status message. I have tried to demonstraite this with the workflow below.</p>
<p><a href="https://i.stack.imgur.com/ICh2A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ICh2A.png" alt="service workflow"></a></p>
<p>The obvious solution here is to add some sort of messaging pattern, but I am looking for help in determining which is the best approach. The example below introduces a service mesh sidecar which would route external requests to a queue which Pod A would then subscribe to. If using AMQP, I would probably look to use <a href="https://www.rabbitmq.com/tutorials/amqp-concepts.html#exchange-direct" rel="nofollow noreferrer">direct exchange</a>.</p>
<p><a href="https://i.stack.imgur.com/v9BzX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v9BzX.png" alt="servicemesg-solution"></a></p>
<p>Any further information needed, please let me know.</p>
| Christo | <p>Pub-sub communication between two microservices is a good communication pattern. The book <a href="https://www.manning.com/books/cloud-native-patterns" rel="nofollow noreferrer">Cloud Native Patterns</a> describes exactly this, and the advantages of using this pattern over request/response in some cases. See chapter 12.</p>
<blockquote>
<p>12.2 Moving from request/response to event driven</p>
<p>12.3 The event log</p>
<p>12.4 Event sourcing</p>
</blockquote>
<p>This can be implemented with multiple technologies, but using <a href="https://www.confluent.io/blog/building-a-microservices-ecosystem-with-kafka-streams-and-ksql/" rel="nofollow noreferrer">Kafka with microservices</a> is a good fit. Kafka is also a distributed system, designed for cloud, similar to the principles used in Kubernetes.</p>
<p><strong>Your example</strong></p>
<p>To apply this to your example, the <em>queue</em> is the Kafka broker (or RabbitMQ?) and you need a way to post data from the <em>External Resource</em> to the broker. If the <em>External Resource</em> always replies to the pod that the request came from, a <em>sidecar</em> may be a solution. If the "reply address" can be configured, this could be an independent "adapter service", e.g. <a href="https://docs.confluent.io/current/kafka-rest/index.html" rel="nofollow noreferrer">Kafka REST proxy</a> if Kafka is used as the broker. There are probably corresponding proxies for e.g. RabbitMQ.</p>
| Jonas |
<p>Is anyone aware of this issue? I have a cluster of 3 nodes and I am running pods in a StatefulSet. In total 3 pods are running in order; assume pod-0 is running on node-1, pod-1 on node-2, and pod-2 on node-3. Traffic is routed properly and responses come back immediately. When we stop one node (e.g. node-2), the response becomes intermittent and traffic is still routed to the stopped pod as well. Is there any solution/workaround for this issue?</p>
| Raja | <blockquote>
<p>when we stop one node(eg: node-2), then the response is intermittent and the traffic is <strong>routing to stopped pod as well</strong>, is there any solution/workaround for this issue.</p>
</blockquote>
<p>This seems to be a <a href="https://github.com/kubernetes/kubernetes/issues/55713" rel="nofollow noreferrer">reported issue</a>. However, Kubernetes is a distributed cloud-native system and you should design for resilience with the use of request retries.</p>
<ul>
<li><p><a href="https://medium.com/asos-techblog/improve-availability-and-resilience-of-your-micro-services-using-this-7-cloud-design-patterns-16006eaf32b1" rel="nofollow noreferrer">Improve availability and resilience of your Microservices using these seven cloud design patterns</a></p></li>
<li><p><a href="https://dzone.com/articles/libraries-for-microservices-development" rel="nofollow noreferrer">How to Make Services Resilient in a Microservices Environment
</a></p></li>
</ul>
| Jonas |
<p>I have a kubernetes ingress configured on google cloud with a managed certificate. Then I have the theia/theia-full docker image as a pod and a kubernetes service connecting the ingress and the pod.</p>
<p>The initial load of the theia page in my browser works and all plugins are started in the backend. After that every 30sec the browser issues another websocket request to wss://mytheiadomain. The theia backend logs</p>
<pre><code>root ERROR [hosted-plugin: 59] Error: connection is closed
at Object.create (/home/theia/node_modules/@theia/plugin-ext/lib/common/rpc-protocol.js:82:30)
at Object.<anonymous> (/home/theia/node_modules/@theia/plugin-ext/lib/common/rpc-protocol.js:108:56)
at Object.disposable.dispose (/home/theia/node_modules/@theia/core/lib/common/disposable.js:101:13)
at DisposableCollection.dispose (/home/theia/node_modules/@theia/core/lib/common/disposable.js:78:40)
at RPCProtocolImpl.dispose (/home/theia/node_modules/@theia/plugin-ext/lib/common/rpc-protocol.js:129:24)
at /home/theia/node_modules/@theia/plugin-ext/lib/hosted/node/plugin-host.js:142:21
at step (/home/theia/node_modules/@theia/plugin-ext/lib/hosted/node/plugin-host.js:48:23)
at Object.next (/home/theia/node_modules/@theia/plugin-ext/lib/hosted/node/plugin-host.js:29:53)
at fulfilled (/home/theia/node_modules/@theia/plugin-ext/lib/hosted/node/plugin-host.js:20:58)
at processTicksAndRejections (internal/process/task_queues.js:97:5) {
code: 'RPC_PROTOCOL_CLOSED'
}
root INFO [e894a0b2-e9cd-4f35-8167-89eb28e840d8][typefox.yang-vscode]: Disconnected.
root INFO [e894a0b2-e9cd-4f35-8167-89eb28e840d8][rebornix.ruby]: Disconnected.
root INFO [e894a0b2-e9cd-4f35-8167-89eb28e840d8][ms-python.python]: Disconnected.
...
</code></pre>
<p>and all plugins disconnect and initialize again. (sometimes I don't even get this error message and the plugins just disconnect and initialize)</p>
<p>If I cut the wifi connection of my browser this does not happen! So the browsers wss request seems to trigger the restart. The disconnect every 30sec does not happen if I run theia-full locally on plain docker.</p>
<p>This is as far as I got tracing the error after a few hours of searching. Any hint would be appreciated. I can provide more log output and my configuration files.</p>
| markop | <p>The default timeout for <a href="https://cloud.google.com/load-balancing/docs/backend-service#timeout-setting" rel="nofollow noreferrer">Google Load Balancers</a> is 30 seconds.</p>
<blockquote>
<p>For external HTTP(S) load balancers and internal HTTP(S) load balancers, if the HTTP connection is upgraded to a WebSocket, the backend service timeout defines the maximum amount of time that a WebSocket can be open, whether idle or not.</p>
</blockquote>
<p>You need to create a custom <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout" rel="nofollow noreferrer">BackendConfig</a> with the timeout that you want.</p>
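<p>A minimal sketch of such a <code>BackendConfig</code> and the annotation that attaches it to the backing Service (the names, timeout value and ports are assumptions; pick a timeout that fits your sessions):</p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: theia-backendconfig
spec:
  timeoutSec: 86400   # max lifetime of a WebSocket connection, in seconds
---
apiVersion: v1
kind: Service
metadata:
  name: theia
  annotations:
    cloud.google.com/backend-config: '{"default": "theia-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: theia
  ports:
  - port: 80
    targetPort: 3000
</code></pre>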
| Jonas |
<p>Trying to decide where to store some critical sharding configs, and I have not yet found sufficient documentation around the reliability of Kube ConfigMaps to ease my mind.</p>
<p>Say I have a single-cluster kube pod spec that injects an environment variable with the value of a configmap entry at pod start (using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data" rel="nofollow noreferrer">configMapKeyRef</a>). I have many pods running based on this spec.</p>
<ol>
<li>I <code>kubectl edit</code> the configmap entry and wait for the operation to succeed.</li>
<li>I restart the pods.</li>
</ol>
<p>Are these pods guaranteed to see the new configmap value? (Or, failing that, is there a window of time I'd need to wait before restarting the pods to ensure that they get the new value?)</p>
<p>Similarly, are all pods guaranteed to see a consistent value, assuming there are no configmap edits during the time they are all restarting?</p>
| David Grant | <p>Kubernetes is an eventually consistent system. It is <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configGeneration.md" rel="nofollow noreferrer">recommended</a> to <em>create a new ConfigMap</em> when you want to change the value.</p>
<blockquote>
<p>Changing the data held by a live configMap in a cluster is considered bad practice. Deployments have no means to know that the configMaps they refer to have changed, so such updates have no effect.</p>
</blockquote>
<p>Using <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">Declarative config management with Kustomize</a>, this is easier to do using <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#configmapgenerator" rel="nofollow noreferrer">configMapGenerator</a></p>
<blockquote>
<p>The recommended way to change a deployment's configuration is to</p>
<ol>
<li>create a new configMap with a new name,</li>
<li>patch the deployment, modifying the name value of the appropriate configMapKeyRef field.</li>
</ol>
</blockquote>
<p><strong>Deployment</strong></p>
<p>When using Kustomize for both the <code>Deployment</code> and the <code>ConfigMap</code> with the <em>configMapGenerator</em>, the name of the <code>ConfigMap</code> is generated based on its content, and references to the <code>ConfigMap</code> in the <code>Deployment</code> are updated with the generated name, so that a new <em>rolling deployment</em> is triggered.</p>
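<p>A minimal <code>kustomization.yaml</code> sketch for this workflow (the file name, ConfigMap name and keys are assumptions):</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: shard-config
  literals:
  - SHARDING_MODE=hash
  - SHARD_COUNT=16
</code></pre>
<p>Kustomize appends a content hash to the generated ConfigMap name and rewrites the references in <code>deployment.yaml</code> accordingly, so every config change results in a new ConfigMap and a new rollout.</p>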
| Jonas |
<p>I am using Azure Kubernetes and I have created Persistent Volume, Claims and Storage Class.</p>
<p>I want to deploy pods on the Persistent Volume so we can increase the volume anytime as per the requirement. Right now our Pods are deployed on the Virtual Machine's OS disk. Since we are using the default Pod deployment on the VM disk, when we run out of disk space the whole cluster has to be destroyed and created again.</p>
<p>Please let me know how can I configure Pods to deploy in Azure (Managed) Disk.</p>
<p>Thanks,
Mrugesh</p>
| Mrugesh Shah | <p>You don't have to create a Persistent Volume manually, if you want Azure Disk, this can be created dynamically for you.</p>
<p>From <a href="https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/azure-disks-dynamic-pv.md#built-in-storage-classes" rel="nofollow noreferrer">Azure Built-in storage classes</a>:</p>
<blockquote>
<p>The default storage class provisions a standard SSD Azure disk.
Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance.</p>
</blockquote>
<p>You only have to create the <code>PersistentVolumeClaim</code> with the storage class you want to use, e.g.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: azure-managed-disk
spec:
accessModes:
- ReadWriteOnce
storageClassName: default
resources:
requests:
storage: 5Gi
</code></pre>
<p>and then refer to that PVC in your Deployment or Pods.</p>
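<p>A sketch of how the claim can then be referenced from a Deployment (the names, image and mount path are assumptions):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.19
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: azure-managed-disk
</code></pre>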
| Jonas |
<p>I have a pod running in a production cluster. The pod is for debugging purpose and I would like to sniff the host network traffic. For security reason, I cannot deploy the pod in the host network.</p>
<p>Is it possible to sniff the host network traffic from a non hostnetwork pod in kubernetes?</p>
| Kintarō | <p>A Pod only receives traffic that is addressed to the Pod.</p>
<p>A <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni" rel="nofollow noreferrer">CNI plugin</a> is the component that you would be interested in, since that is a way to plug in and intercept the traffic.</p>
| Jonas |
<p>We are trying to choose schema for allocation microservices in multi tenant application. We want to use kubernates and see two cases:</p>
<p>First case: </p>
<p>+ Looks like a more productive scheme <br>
+ Easy to administer <br>
- Difficult to implement<br></p>
<p><a href="https://i.stack.imgur.com/hiYCQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hiYCQ.png" alt="enter image description here"></a></p>
<p>Second case:</p>
<p>+ More encapsulated <br>
- Looks like a less productive scheme<br></p>
<p><a href="https://i.stack.imgur.com/F2dqR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F2dqR.png" alt="enter image description here"></a></p>
| Vladimir | <p>Use the <em>second case</em> with a separate namespace per tenant.</p>
<p><strong>Different configurations</strong></p>
<p>You have designed a solution with a separate database for each tenant. You can run the same <em>container image</em> for the tenants, but they should use <strong>different configurations</strong>, e.g. they have different addresses to the database. See <a href="https://12factor.net/config" rel="nofollow noreferrer">Twelve factor - externalize configuration</a>.</p>
<blockquote>
<p>We must always create a new service's container for each tenant. Although if load is low we could use one general container for all tenants</p>
</blockquote>
<p>You can easily create the same service for each tenant using Kubernetes <strong>declarative</strong> Deployment manifests. You can also assign only the resources that are needed for each tenant, e.g. variations in the number of replicas or different CPU or memory resources.</p>
<p><strong>Route error information to a central service</strong></p>
<blockquote>
<p>We have single entry point for detect errors</p>
</blockquote>
<p>You should always route observability information, e.g. logs, metrics and events to a central service for your cluster.</p>
<p><strong>Isolate tenants</strong></p>
<p>In addition, if you have separate namespaces for tenants, you can isolate them more using <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policies</a></p>
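<p>For illustration, a sketch of a policy that only allows ingress traffic from Pods within the same tenant namespace (the namespace name is an assumption):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: tenant-a
spec:
  podSelector: {}        # applies to all Pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # only from Pods in this same namespace
</code></pre>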
| Jonas |
<p>I have one application which serves REST requests and also listens on a Kafka topic.
I deployed the application to Kubernetes and configured the readiness probe like this</p>
<pre><code>readinessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<p>basically following the instruction from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">[configure-liveness-readiness-startup-probes]</a></p>
<p>After deployment is done, I can see the pod readiness probe fails</p>
<pre><code>Readiness probe failed: cat: can't open '/tmp/healthy': No such file or directory
</code></pre>
<p>That is expected. Then I sent a Kafka message to the topic. I observed that</p>
<p>1) the Kafka message was consumed by my application and saved to the database.<br>
2) the REST API can't be accessed.</p>
<p>I assumed that if the pod's readiness probe fails, the application can neither receive Kafka messages nor REST requests. But in my test, the REST request and the Kafka message are handled differently - why?</p>
<p>According to the Kubernete documentation:</p>
<pre><code>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic
</code></pre>
<p>But it doesn't say clearly what kind of traffic it really means.
Does Kubernetes only restrict HTTP traffic to the pod if the readiness probe fails, but not TCP traffic (as Kafka works over TCP)?</p>
<p>My actual intention is to make my service application (a Kafka consumer) able to control when to receive Kafka messages (and REST requests as well). E.g. if there is a heavy operation, my service will delete the /tmp/healthy file and thus make the pod not ready for receiving Kafka messages and REST requests. When the heavy operation is finished, the app writes the healthy file to make the pod ready for receiving messages again.</p>
<p>Some more information: in my test, the Kubernetes version is v1.14.3 and the Kafka broker is running in a separate VM outside of Kubernetes.</p>
| Shenghua Liu | <p>This is two very different things:</p>
<ul>
<li><strong>Receiving requests</strong>: An <em>external service</em> is sending a request and expects a response.</li>
<li><strong>Sending requests</strong>: Your service is sending a request and waiting for a response.</li>
</ul>
<h2>ReadinessProbe</h2>
<p>When a ReadinessProbe fails, <strong>no new requests will be routed to the pod</strong>.</p>
<h2>Kafka consumer</h2>
<p>If your pod is a <em>Kafka consumer</em>, then your <strong>pod is initiating requests</strong> to Kafka, to retrieve messages from the <em>topic</em>.</p>
<p><strong>Check for required directory</strong></p>
<blockquote>
<p>can't open '/tmp/healthy': No such file or directory</p>
</blockquote>
<p>If the file <code>/tmp/healthy</code> is needed for your service to work correctly, your service should check for it on startup and <code>exit(1)</code> (crash with an error message) if it isn't available. This should be done before connecting to Kafka. If your application uses the file continually, e.g. writing to it, any operation's <strong>error codes should be checked and handled properly</strong> - log or crash depending on your situation.</p>
<h2>Consuming Kafka messages</h2>
<blockquote>
<p>My actual intention is to make my service application (a Kafka consumer) able to control when to receive Kafka messages (and REST requests as well). E.g. if there is a heavy operation, my service will delete the /tmp/healthy file and thus make the pod not ready for receiving Kafka messages and REST requests.</p>
</blockquote>
<p>Kafka consumers <strong>poll</strong> Kafka for more data whenever the consumer wants. In other words, the Kafka consumer <em>asks</em> for more data whenever it is ready for more data.</p>
<p>Example consumer code:</p>
<pre><code> while (true) {
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records) {
// process your records
}
}
</code></pre>
<p>Remember to <code>commit</code> the records that you have <em>processed</em> so that the messages aren't processed multiple times e.g. after a crash.</p>
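<p>For example, with auto-commit disabled (<code>enable.auto.commit=false</code>), a manual commit after processing could look like this sketch:</p>
<pre><code> while (true) {
     ConsumerRecords<String, String> records = consumer.poll(100);
     for (ConsumerRecord<String, String> record : records) {
         // process your records
     }
     // mark everything returned by this poll as processed
     consumer.commitSync();
 }
</code></pre>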
| Jonas |
<p>Can the configuration for a program running in a container/pod be placed in the Deployment yaml instead of a ConfigMap yaml - like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
  template:
    spec:
      containers:
      - env:
        - name: "MyConfigKey"
          value: "MyConfigValue"
</code></pre>
| Evgeny Benediktov | <h2>Single environment</h2>
<p>Putting values in environment variables in the <code>Deployment</code> works.</p>
<p><strong>Problem:</strong> You should not work directly against the production environment, so you will need at least one more environment.</p>
<p>Using docker, containers and Kubernetes makes it very easy to create more than one environment.</p>
<h2>Multiple environements</h2>
<p>When you want to use more than one environment, you want to keep the differences as small as possible. This is important to detect problems quickly and to limit the management needed.</p>
<p><strong>Problem:</strong> Maintaining the differences between environments and also avoiding unique problems (config drift / <a href="https://martinfowler.com/bliki/SnowflakeServer.html" rel="nofollow noreferrer">snowflake servers</a>).</p>
<p>Therefore, keep as much as possible common for the environments, e.g. use the same <code>Deployment</code>.</p>
<p>Only use unique instances of <code>ConfigMap</code>, <code>Secret</code> and probably <code>Ingress</code> for each app and environment.</p>
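<p>A sketch of keeping the Deployment identical across environments while loading the per-environment values from a ConfigMap (the names and values are assumptions):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config        # same name in every environment, different values per environment
data:
  MyConfigKey: "MyConfigValue"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
        envFrom:
        - configMapRef:
            name: app-config
</code></pre>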
| Jonas |
<p>I have deployed a mosquitto image in a pod in kubernetes with this dockerfile:</p>
<pre><code>FROM eclipse-mosquitto:1.6.7
</code></pre>
<p>I downloaded the image an added it to my cluster, using this yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mosquitto-demo
namespace: default
spec:
replicas: 1
selector:
matchLabels:
bb: web
template:
metadata:
labels:
bb: web
spec:
containers:
- name: bb-site
image: mosquittotest:1.0
---
apiVersion: v1
kind: Service
metadata:
name: mosquitto-entrypoint
namespace: default
spec:
type: NodePort
selector:
bb: web
ports:
- port: 8080
targetPort: 8080
nodePort: 30001
</code></pre>
<p>It is running correctly.</p>
<p>My question is: How can I know which IP is the one I should use to sub/pub, and which port?<br />
Do I just have to use the IP of the entrypoint service with port 8080?</p>
<p>I'm at a loss here.</p>
| Manu Ruiz Ruiz | <p>Do you get an IP-address on the Service?</p>
<h2>Using ClusterIP</h2>
<p>To have a cluster-internal IP, you should set <code>type=ClusterIP</code> on your service:</p>
<pre><code>spec:
type: ClusterIP
</code></pre>
<p>Your clients route their requests to a DNS name for the Service, depending on how your namespaces are set up. See <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a></p>
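<p>For example, a client Pod could subscribe with something like the following sketch (it assumes the Service port maps to the broker's MQTT listener and that the Service lives in the <code>default</code> namespace):</p>
<pre><code>mosquitto_sub -h mosquitto-entrypoint.default.svc.cluster.local -p 8080 -t 'test/topic'
</code></pre>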
<h2>Using NodePort</h2>
<p>If you want to continue using type=NodePort, you can send requests to the IP of any Node, but with the specific NodePort number.</p>
| Jonas |
<p>I have too many LoadBalancer services consuming too many external IPs and I'd like to switch to using an Ingress controller.</p>
<p>I did the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">tutorial</a> and everything worked fine with the google provided pods.</p>
<p>However, with my pod I am able to hit the NodePort service ...</p>
<pre><code> 😈 >curl http://35.223.89.81:32607/healthz
OK 😈 >
</code></pre>
<p>... but calls to the Ingress Controller are consistently failing ...</p>
<pre><code> 😈 >curl http://35.241.21.71:80/healthz
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 404 (Not Found)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>404.</b> <ins>That’s an error.</ins>
<p>The requested URL <code>/healthz</code> was not found on this server. <ins>That’s all we know.</ins>
</code></pre>
<p>This is the version of k8s I am using:</p>
<pre><code> 😈 >gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
monza-predictors us-central1-a 1.13.11-gke.14 35.193.247.210 n1-standard-1 1.13.11-gke.9 * 2 RUNNING
</code></pre>
<p><strong>YAML for the ingress</strong></p>
<pre><code> 😈 >cat fanout-ingress-v2.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: fanout-ingress
spec:
rules:
- http:
paths:
- path: /healthz
backend:
serviceName: predictor-classification-seatbelt-driver-service-node-port
servicePort: 4444
- path: /seatbelt-driver
backend:
serviceName: predictor-classification-seatbelt-driver-service-node-port
servicePort: 4444
</code></pre>
<p><strong>describe of the ingress</strong></p>
<pre><code> 😈 >kubectl describe ing fanout-ingress
Name: fanout-ingress
Namespace: default
Address: 35.241.21.71
Default backend: default-http-backend:80 (10.40.2.10:8080)
Rules:
Host Path Backends
---- ---- --------
*
/healthz predictor-classification-seatbelt-driver-service-node-port:4444 (<none>)
/seatbelt-driver predictor-classification-seatbelt-driver-service-node-port:4444 (<none>)
Annotations:
ingress.kubernetes.io/url-map: k8s-um-default-fanout-ingress--62f4c45447b62142
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"fanout-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"predictor-classification-seatbelt-driver-service-node-port","servicePort":4444},"path":"/healthz"},{"backend":{"serviceName":"predictor-classification-seatbelt-driver-service-node-port","servicePort":4444},"path":"/seatbelt-driver"}]}}]}}
ingress.kubernetes.io/backends: {"k8s-be-31413--62f4c45447b62142":"HEALTHY","k8s-be-32607--62f4c45447b62142":"UNHEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-fanout-ingress--62f4c45447b62142
ingress.kubernetes.io/target-proxy: k8s-tp-default-fanout-ingress--62f4c45447b62142
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 21m loadbalancer-controller default/fanout-ingress
Normal CREATE 19m loadbalancer-controller ip: 35.241.21.71
</code></pre>
<p>I noticed that 1 of the 2 backends are UNHEALTHY.</p>
<p><strong>YAML for the NodePort service:</strong></p>
<pre><code> 😈 >cat service-node-port-classification-predictor.yaml
apiVersion: v1
kind: Service
metadata:
name: predictor-classification-seatbelt-driver-service-node-port
namespace: default
spec:
ports:
- port: 4444
protocol: TCP
targetPort: 4444
selector:
app: predictor-classification-seatbelt-driver
type: NodePort
</code></pre>
<p><strong>describe of the NodePort service</strong></p>
<pre><code> 😈 >kubectl describe svc predictor-classification-seatbelt-driver-service-node-port
Name: predictor-classification-seatbelt-driver-service-node-port
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"predictor-classification-seatbelt-driver-service-node-port","name...
Selector: app=predictor-classification-seatbelt-driver
Type: NodePort
IP: 10.43.243.69
Port: <unset> 4444/TCP
TargetPort: 4444/TCP
NodePort: <unset> 32607/TCP
Endpoints: 10.40.2.16:4444
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p><strong>YAML for the deployment</strong></p>
<pre><code> 😈 >cat deployment-classification-predictor-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: predictor-classification-seatbelt-driver
labels:
app: predictor-classification-seatbelt-driver
spec:
replicas: 1
selector:
matchLabels:
app: predictor-classification-seatbelt-driver
template:
metadata:
labels:
app: predictor-classification-seatbelt-driver
spec:
containers:
- name: predictor-classification-seatbelt-driver
image: gcr.io/annotator-1286/classification-predictor
command: ["/app/server.sh"]
args: ["4444", "https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/mobile.pb", "https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/labels.csv"]
ports:
- containerPort: 4444
livenessProbe:
httpGet:
path: /healthz
port: 4444
initialDelaySeconds: 120
</code></pre>
<p><strong>describe of the deployment</strong></p>
<pre><code> 😈 >kubectl describe deploy predictor-classification-seatbelt-driver
Name: predictor-classification-seatbelt-driver
Namespace: default
CreationTimestamp: Mon, 18 Nov 2019 12:17:13 -0800
Labels: app=predictor-classification-seatbelt-driver
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"predictor-classification-seatbelt-driver"},"name...
Selector: app=predictor-classification-seatbelt-driver
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=predictor-classification-seatbelt-driver
Containers:
predictor-classification-seatbelt-driver:
Image: gcr.io/annotator-1286/classification-predictor
Port: 4444/TCP
Host Port: 0/TCP
Command:
/app/server.sh
Args:
4444
https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/mobile.pb
https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/labels.csv
Liveness: http-get http://:4444/healthz delay=120s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: predictor-classification-seatbelt-driver-85bc679444 (1/1 replicas created)
Events: <none>
</code></pre>
<p><strong>describe of the pod</strong></p>
<pre><code> 😈 >kubectl describe po predictor-classification-seatbelt-driver-85bc679444-lcb7v
Name: predictor-classification-seatbelt-driver-85bc679444-lcb7v
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-monza-predictors-default-pool-268f57e3-1bs6/10.128.0.65
Start Time: Mon, 18 Nov 2019 12:17:13 -0800
Labels: app=predictor-classification-seatbelt-driver
pod-template-hash=85bc679444
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container predictor-classification-seatbelt-driver
Status: Running
IP: 10.40.2.16
Controlled By: ReplicaSet/predictor-classification-seatbelt-driver-85bc679444
Containers:
predictor-classification-seatbelt-driver:
Container ID: docker://90ce1466b852760db92bc66698295a2ae2963f19d26111e5be03d588dc83a712
Image: gcr.io/annotator-1286/classification-predictor
Image ID: docker-pullable://gcr.io/annotator-1286/classification-predictor@sha256:63690593d710182110e51fbd620d6944241c36dd79bce7b08b2823677ec7b929
Port: 4444/TCP
Host Port: 0/TCP
Command:
/app/server.sh
Args:
4444
https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/mobile.pb
https://storage.googleapis.com/com-aosvapps-runs/38/1564677191/models/labels.csv
State: Running
Started: Mon, 18 Nov 2019 12:17:15 -0800
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get http://:4444/healthz delay=120s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8q95m (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-8q95m:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8q95m
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p><strong>UPDATE: Using a <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Single Service Ingress</a> did not fix the problem</strong></p>
<pre><code> 😈 >cat fanout-ingress-v3.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: fanout-ingress
spec:
backend:
serviceName: predictor-classification-seatbelt-driver-service-node-port
servicePort: 4444
😈 >kubectl apply -f fanout-ingress-v3.yaml
ingress.extensions/fanout-ingress created
😈 >kubectl describe ing fanout-ingress
Name: fanout-ingress
Namespace: default
Address: 35.244.250.224
Default backend: predictor-classification-seatbelt-driver-service-node-port:4444 (10.40.2.16:4444)
Rules:
Host Path Backends
---- ---- --------
* * predictor-classification-seatbelt-driver-service-node-port:4444 (10.40.2.16:4444)
Annotations:
ingress.kubernetes.io/url-map: k8s-um-default-fanout-ingress--62f4c45447b62142
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"fanout-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"predictor-classification-seatbelt-driver-service-node-port","servicePort":4444}}}
ingress.kubernetes.io/backends: {"k8s-be-32607--62f4c45447b62142":"Unknown"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-fanout-ingress--62f4c45447b62142
ingress.kubernetes.io/target-proxy: k8s-tp-default-fanout-ingress--62f4c45447b62142
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 3m31s loadbalancer-controller default/fanout-ingress
Normal CREATE 2m56s loadbalancer-controller ip: 35.244.250.224
😈 >curl 35.244.250.224/healthz
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 404 (Not Found)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>404.</b> <ins>That’s an error.</ins>
<p>The requested URL <code>/healthz</code> was not found on this server. <ins>That’s all we know.</ins>
</code></pre>
| Alex Ryan | <p>Add a <strong>readinessProbe</strong> to your <code>Deployment</code> object.</p>
<pre><code> readinessProbe:
httpGet:
path: /healthz
port: 4444
initialDelaySeconds: 120
</code></pre>
<p>An Ingress controller may wait to route traffic to the Service until the Pods behind the Service become <em>ready</em> to handle requests from the Ingress proxy.</p>
| Jonas |
<p>I'm using <code>react-router-dom</code> to capture parameters from a url on my website, however everytime I try reaching my ending point at <code>www.mywebsite.com/video/id</code> I get a 404 response from my nginx ingress. I've configured it to point the incoming request to my frontend deployment appropriately but otherwise I don't know why the configuration isn't working properly:</p>
<p>My Ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "letsencrypt-prod"
kubecnginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
tls:
- hosts:
- website.app
- www.website.app
secretName: website-app
rules:
- host: website.app
http:
paths:
- path: /
backend:
serviceName: frontend-cluster-ip-service
servicePort: 3000
- path: /video/*
backend:
serviceName: frontend-cluster-ip-service
servicePort: 3000
- path: /api/
backend:
serviceName: backend-cluster-ip-service
servicePort: 5000
- path: /payments/
backend:
serviceName: backend-cluster-ip-service
servicePort: 5000
- path: /streaming/
backend:
serviceName: streaming-ip-service
servicePort: 3000
- host: www.website.app
http:
paths:
- path: /
backend:
serviceName: frontend-cluster-ip-service
servicePort: 3000
- path: /video/*
backend:
serviceName: frontend-cluster-ip-service
servicePort: 3000
- path: /api/
backend:
serviceName: backend-cluster-ip-service
servicePort: 5000
- path: /payments/
backend:
serviceName: backend-cluster-ip-service
servicePort: 5000
- path: /streaming/
backend:
serviceName: streaming-ip-service
servicePort: 3000
</code></pre>
<p>React code:</p>
<pre class="lang-js prettyprint-override"><code>function App() {
return (
<div className="App">
<RecoilRoot>
<React.Suspense fallback={<div>Loading...</div>}>
<Router>
<Switch>
<Route exact path="/" component={LandingPage}/>
<Route exact path="/video/:id" component={Video}/>
</Switch>
</Router>
</React.Suspense>
</RecoilRoot>
</div>
);
}
</code></pre>
<p>UPDATE:</p>
<p>Jonas' answer does indeed fix the problem on the k8s ingress side, however the docker container actually running the react application must have its nginx.conf updated to work with <code>react-router-dom</code>. I provide the appropriate <code>Dockerfile</code> and <code>nginx/nginx.conf</code> (top level directory of your application) files below (courtesy of <a href="https://levelup.gitconnected.com/dockerizing-a-react-application-using-nginx-and-react-router-43154cc8e58c" rel="nofollow noreferrer">source</a>):</p>
<p><strong>Dockerfile:</strong></p>
<pre><code>FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY yarn.lock ./
RUN yarn install --frozen-lockfile
RUN yarn add [email protected] -g --silent
COPY . ./
RUN yarn run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p><strong>nginx/nginx.conf:</strong></p>
<pre><code>server {
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
</code></pre>
| Adrian Coutsoftides | <p>Since you don't want to use <em>exact match</em> on <code>path:</code>, you probably want to enable <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">regex path matching</a> by adding an annotation:</p>
<pre><code> annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
</code></pre>
<p>And then change your path:</p>
<pre><code>path: /video/*
</code></pre>
<p>to</p>
<pre><code>path: /video/.*
</code></pre>
<p>or something more specific matching your <code>/id</code> pattern.</p>
| Jonas |
<p>I can only find documentation online for attaching pods to nodes based on labels.
Is there a way to attach pods to nodes based on labels and count - So only x pods with label y?</p>
<p>Our scenario is that we only want to run 3 of our API pods per node.
If a 4th API pod is created, it should be scheduled onto a different node with less than 3 API pods running currently.</p>
<p>Thanks</p>
| TomH | <p>No, you cannot schedule by <em>count</em> of a specific <em>label</em>. But you can avoid co-locating your pods on the same node.</p>
<h2>Avoid co-locating your pods on the same node</h2>
<p>You can use <code>podAntiAffinity</code> with a <code>topologyKey</code> (and <code>taints</code>) to avoid scheduling pods on the same node. See <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#never-co-located-in-the-same-node" rel="nofollow noreferrer">Never co-located in the same node</a></p>
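<p>A sketch of the relevant part of the Pod template (the label value is an assumption); note that this gives you at most <em>one</em> such Pod per node, not a configurable count like three:</p>
<pre><code>spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: api
        topologyKey: kubernetes.io/hostname
</code></pre>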
| Jonas |
<p>I have two projects:</p>
<blockquote>
<p>Project A - Contains the Source code for my Microservice application</p>
<p>Project B - Contains the Kubernetes resources for Project A using Helm</p>
</blockquote>
<p>Both the Projects reside in their own separate git repositories.</p>
<p>Project A builds using a full blown CI pipeline that build a Docker image with a tag version and pushes it into the Docker hub and then writes the version number for the Docker image into Project B via a git push from the CI server. It does so by committing a simple txt file with the Docker version that it just built.</p>
<p>So far so good! I now have Project B which contains this Docker version for the Microservice Project A and I now want to pass / inject this value into the Values.yaml so that when I package the Project B via Helm, I have the latest version.</p>
<p>Any ideas how I could get this implemented?</p>
| joesan | <blockquote>
<p>via a git push from the CI server. It does so by committing a simple txt file with the Docker version that it just built.</p>
</blockquote>
<p>What I usually do here is write the value to the correct field in the YAML directly. To work with YAML on the command line, I recommend the CLI tool <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer">yq</a>.</p>
<p>I usually use full Kubernetes <code>Deployment</code> manifest YAML files and I typically update the image field with this <code>yq</code> command:</p>
<pre><code>yq write --inplace deployment.yaml 'spec.template.spec.containers(name==myapp).image' <my-registry>/<my-image-repo>/<image-name>:<tag-name>
</code></pre>
<p>and after that commit the YAML file to the repo with the YAML manifests.</p>
<p>Now, you use Helm, but it is still YAML, so you should be able to solve this in a similar way. Maybe something like:</p>
<pre><code>yq write --inplace values.yaml 'app.image' <my-image>
</code></pre>
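<p>Assuming <code>values.yaml</code> has an <code>app.image</code> field, the chart template then references it like this sketch (the field names are assumptions):</p>
<pre><code># values.yaml
app:
  image: myregistry/myservice:1.2.3

# templates/deployment.yaml (fragment)
spec:
  template:
    spec:
      containers:
      - name: myservice
        image: {{ .Values.app.image }}
</code></pre>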
| Jonas |
<p>This is a follow-up from my last question - <a href="https://stackoverflow.com/q/64881584/5291015">How to programmatically modify a running k8s pod status conditions?</a> after which I realized, you can only patch the container spec from the deployment manifest to seamlessly let the controller apply the patch changes to the pods through the ReplicaSet it created.</p>
<p>Now my question is how to apply patch to make the Pod phase to go to <code>Succeeded</code> or <code>Failed</code>. I know for e.g. for pod phase to go to <code>Succeeded</code>, all the containers need to terminate successfully and shouldn't be restarted. My intention is to not modify the original command and arguments from the container image, but apply a patch to introduce a custom command which will override the one from the container image.</p>
<p>So I attempted to do below to run <code>exit 0</code> as below</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n foo-ns patch deployment foo-manager -p '
{
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "container1",
"command": [
"exit",
"0"
]
},
{
"name": "container2",
"command": [
"exit",
"0"
]
}
]
}
}
}
}'
</code></pre>
<p>But since my container file system layers are built <code>FROM scratch</code>, there aren't any native commands available other than the original executable that is supposed to run, i.e. even the <code>exit</code> shell built-in is not available.</p>
<p>What's the best way to do this? by patching the pod to make it transition to either of those Pod phases.</p>
| Inian | <blockquote>
<p>how to apply patch to make the Pod phase to go to Succeeded or Failed</p>
</blockquote>
<p>Pods are intended to be <strong>immutable</strong> - don't try to change them - instead, replace them with new Pods. You can create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a> directly, but mostly you want to work with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>, which replaces the current ReplicaSet for every change to the Pod template.</p>
<blockquote>
<p>Basically I'm testing one of my custom controllers can catch a pod's phase (and act on it) when it is stuck in a certain state e.g. Pending</p>
</blockquote>
<p>All Pods go through those states. For testing, you can create <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pods</a> directly, with different binaries or arguments.</p>
<p>To test the Pod phase <code>Pending</code>, you could log the <em>phase</em> in your controller when watching a Pod, or you could mock the Pod so that it is in phase Pending.</p>
<p>I don't know the <em>kubernetes-python-client</em>, but <code>client-go</code> does have <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1/fake" rel="nofollow noreferrer">fake clients</a> that can work with Pods, including <code>UpdateStatus</code>.</p>
<pre><code>func (c *FakePods) UpdateStatus(ctx context.Context, pod *corev1.Pod, opts v1.UpdateOptions) (*corev1.Pod, error)
</code></pre>
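<p>A minimal sketch in Go using the fake clientset from client-go (the Pod name and namespace are assumptions):</p>
<pre><code>package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	// Seed the fake clientset with a Pod that is stuck in Pending.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "foo-manager-0", Namespace: "foo-ns"},
		Status:     corev1.PodStatus{Phase: corev1.PodPending},
	}
	clientset := fake.NewSimpleClientset(pod)

	// Later, flip the phase to Succeeded to exercise the controller logic.
	pod.Status.Phase = corev1.PodSucceeded
	if _, err := clientset.CoreV1().Pods("foo-ns").UpdateStatus(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	got, _ := clientset.CoreV1().Pods("foo-ns").Get(context.TODO(), "foo-manager-0", metav1.GetOptions{})
	fmt.Println(got.Status.Phase) // Succeeded
}
</code></pre>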
<p>Now, looking at the Python client, it does seem to lack this feature: <a href="https://github.com/kubernetes-client/python/issues/524" rel="nofollow noreferrer">Issue #524 fake client for unit testing</a></p>
| Jonas |
<p>We are using Azure AKS v1.17.9 with auto-scaling both for pods (using HorizontalPodAutoscaler) and for nodes. Overall it works well, but we have seen outages in some cases. We have some deployments where minReplicas=1 and maxReplicas=4. Most of the time there will only be one pod running for such a deployment. In some cases where the auto-scaler has decided to scale down a node, the last remaining pod has been killed. Later a new pod is started on another node, but this means an outage.</p>
<p>I would have expected the auto-scaler to first create a new pod running on another node (bringing the number of replicas up to the allowed value of 2) and then scaling down the old pod. That would have worked without downtime. As it is it kills first and asks questions later.</p>
<p>Is there a way around this except the obvious alternative of setting minReplicas=2 (which increases the cost as all these pods are doubled, needing additional VMs)? And is this expected, or is it a bug?</p>
| ewramner | <blockquote>
<p>In some cases where the auto-scaler has decided to scale down a node, the last remaining pod has been killed. Later a new pod is started on another node, but this means an outage.</p>
</blockquote>
<p>For this reason, you should always have at least 2 replicas for <code>Deployment</code> in a production environment. And you should use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">Pod Anti-Affinity</a> so that those two pods are not scheduled to the same <a href="https://learn.microsoft.com/en-us/azure/availability-zones/az-overview#availability-zones" rel="nofollow noreferrer">Availability Zone</a>. E.g. if there is network problems in one Availability Zone, your app is still available.</p>
<p>It is common to have at least 3 replicas, one in each Availability Zone, since cloud providers typically have 3 Availability Zones in each <a href="https://learn.microsoft.com/en-us/azure/availability-zones/az-overview#regions" rel="nofollow noreferrer">Region</a> - so that you <em>can</em> serve traffic within a zone, which is typically cheaper than cross-zone traffic.</p>
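<p>A sketch of the relevant anti-affinity part of the Pod template, spreading replicas across zones (the label value is an assumption):</p>
<pre><code>spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app
        topologyKey: topology.kubernetes.io/zone
</code></pre>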
<p>You can always use fewer replicas to save cost, but it is a trade-off and you get worse availability.</p>
| Jonas |
<p>I seem to find it hard to quickly lookup something in k8s docs or api reference. Terraform docs are very clear to understand.</p>
<p>How do one find what's the child element for parent? For example, for PV Claim <code>resources</code> needs a child <code>requests</code> but its not described anywhere except in example in this <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">link</a>, but there aren't examples for everything.</p>
<p>Either I'm looking at the wrong docs or I'm not doing proper research. I wonder what's the proper way or what I should know before checking the docs.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 8Gi
</code></pre>
| user630702 | <blockquote>
<p>I seem to find it hard to quickly lookup something in k8s docs or api reference. Terraform docs are very clear to understand.</p>
</blockquote>
<p>The fastest and easiest way to quickly get documentation about Kubernetes resources is to use <code>kubectl explain <resource></code> e.g:</p>
<pre><code>kubectl explain PersistentVolumeClaim.spec.resources
</code></pre>
<p>In this case with output like:</p>
<pre><code>KIND: PersistentVolumeClaim
VERSION: v1
RESOURCE: resources <Object>
DESCRIPTION:
Resources represents the minimum resources the volume should have. More
info:
https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
ResourceRequirements describes the compute resource requirements.
FIELDS:
limits <map[string]string>
Limits describes the maximum amount of compute resources allowed. More
info:
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
requests <map[string]string>
Requests describes the minimum amount of compute resources required. If
Requests is omitted for a container, it defaults to Limits if that is
explicitly specified, otherwise to an implementation-defined value. More
info:
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
</code></pre>
<p>Then you get a good description of the available fields. But as you say, it is not very clear which fields are needed in this case.</p>
| Jonas |
<p>This question has been asked in the past, but I did not manage to find a clear answer. Is it good practice to set up RabbitMQ as a pod in a Kubernetes cluster? We have ~7 pods in our cluster, and some queuing mechanism is starting to be necessary. The first idea was to create a pod for RabbitMQ with a persistent volume and a service, and allow other pods to connect to it. I'm not sure if that solution is correct. Maybe it's a better idea to set up RabbitMQ on some remote server, as we did with the database?</p>
| Mateusz Pydych | <p>A Pod represents any container group/binary on Kubernetes. What owns/manages those pods is important: is it a ReplicaSet, a DaemonSet or a StatefulSet? ReplicaSets are supposed to be <strong>stateless</strong> and a RabbitMQ broker is <strong>stateful</strong>.</p>
<p>You can deploy stateful workloads on Kubernetes, but you should do it as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>. Here is an example: <a href="https://github.com/GoogleCloudPlatform/click-to-deploy/tree/master/k8s/rabbitmq" rel="nofollow noreferrer">RabbitMQ on Google Kubernetes Engine</a></p>
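<p>A minimal StatefulSet sketch for a single-node broker (image tag, ports and storage size are assumptions; a clustered production setup needs more configuration, see the linked example):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3.8-management
        ports:
        - containerPort: 5672   # AMQP
        - containerPort: 15672  # management UI
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
</code></pre>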
| Jonas |
<p>I would like to build a web application that processes video from users' webcams. It looks like WebRTC is ideal for this project. But, I'm having a hard time creating a peer connection between the user's machine and a pod in my Kubernetes cluster. How would you connect these two peers?</p>
<p>This question on Server Fault discusses the issue I'm running into: <a href="https://serverfault.com/questions/954715/webrtc-mcu-sfu-inside-kubernetes-port-ranges">WEBRTC MCU/SFU inside kubernetes - Port Ranges</a>. WebRTC wants a bunch of ports open so users can create peer connections with the server, but Kubernetes has ports closed by default. Here's a rephrasing of my question: How do I create
RTCPeerConnections connecting multiple users to an application hosted in a Kubernetes cluster? How should network ports be set up?</p>
<p>The closest I've come to finding a solution is <a href="https://cloud.google.com/solutions/orchestrating-gpu-accelerated-streaming-apps-using-webrtc" rel="noreferrer">Orchestrating GPU-accelerated streaming apps using WebRTC</a>, their code is available on <a href="https://github.com/GoogleCloudPlatform/selkies-vdi" rel="noreferrer">GitHub</a>. I don't fully understand their approach, I believe it depends on <a href="https://istio.io/" rel="noreferrer">Istio</a>.</p>
| Andrew | <p>The document you link to is helpful, <a href="https://cloud.google.com/solutions/orchestrating-gpu-accelerated-streaming-apps-using-webrtc" rel="noreferrer">Orchestrating GPU-accelerated streaming apps using WebRTC</a></p>
<p>What they do to allow for <code>RTCPeerConnection</code> is:</p>
<p>Use two separate Node pools (group of Nodes):</p>
<ul>
<li>Default Node pool - for most components, using <code>Ingress</code> and load balancer</li>
<li>TURN Node pool - for STUN/TURN service</li>
</ul>
<h2>STUN/TURN service</h2>
<p>The STUN/TURN service is <em>network bound</em> and deployed to dedicated nodes. It is deployed with one instance on each node in the node pool. This can be done on Kubernetes using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noreferrer">DaemonSet</a>. In addition, this service should use host networking, i.e. all nodes have their ports accessible from the Internet. Activate host networking in the Pod template of your <code>DaemonSet</code>:</p>
<pre><code>hostNetwork: true
</code></pre>
<p>They use <a href="https://github.com/coturn/coturn" rel="noreferrer">coturn</a> as STUN/TURN server.</p>
<blockquote>
<p>The STUN/TURN service is run as a DaemonSet on each node of the TURN node pool. The coTURN process needs to allocate a fixed block of ports bound to the host IP address in order to properly serve relay traffic. A single coTURN instance can serve thousands of concurrent STUN and TURN requests based on the machine configuration.</p>
</blockquote>
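<p>A DaemonSet sketch for such a deployment (the image, arguments and node selector label are assumptions):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: coturn
spec:
  selector:
    matchLabels:
      app: coturn
  template:
    metadata:
      labels:
        app: coturn
    spec:
      hostNetwork: true          # bind directly to the node's IP
      nodeSelector:
        pool: turn               # only run on the TURN node pool
      containers:
      - name: coturn
        image: coturn/coturn:4.5.2
        args: ["-n", "--log-file=stdout", "--min-port=49152", "--max-port=65535"]
</code></pre>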
<h3>Network</h3>
<p>This part of their network diagram shows that some services are served over <strong>HTTPS</strong> with an <em>ingress gateway</em>, whereas the STUN/TURN service is reached through a different connection using <strong>DTLS/RTP</strong> to the nodes exposed via the host network.</p>
<p><a href="https://i.stack.imgur.com/Gzs6u.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Gzs6u.png" alt="Network" /></a></p>
| Jonas |
<p><strong>EDIT</strong>: As mentioned in <a href="https://stackoverflow.com/users/213269/jonas">Jonas'</a> response Kubernetes REST API can be actually considered as declarative and not imperative.</p>
<p>Kubernetes is well known for its declarative model.
Controller are watching objects in ETCD which contains the desired state (declarative). It compares it to the current state and generates imperative commands to the <a href="https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/" rel="nofollow noreferrer">imperative Kubernetes API</a>.</p>
<p><strong>Which reasons leads Kubernetes project to not expose a declarative HTTP API?</strong></p>
<p>Thus let the controller/operator do the reconciliation.</p>
<p>An example of declarative REST API, I found is <a href="https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/about-as3.html" rel="nofollow noreferrer">F5 AS3</a>. And I guess their <a href="https://clouddocs.f5.com/containers/latest/userguide/kubernetes/" rel="nofollow noreferrer">Kubernetes operator</a> built on top of this declarative API is quite straightforward.</p>
| scoulomb | <p>The Kubernetes API can be used both <em>declaratively</em> and also <em>imperatively</em>. For quick development an imperative workflow might work better whereas for traceability and production workload a declarative workflow is recommended.</p>
<h2>Declarative HTTP example using curl</h2>
<p>This requires to run <code>kubectl proxy</code> first.</p>
<pre><code>curl -X POST -H 'Content-Type: application/yaml' --data '
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-example
spec:
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14
ports:
- containerPort: 80
' http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments
</code></pre>
<p>The Kubernetes API is <strong>declarative</strong> in the sense that you always specify <strong>what</strong> you want, e.g. <code>replicas: 2</code> instead of e.g. <code>create 2 replicas</code> that would be the case in an <em>imperative</em> API. The controllers then "drives" the state to "what" you specified in a reconciliation loop.</p>
<p>See:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="nofollow noreferrer">Declarative Management of Kubernetes Objects Using Configuration Files</a></li>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-config/" rel="nofollow noreferrer">Imperative Management of Kubernetes Objects Using Configuration Files</a></li>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/" rel="nofollow noreferrer">Managing Kubernetes Objects Using Imperative Commands</a></li>
</ul>
<p>From your link:</p>
<blockquote>
<p>The Application Services 3 Extension uses a declarative model, meaning you send a declaration file using a single Rest API call.</p>
</blockquote>
<p>the Kubernetes API works exactly the same when you apply yaml-manifest files using e.g. <code>kubectl apply -f deployment.yaml</code></p>
| Jonas |
<p>Is there a way to restart pods automatically after some time or when they reach some memory limit?</p>
<p>I want to achieve the same behavior as gunicorn (or any mainstream process manager) does.</p>
| kharandziuk | <h1>Memory limit</h1>
<p>If you set a memory limit on a container in the podTemplate, the container will be OOM-killed and restarted if it uses more than the specified memory.</p>
<pre><code>resources:
limits:
memory: 128Mi
</code></pre>
<p>See <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">Managing Compute Resources for Containers
</a> for documentation</p>
<h1>Time limit</h1>
<p>This can be done in many different ways: internally, by calling <code>exit(1)</code> or by ceasing to respond to a configured <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">livenessProbe</a>; or externally, e.g. by configuring a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a>.</p>
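<p>As a minimal sketch of the <code>livenessProbe</code> approach (the container name, image, port and <code>/healthz</code> path are assumptions, not taken from your setup), the kubelet restarts the container when the probe keeps failing:</p>
<pre><code>containers:
- name: my-app                # hypothetical container name
  image: my-app:latest        # placeholder image
  livenessProbe:
    httpGet:
      path: /healthz          # assumed health endpoint in your app
      port: 8080              # assumed container port
    initialDelaySeconds: 10
    periodSeconds: 15
    failureThreshold: 3       # restarted after 3 consecutive failures
</code></pre>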
| Jonas |
<p>Are there queries which will allow us to track Kubernetes events and get notified if there are any issues with pods being scheduled or killed?</p>
| Vijay | <p>You may be interested in <a href="https://github.com/heptiolabs/eventrouter" rel="nofollow noreferrer">eventrouter</a>.</p>
<p>It is a service for handling your Kubernetes Events, e.g. log them or send them to any <em>sink</em> e.g. Kafka.</p>
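<p>A minimal sketch of how the <em>sink</em> is typically configured, based on the project's README (the ConfigMap name, namespace and available sink values should be checked against the eventrouter version you deploy):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: eventrouter-cm        # name assumed from the README example
  namespace: kube-system
data:
  # "glog" just logs the events; other sinks (e.g. Kafka) are supported
  config.json: |-
    {
      "sink": "glog"
    }
</code></pre>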
| Jonas |
<p>I am trying to set <code>PersistentVolumeClaims</code> for my pods. The problem is that when the <code>deployment</code> succeeds, the pods are stuck in the <code>pending</code> state. When I try to describe the pods, I get the error explaining why they are not spinning up, as below:</p>
<blockquote>
<p>Warning FailedScheduling 20s (x3 over 22s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 pod has unbound immediate PersistentVolumeClaims.</p>
</blockquote>
<p>This is the <code>yaml</code> for creating the persistent volume and refer it in the deployments</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: my-pvc
namespace: mongo
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
namespace: mongo
labels:
name: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-password
volumeMounts:
- name: data
mountPath: /data/db
volumes:
- name: data
persistentVolumeClaim:
claimName: my-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
namespace: mongo
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: mongodb-url
key: database_url
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-username
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-password
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: my-pvc
---
.
.
.
</code></pre>
<p>I have removed the other yaml configurations from the above and kept only the necessary ones for easy reading.</p>
<p>and when I try to see the status of the pvc using <code>kubectl get pvc -n mongo</code> I get the below <code>pending</code> status</p>
<blockquote>
<p>my-pvc Pending 9m54s</p>
</blockquote>
<p>Can someone tell me where I am doing wrong?</p>
| Jananath Banuka | <p>As described in the <a href="https://stackoverflow.com/a/52669115/213269">answer to pod has unbound PersistentVolumeClaims</a>, if you use a <code>PersistentVolumeClaim</code> you typically need a <em>volume provisioner</em> for Dynamic Volume Provisioning. The bigger cloud providers typically have this, and Minikube also has one that can be enabled.</p>
<p>Unless you have a <em>volume provisioner</em> in your cluster, you need to create a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolume</a> resource and possibly also a StorageClass and declare how to use your storage system.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a> describes how to create a <code>PersistentVolume</code> with a <code>hostPath</code> that may be good for learning or development, but is typically not used in production by applications.</p>
| Jonas |
<p>I have the following situation:</p>
<p>I have a couple of microservices; only 2 are relevant right now:</p>
<ul>
<li>Web Socket Service API</li>
<li>Dispatcher Service</li>
</ul>
<p>We have 3 users that we'll call respectively 1, 2, and 3. These users connect themselves to the web socket endpoint of our backend. Our microservices are running on Kubernetes and each services can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher, and 3 running containers for the web socket api. Each pod has its Load Balancer and this will be each time the entry point.</p>
<p>In our situation, we will then have the following "schema":</p>
<p><a href="https://i.stack.imgur.com/jOCnr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jOCnr.png" alt="enter image description here"></a></p>
<hr>
<p>Now that we have a representation of our system (and a legend), our 3 users will want to use the app and connect.</p>
<p><a href="https://i.stack.imgur.com/FEDw9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FEDw9.png" alt="enter image description here"></a></p>
<p>As we can see, the load balancer of our pod forwarded the web socket connection of our users across the different containers. Each container, once it gets a new connection, will let to know the Dispatcher Service, and this one will save it in its own database.</p>
<p>Now, 3 users are connected to 2 different containers and the Dispatcher service knows it.</p>
<hr>
<p>The user 1 wants to message user 2. The container A will then get a message and tell the Dispatcher Service: <code>Please, send this to the user 2</code>.</p>
<p>As the dispatcher knows to which container the user 2 is connected, I would like to send a request directly to my Container instead of sending it to the Pod. Sending it to the Pod is resulting in sending a request to a load balancer which actually dispatches the request to the most available container instance...</p>
<p><a href="https://i.stack.imgur.com/6tEzk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6tEzk.png" alt="enter image description here"></a></p>
<p>How could I manage to get the container IP? Can it be accessed by another container from another Pod?</p>
<p><strong>To me, the best approach would be that, once the app start, it gets the current container's IP and then send it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP</strong></p>
<p>Thanks!</p>
<h1>edit 1</h1>
<p>There is my <code>web-socket-service-api.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web-socket-service-api
spec:
ports:
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 8080
targetPort: 8080
protocol: TCP
name: grpc
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 8081
targetPort: 8081
protocol: TCP
name: rest
# Port that accepts WebSockets.
- port: 8082
targetPort: 8082
protocol: TCP
name: websocket
selector:
app: web-socket-service-api
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-socket-service-api
spec:
replicas: 3
template:
metadata:
labels:
app: web-socket-service-api
spec:
containers:
- name: web-socket-service-api
image: gcr.io/[PROJECT]/web-socket-service-api:latest
ports:
- containerPort: 8080
- containerPort: 8081
- containerPort: 8082
</code></pre>
| Emixam23 | <h2>Dispatcher ≈ Message broker</h2>
<p>As I understand your design, your <em>Dispatcher</em> is essentially a message broker for the pods of your <em>Websocket Service</em>. Let all Websocket pods connect to the broker and let the broker route messages. This is a <em>stateful</em> service and you should use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> for this in Kubernetes. Depending on your requirements, a possible solution could be to use an MQTT broker for this, e.g. <a href="https://mosquitto.org/" rel="nofollow noreferrer">mosquitto</a>. Most MQTT brokers have support for websockets.</p>
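<p>A minimal sketch of what such a broker could look like as a StatefulSet with a headless Service (the <code>eclipse-mosquitto</code> image and the ports 1883/9001 are assumptions; a real setup also needs the broker's own configuration and persistent storage):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: dispatcher            # headless Service for a stable network identity
spec:
  clusterIP: None
  selector:
    app: dispatcher
  ports:
  - name: mqtt
    port: 1883                # assumed MQTT port
  - name: websocket
    port: 9001                # assumed websocket listener port
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dispatcher
spec:
  serviceName: dispatcher
  replicas: 1
  selector:
    matchLabels:
      app: dispatcher
  template:
    metadata:
      labels:
        app: dispatcher
    spec:
      containers:
      - name: broker
        image: eclipse-mosquitto:2    # assumed broker image
        ports:
        - containerPort: 1883
        - containerPort: 9001
</code></pre>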
<h2>Scale out: Multiple replicas of pods</h2>
<blockquote>
<p>each services can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher, and 3 running containers for the web socket api.</p>
</blockquote>
<p>This is not how Kubernetes is intended to be used. Use multiple <strong>replicas of pods</strong> instead of multiple containers in the pod. I recommend that you create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> for your <em>Websocket Service</em> with as many replicas as you want.</p>
<h2>Service as Load balancer</h2>
<blockquote>
<p>Each pod has its Load Balancer and this will be each time the entry point.</p>
</blockquote>
<p>In Kubernetes you should create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> that load balance traffic to a set of pods.</p>
<p><strong>Your solution</strong></p>
<blockquote>
<p>To me, the best approach would be that, once the app start, it gets the current container's IP and then send it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP</p>
</blockquote>
<p>Yes, I mostly agree. That is similar to what I have described here. But I would let the <em>Websocket Service</em> establish a connection to the <em>Broker/Dispatcher</em>.</p>
| Jonas |
<p>How do I get the client IP from the nginx ingress load balancer? I've tried setting <code>use-proxy-protocol</code> and <code>externalTrafficPolicy</code> but it still doesn't show the client IP.</p>
<p>Apache logs <code>10.0.0.225</code> for each HTTP request. I'm not sure what IP that is; it doesn't seem to be the pod's IP or the node IP.</p>
<p>httpd service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: httpd
labels:
app: httpd-service
namespace: test-web-dev
spec:
type: NodePort
selector:
app: httpd
ports:
- name: port-80
port: 80
protocol: TCP
targetPort: 80
- name: port-443
port: 443
protocol: TCP
targetPort: 443
sessionAffinity: "ClientIP"
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
externalTrafficPolicy: Local
</code></pre>
<p>ingress-lb:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
data:
use-proxy-protocol: 'true'
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
</code></pre>
<p>In Apache I've configured the following logging settings:</p>
<pre><code>LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxy
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
CustomLog "logs/ssl_access_log" combined env=!forwarded
CustomLog "logs/ssl_access_log" proxy env=forwarded
</code></pre>
| user630702 | <p>You should get the origin IP in the <code>X-Forwarded-For</code> header; this is the default config for nginx-ingress: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header" rel="nofollow noreferrer">forwarded-for-header</a></p>
<p>This is configured in a <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-configuration/" rel="nofollow noreferrer">ConfigMap</a></p>
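<p>Note that in your manifest the <code>use-proxy-protocol</code> key is placed in the <code>tcp-services</code> ConfigMap; the controller reads such settings from its main configuration ConfigMap instead. A minimal sketch, assuming the controller was installed to read a ConfigMap named <code>nginx-configuration</code> in the <code>ingress-nginx</code> namespace (the name depends on how the controller was deployed, check its <code>--configmap</code> flag):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  # only enable this if the load balancer in front actually speaks proxy protocol
  use-proxy-protocol: "false"
</code></pre>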
| Jonas |
<p>I have two different types of worker nodes, ones that do data preparation and nodes that do machine learning.<br />
I want to run a Cronjob that runs one process on a preparation node, then (only when finished) a second process on an ML node.<br />
How can I do this in Kubernetes?</p>
| GDev | <blockquote>
<p>I want to run a Cronjob that runs one process on a preparation node, then (only when finished) a second process on an ML node.</p>
</blockquote>
<p>A CronJob runs only a single Pod template, so it cannot split work across two different kinds of nodes.</p>
<p>What you want to do here is a <em>Workflow</em> or <em>Pipeline</em> consisting of two pods, executed on different nodes.</p>
<p>This can be done with e.g. <a href="https://argoproj.github.io/argo/" rel="nofollow noreferrer">Argo Workflow</a> or <a href="https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/" rel="nofollow noreferrer">Kubeflow Pipelines</a> or maybe <a href="https://github.com/tektoncd/pipeline" rel="nofollow noreferrer">Tekton Pipeline</a>.</p>
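<p>A minimal sketch of such a two-step pipeline as an Argo Workflow (the node labels, images and commands are placeholders; if you still need cron-style scheduling, Argo's <code>CronWorkflow</code> wraps the same spec with a schedule):</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: prep-then-train-
spec:
  entrypoint: pipeline
  templates:
  - name: pipeline
    steps:                            # steps run sequentially, one after the other
    - - name: prepare
        template: prepare
    - - name: train
        template: train
  - name: prepare
    nodeSelector:
      workload-type: preparation      # assumed label on the data-preparation nodes
    container:
      image: my-prep-image:latest     # placeholder image
      command: ["python", "prepare.py"]
  - name: train
    nodeSelector:
      workload-type: ml               # assumed label on the ML nodes
    container:
      image: my-ml-image:latest       # placeholder image
      command: ["python", "train.py"]
</code></pre>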
| Jonas |
<p>I'm confused about the multi-container Pod design patterns<br>
(sidecar, adapter, ambassador) </p>
<p>What I understand is :<br>
<strong>Sidecar</strong> : container + container(share same resource and do other functions)<br>
<strong>Adapter</strong> : container + adapter(for checking other container's status. <em>e.g. monitoring</em>)<br>
<strong>Ambassador</strong> : container + proxy(to networking outside) </p>
<blockquote>
<p>But, According to <a href="https://istio.io/docs/setup/additional-setup/sidecar-injection/#injection" rel="noreferrer">Istio -Installing the Sidecar</a>, They introduce proxy as a sidecar pattern. </p>
</blockquote>
<p>An adapter is a container, and a proxy is a container too. </p>
<p>So, My question is <strong>What is differences between Sidecar pattern and Adapter&Ambassador pattern?</strong> </p>
<p>Is the Sidecar pattern concept contain Adapter&Ambassador pattern?</p>
| GRu. L | <p>First, you are right, the term <strong>sidecar</strong> container has now became a word for describing <em>an extra container</em> in your pod. <a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/" rel="noreferrer">Originally(?)</a> it was a specific multi-container design pattern.</p>
<h2>Multi-container design patterns</h2>
<p><strong>Sidecar pattern</strong></p>
<p>An extra container in your pod to <strong>enhance</strong> or <strong>extend</strong> the functionality of the main container.</p>
<p><strong>Ambassador pattern</strong></p>
<p>A container that <strong>proxies the network connection</strong> to the main container.</p>
<p><strong>Adapter pattern</strong></p>
<p>A container that <strong>transforms the output</strong> of the main container.</p>
<p>This is taken from the original article from 2015: <a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/" rel="noreferrer">Patterns for Composite Containers</a></p>
<h2>Summary</h2>
<p>Your note on</p>
<blockquote>
<p>But, According to Istio -Installing the Sidecar, They introduce proxy as a sidecar pattern.</p>
</blockquote>
<p>In the patterns above, both <em>Ambassador</em> and <em>Adapter</em> must in fact <strong>proxy</strong> the network connection, but they do it with different purposes. With Istio, this is done e.g. to terminate <strong>mTLS</strong> connections, collect metrics and more to <strong>enhance</strong> your main container. So it actually is a <strong>sidecar pattern</strong>, but confusingly, as you correctly pointed out, all patterns <em>proxy the connection</em> - just for different purposes.</p>
| Jonas |
<p>I'm getting a</p>
<pre><code>Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
GET https://sqladmin.googleapis.com/v1/projects/adstudio-sandbox1/instances/us-central1~adspromosdb-sandbox/connectSettings
{
"code": 403,
"errors": [
{
"domain": "global",
"message": "The client is not authorized to make this request.",
"reason": "notAuthorized"
}
],
"message": "The client is not authorized to make this request."
}
</code></pre>
<p>I guessed a service account based on the name of my service and gave it the Cloud SQL Admin/Editor/Client and Storage Object Admin roles but I'm still getting the same error. I'm wondering if it's not using the service account that I think it is.</p>
<p>How can I confirm what service account my service is running as? (And how does it relate to Kubernetes? Does a cluster or a namespace always share 1 service account?)</p>
| Andrew Cheong | <p>A Google Cloud Service Account and a Kubernetes Service Account are two different things.</p>
<p>For connecting to a Cloud SQL database from a Kubernetes cluster, it is recommended and easiest to use the <a href="https://cloud.google.com/sql/docs/mysql/connect-auth-proxy" rel="nofollow noreferrer">Cloud SQL Auth Proxy</a> to "assist" with the authentication from the cluster to the database. It is also possible to use the Cloud SQL Auth Proxy as a local CLI tool, to connect to your Cloud SQL database in a secure way.</p>
<p>See <a href="https://cloud.google.com/sql/docs/mysql/connect-instance-kubernetes" rel="nofollow noreferrer">Connect to Cloud SQL from Google Kubernetes Engine</a> for a full guide on how to setup connection from Kubernetes to a Cloud SQL database.</p>
| Jonas |
<p>I've been working on a <a href="https://github.com/dgp1130/chatter" rel="nofollow noreferrer">small side project</a> to try and learn Kubernetes. I have a relatively simple cluster with two services, an ingress, and working on adding a Redis database now. I'm hosting this cluster in <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">Google Kubernetes Engine (GKE)</a>, but using <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="nofollow noreferrer">Minikube</a> to run the cluster locally and try everything out before I commit any changes and push them to the prod environment in GKE.</p>
<p>During this project, I have noticed that GKE seems to have some slight differences in how it wants the configuration vs what works in Minikube. I've seen this previously with ingresses and now with persistent volumes.</p>
<p>For example, to run Redis with a persistent volume in GKE, I can use:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: chatter-db-deployment
labels:
app: chatter
spec:
replicas: 1
selector:
matchLabels:
app: chatter-db-service
template:
metadata:
labels:
app: chatter-db-service
spec:
containers:
- name: master
image: redis
args: [
"--save", "3600", "1", "300", "100", "60", "10000",
"--appendonly", "yes",
]
ports:
- containerPort: 6379
volumeMounts:
- name: chatter-db-storage
mountPath: /data/
volumes:
- name: chatter-db-storage
gcePersistentDisk:
pdName: chatter-db-disk
fsType: ext4
</code></pre>
<p>The <code>gcePersistentDisk</code> section at the end refers to a disk I created using <code>gcloud compute disks create</code>. However, this simply won't work in Minikube as I can't create disks that way.</p>
<p>Instead, I need to use:</p>
<pre><code> volumes:
- name: chatter-db-storage
persistentVolumeClaim:
claimName: chatter-db-claim
</code></pre>
<p>I also need to include separate configuration for a <code>PeristentVolume</code> and a <code>PersistentVolumeClaim</code>.</p>
<p>I can easily get something working in either Minikube <strong>OR</strong> GKE, but I'm not sure what is the best means of getting a config which works for both. Ideally, I want to have a single <code>k8s.yaml</code> file which deploys this app, and <code>kubectl apply -f k8s.yaml</code> should work for both environments, allowing me to test locally with Minikube and then push to GKE when I'm satisfied.</p>
<p>I understand that there are differences between the two environments and that will probably leak into the config to some extent, but there must be an effective means of verifying a config before pushing it? What are the best practices for testing a config? My questions mainly come down to:</p>
<ol>
<li>Is it feasible to have a single Kubernetes config which can work for both GKE and Minikube?</li>
<li>If not, is it feasible to have a <strong>mostly</strong> shared Kubernetes config, which overrides the GKE and Minikube specific pieces?</li>
<li>How do existing projects solve this particular problem?</li>
<li>Is the best method to simply make a separate <code>dev</code> cluster in GKE and test on that, rather than bothering with Minikube at all?</li>
</ol>
| Douglas Parker | <p>Yes, you have found some parts of the Kubernetes configuration that were not perfect from the beginning. But there are newer solutions.</p>
<h2>Storage abstraction</h2>
<p>The idea in <em>newer</em> Kubernetes releases is that your <strong>application configuration</strong> is a <em>Deployment</em> with Volumes that refer to a <em>PersistentVolumeClaim</em> for a <em>StorageClass</em>.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class" rel="nofollow noreferrer">StorageClass</a> and <em>PersistentVolume</em>, on the other hand, belong more to the <strong>infrastructure configuration</strong>. </p>
<p>See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a> on how to configure a <em>Persistent Volume</em> for Minikube. For GKE you configure a <em>Persistent Volume</em> with <em>GCEPersistentDisk</em> or if you want to deploy your app to AWS you may use a <em>Persistent Volume</em> for <em>AWSElasticBlockStore</em>.</p>
<h2>Ingress and Service abstraction</h2>
<p><code>Service</code> with type <em>LoadBalancer</em> or <em>NodePort</em> in combination with <code>Ingress</code> does not work the same way across cloud providers and <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controllers</a>. In addition, Service Mesh implementations like <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> have introduced <code>VirtualService</code>. The plan is to improve this situation with <a href="https://www.youtube.com/watch?v=Ne9UJL6irXY" rel="nofollow noreferrer">Ingress v2</a>, as I understand it.</p>
| Jonas |
<p>I need to know a way to scale down all the deployments in a Kubernetes namespace except for one with a specific string inside the name, since it has dependencies. This runs in an AzureCLI task inside an Azure pipeline. Any ideas?</p>
<p>Something like:
If name contains "randomname" then do not scale up/down the service.</p>
<p>I did try some exceptions but it's still not working.</p>
| Gilberto Graham | <p>You can add a <em>label</em> on the one you want to exclude, and then use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#api" rel="nofollow noreferrer">label selectors</a> to apply operations only to the selected set of resources.</p>
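<p>For example (a sketch; the label key and value are arbitrary), add a label to the deployment that must not be touched:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: randomname-service    # the deployment to exclude
  labels:
    scaling-exempt: "true"    # arbitrary label used only for selection
spec:
  # ... rest of the deployment spec unchanged
</code></pre>
<p>The AzureCLI task can then scale everything that does <em>not</em> carry that label with something like <code>kubectl scale deployment -n your-namespace -l 'scaling-exempt!=true' --replicas=0</code> (the <code>!=</code> selector also matches deployments that do not have the label at all; check the exact flags against your kubectl version).</p>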
| Jonas |
<p>I need to deploy a GPU intensive task on GCP. I want to use a Node.js Docker image and within that container to run a Node.js server that listens to HTTP requests and runs a Python image processing script on-demand (every time that a new HTTP request is received containing the images to be processed). My understanding is that I need to deploy a load balancer in front of the K8s cluster that has a static public IP address which then builds/launches containers every time a new HTTP request comes in? And then destroy the container once processing is completed. Is container re-use not a concern? I never worked with K8s before and I want to understand how it works and after reading the GKE documentation this is how I imagine the architecture. What am I missing here?</p>
| Asdasdprog | <blockquote>
<p>runs a Python image processing script on-demand (every time that a new HTTP request is received containing the images to be processed)</p>
</blockquote>
<p>This <em>can</em> be solved on Kubernetes, but it is not a very common kind of workload.</p>
<p>The project that supports your use case best is <a href="https://knative.dev/" rel="nofollow noreferrer">Knative</a> with its per-request auto-scaler. <a href="https://cloud.google.com/run" rel="nofollow noreferrer">Google Cloud Run</a> is the easiest way to use this. But if you want to run this within your own GKE cluster, you can <a href="https://cloud.google.com/run/docs/gke/enabling-on-existing-clusters" rel="nofollow noreferrer">enable it</a> there.</p>
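<p>A minimal sketch of what the service could look like as a Knative Service (the name and image are placeholders, and the GPU limit assumes your cluster has GPU nodes with the NVIDIA device plugin installed):</p>
<pre><code>apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: image-processor                # hypothetical name
spec:
  template:
    spec:
      containerConcurrency: 1          # one request per container instance
      containers:
      - image: gcr.io/PROJECT/image-processor:latest   # placeholder image
        resources:
          limits:
            nvidia.com/gpu: "1"        # assumes GPU nodes + device plugin
</code></pre>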
<p>That said, you <em>can</em> also design your Node.js service to integrate with the Kubernetes API server to create <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Jobs</a> - but it is not a good design to have common workloads talk to the API server. It is better to use Knative or Google Cloud Run.</p>
| Jonas |
<p>We are building a rate limiter and calculate config values based on number of pods available at that moment. Is there any way to get the number of pods that are active in any instance from a java application? Also, is it possible to get the number of tasks in the case of AWS Fargate from a Java application?</p>
| Venkat | <p>You need to request this info from the Kubernetes API Server. You can do it from Java with <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">kubernetes-client/java</a>; see <a href="https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/InClusterClientExample.java" rel="nofollow noreferrer">InClusterClientExample</a>.</p>
<p>You are interested in the pods in the <em>Ready</em> state matching a specific <code>label</code> that represents (matches) your "application".</p>
<p>If you are only interested in the <strong>number</strong> of <em>Ready</em> pods for a specific application, you can also request the <code>Deployment</code> directly instead of listing pods by a label.</p>
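<p>Note that an in-cluster client authenticates as the pod's ServiceAccount, which needs RBAC permissions to read these resources. A minimal sketch, assuming the <code>default</code> ServiceAccount in a namespace called <code>my-namespace</code> (both placeholders):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-namespace     # assumed namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: default               # the ServiceAccount your pod runs as
  namespace: my-namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>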
| Jonas |