Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have a private Docker registry which works over HTTPS with a self-signed SSL certificate. I've installed this certificate on my local machine and it's working fine (I can push and pull).<br/>
Is it possible to configure Kubernetes to use this certificate for deployments (pulling images from the private registry)?</p>
| Kirill | <p>Kubernetes itself doesn't support this. You have to deploy the certificate to all worker nodes. You can simplify the process using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> and <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">hostPath</a> volumes.</p>
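<p>As a rough illustration, a DaemonSet like the one below could copy the certificate onto every node. This is only a hedged sketch: the registry host/port, the <code>registry-ca</code> ConfigMap and the helper image are assumptions, not part of the original setup.</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-cert-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: registry-cert-installer
  template:
    metadata:
      labels:
        app: registry-cert-installer
    spec:
      containers:
        - name: installer
          image: busybox:1.36          # hypothetical helper image
          command: ["/bin/sh", "-c"]
          args:
            # Copy the CA cert into Docker's per-registry trust directory on the
            # host, then keep the pod running so the DaemonSet stays healthy.
            - cp /certs/ca.crt /host-certs/ca.crt && while true; do sleep 3600; done
          volumeMounts:
            - name: registry-ca        # cert supplied via a ConfigMap (assumed to exist)
              mountPath: /certs
            - name: docker-certs       # hostPath into the node's Docker trust store
              mountPath: /host-certs
      volumes:
        - name: registry-ca
          configMap:
            name: registry-ca
        - name: docker-certs
          hostPath:
            # one directory per registry: /etc/docker/certs.d/<registry-host:port>
            path: /etc/docker/certs.d/registry.example.com:5000
            type: DirectoryOrCreate
</code></pre>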
| Maciek Sawicki |
<p>Considering the following scenario:</p>
<ul>
<li>I have one main <strong>heavy service</strong> and many other <strong>small microservices</strong>.</li>
<li>The <strong>small microservices</strong> are consumed only by the <strong>heavy service</strong>.</li>
<li>Only the main <strong>heavy service</strong> is exposed to the public internet.</li>
</ul>
<p>What is the best practice for deploying those services?</p>
<p>All together in the same kubernetes cluster:</p>
<ul>
<li>The main <strong>heavy service</strong> as LoadBalancer</li>
<li>The other <strong>small microservices</strong> as ClusterIP (to protect them from public internet).</li>
</ul>
<p>Is that a good approach ?</p>
| kilobaik | <p>There are some misunderstandings here.</p>
<p>The terminology <em>microservice</em> is <strong>not about size</strong> but more of an organizational thing. Ten years ago, the whole system was deployed as a <em>monolith</em>, but now it is recommended that teams should not be bigger than 5-8 people - and those teams should work at their own pace with their own <em>deployment cycle</em>. So the <em>monolith</em> has to be broken into smaller services. The services in such an architectural pattern are called <em>microservices</em> - it does not mean that they are <em>small</em> or <em>big</em>.</p>
<p>All your services should be deployed as a <code>Deployment</code> on Kubernetes, and the services should be <em>stateless</em>. So even the "main heavy service" should be stateless and possibly scaled to multiple replicas.</p>
<p>You are correct in that only services that <strong>need</strong> to be exposed to the Internet should be exposed to the Internet.</p>
<p>Whether your "heavy service" should be exposed with a <code>Service</code> of type <code>LoadBalancer</code> or <code>NodePort</code> actually depends more on what <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a> you are using. E.g. if you are using <em><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">Google Kubernetes Engine</a></em>, you should expose it as a <code>NodePort</code> type. And yes, the other applications should have a <code>Service</code> of type <code>ClusterIP</code>.</p>
<p>It is worth noting that all Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> objects will provide <em>load balancing</em> functionality to the <em>replicas</em>. The service type, e.g. <code>LoadBalancer</code>, <code>NodePort</code> or <code>ClusterIP</code> is more about <strong>how</strong> the Service will be exposed.</p>
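<p>As a minimal sketch (with hypothetical service names), exposing the public-facing service and keeping an internal microservice cluster-only could look like this:</p>
<pre><code># Public-facing "heavy" service, reachable from outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: heavy-service          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: heavy-service
  ports:
    - port: 80
      targetPort: 8080
---
# Internal microservice, only reachable inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: small-service          # hypothetical name
spec:
  type: ClusterIP              # the default; omitting "type" has the same effect
  selector:
    app: small-service
  ports:
    - port: 80
      targetPort: 8080
</code></pre>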
| Jonas |
<p>I use <strong>AKS</strong> to create our cluster with <strong>3</strong> worker nodes named <strong>node-0</strong>, <strong>node-1</strong>, <strong>node-2</strong>. They are created by <strong>Azure VMSS</strong>.</p>
<p>When I deploy a <strong>pod-new</strong> through Helm, the scheduler always schedules <strong>pod-new</strong> to <strong>node-0</strong> or <strong>node-2</strong> even though those nodes have a heavy workload.</p>
<p>Assume there is a <strong>pod-old</strong> on <strong>node-0</strong> or <strong>node-2</strong>: it results in <strong>pod-old</strong> being evicted but still not re-scheduled to <strong>node-1</strong>. I need to explicitly use <strong>nodeSelector</strong> to move it to <strong>node-1</strong>.</p>
<p>Below is my <strong>deployment.yaml</strong>. To gain better resource usage, I don't specify <code>request</code> and <code>limit</code> because my pod needs a higher CPU burst to initialize. After initialization, it only needs a few resources to serve requests.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-service
  labels:
    helm.sh/chart: service-0.1.0
    app.kubernetes.io/name: service
    app.kubernetes.io/instance: aks
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: service
      app.kubernetes.io/instance: aks
  template:
    metadata:
      labels:
        app.kubernetes.io/name: service
        app.kubernetes.io/instance: aks
    spec:
      imagePullSecrets:
        - name: custom
      serviceAccountName: account-with-azure-cr
      securityContext:
        fsGroup: 2000
        runAsGroup: 3000
        runAsUser: 1000
      containers:
        - name: frontend
          image: imageurl
          imagePullPolicy: IfNotPresent
          ports:
            - name: port-frontend
              containerPort: 3000
              protocol: TCP
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlogpod
          hostPath:
            path: /var/log/pods
</code></pre>
| Loofah | <p>For Kubernetes to understand resource utilization, the requests data is used. You can read about the best practices recommended for AKS at this <a href="https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-resource-management" rel="nofollow noreferrer">link</a>.</p>
<p>Requests tell Kubernetes to reserve some resources for running the pods. I guess when you are saying provisioning of the service, you are talking about pods, since the pod is the working unit here (the service is the load-balancer abstraction which redirects traffic to the appropriate pod). Even if your pod requires very few resources, you can mention them in requests. A limit is something which k8s will enforce (so that is not required in your case), but requests are a must.</p>
<p>Quoting from above document</p>
<blockquote>
<p>In your pod specifications, it's best practice and very important to define these requests and limits based on the above information. If you don't include these values, the Kubernetes scheduler cannot take into account the resources your applications require to aid in scheduling decisions.</p>
</blockquote>
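<p>As a hedged illustration only, the <code>frontend</code> container above could declare a small request (the numbers below are made up) so the scheduler has data to spread pods across nodes, while still leaving CPU unbounded for the initialization burst:</p>
<pre><code>containers:
  - name: frontend
    image: imageurl
    resources:
      requests:
        cpu: 100m        # illustrative value: enough for normal serving
        memory: 128Mi
      limits:
        # no cpu limit, so the pod can still burst during initialization;
        # the memory limit is optional and purely illustrative
        memory: 512Mi
</code></pre>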
| Atul |
<p>Is it possible to send requests to all pods behind a service/ingress controller based on the requests?</p>
<p>My requirement is to send requests to all the pods if the request is /send/all.</p>
| user1578872 | <p>It's not possible, because an ingress controller can't do this (nginx and GLBC based ingresses certainly can't, but due to the way HTTP works I assume this is the case for all ingress controllers).</p>
<p>Depending on what your exact case is, you have a few options.</p>
<p>If your case is just monitoring and you can afford to control the number of requests sent to your pods, you can just set an HTTP <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">liveness probe</a> for your pods. Then you can be sure that if a pod doesn't return a correct response, k8s won't send traffic to it.</p>
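<p>A minimal sketch of such a probe, assuming a hypothetical container and a <code>/healthz</code> endpoint on port 8080:</p>
<pre><code>containers:
  - name: app                 # hypothetical container name
    image: your-image
    ports:
      - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3     # restart the container after 3 consecutive failures
</code></pre>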
<p>If you need to trigger some action on all pods you have few options:</p>
<p>Use messaging - for example you can use the <a href="https://github.com/kubernetes/charts/tree/master/stable/rabbitmq" rel="nofollow noreferrer">rabbitmq chart</a> to deploy RabbitMQ and write some application that will handle your traffic.</p>
<p>Use a DB - create some app that will set a flag in the DB and add some logic to your app to monitor the flag, or create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">cron job</a> to monitor the flag and trigger the required actions on the pods (in this case you can <a href="https://stackoverflow.com/questions/50202961/kube-config-how-to-make-it-available-to-a-rest-service-deployed-in-kubernetes/50203659#50203659">use a service account</a> to give your cron job pod access to the k8s API to list pods).</p>
| Maciek Sawicki |
<p>I could not find documentation that specifies how a Kubernetes service behaves when the affiliated deployment is scaled with multiple replicas.</p>
<p>I'm assuming there's some sort of load balancing. Is it related to the service type?</p>
<p>Also, I would want to have some affinity in the requests forwarded by the service (i.e. all requests with a certain suffix should always be mapped to the same pod if possible, etc.). Is that achievable? The closest I've seen is <a href="https://www.envoyproxy.io/docs/envoy/latest/start/distro/ambassador" rel="noreferrer">Ambassador</a>, but that is affinity at the service level, not the pod level.</p>
| Mugen | <h2>Deployment: Stateless workload</h2>
<blockquote>
<p>I could not find a documentation that specifies how Kubernetes service behaves when the affiliated deployment is scaled with multi replicas.</p>
</blockquote>
<p>Pods deployed with a <code>Deployment</code> are supposed to be stateless.</p>
<h2>Ingress to Service routing</h2>
<p>When using <code>Ingress</code>, an L7 proxy, the routing can be based on HTTP request content, but this depends on what implementation of an IngressController you are using. E.g. <a href="https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/" rel="noreferrer">Ingress-nginx</a> has <em>some</em> support for <em>sticky sessions</em> and other implementations may have what you are looking for. E.g. <a href="https://istio.io/docs/reference/config/networking/destination-rule/#LoadBalancerSettings" rel="noreferrer">Istio</a> has support for similar settings.</p>
<p><strong>Ambassador</strong></p>
<p><a href="https://www.getambassador.io/reference/core/load-balancer/" rel="noreferrer">Ambassador</a> that you write about does also have <em>some</em> support for <em>session affinity / sticky sessions</em>.</p>
<blockquote>
<p>Configuring sticky sessions makes Ambassador route requests to the same backend service in a given session. In other words, requests in a session are served by the same Kubernetes <strong>pod</strong></p>
</blockquote>
<h2>Pod to Service routing</h2>
<p>When a pod in your cluster does an http request to a Service within the cluster, the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">kube-proxy does routing</a> in a <strong>round robin</strong> way by default.</p>
<blockquote>
<p>By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.</p>
</blockquote>
<p>If you want session affinity on pod-to-service routing, you can set the <code>sessionAffinity: ClientIP</code> field on a <code>Service</code> object.</p>
<blockquote>
<p>If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on client’s IP addresses by setting service.spec.sessionAffinity to “ClientIP” (the default is “None”).</p>
</blockquote>
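<p>A minimal sketch of that setting, with placeholder names and the default affinity timeout spelled out:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service             # hypothetical name
spec:
  selector:
    app: my-app
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # default: stick to the same pod for 3 hours
  ports:
    - port: 80
      targetPort: 8080
</code></pre>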
| Jonas |
<p>I have a Spring Boot (1.5.8) webapp, with several other Spring Boot services, all deployed to AWS. The webapp module is deployed to two EC2 instances managed by an Elastic Load Balancer. The whole system is orchestrated by Kubernetes.</p>
<p>I'm trying to set the session cookie max age to work around a problem, as suggested here: <a href="https://stackoverflow.com/questions/48756986/spring-saml-endless-redirect-loop-after-a-successful-authentication">Spring-SAML Endless redirect loop after a successful authentication</a></p>
<p>When I run on my local machine using Docker Compose, all I do is set <code>server.session.cookie.max-age</code> in the webapp's <code>application.yml</code> and it works. </p>
<p>The same thing doesn't work in the Kubernetes-managed system. The webapp has an env actuator endpoint set up and I can see that the max-age setting was applied, but the cookie still has "session" expiration. </p>
<p>The session cookie is named <code>JSESSIONID</code> on my local machine, but just <code>SESSION</code> on the Kubernetes cluster. Why is that? Is the session cookie managed at some higher level in that system, like by the load balancer or Kubernetes itself? I'm pretty lost at this point, so any suggestions would help. </p>
| Greg Charles | <p>OK, it turned out that the session is being managed by Spring due to <code>spring.session.store-type</code> being set to <code>redis</code> in the integration environments. It's <code>none</code> in my local build, which means Tomcat manages the session instead. Sorry, I didn't mean to obfuscate that. I just didn't know what to look for. The default session cookie name for a Spring-managed session is <code>SESSION</code>, while for Tomcat, it's <code>JSESSIONID</code>, so I was at least right about that being the key to the mystery.</p>
<p>There don't seem to be properties for configuring the Spring session cookie, but I found this explanation about how to configure it in code: <a href="https://docs.spring.io/spring-session/docs/current/reference/html5/guides/java-custom-cookie.html" rel="nofollow noreferrer">Spring Session - Custom Cookie</a></p>
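<p>For reference, a hedged <code>application.yml</code> sketch of the properties involved (the values are illustrative; on Spring Boot 1.5.x the <code>server.session.cookie.*</code> settings only apply to the container-managed cookie):</p>
<pre><code>spring:
  session:
    store-type: redis      # Spring Session manages the session; cookie is named SESSION
    # store-type: none     # Tomcat manages the session; cookie is named JSESSIONID
server:
  session:
    cookie:
      max-age: 86400       # illustrative value, in seconds
</code></pre>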
| Greg Charles |
<p>I have installed Traefik on Kubernetes and followed along with the official tutorial.
I have a cluster of 4 machines for Kubernetes.</p>
<p>When I run <code>kubectl --namespace=kube-system get pods</code> I see <code>traefik-ingress-controller-678226159-eqseo</code>, so all fine.</p>
<p>Then I executed:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
</code></pre>
<p>and then ran:</p>
<pre><code>echo "$(my master node ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
</code></pre>
<p>which resulted in:
<code>http://192.168.178.31 traefik-ui.minikube</code> in <code>/etc/hosts</code></p>
<p>I further edited <code>kubectl -n kube-system edit service traefik-web-ui</code> service and changed
the type to <code>NodePort</code>.</p>
<p>When I finally run <code>$ curl http://192.168.178.31:31107</code> I get:</p>
<pre><code>curl: (7) Failed to connect to 192.168.178.31 port 31107: Connection refused
</code></pre>
<p>Does anyone know, why I am getting the Connection refused?</p>
<p><strong>EDIT 1:</strong></p>
<p>Log from <code>kubectl logs -f traefik-ingress-controller-68994b879-5z2xr -n kube-system</code>:</p>
<pre><code>time="2018-05-13T09:55:48Z" level=info msg="Traefik version v1.6.0 built on 2018-04-30_09:28:44PM"
time="2018-05-13T09:55:48Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n"
time="2018-05-13T09:55:48Z" level=info msg="Preparing server http &{Address::80 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0x14ed5e50} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-05-13T09:55:48Z" level=info msg="Preparing server traefik &{Address::8080 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0x14ed5e60} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-05-13T09:55:48Z" level=info msg="Starting server on :80"
time="2018-05-13T09:55:48Z" level=info msg="Starting provider *kubernetes.Provider {\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Trace\":false,\"TemplateVersion\":0,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"\",\"Token\":\"\",\"CertAuthFilePath\":\"\",\"DisablePassHostHeaders\":false,\"EnablePassTLSCert\":false,\"Namespaces\":null,\"LabelSelector\":\"\",\"IngressClass\":\"\"}"
time="2018-05-13T09:55:48Z" level=info msg="Starting server on :8080"
time="2018-05-13T09:55:48Z" level=info msg="ingress label selector is: \"\""
time="2018-05-13T09:55:48Z" level=info msg="Creating in-cluster Provider client"
time="2018-05-13T09:55:48Z" level=info msg="Server configuration reloaded on :80"
time="2018-05-13T09:55:48Z" level=info msg="Server configuration reloaded on :8080"
time="2018-05-13T09:55:53Z" level=info msg="Server configuration reloaded on :80"
time="2018-05-13T09:55:53Z" level=info msg="Server configuration reloaded on :8080"
time="2018-05-13T09:55:55Z" level=info msg="Server configuration reloaded on :80"
time="2018-05-13T09:55:55Z" level=info msg="Server configuration reloaded on :8080"
</code></pre>
| Max | <p>in <a href="https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml</a> there is following ingress definition:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
    - host: traefik-ui.minikube
      http:
        paths:
          - path: /
            backend:
              serviceName: traefik-web-ui
              servicePort: web
</code></pre>
<p>This means you should access traefik-web-ui via the ingress service.</p>
<p>If you deployed traefik as Deployment (<a href="https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml</a>), you should check the NodePort returned by <code>kubectl describe svc traefik-ingress-service -n kube-system</code> and use it as your url (<a href="http://traefik-ui.minikube:xxx" rel="nofollow noreferrer">http://traefik-ui.minikube:xxx</a>)</p>
<p>(you don't have to change traefik-web-ui to NodePort)</p>
<p>If you used a DaemonSet (<a href="https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml</a>) just use <code>http://traefik-ui.minikube</code>.</p>
<p>If you would like to access <code>traefik-web-ui</code> directly the easiest way would be:
<code>minikube service traefik-web-ui --url</code></p>
| Maciek Sawicki |
<p>We are running an Akka cluster on k8s and it is using a downing strategy (let's say autodowning), so in the case where a node goes unreachable, the container which went unreachable exits. The problem is that this node went unreachable because of a network issue or an issue with the platform provided by k8s, and as such the entire pod should be restarted and scheduled onto a new healthy k8s node. Because scheduling can take some time, we only want to reschedule the container onto a new pod on a new node if unreachability is the cause of the failure. Is there any way to propagate failure messages to the parent in k8s, like using an exit code, to make the decision of when to restart the container and when to delete the pod?</p>
| cyborg-panther | <blockquote>
<p>Because scheduling can take some time we only want to reschedule the container onto a new pod on a new node if unreachability is the cause of the failure. </p>
</blockquote>
<p>Kubernetes manages all scheduling and health checks for you.</p>
<blockquote>
<p>Is there any way to propagate failure messages to the parent in Kubernetes like use an exit code</p>
</blockquote>
<p>Kubernetes creates <a href="https://www.bluematador.com/blog/kubernetes-events-explained" rel="nofollow noreferrer">events</a> for some of these situations, or you can <em>watch</em> the API for changes on Pods.</p>
<blockquote>
<p>to make the decision of when to restart the container and when to delete the pod.</p>
</blockquote>
<p>Kubernetes manages restart, scheduling and eviction of Pods.</p>
| Jonas |
<p>I am wondering if there are examples of full application stack based on Kubernetes, for ex: golang+solr+postgres with all the services and load balancers configured? And is it a good idea to have services like PostgreSQL and Solr on Kubernetes?</p>
| Gadelkareem | <p>For databases, you can use SaaS, since it relieves you of tasks like backup and management. Or, if you really want to go all in on Kubernetes, you can use operators for the databases. Operators manage most of the lifecycle for you.</p>
<p>As far as the other components are concerned, you will have to containerize them and then create the deployment artifacts. You can also use tools like Konveyor Move2Kube (<a href="https://github.com/konveyor/move2kube" rel="nofollow noreferrer">https://github.com/konveyor/move2kube</a>) to accelerate the process.</p>
| Ashok Pon Kumar |
<p>I am new to kubernetes. I have an issue in the pods. When I run the command</p>
<pre><code> kubectl get pods
</code></pre>
<p>Result:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
mysql-apim-db-1viwg 1/1 Running 1 20h
mysql-govdb-qioee 1/1 Running 1 20h
mysql-userdb-l8q8c 1/1 Running 0 20h
wso2am-default-813fy 0/1 ImagePullBackOff 0 20h
</code></pre>
<p>Due to an issue with the "wso2am-default-813fy" pod, I need to restart it. Any suggestions? </p>
| Dilshani Subasinghe | <p>In case of not having the yaml file:</p>
<p><code>kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -</code></p>
| Maciek Sawicki |
<p>Currently I have one VM for the Kubernetes master (k8s-server) and a second for the worker node (node-server). I need to specify a rule for how these 2 servers communicate with each other. Through which ports should k8s-server have access to node-server and vice versa?</p>
| George | <p>Strictly speaking Kubernetes makes use of the following ports, depending on your topology and configuration:</p>
<pre><code>Kubelet
- healthz, default tcp:10248
- kubelet, default tcp:10250
- readonly, default tcp:10255
Kube-proxy
- healthz, default tcp:10256
- metrics, default tcp:10249
- proxy-port-range, default is randomly chosen tcp range
Api-server
- secure-port, default tcp:6443
- service-node-port-range, default tcp:30000-32767
Controller-manager
- secure-port, default tcp:10257
Kube-scheduler
- port, default tcp:10251
- secure port, default tcp:10259
Cloud-controller-manager
- secure port, default tcp:10258
Etcd
- port, default tcp:2379-2380
</code></pre>
<p>However for your particular setup, I believe the following should suffice:</p>
<pre><code>Master -> Kubelet
- kubelet-port, default 10250
- kubelet-readonly, default 10255
- service-node-port-range, default 30000-32767
Kubelet -> Master
- apiserver-secure-port, default 6443
- etcd-port, default 2379
- kubelet-port, default 10250
</code></pre>
<p>Hope this helps.</p>
| cewood |
<p>I want to set up a k8s cluster, but I'm struggling with the nginx-ingress controller and some special settings I need to set: especially proxy_pass.</p>
<p>I already tried to achieve that with the "server-snippet" annotation, but it didn't work.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/hsts: "false"
    nginx.ingress.kubernetes.io/server-snippet: |
      location / {
        internal;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffering off;
        proxy_pass http://localhost:30022/site/;
        proxy_redirect default;
        proxy_cookie_path /site/ /;
      }
spec:
  rules:
    - host: preview.test.de
      http:
        paths:
          - path: /
            backend:
              serviceName: backend-service
              servicePort: 8080
</code></pre>
<p>What I want to achieve is this nginx config:</p>
<pre><code>location / {
  internal;
  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header X-Forwarded-Server $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_buffering off;
  proxy_pass http://localhost:30022/site/;
  proxy_redirect default;
  proxy_cookie_path /site/ /;
}
</code></pre>
<p>In an optimal world, I would like the host and the port in the proxy_pass directive to depend on the backend pod I want to connect to, so that there is no hard-coded port.</p>
<p>Can anyone help me out with this problem?</p>
| Peter Lang | <p>I believe what you're looking for is:</p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target: /site</code></p>
<p>However this will effectively mean that you can only use this particular ingress instance for this one app, or others like it, since this rewrite would apply to all the rules for that ingress instance.</p>
<p>You might be able to accomplish the same thing using a regex based rule, but this approach is definitely simpler.</p>
<p>Regarding your question about how to handle the two different paths which require rewrites, this should be possible with the following ingress config:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: preview.test.de
      http:
        paths:
          - path: /(site/.*)
            backend:
              serviceName: backend-service
              servicePort: 8080
          - path: /(cms/.*)
            backend:
              serviceName: cms-service
              servicePort: 8080
</code></pre>
<p>However as you might be able to guess, this might become more difficult if you end up having many paths with lots of rewrites later on.</p>
<p>As for best practices for configuring this type of setup, I would generally recommend adopting sub-domains. Set up one for each site/path you want to be reachable, and let them have their own distinct nginx/ingress/etc. to handle their rewrites as needed. Then if you want to tie these together later into some other top-level domain/site, it's easily done, and you avoid having to manage many rewrite rules in one location, which can get quite messy.</p>
| cewood |
<p>I have self-hosted a .NET Framework 4.x WCF console app with net.tcp binding, which is running in an Azure k8s container as a self-hosted WCF service</p>
<p>exposed as <a href="http://net.tcp://CONTAINERIP:5000/WCFServiceName" rel="nofollow noreferrer">net.tcp://CONTAINERIP:5000/WCFServiceName</a></p>
<p>and the port 5000 is exposed via Loadbalancer type ingress service</p>
<p>so the client will be accessing this service like below</p>
<p><a href="http://net.tcp://LoadBalancerIP:5000/containerAppName/WCFServiceName" rel="nofollow noreferrer">net.tcp://LoadBalancerIP:5000/containerAppName/WCFServiceName</a></p>
<p>But using the load balancer IP is not forwarding the request to the container - I'm getting the error below.</p>
<p>'There was no endpoint listening at net.tcp://LoadBalancerIp:5000/ContainerAppName/WCFServiceName that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.'</p>
<p>LB Yaml</p>
<pre><code>spec:
  clusterIP: IP.ADDRESS.OF.CLUSTER
  externalTrafficPolicy: Cluster
  ports:
    - name: nettcp
      nodePort: 30412
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: CONTAINERAPPNAME
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: LB.PUBLIC.IP.ADDRESS
</code></pre>
<p>Any ideas or suggestions?</p>
| solairaja | <p>Guys thanks for the inputs,</p>
<p>Found the reason for the issue: since the app is deployed using the LoadBalancer type service, it's not required to provide the app name in the URL.</p>
<p>when the outside world access the service :</p>
<p><a href="http://net.tcp://LoadBalancerIP:5000/containerAppName/WCFServiceName" rel="nofollow noreferrer">net.tcp://LoadBalancerIP:5000/containerAppName/WCFServiceName</a> - WONT WORK</p>
<p><a href="http://net.tcp://LoadBalancerIP:5000/WCFServiceName" rel="nofollow noreferrer">net.tcp://LoadBalancerIP:5000/WCFServiceName</a> - THIS WORKS !!! (Solution)</p>
<p>or</p>
<p>when we do the self-hosting, we need to include the app name in the URI used to host the WCF TCP service.</p>
| solairaja |
<p>The issue is that I would like to persist one status file (generated by the service), not the whole directory, so that the status is not lost when the service restarts. How can I solve this?</p>
| zulv | <p>If it's just a status file, you should be able to write it into a config map. See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">Add ConfigMap data to a Volume</a>. If in volumes you have</p>
<pre><code>volumes:
  - name: status
    configMap:
      name: status
      defaultMode: 420
      optional: true
</code></pre>
<p>and in volumeMounts</p>
<pre><code>volumeMounts:
  - name: status
    mountPath: /var/service/status
</code></pre>
<p>then you should be able to write in it. See also how kube-dns does it with the <code>kube-dns-config</code> mount from <code>kube-dns</code> config-map.</p>
| Jan Hudec |
<p>What we're looking for is a way for an actuator health check to signal some intention like "I am limping but not dead. If there are X number of other pods claiming to be healthy, then you should restart me, otherwise, let me limp."</p>
<p>We have a rest service hosted in clustered Kubernetes containers that periodically call out to fetch fresh data from an external resource. Occasionally we have failures reaching those external resources, and sometimes, but not every time, a restart of the pod will resolve the issue.</p>
<p>The services can operate just fine on possibly stale data. Although we wouldn't want to continue operating on stale data, that's preferable to just going down entirely.</p>
<p>In the interim, we're planning on having a node unilaterally decide not to report any problems through actuator until X amount of time has passed since the last successful sync, but that really only delays the point at which all nodes would still report failure.</p>
| Brian Deacon | <p>In Kubernetes you can use a LivenessProbe and a ReadinessProbe to let a controller <em>heal</em> your service, but some situations are better handled with HTTP response codes or an alternative <em>degraded</em> service.</p>
<h2>LivenessProbe</h2>
<p>Use a LivenessProbe to <strong>resolve a deadlock</strong> situation. When your pod does not respond on a LivenessProbe, it will be killed and a new pod will replace it.</p>
<h2>ReadinessProbe</h2>
<p>Use a ReadinessProbe when your pod is <strong>not prepared for serving requests</strong>, e.g. if your pod needs to read some files or needs to connect to an external service before serving requests.</p>
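<p>A hedged sketch of how the two probes could be declared for a Spring Boot style pod; the container name and actuator endpoints are assumptions:</p>
<pre><code>containers:
  - name: app                               # hypothetical container name
    image: your-image
    livenessProbe:                          # kill and replace the container if it deadlocks
      httpGet:
        path: /actuator/health/liveness     # assumed endpoint
        port: 8080
      periodSeconds: 10
    readinessProbe:                         # stop routing traffic while not ready to serve
      httpGet:
        path: /actuator/health/readiness    # assumed endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
</code></pre>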
<h2>Fault affecting all replicas</h2>
<p>If you have a problem that all your replicas depend on, e.g. an external service is down, then you can not solve it by restarting your pods. You may use an <a href="https://martinfowler.com/articles/feature-toggles.html" rel="nofollow noreferrer">ops toggle</a> or a circuit breaker in this situation, notifying other services that you are degraded or showing a message about a temporary error.</p>
<p><strong>For your situations</strong></p>
<blockquote>
<p>If there are X number of other pods claiming to be healthy, then you should restart me, otherwise, let me limp.</p>
</blockquote>
<p>You can not <em>delegate</em> that logic to Kubernetes. Your application needs to understand each fault situation, e.g. whether an error was a transient network error or whether your error will affect all replicas.</p>
| Jonas |
<p>I'm experiencing an issue where an image I'm running as part of a Kubernetes deployment is behaving differently from the expected and consistent behavior of the same image run with <code>docker run <...></code>. My understanding of the main purpose of containerizing a project is that it will always run the same way, regardless of the host environment (ignoring the influence of the user and of outside data). Is this wrong?</p>
<p>Without going into too much detail about my specific problem (since I feel the solution may likely be far too specific to be of help to anyone else on SO, and because I've already detailed it <a href="https://stackoverflow.com/questions/53714029/gunicorn-continually-booting-workers-when-run-in-a-docker-image-on-kubernetes">here</a>), I'm curious if someone can detail possible reasons to look into as to why an image might run differently in a Kubernetes environment than locally through Docker.</p>
| Mike S | <p>The general answer of why they're different is <em>resources</em>, but the real answer is that they should both be identical given identical resources.</p>
<p>Kubernetes uses <code>docker</code> for its container runtime, at least in most cases I've seen. There are some other runtimes (<a href="https://cri-o.io/" rel="nofollow noreferrer"><code>cri-o</code></a> and <a href="https://coreos.com/rkt/" rel="nofollow noreferrer"><code>rkt</code></a>) that are less widely adopted, so using those may also contribute to variance in how things work.</p>
<p>On your local <code>docker</code> it's pretty easy to mount things like directories (volumes) into the image, and you can populate the directory with some content. Doing the same thing on <code>k8s</code> is more difficult, and probably involves more complicated mappings, persistent volumes or an init container.</p>
<p>Running <code>docker</code> on your laptop and <code>k8s</code> on a server somewhere may give you different hardware resources:</p>
<ul>
<li>different amounts of RAM</li>
<li>different size of hard disk</li>
<li>different processor features</li>
<li>different core counts</li>
</ul>
<p>The last one is most likely what you're seeing, <code>flask</code> is probably looking up the core count for both systems and seeing two different values, and so it runs two different thread / worker counts.</p>
| Maelstrom |
<p>I have a cluster hosted on GKE, I have several deployments on this cluster, <br>I can connect with <code>kubectl exec</code> to the pods:</p>
<p><code>kubectl exec -it mypod-1234566-7890976 -- bash</code></p>
<p>I want to remove the option to connect with <code>kubectl exec</code> to a certain container </p>
<p>Is there a way to block the option to connect to the container, e.g. by blocking SSH in the <code>DOCKERFILE</code> of the container, or any other way?</p>
| dina | <p>To limit the ability to <code>kubectl exec</code> to pods, what you want to do is create a custom Role & RoleBinding that removes the <code>create</code> verb for the <code>pods/exec</code> resource. An easy approach to this might be to copy the default RBAC policies, and then make the appropriate edits and rename.</p>
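<p>A hedged sketch of such a Role and RoleBinding; the names, namespace and subject are placeholders:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader-no-exec      # hypothetical name
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
    # note: no rule granting the "create" verb on "pods/exec",
    # so kubectl exec is denied for subjects bound to this Role
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-no-exec
  namespace: my-namespace
subjects:
  - kind: User
    name: some-user             # placeholder subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader-no-exec
  apiGroup: rbac.authorization.k8s.io
</code></pre>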
<p>Because of how RBAC works, the finest granularity you could apply to this is per-namespace, but it's not possible to filter this to a particular pod/deployment/etc.</p>
<p>As for other inbound external connections to a pod, this shouldn't be possible by default, unless you have created an Ingress and/or Service specifically to do this. This is because, by and large, most providers will be using private IP address ranges for the node IPs and also the Pod networking, hence they aren't reachable from outside without some NATing or proxying.</p>
<p>Hope this helps.</p>
| cewood |
<p>I noticed more and more stuff is distributed using Ansible collections. It looks great but it is unclear to me how Ansible collections are used / should be used.</p>
<p>For example when I try </p>
<pre><code>ansible-galaxy collection install community.kubernetes
</code></pre>
<p>It just displays a warning and error and does nothing</p>
<pre><code>[user:~] 5 $ ansible-galaxy collection install community.kubernetes
- downloading role 'collection', owned by [WARNING]: - collection was NOT installed successfully: Content has no field named 'owner'
ERROR! - you can use --ignore-errors to skip failed roles and finish
processing the list.
</code></pre>
<p>Ignoring errors doesn't help, it still won't install</p>
<pre><code>[user:~] $ ansible-galaxy collection install community.kubernetes --ignore-errors
- downloading role 'collection', owned by
[WARNING]: - collection was NOT installed successfully: Content has no field named 'owner'
- downloading role 'kubernetes', owned by community
[WARNING]: - community.kubernetes was NOT installed successfully: - sorry, community.kubernetes was not found on
https://galaxy.ansible.com.
[user:~] $
</code></pre>
| onknows | <p>Collections require Ansible 2.9.*</p>
| onknows |
<p>When using Spring Boot and managing database changes with Liquibase, all changes are executed on application start. This is totally fine for fast-running changes.</p>
<p>Some changes, e.g. adding a DB index, can run for a while. When running the application on K8s, it happens that liveness/readiness checks trigger an application restart. In this case Liquibase causes an endless loop.</p>
<p>Is there a pattern for managing long-running scripts with Liquibase? Any examples?
One approach might be splitting the changes into two groups:</p>
<ul>
<li>Execute before application start.</li>
<li>Or execute while application is already up and running.</li>
</ul>
| lunanigra | <p>Defer the long-running script until after application start. Expose the DB upgrade invocation via a private REST API or similar.</p>
<p>Your code and dev practice have to support the n-1 version of the DB schema. E.g. hide the feature that needs the new column behind a feature flag until the schema upgrade is fully rolled out.</p>
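<p>One hedged way to express that split is with Liquibase contexts; the changelog sketch below uses made-up table and changeset names. Fast changes carry no context and run at startup, while the long-running index change is tagged with a context that is excluded at startup (e.g. via <code>spring.liquibase.contexts</code>) and invoked later from the separate API call:</p>
<pre><code>databaseChangeLog:
  # fast change: no context, always runs at application start
  - changeSet:
      id: add-status-column
      author: team
      changes:
        - addColumn:
            tableName: orders
            columns:
              - column:
                  name: status
                  type: varchar(32)
  # long-running change: only runs when the "post-startup" context is requested,
  # e.g. from a separate invocation triggered after the app is up
  - changeSet:
      id: add-orders-status-index
      author: team
      context: post-startup
      changes:
        - createIndex:
            indexName: idx_orders_status
            tableName: orders
            columns:
              - column:
                  name: status
</code></pre>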
| gerrytan |
<p>Is it possible to perform authorization (rule-based) in a Kubernetes ingress (like Kong or nginx)?
For example, I have this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
    - host: api.foo.bar
      http:
        paths:
          - path: /service
            backend:
              serviceName: service.foo.bar
              servicePort: 80
</code></pre>
<p>But before redirecting to /service, I need to perform a call to my authorization API to validate whether the request token has the rule to pass to /service.</p>
<p>Or do I really need to use an API gateway behind the ingress, like Spring Zuul, to do this?</p>
| Ricardo Palazzio | <p>An <code>Ingress</code> manifest is just input for a controller. You also need an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a>, a proxy that understands the <code>Ingress</code> object. Kong and nginx are two examples of implementations.</p>
<p>The <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx Ingress Controller</a> is provided by the Kubernetes community and it has an example of <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/auth/oauth-external-auth" rel="nofollow noreferrer">configuring an external oauth2 proxy</a> using annotations</p>
<pre><code>annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
</code></pre>
| Jonas |
<p>I am writing out json structured log messages to stdout with exactly one time field, called <code>origin_timestamp</code>. </p>
<p>I collect the log messages using Fluent Bit with the tail input plugin, which uses the parser <code>docker</code>. The parser is configured with the <code>Time_Key time</code>. </p>
<p>The documentation about <code>Time_Key</code> says:</p>
<blockquote>
<p>If the log entry provides a field with a timestamp, this option
specify the name of that field.</p>
</blockquote>
<p>Since <code>time</code> != <code>origin_timestamp</code>, I would have thought no time fields will be added by Fluent Bit, however the final log messages ending up in Elasticsearch have the following time fields:</p>
<ul>
<li>(<code>origin_timestamp</code> within the field log that contains the original log message)</li>
<li><code>origin_timestamp</code></li>
<li><code>time</code></li>
<li><code>@timestamp</code> (sometimes even multiple times).</li>
</ul>
<p>The <code>@timestamp</code> field is probably added by the es output plugin I am using in Fluent Bit, but <strong>where the heck is the <code>time</code> field coming from?</strong> </p>
| DaveFar | <p>I came across the following issue in the Fluent-bit issue tracker, <a href="https://github.com/fluent/fluent-bit/issues/628#issuecomment-518316428" rel="nofollow noreferrer">Duplicate @timestamp fields in elasticsearch output</a>, which sounds like it might be related to your issue in question.</p>
<p>I've deep linked to a particular comment from one of the contributors, which outlines two possible solutions depending on whether you are using their Kubernetes Filter plugin, or are ingesting the logs into Elasticsearch directly.</p>
<p>Hope this helps.</p>
| cewood |
<p>We have deployed a few pods in the cluster in various namespaces. I would like to inspect and identify all pods which are not in a Ready state.</p>
<pre><code> master $ k get pod/nginx1401 -n dev1401
NAME READY STATUS RESTARTS AGE
nginx1401 0/1 Running 0 10m
</code></pre>
<p>In the above list, the pod shows a Running status but has some issue. How can we find the list of such pods? The commands below are not showing me the desired output:</p>
<pre><code>kubectl get po -A | grep Pending                         # looking for pods that have yet to schedule
kubectl get po -A | grep -v Running                      # looking for pods in a state other than Running
kubectl get pods --field-selector=status.phase=Failed
</code></pre>
| sachin | <p>There is a long-standing feature request for this. The <a href="https://github.com/kubernetes/kubernetes/issues/49387#issuecomment-663903782" rel="nofollow noreferrer">latest entry</a> suggests</p>
<pre><code>kubectl get po --all-namespaces | gawk 'match($3, /([0-9])+\/([0-9])+/, a) {if (a[1] < a[2] && $4 != "Completed") print $0}'
</code></pre>
<p>for finding pods that are running but not complete.</p>
<p>There are a lot of other suggestions in the thread that might work as well.</p>
| michaelrp |
<p>We are running a .NET Core 3.1 application in a Kubernetes cluster. The application connects to an Azure SQL Database using EF Core 3.1.7, with Microsoft.Data.SqlClient 1.1.3.</p>
<p>At seemingly random times, we would receive the following error.</p>
<pre><code> ---> System.Data.SqlClient.SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
---> System.ComponentModel.Win32Exception (258): Unknown error 258
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParserStateObject.ThrowExceptionAndWarning(Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
at System.Data.SqlClient.TdsParserStateObject.ReadSniSyncOverAsync()
at System.Data.SqlClient.TdsParserStateObject.TryReadNetworkPacket()
at System.Data.SqlClient.TdsParserStateObject.TryPrepareBuffer()
at System.Data.SqlClient.TdsParserStateObject.TryReadByte(Byte& value)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds)
at System.Data.SqlClient.SqlCommand.ExecuteScalar()
</code></pre>
<p>Even though it seems random, it definitely happens more often under heavier loads. From my research, it appears as if this specific timeout is related to the connection timeout rather than the command timeout. I.e. the client is not able to establish a connection at all. This is not a query that is timing out.</p>
<p>Potential root causes we've eliminated:</p>
<ul>
<li><strong>Azure SQL Server Capacity:</strong> The behaviour is observed whether we run on 4 or 16 vCPUs. Azure Support also confirmed that there are no issues in the logs. This includes the number of open connections, which is only around 50. We also ran load tests from other connections and the server held up fine.</li>
<li><strong>Microsoft.Data.SqlClient Versions:</strong> We've been running on version 1.1.3 and this behaviour only started a week ago (2021-03-16).</li>
<li><strong>Network Capacity:</strong> We are maxing out at around 1-2MB/s at this stage, which is pretty pedestrian.</li>
<li><strong>Kubernetes Scaling:</strong> There is no correlation between the occurrence of the events and when we scale up more pods.</li>
<li><strong>Connection String Issues:</strong> Our system used to work fine, but regardless we changed a few settings mentioned in other articles to see if the issue would resolve itself. MARS is disabled. We cannot disable connection pooling. We have <code>TrustServerCertificate</code> set to true. Here is the current connection string: <code>Server=tcp:***.database.windows.net,1433;Initial Catalog=***;Persist Security Info=False;User ID=***;Password=***;MultipleActiveResultSets=False;Encrypt=True;Connection Timeout=60;TrustServerCertificate=True;</code></li>
</ul>
<p><strong>Update 1:</strong>
As requested, an example of two timeouts that just occurred. It is a Sunday, so traffic is extremely low. Database utilization (CPU, Mem, IO) is sitting between 2-6%.</p>
<pre><code>Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
---> System.ComponentModel.Win32Exception (258): Unknown error 258
at Microsoft.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at Microsoft.Data.SqlClient.TdsParserStateObject.ThrowExceptionAndWarning(Boolean callerHasConnectionLock, Boolean asyncClose)
at Microsoft.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
at Microsoft.Data.SqlClient.TdsParserStateObject.ReadSniSyncOverAsync()
at Microsoft.Data.SqlClient.TdsParserStateObject.TryReadNetworkPacket()
at Microsoft.Data.SqlClient.TdsParserStateObject.TryPrepareBuffer()
at Microsoft.Data.SqlClient.TdsParserStateObject.TryReadByte(Byte& value)
at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at Microsoft.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at Microsoft.Data.SqlClient.TdsParser.TdsExecuteTransactionManagerRequest(Byte[] buffer, TransactionManagerRequestType request, String transactionName, TransactionManagerIsolationLevel isoLevel, Int32 timeout, SqlInternalTransaction transaction, TdsParserStateObject stateObj, Boolean isDelegateControlRequest)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.ExecuteTransactionYukon(TransactionRequest transactionRequest, String transactionName, IsolationLevel iso, SqlInternalTransaction internalTransaction, Boolean isDelegateControlRequest)
at Microsoft.Data.SqlClient.SqlInternalConnection.BeginSqlTransaction(IsolationLevel iso, String transactionName, Boolean shouldReconnect)
at Microsoft.Data.SqlClient.SqlConnection.BeginTransaction(IsolationLevel iso, String transactionName)
at Microsoft.Data.SqlClient.SqlConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.BeginTransaction(IsolationLevel isolationLevel)
at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded)
</code></pre>
<p>We are also receiving errors on our database health checker when using this command: <code>Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade.CanConnect()</code></p>
<p>The above stack trace is the issue we are trying to solve versus this stack trace below of the SQL query timing out.</p>
<pre><code>Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
---> System.ComponentModel.Win32Exception (258): Unknown error 258
at Microsoft.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at Microsoft.Data.SqlClient.SqlDataReader.TryConsumeMetaData()
at Microsoft.Data.SqlClient.SqlDataReader.get_MetaData()
at Microsoft.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString, Boolean isInternal, Boolean forDescribeParameterEncryption, Boolean shouldCacheForAlwaysEncrypted)
at Microsoft.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean isAsync, Int32 timeout, Task& task, Boolean asyncWrite, Boolean inRetry, SqlDataReader ds, Boolean describeParameterEncryptionRequest)
at Microsoft.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry, String method)
at Microsoft.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior)
</code></pre>
| André Haupt | <p>The problem was an infrastructure issue at <a href="https://github.com/Azure/aks-engine/issues/4341" rel="noreferrer">Azure</a>.</p>
<blockquote>
<p>There is a known issue within Azure Network where the dhcp lease is
lost whenever a disk attach/detach happens on some VM fleets. There is
a fix rolling out at the moment to regions. I'll check to see when
Azure Status update will be published for this.</p>
</blockquote>
<p>The problem disappeared, so it appears as if the fix has been rolled out globally.</p>
<p>For anyone else running into this issue in the future, you can identify it by establishing an <a href="https://learn.microsoft.com/en-us/azure/aks/ssh" rel="noreferrer">SSH connection into the node</a> (not the pod). Do an <code>ls -al /var/log/</code> and identify all the <code>syslog</code> files and run the following grep on each file.</p>
<pre><code>cat /var/log/syslog | grep 'carrier'
</code></pre>
<p>If you have any <code>Lost carrier</code> and <code>Gained carrier</code> messages in the log, there is a some sort of a network issue. In our case it was the DHCP lease.</p>
<p><a href="https://i.stack.imgur.com/gcCeZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gcCeZ.png" alt="enter image description here" /></a></p>
| André Haupt |
<p>It is not possible to join master nodes without having <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#steps-for-the-first-control-plane-node" rel="nofollow noreferrer">set a <code>controlPlaneEndpoint</code></a>:</p>
<blockquote>
<p>error execution phase preflight:<br>
One or more conditions for hosting a new control plane instance is not satisfied.<br>
unable to add a new control plane instance a cluster that doesn't have a stable controlPlaneEndpoint address<br>
Please ensure that:<br>
* The cluster has a stable controlPlaneEndpoint address.</p>
</blockquote>
<p>But if you instead join a worker node (i.e. without <code>--control-plane</code>), then it is not only aware of other nodes in the cluster, but also which ones are masters.</p>
<p>This is because the <code>mark-control-plane</code> phase does:</p>
<blockquote>
<p>Marking the node as control-plane by adding the label "node-role.kubernetes.io/master=''"
Marking the node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]</p>
</blockquote>
<p>So couldn't masters (<code>--control-plane</code>) join the cluster and use the role label to <em>discover</em> the other control plane nodes?</p>
<p>Is there any such plugin or other way of configuring this behaviour to avoid separate infrastructure for load balancing the API server?</p>
| OJFord | <p>Looking at the <a href="https://github.com/kubernetes/kubernetes/blob/63e27a02ed41f9554b547b18d4bffd69f6adf08a/cmd/kubeadm/app/apis/kubeadm/v1beta2/types.go#L70-L80" rel="noreferrer">kubeadm types definition</a> I found this nice description that clearly explains it:</p>
<blockquote>
<p>ControlPlaneEndpoint sets a stable IP address or DNS name for the control plane; it
can be a valid IP address or a RFC-1123 DNS subdomain, both with optional TCP port.
In case the ControlPlaneEndpoint is not specified, the AdvertiseAddress + BindPort
are used; in case the ControlPlaneEndpoint is specified but without a TCP port,
the BindPort is used.
Possible usages are:
e.g. In a cluster with more than one control plane instances, this field should be
assigned the address of the external load balancer in front of the
control plane instances.
e.g. in environments with enforced node recycling, the ControlPlaneEndpoint
could be used for assigning a stable DNS to the control plane.</p>
</blockquote>
<p>This also likely affects the PKI generated by Kubernetes, as it will need to know a common name/IP through which you will access the cluster, to include in the certs it generates for the API nodes; otherwise these won't match up correctly.</p>
<p>If you really didn't want to have a load balancer, you might be able to set up a round-robin DNS entry with the IPs of all the control plane nodes and try specifying this for the <code>controlPlaneEndpoint</code> value. However this won't help with failover and redundancy, since failed nodes won't be removed from the record, and some clients might cache the address and not try to re-resolve it, thus further prolonging any outages.</p>
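<p>For illustration, the field lives in the kubeadm <code>ClusterConfiguration</code>; the hostname below is a placeholder for either a load balancer address or such a round-robin DNS record:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0                     # illustrative version
controlPlaneEndpoint: "cp.example.com:6443"    # LB address or round-robin DNS name
</code></pre>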
<p>Hope this helps.</p>
| cewood |
<p>I would like to access a Windows file share share (SMB3) from a docker container, but I do not want to compromise the security of the host machine. All the guides I have read state that I need to use either the <code>--privileged</code> flag or <code>--cap-add SYS_ADMIN</code> capability.</p>
<p>Here is the command I use:</p>
<blockquote>
<p>mount -t cifs -o
username='[email protected]',password='some_password'
//192.168.123.123/MyShare /mnt/myshare</p>
</blockquote>
<p>Which results in the message:</p>
<blockquote>
<p>Unable to apply new capability set.</p>
</blockquote>
<p>When I apply the <code>--cap-add SYS_ADMIN</code> capability the mount command works fine, but I understand this exposes the host to obvious security vulnerabilities.</p>
<p>I have also read the suggestion in this StackOverflow question (<a href="https://stackoverflow.com/questions/27989751/mount-smb-cifs-share-within-a-docker-container">Mount SMB/CIFS share within a Docker container</a>) to mount the volume locally on the server that runs docker. This is undesirable for two reasons: firstly, the container is orchestrated by a Rancher Kubernetes cluster and I don't know how to achieve what is described by nPcomp using Rancher; and secondly, this means the volume is accessible to the docker host. I'd prefer only the container have access to this share, via the credentials given to it via secrets.</p>
<p>My question is: is there way to mount a CIFS/SMB3 share in a docker container (within Kubernetes) without exposing the host to privilege escalation vulnerabilities and protecting the credentials? Many thanks.</p>
| ben | <p>After more research I have figured out how to do this. There is a Container Storage Interface (CSI) driver for SMB called <strong>SMB CSI Driver for Kubernetes</strong> (<a href="https://github.com/kubernetes-csi/csi-driver-smb" rel="noreferrer">https://github.com/kubernetes-csi/csi-driver-smb</a>).</p>
<p>After installing the CSI driver using helm (<a href="https://github.com/kubernetes-csi/csi-driver-smb/tree/master/charts" rel="noreferrer">https://github.com/kubernetes-csi/csi-driver-smb/tree/master/charts</a>) you can follow the example at <a href="https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/e2e_usage.md" rel="noreferrer">https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/e2e_usage.md</a> (<strong>Option #2 Create PV/PVC</strong>) to create a Persistent Volume (PV) and Persistent Volume Claim (PVC) which mounts the SMB3 share.</p>
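<p>As a rough sketch of what the static PV/PVC from that example looks like (the share address, size and secret name are assumptions taken from the question, not tested values):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-smb
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: myshare-volume-id        # unique id for this share
    volumeAttributes:
      source: "//192.168.123.123/MyShare"
    nodeStageSecretRef:
      name: smbcreds                       # secret with username/password
      namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-smb
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: pv-smb
  resources:
    requests:
      storage: 10Gi
</code></pre>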
<p>Then you create your container and give it the relevant Persistent Volume Claim, specifying you want to mount it as /mnt/myshare etc.</p>
<p>I tested this and it gets deployed to multiple worker nodes automatically and works well, without needing the <code>privileged</code> flag or <code>--cap-add SYS_ADMIN</code> to be given to the containers.</p>
<p>This supports SMB3 and even authentication & encryption. To enable encryption go to your Windows Server > File and Storage Services, select the share, Properties > Settings > Encrypt Data Access.</p>
<p>Wireshark shows all the SMB traffic is encrypted. The only thing I don't recall is whether you have to install <code>cifs-utils</code> manually first; since I had already done this on all my nodes, I wasn't able to test that.</p>
<p>Hope this helps somebody.</p>
| ben |
<p>How do I give my pod or minikube the ability to see the 10.x network my laptop is VPN'd onto?</p>
<p>Setup:</p>
<ul>
<li>minikube</li>
<li>php containers</li>
</ul>
<p>php code accesses a private repository, a 10.x address. Things are fine locally, but I cannot access this same 10.x address while in a pod.</p>
<p>How can I give my pods/minikube access to my VPN route?</p>
<pre><code>my-pod-99dc9d9d4-6thdj#
my-pod-99dc9d9d4-6thdj# wget https://private.network.host.com/
Connecting to private.network.host.com (10.x.x.x:443)
^C
my-pod-99dc9d9d4-6thdj#
</code></pre>
<p>(sanitized, obviously)</p>
<p>PS: I did find ONE post that mentions what I'm after, but I can't seem to get it to work: <a href="https://stackoverflow.com/questions/55458997/how-to-connect-to-a-private-ip-from-kubernetes-pod">How to connect to a private IP from Kubernetes Pod</a></p>
<p>Still can't access the private ip (through my host's vpn).</p>
| guice | <p>There are a few ways you could achieve this.</p>
<p>If you only want to expose a few services into minikube from the VPN, then you could exploit SSH's reverse tunnelling, as described in this article; <a href="https://medium.com/tarkalabs/proxying-services-into-minikube-8355db0065fd" rel="nofollow noreferrer">Proxying services into minikube</a>. This would present the services as ports on the minikube VM, so acting like a nodePort essentially, and then SSH would tunnel these out and the host would route them through the VPN for you.</p>
<p>However if you genuinely need access to the entire network behind the VPN, then you will need to use a different approach. The following assumes your VPN is configured as a split tunnel, that it's using NAT, and isn't using conflicting IP ranges.</p>
<p>The easiest option would be to run the VPN client inside minikube, thus providing first class access to the VPN and network, and not needing any routing to be set up. The other option is to set up the routing yourself in order to reach the VPN on the host computer. This would mean ensuring the following are covered:</p>
<ol>
<li>host route for the pod network; <code>sudo ip route add ${REPLACE_WITH_POD_NETWORK} via $(minikube ip)</code> e.g. in my case this was <code>sudo ip route add 10.0.2.0/24 via 192.168.99.119</code></li>
<li>ping from host to pod network address (you'll have to look this up with kubectl, e.g. <code>kubectl get pod -n kube-system kube-apiserver-minikube -o yaml</code>)</li>
</ol>
<p>This should work because the networking/routing in the pod/service/kubelet is handled by the default route, which covers everything. Then when the traffic hits the host, the VPN and any networks it has exposed will have corresponding routes, the host will know to route it to the VPN, and NAT it to take care of the return path. When traffic returns it will hit your host because of the NAT'ing, it will lookup the route table, see the entry you added earlier, and forward the traffic to minikube, and then to the pod/service, voila!</p>
<p>Hope this helps.</p>
| cewood |
<p>I've upgraded from Flux V1 to V2. It all went fairly smoothly, but I can't seem to get the <code>ImageUpdateAutomation</code> to work. Flux knows I have images to update, but it doesn't change the container image in the <code>deployment.yaml</code> manifest and commit the changes to Github. I have no errors in my logs, so I'm at a bit of a loss as to what to do next.</p>
<p>I have an file structure that looks something like this:</p>
<pre><code>├── README.md
├── staging
│ ├── api
│ │ ├── deployment.yaml
│ │ ├── automation.yaml
│ │ └── service.yaml
│ ├── app
│ │ ├── deployment.yaml
│ │ ├── automation.yaml
│ │ └── service.yaml
│ ├── flux-system
│ │ ├── gotk-components.yaml
│ │ ├── gotk-sync.yaml
│ │ └── kustomization.yaml
│ ├── image_update_automation.yaml
</code></pre>
<p>My <code>staging/api/automation.yaml</code> is pretty straightforward:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
name: api
namespace: flux-system
spec:
image: xxx/api
interval: 1m0s
secretRef:
name: dockerhub
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
name: api
namespace: flux-system
spec:
imageRepositoryRef:
name: api
policy:
semver:
range: ">=1.0.0"
</code></pre>
<p>My <code>staging/image_update_automation.yaml</code> looks something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
name: flux-system
namespace: flux-system
spec:
git:
checkout:
ref:
branch: master
commit:
author:
email: [email protected]
name: fluxcdbot
messageTemplate: '{{range .Updated.Images}}{{println .}}{{end}}'
push:
branch: master
interval: 1m0s
sourceRef:
kind: GitRepository
name: flux-system
update:
path: ./staging
strategy: Setters
</code></pre>
<p>Everything seems to be ok here:</p>
<pre class="lang-sh prettyprint-override"><code>❯ flux get image repository
NAME READY MESSAGE LAST SCAN SUSPENDED
api True successful scan, found 23 tags 2021-07-28T17:11:02-06:00 False
app True successful scan, found 18 tags 2021-07-28T17:11:02-06:00 False
❯ flux get image policy
NAME READY MESSAGE LATEST IMAGE
api True Latest image tag for 'xxx/api' resolved to: 1.0.1 xxx/api:1.0.1
app True Latest image tag for 'xxx/app' resolved to: 3.2.1 xxx/app:3.2.1
</code></pre>
<p>As you can see from the policy output the <code>LATEST IMAGE</code> api is 1.0.1, however when I view the current version of my app and api they have not been updated.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get deployment api -n xxx -o json | jq '.spec.template.spec.containers[0].image'
"xxx/api:0.1.5"
</code></pre>
<p>Any advice on this would be much appreciated.</p>
| jwerre | <p>My issue was that I didn't add the comment after my image declaration in my deployment yaml. <a href="https://fluxcd.io/docs/guides/image-update/#configure-image-updates" rel="nofollow noreferrer">More details</a>. Honestly, I'm surprised this is not <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/" rel="nofollow noreferrer">Annotation</a> instead of a comment.</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
containers:
- image: docker.io/xxx/api:0.1.5 # {"$imagepolicy": "flux-system:api"}
</code></pre>
| jwerre |
<p>A default Google Kubernetes Engine (GKE) cluster </p>
<pre><code>gcloud container clusters create [CLUSTER_NAME] \
--zone [COMPUTE_ZONE]
</code></pre>
<p>starts with 3 nodes. What's the idea behind that? Shouldn't 2 nodes in the same zone be sufficient for high availability?</p>
| stefan.at.kotlin | <p>Kubernetes uses <a href="https://github.com/etcd-io/etcd" rel="nofollow noreferrer">etcd</a> for state. Etcd uses <a href="https://raft.github.io/" rel="nofollow noreferrer">Raft</a> for consensus to achieve high availability properties.</p>
<p>When using a consensus protocol like Raft, you need a <em>majority</em> in voting. With 3 nodes, you need 2 of the 3 nodes to respond for availability. With 2 nodes, you cannot get a majority with only 1 of the 2 nodes, so both nodes would need to be available.</p>
| Jonas |
<p>I have a problem. In my kubernetes cluster, I am trying to run my Rails application. I got the image loaded, but now I want to write a custom command. The default command in my Dockerfile is:</p>
<pre><code>CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
</code></pre>
<p>But I want to also run <code>rails assets:precompile</code> at startup for production. I tried those commands using this config:</p>
<pre><code>command: ["bundle", "exec", "rails", "assets:precompile", "&&", "bundle", "exec", "rails", "server"]
</code></pre>
<p>But after the first command has been executed, I get the error:</p>
<pre><code>rails aborted!
Don't know how to build task '&&' (See the list of available tasks with `rails --tasks`)
</code></pre>
<p>I also tried the following with <code>args</code>:</p>
<pre><code>command: ["/bin/sh -c"]
args:
- bundle exec rails assets:precompile;
bundle exec rails server;
</code></pre>
<p>But that results in a very long error which basicly says that the format of args is incorrect. Can someone explain to me how I can run both commands at startup?</p>
| A. Vreeswijk | <p>Use <code>entrypoint</code> for that:</p>
<pre><code>services:
app:
build: .
entrypoint: ./entrypoint.sh
command: bundle exec rails s -p 3000 -b 0
ports:
- 3000:3000
</code></pre>
<pre class="lang-bash prettyprint-override"><code># entrypoint.sh
#!/bin/bash
set -e
# is this still an issue?
rm -f /myapp/tmp/pids/server.pid
# do what you have to do
bin/rails assets:precompile
# pass the torch to `command:`
exec "$@"
</code></pre>
<hr />
<p>Also the lazy way:</p>
<pre><code> command: bash -c "bin/rails assets:precompile && bin/rails s -p 3000 -b 0"
</code></pre>
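<p>In the Kubernetes manifest from the question, the same "lazy way" would look roughly like this (a sketch; it assumes the image has bash and that these are the server flags you want):</p>
<pre class="lang-yaml prettyprint-override"><code>command: ["bash", "-c"]
args: ["bundle exec rails assets:precompile && bundle exec rails server -b 0.0.0.0"]
</code></pre>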
<hr />
<p>You can also use <code>ENTRYPOINT</code> in <em>Dockerfile</em> and build it into the image:</p>
<pre><code>COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
</code></pre>
| Alex |
<p>I'm writing an application to be deployed to Kubernetes where I want each pod to hold
a TCP connection to all the other pods in the replica set so any container can notify the other
containers in some use case. If a pod is added, a new connection should be created between it and all
the other pods in the replica set.</p>
<p>From a given pod, how do I open a TCP connection to all the other pods? I.e., how do I discover all
their IP addresses?</p>
<p>I have a ClusterIP Service DNS name that points to these pods by selector, and that's the mechanism
I've thought of trying to use so far. E.g., open a TCP connection to that DNS name repeatedly in a
loop, requesting a container ID over the connection each time, pausing when I've gotten say at least
three connections for each container ID. Then repeating that every minute or so to get new pods that
have been added. A keepalive could be used to remove connections to pods that have gone away.</p>
<p>Is there a better way to do it?</p>
<p>If it matters, I'm writing this application in Go.</p>
| jrefior | <blockquote>
<p>E.g., open a TCP connection to that DNS name repeatedly in a loop, requesting a container ID over the connection each time, pausing when I've gotten say at least three connections for each container ID. Then repeating that every minute or so to get new pods that have been added.</p>
</blockquote>
<p>This does not sound like a solid solution.</p>
<p>Kubernetes does <strong>not</strong> provide the functionality that you are looking for out-of-the-box.</p>
<p><strong>Detect pods</strong></p>
<p>A possible solution is to query the Kubernetes API Server, for the pods matching your <em>selector</em> (labels). Use <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> and look at e.g. <a href="https://github.com/kubernetes/client-go/blob/master/examples/in-cluster-client-configuration/main.go#L53" rel="nofollow noreferrer">example on how to list Pods</a>. You may want to <em>watch</em> for new pods, or query regularly.</p>
<p><strong>Connect to Pods</strong></p>
<p>When you know the <em>name</em> of a pod that you want to connect to, you can use <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods" rel="nofollow noreferrer">DNS to connect to pods</a>.</p>
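<p>If you prefer DNS over querying the API, one common Kubernetes pattern (not something prescribed above, just an option) is a <em>headless</em> Service, which returns one A record per ready pod:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-peers          # hypothetical name
spec:
  clusterIP: None            # headless: DNS returns the pod IPs directly
  selector:
    app: myapp               # must match your pod labels
  ports:
    - port: 8080
</code></pre>
<p>A DNS lookup of <code>myapp-peers.<namespace>.svc.cluster.local</code> then returns the IPs of all matching pods, which you can dial directly.</p>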
| Jonas |
<p>I'm currently in the process of setting up a Kubernetes AKS cluster, mostly for learning purposes for myself and as a proof of concept. My goal is like this:</p>
<ul>
<li>CI / CD with Azure DevOps
<ul>
<li>Every commit to the master-branch triggers an automated release to Kubernetes in the namespace "production"</li>
<li>Every commit to a non-master-branch triggers an automated release to Kubernetes in the namespace "development"</li>
</ul>
</li>
<li>Kubernetes management
<ul>
<li>Resources are created via Helm</li>
<li>Azure DevOps task is used to deploy these resources in combination with variable-groups per environment</li>
</ul>
</li>
<li>Application
<ul>
<li>Simple MVC app</li>
<li>Real database outside of Kubernetes, Azure SQL Server</li>
</ul>
</li>
</ul>
<p>Now, everything works smoothly, but I really struggle with the logical concept of "namespaces as environments". I know, this isn't a great idea in the first place, as for example, the development-environment could use up all the resources. But as I'm hosting stuff myself, I didn't want to create multiple clusters and I think for starters, having namespaces as environments is reasonable.
My problem comes with the routing: my pretty naive approach is explained here <a href="https://stackoverflow.com/questions/51878195/kubernetes-cross-namespace-ingress-network/51899301#51899301.">Kubernetes Cross Namespace Ingress Network</a>:</p>
<ul>
<li>One ingress in the default namespace</li>
<li>Per environment, one ExternalName-Service in the default namespace with a suffix, targetting the CNAME of the service in the other namespaces</li>
<li>Per environment namespace, one ClusterIP service</li>
</ul>
<p>The ingress looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code> rules:
- host: 09ab799fd5674c4594a5.centralus.aksapp.io
http:
paths:
- path: /dev
pathType: Prefix
backend:
service:
name: {{include "helmakskubernetespoc-chart.fullname" .}}external{{ .Values.namespaces.development }}
port:
number: {{ .Values.externalService.port }}
- path: /
pathType: Prefix
backend:
service:
name: {{ include "helmakskubernetespoc-chart.fullname" .}}external{{ .Values.namespaces.production }}
port:
number: {{ .Values.externalService.port }}
</code></pre>
<p>My goal is that requests without a subroute target production, while the /dev subroute targets development.</p>
<p>This works as long as there is no subroute, i.e. for production. But as I've learned here <a href="https://stackoverflow.com/questions/52021925/kubernetes-ingress-non-root-path-404-not-found">Kubernetes Ingress non-root path 404 Not Found</a>, the ingress takes the route and passes it 1:1 to the internal service. As my little MVC app doesn't know "/dev", I get a 404.
So far so good, I can change this via annotation:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>Now, every call to /dev does in fact route to the correct app, BUT every click on a button routes back to production, as the initially added /dev is lost. Also, trying this one <a href="https://stackoverflow.com/questions/72775105/ingress-dont-load-the-website-assets-css-files-and-javascript-files/72776490">Ingress don't load the website assets (Css files and Javascript files)</a>, I have the same problem.</p>
<p>I'm wondering if I'm making a conceptual mistake: Ingress rules, as far as I know, only work with subroutes, so I can't, for example, use a port. Is ingress possibly the wrong way to split the environments? Since the MVC app either never receives the /dev prefix or doesn't know what to do with it, I don't see an elegant solution to this problem.</p>
| Matthias Müller | <p>You probably wanna use different hosts per environment. Something like:</p>
<pre><code>rules:
- host: dev.09ab799fd5674c4594a5.centralus.aksapp.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{include "helmakskubernetespoc-chart.fullname" .}}external{{ .Values.namespaces.development }}
port:
number: {{ .Values.externalService.port }}
- host: 09ab799fd5674c4594a5.centralus.aksapp.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ include "helmakskubernetespoc-chart.fullname" .}}external{{ .Values.namespaces.production }}
port:
number: {{ .Values.externalService.port }}
</code></pre>
| Max |
<p>How to make load balancing for GRPC services on GKE on L7 (with Ingress over HTTP/2 + TLS)?</p>
<p>I know that I have the option to use L4 (TCP layer) to configure Service with "LoadBalancer" type. But I want to know if I can use Ingress + L7 load balancing over HTTP/2+TLS.</p>
<p>Also I see "HTTP/2 to backends is not supported for GKE." (on <a href="https://cloud.google.com/load-balancing/docs/backend-service#HTTP2-limitations" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/backend-service#HTTP2-limitations</a>). But I don't know whether that is still accurate.</p>
| Harlam | <p>GKE Ingress can now <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2" rel="nofollow noreferrer">load balance with HTTP/2</a>, when you use <strong>https</strong>.</p>
<p>To get HTTP/2 between the load balancer (ingress controller) and your pods, your service needs an extra annotation:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/app-protocols: '{"my-port":"HTTP2"}'
</code></pre>
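<p>The annotation key refers to the <em>name</em> of the Service port, so a slightly fuller sketch would be (the port numbers here are assumptions):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  ports:
  - name: my-port        # must match the key in the annotation above
    port: 443
    targetPort: 8443     # the TLS port of your container
</code></pre>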
<p>In addition, your pods must use TLS and have <a href="https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation" rel="nofollow noreferrer">ALPN</a> h2 configured. This can be done e.g. with an HAProxy as a sidecar with <a href="https://www.eclipse.org/jetty/documentation/current/http2-configuring-haproxy.html" rel="nofollow noreferrer">http2 configuration</a>. I have successfully used this setup on GKE.</p>
| Jonas |
<p>When I run this command</p>
<pre><code>kubectl get deployments
</code></pre>
<p>on my Linux Ubuntu 18 machine, I get different output than expected (according to the documentation).</p>
<p>Expected:
<a href="https://i.stack.imgur.com/I9SIh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I9SIh.png" alt="enter image description here"></a></p>
<p>Actual:</p>
<p><a href="https://i.stack.imgur.com/F50oc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F50oc.png" alt="enter image description here"></a></p>
<p>Of course, I am not talking about values, I am talking about names of labels. </p>
<p>[EDIT]</p>
<p>My k8s version:
<a href="https://i.stack.imgur.com/VTwaj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VTwaj.png" alt="enter image description here"></a></p>
| Lucas | <p>This is just an old output format. The newer output you're getting below contains all the same information; the "READY" field is a combination of the old "DESIRED" and "CURRENT".</p>
<p>It's showing as <code>4/5</code> in your output to indicate 4 pods ready/current, and 5 pods desired.</p>
<p>Hope this helps.</p>
| cewood |
<p>I'm kind of new to Docker, and I'm running into an issue I can't find a straightforward explanation for. I have a pretty simple Dockerfile that I'm building into an image, but when I deploy the image to Kubernetes as a pod, the results aren't what I expect.</p>
<pre><code>FROM ubuntu:16.04
RUN mkdir workspace
WORKDIR /workspace
COPY . /workspace
CMD ["ls"]
</code></pre>
<p>When I check the logs for the deployment, there are no files listed in the /workspace folder, even though the folder itself exists. However, if I change my <code>COPY</code>'s destination to a default linux folder like <code>/usr</code>, the files are there as I'd expect. My suspicion is that this has something to do with storage persistence, but since I'm copying the files into the folder when I build my image and the folder persists in the pod, I'm at a loss for why this happens. Any guidance would be greatly appreciated.</p>
| Ethan | <p>I would venture to guess that the <code>ubuntu:...</code> image doesn't have a WORKDIR set to /, and hence your copy command isn't working as expected.</p>
<p>Try changing the run command to be <code>RUN mkdir /workspace</code> and I think you'll see what you expected.</p>
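<p>In other words, the Dockerfile from the question with only the path made absolute:</p>
<pre><code>FROM ubuntu:16.04
RUN mkdir /workspace
WORKDIR /workspace
COPY . /workspace
CMD ["ls"]
</code></pre>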
<p>Hope this helps.</p>
| cewood |
<p>So, what I'm trying to do is use helm to install an application to my kubernetes cluster. Let's say the image tag is 1.0.0 in the chart.</p>
<p>Then, as part of a CI/CD build pipeline, I'd like to update the image tag using kubectl, i.e. <code>kubectl set image deployment/myapp...</code> </p>
<p>The problem is if I subsequently make any change to the helm chart (e.g. number of replicas), and I <code>helm upgrade myapp</code> this will revert the image tag back to 1.0.0.</p>
<p>I've tried passing in the --reuse-values flag to the helm upgrade command but that hasn't helped. </p>
<p>Anyone have any ideas? Do I need to use helm to update the image tag? I'm trying to avoid this, as the chart is not available at this stage in the pipeline.</p>
| Darragh | <p>When using CI/CD to build and deploy, you should use a <em>single source-of-truth</em>, that means a file versioned in e.g. Git and you do <em>all changes</em> in that file. So if you use Helm charts, they should be stored in e.g. Git and all changes (e.g. new image) should be done in your Git repository.</p>
<p>You could have a <em>build pipeline</em> that in the end <em>commit</em> the new image to a Kubernetes config repository. Then a <em>deployment pipeline</em> is triggered that use Helm or Kustomize to apply your changes and possibly execute tests.</p>
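<p>A hypothetical sketch of such a build pipeline step (the file path, tag and use of <code>sed</code> are all illustrative assumptions):</p>
<pre class="lang-sh prettyprint-override"><code># bump the image tag in the chart's values file and commit it;
# the deployment pipeline then runs `helm upgrade` from the updated repo
sed -i 's|^  tag:.*|  tag: "1.0.1"|' charts/myapp/values.yaml
git commit -am "Deploy myapp 1.0.1"
git push
</code></pre>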
| Jonas |
<p>I have multiple pods of the same app deployed using Kubernetes. The app manages multiple 'Project' objects. When Marry is working on 'Project 1' on pod-01, Tom logs on pod-02. Here is the requirement, if Tom tries to open 'Project 1' on pod-02, we need to route him to pod-01 where 'Project 1' is already open by Marry. How would I do that? </p>
<p>Can I store some unique identifier of pod-01 in the 'Project 1' object? So I can use it to route Tom to pod-01.</p>
<p>Is this technically feasible?</p>
| Shawn Pan | <p>What you are describing is <strong>stateful</strong> workload, where each instance of your application contain <em>state</em>. </p>
<p><em>Normal</em> workload in Kubernetes is <strong>stateless</strong> and deployed with <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a>. However, Kubernetes now has <em>some</em> support for <strong>stateful</strong> workloads by using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></p>
<p>It may be possible to implement your use case, but it depends. Your users will not each get their own instance, if that is what you need. I would recommend architecting your service as a <strong>stateless</strong> workload and storing all state in a database (possibly deployed as a StatefulSet), since stateless workloads are much easier to handle.</p>
| Jonas |
<p>I am running Traefik on Kubernetes and I have created an Ingress with the following configuration:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: whitelist-ingress
annotations:
kubernetes.io/ingress.class: traefik
traefik.frontend.rule.type: PathPrefix
traefik.ingress.kubernetes.io/whitelist-source-range: "10.10.10.10/32, 10.10.2.10/23"
ingress.kubernetes.io/whitelist-x-forwarded-for: "true"
traefik.ingress.kubernetes.io/preserve-host: "true"
spec:
rules:
- host:
http:
paths:
- path: /endpoint
backend:
serviceName: endpoint-service
servicePort: endpoint-port
---
</code></pre>
<p>When I do a POST on the above endpoint, Traefik logs that the incoming IP is 172.16.0.1 and so my whitelist is not triggered.
Doing an ifconfig I see that IP belongs to Docker </p>
<pre><code>docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.26.0.1 netmask 255.255.0.0 broadcast 172.26.255.255
</code></pre>
<p>How can I keep the original IP instead of the docker one? </p>
<p>EDIT</p>
<p>Traefik is exposed as LoadBalancer and the port is 443 over SSL</p>
<p>This is its yml configuration</p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: traefik
annotations: {}
# service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
selector:
k8s-app: traefik-ingress
ports:
- protocol: TCP
port: 80
targetPort: 80
name: http
- protocol: TCP
port: 443
targetPort: 443
name: https
type: LoadBalancer
externalTrafficPolicy: Local
externalIPs:
- <machine-ip>
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: default
labels:
k8s-app: traefik-ingress
spec:
replicas: 2
selector:
matchLabels:
k8s-app: traefik-ingress
template:
metadata:
labels:
k8s-app: traefik-ingress
name: traefik-ingress
spec:
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 35
volumes:
- name: proxy-certs
secret:
secretName: proxy-certs
- name: traefik-configmap
configMap:
name: traefik-configmap
containers:
- image: traefik:1.7.6
name: traefik-ingress
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 200m
memory: 900Mi
requests:
cpu: 25m
memory: 512Mi
livenessProbe:
failureThreshold: 2
httpGet:
path: /ping
port: 80
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
readinessProbe:
failureThreshold: 2
httpGet:
path: /ping
port: 80
scheme: HTTP
periodSeconds: 5
volumeMounts:
- mountPath: "/ssl"
name: "proxy-certs"
- mountPath: "/config"
name: "traefik-configmap"
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: dashboard
containerPort: 8080
args:
- --logLevel=DEBUG
- --configfile=/config/traefik.toml
---
</code></pre>
<p>As you can see here is the output of kubectl get svc </p>
<pre><code>traefik LoadBalancer 10.100.116.42 <machine-ip> 80:30222/TCP,443:31578/TCP <days-up>
</code></pre>
<p>Note that Traefik is running in a single node kubernetes cluster (master/worker on the same node). </p>
| Justin | <p>In LoadBalancer service type doc <a href="https://kubernetes.io/docs/concepts/services-networking/#ssl-support-on-aws" rel="nofollow noreferrer">ssl support on aws</a> you can read the following statement:</p>
<blockquote>
<p>HTTP and HTTPS will select layer 7 proxying: the ELB will terminate
the connection with the user, parse headers and inject the
X-Forwarded-For header with the user’s IP address (pods will only see
the IP address of the ELB at the other end of its connection) when
forwarding requests.</p>
</blockquote>
<p>So if you add the following annotation to your Traefik service:</p>
<pre><code>service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
</code></pre>
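<p>That annotation goes into the <code>metadata.annotations</code> of the Traefik <code>Service</code> from the question, e.g.:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
  name: traefik
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
</code></pre>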
<p>It should work together with the <code>ingress.kubernetes.io/whitelist-x-forwarded-for: "true"</code> annotation already present in your ingress config, since the forwarded header is then added by the AWS load balancer.</p>
<p>Disclaimer: I have not tested that solution.</p>
<p>Regards.</p>
| mdaguete |
<p>I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.</p>
<p>I found <a href="https://pypi.org/project/pyhelm" rel="nofollow noreferrer">pyhelm</a> but it supports only Helm 2. I looked on npm, but found nothing there. I wrote a bash script, but if I try to use its output I just get a string, so it's not really useful.</p>
| Adam | <blockquote>
<p>I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.</p>
</blockquote>
<p><a href="https://helm.sh/blog/helm-3-released/" rel="nofollow noreferrer">Helm 3 is different</a> from previous versions in that it is a <strong>client only</strong> tool, similar to e.g. <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">Kustomize</a>. This means that <em>helm charts</em> only exist on the client (and in chart repositories) and are <em>transformed</em> into <em>kubernetes manifests</em> during deployment. So only <em>Kubernetes objects</em> exist in the cluster.</p>
<p>The <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#deployment-v1-apps" rel="nofollow noreferrer">Kubernetes API</a> is a REST API, so you can access and get Kubernetes objects using an http client. Kubernetes object manifests are available in JSON and Yaml formats.</p>
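<p>For example, you can let <code>kubectl</code> handle authentication and query the API over plain HTTP through a local proxy (the namespace and resource here are just an illustration):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl proxy --port=8001 &
# list Deployments in the default namespace as JSON
curl http://localhost:8001/apis/apps/v1/namespaces/default/deployments
</code></pre>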
| Jonas |
<p>So I did update the manifest and replaced <strong>apiVersion: extensions/v1beta1</strong> with <strong>apiVersion: apps/v1</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: secretmanager
namespace: kube-system
spec:
selector:
matchLabels:
app: secretmanager
template:
metadata:
labels:
app: secretmanager
spec:
...
</code></pre>
<p>I then applied the change</p>
<pre><code>k apply -f deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/secretmanager configured
</code></pre>
<p>I also tried</p>
<pre><code>k replace --force -f deployment.yaml
</code></pre>
<p>That recreated the POD (downtime :( ) but still if you try to output the yaml config of the deployment I see the old value</p>
<pre><code>k get deployments -n kube-system secretmanager -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment",
"metadata":{"annotations":{},"name":"secretmanager","namespace":"kube-system"}....}
creationTimestamp: "2020-08-21T21:43:21Z"
generation: 2
name: secretmanager
namespace: kube-system
resourceVersion: "99352965"
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/secretmanager
uid: 3d49aeb5-08a0-47c8-aac8-78da98d4c342
spec:
</code></pre>
<p>So I still see this <strong>apiVersion: extensions/v1beta1</strong></p>
<h3>What am I doing wrong?</h3>
<p>I am preparing eks kubernetes v1.15 to be migrated over to v1.16</p>
| DmitrySemenov | <p>The <code>Deployment</code> <a href="https://github.com/kubernetes/kubernetes/issues/58131#issuecomment-356823588" rel="nofollow noreferrer">exists in multiple apiGroups</a>, so it is ambiguous. Try to specify e.g. <code>apps/v1</code> with:</p>
<pre><code>kubectl get deployments.v1.apps
</code></pre>
<p>and you should see your <code>Deployment</code> but with <code>apps/v1</code> apiGroup.</p>
| Jonas |
<p>Let's say I have a classic application with</p>
<ul>
<li>Web</li>
<li>Backend</li>
<li>DB</li>
</ul>
<p>If I understand correctly, I will create a deployment for each of them. What if I want to deploy all of them in one step? How should I group the deployments? I have read something about labels and services, but I'm not sure which concept is the right one. There are two ports that need to face outside (http and debug). Just for clarity, I'm skipping any DB initialization and readiness probes.</p>
<p>How can I deploy everything at once? </p>
| Zveratko | <p>You need multiple Kubernetes objects for this and there is multiple ways to solve this.</p>
<p><strong>Web</strong> - it depends what this is. Is it just <strong>static</strong> JavaScript files? In that case, it is easiest to deploy it with a CDN solution, on any cloud provider, an on-prem solution or possible using a Kubernetes based product like e.g. <a href="https://github.com/ilhaan/kubeCDN" rel="nofollow noreferrer">KubeCDN</a>.</p>
<p><strong>Backend</strong> - When using Kubernetes, we design a backend to be <strong>stateless</strong> following the <a href="https://12factor.net/" rel="nofollow noreferrer">Twelve Factor</a> principles. This kind of application is deployed on Kubernetes using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> that will rollout one or more instances of your application, possible elastically scaled depending on load. In front of all your instances, you need a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> and you want to expose this service with a loadbalancer/proxy using an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>.</p>
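<p>As a minimal sketch of what that means for the backend (names, image and ports are placeholders, not from your setup):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: myrepo/backend:1.0.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
</code></pre>
<p>An Ingress would then route external traffic to the <code>backend</code> Service.</p>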
<p><strong>DB</strong> - this is a <strong>stateful</strong> application, if deployed on Kubernetes, this kind of application is deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> and you also need to think about how you handle the storage with this, it may possibly be handled with Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volume</a>.</p>
<p>As you see, many Kubernetes objects are needed for this setup. If you use plain declarative Kubernetes yaml files, you can have them in a directory, e.g. <code>my-application/</code>, and deploy all files with a single command:</p>
<pre><code>kubectl apply -f my-application/
</code></pre>
<p>However, there is more alternatives to this, e.g. using <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a></p>
| Jonas |
<p>I want to access the code related to cgroups in the Kubernetes GitHub repository. Where is the exact place?</p>
| HamiBU | <p>The cgroups code is in the container engine selected, not in k8s. K8s take care of running containers and talks with the runtime using CRI. CRI is an API to let any container engine interact with kubelet. Kubelet is a piece of kubernetes that sits on every node and make sure that all pods are running as expected.</p>
<p>Take a look at <a href="https://github.com/moby/moby/tree/master/libcontainerd" rel="nofollow noreferrer">libcontainerd</a> for docker as container engine and <a href="https://github.com/containerd/cgroups" rel="nofollow noreferrer">cgroups package</a> for containerd</p>
<p>Regards.</p>
| mdaguete |
<p>I'm just trying out a very simple example of NodePort, using minikube on an Apple M1 machine. I basically create a single-node k8s cluster via minikube, then create a 1-pod deployment, and NodePort service. The NodePort URL keeps loading for me. Below are the commands I use.</p>
<pre><code>$ minikube start
😄 minikube v1.25.2 on Darwin 12.4 (arm64)
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.23.3 preload ...
> preloaded-images-k8s-v17-v1...: 419.07 MiB / 419.07 MiB 100.00% 5.49 MiB
🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
🐳 Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
▪ kubelet.housekeeping-interval=5m
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ kubectl create deployment nginx-deployment --image=nginxdemos/hello
deployment.apps/nginx-deployment created
$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed
$ minikube service nginx-deployment
🏃 Starting tunnel for service nginx-deployment.
🎉 Opening service default/nginx-deployment in default browser...
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
=> This opens the URL in the browser (e.g. http://192.168.49.2:32739/), but it keeps loading forever
</code></pre>
<p>When I SSH into the node, I can curl the pod IP without problems.</p>
<pre><code>$ minikube ssh
Last login: Thu May 26 10:09:35 2022 from 192.168.49.1
# This is the IP of the running pod
docker@minikube:~$ curl -sI 172.17.0.3 | grep HTTP
HTTP/1.1 200 OK
</code></pre>
<p>I also see that the service is binded to the pod:</p>
<pre><code>$ kubectl describe service nginx-deployment
Name: nginx-deployment
Namespace: default
Labels: app=nginx-deployment
Annotations: <none>
Selector: app=nginx-deployment
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.109.174.147
IPs: 10.109.174.147
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32739/TCP
Endpoints: 172.17.0.3:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>What did I do wrong?</p>
| Kien Nguyen | <p>I solved this problem by switching to use Docker Desktop kubernetes. Seems like minikube doesn't play well with Apple M1.</p>
| Kien Nguyen |
<p>I have an Argo workflow that has two steps, the first runs on Linux and the second runs on Windows</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: my-workflow-v1.13
spec:
entrypoint: process
volumeClaimTemplates:
- metadata:
name: workdir
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
arguments:
parameters:
- name: jobId
value: 0
templates:
- name: process
steps:
- - name: prepare
template: prepare
- - name: win-step
template: win-step
- name: win-step
nodeSelector:
kubernetes.io/os: windows
container:
image: mcr.microsoft.com/windows/nanoserver:1809
command: ["cmd", "/c"]
args: ["dir", "C:\\workdir\\source"]
volumeMounts:
- name: workdir
mountPath: /workdir
- name: prepare
nodeSelector:
kubernetes.io/os: linux
inputs:
artifacts:
- name: src
path: /opt/workdir/source.zip
s3:
endpoint: minio:9000
insecure: true
bucket: "{{workflow.parameters.jobId}}"
key: "source.zip"
accessKeySecret:
name: my-minio-cred
key: accesskey
secretKeySecret:
name: my-minio-cred
key: secretkey
script:
image: garthk/unzip:latest
imagePullPolicy: IfNotPresent
command: [sh]
source: |
unzip /opt/workdir/source.zip -d /opt/workdir/source
volumeMounts:
- name: workdir
mountPath: /opt/workdir
</code></pre>
<p>both steps share a volume.</p>
<p>To achieve that in Azure Kubernetes Service, I had to create two node pools, one for Linux nodes and another for Windows nodes</p>
<p><a href="https://i.stack.imgur.com/fPA1D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fPA1D.png" alt="enter image description here" /></a></p>
<p>The problem is, when I queue the workflow, sometimes it completes, and sometimes, the <code>win-step</code> (the step that runs in the windows container), hangs/fails and shows this message</p>
<p><a href="https://i.stack.imgur.com/RD5ww.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RD5ww.png" alt="enter image description here" /></a></p>
<p><code>1 node(s) had volume node affinity conflict</code></p>
<p>I've read that this could happen because the volume gets scheduled on a specific zone and the windows container (since it's in a different pool) gets scheduled in a different zone that doesn't have access to that volume, but I couldn't find a solution for that.</p>
<p>Please help.</p>
| areller | <blockquote>
<p>the first runs on Linux and the second runs on Windows</p>
</blockquote>
<p>I doubt that you can mount the same volume on both a Linux node (typically an ext4 file system) and a Windows node; <a href="https://learn.microsoft.com/en-us/azure/aks/windows-faq#what-kind-of-disks-are-supported-for-windows" rel="nofollow noreferrer">Azure Windows containers use the NTFS</a> file system.</p>
<p>So the volume that you try to mount in the second step is located on a node pool that does not match your <code>nodeSelector</code>.</p>
| Jonas |
<p>I am working with a minecraft server image to create a cluster of statefulsets that all should have a random external port. I was told using a nodeport would do the job but not exactly how that is done. I was looking at nodeport but it looks like you would need to specify that exact port name. </p>
<p>I need each replica in a cluster to either have a random external IP or a random external port on the same IP. Is that possible, or do I need to create a service for every single port/IP?</p>
| mjwrazor | <p>You need to create a <code>NodePort</code> service for each instance of minecraft server.</p>
<p>A <code>NodePort</code> open a random port up to < 30000 and links it to an internal (set of) based on the selectors.</p>
<p>For instance, let's say there is one instance of the minecraft server with the following resource:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: minecraft-instance1
labels:
instance: minecraft-1
spec:
...
</code></pre>
<p>This is the <code>nodePort</code> description to reach it on port 30007 (<strong>on every node of the cluster</strong>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-minecraft-1
spec:
type: NodePort
selector:
instance: minecraft-1
ports:
- port: 25565
targetPort: 25565
nodePort: 30007
</code></pre>
| Kartoch |
<p>I have a question regarding pod scheduling for runner pods in k8s. As far as I can see, during different jobs it creates pods like runner-xxxx-project-xxxx-concurrent, and these pods are created dynamically. How can I configure scheduling (nodeSelector) for these pods only (runner-xxxx-project-xxxx-concurrent), and not for the runner-gitlab-runner deployment?</p>
| Andrew Striletskyi | <p>First, depending on the way you have installed your master nodes, they usually have a taint <code>node-role.kubernetes.io/master:NoSchedule</code> to avoid scheduling of pods.</p>
<pre><code>$ kubectl describe nodes node1
Name: node1
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=node1
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: node-role.kubernetes.io/master:NoSchedule
</code></pre>
<p>So if your kubernetes install is conformant, there is no need to use a <code>nodeSelector</code> to pin the node where the pods are going to be scheduled (it is usually a bad practice).</p>
<p>The first solution is to taint your master node so that nothing is scheduled on it, if that was not done during install:</p>
<pre><code>kubectl taint nodes node1 node-role.kubernetes.io/master=:NoSchedule
</code></pre>
<p>Second solution: set label to nodes to use <code>nodeSelector</code></p>
<pre><code>kubectl label nodes node1 gitlab-runner=true
</code></pre>
<p>And use <code>nodeSelector</code> to indicate to scheduler you want node with a specific label:</p>
<pre><code>spec:
containers:
- [...]
nodeSelector:
gitlab-runner: "true"
</code></pre>
<p>As mentioned by <a href="https://stackoverflow.com/users/2653911/nicolas-pepinster">@Nicolas-pepinster</a>, you can set the node selector in the <code>[runners.kubernetes]</code> section of your gitlab-runner configuration (<a href="https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnerskubernetes-section" rel="nofollow noreferrer">see doc</a>).</p>
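<p>For example, the runner's <code>config.toml</code> could contain something along these lines (a sketch based on the linked documentation; check the exact syntax for your runner version):</p>
<pre><code>[runners.kubernetes]
  namespace = "gitlab"
  [runners.kubernetes.node_selector]
    gitlab-runner = "true"
</code></pre>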
| Kartoch |
<p>I have existing applications built with Apache Camel and ActiveMQ. As part of the migration to Kubernetes, we are moving the same services developed with Apache Camel to Kubernetes. I need to deploy ActiveMQ such that I do not lose the data in case one of the Pods dies.</p>
<p>What I am doing now is running a deployment with the replicas value set to 2. This will start 2 pods, and with a Service in front I can serve any request while at least 1 Pod is up. However, if one Pod dies, I do not want to lose the data. I want to implement something like a shared file system between the Pods. My environment is in AWS, so I can use EBS. Can you suggest how to achieve that?</p>
<p>Below is my deployment and service YAML.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: smp-activemq
spec:
replicas: 1
template:
metadata:
labels:
app: smp-activemq
spec:
containers:
- name: smp-activemq
image: dasdebde/activemq:5.15.9
imagePullPolicy: IfNotPresent
ports:
- containerPort: 61616
resources:
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: smp-activemq
spec:
type: NodePort
selector:
app: smp-activemq
ports:
- nodePort: 32191
port: 61616
targetPort: 61616
</code></pre>
| Debdeep Das | <p>In high-level terms, what you want is a <strong>StatefulSet</strong> instead of a Deployment for your ActiveMQ. You are correct that you want "shared file system" -- in kubernetes this is expressed as a "<strong>Persistent Volume</strong>", which is made available to the pods in your StatefulSet using a "<strong>Volume Mount</strong>".</p>
<p>These are the things you need to look up.</p>
| Andrew McGuinness |
<p>I understand the concepts of a Kubernetes service ClusterIP and Headless service but when would I use one over the other?</p>
| Wunderbread | <p>The common case is to use <code>ClusterIP</code> for services within your cluster, unless you have a specific reason for another kind of <code>Service</code>.</p>
<blockquote>
<p>For headless Services, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the Service has selectors defined</p>
</blockquote>
<p>A specific reason for a <em>headless</em> service may be when you use <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">StatefulSet</a>.</p>
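<p>The difference in the manifest is essentially one field; a minimal sketch of a headless Service used as a StatefulSet's governing service (names are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None        # this is what makes the Service headless
  selector:
    app: myapp
  ports:
    - port: 5432
</code></pre>
<p>With a StatefulSet whose <code>serviceName</code> points at this Service, each pod gets a stable DNS name such as <code>myapp-0.myapp-headless.<namespace>.svc.cluster.local</code>, which is typically what stateful workloads need.</p>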
| Jonas |
<p>When we run the <code>kubectl apply -k github.com/minio/direct-csi</code> command, how does kubectl download and apply the deployment manifest?</p>
<p>How can we download this file locally using the <code>curl</code> or <code>wget</code> command?</p>
<p>Thanks
SR</p>
| sfgroups | <p>You can see all the http request that <code>kubectl</code> does by using a <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging" rel="nofollow noreferrer">verbose log level</a>.</p>
<p>E.g.</p>
<pre><code>kubectl get po --v=7
</code></pre>
<p>Output</p>
<pre><code>$ kubectl get po --v=7
I0822 20:08:27.940422 36846 loader.go:375] Config loaded from file: /Users/Jonas/.kube/config
I0822 20:08:27.958708 36846 round_trippers.go:420] GET https://clusteraddress.com/api/v1/namespaces/default/pods?limit=500
I0822 20:08:27.958736 36846 round_trippers.go:427] Request Headers:
I0822 20:08:27.958742 36846 round_trippers.go:431] Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json
I0822 20:08:27.958747 36846 round_trippers.go:431] User-Agent: kubectl/v1.17.5 (darwin/amd64) kubernetes/e0fccaf
I0822 20:08:28.624188 36846 round_trippers.go:446] Response Status: 200 OK in 665 milliseconds
NAME READY STATUS RESTARTS AGE
nx-67b4f5946c-2z58x 1/1 Running 0 21h
</code></pre>
<blockquote>
<p>How can we download this file locally using the curl or wget command?</p>
</blockquote>
<p>You can do the same with e.g. <code>curl</code>; everything in <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/" rel="nofollow noreferrer">Kubernetes is a REST API</a>, and you need proper authentication from your <code>.kube/config</code> or some other valid authentication.</p>
<blockquote>
<p>what is downloaded from github.com/minio/direct-cs ?</p>
</blockquote>
<p>Instead of applying with kustomize (<code>apply -k</code>), you can just build the kustomization without applying it, using this command:</p>
<pre><code>kubectl kustomize github.com/minio/direct-csi
</code></pre>
<p>And you should see all the manifests from the remote location (derived from <a href="https://github.com/minio/direct-csi/blob/master/kustomization.yaml" rel="nofollow noreferrer">kustomization.yaml</a>) combined into one large manifest.</p>
| Jonas |
<p>So yes, <code>StatefulSet</code> helps preserve the order and name of the pod, but what is it that it does extra (or different) that is advantageous over a regular <code>Deployment</code> with respect to volumes.</p>
<p>I see many examples of master/slave setup for databases as a use case for <code>StatefulSet</code>, but can't that problem be solved with just a <code>Deployment</code> (replicas=1) for the master and a <code>Deployment</code> (replicas=<code><multiple></code>) for the slaves. For volumes just create a <code>PV</code>/<code>PVC</code>.</p>
<p>Can someone please let me know what is the difference with respect to volumes?</p>
| Gurleen Sethi | <blockquote>
<p>So yes, StatefulSet helps preserve the order and name of the pod, but what is it that it does extra (or different) that is advantageous over a regular Deployment with respect to volumes.</p>
</blockquote>
<p>With a <strong>StatefulSet</strong> each Pod get its own PersistentVolumeClaim, but with <strong>Deployment</strong> all Pods use the same PersistentVolumeClaim.</p>
<p><code>StatefulSet</code> has <code>volumeClaimTemplates</code> that creates volumes for you from the template and it adds <code>-<ordinal></code> on the name for PersistentVolumeClaims, so a name with <code>my-pvc</code> will be <code>my-pvc-0</code> and <code>my-pvc-1</code> if the StatefulSet has <code>replicas: 2</code>.</p>
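<p>A minimal sketch of that (names and sizes are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # a (headless) Service governing this StatefulSet
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: my-pvc
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: my-pvc        # each replica gets its own PVC from this template
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
</code></pre>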
<blockquote>
<p>I see many examples of master/slave setup for databases as a use case for StatefulSet, but can't that problem be solved with just a Deployment (replicas=1) for the master and a Deployment (replicas=) for the slaves. For volumes just create a PV/PVC.</p>
</blockquote>
<p>Yes, that would probably work - for testing and development. But it would not be recommended in a production environment. Typically in a cloud (and Kubernetes) environment, you use a Database that use three replicas and that has an instance in every <em>Availability Zone</em> - this is easier to manage with a single <code>StatefulSet</code> with <code>replicas: 3</code> and proper PodAffinity configuration.</p>
| Jonas |
<p>I'm trying to modify the running state of my pod, managed by a deployment controller both from command line via <code>kubectl patch</code> and from the k8s python client API. Neither of them seem to work</p>
<p>From the command line, I tried both strategic merge patch and JSON merge patch, but neither of them works. For e.g. I'm trying to patch the pod conditions to set the <code>status</code> field to <code>False</code></p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n foo-ns patch pod foo-pod-18112 -p '{
"status": {
"conditions": [
{
"type": "PodScheduled",
"status": "False"
},
{
"type": "Ready",
"status": "False"
},
{
"type": "ContainersReady",
"status": "False"
},
{
"type": "Initialized",
"status": "False"
}
],
"phase": "Running"
}
}' --type merge
</code></pre>
<p>From the python API</p>
<pre class="lang-py prettyprint-override"><code># definition of various pod states
ready_true = { "type": "Ready", "status": "True" }
ready_false = { "type": "Ready", "status": "False" }
scheduled_true = { "type": "PodScheduled", "status": "True" }
cont_ready_true = { "type": "ContainersReady", "status": "True" }
cont_ready_false = { "type": "ContainersReady", "status": "False" }
initialized_true = { "type": "Initialized", "status": "True" }
initialized_false = { "type": "Initialized", "status": "False" }
patch = {"status": { "conditions": [ready_false, initialized_false, cont_ready_false, scheduled_true ], "phase" : "Running" }}
p_status = v1.patch_namespaced_pod_status(podname, "default", body=patch)
</code></pre>
<p>While running the above snippet, I don't see any errors and the response <code>p_status</code> has all the pod conditions modified as applied in the <code>patch</code>, but I don't see any events from API server related to this pod status change.</p>
<p>Maybe the deployment controller is rolling back the changes to a working config? I'm looking for ways to patch the pod conditions and test if my custom controller (not related to the question) is able to see those new pod conditions.</p>
| Inian | <p>You should not.</p>
<p>Clients write the <em>desired state</em> in the <code>spec:</code> and controllers write the <code>status:</code>-part.</p>
| Jonas |
<p>Is it possible for an InitContainer to change the environment variables of the application container when running inside the same Pod?</p>
<p>Note that I am looking for a detailed answer that describes the technical reasons why this is or isn't possible. Example: 'Current container technology supports environment variable isolation between containers and Pods cannot bypass that restriction by "grouping" containers in the same "environment variable space"'.</p>
| atomaras | <p>Short answer is No, they can't.</p>
<p>You can try to hack something together using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/" rel="nofollow noreferrer">ShareProcessNamespace</a> and <a href="https://stackoverflow.com/questions/205064/is-there-a-way-to-change-the-environment-variables-of-another-process-in-unix">gdb</a>, but that is surely not the correct solution for the problem you are trying to solve.</p>
| Maciek Sawicki |
<p>What rules should be used to assign affinity to Kubernetes pods for distributing the pods across all Availability Zones?
I have a region with 3 Availability Zones and Nodes in each of these. I want to make sure that each of the 3 pods are spread across all the 3 Availability Zones.</p>
| user4202236 | <p>You should be able to use the label <code>topology.kubernetes.io/zone</code> (for e.g. topologyKey) and add <strong>anti-affinity</strong> rules.</p>
<p>This is part of the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#an-example-of-a-pod-that-uses-pod-affinity" rel="nofollow noreferrer">anti-affinity example</a>:</p>
<pre><code> podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: failure-domain.beta.kubernetes.io/zone
</code></pre>
<p>the result of the example is documented as</p>
<blockquote>
<p>The pod anti-affinity rule says that the pod cannot be scheduled onto a node if that node is in the same zone as a pod with label having key "security" and value "S2".</p>
</blockquote>
<p>Instead of the label <code>security</code> in the example, you can use e.g. <code>app-name: <your-app-name></code> as label and use that in your <code>matchExpression</code>.</p>
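<p>Adapted to spreading one application's replicas across zones, that could look roughly like this (the label key/value are placeholders; this uses the non-deprecated zone label and the <em>required</em> form to make the spread a hard constraint rather than a preference):</p>
<pre class="lang-yaml prettyprint-override"><code>affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app-name
              operator: In
              values:
                - your-app-name
        topologyKey: topology.kubernetes.io/zone
</code></pre>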
| Jonas |
<p>I know I can tag an image from its official name to a private image name and push it</p>
<pre><code>docker pull alpine
docker tag alpine <abc.jrog.io>/alpine
docker push <abc.jrog.io>/alpine
</code></pre>
<p>But this is not the case when I deal with Kubernetes helm charts, especially with sub-charts.</p>
<p>I can set a new image name in <code>values.yaml</code>, but if one chart calls other charts, I can't make this work.</p>
<p>So currently I have to pull all the charts, rename the images, and add the private registry server as a prefix.</p>
<p>Are there any ways I can do that transparently?</p>
<p>For example, if I pull the image <code>alpine</code>, can the computer/server automatically get the image from the private registry without any change to the image name?</p>
<p>So the idea is very close to the git insteadOf feature.</p>
<pre><code>git config --global url."https://".insteadOf git://
</code></pre>
<p>With the above configuration, <code>git://</code> URLs are always rewritten to <code>https://</code>.</p>
<p>I'd like to set up something similar for <code>docker pull</code>: it should not pull from hub.docker.io, but from <code><abc.jrog.io></code></p>
| Bill | <p>What you want, is to configure a custom <em>default registry</em>. Wether this is possible or how to do it depends on what container runtime and nodes that you are using. See e.g. <a href="https://stackoverflow.com/questions/33054369/how-to-change-the-default-docker-registry-from-docker-io-to-my-private-registry">How to change the default docker registry from docker.io to my private registry?
</a></p>
| Jonas |
<p>I'm currently working on my own custom operator that deploys a fully functional Wordpress. I'm required to implement SSL. Now this is where I'm stuck: I'm not really sure how to implement this using Go.</p>
<p>Is there a way of adding already existing CRDs, for example cert-manager, into my operator and then creating Kubernetes resources of these types, using my custom Operator?</p>
| Modx | <p>Yes, every Go controller also has clients generated. See e.g. <a href="https://github.com/jetstack/cert-manager/tree/master/pkg/client" rel="nofollow noreferrer">client-go cert-manager</a>.</p>
<p>If you import the client-go for cert-manager, you can use it to e.g. <code>create</code> resources or <code>watch</code> for changes.</p>
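<p>For illustration only, here is a minimal sketch of the kind of cert-manager <code>Certificate</code> resource such a client would create - the issuer and DNS name are hypothetical placeholders:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wordpress-tls            # created programmatically by your operator
spec:
  secretName: wordpress-tls      # cert-manager stores the key/cert in this Secret
  dnsNames:
  - blog.example.com             # hypothetical domain
  issuerRef:
    name: letsencrypt-prod       # hypothetical ClusterIssuer
    kind: ClusterIssuer
</code></pre>
<p>Your operator can then mount or reference the resulting Secret from the Wordpress Deployment it manages.</p>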
| Jonas |
<p>Assume that I have a pod active and contains only one active container initially.
This container is a nodejs application in typescript and shows user interface when opened in browser.</p>
<p>Can this container create another container on-demand/dynamically within the SAME POD ?
How can we achieve this? Please advise.</p>
<p>Also, will reusing npm modules like <a href="https://www.npmjs.com/package/kubernetes-client" rel="nofollow noreferrer">https://www.npmjs.com/package/kubernetes-client</a> help in creating such containers within the same pod?</p>
| che_new | <blockquote>
<p>Can this container create another container on-demand/dynamically within the SAME POD ? How can we achieve this?</p>
</blockquote>
<p>No, the containers within a Pod are declared in the <code>PodTemplate</code>, which needs to be defined upfront, before the Pod is created. More specifically, what use case do you have? What are you trying to achieve?</p>
<blockquote>
<p>Also, will reusing npm modules like <a href="https://www.npmjs.com/package/kubernetes-client" rel="nofollow noreferrer">https://www.npmjs.com/package/kubernetes-client</a> help in creating such containers within the same pod?</p>
</blockquote>
<p>A kubernetes client library is useful for interacting with the ApiServer, e.g. for deploying new applications or Pods. But the Kubernetes <em>deployment unit</em> is a <strong>Pod</strong> - that is the smallest unit you work with. To change a Pod, you create a new one and terminate the previous one.</p>
| Jonas |
<p>I created a cluster:</p>
<pre><code>gcloud container clusters create test
</code></pre>
<p>so there will be 3 nodes:</p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-test-default-pool-cec920a8-9cgz Ready <none> 23h v1.9.7-gke.5
gke-test-default-pool-cec920a8-nh0s Ready <none> 23h v1.9.7-gke.5
gke-test-default-pool-cec920a8-q83b Ready <none> 23h v1.9.7-gke.5
</code></pre>
<p>then I delete a node from the cluster</p>
<pre><code>kubectl delete node gke-test-default-pool-cec920a8-9cgz
node "gke-test-default-pool-cec920a8-9cgz" deleted
</code></pre>
<p>no new node is created.</p>
<p>Then I delete all nodes. still there is no new node created. </p>
<pre><code>kubectl get nodes
No resources found.
</code></pre>
<p>Am I doing something wrong? I assumed it would automatically bring up a new node if a node died.</p>
| gacopu | <p>After running <code>kubectl delete node gke-test-default-pool-cec920a8-9cgz</code> run <code>gcloud compute instances delete gke-test-default-pool-cec920a8-9cgz</code></p>
<p>This will actually delete the VM (<code>kubectl delete</code> only "disconnects" it from the cluster). GCP will recreate the VM and it will automatically rejoin the cluster.</p>
| Maciek Sawicki |
<p>My Kubernetes deployment has a PVC attached, and it has 3 replicas. I was trying to understand what it actually means. 3 replicas are all on different nodes, in different zones, but the pods can access the same piece of storage at the same time.</p>
<p>So my question is that where the physical disk locates? If it's with say node 1 in zone 1, then how does node in zone 2 access it without network? If it requires network, then it's possible that the data will not be synced? What if I have a worker node in Dallas and another one in London? Are they still able to access the same PV and update at the same time?</p>
<p>I was trying to use it to store some cache data because looks like it's accessible to all the pods, but there were too many questions in my mind that I can't get over. Thanks for any insightful answers in advance.</p>
| Keno | <p>Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolume</a> is an abstraction. PVs work with different storage systems, and those systems may have different properties. E.g. the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">Storage Class</a> you are using may hint whether it is available in all Zones of a cloud Region or only in one Zone. Also, the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Access Mode</a> of your PersistentVolume affects whether all your pods can access the volume concurrently from different Nodes or not.</p>
<p>In most cases, a PV is only available in a single Zone and on a single Node at a time. But e.g. PVs backed by e.g. NFS may be available from multiple Nodes and Zones.</p>
<p>When using <code>PersistentVolume</code> from a <code>Deployment</code>, all your replicas refers to the <em>same</em> volume. Depending on your storage system, this may be problematic, if using more than one replica.</p>
<p>When using <code>StatefulSet</code>, all your replicas refers to their own <em>unique</em> volume.</p>
<p>For using <em>cache</em> in a distributed environment like Kubernetes, I would consider using something that is distributed and accessible over the network, e.g. <a href="https://redis.io/" rel="nofollow noreferrer">Redis</a>.</p>
<blockquote>
<p>where the physical disk locates?</p>
</blockquote>
<p>This depends on what storage system is configured for your <em>Storage Class</em>, but usually it is something located on another server, e.g. <a href="https://aws.amazon.com/ebs/" rel="nofollow noreferrer">AWS EBS</a> or <a href="https://cloud.google.com/persistent-disk" rel="nofollow noreferrer">Google Persistent Disk</a></p>
<blockquote>
<p>If it's with say node 1 in zone 1, then how does node in zone 2 access it without network? If it requires network, then it's possible that the data will not be synced?</p>
</blockquote>
<p>PVs that are available in multiple Zones are typically replicated synchronously (a trade-off that comes with higher write latency), but only to another nearby Zone. If you need geo-replicated data, it would be better to consider something asynchronous, e.g. <a href="https://kafka.apache.org/" rel="nofollow noreferrer">Apache Kafka</a>.</p>
| Jonas |
<p>I would imagine the interface would have some button I could click to launch the kubectl proxy dashboard, but I could not find it.</p>
<p>I tried this command to get the token and entered it in:</p>
<pre><code>gcloud container clusters get-credentials mycluster
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
kubectl proxy
</code></pre>
<p>And it shows some things, but not others (services are missing, says it's forbidden).</p>
<p>How do I use kubectl proxy or show that dashboard with GKE?</p>
| atkayla | <p>Provided you are authenticated with <code>gcloud auth login</code> and the current project and k8s cluster is configured to the one you need, authenticate <code>kubectl</code> to the cluster (this will write <code>~/.kube/config</code>):</p>
<pre><code>gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
</code></pre>
<p>retrieve the auth token that the kubectl itself uses to authenticate as you</p>
<pre><code>gcloud config config-helper --format=json | jq -r '.credential.access_token'
</code></pre>
<p>run</p>
<pre><code>kubectl proxy
</code></pre>
<p>Then open a local machine web browser on</p>
<p><a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy" rel="noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy</a>
(This will only work if you checked the checkbox Deploy Dashboard in GCP console)</p>
<p>and use the token from the second command to log in with your Google Account's permissions.</p>
| Alexander |
<p>I have got a deployment.yaml and it uses a persistentvolumeclaim like so</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mautic-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: standard
</code></pre>
<p>I am trying to scale my deployment horizontally using the Horizontal Pod Autoscaler, but when I scale my deployment, the rest of the pods are stuck in <code>ContainerCreating</code>, and this is the error I get when I <code>describe the pod</code>:</p>
<pre><code>Unable to attach or mount volumes: unmounted volume
</code></pre>
<p>What am I doing wrong here?</p>
| jeril | <p>Using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> is great if your app can scale <em>horizontally</em>. However, using a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volume</a> with a <code>PersistentVolumeClaim</code> can be challenging when scaling <em>horizontally</em>.</p>
<h3>Persistent Volume Claim - Access Modes</h3>
<p>A <code>PersistentVolumeClaim</code> can be requested for a few different <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Access Modes</a>:</p>
<ul>
<li>ReadWriteOnce (most common)</li>
<li>ReadOnlyMany</li>
<li>ReadWriteMany</li>
</ul>
<p><code>ReadWriteOnce</code> is the most commonly available and is the typical behavior for a local disk. But to scale your app horizontally, you need a volume that is available from multiple nodes at the same time, so only <code>ReadOnlyMany</code> and <code>ReadWriteMany</code> are viable options. You need to check what <em>access modes</em> are available for your <em>storage system</em>.</p>
<p>In addition, if you use a <em>regional cluster</em> from a cloud provider, it spans three <em>Availability Zones</em>, and a volume typically only lives in one <em>Availability Zone</em>. So even if you use the <code>ReadOnlyMany</code> or <code>ReadWriteMany</code> access mode, your volume is available on multiple nodes in the same AZ, but not in all three AZs of your cluster. You might consider using a <strong>storage class</strong> from your cloud provider that is replicated to multiple <em>Availability Zones</em>, but it typically costs more and is slower.</p>
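<p>For illustration, here is a minimal sketch of a claim requesting <code>ReadWriteMany</code> - the storage class name is a placeholder and must be one your cluster actually offers (e.g. Azure Files or an NFS provisioner):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mautic-shared-pvc        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany              # all replicas can mount it read-write
  storageClassName: azurefile    # placeholder: a class that supports RWX
  resources:
    requests:
      storage: 5Gi
</code></pre>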
<h2>Alternatives</h2>
<p>Since only <code>ReadWriteOnce</code> is commonly available, you might look for better alternatives for your app.</p>
<p><strong>Object Storage</strong></p>
<p>Object Storage, or Buckets, is a common way to handle file storage in the cloud instead of using filesystem volumes. With Object Storage you access you files via an API over HTTP. See e.g. <a href="https://aws.amazon.com/s3/" rel="nofollow noreferrer">AWS S3</a> or <a href="https://cloud.google.com/storage" rel="nofollow noreferrer">Google Cloud Storage</a>.</p>
<p><strong>StatefulSet</strong></p>
<p>You could also consider a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, where each instance of your app gets its own volume. This makes your app <em>distributed</em> but typically not <em>horizontally scalable</em>. Here, your app usually needs to implement replication of the data itself, e.g. using <a href="https://raft.github.io/" rel="nofollow noreferrer">Raft</a>, and this is a more advanced alternative.</p>
| Jonas |
<p>Sometimes I have a bunch of jobs to launch, and each of them mounts a PVC. As our resources are limited, some pods fail to mount within one minute.</p>
<blockquote>
<p>Unable to mount volumes for pod "package-job-120348968617328640-5gv7s_vname(b059856a-ecfa-11ea-a226-fa163e205547)": timeout expired waiting for volumes to attach or mount for pod "vname"/"package-job-120348968617328640-5gv7s". list of unmounted volumes=[tmp]. list of unattached volumes=[log tmp].</p>
</blockquote>
<p>And it sure keeps retrying. But it never succeeds (the event age is like <code>44s (x11 over 23m)</code>). But if I delete this pod, the job will create a new pod and it will complete.</p>
<p>So why is this happening? Shouldn't pod retry mount automatically instead of needing manual intervention?
And if this is not avoidable, is there a workaround that it will automatically delete pods in Init Phase more than 2 min?</p>
<h3>Conclusion</h3>
<p>It was actually the attaching script provided by my cloud provider that got stuck on some of the nodes (caused by a network problem). So if others run into this problem, checking the storage plugin that attaches the disks is a good idea.</p>
| Nick Allen | <blockquote>
<p>So why is this happening? Shouldn't pod retry mount automatically instead of needing manual intervention? And if this is not avoidable, is there a workaround that it will automatically delete pods in Init Phase more than 2 min?</p>
</blockquote>
<p>There can be multiple reasons to this. Do you have any Events on the Pod if you do <code>kubectl describe pod <podname></code>? And do you reuse the PVC that another Pod used before?</p>
<p>I guess that you use a <em>regional</em> cluster, consisting of multiple datacenters (Availability Zones), and that your PVC is located in one AZ but your Pod is scheduled to run in a different AZ? In such a situation, the Pod will never be able to mount the volume, since it is located in another AZ.</p>
| Jonas |
<p>I have a kubernetes cluster setup,
I am converting a docker-compose file using Kompose, and I get:</p>
<pre><code>WARN Volume mount on the host "<SOME_PATH>" isn't supported - ignoring path on the host
WARN Volume mount on the host "<SOME_PATH_2>" isn't supported - ignoring path on the host
WARN Volume mount on the host "<SOME_PATH_3>" isn't supported - ignoring path on the host
</code></pre>
<p>We built a docker-compose file that uses volume mounts for key/crt files, and it looks like host volume mounts are not supported by Kubernetes.</p>
<p>I found this : <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/</a>
and it looks like Kubernetes can make and renew its own certs. That's great.
How do I get access to those certs so I can use them in my Koa/Nodejs microservice?
or is there another standard way to apply certs at the application layer?</p>
<p>There was some talk about if this didn't work to move to nginx or Kong and let that use certs.
I'm trying to see if there's an application layer way to do this rather than go that route.
Any suggestions or comments are appreciated.</p>
<p>EDIT: it seems I can assign a cert to a secret and call it via my application? I'm new to kubernetes so... I may be looking into this.</p>
| user1562431 | <blockquote>
<p>or is there another standard way to apply certs at the application layer?</p>
</blockquote>
<p>Yes, for TLS and mTLS between applications within your cluster, I would consider using a <em>service mesh</em>, e.g. <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls" rel="nofollow noreferrer">Istio (with mTLS)</a> or <a href="https://linkerd.io/2/features/automatic-mtls/" rel="nofollow noreferrer">Linkerd (with mTLS)</a>. With a <em>service mesh</em> you get TLS encryption managed for you, from Pod to Pod - your application does not need to manage any certificates.</p>
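<p>As a minimal sketch (assuming Istio is installed and sidecars are injected into your namespaces), mesh-wide strict mTLS can be enabled with a single <code>PeerAuthentication</code> resource:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # applying it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT             # sidecars only accept mutual-TLS traffic
</code></pre>
<p>Your Koa/Node.js service keeps speaking plain HTTP to its peers; the sidecars handle the certificates and encryption.</p>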
| Jonas |
<p>I'm new to Apache Camel K. I have installed Kubernetes on the master machine, then I downloaded the binary file "kamel" and placed it in the path "/usr/bin". My versions are:</p>
<pre><code>Camel K Client 0.3.3
</code></pre>
<p>My kubernetes master and kubeDNS are running fine. When I tried to install kamel on the kubernetes cluster using the command "kamel install" as per the documentation, I got the following error:</p>
<pre><code>Error: cannot find automatically a registry where to push images
</code></pre>
<p>I don't know what this new command does:</p>
<pre><code>"kamel install --cluster-setup"
</code></pre>
<p>After running the above command the response is like this,</p>
<pre><code>Camel K cluster setup completed successfully
</code></pre>
<p>I tried to run a small integration script like </p>
<pre><code>"kamel run hello.groovy --dev"
</code></pre>
<p>My groovy file code is,</p>
<pre><code>from("timer:tick?period=3s")
.setBody().constant("Hello World from Camel K!!!")
.to("log:message")
</code></pre>
<p>but the pod hangs; its status is Pending:</p>
<pre><code>camel-k-operator-587b579567-92xlk 0/1 Pending 0 26m
</code></pre>
<p>Can you please help me in this regard? Thanks a lot for your time.</p>
<p>References I used are,
<a href="https://github.com/apache/camel" rel="nofollow noreferrer">https://github.com/apache/camel</a></p>
| Shr4N | <p>You need to set the container registry where camel-k can publish/retrieve images. You can do it by editing camel-k's integration platform:</p>
<pre><code>oc edit integrationplatform camel-k
</code></pre>
<p>or upon installation</p>
<pre><code>kamel install --registry=...
</code></pre>
| Luca Burgazzoli |
<p>When you decrease the number of pods in a Kubernetes workload, I am assuming it is doing a soft kill. Does it start that process by stopping incoming connections? In an event-driven microservice environment, where container reads message from a queue. When I deploy, what happens to the message that are currently being processed. Does it stop taking messages from the queue?</p>
| Ash_this | <blockquote>
<p>When you decrease the number of pods in a kubernetes workload, i am assuming it is doing a soft kill.</p>
</blockquote>
<p>Yes, it does a <em>graceful termination</em>: your pod gets a <code>SIGTERM</code> signal, and it is up to you to implement the handling of it in the app before the pod is killed once the configured <em>graceful termination period</em> has passed - by default 30 seconds, configurable with the <code>terminationGracePeriodSeconds</code> field of the Pod.</p>
<blockquote>
<p>In an event-driven microservice environment, where container reads message from a queue. When i deploy, what happens to the message that are currently being processed. Does is stop taking messages from the queue?</p>
</blockquote>
<p>As explained above, your app needs to implement the handling of the <code>SIGTERM</code> signal and e.g. stop consuming new messages from the queue. You also need to properly configure the <code>terminationGracePeriodSeconds</code> so that messages can fully be processed before the pod is evicted.</p>
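<p>A minimal sketch of where this is set (the values and names are illustrative) - it goes into the Pod template of your Deployment:</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 120   # give in-flight messages up to 2 minutes
  containers:
  - name: queue-consumer               # hypothetical consumer container
    image: my-consumer:latest          # placeholder image
    # the app must catch SIGTERM, stop polling the queue and finish current work
</code></pre>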
<p>A good explanation of this is <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">Kubernetes best practices: terminating with grace</a></p>
<blockquote>
<p>Does it start that process by stopping incoming connections?</p>
</blockquote>
<p>Yes, your pod is removed from the <code>Kubernetes Endpoint</code> list, so it should work if you access your pods via Services.</p>
| Jonas |
<p>I am using <code>apiVersion: apps/v1beta2</code> in most of my deployments; however, for Kubernetes cluster version <code>1.14</code> it's recommended to use <code>apiVersion: apps/v1</code>. Also, v1beta2 will be deprecated from <code>Kubernetes 1.16</code>.</p>
<p>Is there any better option to reduce the manual work and update all deployments that have version <code>apps/v1beta2</code> to <code>v1</code>?</p>
<p>Or can I use <code>patch</code> on all deployments?</p>
| Harsh Manvar | <p>You can try using move2kube (<a href="https://github.com/konveyor/move2kube" rel="nofollow noreferrer">https://github.com/konveyor/move2kube</a>) tool to achieve the above.</p>
<p>To achieve the above, do the following:</p>
<p>Create a yaml file that defines your cluster kinds like below (call it, say, clusterconfig.yaml):</p>
<pre><code>apiVersion: move2kube.konveyor.io/v1alpha1
kind: ClusterMetadata
metadata:
name: Kubernetes
spec:
storageClasses:
- default
- ibmc-block-bronze
- ibmc-block-custom
- ibmc-block-gold
- ibmc-block-retain-bronze
- ibmc-block-retain-custom
- ibmc-block-retain-gold
- ibmc-block-retain-silver
- ibmc-block-silver
- ibmc-file-bronze
- ibmc-file-bronze-gid
- ibmc-file-custom
- ibmc-file-gold
- ibmc-file-gold-gid
- ibmc-file-retain-bronze
- ibmc-file-retain-custom
- ibmc-file-retain-gold
- ibmc-file-retain-silver
- ibmc-file-silver
- ibmc-file-silver-gid
apiKindVersionMap:
APIService:
- apiregistration.k8s.io/v1
Binding:
- v1
CSIDriver:
- storage.k8s.io/v1beta1
CSINode:
- storage.k8s.io/v1
- storage.k8s.io/v1beta1
CatalogSource:
- operators.coreos.com/v1alpha1
CertificateSigningRequest:
- certificates.k8s.io/v1beta1
ClusterImagePolicy:
- securityenforcement.admission.cloud.ibm.com/v1beta1
ClusterRole:
- rbac.authorization.k8s.io/v1
- rbac.authorization.k8s.io/v1beta1
ClusterRoleBinding:
- rbac.authorization.k8s.io/v1
- rbac.authorization.k8s.io/v1beta1
ClusterServiceVersion:
- operators.coreos.com/v1alpha1
ComponentStatus:
- v1
ConfigMap:
- v1
ControllerRevision:
- apps/v1
CronJob:
- batch/v1beta1
- batch/v2alpha1
CustomResourceDefinition:
- apiextensions.k8s.io/v1
DaemonSet:
- apps/v1
Deployment:
- apps/v1
EndpointSlice:
- discovery.k8s.io/v1beta1
Endpoints:
- v1
Event:
- events.k8s.io/v1beta1
- v1
HorizontalPodAutoscaler:
- autoscaling/v1
- autoscaling/v2beta1
- autoscaling/v2beta2
ImagePolicy:
- securityenforcement.admission.cloud.ibm.com/v1beta1
Ingress:
- networking.k8s.io/v1beta1
- extensions/v1beta1
InstallPlan:
- operators.coreos.com/v1alpha1
Job:
- batch/v1
Lease:
- coordination.k8s.io/v1beta1
- coordination.k8s.io/v1
LimitRange:
- v1
LocalSubjectAccessReview:
- authorization.k8s.io/v1
- authorization.k8s.io/v1beta1
MutatingWebhookConfiguration:
- admissionregistration.k8s.io/v1beta1
- admissionregistration.k8s.io/v1
Namespace:
- v1
NetworkPolicy:
- networking.k8s.io/v1
Node:
- v1
OperatorGroup:
- operators.coreos.com/v1
PersistentVolume:
- v1
PersistentVolumeClaim:
- v1
Pod:
- v1
PodDisruptionBudget:
- policy/v1beta1
PodSecurityPolicy:
- policy/v1beta1
PodTemplate:
- v1
PriorityClass:
- scheduling.k8s.io/v1beta1
- scheduling.k8s.io/v1
ReplicaSet:
- apps/v1
ReplicationController:
- v1
ResourceQuota:
- v1
Role:
- rbac.authorization.k8s.io/v1
- rbac.authorization.k8s.io/v1beta1
RoleBinding:
- rbac.authorization.k8s.io/v1
- rbac.authorization.k8s.io/v1beta1
Secret:
- v1
SelfSubjectAccessReview:
- authorization.k8s.io/v1
- authorization.k8s.io/v1beta1
SelfSubjectRulesReview:
- authorization.k8s.io/v1
- authorization.k8s.io/v1beta1
Service:
- v1
ServiceAccount:
- v1
StatefulSet:
- apps/v1
StorageClass:
- storage.k8s.io/v1
- storage.k8s.io/v1beta1
SubjectAccessReview:
- authorization.k8s.io/v1
- authorization.k8s.io/v1beta1
Subscription:
- operators.coreos.com/v1alpha1
TokenReview:
- authentication.k8s.io/v1
- authentication.k8s.io/v1beta1
ValidatingWebhookConfiguration:
- admissionregistration.k8s.io/v1beta1
- admissionregistration.k8s.io/v1
VolumeAttachment:
- storage.k8s.io/v1
- storage.k8s.io/v1beta1
</code></pre>
<p>and then run :</p>
<pre><code>move2kube translate -s <folder containing your clusterconfig.yaml file and kubernetes yaml files>
</code></pre>
<p>The interactive tool will ask for the required info and do the translation.</p>
| Ashok Pon Kumar |
<p>We have an app in production which needs to be highly available (100%), so we did the following:</p>
<ol>
<li>We configured 3 instances for HA, but then the node died</li>
<li>We configured anti-affinity (to run on different nodes), but some update was done on the nodes and we were unavailable (evicted) for some minutes.</li>
<li>Now we are considering adding a Pod Disruption Budget
<a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/disruptions/</a></li>
</ol>
<p>My question are:</p>
<ol>
<li>How does affinity work with a Pod Disruption Budget - could there be any collision, or are these redundant configs?</li>
<li>Is there any other configuration I need to add to make sure that my pods <strong>always run</strong> (as much as possible)?</li>
</ol>
| Rayn D | <blockquote>
<p>How does affinity work with a Pod Disruption Budget - could there be any collision, or are these redundant configs?</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">Affinity and Anti-affinity</a> is about <strong>where</strong> your Pod is scheduled, e.g. so that two replicas of the same app is not scheduled to the same node. <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets" rel="nofollow noreferrer">Pod Disruption Budgets</a> is about to increase availability when using <em>voluntary disruption</em> e.g. maintenance. They are both related to making better availability for your app - but not related to eachother.</p>
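<p>As a minimal sketch (assuming your pods are labelled <code>app: my-app</code> and you run 3 replicas), a Pod Disruption Budget that keeps at least two pods up during voluntary disruptions could look like this:</p>
<pre><code>apiVersion: policy/v1            # use policy/v1beta1 on clusters older than 1.21
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2                # a node drain may never take you below 2 ready pods
  selector:
    matchLabels:
      app: my-app
</code></pre>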
<blockquote>
<p>Is there any other configuration I need to add to make sure that my pods always run (as much as possible)?</p>
</blockquote>
<p>Things will fail. What you need to do is to embrace <strong>distributed systems</strong> and make all your workload a distributed system, e.g. with multiple instances to remove <em>single point of failure</em>. This is done differently for <em>stateless</em> (e.g. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>) and <em>stateful</em> (e.g. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>) workload. What's important for you is that your <strong>app</strong> is available at much as possible, but individual instances (e.g. Pods) can fail, almost without that any user notice it.</p>
<blockquote>
<p>We configured 3 instances for HA, but then the node died</p>
</blockquote>
<p>Things will always fail. E.g. a physical node may crash. You need to design your apps so that it can tolerate some failures.</p>
<p>If you use a cloud provider, you should use <em>regional clusters</em> that uses three independent <em>Availability Zones</em> and you need to spread your workload so that it runs in more than one <em>Availability Zone</em> - in this way, your app can tolerate that a whole <em>Availability Zone</em> is <em>down</em> without affecting your users.</p>
| Jonas |
<p>When deploying a Kubernetes DaemonSet, what will happen when a single node (out of a few nodes) is almost out of resources, a pod can't be created, and there are no pods that can be evicted? Though Kubernetes can be horizontally scaled, I believe it is meaningless to scale horizontally, as a DaemonSet needs a pod on every node.</p>
| Piljae Chae | <blockquote>
<p>Though Kubernetes can be horizontally scaled, I believe it is meaningless to scale horizontally, as a DaemonSet needs a pod on every node.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> is a workload type that is mostly for <em>operations workload</em> e.g. transporting logs from the node or similar "system services". It is rarely a good fit for workload that is serving your users, but it can be.</p>
<blockquote>
<p>what will happen when a single node (out of a few nodes) is almost out of resources, a pod can't be created, and there are no pods that can be evicted?</p>
</blockquote>
<p>As I described above, workload deployed with <code>DaemonSet</code> is typically <em>operations workload</em> that has e.g. an infrastructure role in your cluster. Since these may be more critical pods (or less critical, depending on what you want), I would use a higher <em>Quality of Service</em> for these pods, so that other pods are evicted when there are few resources on the node.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">Configure Quality of Service for Pods</a> for how to configure your Pods to be in a Quality of Service class, one of:</p>
<ul>
<li>Guaranteed</li>
<li>Burstable</li>
<li>Best Effort</li>
</ul>
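<p>For example, here is a minimal sketch of a DaemonSet container spec that ends up in the <em>Guaranteed</em> class - requests and limits are set and equal (the names and numbers are placeholders):</p>
<pre><code>containers:
- name: node-agent              # hypothetical DaemonSet container
  image: my-agent:latest        # placeholder image
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 100m                 # limits equal to requests => Guaranteed QoS
      memory: 128Mi
</code></pre>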
<p>You might also consider using <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer">Pod Priority and Preemption</a></p>
<p>The question was about <code>DaemonSet</code> but as a final note: Workload that serves requests from your users, typically is deployed as <code>Deployment</code> and for those, it is very easy to do horizontal scaling using <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>.</p>
| Jonas |
<p>I am trying to change my existing deployment logic/switch to kubernetes (My server is in gcp and till now I used docker-compose to run my server.) So I decided to start by using <code>kompose</code> and generating services/deployments using my existing docker-compose file. After running </p>
<pre><code>kompose --file docker-compose.yml convert
#I got warnings indicating Volume mount on the host "mypath" isn't supported - ignoring path on the host
</code></pre>
<p>After a little research I decided to use the command below to "fix" the issue</p>
<pre><code>kompose convert --volumes hostPath
</code></pre>
<p>What this command achieved is that it replaced the persistent volume claims generated by the first command with the code below.</p>
<pre><code> volumeMounts:
- mountPath: /path
name: certbot-hostpath0
- mountPath: /somepath
name: certbot-hostpath1
- mountPath: /someotherpath
name: certbot-hostpath2
- hostPath:
path: /path/certbot
name: certbot-hostpath0
- hostPath:
path: /path/cert_challenge
name: certbot-hostpath1
- hostPath:
path: /path/certs
name: certbot-hostpath2
</code></pre>
<p>But since I am working on my local machine,</p>
<pre><code>kubectl apply -f <output file>
</code></pre>
<p>results in The connection to the server localhost:8080 was refused - did you specify the right host or port?
I didn't want to connect my local env with gcp just to generate the necessary files, is this a must? Or can I move this to startup-gcp etc </p>
<p>I feel like I am heading in the right direction, but I need confirmation that I am not messing something up.</p>
<p>1)I have only one compute engine(VM instance) and lots of data in my prod db. "How do I"/"do I need to" make sure I don't lose any data in db by doing something?
2)In startup-gcp after doing everything else (pruning docker images etc) I had a docker run command that makes use of <code>docker/compose 1.13.0 up -d</code>. How should I change it to switch to kubernetes?
3)Should I change anything in <code>nginx.conf</code> as it referenced to 2 different services in my docker-compose (I don't think I should since same services also exist in kubernetes generated yamls)</p>
| Prethia | <p>You should consider using Persistent Volume Claims (PVCs). If your cluster is managed, in most cases it can automatically create the PersistentVolumes for you.</p>
<p>One way to create the Persistent Volume Claims corresponding to your docker compose files is by using Move2Kube (<a href="https://github.com/konveyor/move2kube" rel="nofollow noreferrer">https://github.com/konveyor/move2kube</a>). You can download the release, place it in your PATH and run:</p>
<pre><code>move2kube translate -s <path to your docker compose files>
</code></pre>
<p>It will then interactively allow you to configure PVCs.</p>
<p>If you have a specific cluster you are targeting, you can get the storage classes supported by that cluster using the command below, in a terminal where your kubernetes cluster is set as the context for kubectl.</p>
<pre><code>move2kube collect
</code></pre>
<p>Once you do collect, you will have an m2k_collect folder, which you can then place in the folder where your docker compose files are. When you run move2kube translate it will automatically ask you whether to target this specific cluster, and will also give you the option to choose a storage class from that cluster.</p>
<blockquote>
<p>1)I have only one compute engine(VM instance) and lots of data in my
prod db. "How do I"/"do I need to" make sure I don't lose any data in
db by doing something?</p>
</blockquote>
<p>Once the PVC is provisioned, you can copy the data to the PVC by using the <code>kubectl cp</code> command into a pod where the PVC is mounted.</p>
<blockquote>
<p>2)In startup-gcp after doing everything else (pruning docker images
etc) I had a docker run command that makes use of docker/compose
1.13.0 up -d. How should I change it to switch to kubernetes?</p>
</blockquote>
<p>You can potentially change it to use a Helm chart. Move2Kube, during the interactive session, can help you create a Helm chart too. Once you have the Helm chart, all you have to do is run <code>helm upgrade -i</code>.</p>
<blockquote>
<p>3)Should I change anything in nginx.conf as it referenced to 2
different services in my docker-compose (I don't think I should since
same services also exist in kubernetes generated yamls)</p>
</blockquote>
<p>If the service names are the same, in most cases it should work.</p>
| Ashok Pon Kumar |
<p>Is there any way to set up cross-namespace communication, say for pods of <code>namespace-a</code> to communicate with pods of <code>namespace-b</code>, in a GKE cluster, except for setting network policies?</p>
| Neelam | <p>Networking within a Kubernetes cluster can be done in different ways, but the recommended and most common way is to use DNS names. Pods get their own DNS names, but it is recommended that you access another app in the cluster via the <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#dns" rel="noreferrer">DNS name for the Service</a>.</p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/namespaces/#understanding-namespaces-and-dns" rel="noreferrer">DNS names are hierarchical</a>, starting with the Service name, and then the Namespace name.</p>
<ul>
<li><p>To access another app in the same namespace, use <code><other-app-service-name></code>, e.g. <code>http://<other-app-service-name></code>.</p>
</li>
<li><p>To send a request to an app in a different namespace, also use the namepspace part of the domain name, <code><another-app-service-name>.<other-namespace-name></code>, e.g. <code>http://<another-app-service-name>.<other-namespace-name></code></p>
</li>
</ul>
| Jonas |
<p>I'm working on a project and encountered some issues. I'm still a beginner with Kubernetes and need some help regarding that.</p>
<p>The code from helm config is as follows:</p>
<pre><code>storage:
storageClass: aws-efs
provisioner: someCustomName
pvc:
logs:
.........
</code></pre>
<p>I'm unable to figure out the <strong>provisioner part</strong>: there's some custom name written instead of the usual storage class provisioner such as <code>kubernetes.io/azure-file</code>. So, is it a custom provisioner, or is it some different concept? Please guide me!</p>
<p>I've searched a lot but unable to get anything on this.</p>
| Navjot Singh | <p>If the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="nofollow noreferrer">provisioner</a> is prefixed with <code>kubernetes.io/</code> like <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-file" rel="nofollow noreferrer">azure-file</a> it means that it is an <em>internal provisioner</em> plugin. But it is valid to use an <em>external provisioner</em> as well.</p>
<blockquote>
<p>You are not restricted to specifying the "internal" provisioners listed here (whose names are prefixed with "kubernetes.io" and shipped alongside Kubernetes). You can also run and specify external provisioners, which are independent programs that follow a <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-provisioning.md" rel="nofollow noreferrer">specification</a> defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how it needs to be run, what volume plugin it uses (including Flex), etc. The repository kubernetes-sigs/sig-storage-lib-external-provisioner houses a library for writing external provisioners that implements the bulk of the specification.</p>
</blockquote>
<p>Also see more about the <a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver" rel="nofollow noreferrer">AWS EFS CSI driver</a></p>
| Jonas |
<p>I am trying to expose a service to the outside world using the <code>loadBalancer</code> type service.</p>
<p>For that, I have followed this doc:</p>
<p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/" rel="noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/</a></p>
<p>My <code>loadbalancer.yaml</code> looks like this </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: LoadBalancer
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>But the load balancer is not being created as expected; I am getting the following error:</p>
<pre><code>Warning SyncLoadBalancerFailed 8s (x3 over 23s) service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
</code></pre>
<p>It seems like it's because of some issue with the subnet tags, but I have the required tags on my subnets:</p>
<pre><code>kubernetes.io/cluster/<cluster-name>. owned
kubernetes.io/role/elb 1
</code></pre>
<p>But still, I am getting the error <code>could not find any suitable subnets for creating the ELB</code></p>
| shamon shamsudeen | <p>By default AWS EKS only attaches load balancers to public subnets. In order to launch it in a private subnet you need to not only label your subnets (which it looks like you did) but also annotate your load balancer-</p>
<blockquote>
<p>service.beta.kubernetes.io/aws-load-balancer-internal: "true"</p>
</blockquote>
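<p>Applied to the Service from the question, a sketch could look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    # ask the cloud controller to provision an internal ELB in the private subnets
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</code></pre>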
<p>You can find more information <a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html" rel="noreferrer">here</a>.</p>
| Robert Hafner |
<p>Is it safe/secure to have intra-service communication over HTTP and external routes over HTTPS in OpenShift / Kubernetes?
If not, what are the risks?</p>
| Fabry | <p>This depends on your security requirements. You probably use a cluster with multiple nodes, so there are network links that the traffic crosses. Do you use multiple data centers, and how is the network secured between data centers? Is there another organization that operates e.g. the network or hardware parts, and that perhaps needs to inspect the network during network problems? And how much do you trust their operations?</p>
<p>In the end, if the security is enough depends on your requirements. But if you want a high level of security, you should probably use e.g. <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> with <em>mutual TLS</em> between all services within the cluster, harden it with <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Kubernetes Network Policies</a> and perhaps use a specific gateway for external traffic.</p>
<p>But if you have control over the nodes in your cluster and say that it is enough with the level of security that a private network gives you, that is also fine - it depends on your requirements.</p>
| Jonas |
<p>Could someone help me please and point me what configuration should I be doing for my use-case?</p>
<p>I'm building a development k8s cluster and one of the steps is to generate security files (private keys) that are generated in a number of pods during deployment (let's say for a simple setup I have 6 pods that each build their own security keys). I need to have access to all these files, also they must be persistent after the pod goes down.</p>
<p>I'm trying to figure out how to set it up locally for internal testing. From what I understand, local PersistentVolumes only allow 1:1 with PersistentVolumeClaims, so I would have to create a separate PersistentVolume and PersistentVolumeClaim for each pod that gets configured. I would prefer to avoid this and use one PersistentVolume for all.</p>
<p>Could someone be so nice and help me or point me to the right setup that should be used?</p>
<p><strong>-- Update: 26/11/2020</strong>
So this is my setup:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hlf-nfs--server
spec:
replicas: 1
selector:
matchLabels:
app: hlf-nfs--server
template:
metadata:
labels:
app: hlf-nfs--server
spec:
containers:
- name: hlf-nfs--server
image: itsthenetwork/nfs-server-alpine:12
ports:
- containerPort: 2049
name: tcp
- containerPort: 111
name: udp
securityContext:
privileged: true
env:
- name: SHARED_DIRECTORY
value: "/opt/k8s-pods/data"
volumeMounts:
- name: pvc
mountPath: /opt/k8s-pods/data
volumes:
- name: pvc
persistentVolumeClaim:
claimName: shared-nfs-pvc
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hlf-nfs--server
labels:
name: hlf-nfs--server
spec:
type: ClusterIP
selector:
app: hlf-nfs--server
ports:
- name: tcp-2049
port: 2049
protocol: TCP
- name: udp-111
port: 111
protocol: UDP
</code></pre>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: shared-nfs-pvc
spec:
accessModes:
- ReadWriteMany
storageClassName: nfs
resources:
requests:
storage: 1Gi
</code></pre>
<p>These three are being created at once, after that, I'm reading the IP of the service and adding it to the last one:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: shared-nfs-pv
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
nfs:
path: /opt/k8s-pods/data
server: <<-- IP from `kubectl get svc -l name=hlf-nfs--server`
</code></pre>
<p>The problem I'm getting and trying to resolve is that the PVC does not get bound with the PV and the deployment keeps in READY mode.</p>
<p>Did I miss anything?</p>
| Sniady | <p>It is correct that there is a 1:1 relation between a <code>PersistentVolumeClaim</code> and a <code>PersistentVolume</code>.</p>
<p>However, Pods running on the same Node can concurrently mount the same volume, e.g. use the same <code>PersistentVolumeClaim</code>.</p>
<p>If you use Minikube for local development, you only have one node, so you can use the same <code>PersistentVolumeClaim</code>. Since you want to use different files for each app, you could use a <em>unique directory</em> for each app in that shared volume.</p>
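<p>A minimal sketch of that idea (names are placeholders): both apps mount the same claim, but each one uses its own <code>subPath</code> directory:</p>
<pre><code># pod template of app-a; app-b would use subPath: app-b
volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: shared-nfs-pvc   # the single claim shared by all apps
containers:
  - name: app-a
    image: my-app-a:latest        # placeholder image
    volumeMounts:
      - name: shared-data
        mountPath: /keys
        subPath: app-a            # each app writes under its own directory
</code></pre>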
| Jonas |
<p>Some background: I have set up Airflow on Kubernetes (on AWS). I am able to run DAGs that query a database, send emails or do anything that doesn't require a package that isn't already a part of Airflow. For example, if I try to run a DAG that uses the Facebook-business SDK the DAG will obviously break because the dependency isn't available. I've tried a couple different ways of trying to get this dependency, along with others, installed but haven't been successful. </p>
<p>I have tried to install python packages by modifying my scheduler and webserver deployments to do a pip install of my dependencies as part of an initContainer. When I do this, the DAG remains broken as it is unable to find the needed packages. When I open a shell to my pod I can see that the dependencies have not been installed (I check using <code>pip list</code>). I have also verified that there aren't other python/pip versions installed. </p>
<p>I have also tried to install the dependencies by running a pip install when I open a shell to my pod. This way is successful in installing the dependency in the correct place and also makes it available. However, instead of the webserver UI showing that my DAG is broken, I get the <code>this dag isn't available in the webserver dagbag object</code> message. </p>
<p>I would expect that running <code>pip install</code> as part of my initContainer or container would makes these dependencies available in my pod. However, this isn't the case. It's as if pip install runs without any issues, but by the time my pods are fully set up the python packages are nowhere to be found</p>
<p><strong>I forgot to say that I have found a way to make it work, but it feels somewhat hacky and like there should be a better way</strong>
- If I open a shell to my webserver container and install the needed dependencies and then open a shell to my scheduler and do the same thing, the dependencies are found and the DAG works. </p>
| Jesus Garcia | <p>The init container is a separate docker instance. Unless you rig up some sort of shared storage for your python libraries (which is quite dubious) any pip installs in the init container won't impact the running container of the pod. </p>
<p>I see two options:</p>
<p>1) Modify the docker image that you're using to include the packages you need</p>
<p>2) Prepend a <code>pip install</code> to the command being run in the pod. It's not uncommon to string together a few commands with <code>&&</code> between them, in order to execute a sequence of operations in a starting pod. </p>
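<p>A sketch of option 2 (image, tag and package name are placeholders) - override the container command so the install runs before the real process starts:</p>
<pre><code>containers:
  - name: airflow-webserver
    image: apache/airflow:1.10.10    # placeholder image/tag
    # install the extra dependency at startup, then hand over to the real process
    command: ["/bin/bash", "-c"]
    args: ["pip install --user facebook-business && exec airflow webserver"]
</code></pre>
<p>The same pattern applies to the scheduler container; baking the packages into the image (option 1) avoids the startup cost.</p>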
| Laizer |
<p>I have the following deployment config. The test-worker-health and health endpoints are both unreachable as the application is failing due to an error. The startup probe keeps restarting the container after failing as restartPolicy: Always. The pods enter CrashLoopBackoff state. Is there a way to fail such startup probe?</p>
<pre><code>livenessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 8080
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 30
startupProbe:
httpGet:
path: /test-worker-health
port: 8080
failureThreshold: 12
periodSeconds: 10
</code></pre>
| Ruchika Salwan | <blockquote>
<p>The startup probe keeps restarting the container after failing</p>
</blockquote>
<p>It is not only the <em>livenessProbe</em> that can restart your container: if the <em>startupProbe</em> keeps failing beyond its <code>failureThreshold</code> (here 12 × 10s = 120 seconds), the kubelet kills the container as well, and it is restarted according to the Pod's <code>restartPolicy</code>. Once the startup probe has succeeded, it is the <em>livenessProbe</em> that triggers restarts.</p>
<blockquote>
<p>The pods enter CrashLoopBackoff state. Is there a way to fail such startup probe?</p>
</blockquote>
<p>If you remove the <em>livenessProbe</em>, you will not get this restart behavior once the container has started successfully. If what you actually want is to stop sending traffic to an unhealthy pod without restarting it, use a <em>readinessProbe</em> instead.</p>
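<p>A sketch of that change, reusing the endpoint from the question - a failing pod is then marked NotReady and removed from Service endpoints, but it is not restarted:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 20
  timeoutSeconds: 30
</code></pre>
<p>Whether to keep the <code>startupProbe</code> depends on whether you want the container to be killed when it never manages to start successfully.</p>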
<blockquote>
<p>Is there a way to fail such startup probe?</p>
</blockquote>
<p>What do you mean? It is already "failing" as you say. You want automatic rollback? That is provided by e.g. Canary Deployment, but is a more advanced topic.</p>
| Jonas |
<p>I have a docker image in my local machine which I have pushed to Google Cloud Containers.
Now I want to deploy this image in Google Kubernetes Engine.</p>
<p>I am following the steps in below link -</p>
<p><a href="https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-gke#deploying_a_pre-built_container_image" rel="nofollow noreferrer">https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-gke#deploying_a_pre-built_container_image</a></p>
<p>I will create a YAML deployment config file.</p>
<p>My problem is: where do I need to keep this file in Google Cloud so that it can be used for deployment?
Also, in the YAML file, what is nginx? I have used the default one. Where do I need to keep this YAML config file?
ms_aggregator is the name of my image.</p>
<pre><code> apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "deployment-aggregator"
spec:
replicas: 1
selector:
matchLabels:
app: "nginx-1"
template:
metadata:
labels:
app: "nginx-1"
spec:
containers:
- name: "nginx-1"
image: "ms_aggregator"
</code></pre>
| Hershika Sharma | <p>You need to use the full image name, that usually includes the image registry and repository.</p>
<p>From the example:</p>
<pre><code>"gcr.io/cloud-builders/gke-deploy"
</code></pre>
<p>Usually, the GCP format is</p>
<pre><code><docker registry host>/<gcp-project-name>/<image-name>
</code></pre>
<p>For you, this is likely:</p>
<pre><code>gcr.io/<your-gcp-project-name>/ms_aggregator
</code></pre>
<p>But if you have choosed to use a registry in a different location, the registry name could be e.g. <code>eu.gcr.io</code></p>
| Jonas |
<p>On GKE, I set a statefulset resource as</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: "redis"
selector:
matchLabels:
app: redis
updateStrategy:
type: RollingUpdate
replicas: 3
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis
resources:
limits:
memory: 2Gi
ports:
- containerPort: 6379
volumeMounts:
- name: redis-data
mountPath: /usr/share/redis
volumes:
- name: redis-data
persistentVolumeClaim:
claimName: redis-data-pvc
</code></pre>
<p>I want to use a PVC, so I created this one. (This step was done before the StatefulSet deployment)</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>When I check the resource in kubernetes:</p>
<pre><code>kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-data-pvc Bound pvc-6163d1f8-fb3d-44ac-a91f-edef1452b3b9 10Gi RWO standard 132m
</code></pre>
<p>The default Storage Class is <code>standard</code>.</p>
<pre><code>kubectl get storageclass
NAME PROVISIONER
standard (default) kubernetes.io/gce-pd
</code></pre>
<p>But when I check the StatefulSet's deployment status, it is always wrong.</p>
<pre><code># Describe its pod details
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s default-scheduler persistentvolumeclaim "redis-data-pvc" not found
Warning FailedScheduling 17s (x2 over 20s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
Normal Created 2s (x2 over 3s) kubelet Created container redis
Normal Started 2s (x2 over 3s) kubelet Started container redis
Warning BackOff 0s (x2 over 1s) kubelet Back-off restarting failed container
</code></pre>
<p>Why can't it find the <code>redis-data-pvc</code> name?</p>
| iooi | <p>What you have done should work. Make sure that the <code>PersistentVolumeClaim</code> and the <code>StatefulSet</code> are located in the same namespace.</p>
<hr />
<p>That said, here is an easier solution, which also lets you scale up to more replicas more easily:</p>
<p>When using StatefulSet and PersistentVolumeClaim, use the <code>volumeClaimTemplates:</code> field in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="noreferrer">StatefulSet</a> instead.</p>
<p>The <code>volumeClaimTemplates:</code> will be used to create unique PVCs for each replica, and they have unique naming ending with e.g. <code>-0</code> where the number is an <em>ordinal</em> used for the replicas in a StatefulSet.</p>
<p>So instead, use a StatefulSet manifest like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: "redis"
selector:
matchLabels:
app: redis
updateStrategy:
type: RollingUpdate
replicas: 3
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis
resources:
limits:
memory: 2Gi
ports:
- containerPort: 6379
volumeMounts:
- name: redis-data
mountPath: /usr/share/redis
  volumeClaimTemplates: # this will be used to create the PVCs
- metadata:
name: redis-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
</code></pre>
| Jonas |
<p>I use a kubernetes manifest file to deploy my code. My manifest typically has a number of things like Deployment, Service, Ingress, etc.. How can I perform a type of "rollout" or "restart" of everything that was applied with my manifest?</p>
<p>I know I can update my deployment say by running</p>
<pre><code>kubectl rollout restart deployment <deployment name>
</code></pre>
<p>but what if I need to update all resources like ingress/service? Can it all be done together?</p>
| alex | <p>I would recommend you to store your manifests, e.g. <code>Deployment</code>, <code>Service</code> and <code>Ingress</code> in a directory, e.g. <code><your-directory></code></p>
<p>Then use <code>kubectl apply</code> to "apply" those files to Kubernetes, e.g.:</p>
<pre><code>kubectl apply -f <directory>/
</code></pre>
<p>See more on <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="nofollow noreferrer">Declarative Management of Kubernetes Objects Using Configuration Files</a>.</p>
<p>When your <code>Deployment</code> is updated this way, your pods will be replaced with the new version during a rolling deployment (you can configure to use another deployment strategy).</p>
| Jonas |
<p>I have a deployment in Kubernetes. In this deployment, I can specify a persistent volume claim as follows:</p>
<pre><code> volumes:
- name: my-volume
persistentVolumeClaim:
claimName: my-claim
</code></pre>
<p>I have a disk (an Azure disk to be precise) with lots of preprocessed data, which I can expose in Kubernetes as PVC with name <code>my-claim</code>. In the next step I link it to the deployment as shown above. The problem with this approach is, that I cannot scale the deployment to more than one pod.</p>
<p>How can I scale this setup? I tried to duplicate the disk and create it as second PVC with a different name. This worked, but now I don't see a way to tell the Kubernetes deployment, that each pod should mount one of these two PVCs.</p>
<p>I hope there is an option to mark both PVCs with a common label and then link my deployment to this label instead of the PVC name. Is something like this out there or is my approach completely wrong?</p>
<p>Thanks!</p>
| Stephan | <blockquote>
<p>I have a disk (an Azure disk to be precise) with lots of preprocessed data, which I can expose in Kubernetes as PVC with name my-claim. In the next step I link it to the deployment as shown above. The problem with this approach is, that I cannot scale the deployment to more than one pod.</p>
</blockquote>
<p>Here, you use a <code>PersistentVolumeClaim</code> with <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Access Mode</a> <code>ReadWriteOnce</code> (that's the only option for Azure Disks, see access mode link)</p>
<blockquote>
<p>How can I scale this setup? I tried to duplicate the disk and create it as second PVC with a different name. This worked, but now I don't see a way to tell the Kubernetes deployment, that each pod should mount one of these two PVCs.</p>
</blockquote>
<p>Here, it sounds like you want a volume with <em>access mode</em> <code>ReadOnlyMany</code> - so you need to consider a storage system that supports this <em>access mode</em>.</p>
<blockquote>
<p>I tried to duplicate the disk and create it as second PVC with a different name. This worked, but now I don't see a way to tell the Kubernetes deployment, that each pod should mount one of these two PVCs.</p>
</blockquote>
<p>This does not work for a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> because the <code>template</code> for each pod is identical. But you can do this with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, declaring your PVC with <code>volumeClaimTemplates</code> - then the PVC for each pod has a <strong>unique, well-known identity</strong>.</p>
<p>Example part of <code>StatefulSet</code>:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: my-pvc
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-storage-class"
resources:
requests:
storage: 1Gi
</code></pre>
<p>Then, if you have two replicas of your StatefulSet, they will use PVCs named <code>my-pvc-0</code> and <code>my-pvc-1</code>, where the number is called the "ordinal". The <code>volumeClaimTemplate</code> only creates a new PVC if it does not exist, so if you have created PVCs with the correct names, the existing ones will be used.</p>
<h2>Alternative Storage Solutions</h2>
<p>An alternative storage solution to Azure Disk is Azure Files. Azure Files support <em>access mode</em> <code>ReadWriteOnce</code>, <code>ReadOnlyMany</code> and <code>ReadWriteMany</code>. See <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv" rel="nofollow noreferrer">Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS)</a>.</p>
<p>There may also be other storage alternatives that better fit your application.</p>
| Jonas |
<p>I'm trying to understand how kubernetes works, so I tried to do this operation for my minikube:</p>
<pre><code>~ kubectl delete pod --all -n kube-system
pod "coredns-f9fd979d6-5n4b6" deleted
pod "etcd-minikube" deleted
pod "kube-apiserver-minikube" deleted
pod "kube-controller-manager-minikube" deleted
pod "kube-proxy-879lg" deleted
pod "kube-scheduler-minikube" deleted
</code></pre>
<p>It's okay. Pods deleted as wish. But if I do <code>kubectl get pods -n kube-system</code> I will see:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-5d25r 1/1 Running 0 50s
etcd-minikube 1/1 Running 0 50s
kube-apiserver-minikube 1/1 Running 0 50s
kube-controller-manager-minikube 1/1 Running 0 50s
kube-proxy-nlw69 1/1 Running 0 43s
kube-scheduler-minikube 1/1 Running 0 49s
</code></pre>
<p>Okay. I thought it's ReplicaSet or DaemonSet:</p>
<pre><code>➜ ~ kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 18m
➜ ~ kubectl get rs -n kube-system
NAME DESIRED CURRENT READY AGE
coredns-f9fd979d6 1 1 1 18m
</code></pre>
<p>It is true for <code>coredns</code> and <code>kube-proxy</code>. But what about others (<code>apiserver</code>, <code>etcd</code>, <code>controller</code> and <code>scheduler</code>)? Why are they still alive?</p>
| Василий Никпуп | <p>The control plane pods are run as <a href="https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/" rel="noreferrer">static Pods</a> - static Pods are not managed by the control plane controllers like e.g. DaemonSet and ReplicaSet. <em>Static pods</em> are instead managed by the Kubelet daemon on the local node directly.</p>
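<p>The kubelet reads static Pod manifests from a directory on the node (with kubeadm and minikube this is typically <code>/etc/kubernetes/manifests</code>) and keeps them running; what you see and delete with <code>kubectl</code> are only the "mirror Pods", so the kubelet immediately recreates them. A heavily simplified sketch of such a manifest file (path, image tag and flags are illustrative - the real manifests contain many more options):</p>
<pre><code># /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: k8s.gcr.io/etcd:3.4.13-0
    command: ["etcd", "--data-dir=/var/lib/etcd"]
</code></pre>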
| Jonas |
<p>Assume that there are some pods from Deployments/StatefulSet/DaemonSet, etc. running on a Kubernetes node.</p>
<p>Then I restarted the node directly, and then start docker, start kubelet with the same parameters.</p>
<p>What would happen to those pods?</p>
<ol>
<li>Are they recreated with metadata saved locally from kubelet? Or use info retrieved from api-server? Or recovered from OCI runtime and behaves like nothing happened?</li>
<li>Is it that only stateless pod(no --local-data) can be recovered normally? If any of them has a local PV/dir, would they be connected back normally?</li>
<li>What if I did not restart the node for a long time? Would api-server assign other nodes to create those pods? What is the default timeout value? How can I configure this?</li>
</ol>
<p>As far as I know:</p>
<pre><code> apiserver
^
|(sync)
V
kubelet
^
|(sync)
V
-------------
| CRI plugin |(like api)
| containerd |(like api-server)
| runc |(low-level binary which manages container)
| c' runtime |(container runtime where containers run)
-------------
</code></pre>
<p>When kubelet received a PodSpec from kube-api-server, it calls CRI like a remote service, the steps be like:</p>
<ol>
<li>create PodSandbox(a.k.a 'pause' image, always 'stopped')</li>
<li>create container(s)</li>
<li>run container(s)</li>
</ol>
<p>So I <strong>guess</strong> that as the node and docker being restarted, steps 1 and 2 are already done, containers are at 'stopped' status; Then as kubelet being restarted, it pulls latest info from kube-api-server, found out that container(s) are not in 'running' state, so it calls CRI to run container(s), then everything are back to normal.</p>
<p>Please help me confirm.</p>
<p>Thank you in advance~</p>
| Li Ziyan | <p>Good questions. A few things first; a Pod is not pinned to a certain node. The nodes are mostly seen as a "server farm" that Kubernetes can use to run its workload. E.g. you give Kubernetes a set of nodes, and you also give it a set of desired-state objects, e.g. <code>Deployment</code>s - the applications that should run on your servers. Kubernetes is responsible for scheduling these Pods and also for keeping them running when something in the cluster changes.</p>
<p>Standalone pods are not managed by anything, so if a standalone Pod crashes it is not recovered. You typically want to deploy your stateless apps as <code>Deployments</code>, which then create <code>ReplicaSets</code> that manage a set of Pods - e.g. 4 Pods - instances of your app.</p>
<p>Your desired state - a <code>Deployment</code> with e.g. <code>replicas: 4</code> - is saved in the <strong>etcd</strong> database within the Kubernetes control plane.</p>
<p>Then a set of controllers for <code>Deployment</code> and <code>ReplicaSet</code> is responsible for keeping 4 replicas of your app alive. E.g. if a node becomes unresponsive (or dies), new pods will be created on other nodes, as long as they are managed by the controllers for <code>ReplicaSet</code>.</p>
<p>The <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">Kubelet</a> receives the PodSpecs that are scheduled to its node, and then keeps these pods alive with regular health checks.</p>
<blockquote>
<p>Is it that only stateless pod(no --local-data) can be recovered normally?</p>
</blockquote>
<p>Pods should be seen as ephemeral - they can disappear - but they are recovered by the controller that manages them, unless they are deployed as standalone Pods. So don't store local data within the pod.</p>
<p>There are also <code>StatefulSet</code> pods; those are meant for <em>stateful</em> workloads - but <em>distributed stateful workloads</em>, typically e.g. 3 pods that use <a href="https://raft.github.io/" rel="nofollow noreferrer">Raft</a> to replicate data. The etcd database is an example of a distributed database that uses Raft.</p>
| Jonas |
<p>I want to expose a tcp-only service from my Fargate cluster to the public internet on port 80. To achieve this I want to use an AWS Network Load Balancer</p>
<p>This is the configuration of my service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "30"
spec:
type: LoadBalancer
selector:
app: myapp
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>Using the service from inside the cluster with CLUSTER-IP works. When I apply my config with kubectl the following happens:</p>
<ul>
<li>Service is created in K8s</li>
<li>NLB is created in AWS</li>
<li>NLB gets Status 'active'</li>
<li>VPC and other values for the NLB look correct</li>
<li>Target Group is created in AWS</li>
<li>There are 0 targets registered</li>
<li>I can't register targets because group expects instances, which I do not have</li>
<li>EXTERNAL_IP is </li>
<li>Listener is not created automatically</li>
</ul>
<p>Then I create a listener for Port 80 and TCP. After some wait an EXTERNAL_IP is assigned to the service in AWS.</p>
<p>My Problem: It does not work. The service is not available using the DNS Name from the NLB and Port 80.</p>
| Daniel Müller | <p>The <em>in-tree</em> Kubernetes <code>Service</code> LoadBalancer for AWS, can not be used for AWS Fargate.</p>
<blockquote>
<p>You can use NLB instance targets with pods deployed to nodes, but not to Fargate.</p>
</blockquote>
<p>But you can now install <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller" rel="nofollow noreferrer">AWS Load Balancer Controller</a> and use <strong>IP Mode</strong> on your <code>Service</code> LoadBalancer, this also works for AWS Fargate.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: nlb-ip-svc
annotations:
# route traffic directly to pod IPs
service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
</code></pre>
<p>See <a href="https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/" rel="nofollow noreferrer">Introducing AWS Load Balancer Controller</a> and <a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html#load-balancer-ip" rel="nofollow noreferrer">EKS Network Load Balancer - IP Targets</a></p>
| Jonas |
<p>This is my kubernetes.yml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: servicetwo
labels:
name: servicetwo
namespace: sock-shop
spec:
replicas: 1
template:
metadata:
labels:
name: servicetwo
spec:
containers:
- name: servicetwo
image: nik/pythonserviceone
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: servicetwo
labels:
name: servicetwo
namespace: sock-shop
spec:
ports:
# the port that this service should serve on
- port: 5000
targetPort: 5000
nodePort: 30003
selector:
name: servicetwo
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: servicethree
labels:
name: servicethree
namespace: sock-shop
spec:
replicas: 1
template:
metadata:
labels:
name: servicethree
spec:
containers:
- name: servicetwo
image: nik/pythonservicetwo
ports:
- containerPort: 7000
---
apiVersion: v1
kind: Service
metadata:
name: servicethree
labels:
name: servicethree
namespace: sock-shop
spec:
ports:
# the port that this service should serve on
- port: 7000
targetPort: 7000
nodePort: 30002
selector:
name: servicethree
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: apigateway
labels:
name: apigateway
namespace: sock-shop
spec:
replicas: 1
template:
metadata:
labels:
name: apigateway
spec:
containers:
- name: apigateway
image: ni/aggregatornew
ports:
- containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
name: apigateway
labels:
name: apigateway
namespace: sock-shop
spec:
type: NodePort
ports:
- port: 9000
targetPort: 9000
nodePort: 30001
selector:
name: apigateway
---
</code></pre>
<p>I know this error is because of new version of kubernetes ,but I am unable to fix the issue ,when I change extensions/v1beta1 to apps/v1 I start getting error servicetwo not found while running <code>kubectl apply -f kubernets.yml</code>.With kuberentes 1.10 its running perfectly ,any help would be really appreciated thanks</p>
| gANDALF | <p>In addition to changing to <code>apps/v1</code>, you also need to add the <strong>new required field</strong> <code>selector:</code> in the <code>spec:</code></p>
<p>something like this should work for you:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: servicetwo
labels:
name: servicetwo
namespace: sock-shop
spec:
replicas: 1
  selector: # new required field
    matchLabels:
      name: servicetwo # must match the labels in the pod template
template:
metadata:
labels:
name: servicetwo
spec:
containers:
- name: servicetwo
image: nik/pythonserviceone
ports:
- containerPort: 5000
</code></pre>
| Jonas |
<p>I have microsevices and SPA app. All of them run on docker with docker compose. I have ocelot api gateway. But gateway knows ip address or container names of microservices for reaching . I add a aggregater service inside ocelot app. And I can reach to all services from aggregator service with ips.</p>
<p>But I want to move kubernates. I can scale services. there is no static ip. How can I configure .</p>
<p>I have identity service. This service knows clients ip addresses. Again same problem.</p>
<p>I searched for hours. I found some keywords. Envoy, Ingress, Consul, Ocelot . Can someone explain these things ?</p>
| eren arslan | <p>It sounds like your question is related to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#motivation" rel="nofollow noreferrer">Service Discovery</a>.</p>
<p>In Kubernetes, the native way an "API Gateway" is implemented, is by using <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress resources</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controllers</a>. If you use a cloud provider, they usually have a product for this, or you can use a custom Ingress Controller deployed within the cluster.</p>
<p><em>Service Discovery</em> the Kubernetes way is done by referring to <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service resources</a>; e.g. the Ingress resource maps URLs (in your public API) to Services. Your app is deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment resource</a>, and all replicas (instances) are exposed via a Service resource. An app can also send requests to other apps, and it should address those requests to the Service resource of the other app - there is no need for static IPs. The Service resource load balances across the replicas of the receiving app.</p>
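<p>As a rough sketch (the Service names are hypothetical), an Ingress that routes public traffic to the Service in front of your gateway could look like the following, while the gateway itself reaches the other apps simply by their Service names (e.g. <code>http://identity-service</code>):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ocelot-gateway   # hypothetical Service exposing your gateway pods
            port:
              number: 80
</code></pre>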
| Jonas |
<p>When I run <code>shutdown -h now</code> command to shutdown a node in kubernetes cluster, endpoint update its state after about 40 seconds, but when I run command <code>kubectl delete pod POD-NAME</code>, endpoint update its state very quick. Can anyone explain why?</p>
| Ren | <p>When you "shutdown" a node, you should do it gracefully with <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">kubectl drain</a>. This will evict the pods in a controlled manner and this should be more friendly to your traffic.</p>
<p>The article <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">Kubernetes best practices: terminating with grace</a> has a detailed description of all steps that happen when a Pod is <em>gracefully</em> terminated. For planned maintenance, use graceful shutdown - for unplanned maintenance you cannot do much.</p>
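<p>If your application needs time to finish in-flight requests, you can also give the Pod a longer grace period and a <code>preStop</code> hook - a minimal sketch (image name and timings are illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  terminationGracePeriodSeconds: 60   # default is 30 seconds
  containers:
  - name: app
    image: my-app:1.0
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]  # give endpoint removal time to propagate
</code></pre>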
| Jonas |
<p>After creating the pod-definition.yml file.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
type: server
spec:
containers:
- name: nginx-container
image: nginx
</code></pre>
<p>The linter is giving this warning.</p>
<p><code>One or more containers do not have resource limits - this could starve other processes</code></p>
| Rushikesh Sabde | <p>It is a good practice to declare resource requests and limits for both <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="noreferrer">memory</a> and <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="noreferrer">cpu</a> for each container. This helps the scheduler place the container on a node that has the resources available for your Pod, and also ensures that your Pod does not use resources that other Pods need - hence the <em>"this could starve other processes"</em> message.</p>
<p>E.g. to add resource requests and limits to your example</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
type: server
spec:
containers:
- name: nginx-container
image: nginx
resources:
limits:
memory: 512Mi
cpu: "1"
requests:
memory: 256Mi
cpu: "0.2"
</code></pre>
| Jonas |
<p>We have setup with two Kubernetes VMs on Ubuntu Server, a master and a worker. The cluster appears healthy, I'm still learning my way around. I know you can build an image and push it to docker hub and then pull that image down from the Kubernetes server.</p>
<p>My trouble is everything I'm learning says you use a .yaml file to create the deployment. I have written the yaml pod and service file which are on my local machine. However when I ssh into the Kubernetes master it doesn't know how to find files on my local machine nor do I know how to tell it. Mind you I'm also pretty new to the Linux shell.</p>
<p>I have tried putting the yaml files up in git hub and google drive but this didn't work for me. Can I push my yaml files into docker hub? My class didn't go over this because the instructor uses Minikube instead of a full blown VM on a separate host. So he just typed <code>kubectl apply -f <filename.yaml></code> and it it took the file from his current working directory and pushed it into Kubernetes. So that won't work for me since have to remote in to the server(terminal) first. Thanks for any tips.</p>
| Lucas | <blockquote>
<p>when I ssh into the Kubernetes master it doesn't know how to find files on my local machine nor do I know how to tell it</p>
</blockquote>
<p>The proper way to interact with your Kubernetes cluster is to send your requests to the API server. The API server exposes a REST API, but it is easiest to use <code>kubectl</code> as your client tool.</p>
<p>When you set up your cluster, you should have received a "kubeconfig" - this is the configuration that your local <code>kubectl</code> client uses to authenticate to your cluster.</p>
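<p>A kubeconfig is just a YAML file (by default <code>~/.kube/config</code> on your local machine) that tells <code>kubectl</code> which API server to talk to and how to authenticate - roughly shaped like this (all values are placeholders):</p>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://MASTER_IP:6443
    certificate-authority-data: BASE64_CA_CERT
users:
- name: my-user
  user:
    client-certificate-data: BASE64_CLIENT_CERT
    client-key-data: BASE64_CLIENT_KEY
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
</code></pre>
<p>With that in place you can run <code>kubectl apply -f filename.yaml</code> from your local working directory, without copying the files to the master.</p>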
| Jonas |
<p>I want to use serving api which is the part of the knative serving repo to create serving application. Since i'm writing a custom controller, i need to make use of Go client. I'm finding it difficult to generate boiler plate code using the code-generator. I'm following the below mentioned blog on how to do it.</p>
<ol>
<li><a href="https://insujang.github.io/2020-02-13/programming-kubernetes-crd/#write-template-code" rel="nofollow noreferrer">https://insujang.github.io/2020-02-13/programming-kubernetes-crd/#write-template-code</a></li>
<li><a href="https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/" rel="nofollow noreferrer">https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/</a></li>
</ol>
<p>But i couldn't achieve it. Any help is appreciated.</p>
| coders | <p>Yes, code generation for controllers is not the most easy thing. And it has changed over the years.</p>
<p>To start writing a controller with code generation, I would recommend to use Kubebuilder and follow the <a href="https://kubebuilder.io/" rel="nofollow noreferrer">Kubebuilder guide</a>. And perhaps do custom things when that is understood.</p>
<p>The Kubebuilder guide includes chapters on how to generate CRD code using <a href="https://kubebuilder.io/reference/generating-crd.html" rel="nofollow noreferrer">controller-gen</a>.</p>
| Jonas |
<p>I have a service using Azure Kubernetes cluster and AKS load balancer.
I want to forward some HTTP(client) requests to all instances.
is there any way to configure this behavior using AKS or Kubernetes in general?</p>
<p>Say I have XYZ API running two replicas/instances.</p>
<ul>
<li>XYZ-1 pod instance</li>
<li>XYZ-2 pod instance</li>
</ul>
<p>I have some rest API requests to the app domain.com/testendpoint</p>
<p>Currently, using AKS load balancer it sends the requests in a round-robin fashion to XYZ-1 and XYZ-2. I am looking to see if it is possible to forward a request to both instances (XYZ-1 and XYZ-2) when the request endpoint is <code>testendpoint</code> and all other API requests use the same round-robin order.</p>
<p>The use case to refresh a service in-memory data via a rest call once a day or twice and the rest call will be triggered by another service when needed. so want to make sure all pod instances update/refresh in-memory data by an HTTP request.</p>
| vkt | <blockquote>
<p>if it is possible to forward a request to both instances (XYZ-1 and XYZ-2) when the request endpoint is testendpoint</p>
</blockquote>
<p>This is not a feature of the HTTP protocol, so you need a purpose-built service to handle this fan-out.</p>
<blockquote>
<p>The use case to refresh a service in-memory data via a rest call once a day or twice and the rest call will be triggered by another service when needed. so want to make sure all pod instances update/refresh in-memory data by an HTTP request.</p>
</blockquote>
<p>I suggest that you create a new utility service, "update-service" - that you send the call once a day to. This service then makes a request to every instance of XYZ, like <code>XYZ-1</code> and <code>XYZ-2</code>.</p>
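<p>One way for such a utility service to find every XYZ pod (an assumption about your setup - the names are illustrative) is to add a <em>headless</em> Service for XYZ; a DNS lookup on it returns the individual pod IPs instead of a single load-balanced IP, so the update-service can call each pod in turn:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: xyz-headless
spec:
  clusterIP: None      # headless: DNS returns all pod IPs
  selector:
    app: xyz           # must match the labels on the XYZ pods
  ports:
  - port: 80
    targetPort: 80
</code></pre>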
| Jonas |
<p>Complex AWS EKS / ENI / Route53 issue has us stumped. Need an expert.</p>
<p>Context:</p>
<p>We are working on dynamic game servers for a social platform (<a href="https://myxr.social" rel="nofollow noreferrer">https://myxr.social</a>) that transport game and video data using WebRTC / UDP SCTP/SRTP via <a href="https://MediaSoup.org" rel="nofollow noreferrer">https://MediaSoup.org</a></p>
<p>Each game server will have about 50 clients
Each client requires 2-4 UDP ports</p>
<p>Our working devops strategy
<a href="https://github.com/xr3ngine/xr3ngine/tree/dev/packages/ops" rel="nofollow noreferrer">https://github.com/xr3ngine/xr3ngine/tree/dev/packages/ops</a></p>
<p>We are provisioning these game servers using Kubernetes and <a href="https://agones.dev" rel="nofollow noreferrer">https://agones.dev</a></p>
<p>Mediasoup requires each server connection to a client be assigned individual ports. Each client will need two ports, one for sending data and one for receiving data; with a target maximum of about 50 users per server, this requires 100 ports per server be publicly accessible.</p>
<p>We need some way to route this UDP traffic to the corresponding gameserver. Ingresses appear to primarily handle HTTP(S) traffic, and configuring our NGINX ingress controller to handle UDP traffic assumes that we know our gameserver Services ahead of time, which we do not since the gameservers are spun up and down as they are needed.</p>
<p>Questions:</p>
<p>We see two possible ways to solve this problem.</p>
<p>Path 1</p>
<p>Assign each game server in the node group public IPs and then allocate ports for each client. Either IP v4 or v6. This would require SSL termination for IP ports in AWS. Can we use ENI and EKS to dynamically create and provision IP ports for each gameserver w/ SSL? Essentially expose these pods to the internet via a public subnet with them each having their own IP address or subdomain. <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html</a> We have been referencing this documentation trying to figure out if this is possible.</p>
<p>Path 2</p>
<p>Create a subdomain (eg gameserver01.gs.xrengine.io, etc) dynamically for each gameserver w/ dynamic port allocation for each client (eg client 1 [30000-30004], etc). This seems to be limited by the ports accessible in the EKS fleet.</p>
<p>Are either of these approaches possible? Is one better? Can you give us some detail about how we should go about implementation?</p>
| Liam Collins | <p>The native way for receiving <a href="https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-eks-now-supports-udp-load-balancing-with-network-load-balancer/" rel="nofollow noreferrer">UDP traffic on Amazon EKS</a> is by using a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Service of type Loadbalancer</a> with an extra annotation to get the <em>NLB</em>.</p>
<p>Example</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-game-app-service
  annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
selector:
app: my-game-app
ports:
- name: outgoing-port # choose your name
protocol: UDP
port: 9000 # choose your port
- name: incoming-port # choose your name
protocol: UDP
port: 9001 # choose your port
type: LoadBalancer
</code></pre>
| Jonas |
<p>Assume I have a web application (Apache httpd server) deployed in AWS using EC2 instances (VM). Application deployment is performed using EC2 userdata.</p>
<p>Alternatively I could dockerize my web application. Deploy a Kubernetes cluster on EC2 instances using EKS, or custom setup. We could also use AWS Fargate for serverless feature.</p>
<p>What are the pros and cons to use second approach with Kubernetes here?</p>
| scoulomb | <h1>EC2 - more responsibility for Developers</h1>
<p>If you as a developer deploy your application to EC2 machines, you usually also are responsible for maintaining and patching the EC2 instances. The problem is that these are things that developers usually are not good at, and commonly not very interested in. Monitoring and patching Linux machines or troubleshooting networking is not their expertise.</p>
<h1>Kubernetes - less responsibility for Developers</h1>
<p>With Kubernetes, you as a developer are responsible only for the application container and that your app is healthy. But another team, e.g. a platform team may be responsible for the underlying infrastructure, e.g. EC2 instances and Networking. Or as with Fargate, the cloud provider can be responsible for this.</p>
<h2>Cognitive Load</h2>
<p>Making the Developers responsible for less, but still having APIs for self-service deployment, makes them very efficient.</p>
<h2>Need for a Platform Team</h2>
<p>But when starting to use Kubernetes as a platform, you are taking on more complexity. You need to be a large enough organization for this. Unless you use higher level services like e.g. <a href="https://cloud.google.com/run" rel="nofollow noreferrer">Google Cloud Run</a>.</p>
<p>A good talk about all this is <a href="https://www.infoq.com/presentations/kubernetes-adoption-foundation/" rel="nofollow noreferrer">Kubernetes is Not Your Platform, It's Just the Foundation</a></p>
| Jonas |
<h1>Problem</h1>
<ul>
<li>We write config files using Terraform for both our Kubernetes Cluster or Apps</li>
<li>Some of these files must be pushed to different git repos
<ul>
<li>Just following GitOps for kubernetes and dynamic config repos</li>
</ul>
</li>
</ul>
<h1>Question</h1>
<ul>
<li>How can I perform a git clone, commit, push using terraform?
<ul>
<li>Should we just use shell?</li>
<li>Is there any provider other than <a href="https://github.com/ilya-lesikov/terraform-provider-gitfile" rel="nofollow noreferrer">https://github.com/ilya-lesikov/terraform-provider-gitfile</a>?
<ul>
<li>It's very close to what I have, but it hasn't been supported nor it supports the use cases I'm looking for.</li>
</ul>
</li>
</ul>
</li>
</ul>
<h1>So far, I have the following:</h1>
<ul>
<li>Generate the configs:</li>
</ul>
<pre><code># https://stackoverflow.com/questions/36629367/getting-an-environment-variable-in-terraform-configuration/36672931#36672931
variable GITLAB_CLONE_TOKEN {}
locals {
carCrdInstance = {
apiVersion = "car.io/v1"
kind = "Car"
metadata = {
name = "super-car"
}
spec = {
convertible = "true"
color = "black"
}
}
# https://docs.gitlab.com/ee/user/project/deploy_tokens/#git-clone-a-repository
clone_location = "${path.module}/.gitops"
branch = "feature/crds-setup"
}
resource "null_resource" "git_clone" {
provisioner "local-exec" {
command = "git clone --branch ${local.branch} https://${var.username}:${var.GITLAB_CLONE_TOKEN}@gitlab.example.com/tanuki/awesome_project.git ${local.clone_location}"
}
}
resource "local_file" "cert_manager_cluster_issuer_object" {
content = yamlencode(local.cert_issuer)
filename = "${git_repo.configs.destination}/crds/instances/white-convertible.yaml"
# https://stackoverflow.com/questions/52421656/terraform-execute-script-before-lambda-creation/52422595#52422595
depends_on = ["null_resource.git_clone"]
# https://stackoverflow.com/questions/7149984/how-do-i-execute-a-git-command-without-being-in-the-repository/35899275#35899275
provisioner "local-exec" {
command = "git -C ${local.clone_location} commit -am ':new: updating cars...'"
}
provisioner "local-exec" {
command = "git -C ${local.clone_location} push origin ${local.branch}'"
}
}
</code></pre>
<h1>Is there anything like that?</h1>
<ul>
<li>I haven't tested this above, but I'm looking for something that allows me to do that</li>
</ul>
| Marcello DeSales | <blockquote>
<p>How can I perform a git clone, commit, push using terraform?</p>
</blockquote>
<blockquote>
<p>Should we just use shell?</p>
</blockquote>
<p>Terraform is a good tool - it is best for <em>provisioning immutable infrastructure</em>. Shell scripts might also have their place, but when you can, it is preferable to use a more declarative approach.</p>
<p>What you describe with "git clone, commit, push" is essentially some of the steps that are commonly done in something like a Build or Deployment Pipeline. Terraform might be a good tool to use in some of the steps, but it is not the best tool to orchestrate the full workflow, in my point of view.</p>
<p>A tool made for orchestrating pipeline workflows might be best for this, like e.g.</p>
<ul>
<li><a href="https://github.com/tektoncd/pipeline" rel="nofollow noreferrer">Tekton Pipelines</a> - with Tasks for Git and <a href="https://github.com/tektoncd/catalog/tree/master/task/terraform-cli/0.1" rel="nofollow noreferrer">Terraform</a> to be used as steps in a workflow.</li>
<li><a href="https://github.com/argoproj/argo" rel="nofollow noreferrer">Argo Workflows</a></li>
<li><a href="https://github.com/features/actions" rel="nofollow noreferrer">GitHub Actions</a></li>
<li>And possibly <a href="https://www.terraform.io/cloud" rel="nofollow noreferrer">Terraform Cloud</a> (haven't used it, can't say if it can do exact what you ask)</li>
</ul>
| Jonas |
<p>I am Dockerizing the Asp.net core application and how do I read configmap & secret in asp.net core application?</p>
| One Developer | <p>When designing an app for Kubernetes, you should usually follow the <a href="https://12factor.net/" rel="nofollow noreferrer">12 factor app</a> guidelines.</p>
<p>It is common that the app reads config values as environment variables, but sometimes also as files. The Kubernetes documentation has good examples on how to use <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#configmaps-and-pods" rel="nofollow noreferrer">ConfigMaps for Pods</a> to read the values as env variables or files.</p>
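<p>For example, a ConfigMap can be exposed to the container as environment variables, which ASP.NET Core then picks up through its environment-variable configuration provider (a double underscore in the key maps to the <code>:</code> hierarchy). A minimal sketch with illustrative names:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  Logging__LogLevel__Default: "Information"   # read by the app as Logging:LogLevel:Default
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myregistry/myapp:1.0   # illustrative image
    envFrom:
    - configMapRef:
        name: myapp-config
</code></pre>
<p>Secrets work the same way; use <code>secretRef:</code> under <code>envFrom:</code> instead of <code>configMapRef:</code>.</p>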
| Jonas |
<p>Is there a way to get the replica count at runtime within a pod that belongs to a StatefulSet?
I have verified that the following answer <a href="https://stackoverflow.com/a/54306021/5514015">https://stackoverflow.com/a/54306021/5514015</a> gives you the hostname of the pod, which includes the pod ordinal number, at runtime, but I would also like to know the number of replicas configured in the StatefulSet. Is this possible to determine?</p>
| jim | <p>Unfortunately, the number of replicas is not available through the <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api" rel="nofollow noreferrer">Downward API</a>. But in a <code>StatefulSet</code>, as you say, it is common to need this number.</p>
<p>Proposed ways to get the number:</p>
<ul>
<li>Implement this within your app so that the replicas coordinate among themselves, and perhaps can find it out through their <em>ordinal</em> identities.</li>
<li>Alternatively let your app communicate with the Kubernetes API Server using e.g. <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a>, but this ties your application to Kubernetes.</li>
</ul>
<p>For reliability, you might want to design your app in a way so that it can work (at least for some time) without the Kubernetes control plane being available, so I would recommend implementing this without the Kubernetes API, i.e. the first solution above.</p>
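<p>A pragmatic way to implement the first option (a sketch, assuming you control the manifest - names are illustrative) is to pass the replica count explicitly as an environment variable in the same manifest that sets <code>replicas</code>, so both values are maintained in one place:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0          # illustrative image
        env:
        - name: REPLICA_COUNT
          value: "3"               # keep in sync with spec.replicas above
</code></pre>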
| Jonas |
<p>I have TS application that uses kubernetes-client library to connect to kubernetes in Google Cloud.</p>
<pre><code>import { KubeConfig, CoreV1Api } from "@kubernetes/client-node";
const kc = new KubeConfig();
kc.loadFromDefault();
const kbsCoreApi = kc.makeApiClient(CoreV1Api);
</code></pre>
<p>When running locally works great, but when I dockerize it, it does not work due to not knowing how to load configuration. I tried to create a config file and load it using "kc.loadFromFile('~/some/path')" instead but It seems that I am missing something it gives me a HTTP error. Here is my configuration file.</p>
<pre><code> {
"kind": "Config",
"apiVersion": "v1",
"clusters": [
{
"name": "cluster1",
"cluster": {
"certificate-authority-data": "cert-data",
"server": "https://128.1.1.2"
}
}
],
"users": [
{
"name": "cluster1",
"user": {
"password": "myPassword",
"usernmae": "myUsername"
}
}
],
"contexts": [
{
"name": "cluster1",
"context": {
"cluster": "cluster1",
"user": "cluster1"
}
}
],
"current-context": "cluster1"
}
</code></pre>
| Gus G | <p>Use <code>kc.loadFromCluster();</code> instead of the <code>kc.loadFromDefault();</code> used in your code.</p>
<p>See <a href="https://github.com/kubernetes-client/javascript/blob/master/examples/in-cluster.js" rel="noreferrer">in-cluster example</a></p>
<p>When starting the client with the in-cluster authentication, it will use the <a href="https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration" rel="noreferrer">ServiceAccount token</a> on <code>/var/run/secrets/kubernetes.io/serviceaccount</code></p>
<p>Also make sure that this ServiceAccount has the RBAC permissions for the resource operations that your client code uses. If the permissions are missing, you should at least get clear authorization error messages back from the API server.</p>
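<p>For example, if your client only needs to read Pods in its own namespace, a Role and RoleBinding like the following would be enough (a sketch - the namespace and the ServiceAccount name depend on how your Pod is deployed):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: default        # the ServiceAccount your Pod runs as
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>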
| Jonas |
<p>Both server POD and client POD runs in a K8S cluster.
I have configured my Server POD to Scale up when its memory usage reaches a threshold.</p>
<p>Now I want to increase number of threads in my client POD whenever a new server POD is spawned.
How to catch this auto scale scaleUp event in the client POD ?</p>
| Chandu | <blockquote>
<p>Now I want to increase number of threads in my client POD whenever a new server POD is spawned. How to catch this auto scale scaleUp event in the client POD ?</p>
</blockquote>
<p>An application typically should not know that it is running within Kubernetes, e.g. it should be agnostic to this kind of information.</p>
<p>But it is fully possible to create an application (e.g. your client?) that knows about the Kubernetes environment, and what happens in the environment. You need to interact with the API server to get this information; this is typically done by using <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> (if using Golang), but there are libraries for other languages as well, e.g. the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Kubernetes Java client</a>. You can e.g. watch for events or the number of replicas of your server.</p>
| Jonas |
<p>From the document, it can be found there is a <code>Stable Network ID</code> feature to use for <code>Pod NDS</code>:</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id</a></p>
<p>I tried to do</p>
<h1>service.yaml</h1>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
labels:
app: logstash
name: logstash
spec:
selector:
app: logstash
ports:
- name: "5044"
port: 5044
targetPort: 5044
</code></pre>
<h1>statefulset.yaml</h1>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: logstash
spec:
serviceName: "logstash"
selector:
matchLabels:
app: logstash
updateStrategy:
type: RollingUpdate
replicas: 2
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:7.10.0
resources:
limits:
memory: 2Gi
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
command: ["/bin/sh","-c"]
args:
- bin/logstash -f /usr/share/logstash/pipeline/logstash.conf;
volumes:
- name: config-volume
configMap:
name: configmap-logstash
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: configmap-logstash
items:
- key: logstash.conf
path: logstash.conf
</code></pre>
<h1>configmap.yaml</h1>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: configmap-filebeat
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/lib/nginx/access.json
output.logstash:
hosts: ["logstash-0.logstash.default.svc.cluster.local:5044", "logstash-1.logstash.default.svc.cluster.local:5044"]
loadbalance: true
</code></pre>
<p>Filebeat is deploying into a daemonset use this configuration.</p>
<p>It can't work. From the filebeat's log got:</p>
<pre><code>2020-12-22T02:10:34.395Z WARN [transport] transport/tcp.go:52 DNS lookup failure "logstash-1.logstash.default.svc.cluster.local": lookup logstash-1.logstash.default.svc.cluster.local: no such host
</code></pre>
<p>If use this config, it can work:</p>
<pre><code> output.logstash:
hosts: ["logstash.default.svc.cluster.local:5044"]
</code></pre>
<p>Why it caused <code>DNS lookup failure</code> issue when use <code>Pod DNS</code> format? Is there any more conditions to use this feature? Then how to do?</p>
| iooi | <p>Also note from the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">StatefulSet documentation</a>:</p>
<blockquote>
<p>As mentioned in the limitations section, you are responsible for creating the Headless Service responsible for the network identity of the pods.</p>
</blockquote>
<p>So you are responsible for creating the headless governing Service yourself, so that the per-pod DNS records such as <code>logstash-0.logstash.default.svc.cluster.local</code> and <code>logstash-1.logstash.default.svc.cluster.local</code> are created.</p>
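<p>In your case that means the <code>logstash</code> Service referenced by <code>serviceName: "logstash"</code> must have <code>clusterIP: None</code>, otherwise the per-pod DNS records are not created. For example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: logstash
  labels:
    app: logstash
spec:
  clusterIP: None       # headless - enables logstash-0.logstash... and logstash-1.logstash...
  selector:
    app: logstash
  ports:
  - name: beats
    port: 5044
    targetPort: 5044
</code></pre>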
| Jonas |
<p>Let's suppose that I have such code to generate new pod</p>
<pre><code>req := &api.Pod{
TypeMeta: unversioned.TypeMeta{
Kind: "Pod",
APIVersion: "v1",
},
ObjectMeta: api.ObjectMeta{
        GenerateName: "name-", // need to get that name, before creating an object
},
Spec: api.PodSpec{
Containers: []api.Container{
{
Name: "nginx",
Image: "nginx",
Env: []corev1.EnvVar{} // pass here the generated name,
},
},
},
}
...
// Do some work on the generated name, before creating the resource in Kubernetes cluster
...
err := client.Create(context.Background(), req)
</code></pre>
<p>Is it possible to get that generated name before creating an object? Or is it possible to store that generated name in the env of the same object?</p>
| xaos_xv | <p>The generated name seems to be created by the API server when the object is created, so it is not available beforehand. See <a href="https://github.com/kubernetes/kubernetes/issues/44501#issuecomment-294255660" rel="nofollow noreferrer">Issue comment</a> and <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#generated-values" rel="nofollow noreferrer">Kubernetes API Concepts - Generated values</a>.</p>
<p>It is recommended not to depend on it. Typically, labels and selectors are more common in the Kubernetes ecosystem.</p>
<pre><code>Env: []corev1.EnvVar{} // pass here the generated name,
</code></pre>
<p>You can use the Downward API for this.
Example:</p>
<pre><code> env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
| Jonas |
<p>I would like to enable mTLS between services in one K8S namespace. I wonder if I can do it without using service mesh? I considered cert-manager but all the examples I've seen involved Ingress resource which I do not need as my services are not exposed outside of the cluster.Thanks</p>
| Revital Eres | <p>Using <em>Service Mesh</em> like <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> or <a href="https://linkerd.io/" rel="nofollow noreferrer">Linkerd</a> for this is currently the only general solution for this.</p>
<p>It should be possible to do this using a library for your app as well; the library would typically need to support certificate management. <em>Service Meshes</em> typically use EnvoyProxy as a sidecar, which has implemented novel "control plane" APIs for management, called the <a href="https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol" rel="nofollow noreferrer">xDS protocols</a> - this is something that your library would typically need to implement as well. In addition you need a control plane interface to manage services.</p>
<p>A drawback of doing this in a library is that it is language-dependent. The upside is that it performs better, since no extra network hop through a proxy is needed.</p>
<p>Google has recently taking this route with <a href="https://cloud.google.com/traffic-director/docs/proxyless-overview" rel="nofollow noreferrer">Traffic Director - proxyless service mesh</a></p>
| Jonas |
<p>I created a StatefulSet for running my NodeJS with 3 replicas and want to attach to a gce disk that can become a data storage for user to upload files.</p>
<p>My project naming: carx; Server name: car-server</p>
<p><strong>However I got an error while creating the second pod.</strong></p>
<pre><code>kubectl describe pod car-server-statefulset-1
</code></pre>
<blockquote>
<p>AttachVolume.Attach failed for volume "my-app-data" : googleapi: Error
400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE - The disk resource
'projects/.../disks/carx-disk' is already being used by
'projects/.../instances/gke-cluster-...-2dw1'</p>
</blockquote>
<hr />
<p><strong>car-server-statefulset.yml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: car-server-service
labels:
app: car-server
spec:
ports:
- port: 8080
name: car-server
clusterIP: None
selector:
app: car-server
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: car-server-statefulset
spec:
serviceName: "car-server-service"
replicas: 3
template:
metadata:
labels:
app: car-server
spec:
containers:
- name: car-server
image: myimage:latest
ports:
- containerPort: 8080
name: nodejs-port
volumeMounts:
- name: my-app-data
mountPath: /usr/src/app/mydata
volumes:
- name: my-app-data
persistentVolumeClaim:
claimName: example-local-claim
selector:
matchLabels:
app: car-server
</code></pre>
<hr />
<p><strong>pvc.yml</strong></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: example-local-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
storageClassName: standard
</code></pre>
<hr />
<p><strong>pv.yml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: my-app-data
labels:
app: my-app
spec:
capacity:
storage: 60Gi
storageClassName: standard
accessModes:
- ReadWriteMany
gcePersistentDisk:
pdName: carx-disk
fsType: ext4
</code></pre>
| potato | <p>The <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Access Mode</a> field is treated as a <em>request</em>, but it is not guaranteed that you get what you request. In your case, a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">GCEPersistentDisk</a> only supports <code>ReadWriteOnce</code> or <code>ReadOnlyMany</code>.</p>
<p>Your PV is now mounted as <code>ReadWriteOnce</code>, so it can only be attached to <strong>one</strong> node at a time. The other replicas will therefore fail to mount the volume.</p>
<p>When using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, it is common that each replica uses its own volume; use the <code>volumeClaimTemplates:</code> part of the <code>StatefulSet</code> manifest for that.</p>
<p>Example:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: example-claim
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 5Gi
</code></pre>
<p>In case you can only use a single volume, you may consider running the <code>StatefulSet</code> with only one replica, e.g. <code>replicas: 1</code>.</p>
<p>If you want disk replication, you can use a <em>StorageClass</em> for <em>regional disks</em> that are replicated to another AZ as well. See <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd" rel="nofollow noreferrer">Regional Persistent Disk</a> - but note that it still has the same <em>access modes</em>.</p>
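<p>A sketch of such a StorageClass (the parameters follow the GKE documentation for the in-tree GCE PD provisioner - verify them against your cluster version):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd
</code></pre>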
| Jonas |
<p>Is it possible to mount disk to gke pod and compute engine at the same time.</p>
<p>I have a ubunut disk of 10 gb</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-demo
spec:
capacity:
storage: 10G
accessModes:
- ReadWriteOnce
claimRef:
name: pv-claim-demo
gcePersistentDisk:
pdName: pv-test1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim-demo
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10G
</code></pre>
<p>deploment.yaml</p>
<pre><code>spec:
containers:
- image: wordpress
name: wordpress
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /app/logs
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: pv-claim-demo
</code></pre>
<p>The idea is to mount the logs files generated by pod to disk and access it from compute engine.
I cannot use NFS or hostpath to solve the problem. The other challenge is multiple pod will be writting to same pv.</p>
| pythonhmmm | <blockquote>
<p>The other challenge is multiple pod will be writing to same PV.</p>
</blockquote>
<p>Yes, this does not work well, unless you have a storage system similar to NFS. The default <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">storageClass</a> in Google Kubernetes Engine only supports the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a> <code>ReadWriteOnce</code> when dynamically provisioned - so only one replica can mount it.</p>
<blockquote>
<p>The idea is to mount the logs files generated by pod to disk and access it from compute engine.</p>
</blockquote>
<p>This is not a recommended solution for logs when using Kubernetes. An app on Kubernetes should follow the <a href="https://12factor.net/" rel="nofollow noreferrer">12 factor principles</a>, and for this problem there is a specific item about <a href="https://12factor.net/logs" rel="nofollow noreferrer">logs</a> - the app should log to <code>stdout</code>. For apps that do not follow the 12 factor principles, this can be solved by a <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-sidecar-container-with-the-logging-agent" rel="nofollow noreferrer">sidecar that tails the log files</a> and then prints them on <code>stdout</code>.</p>
<p>Logs that are printed to <code>stdout</code> are typically forwarded by the platform to a log collection system - as a service. So this is not something the app developer needs to be responsible for.</p>
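<p>A sketch of such a sidecar (image and paths are illustrative) that shares the log directory through an <code>emptyDir</code> volume and streams the file to its own <code>stdout</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0                  # illustrative application image
    volumeMounts:
    - name: logs
      mountPath: /app/logs
  - name: log-tailer
    image: busybox:1.35
    command: ["sh", "-c", "tail -n+1 -F /app/logs/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /app/logs
  volumes:
  - name: logs
    emptyDir: {}
</code></pre>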
<p>For how logs is handled by the platform in Google Kubernetes Engine, see <a href="https://cloud.google.com/stackdriver/docs/solutions/gke" rel="nofollow noreferrer">Google Cloud Operations suite for GKE</a></p>
| Jonas |