Question | QuestionAuthor | Answer | AnswerAuthor |
---|---|---|---|
<p>I'm almost certain this must be a duplicate or at least a solved problem, but I could not find what I was after by searching the many k8s communities.</p>
<p>We have jobs that run for between a minute and many hours. Given that we assign them resource values that afford them QOS Guaranteed status, how could we minimize resource waste across the nodes?</p>
<p>The problem is that downscaling rarely happens, because each node eventually gets assigned one of the long running jobs. They are not common, but they keep all of the nodes running, even when we have no need for them.</p>
<p>The dumb strategy that seems to avoid this would be a depth-first scheduling algorithm, wherein among nodes that have capacity, the one already most filled will be assigned. In other words, if we have two nodes, one at 90% cpu/memory usage and one at 10%, the 90% node would always be assigned first provided it has sufficient capacity.</p>
<p>Open to any input here and/or ideas. Thanks kindly.</p>
| A D | <p>As of now there seems to be this <a href="https://kubernetes.io/docs/reference/scheduling/profiles/#scheduling-plugins" rel="nofollow noreferrer">kube-scheduler profile plugin</a>:</p>
<blockquote>
<p><strong>NodeResourcesMostAllocated</strong>: Favors nodes that have a high allocation of resources. </p>
</blockquote>
<p>But it has been in alpha stage since k8s v1.18, so it is probably not safe to use in production.</p>
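<p>If you still want to experiment with it, a rough sketch of a <code>KubeSchedulerConfiguration</code> enabling that plugin could look like the following (this assumes the v1.18 alpha config API, so the apiVersion and field names may differ in other releases, and the profile name is just an example):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: most-allocated-scheduler   # example profile name
    plugins:
      score:
        enabled:
          - name: NodeResourcesMostAllocated  # pack pods onto the fullest nodes
        disabled:
          - name: NodeResourcesLeastAllocated # turn off the default spreading score
</code></pre>
<p>Pods would then need to set <code>schedulerName: most-allocated-scheduler</code> in their spec to be scored by this profile.</p>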
<hr>
<p>There is also this parameter you can set for kube-scheduler that I have found <a href="https://kubernetes.io/docs/reference/scheduling/#priorities" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p><strong>MostRequestedPriority</strong>: Favors nodes with most requested resources. This policy will fit the scheduled Pods onto the smallest number of Nodes needed to run your overall set of workloads.</p>
</blockquote>
<p>and <a href="https://version.cs.vt.edu/aakashg/jupyterhub/blob/7e14c702626c64e61a78ac1e90a2e510684a5a77/jupyterhub/templates/scheduling/user-scheduler/configmap.yaml" rel="nofollow noreferrer">here is an example</a> on how to configure it.</p>
<hr>
<p>One last thing that comes to my mind is using <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">node affinity</a>.
Using nodeAffinity on long-running pods (more specifically with <code>preferredDuringSchedulingIgnoredDuringExecution</code>) will prefer to schedule these pods on the nodes that run all the time, and prefer not to schedule them on nodes that are being autoscaled. This approach requires excluding some nodes from autoscaling and labeling them appropriately so that the scheduler can make use of node affinity.</p>
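<p>A minimal sketch of that preference on a long-running workload could look like this (the <code>node-type=static</code> label, the Job name and the image are purely illustrative assumptions, not from the question):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: long-running-job            # illustrative name
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-type      # assumed label carried by the always-on nodes
                operator: In
                values:
                - static
      containers:
      - name: worker
        image: busybox              # placeholder image
        command: ["sh", "-c", "echo working; sleep 3600"]
      restartPolicy: Never
</code></pre>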
| Matt |
<p>I hope you can shed some light on this.</p>
<p>I am facing the same issue as described here: <a href="https://stackoverflow.com/questions/60959284/kubernetes-deployment-not-scaling-down-even-though-usage-is-below-threshold">Kubernetes deployment not scaling down even though usage is below threshold</a></p>
<p>My configuration is almost identical.</p>
<p>I have checked the HPA algorithm, but I cannot find an explanation for the fact that I have only one replica of my-app3.
Any hints?</p>
<pre>kubectl get hpa -A
NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
my-ns1 my-app1 Deployment/my-app1 49%/75%, 2%/75% 1 10 2 20h
my-ns2 my-app2 Deployment/my-app2 50%/75%, 10%/75% 1 10 2 22h
my-ns2 my-app3 Deployment/my-app3 47%/75%, 10%/75% 1 10 1 22h
</pre>
<pre>kubectl top po -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
my-ns1 pod-app1-8d694bc8f-mkbrh 1m 76Mi
my-ns1 pod-app1-8d694bc8f-qmlnw 1m 72Mi
my-ns2 pod-app2-59d895d96d-86fgm 1m 77Mi
my-ns2 pod-app2-59d895d96d-zr67g 1m 73Mi
my-ns2 pod-app3-6f8cbb68bf-vdhsd 1m 47Mi
</pre>
| Catalin | <p>Posting this answer as it could be beneficial for community members to understand why exactly <code>Horizontal Pod Autoscaler</code> decided <strong>not</strong> to scale the number of replicas in this particular setup.</p>
<p>The formula for the number of replicas a workload will have is:</p>
<blockquote>
<p>desiredReplicas = ceil[<strong>currentReplicas</strong> * ( currentMetricValue / desiredMetricValue )]</p>
</blockquote>
<p>Following the describe output of the <code>HPA</code>:</p>
<pre class="lang-sh prettyprint-override"><code>NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
my-ns1 my-app1 Deployment/my-app1 49%/75%, 2%/75% 1 10 2 20h
my-ns2 my-app2 Deployment/my-app2 50%/75%, 10%/75% 1 10 2 22h
my-ns2 my-app3 Deployment/my-app3 47%/75%, 10%/75% 1 10 1 22h
</code></pre>
<p><code>HPA</code> decides on the number of replicas on the premise of their <strong>current amount</strong>.</p>
<p><strong>A side note</strong>: In the setup that uses multiple metrics (for example <code>CPU</code> and <code>RAM</code>) it will use the higher metric and act accordingly.</p>
<p>Also please consider that <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">downscaling has a cooldown</a>.</p>
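<p>As a side note, on Kubernetes 1.18+ this cooldown can be tuned per <code>HPA</code> through the <code>behavior</code> field of <code>autoscaling/v2beta2</code>; a minimal sketch (the name, metric and window value are illustrative, not taken from the question's manifests):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app3                     # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app3
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes of consistently low usage before scaling down
</code></pre>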
<hr />
<h3>Calculation on each of the <code>Deployments</code></h3>
<blockquote>
<p><code>ceil[]</code> - round a number up:</p>
<ul>
<li>ceil(4.55) = 5</li>
<li>ceil(4.01) = 5</li>
</ul>
</blockquote>
<p><code>app1</code>:</p>
<ul>
<li><code>Replicas</code> = ceil[<code>2</code> * (<code>49</code> / <code>75</code>)]</li>
<li><code>Replicas</code> = ceil[<code>2</code> * <code>0.6533..</code>]</li>
<li><code>Replicas</code> = ceil[<code>1.3066..</code>]</li>
<li><code>Replicas</code> = <strong><code>2</code></strong></li>
</ul>
<p>This example shows that there will be no change to the number of replicas.</p>
<p>The number of replicas would go:</p>
<ul>
<li><strong>Up</strong> when the <code>currentMetricValue</code> (<code>49</code>) would exceed the <code>desiredMetricValue</code> (<code>75</code>)</li>
<li><strong>Down</strong> when the <code>currentMetricValue</code> (<code>49</code>) would be <strong>less than half</strong> of the <code>desiredMetricValue</code> (<code>75</code>)</li>
</ul>
<p><code>app2</code> is in the same situation as <code>app1</code>, so it can be skipped.</p>
<p><code>app3</code>:</p>
<ul>
<li><code>Replicas</code> = ceil[<code>1</code> * (<code>47</code> / <code>75</code>)]</li>
<li><code>Replicas</code> = ceil[<code>1</code> * <code>0.6266..</code>]</li>
<li><code>Replicas</code> = ceil[<code>0.6266..</code>]</li>
<li><code>Replicas</code> = <strong><code>1</code></strong></li>
</ul>
<p>This example also shows that there will be no change to the number of replicas.</p>
<p>The number of replicas would go:</p>
<ul>
<li><strong>Up</strong> when the <code>currentMetricValue</code> (<code>47</code>) would exceed the <code>desiredMetricValue</code> (<code>75</code>)</li>
</ul>
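<p>For illustration, plugging a hypothetical higher utilization (say <code>80%</code>) for <code>app3</code> into the same formula shows when a second replica would appear:</p>
<ul>
<li><code>Replicas</code> = ceil[<code>1</code> * (<code>80</code> / <code>75</code>)]</li>
<li><code>Replicas</code> = ceil[<code>1.0666..</code>]</li>
<li><code>Replicas</code> = <strong><code>2</code></strong></li>
</ul>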
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Run application: Horizontal pod autoscale</a></em></li>
</ul>
| Dawid Kruk |
<p>I am trying to set up a fresh kubernetes cluster, and am facing an issue with using weave as the networking solution. Weave pods are hung in the Pending state and no events/logs are available from the kubectl command line.</p>
<p>I am trying to set up a kubernetes cluster from scratch as part of an online course. I have set up the master nodes - with api server, controller manager and scheduler up and running - and the worker nodes with kubelets and kube-proxy running.</p>
<p>Node status:</p>
<pre><code>vagrant@master-1:~$ kubectl get nodes -n kube-system
</code></pre>
<p><code>NAME STATUS ROLES AGE VERSION
worker-1 NotReady <none> 25h v1.13.0
worker-2 NotReady <none> 9h v1.13.0</code></p>
<p>As the next step to enable networking, I am using weave. I have downloaded and extracted weave on the worker nodes.</p>
<p>Now when I try to run below command:</p>
<p><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"</code></p>
<p>I see DaemonSet getting initialized, but the pods created continue to be in "Pending state".</p>
<pre><code>vagrant@master-1:~$ kubectl get pods -n kube-system
</code></pre>
<p><code>NAME READY STATUS RESTARTS AGE
weave-net-ccrqs 0/2 Pending 0 73m
weave-net-vrm5f 0/2 Pending 0 73m</code></p>
<p>The below command:
<code>vagrant@master-1:~$ kubectl describe pods -n kube-system</code>
does not show any ongoing events.</p>
<p>From the scheduler service logs, I could see below errors logged.</p>
<pre><code>Oct 13 16:46:51 master-2 kube-scheduler[14569]: E1013 16:46:51.973883 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:anonymous" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Oct 13 16:46:51 master-2 kube-scheduler[14569]: E1013 16:46:51.982228 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.338171 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:anonymous" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.745288 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.765103 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:anonymous" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.781419 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:anonymous" cannot list resource "replicasets" in API group "apps" at the cluster scope
Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.785872 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:anonymous" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.786117 14569 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.786790 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.787016 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:anonymous" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
</code></pre>
<p>Since I am quite new to kubernetes, please excuse me if I missed adding relevant information; I will share it immediately. Kind help required.</p>
<p>Added kubeconfig for scheduler:</p>
<pre><code> {
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.crt \
--client-key=kube-scheduler.key \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
</code></pre>
<p>Added scheduler service definition:</p>
<pre><code>cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\
--address=127.0.0.1 \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
</code></pre>
<p>Started scheduler using:</p>
<pre><code>sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler
</code></pre>
<p>Component status:</p>
<pre><code>vagrant@master-1:~$ kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
</code></pre>
| anmsal | <p>I have restarted the kube-scheduler and controller manager on both master nodes participating in HA, which I believe allowed the load balancer URL for the API server to take effect, and the errors observed earlier were eliminated.</p>
<p>After this, I set up a worker node and installed weave; the pod got deployed and the node became ready.</p>
<pre><code>vagrant@master-1:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
weave-net-zswht 1/2 Running 0 41s
vagrant@master-1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
worker-1 Ready <none> 4m51s v1.13.0
</code></pre>
| anmsal |
<p>I am learning Kubernetes. I have Apollo-express GraphQL API, React frontend and React Native app. How do I connect (or) what service type should I use if I want GraphQL API to be able to connect with both frontend and mobile apps?</p>
<p>I got a route <code>/hi</code> that replies <code>Hi!</code> to check if it's up:</p>
<pre><code>app.get('/hi', function (_req, res) {
res.send('Hi!')
})
</code></pre>
<p>Here's my attempt for <code>api.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-api
spec:
replicas: 2
selector:
matchLabels:
app: server-api
template:
metadata:
labels:
app: server-api
spec:
terminationGracePeriodSeconds: 5
containers:
- name: server-api
imagePullPolicy: IfNotPresent
image: my/server-api:latest
ports:
- name: gql
containerPort: 8081
env:
- name: NODE_ENV
value: "development"
- name: REDIS_HOST
value: "redis-cache"
- name: REDIS_PORT
value: "6379"
- name: POSTGRES_URL
valueFrom:
secretKeyRef:
name: postgres-url
key: POSTGRES_URL
---
apiVersion: v1
kind: Service
metadata:
name: server-api
spec:
type: ClusterIP
selector:
app: server-api
ports:
- name: gql
port: 8081
targetPort: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: server-api-external
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /api
backend:
serviceName: server-api
servicePort: 8081
</code></pre>
<p>Minikube IP is <code>192.168.99.100</code>. But it shows <code>Cannot GET /</code> at <code>192.168.99.100/api/hi</code>.</p>
<p>What am I doing wrong?</p>
| Swix | <p>Your ingress is incorrect.
With your ingress, when there is an incoming request to <code>/api/hi</code>, the ingress will match the <code>/api</code> path rule and rewrite the path to <code>/</code> according to the <code>rewrite-target</code> annotation.</p>
<p>To make it work you need to use the following ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: server-api-external
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /api/(.*)
backend:
serviceName: server-api
servicePort: 8081
</code></pre>
<p>Now a little explanation of how this works.
Notice the changes in the <code>path</code> field and the <code>rewrite-target</code> annotation.
With this ingress, when there is an incoming request to <code>/api/hi</code>, the ingress will match the <code>/api/(.*)</code> path rule, and then will extract whatever matches the <code>(.*)</code> group (that would be <code>hi</code> in this case). Next, the ingress will use this matched group and rewrite the path to <code>/$1</code>, so <code>/</code> + <code>first group match</code>. In the end the path that your application receives will be <code>/hi</code>, which is what you are looking for.</p>
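<p>As a quick sanity check after applying the corrected ingress (using the minikube IP from the question), the rewritten path should now reach the backend's <code>/hi</code> route:</p>
<pre class="lang-sh prettyprint-override"><code># request /api/hi through the ingress; nginx rewrites it to /hi for the service
curl http://192.168.99.100/api/hi
# expected response: Hi!
</code></pre>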
<p>Refer to the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">Nginx Ingress Controller documentation</a> for a more detailed explanation of the rewrite feature.</p>
<p>Let me know if something is not clear enough and needs more explanation.</p>
| Matt |
<p>I am installing nginx-controller to expose the service; after installing the ingress resource I am not able to hit the desired port. I get a failure saying the below:</p>
<pre><code>[root@k8-m smartrem]# kubectl describe ingress ingress-svc
Name: ingress-svc
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
auditmee.com
/swagger-ui.html springboot-service:8080 (192.168.157.76:8080,192.168.157.77:8080,192.168.250.8:8080)
Annotations: <none>
Events: <none>
</code></pre>
<p>I see the errors are related to default-http-backend. How do I create the default-http-backend service?</p>
<p>Any help would be highly appreciated</p>
| Vicky | <p>There are several parts of this question that I think need to be addressed:</p>
<hr />
<ol>
<li>There are multiple <code>NGINX Ingress Controllers</code> available to use within a Kubernetes environment. Specifying which exact one is used will definitely help in the troubleshooting process, as there could be slight differences in their inner workings that could affect your workload.</li>
</ol>
<p>You can read more about this topic (<code>NGINX</code> based <code>Ingress</code> controllers) by following this thread:</p>
<ul>
<li><em><a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md" rel="nofollow noreferrer">Github.com: Nginxinc: Kubernetes Ingress: Blob: Master: Docs: Nginx Ingress controllers</a></em></li>
</ul>
<blockquote>
<p>A side note!</p>
<p>I saw you're using this specific <code>Ingress</code> controller as per previous question asked on StackOverflow:</p>
<ul>
<li><a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress</a></li>
</ul>
</blockquote>
<hr />
<ol start="2">
<li>What is a <code>default-backend</code>?</li>
</ol>
<p><code>default-backend</code> in short is a "place" (<code>Deployment</code> with a <code>Pod</code> and a <code>Service</code>) <strong>where all the traffic that doesn't match <code>Ingress</code> resource is sent</strong> (for example unknown <code>path</code>).</p>
<p>Your <code>Ingress</code> resource is displaying following message:</p>
<blockquote>
<p><code>default-http-backend:80 (<error: endpoints "default-http-backend" not found>)</code></p>
</blockquote>
<p>This is because it can't find an <code>Endpoint</code> named <code>default-http-backend</code> (with an associated <code>Service</code> of the same name). To fix that you'll need to provision such resources.</p>
<p>Example of such <code>default-backend</code> implementation:</p>
<ul>
<li><em><a href="https://github.com/uswitch/ingress/blob/master/deploy/default-backend.yaml" rel="nofollow noreferrer">Github.com: Uswitch: Master: Deploy: default-backend.yaml</a></em></li>
</ul>
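<p>A minimal sketch of such a <code>default-backend</code> could look like the following (the image tag is an assumption based on the commonly used <code>defaultbackend</code> image, which listens on port 8080 and answers <code>404</code> on <code>/</code> and <code>200</code> on <code>/healthz</code>; adjust it to whatever your controller expects):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: k8s.gcr.io/defaultbackend-amd64:1.5   # assumed image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
  - port: 80
    targetPort: 8080
</code></pre>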
<hr />
<ol start="3">
<li><code>Ingress</code> resources and it's <code>path</code></li>
</ol>
<p>As for your <code>Ingress</code> resource: it's crucial to include the <code>YAML</code> manifests for the resources you are deploying. It's easier for other community members to see the whole picture and the potential issues you are facing.</p>
<p>From the output of <code>$ kubectl describe ingress ingress-svc</code> it can be seen:</p>
<pre class="lang-sh prettyprint-override"><code>Rules:
Host Path Backends
---- ---- --------
auditmee.com
/swagger-ui.html springboot-service:8080 (192.168.157.76:8080,...)
</code></pre>
<p>There is a host (<code>HOST.com</code>) that has one really specific path (a file, to be exact). <strong>A setup like this will allow your clients to access only <code>swagger-ui.html</code>. If you had some other files, they wouldn't be available</strong>:</p>
<ul>
<li><code>curl http://HOST/swagger-ui.html</code> <-- <code>200</code></li>
<li><code>curl http://HOST/super-awesome-icon.png</code> <-- <code>404</code></li>
</ul>
<blockquote>
<p>A side note!</p>
<p>Also please check which protocol (<code>HTTP</code>/<code>HTTPS</code>) you are serving your resources on.</p>
</blockquote>
<p>As your workload is unknown to us, you could try to set your <code>path</code> to <code>path: /</code>. This rule would allow all requests for resources for <code>HOST</code> to be passed to your <code>springboot-service</code>.</p>
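<p>Since the original manifest wasn't included in the question, here is only a sketch of what such an <code>Ingress</code> could look like, reconstructed from the <code>describe</code> output (host, service name and port are taken from there; everything else is an assumption):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-svc
spec:
  rules:
  - host: auditmee.com
    http:
      paths:
      - path: /                      # pass every request for this host to the backend
        backend:
          serviceName: springboot-service
          servicePort: 8080
</code></pre>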
<hr />
<p>I encourage you to check available documentation for more resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
<li><em><a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Github.com: Nginxinc: Kubernetes Ingress</a></em>:
<ul>
<li><em><a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">Github.com: Nginxinc: Kubernetes Ingress: Examples: Complete example</a></em></li>
</ul>
</li>
</ul>
<p>I also found displaying logs of the <code>Ingress</code> controller to be highly effective in troubleshooting:</p>
<ul>
<li><code>$ kubectl logs -n NAMESPACE INGRESS-POD-NAME</code></li>
</ul>
| Dawid Kruk |
<p>Say you have a kubernetes installation, but you have no information about the time and the method the K8s cluster was installed.<br>
I am talking about the K8s infrastructure itself and not K8s applications running on the cluster.</p>
<p>How would you find that?</p>
<p>I am looking for something like this:</p>
<pre><code>kubectl/kubeadm(or some other command) cluster info
</code></pre>
<p>and for responses like these:<br>
(<em>method/date/nodes</em>)</p>
<ul>
<li><code>kubeadm-manually 2020-10-10T12:06:43 3nodes</code></li>
<li><code>RKE 2020-10-10T12:06:43 3nodes</code></li>
<li><code>EKS 2020-10-10T12:06:43 3nodes</code></li>
</ul>
<p>Currently what I am doing is looking for traces on the filesystem and on kubernetes resources (whether method-specific pods, namespaces, labels, etc. exist)</p>
| beatrice | <h2>kubectl cluster-info</h2>
<p>There is a command <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-nodes-and-cluster" rel="nofollow noreferrer">kubectl cluster-info</a>, but it shows only endpoint information about the master and services in the cluster.</p>
<p>Example from docker desktop.</p>
<pre><code>kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
</code></pre>
<p>Example from gke</p>
<pre><code>kubectl cluster-info
Kubernetes master is running at https://xx.xxx.xxx.xx
GLBCDefaultBackend is running at https://xx.xxx.xxx.xx/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://xx.xxx.xxx.xx/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://xx.xxx.xxx.xx/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
</code></pre>
<hr />
<h2>Number of nodes</h2>
<p>If you want to check how many nodes are in your cluster you can use <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources" rel="nofollow noreferrer">kubectl get nodes</a>.</p>
<p>Example from docker on desktop.</p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
docker-desktop Ready master 5d22h v1.19.3
</code></pre>
<p>Example from gke</p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-cluster-4-default-pool-ae8cecd9-m6br Ready <none> 2d16h v1.16.13-gke.401
gke-cluster-4-default-pool-ae8cecd9-n9nz Ready <none> 2d16h v1.16.13-gke.401
gke-cluster-4-default-pool-ae8cecd9-tb9f Ready <none> 2d16h v1.16.13-gke.401
</code></pre>
<p>If you want to get just the number of nodes in the <code>Ready</code> state:</p>
<pre><code>kubectl get nodes --no-headers | grep -w Ready | wc -l
</code></pre>
<hr />
<h2>Method</h2>
<p>You could try to do that with a few commands from this <a href="https://stackoverflow.com/questions/38242062/how-to-get-kubernetes-cluster-name-from-k8s-api">stackoverflow</a> question.</p>
<p>For example:</p>
<ul>
<li><code>kubectl config current-context</code></li>
<li><code>kubectl config view -o jsonpath='{.clusters[].name}'</code></li>
<li><code>kubectl -n kube-system get configmap kubeadm-config -o yaml <--- kubeadm only</code></li>
</ul>
<hr />
<h2>Date</h2>
<p>I couldn't find any information about that. As a workaround you can try to check the node/kube-apiserver <code>creationTimestamp</code>, but if there was any restart of the node/kube-apiserver pod then the timestamp will be updated.</p>
<p>Example from docker desktop.</p>
<pre><code>kubectl get node docker-desktop -o jsonpath='{.metadata.creationTimestamp}'
2020-11-13T10:09:10Z
kubectl get pods -n kube-system kube-apiserver-docker-desktop -o jsonpath='{.metadata.creationTimestamp}'
2020-11-13T10:10:12Z
</code></pre>
| Jakub |
<p>My goal is to make a cluster running on Raspberry Pi 4B.
Currently I'm trying/testing/playing with Kubernetes in Vagrant.
My project is here:
<a href="https://github.com/kentahikaru/vagranttraining/tree/master/kubernetes_testing" rel="nofollow noreferrer">https://github.com/kentahikaru/vagranttraining/tree/master/kubernetes_testing</a></p>
<p>I am able to initialize master and connect node to the cluster.</p>
<p>However, I'm having trouble deploying the BOINC client the way I want it.
The deployment i am using is here:
<a href="https://github.com/kentahikaru/vagranttraining/blob/master/kubernetes_testing/Testing/boinc/boinc_client_deploy.yaml" rel="nofollow noreferrer">https://github.com/kentahikaru/vagranttraining/blob/master/kubernetes_testing/Testing/boinc/boinc_client_deploy.yaml</a>
The way it is uploaded on GitHub, it is working: I can deploy it and it will switch into the Running state.
However, when I uncomment "command" and "args" (because I want it to automatically connect to my accounts) it will crash and I can't figure out why:</p>
<pre><code>vagrant@master:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
boincclient-69cdf887dc-7phr2 0/1 CrashLoopBackOff 1 12s 10.244.2.69 node2 <none> <none>
</code></pre>
<p>From the Docker log on the node (/var/log/containers):</p>
<pre><code>{"log":"can't connect to local host\n","stream":"stderr","time":"2020-11-11T22:27:35.708789661Z"}
{"log":"can't connect to local host\n","stream":"stderr","time":"2020-11-11T22:27:35.711018367Z"}
{"log":"can't connect to local host\n","stream":"stderr","time":"2020-11-11T22:27:35.714129251Z"}
{"log":"can't connect to local host\n","stream":"stderr","time":"2020-11-11T22:27:35.714160084Z"}
</code></pre>
<p>I also can't figure out why "kubectl logs" is not working:</p>
<pre><code>vagrant@master:~$ kubectl -v=8 logs boincclient-69cdf887dc-7phr2
I1111 22:30:27.452750 31588 loader.go:375] Config loaded from file: /home/vagrant/.kube/config
I1111 22:30:27.464024 31588 round_trippers.go:420] GET https://172.16.0.21:6443/api/v1/namespaces/default/pods/boincclient-69cdf887dc-7phr2
I1111 22:30:27.464082 31588 round_trippers.go:427] Request Headers:
I1111 22:30:27.464095 31588 round_trippers.go:431] Accept: application/json, */*
I1111 22:30:27.464105 31588 round_trippers.go:431] User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I1111 22:30:27.483934 31588 round_trippers.go:446] Response Status: 200 OK in 19 milliseconds
I1111 22:30:27.484300 31588 round_trippers.go:449] Response Headers:
I1111 22:30:27.484514 31588 round_trippers.go:452] Cache-Control: no-cache, private
I1111 22:30:27.485035 31588 round_trippers.go:452] Content-Type: application/json
I1111 22:30:27.485382 31588 round_trippers.go:452] Date: Wed, 11 Nov 2020 22:30:27 GMT
I1111 22:30:27.486128 31588 request.go:1097] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"boincclient-69cdf887dc-7phr2","generateName":"boincclient-69cdf887dc-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/boincclient-69cdf887dc-7phr2","uid":"1f7ae333-07e5-429c-bb37-3430cc648170","resourceVersion":"683632","creationTimestamp":"2020-11-11T22:25:43Z","labels":{"app":"boincclient","pod-template-hash":"69cdf887dc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"boincclient-69cdf887dc","uid":"b6e765bf-f38a-4c55-92a2-68ae87d8adef","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2020-11-11T22:25:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6e765bf-f38a-4c55-92a2-68ae87d8adef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}}, [truncated 4851 chars]
I1111 22:30:27.501641 31588 round_trippers.go:420] GET https://172.16.0.21:6443/api/v1/namespaces/default/pods/boincclient-69cdf887dc-7phr2/log
I1111 22:30:27.501978 31588 round_trippers.go:427] Request Headers:
I1111 22:30:27.502183 31588 round_trippers.go:431] Accept: application/json, */*
I1111 22:30:27.502463 31588 round_trippers.go:431] User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I1111 22:30:27.508414 31588 round_trippers.go:446] Response Status: 404 Not Found in 5 milliseconds
I1111 22:30:27.508462 31588 round_trippers.go:449] Response Headers:
I1111 22:30:27.508473 31588 round_trippers.go:452] Cache-Control: no-cache, private
I1111 22:30:27.508501 31588 round_trippers.go:452] Content-Type: application/json
I1111 22:30:27.508525 31588 round_trippers.go:452] Content-Length: 270
I1111 22:30:27.508546 31588 round_trippers.go:452] Date: Wed, 11 Nov 2020 22:30:27 GMT
I1111 22:30:27.508587 31588 request.go:1097] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server could not find the requested resource ( pods/log boincclient-69cdf887dc-7phr2)","reason":"NotFound","details":{"name":"boincclient-69cdf887dc-7phr2","kind":"pods/log"},"code":404}
I1111 22:30:27.509597 31588 helpers.go:216] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource ( pods/log boincclient-69cdf887dc-7phr2)",
"reason": "NotFound",
"details": {
"name": "boincclient-69cdf887dc-7phr2",
"kind": "pods/log"
},
"code": 404
}]
F1111 22:30:27.509663 31588 helpers.go:115] Error from server (NotFound): the server could not find the requested resource ( pods/log boincclient-69cdf887dc-7phr2)
</code></pre>
<p>Thanks for any help.</p>
| Hikaru | <p>So the solution was to call the entrypoint command from the Dockerfile.</p>
| Hikaru |
<p>I'm trying to deploy a backend Java app to IBM Cloud and I'm getting this error:</p>
<p>FAILED
Failed to generate the required files. Please try again.</p>
<p>Could not get list of available Starter Kits. Please try again.
[Get <a href="https://us-south.devx.cloud.ibm.com/appmanager/v1/starters?tag=notDeveloperConsole" rel="nofollow noreferrer">https://us-south.devx.cloud.ibm.com/appmanager/v1/starters?tag=notDeveloperConsole</a>: dial tcp: lookup us-south.devx.cloud.ibm.com on 127.0.0.53:53: no such host]</p>
<p>Please note the app is written in Kotlin and Ktor.</p>
| ahmed kamar | <blockquote>
<p>dial tcp: lookup us-south.devx.cloud.ibm.com on 127.0.0.53:53: no such host</p>
</blockquote>
<p>Looks like your DNS server is not working.
Try using a different DNS server.</p>
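<p>A quick way to confirm that (a sketch; <code>8.8.8.8</code> is just an example public resolver) is to compare resolution through your current resolver with an external one:</p>
<pre class="lang-sh prettyprint-override"><code># uses the system resolver (127.0.0.53, i.e. the local stub resolver from the error)
nslookup us-south.devx.cloud.ibm.com

# query an external resolver directly to check whether the record resolves at all
nslookup us-south.devx.cloud.ibm.com 8.8.8.8
</code></pre>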
<p>Look at <a href="https://stackoverflow.com/questions/33893150/dial-tcp-lookup-xxx-xxx-xxx-xxx-no-such-host/53848018">this StackOverflow question</a></p>
| Matt |
<p>I followed this DigitalOcean guide <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a>, and I came across something quite strange. When I set a wildcard in the hostnames, <code>letsencrypt</code> fails to issue a new certificate, while when I only set defined sub-domains, it works perfectly.</p>
<p>This is my "working" configuration for the domain and its api (and this one works perfectly):</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- example.com
- api.example.com
secretName: my-tls
rules:
- host: example.com
http:
paths:
- backend:
serviceName: example-frontend
servicePort: 80
- host: api.example.com
http:
paths:
- backend:
serviceName: example-api
servicePort: 80
</code></pre>
<p>And this, instead, is the wildcard certificate I'm trying to issue, which fails, leaving the message "Issuing".</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- example.com
- *.example.com
secretName: my-tls
rules:
- host: example.com
http:
paths:
- backend:
serviceName: example-frontend
servicePort: 80
- host: api.example.com
http:
paths:
- backend:
serviceName: example-api
servicePort: 80
</code></pre>
<p>The only difference is the second line of the hosts. Is there a trivial well known solution I am not aware of? I am new to Kubernetes, but not to DevOps.</p>
| purple_lolakos | <p>Generating a wildcard certificate with <code>cert-manager</code> (<code>letsencrypt</code>) requires the usage of the <code>DNS-01</code> challenge instead of the <code>HTTP-01</code> challenge <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">used in the link from the question</a>:</p>
<blockquote>
<h3>Does Let’s Encrypt issue wildcard certificates?</h3>
<p>Yes. Wildcard issuance must be done via ACMEv2 using the DNS-01 challenge. See <a href="https://community.letsencrypt.org/t/acme-v2-production-environment-wildcards/55578" rel="nofollow noreferrer">this post</a> for more technical information.</p>
</blockquote>
<p>There is a documentation about generating the <code>wildcard</code> certificate with <code>cert-manager</code>:</p>
<ul>
<li><em><a href="https://cert-manager.io/docs/configuration/acme/dns01/" rel="nofollow noreferrer">Cert-manager.io: Docs: Configuration: ACME: DNS-01</a></em></li>
</ul>
<hr />
<p>From the perspective of DigitalOcean, there is a guide specifically targeted at it:</p>
<blockquote>
<p>This provider uses a Kubernetes <code>Secret</code> resource to work. In the following
example, the <code>Secret</code> will have to be named <code>digitalocean-dns</code> and have a
sub-key <code>access-token</code> with the token in it. For example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: digitalocean-dns
namespace: cert-manager
data:
# insert your DO access token here
access-token: "base64 encoded access-token here"
</code></pre>
<p>The access token must have write access.</p>
<p>To create a Personal Access Token, see <a href="https://www.digitalocean.com/docs/api/create-personal-access-token" rel="nofollow noreferrer">DigitalOcean documentation</a>.</p>
<p>Handy direct link: <a href="https://cloud.digitalocean.com/account/api/tokens/new" rel="nofollow noreferrer">https://cloud.digitalocean.com/account/api/tokens/new</a></p>
<p>To encode your access token into base64, you can use the following</p>
<pre class="lang-sh prettyprint-override"><code>echo -n 'your-access-token' | base64 -w 0
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: example-issuer
spec:
acme:
...
solvers:
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: access-token
</code></pre>
<p>-- <em><a href="https://cert-manager.io/docs/configuration/acme/dns01/digitalocean/" rel="nofollow noreferrer">Cert-manager.io: Docs: Configuration: ACME: DNS-01: Digitalocean</a></em></p>
</blockquote>
<hr />
<p>I reckon these additional resources could also help:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/51613842/wildcard-ssl-certificate-with-subdomain-redirect-in-kubernetes">Stackoverflow.com: Questions: Wilcard SSL certificate with subdomain redirect in Kubernetes</a></em></li>
<li><em><a href="https://itnext.io/using-wildcard-certificates-with-cert-manager-in-kubernetes-and-replicating-across-all-namespaces-5ed1ea30bb93" rel="nofollow noreferrer">Itnext.io: Using wildcard certificates with cert-manager in Kubernetes</a></em></li>
</ul>
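<p>For completeness, once an issuer with a <code>DNS-01</code> solver exists, a wildcard certificate can be requested with a <code>Certificate</code> resource along these lines (a sketch reusing the names from the question; the issuer referenced must be the one configured with the <code>DNS-01</code> solver):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-example-com
spec:
  secretName: my-tls
  issuerRef:
    name: letsencrypt-staging   # must point at an issuer with a DNS-01 solver
    kind: ClusterIssuer
  dnsNames:
    - "example.com"
    - "*.example.com"
</code></pre>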
| Dawid Kruk |
<p>I have a problem with connecting two services on Kubernetes with Istio.
My service makes POST requests to the elasticsearch.</p>
<pre><code>2020-11-18T21:51:53.758079131Z org.elasticsearch.client.ResponseException: method [POST], host [http://elasticsearch:9200], URI [/_bulk?timeout=1m], status line [HTTP/1.1 503 Service Unavailable]
2020-11-18T21:51:53.758087238Z upstream connect error or disconnect/reset before headers. reset reason: connection failure
</code></pre>
<p>I read some questions/GitHub issues about that and one of the possible reasons could be <code>mtls</code>, so how can I disable it?</p>
<p>I was trying with this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
namespace: "istio-system"
spec:
mtls:
mode: DISABLE
</code></pre>
<p>But with this <code>PeerAuthentication</code>, I'm not able to reach even my service.
Do you have any advice?</p>
| Ice | <h2>Disable mtls</h2>
<p>This PeerAuthentication is the correct way to disable mtls.</p>
<pre><code>apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
namespace: "istio-system"
spec:
mtls:
mode: DISABLE
</code></pre>
<p>There is istio <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#" rel="nofollow noreferrer">documentation</a> about that.</p>
<hr />
<h2>Elasticsearch issue</h2>
<p>According to istio documentation:</p>
<blockquote>
<p>There are two Elasticsearch configuration parameters that need to be set appropriately to run Elasticsearch with Istio: <strong>network.bind_host</strong> and <strong>network.publish_host</strong>. By default, these parameters are set to the network.host parameter. If network.host is set to 0.0.0.0, Elasticsearch will most likely pick up the pod IP as the publishing address and no further configuration will be needed.</p>
<p>If the default configuration does not work, you can set the network.bind_host to 0.0.0.0 or localhost (127.0.0.1) and network.publish_host to the pod IP. For example:</p>
</blockquote>
<pre><code>...
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
env:
- name: network.bind_host
value: 127.0.0.1
- name: network.publish_host
valueFrom:
fieldRef:
fieldPath: status.podIP
...
</code></pre>
<blockquote>
<p>Refer to <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#modules-network" rel="nofollow noreferrer">Network Settings for Elasticsearch</a> for more information.</p>
</blockquote>
<p>If that doesn't work, there are two GitHub issues:</p>
<ul>
<li><a href="https://github.com/istio/istio/issues/14662#issuecomment-723669123" rel="nofollow noreferrer">https://github.com/istio/istio/issues/14662#issuecomment-723669123</a></li>
<li><a href="https://github.com/elastic/cloud-on-k8s/issues/2770" rel="nofollow noreferrer">https://github.com/elastic/cloud-on-k8s/issues/2770</a></li>
</ul>
<p>which suggest using</p>
<pre><code>annotations:
traffic.sidecar.istio.io/excludeOutboundPorts: ""
traffic.sidecar.istio.io/excludeInboundPorts: ""
</code></pre>
<p>There is elasticsearch <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-service-mesh-istio.html" rel="nofollow noreferrer">documentation</a> about that.</p>
| Jakub |
<p>I use Docker Desktop and minikube on Windows 10. I found the IP address of the local Docker daemon with the <code>minikube docker-env</code> command like below:</p>
<pre><code>> minikube docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://172.17.105.232:2376
SET DOCKER_CERT_PATH=C:\Users\joseph\.minikube\certs
SET MINIKUBE_ACTIVE_DOCKERD=minikube
REM To point your shell to minikube's docker-daemon, run:
REM @FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env') DO @%i
</code></pre>
<p>And I set the IP address of the Docker daemon to the above <code>DOCKER_HOST</code> value, not <code>localhost</code>, and I can use locally built Docker images without errors. But in the case of the minikube dashboard, the IP address is always localhost (127.0.0.1) when I type the <code>minikube dashboard</code> command. So I cannot create a Kubernetes namespace or persistent volume. It throws the error</p>
<blockquote>
<p>the server could not find the requested resource</p>
</blockquote>
<p>I think this issue is a matter of authorization with different IP addresses. How do I configure a static or specific IP address and port number for the minikube dashboard so I can create namespaces and persistent volumes without such errors on the minikube dashboard?</p>
| Joseph Hwang | <p>If I understand correctly, you are trying to access the Kubernetes dashboard from a remote host.
When running <code>minikube dashboard</code>, the minikube binary runs the <a href="https://github.com/kubernetes/minikube/blob/1a19f8f0a5368f4081a8b3f6f10b2635e043b690/cmd/minikube/cmd/dashboard.go#L130" rel="noreferrer"><code>kubectl proxy</code> command under the hood</a>. </p>
<p>By default, <code>kubectl proxy</code> binds to the loopback interface of your local machine, so it can't be accessed from outside.</p>
<p>You can't change the minikube CLI behaviour (without changing the source code), but what you can do is note down the path to the dashboard:</p>
<pre><code>/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
</code></pre>
<p>and run <code>kubectl proxy</code> yourself, adding the <code>--address</code> parameter with the value <code>0.0.0.0</code>.</p>
<p>So now running this you will see:</p>
<pre><code>$ kubectl proxy --address 0.0.0.0
Starting to serve on [::]:8001
</code></pre>
<p>Now open a browser on your remote host and go to:</p>
<pre><code><your-host-external-ip>:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
</code></pre>
<p>That's it. Let me know if it helped.</p>
| Matt |
<p>I want to monitor the internals of the JVM for my Spring Boot application inside a Docker container which is running as a pod in a Kubernetes cluster, but I couldn't find a satisfactory answer anywhere even after spending considerable time.
I tried referring to the accepted answer on <a href="https://stackoverflow.com/questions/31257968/how-to-access-jmx-interface-in-docker-from-outside">this</a>, but it connected only when my Docker container was running locally, and ceased to connect when behind a Kubernetes cluster. </p>
| rohimsh | <p>Assume I wanted to monitor on port 8001 while my app was serving on 8000.
Adding these to my VM options worked fine (VisualVM was showing this process for monitoring) while running Docker locally and mapping port 8001 from my local machine to Docker (-p 8001:8001):</p>
<pre><code>-Dcom.sun.management.jmxremote \
-Djava.rmi.server.hostname=localhost \
-Dcom.sun.management.jmxremote.port=8001 \
-Dcom.sun.management.jmxremote.rmi.port=8001 \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false
</code></pre>
<p>But it didn't work on a pod in a remote Kubernetes cluster. I found <a href="https://medium.com/cloud-native-the-gathering/remotely-connecting-through-kubernetes-to-a-jmx-agent-in-a-spring-boot-1-x-cf83bb83f499" rel="noreferrer">this</a>, but my requirement was to monitor without going via a Service, and then by reading a couple of other articles I got it working, so I'm collating those steps below to save someone's time:</p>
<ol>
<li>Removed VM options mentioned above in the startup script</li>
<li>Updated my application.yaml as below where I enabled jmx and added url for running JMXMP Connector server</li>
</ol>
<pre><code>spring:
application:
name: stack-application
jmx:
enabled: true
url: service:jmx:jmxmp://localhost:8001/
server:
port: 8000
</code></pre>
<ol start="3">
<li>Updated my Kubernetes deployment YAML under Deployment block as:</li>
</ol>
<pre><code>apiVersion: apps/v1
kind: Deployment
----your content----
ports:
- name: stack-app-port
containerPort: 8000
- name: stack-jmx-port
containerPort: 8001
</code></pre>
<ol start="4">
<li>Added the following dependency to my pom.xml for downloading <a href="https://docs.oracle.com/cd/E19698-01/816-7609/connectors-116/index.html" rel="noreferrer">JMXMP</a>, as after my research I concluded that JMX monitoring over RMI is a tough job and hence JMXMP is everyone's recommendation.</li>
</ol>
<pre><code> <dependency>
<groupId>org.glassfish.main.external</groupId>
<artifactId>jmxremote_optional-repackaged</artifactId>
<version>5.0</version>
</dependency>
</code></pre>
<ol start="5">
<li>Created a new class <code>ConnectorServiceFactoryBeanProvider</code> which fetches URL from our application.yaml</li>
</ol>
<pre><code>@Configuration
public class ConnectorServiceFactoryBeanProvider {
@Value("${spring.jmx.url}")
private String url;
@Bean
public ConnectorServerFactoryBean connectorServerFactoryBean() throws Exception {
final ConnectorServerFactoryBean connectorServerFactoryBean = new ConnectorServerFactoryBean();
connectorServerFactoryBean.setServiceUrl(url);
return connectorServerFactoryBean;
}
}
</code></pre>
<ol start="6">
<li>Build and deploy your Docker image on Kubernetes and find out the IP address of the pod. You may use <code>kubectl describe pod</code> for that on the CLI</li>
<li>Now to start VisualVM, we also need to add the JMXMP jar downloaded above to the classpath. I created an alias to do the same, and since the JAR was downloaded into my local .m2 directory the command looked like this:</li>
</ol>
<pre><code>alias viz='jvisualvm -cp "$JAVA_HOME:~/.m2/repository/org/glassfish/main/external/jmxremote_optional-repackaged/5.0/jmxremote_optional-repackaged-5.0.jar"'
</code></pre>
<ol start="8">
<li>Now, execute "viz" or your alias; it'll start the VisualVM application shipped with your Java. </li>
<li>Click on the +JMX icon in the toolbar of VisualVM or go to (File -> Add JMX Connection...) and add the link </li>
</ol>
<pre><code>service:jmx:jmxmp://<IP address obtained in step 6 above>:8001
</code></pre>
<p>and Check "Do not require SSL connection". Once you hit OK, you should see your remote application internals on VisualVM in a while. Screenshot attached below. </p>
<p><a href="https://i.stack.imgur.com/544OE.png" rel="noreferrer">VisualVM screenshot monitoring remote app on 8001</a></p>
| rohimsh |
<p>Initially, I deployed my frontend web application and all the backend APIs in <code>AWS ECS</code>; each of the backend APIs has a <code>Route53</code> record, and the frontend is connected to these APIs in the <code>.env</code> file. Now, I would like to migrate from <code>ECS</code> to <code>EKS</code> and I am trying to deploy all these applications in a <code>Minikube</code> local cluster. I would like to keep the <code>.env</code> in my frontend application unchanged (using the same URLs for all the environment variables); the application should first look for the backend API inside the local cluster through service discovery, and if the backend API doesn't exist in the cluster, it should connect to the external service, which is the API deployed in <code>ECS</code>. In short, first local (<code>Minikube cluster</code>), then external (<code>AWS</code>). How do I implement this in <code>Kubernetes</code>?</p>
<p>http:// backendapi.learning.com --> backend API deployed in the pod --> if not presented --> backend API deployed in the ECS</p>
<p>.env</p>
<pre><code>BACKEND_API_URL = http://backendapi.learning.com
</code></pre>
<p>one of the examples in the code in which the frontend calls the backend API:</p>
<pre><code>export const ping = async _ => {
const res = await fetch(`${process.env.BACKEND_API_URL}/ping`);
const json = await res.json();
return json;
}
</code></pre>
| efgdh | <p>Assuming that your setup is:</p>
<ul>
<li>Based on a microservices architecture.</li>
<li>Applications deployed in the Kubernetes cluster (<code>frontend</code> and <code>backend</code>) are <code>Dockerized</code></li>
<li>Applications are capable of running on top of Kubernetes.</li>
<li>etc.</li>
</ul>
<p>You can configure your Kubernetes cluster (<code>minikube</code> instance) to relay your request to different locations by using <code>Services</code>.</p>
<hr />
<h3>Service</h3>
<p>In Kubernetes terminology "Service" is an abstract way to expose an application running on a set of Pods as a network service.</p>
<p>Some of the types of <code>Services</code> are following:</p>
<blockquote>
<ul>
<li><code>ClusterIP</code>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default <code>ServiceType</code>.</li>
<li><code>NodePort</code>: Exposes the Service on each Node's IP at a static port (the <code>NodePort</code>). A <code>ClusterIP</code> Service, to which the <code>NodePort</code> Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <code><NodeIP></code>:<code><NodePort></code>.</li>
<li><code>LoadBalancer</code>: Exposes the Service externally using a cloud provider's load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</li>
<li><code>ExternalName</code>: Maps the Service to the contents of the <code>externalName</code> field (e.g. <code>foo.bar.example.com</code>), by returning a <code>CNAME</code> record with its value. No proxying of any kind is set up.</li>
</ul>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a></p>
</blockquote>
<p><strong>You can use <code>Headless Service with selectors</code> and <code>dnsConfig</code> (in <code>Deployment</code> manifest) to achieve the setup referenced in your question</strong>.</p>
<p>Let me explain more:</p>
<hr />
<h3>Example</h3>
<p>Let's assume that you have a <code>backend</code>:</p>
<ul>
<li><code>nginx-one</code> - located <strong>inside</strong> and <strong>outside</strong></li>
</ul>
<p>Your <code>frontend</code> manifest in its most basic form should look like the following:</p>
<ul>
<li><code>deployment.yaml</code>:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
selector:
matchLabels:
app: frontend
replicas: 1
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: ubuntu
image: ubuntu
command:
- sleep
- "infinity"
dnsConfig: # <--- IMPORTANT
searches:
- DOMAIN.NAME
</code></pre>
<p>Taking specific look on:</p>
<pre><code> dnsConfig: # <--- IMPORTANT
searches:
- DOMAIN.NAME
</code></pre>
<p>Dissecting above part:</p>
<ul>
<li>
<blockquote>
<p><code>dnsConfig</code> - the dnsConfig field is optional and it can work with any <code>dnsPolicy</code> settings. However, when a Pod's <code>dnsPolicy</code> is set to "<code>None</code>", the <code>dnsConfig</code> field has to be specified.</p>
</blockquote>
</li>
<li>
<blockquote>
<p><code>searches</code>: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.</p>
</blockquote>
</li>
</ul>
<p>As for the <code>Services</code> for your <code>backends</code>.</p>
<ul>
<li><code>service.yaml</code>:</li>
</ul>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-one
spec:
clusterIP: None # <-- IMPORTANT
selector:
app: nginx-one
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>The above <code>Service</code> will tell your frontend that one of your <code>backends</code> (<code>nginx</code>) is available through a <code>Headless service</code> (why it's <code>Headless</code> will come in handy later!). By default you could communicate with it by:</p>
<ul>
<li><code>service-name</code> (<code>nginx-one</code>)</li>
<li><code>service-name.namespace.svc.cluster.local</code> (<code>nginx-one.default.svc.cluster.local</code>) - only locally</li>
</ul>
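<p>A quick way to see which record a name actually resolves to is to run a lookup from inside the frontend <code>Pod</code> (a sketch; replace <code>FRONTEND_POD</code> with the real Pod name from <code>kubectl get pods</code>):</p>
<pre class="lang-sh prettyprint-override"><code># getent is present in the ubuntu image even when nslookup/dig are not installed
kubectl exec FRONTEND_POD -- getent hosts nginx-one
</code></pre>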
<hr />
<h3>Connecting to your backend</h3>
<p>Assuming that you are sending the request using <code>curl</code> (for simplicity) from <code>frontend</code> to <code>backend</code> you will have a specific order when it comes to the <code>DNS</code> resolution:</p>
<ul>
<li>check the <code>DNS</code> record inside the cluster</li>
<li>check the <code>DNS</code> record specified in <code>dnsConfig</code></li>
</ul>
<p>The specifics of connecting to your <code>backend</code> will be the following:</p>
<ul>
<li>If the <code>Pod</code> with your <code>backend</code> is available in the cluster, the <code>DNS</code> resolution will point to the Pod's IP (not <code>ClusterIP</code>)</li>
<li>If the <code>Pod</code> <code>backend</code> is not available in the cluster due to various reasons, the <code>DNS</code> resolution will first check the internal records and then opt to use <code>DOMAIN.NAME</code> in the <code>dnsConfig</code> (outside of <code>minikube</code>).</li>
<li>If there is no <code>Service</code> associated with specific <code>backend</code> (<code>nginx-one</code>), the <code>DNS</code> resolution will use the <code>DOMAIN.NAME</code> in the <code>dnsConfig</code> searching for it outside of the cluster.</li>
</ul>
<blockquote>
<p>A side note!</p>
<p>The <code>Headless Service with selector</code> comes into play here as its intention is to point directly to the <code>Pod</code>'s IP and not the <code>ClusterIP</code> (which exists as long as <code>Service</code> exists). If you used a "normal" <code>Service</code> you would always try to communicate with the <code>ClusterIP</code> even if there is no <code>Pods</code> available matching the selector. By using a <code>headless</code> one, if there is no <code>Pod</code>, the <code>DNS</code> resolution would look further down the line (external sources).</p>
</blockquote>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Start</a></em></li>
<li><em><a href="https://aws.amazon.com/blogs/compute/enabling-dns-resolution-for-amazon-eks-cluster-endpoints/" rel="nofollow noreferrer">Aws.amazon.com: Blogs: Compute: Enabling dns resolution for amazon eks cluster endpoints</a></em></li>
</ul>
<hr />
<h3>EDIT:</h3>
<p>You could also take a look at alternative options:</p>
<p>Alternative option 1:</p>
<ul>
<li>Use rewrite rule plugin in CoreDNS to rewrite DNS queries for <code>backendapi.learning.com</code> to <code>backendapi.default.svc.cluster.local</code></li>
</ul>
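<p>A sketch of that rewrite expressed through the <code>coredns</code> ConfigMap in <code>kube-system</code> (this assumes the backend's <code>Service</code> is named <code>backendapi</code> in the <code>default</code> namespace; the surrounding plugins shown are the usual defaults, so keep whatever your cluster already has and only add the <code>rewrite</code> line):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # added line: answer the external name with the in-cluster Service record
        rewrite name backendapi.learning.com backendapi.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
</code></pre>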
<p>Alternative option 2:</p>
<ul>
<li>Add <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/#adding-additional-entries-with-hostaliases" rel="nofollow noreferrer">hostAliases</a> to the Frontend Pod</li>
</ul>
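<p>A minimal sketch of the <code>hostAliases</code> approach, added to the frontend Pod/Deployment spec (the IP and image below are placeholders; the IP would be wherever the backend is reachable, e.g. a Service ClusterIP):</p>
<pre><code>spec:
  hostAliases:
  - ip: "10.96.100.50"            # placeholder IP of the backend
    hostnames:
    - "backendapi.learning.com"
  containers:
  - name: frontend
    image: frontend-image         # placeholder
</code></pre>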
<p>You can also use <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/configmap#creating_a_configmap" rel="nofollow noreferrer">Configmaps</a> to re-use <code>.env</code> files.</p>
| Dawid Kruk |
<p>I am trying to migrate my deployment from the Minikube platform to the KOPS cluster in AWS.
In my deployment, I have multiple pods that share the same pvc(persistent volume claim).</p>
<p>Therefore, accessing ebs pvc from different pods in the KOPS cluster is having problems when those pods are running on different nodes(different instances). For eg - I have 3 pods and 2 nodes. Assume pod1 is running on node1 and pod2&pod3 are running on node2. pod2&pod3 will not be able to attach ebs pvc after pod1 is attached to ebs pvc.</p>
<p>How to make ebs pvc accessible from different pods running on different nodes in the kops cluster in AWS?</p>
<p>volume.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: media-volume
spec:
storageClassName: gp2-manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: ext4
volumeID: <volumeID>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: media-volume-claim
spec:
storageClassName: gp2-manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
| Kyaw Min Thu L | <p>The quick answer here would be to use <strong><a href="https://aws.amazon.com/efs/" rel="nofollow noreferrer">EFS</a></strong> with <code>ReadWriteMany</code> access mode instead of <strong>EBS</strong>, as EBS allow only ReadWriteOnce <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a>.</p>
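<p>For reference, a minimal sketch of how the same volume could be backed by EFS mounted over NFS with <code>ReadWriteMany</code> (the filesystem DNS name and region below are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-volume
spec:
  storageClassName: efs-manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.eu-west-1.amazonaws.com   # placeholder EFS DNS name
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-volume-claim
spec:
  storageClassName: efs-manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
</code></pre>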
<p>But as @Kyaw Min Thu L mentioned in comments</p>
<blockquote>
<p>Currently, I am using EFS, but my boss prefers to use EBS rather than EFS.</p>
</blockquote>
<p>As a workaround for this I would suggest using <a href="https://www.gluster.org/" rel="nofollow noreferrer">GlusterFS</a>, as suggested by @Harsh Manvar <a href="https://stackoverflow.com/questions/51212904/kubernetes-pvc-with-readwritemany-on-aws/59675356#59675356">here</a>.</p>
<blockquote>
<p>As you mention, an EBS volume with affinity & node selector will stop scalability; however, with EBS only ReadWriteOnce will work.</p>
<p>Sharing my experience: if you are doing many operations on the file system and frequently pushing & fetching files, it could be slow with EFS, which can degrade application performance. The operation rate on EFS is slow.</p>
<p>However, you can use GlusterFS; in the back it will be provisioning EBS volumes. <strong>GlusterFS also supports ReadWriteMany</strong> and it will be faster compared to EFS as it's block storage (SSD).</p>
</blockquote>
<p>There is <a href="https://medium.com/@naikaa/persistent-storage-kubernetes-67f0f1e1f31e" rel="nofollow noreferrer">tutorial</a> about this on medium.</p>
<blockquote>
<p><strong>GlusterFS</strong> is a connector based storage system, i.e. by itself gluster doesn't provide storage, but it connects to a durable storage and extrapolates storage to make it seamless for K8 pods.</p>
<p>The high level topology is as described in the diagram below where one EBS volume is mounted per EC2 instance that is running a kubernetes node. We have 3 EC2, EBS, K8 node setup below. We form a glusterfs cluster using the 3 EBS nodes. We can then define and carve out several persistent volumes (pv) PV1, PV2 … PV5 out of the 3 mounted EBS volumes, making it homogenous and seamless for K8 pods to claim.</p>
<p>K8 schedules pods as per its algorithm on any K8 node and the pods can claim a persistent volume via a persistent volume claim. Persistent volume claim (pvc) is nothing but a label that identifies a connection between a POD and a persistent volume. Per the diagram below we have POD C claim PV1 while POD A claim PV4.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/bVBvD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bVBvD.png" alt="enter image description here" /></a></p>
| Jakub |
<p>I am trying to achieve Horizontal Pods Autoscaling (HPA) in GCP based on GPU utilization.
My hpa.yaml file looks like this:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
minReplicas: 1
maxReplicas: 10
metrics:
- type: External
external:
metricName: kubernetes.io|container|accelerator|duty_cycle
targetAverageValue: 10
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-container-worker
</code></pre>
<p>After I run <code>kubectl create -f hpa.yaml</code> , I see the following error in the GCP.</p>
<p><a href="https://i.stack.imgur.com/bgQPT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bgQPT.png" alt="enter image description here"></a> </p>
<p>On the HPA, it says unable to read all metrics.</p>
<p>In stack driver monitoring I have created a dashboard with the metric mentioned above.</p>
<p>Has anyone come across this issue? </p>
| Suraj Acharya | <p>Summarizing our conversation in comments:</p>
<p>To make use of GCP metrics in kubernetes, the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling#step1" rel="nofollow noreferrer">Custom Metrics Stackdriver Adapter</a> had to be deployed. This is why the HPA was unable to read the metrics and therefore was throwing errors.</p>
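<p>As a sketch, the deployment boils down to the commands below; the exact manifest URL comes from the adapter's repository and may change between releases, so please verify it against the linked tutorial:</p>
<pre><code># allow your account to create the RBAC roles the adapter needs
kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin \
    --user "$(gcloud config get-value account)"

# deploy the Custom Metrics Stackdriver Adapter
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml
</code></pre>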
| Matt |
<p>Our deployment's <code>imagePullPolicy</code> wasn't set for a while, which means it used <code>IfNotPresent</code>.<br>
If I understand correctly, each k8s node stored the images locally so they can be reused on the next deployment if necessary. </p>
<p>Is it possible to list/show all the stored local images per node in an AKS cluster</p>
| SagiLow | <p>As Docker is installed on every node of the k8s cluster, to list/show local images per node, you need to log in to the worker node and run:</p>
<pre><code> docker images
</code></pre>
<p>This would give you the list of all the images on that particular node.</p>
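<p>If you prefer not to log in to each node, the API server also reports the images cached on every node, so something like this sketch should work from any machine with <code>kubectl</code> configured (the node name is a placeholder):</p>
<pre><code>kubectl get node <node-name> -o jsonpath='{range .status.images[*]}{.names}{"\n"}{end}'
</code></pre>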
| Ratan Boddu |
<p>I'm building an example microservice application with Kubernetes to find out the best practices and some patterns for future projects. I'm using Istio as a Service Mesh to handle east-west traffic and I have a basic understanding of the concepts (VirtualServices, DestinationRules, ...). The service mesh allows me to easily push out new versions of a microservice and redirect the traffic to the new instance (using e.g. weighted distribution). When having semantic versioning in mind, this works really well for <code>Patch</code> and <code>Minor</code> updates, because they, in theory, didn't alter the existing contract and can therefore be a drop-in replacement for the existing service. Now I'm wondering how to properly deal with breaking changes of service, so a <code>Major</code> version update.</p>
<p>It's hard to find information for this, but with the limited info I got, I'm now thinking about two approaches:</p>
<ol>
<li><p>Each major version of a service (e.g. <code>user-service</code>) gets its own <code>VirtualService</code> so that clients can address it correctly (by a different service name, e.g. <code>user-service-v1</code>). Istio is then used to correctly route the traffic for a major version (e.g. <code>1.*</code>) to the different available services (e.g. <code>user-service v1.3.1</code> and <code>user-service v1.4.0</code>).</p>
</li>
<li><p>I use one overall <code>VirtualService</code> for a specific microservice (so e.g. <code>user-service</code>). This <code>VirtualService</code> contains many routing definitions to use e.g. a header sent by the client (e.g. <code>x-major-version=1</code>) to match the request to a destination.</p>
</li>
</ol>
<p>Overall there is not too much difference between both methods. The client obviously needs to specify to which major version he wants to talk, either by setting a header or by resolving a different service name. Are there any limitations to the described methods which make one superior to the other? Or are there other options I'm totally missing? Any help and pointers are greatly appreciated!</p>
| Simon | <p><strong>TLDR</strong></p>
<p>Besides what I mentioned in the comments, after a more detailed check of the topic, I would choose <strong>approach 2</strong>: one overall <strong>Virtual Service</strong> for a specific microservice, combined with <strong>canary deployment</strong> and <strong>mirroring</strong>.</p>
<h2>Approach 1</h2>
<p>As mentioned in <a href="https://istio.io/latest/docs/ops/best-practices/traffic-management/" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>In situations where it is inconvenient to define the complete set of route rules or policies for a particular host in a single VirtualService or DestinationRule resource, it may be preferable to incrementally specify the configuration for the host in multiple resources. Pilot will merge such destination rules and merge such virtual services if they are bound to a gateway.</p>
</blockquote>
<p>So in <strong>theory</strong> you could go with approach number 1, but I would say that it requires too much configuration and there is a better way to do it.</p>
<p>Let's say you have the old app named <code>v1.3.1</code> and the new app named <code>v1.4.0</code>; the appropriate Virtual Services would look as follows.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-vervice1
spec:
hosts:
- '*'
http:
- name: "v1.3.1"
route:
- destination:
host: service1.namespace.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-service2
spec:
hosts:
- '*'
http:
- name: "v1.4.0"
route:
- destination:
host: service2.namespace.svc.cluster.local
</code></pre>
<hr />
<h2>Approach 2</h2>
<p>In <strong>practice</strong> I would go with approach number 2. For example, you can create 2 versions of your app, in the example below <code>old</code> and <code>new</code>, and then configure a Virtual Service and Destination Rules for them.</p>
<p>The question here would be, why? Because it's easier to manage, at least for me, and it's easy to use canary deployment and mirroring here, more about that below.</p>
<p>Let's say you deployed the new app and you want to send 1% of incoming traffic to it; additionally, you can use mirroring, so every request which goes to the old service will be mirrored to the new service for testing purposes.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-vervice
spec:
hosts:
- '*'
http:
- name: "old"
route:
- destination:
host: service.namespace.svc.cluster.local
subset: v1
weight: 99
mirror:
host: service.namespace.svc.cluster.local
subset: v2
mirror_percent: 100
- name: "new"
route:
- destination:
host: service.namespace.svc.cluster.local
subset: v2
weight: 1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews-destination
spec:
host: service.namespace.svc.cluster.local
subsets:
- name: v1
labels:
version: v1 <--- label on old pod
- name: v2
labels:
version: v2 <--- label on new pod
</code></pre>
<hr />
<h2>Testing new application</h2>
<blockquote>
<p>The client obviously needs to specify to which major version he wants to talk, either by setting a header or by resolving a different service name.</p>
</blockquote>
<p>Actually that depends on the configuration. If you use the above option with <code>new</code> and <code>old</code> versions, then that's what canary deployment (e.g. weighted distribution) is used for: you can specify the percentage of traffic which should be sent to the new version of your app. Of course, you can also match on headers or path prefixes in your Virtual Service so that users can explicitly reach an older or newer version of your app, as shown in the sketch below.</p>
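<p>A minimal sketch of such header-based matching, reusing the <code>v1</code>/<code>v2</code> subsets from the Destination Rule above (the header name <code>x-major-version</code> is only an example taken from the question):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-service-by-version
spec:
  hosts:
    - '*'
  http:
  - match:                        # requests explicitly asking for major version 2
    - headers:
        x-major-version:
          exact: "2"
    route:
    - destination:
        host: service.namespace.svc.cluster.local
        subset: v2
  - route:                        # everything else goes to v1
    - destination:
        host: service.namespace.svc.cluster.local
        subset: v1
</code></pre>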
<h2>Canary Deployment</h2>
<p>As mentioned <a href="https://istio.io/latest/blog/2017/0.1-canary/" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>One of the benefits of the Istio project is that it provides the control needed to deploy canary services. The idea behind canary deployment (or rollout) is to introduce a new version of a service by first testing it using a small percentage of user traffic, and then if all goes well, increase, possibly gradually in increments, the percentage while simultaneously phasing out the old version. If anything goes wrong along the way, we abort and rollback to the previous version. In its simplest form, the traffic sent to the canary version is a randomly selected percentage of requests, but in more sophisticated schemes it can be based on the region, user, or other properties of the request.</p>
<p>Depending on your level of expertise in this area, you may wonder why Istio’s support for canary deployment is even needed, given that platforms like Kubernetes already provide a way to do version rollout and canary deployment. Problem solved, right? Well, not exactly. Although doing a rollout this way works in simple cases, it’s very limited, especially in large scale cloud environments receiving lots of (and especially varying amounts of) traffic, where autoscaling is needed.</p>
<p><strong>istio</strong></p>
<p>With Istio, traffic routing and replica deployment are two completely independent functions. The number of pods implementing services are free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing. This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.</p>
<p>Istio’s routing rules also provide other important advantages; you can easily control fine-grained traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let’s look at deploying the helloworld service and see how simple the problem becomes.</p>
</blockquote>
<p>There is an <a href="https://istio.io/latest/blog/2017/0.1-canary/#enter-istio" rel="nofollow noreferrer">example</a>.</p>
<h2>Mirroring</h2>
<p>Second thing often used to test new version of application is traffic mirroring.</p>
<p>As mentioned <a href="https://istio.io/latest/docs/ops/best-practices/traffic-management/" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>Using Istio, you can use traffic mirroring to duplicate traffic to another service. You can incorporate a traffic mirroring rule as part of a canary deployment pipeline, allowing you to analyze a service's behavior before sending live traffic to it.</p>
</blockquote>
<p>If you're looking for best practices I would recommend to start with this <a href="https://istio.io/latest/blog/2017/0.1-canary/" rel="nofollow noreferrer">tutorial</a> on medium, because it is explained very well here.</p>
<h2>How Traffic Mirroring Works</h2>
<blockquote>
<p>Traffic mirroring works using the steps below:</p>
<ul>
<li><p>You deploy a new version of the application and switch on traffic
mirroring.</p>
</li>
<li><p>The old version responds to requests like before but also sends an asynchronous copy to the new version.</p>
</li>
<li><p>The new version processes the traffic but does not respond to the user.</p>
</li>
<li><p>The operations team monitor the new version and report any issues to the development team.</p>
</li>
</ul>
</blockquote>
<p><a href="https://i.stack.imgur.com/Yg2xK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yg2xK.png" alt="enter image description here" /></a></p>
<blockquote>
<p>As the application processes live traffic, it helps the team uncover issues that they would typically not find in a pre-production environment. You can use monitoring tools, such as Prometheus and Grafana, for recording and monitoring your test results.</p>
</blockquote>
<p>Additionally there is an example with nginx that perfectly shows how it should work.</p>
<p>It is worth mentioning that if you use write APIs, like order or payment, then mirrored traffic will mean write APIs like order multiple times. This topic is described in detail <a href="https://blog.christianposta.com/microservices/advanced-traffic-shadowing-patterns-for-microservices-with-istio-service-mesh/" rel="nofollow noreferrer">here</a> by Christian Posta.</p>
<hr />
<p>Let me know if there is something more you want to discuss.</p>
| Jakub |
<p>I installed minikube in the Azure cloud on an ubuntu machine 18.04. But I do not know how to connect to it through kubectl using real IP of the virtual machine. Using i minikube on virtualbox driver (<a href="https://192.168.99.100:8443" rel="nofollow noreferrer">https://192.168.99.100:8443</a>). Please tell me how to make port forwarding? Thanks. </p>
| Vitalii Fedorenko | <p>I tested it and came up with some solutions.</p>
<ol>
<li><p>The easiest way to make minikube accessible from your local machine can be achieved by using ssh port forwarding (but you need to remember to have ssh session open all the time and its not really what you want because it will be accessible only from your local machine).</p>
<p>You can run:</p>
<pre><code>ssh <user>@<azure_vm_ip> -L 8443:192.168.99.100:8443
</code></pre>
<p>to start port forwarding from your local host to the minikube vm.</p>
<p>You will also need to copy these certificate files from the azure vm's <code>~/.minikube/</code> directory to your local machine:</p>
<pre><code>ca.crt
client.crt
client.key
</code></pre>
<p>also copy <code>.kube/config</code> from the azure vm to your local machine, edit the paths to the certificate files mentioned earlier, and change the server IP address to localhost (a sketch of the edited config follows this list).</p></li>
<li><p>second way to make it accessible (this time allowing for external access) using ssh port forwarding is possible by doing the following:</p>
<p>In file <code>/etc/ssh/sshd_config</code> on azure vm change <code>GatewayPorts</code> to <code>yes</code>, save file and run</p>
<pre><code>systemctl restart sshd
</code></pre>
<p>next, ssh to your azure vm and run:</p>
<pre><code>ssh -R 0.0.0.0:8443:192.168.99.100:8443 localhost
</code></pre>
<p>remember about the certificate files and change the server IP in the <code>.kube/config</code> file to the public IP of your azure vm.</p>
<p>When trying to connect to minikube from your local machine you may see:</p>
<pre><code>$ kubectl get pods
Unable to connect to the server: x509: certificate is valid for 192.168.99.100, 10.96.0.1, 10.0.0.1, not <your_vm_ip>
</code></pre>
<p>So you need to either use <code>--insecure-skip-tls-verify</code> flag or generate new valid certificates (or start minikube with <code>--apiserver-ips=<public_ip></code> and it will generate valid certificate for you).</p>
<p>NOTE: remember to allow ingress traffic to your azure vm on port 8443.</p>
<p>If you don't want to use ssh port forwarding, you can use any kind of proxy, e.g. nginx, that will run on the azure vm and forward requests to the minikube vm.</p></li>
<li><p>Probably the best way. Running without a VM:</p>
<pre><code> sudo minikube start --vm-driver=none --apiserver-ips=<public_ip>
</code></pre>
<p><code>--apiserver-ips</code> is needed to generate appropriate certificates.
<code>--vm-driver=none</code> won't create a vbox vm</p>
<p>Now all you need is to copy certificates to your local machine and provide appropriate server ip in <code>.kube/confg</code> file.</p></li>
</ol>
<p>Let me know if it was helpful.</p>
| Matt |
<p><strong>404 response <code>Method: 1.api_endpoints_gcp_project_cloud_goog.Postoperation failed: NOT_FOUND</code> on google cloud endpoints esp</strong></p>
<p>I'm trying to deploy my API with google cloud endpoints with my backend over GKE. I'm getting this error on the Produced API logs, where shows:</p>
<p><code>Method: 1.api_endpoints_gcp_project_cloud_goog.Postoperation failed: NOT_FOUND</code></p>
<p>and i'm getting a 404 responde from the endpoint.</p>
<p>The backend container is answering correctly, but when i try to post <a href="http://[service-ip]/v1/postoperation" rel="nofollow noreferrer">http://[service-ip]/v1/postoperation</a> i get the 404 error. I'm guessing it's related with the api_method name but i've already changed so it's the same in the openapi.yaml, the gke deployment and the app.py.</p>
<p>I deployed the API service succesfully with this openapi.yaml:</p>
<pre><code>swagger: "2.0"
info:
description: "API rest"
title: "API example"
version: "1.0.0"
host: "api.endpoints.gcp-project.cloud.goog"
basePath: "/v1"
# [END swagger]
consumes:
- "application/json"
produces:
- "application/json"
schemes:
# Uncomment the next line if you configure SSL for this API.
#- "https"
- "http"
paths:
"/postoperation":
post:
description: "Post operation 1"
operationId: "postoperation"
produces:
- "application/json"
responses:
200:
description: "success"
schema:
$ref: "#/definitions/Model"
400:
description: "Error"
parameters:
- description: "Description"
in: body
name: payload
required: true
schema:
$ref: "#/definitions/Resource"
definitions:
Resource:
type: "object"
required:
- "text"
properties:
tipodni:
type: "string"
dni:
type: "string"
text:
type: "string"
Model:
type: "object"
properties:
tipodni:
type: "string"
dni:
type: "string"
text:
type: "string"
mundo:
type: "string"
cluster:
type: "string"
equipo:
type: "string"
complejidad:
type: "string"
</code></pre>
<p>Then i tried to configure the backend and esp with this deploy.yaml and lb-deploy.yaml</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: api-deployment
namespace: development
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: api1
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: api1
spec:
volumes:
- name: google-cloud-key
secret:
secretName: secret-key
containers:
- name: api-container
image: gcr.io/gcp-project/docker-pqr:IMAGE_TAG_PLACEHOLDER
volumeMounts:
- name: google-cloud-key
mountPath: /var/secrets/google
ports:
- containerPort: 5000
- name: esp
image: gcr.io/endpoints-release/endpoints-runtime:1
args: [
"--http_port=8081",
"--backend=127.0.0.1:5000",
"--service=api.endpoints.gcp-project.cloud.goog",
"--rollout_strategy=managed"
]
ports:
- containerPort: 8081
kind: Service
metadata:
name: "api1-lb"
namespace: development
annotations:
cloud.google.com/load-balancer-type: "Internal"
spec:
type: LoadBalancer
# loadBalancerIP: "172.30.33.221"
selector:
app: api1
ports:
- protocol: TCP
port: 80
targetPort: 8081
</code></pre>
<p>my flask app that serves the api, is this app.py</p>
<pre><code>app = Flask(__name__)
categorizador = Categorizador(model_properties.paths)
@app.route('/postoperation', methods=['POST'])
def postoperation():
text = request.get_json().get('text', '')
dni = request.get_json().get('dni', '')
tipo_dni = request.get_json().get('tipo_dni', '')
categoria,subcategoria = categorizador.categorizar(text)
content = {
'tipodni': tipo_dni,
'dni': dni,
'text': text,
'mundo': str(categoria),
'cluster': str(subcategoria),
'equipo': '',
'complejidad': ''
}
return jsonify(content)
</code></pre>
| Gleiwer Montoya | <p>Looks like you need to configure the route in your flask app. Try this:</p>
<pre><code>@app.route('/v1/postoperation', methods=['POST'])
</code></pre>
| Emil Gi |
<p>TL;DR: How can we configure istio sidecar injection/istio-proxy/envoy-proxy/istio egressgateway to allow long living (>3 hours), possibly idle, TCP connections?</p>
<p>Some details:</p>
<p>We're trying to perform a database migration to PostgreSQL which is being triggered by one application which has Spring Boot + Flyway configured, this migration is expected to last ~3 hours.</p>
<p>Our application is deployed inside our kubernetes cluster, which has configured istio sidecar injection. After exactly one hour of running the migration, the connection is always getting closed.</p>
<p>We're sure it's istio-proxy closing the connection as we attempted the migration from a pod without istio sidecar injection and it was running for longer than one hour, however this is not an option going forward as this may imply some downtime in production which we can't consider.</p>
<p>We suspect this should be configurable in istio proxy setting the parameter idle_timeout - which was implemented <a href="https://github.com/istio/istio/pull/13515" rel="noreferrer">here</a>. However this isn't working, or we are not configuring it properly, we're trying to configure this during istio installation by adding <code>--set gateways.istio-ingressgateway.env.ISTIO_META_IDLE_TIMEOUT=5s</code> to our helm template.</p>
| Yayotrón | <p>If you use an istio version higher than 1.7 you might try using an <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="noreferrer">envoy filter</a> to make it work. There is an answer and example on <a href="https://github.com/istio/istio/issues/24387#issuecomment-713600319" rel="noreferrer">github</a> provided by @ryant1986.</p>
<blockquote>
<p>We ran into the same problem on 1.7, but we noticed that the ISTIO_META_IDLE_TIMEOUT setting was only getting picked up on the OUTBOUND side of things, not the INBOUND. By adding an additional filter that applied to the INBOUND side of the request, we were able to successfully increase the timeout (we used 24 hours)</p>
</blockquote>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: listener-timeout-tcp
namespace: istio-system
spec:
configPatches:
- applyTo: NETWORK_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: envoy.filters.network.tcp_proxy
patch:
operation: MERGE
value:
name: envoy.filters.network.tcp_proxy
typed_config:
'@type': type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy
idle_timeout: 24h
</code></pre>
<blockquote>
<p>We also created a similar filter to apply to the passthrough cluster (so that timeouts still apply to external traffic that we don't have service entries for), since the config wasn't being picked up there either.</p>
</blockquote>
| Jakub |
<p>OVERVIEW:: I am studying for the Kubernetes Administrator certification. To complete the training course, I created a dual node Kubernetes cluster on Google Cloud, 1 master and 1 slave. As I don't want to leave the instances alive all the time, I took snapshots of them to deploy new instances with the Kubernetes cluster already setup. I am aware that I would need to update the ens4 ip used by kubectl, as this will have changed, which I did.</p>
<p>ISSUE:: When I run "kubectl get pods --all-namespaces" I get the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?"</p>
<p>QUESTION:: Would anyone have had similar issues and know if its possible to recreate a Kubernetes cluster from snapshots?</p>
<p>Adding -v=10 to command, the url matches info in .kube/config file</p>
<blockquote>
<p>kubectl get pods --all-namespaces -v=10
I0214 17:11:35.317678 6246 loader.go:375] Config loaded from file: /home/student/.kube/config
I0214 17:11:35.321941 6246 round_trippers.go:423] curl -k -v -XGET -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" -H "Accept: application/json, <em>/</em>" '<a href="https://k8smaster:6443/api?timeout=32s" rel="nofollow noreferrer">https://k8smaster:6443/api?timeout=32s</a>'
I0214 17:11:35.333308 6246 round_trippers.go:443] GET <a href="https://k8smaster:6443/api?timeout=32s" rel="nofollow noreferrer">https://k8smaster:6443/api?timeout=32s</a> in 11 milliseconds
I0214 17:11:35.333335 6246 round_trippers.go:449] Response Headers:
I0214 17:11:35.333422 6246 cached_discovery.go:121] skipped caching discovery info due to Get <a href="https://k8smaster:6443/api?timeout=32s" rel="nofollow noreferrer">https://k8smaster:6443/api?timeout=32s</a>: dial tcp 10.128.0.7:6443: connect: connection refused
I0214 17:11:35.333858 6246 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, <em>/</em>" -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" '<a href="https://k8smaster:6443/api?timeout=32s" rel="nofollow noreferrer">https://k8smaster:6443/api?timeout=32s</a>'
I0214 17:11:35.334234 6246 round_trippers.go:443] GET <a href="https://k8smaster:6443/api?timeout=32s" rel="nofollow noreferrer">https://k8smaster:6443/api?timeout=32s</a> in 0 milliseconds
I0214 17:11:35.334254 6246 round_trippers.go:449] Response Headers:
I0214 17:11:35.334281 6246 cached_discovery.go:121] skipped caching discovery info due to Get <a href="https://k8smaster:6443/api?timeout=32s" rel="nofollow noreferrer">https://k8smaster:6443/api?timeout=32s</a>: dial tcp 10.128.0.7:6443: connect: connection refused
I0214 17:11:35.334303 6246 shortcut.go:89] Error loading discovery information: Get <a href="https://k8smaster:6443/api?timeout=32s" rel="nofollow noreferrer">https://k8smaster:6443/api?timeout=32s</a>: dial tcp 10.128.0.7:6443: connect: connection refused</p>
</blockquote>
| liam08 | <p>I replicated your issue and wrote down this step-by-step debugging process so you can see what my thinking was.</p>
<p>I created 2 node cluster (master + worker) with kubeadm and made a snapshot.
Then I deleted all nodes and recreated them from snapshots.</p>
<p>After recreating master node from snapshot I started seeing the same error you are seeing:</p>
<pre><code>@kmaster ~]$ kubectl get po -v=10
I0217 11:04:38.397823 3372 loader.go:375] Config loaded from file: /home/user/.kube/config
I0217 11:04:38.398909 3372 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.17.3 (linux/amd64) kubernetes/06ad960" 'https://10.156.0.20:6443/api?timeout=32s'
^C
</code></pre>
<p>The connection was hanging so I interrupted it (ctrl+c).
The first thing I noticed was that the IP address kubectl was connecting to was different from the node IP, so I modified the <code>.kube/config</code> file, providing the proper IP.</p>
<p>After doing this, here is what running kubectl showed:</p>
<pre><code>$ kubectl get po -v=10
I0217 11:26:57.020744 15929 loader.go:375] Config loaded from file: /home/user/.kube/config
...
I0217 11:26:57.025155 15929 helpers.go:221] Connection error: Get https://10.156.0.23:6443/api?timeout=32s: dial tcp 10.156.0.23:6443: connect: connection refused
F0217 11:26:57.025201 15929 helpers.go:114] The connection to the server 10.156.0.23:6443 was refused - did you specify the right host or port?
</code></pre>
<p>As you see, connection to apiserver was beeing refused so I checked if apiserver was running:</p>
<pre><code>$ sudo docker ps -a | grep apiserver
5e957ff48d11 90d27391b780 "kube-apiserver --ad…" 24 seconds ago Exited (2) 3 seconds ago k8s_kube-apiserver_kube-apiserver-kmaster_kube-system_997514ff25ec38012de6a5be7c43b0ae_14
d78e179f1565 k8s.gcr.io/pause:3.1 "/pause" 26 minutes ago Up 26 minutes k8s_POD_kube-apiserver-kmaster_kube-system_997514ff25ec38012de6a5be7c43b0ae_1
</code></pre>
<p>api-server was exiting for some reason.
I checked its logs (I am only including relevant logs for readability):</p>
<pre><code>$ sudo docker logs 5e957ff48d11
...
W0217 11:30:46.710541 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
panic: context deadline exceeded
</code></pre>
<p>Notice apiserver was trying to connect to etcd (notice port: 2379) and receiving connection refused.
My first guess was etcd wasn't running, so I checked etcd container:</p>
<pre><code>$ sudo docker ps -a | grep etcd
4a249cb0743b 303ce5db0e90 "etcd --advertise-cl…" 2 minutes ago Exited (1) 2 minutes ago k8s_etcd_etcd-kmaster_kube-system_9018aafee02ebb028a7befd10063ec1e_19
b89b7e7227de k8s.gcr.io/pause:3.1 "/pause" 30 minutes ago Up 30 minutes k8s_POD_etcd-kmaster_kube-system_9018aafee02ebb028a7befd10063ec1e_1
</code></pre>
<p>I was right: <em>Exited (1) 2 minutes ago</em>. I checked its logs:</p>
<pre><code>$ sudo docker logs 4a249cb0743b
...
2020-02-17 11:34:31.493215 C | etcdmain: listen tcp 10.156.0.20:2380: bind: cannot assign requested address
</code></pre>
<p>etcd was trying to bind with old IP address.</p>
<p>I modified <code>/etc/kubernetes/manifests/etcd.yaml</code> and changed the old IP address to the new IP everywhere in the file.</p>
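<p>One way to do that replacement in one go (using the old and new IPs from this example) could be:</p>
<pre><code>sudo sed -i 's/10.156.0.20/10.156.0.23/g' /etc/kubernetes/manifests/etcd.yaml
</code></pre>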
<p>A quick <code>sudo docker ps | grep etcd</code> showed it was running.
After a while the apiserver also started running.</p>
<p>Then I tried running kubectl:</p>
<pre><code>$ kubectl get po
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.156.0.20, not 10.156.0.23
</code></pre>
<p>Invalid apiserver certificate. The SSL certificate was generated for the old IP, so that would mean I need to generate a new certificate with the new IP.</p>
<pre><code>$ sudo kubeadm init phase certs apiserver
...
[certs] Using existing apiserver certificate and key on disk
</code></pre>
<p>That's not what I expected. I wanted to generate new certificates, not use old ones.</p>
<p>I deleted old certificates:</p>
<pre><code>$ sudo rm /etc/kubernetes/pki/apiserver.crt \
/etc/kubernetes/pki/apiserver.key
</code></pre>
<p>And tried to generate certificates one more time:</p>
<pre><code>$ sudo kubeadm init phase certs apiserver
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.156.0.23]
</code></pre>
<p>Looks good. Now let's try using kubectl:</p>
<pre><code>$ kubectl get no
NAME STATUS ROLES AGE VERSION
instance-21 Ready master 102m v1.17.3
instance-22 Ready <none> 95m v1.17.3
</code></pre>
<p>As you can see now its working.</p>
| Matt |
<p>I am working on setting up CI CD pipeline for Spring boot application on GKE.
While doing the CI using Cloud Build on GCP, Build fail while pushing the updated manifest to candidate branch. "Failed at Step 5"
I could see below logs in the cloud build</p>
<pre><code>Finished Step #4 - "Generate manifest"
Starting Step #5 - "Push manifest"
Step #5 - "Push manifest": Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #5 - "Push manifest": + cd eureka-cloudbuild-env
Step #5 - "Push manifest": + git add kubernetes.yaml
Step #5 - "Push manifest": + git log --format=%an <%ae> -n 1 HEAD
Step #5 - "Push manifest": On branch candidate
Step #5 - "Push manifest": Your branch is up to date with 'origin/candidate'.
Step #5 - "Push manifest":
Step #5 - "Push manifest": nothing to commit, working tree clean
Step #5 - "Push manifest": + git commit -m Deploying image gcr.io/amcartecom/eureka-cloudbuild:v2
Step #5 - "Push manifest": Built from commit 34dfdd69af726f80d3b91b29127e0c77cc2a83cf of repository eureka-cloudbuild-app
Step #5 - "Push manifest": Author: root <[email protected]>
Finished Step #5 - "Push manifest"
ERROR
ERROR: build step 5 "gcr.io/cloud-builders/gcloud" failed: exit status 1
</code></pre>
<p>To set up this pipeline, I followed all the guidelines mentioned at <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build</a></p>
| nsingla85 | <p>This tutorial's code assumes that the build is triggered when there are actual changes to the source code, and it looks like you are trying to trigger it manually from the console.
When there are no changes, step 5 of cloudbuild.yaml exits with an error status because of the empty git commit: <code>git commit</code> by default exits with an error status when there is nothing to commit.</p>
<p>You can use the <code>--allow-empty</code> flag with it according to the <a href="https://git-scm.com/docs/git-commit#Documentation/git-commit.txt---allow-empty" rel="nofollow noreferrer">man page</a>, but keep in mind that it will create an actual commit even if it is the same as the previous one.
If you don't want such a commit but just want to suppress the error, you can explicitly add <code>|| exit 0</code> at the end of step 5 to ignore errors.</p>
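<p>As a sketch (not the tutorial's exact step), the two variants look like this inside the step's shell script; the commit message is only illustrative:</p>
<pre><code># variant 1: always create a commit, even when nothing changed
git commit --allow-empty -m "Deploying image"

# variant 2: keep the original commit but ignore the "nothing to commit" error
git commit -m "Deploying image" || exit 0
</code></pre>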
| Emil Gi |
<p>I have three nodes in my cluster who are behind a firewall I do not control. This firewall has a public IP connected to it and can forward traffic to my kubernetes node. It has port 80 and 443 opened to my node.</p>
<p>Initially, I used the public IP in the MetalLB config like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 186.xx.xx.xx-186.xx.xx.xx
</code></pre>
<p>But after reading <a href="https://stackoverflow.com/questions/63935449/make-nginx-ingress-controller-publicly-available-using-metallb">this answer of another question</a> I'm guessing it is invalid since the IP used by MetalLB needs to be on the same subnet as the nodes? And they are all using private IPs.</p>
<p>When I tested locally a HTTP server listening on port 80 and ran it on the actual node (not in the cluster) then I was able get a response on the public IP from outside the network.</p>
<p>So my question is:
<strong>How do I make MetalLB or Nginx ingress controller listen on port 80 and 443 for incoming request?</strong></p>
<p>When using <code>curl 186.xx.xx.xx:80</code> on of the nodes in the cluster then I'm receiving a response from the nginx ingress controller. But not when doing it outside of the node.</p>
| TryingMyBest | <p>Answering the question:</p>
<blockquote>
<p>How can I create a setup with Kubernetes cluster and separate firewall to allow users to connect to my <code>Nginx Ingress</code> controller which is exposing my application.</p>
</blockquote>
<p>Assuming the setup is based on a Kubernetes cluster provisioned in the internal network and there is a firewall between the cluster and the "Internet", the following points should be addressed (there could be some variations, which I will address):</p>
<ul>
<li><code>Metallb</code> provisioned on Kubernetes cluster (assuming it's a bare metal self-managed solution)</li>
<li><code>Nginx Ingress</code> controller with modified <code>Service</code></li>
<li><code>Port-forwarding</code> set on the firewall</li>
</ul>
<hr />
<p>A <code>Service</code> of type <code>LoadBalancer</code>, for the most part (there are some exceptions), is a resource that requires a cloud provider to assign an <strong><code>External IP</code></strong> address for your <code>Service</code>.</p>
<blockquote>
<p>A side note!</p>
<p>More reference can be found here:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">Kubernetes.io: Docs: Tasks: Access application cluster: Create external load balancer</a></em></li>
</ul>
</blockquote>
<p>For solutions that are on premise based, there is a tool called <code>metallb</code>:</p>
<blockquote>
<p>Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.</p>
<p>Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.</p>
<p>MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.</p>
<p><em><a href="https://metallb.universe.tf/" rel="noreferrer">Metallb.universe.tf</a></em></p>
</blockquote>
<p>Following the guide on the <a href="https://metallb.universe.tf/installation/" rel="noreferrer">installation/configuration of metallb</a>, there will be a configuration for a <strong>single internal IP address</strong> that the firewall will send the traffic to:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: single-ip # <-- HERE
protocol: layer2
addresses:
- 10.0.0.100/32 # <-- HERE
</code></pre>
<p>This IP address will be associated with the <code>Service</code> of type <code>LoadBalancer</code> of <code>Nginx Ingress</code> controller.</p>
<hr />
<p>The changes required with the <code>Nginx Ingress</code> manifest (<code>Service</code> part):</p>
<ul>
<li><em><a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml" rel="noreferrer">Raw.githubusercontent.com: Kubernetes: Ingress nginx: Controller: ... : deploy.yaml</a></em></li>
</ul>
<pre><code># Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
metallb.universe.tf/address-pool: single-ip # <-- IMPORTANT
labels:
# <-- OMMITED -->
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
# <-- OMMITED -->
</code></pre>
<p>Above changes in the <code>YAML</code> manifest will ensure that the address that was configured in a <code>metallb</code> <code>ConfigMap</code> will be used with the <code>Service</code>.</p>
<blockquote>
<p>A side note!</p>
<p>You can omit <code>metallb</code> and use a <code>Service</code> of type <code>NodePort</code>, but this carries some disadvantages.</p>
</blockquote>
<hr />
<p>The last part is to set the port-forwarding on the firewall. The rule should be following:</p>
<ul>
<li><code>FIREWALL_IP:80</code> -> <code>SINGLE_IP:80</code></li>
<li><code>FIREWALL_IP:443</code> -> <code>SINGLE_IP:443</code></li>
</ul>
<p>After that you should be able to communicate with your <code>Nginx Ingress</code> controller by:</p>
<ul>
<li><code>$ curl FIREWALL_IP:80</code></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Service</a></em></li>
</ul>
| Dawid Kruk |
<p>I need to use relative volume host path in kubernetes config.</p>
<p>Is it possible to get the working directory in helm?</p>
<p>Something like $(pwd), but inside helm templates.</p>
| Dennis Meissel | <p>You can view the working directory by running, for example:</p>
<p><code>helm install . --dry-run --debug --set workingDirectory="${PWD}"</code></p>
<p>It will print the working directory under <code>USER-SUPPLIED VALUES:</code> and <code>COMPUTED VALUES</code> in the last line.</p>
| kool |
<p>I was wondering if it was possible to bind my minikube network to my <code>host</code> network.</p>
<p>I tried:</p>
<pre><code>minikube start --memory=10000 --cpus=4 --vm-driver=docker --kubernetes-version=v1.19.6 --mount --mount-string="/usr/local/citizennet/db:/usr/local/citizennet/db" --network="host"
</code></pre>
<p>But I'm getting the following error:</p>
<pre><code>❗ Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create network host 192.168.49.0/24: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true host: exit status 1
stdout:
stderr:
Error response from daemon: operation is not permitted on predefined host network
</code></pre>
<p>I was able to do that by using <code>haproxy</code> but I would like to know if there is a cleaner way of doing that.
My minikube is hosted on an EC2 instance and I would like to forward everything to my minikube directly. Or at least the HTTP/HTTPS requests.</p>
<p>Thanks!</p>
| Jérémy Octeau | <p>I haven't found a way to expose the <code>minikube</code> instance with <code>--driver=docker</code> to the host network (apart from <code>$ kubectl port-forward svc/svc-name --address=0.0.0.0 local_port:pod_port</code> run on the host).</p>
<p>It produces the same error as original poster is experiencing:</p>
<pre><code>Error response from daemon: operation is not permitted on predefined host network
</code></pre>
<p>Acknowledging following comment:</p>
<blockquote>
<p>the problem is that I want to use the <code>ingress</code> addon and this addon is not compatible anymore with <code>--driver=none</code>.</p>
</blockquote>
<p>Instead of using <code>--driver=docker</code> which will place all of the resources in the Docker container, you can opt for a <code>--driver=none</code> which will provision all of your resources directly on the <code>VM</code>. You will be able to directly query the resources from other network devices.</p>
<p>For now <code>minikube</code> version <code>v1.17.1</code> does not allow using the <code>ingress</code> addon with <code>--driver=none</code>, but I found a way it can be provisioned. I've included this example at the end of this answer. Please treat it as a workaround.</p>
<p><strong>This issue (inability to use <code>ingress</code> addon on <code>--driver=none</code>) is already addressed on github</strong>:</p>
<ul>
<li><em><a href="https://github.com/kubernetes/minikube/issues/9322" rel="nofollow noreferrer">Github.com: Kubernetes: Minikube: Issues: Ingress addon stopped to work with 'none' VM driver starting from v1.12.x</a></em></li>
</ul>
<hr />
<p>Talking from the perspective of exposing <code>minikube</code>:</p>
<p><strong>As it's intended to be accessed from external sources, I do recommend trying out other solutions that will, subjectively speaking, have an easier time exposing your workloads to external sources.</strong> There are many available tools that spawn Kubernetes clusters and you can check which one suits your needs the most. <strong>Some</strong> of them are:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">Kubeadm</a></em></li>
<li><em><a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Kubespray</a></em></li>
<li><em><a href="https://microk8s.io/" rel="nofollow noreferrer">MicroK8S</a></em></li>
</ul>
<hr />
<h2>Deploying <code>nginx-ingress</code> with <code>minikube --driver=none</code></h2>
<p>As stated previously, please treat it as a workaround.</p>
<blockquote>
<p>A side note!</p>
<p>Take a look on how your <code>NGINX Ingress</code> controller is configured with <code>minikube addons enable ingress</code> as it will be pretty much mimicked in this example.</p>
</blockquote>
<p>Steps:</p>
<ul>
<li><code>Download</code> the <code>nginx-ingress</code> <code>YAML</code> manifest:
<ul>
<li>Modify the <code>Deployment</code> in the manifest</li>
<li>Delete the <code>Service</code> from manifest</li>
</ul>
</li>
<li>Apply and check</li>
</ul>
<h3><code>Download</code> the <code>nginx-ingress</code> <code>YAML</code> manifest</h3>
<p>You can use following manifest:</p>
<ul>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress Nginx: Deploy</a></em> (for example <code>GKE</code> manifest could be downloaded)</li>
</ul>
<h3>Modify the <code>Deployment</code> in the manifest</h3>
<p>As I said previously, what is happening when you run <code>minikube addons enable ingress</code> could prove useful. The resources deployed have some clues on how you need to modify it.</p>
<ul>
<li>Add the <code>hostPort</code> for <code>HTTP</code> and <code>HTTPS</code> communication:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> ports:
- name: http
hostPort: 80 # <-- IMPORTANT, ADD THIS
containerPort: 80
protocol: TCP
- name: https
hostPort: 443 # <-- IMPORTANT, ADD THIS
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
</code></pre>
<ul>
<li>Delete the <code>--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller</code>:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller # <-- DELETE THIS
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
</code></pre>
<h3>Delete the <code>Service</code> from manifest</h3>
<p>You will need to entirely delete the <code>Service</code> of type <code>LoadBalancer</code> named: <code>ingress-nginx</code> from the manifest as you will already be using <code>hostPort</code>.</p>
<p>After this steps you should be able to use <code>Ingress</code> resources and communicate with them on <code>VM_IP</code>:<code>80</code>/<code>443</code>.</p>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
<li><em><a href="https://minikube.sigs.k8s.io/docs/drivers/none/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Drivers: None</a></em></li>
</ul>
| Dawid Kruk |
<p>I have a <code>kubernetes</code> cluster and I have set up an <code>NFS</code> server as persistent volume for a <code>mongodb</code> deployment.</p>
<p>And I have set the <code>PeristentVolume</code> and <code>PersistentVolumeClaim</code> as below:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
name: mynfs
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
nfs:
server: <nfs-server-ip>
path: "/srv/nfs/mydata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
</code></pre>
<p>Everything works fine but the only problem is, I can't run more than 1 <code>mongodb</code> pods because I get the following error.</p>
<blockquote>
<p>{"t":{"$date":"2020-10-15T15:16:39.140+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory"}}</p>
</blockquote>
<p>That pod is always in <code>CrashLoopBackOff</code> and restarts and again to the same status.</p>
<p><strong>I think the problem here is the same volume path mentioned in the <code>mongodb</code> deployment is trying to access by the two pods at the same time and when one pod is already have the lock, other pod failed.</strong></p>
<p>Here's the <code>mongodb</code> deployment yaml.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
name: mongodb
spec:
replicas: 2
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-password
volumeMounts:
- name: data
mountPath: /data/db
volumes:
- name: data
persistentVolumeClaim:
claimName: task-pv-claim
</code></pre>
<p>can someone please help me fix this?</p>
<p>Thank you.</p>
| Jananath Banuka | <h2>Issue</h2>
<p>This log entry already tells you what is the issue</p>
<blockquote>
<p>{"t":{"$date":"2020-10-15T15:16:39.140+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory"}}</p>
</blockquote>
<p>All members access the same volume and data.</p>
<p>AFAIK you cannot have multiple instances of MongoDB pointing to the same path, each MongoDB instance needs to have exclusive access to its own data files.</p>
<h2>Solution</h2>
<p>You can run your application as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage" rel="nofollow noreferrer">volumeClaimTemplate</a>, which ensures that each replica mounts its own volume; a minimal sketch follows. There is a great <a href="https://stackoverflow.com/questions/50237572/kubernetes-trouble-with-statefulset-and-3-persistentvolumes/50249935#50249935">answer</a> about that.</p>
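<p>A minimal sketch of what that could look like for the Deployment from the question (env/secret settings omitted for brevity, storage class taken from the question). Note that each replica then gets its own PersistentVolumeClaim (<code>data-mongodb-0</code>, <code>data-mongodb-1</code>, ...), so you would need a matching PersistentVolume per claim instead of one shared NFS volume:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb          # requires a headless Service with this name
  replicas: 2
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gp2-manual
      resources:
        requests:
          storage: 1Gi
</code></pre>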
| Jakub |
<p>I am creating a 3-node cluster inside an Ubuntu VM running on my Mac using <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">Kind</a>. They work as they should:</p>
<pre><code>NAME STATUS ROLES AGE VERSION
kind-control-plane Ready master 20h v1.17.0
kind-worker Ready <none> 20h v1.17.0
kind-worker2 Ready <none> 20h v1.17.0
</code></pre>
<p>I have installed Consul using the <a href="https://www.consul.io/docs/platform/k8s/run.html" rel="nofollow noreferrer">official tutorial</a> with the default Helm chart. Now, the problem is that the consul pods are either running or pending and none of them are ready:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
busybox-6cd57fd969-9tzmf 1/1 Running 0 17h
hashicorp-consul-hgxdr 0/1 Running 0 18h
hashicorp-consul-server-0 0/1 Running 0 18h
hashicorp-consul-server-1 0/1 Running 0 18h
hashicorp-consul-server-2 0/1 Pending 0 18h
hashicorp-consul-vmsmt 0/1 Running 0 18h
</code></pre>
<p>Here is the full description of the pods:</p>
<pre><code>Name: busybox-6cd57fd969-9tzmf
Namespace: default
Priority: 0
Node: kind-worker2/172.17.0.4
Start Time: Tue, 14 Jan 2020 17:45:03 +0800
Labels: pod-template-hash=6cd57fd969
run=busybox
Annotations: <none>
Status: Running
IP: 10.244.2.11
IPs:
IP: 10.244.2.11
Controlled By: ReplicaSet/busybox-6cd57fd969
Containers:
busybox:
Container ID: containerd://710eba6a12607021098e3c376637476cd85faf86ac9abcf10f191126dc37026b
Image: busybox
Image ID: docker.io/library/busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
Port: <none>
Host Port: <none>
Args:
sh
State: Running
Started: Tue, 14 Jan 2020 21:00:50 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zszqr (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-zszqr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zszqr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Name: hashicorp-consul-hgxdr
Namespace: default
Priority: 0
Node: kind-worker2/172.17.0.4
Start Time: Tue, 14 Jan 2020 17:13:57 +0800
Labels: app=consul
chart=consul-helm
component=client
controller-revision-hash=6bc54657b6
hasDNS=true
pod-template-generation=1
release=hashicorp
Annotations: consul.hashicorp.com/connect-inject: false
Status: Running
IP: 10.244.2.10
IPs:
IP: 10.244.2.10
Controlled By: DaemonSet/hashicorp-consul
Containers:
consul:
Container ID: containerd://2209cfeaa740e3565213de6d0653dabbe9a8cbf1ffe085352a8e9d3a2d0452ec
Image: consul:1.6.2
Image ID: docker.io/library/consul@sha256:a167e7222c84687c3e7f392f13b23d9f391cac80b6b839052e58617dab714805
Ports: 8500/TCP, 8502/TCP, 8301/TCP, 8301/UDP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
Host Ports: 8500/TCP, 8502/TCP, 0/TCP, 0/UDP, 0/TCP, 0/TCP, 0/TCP, 0/UDP
Command:
/bin/sh
-ec
CONSUL_FULLNAME="hashicorp-consul"
exec /bin/consul agent \
-node="${NODE}" \
-advertise="${ADVERTISE_IP}" \
-bind=0.0.0.0 \
-client=0.0.0.0 \
-node-meta=pod-name:${HOSTNAME} \
-hcl="ports { grpc = 8502 }" \
-config-dir=/consul/config \
-datacenter=dc1 \
-data-dir=/consul/data \
-retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-1.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-2.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-domain=consul
State: Running
Started: Tue, 14 Jan 2020 20:58:29 +0800
Ready: False
Restart Count: 0
Readiness: exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
grep -E '".+"'
] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
ADVERTISE_IP: (v1:status.podIP)
NAMESPACE: default (v1:metadata.namespace)
NODE: (v1:spec.nodeName)
Mounts:
/consul/config from config (rw)
/consul/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hashicorp-consul-client-token-4r5tv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hashicorp-consul-client-config
Optional: false
hashicorp-consul-client-token-4r5tv:
Type: Secret (a volume populated by a Secret)
SecretName: hashicorp-consul-client-token-4r5tv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 96s (x3206 over 14h) kubelet, kind-worker2 Readiness probe failed:
Name: hashicorp-consul-server-0
Namespace: default
Priority: 0
Node: kind-worker2/172.17.0.4
Start Time: Tue, 14 Jan 2020 17:13:57 +0800
Labels: app=consul
chart=consul-helm
component=server
controller-revision-hash=hashicorp-consul-server-98f4fc994
hasDNS=true
release=hashicorp
statefulset.kubernetes.io/pod-name=hashicorp-consul-server-0
Annotations: consul.hashicorp.com/connect-inject: false
Status: Running
IP: 10.244.2.9
IPs:
IP: 10.244.2.9
Controlled By: StatefulSet/hashicorp-consul-server
Containers:
consul:
Container ID: containerd://72b7bf0e81d3ed477f76b357743e9429325da0f38ccf741f53c9587082cdfcd0
Image: consul:1.6.2
Image ID: docker.io/library/consul@sha256:a167e7222c84687c3e7f392f13b23d9f391cac80b6b839052e58617dab714805
Ports: 8500/TCP, 8301/TCP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/UDP
Command:
/bin/sh
-ec
CONSUL_FULLNAME="hashicorp-consul"
exec /bin/consul agent \
-advertise="${POD_IP}" \
-bind=0.0.0.0 \
-bootstrap-expect=3 \
-client=0.0.0.0 \
-config-dir=/consul/config \
-datacenter=dc1 \
-data-dir=/consul/data \
-domain=consul \
-hcl="connect { enabled = true }" \
-ui \
-retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-1.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-2.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-server
State: Running
Started: Tue, 14 Jan 2020 20:58:27 +0800
Ready: False
Restart Count: 0
Readiness: exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
grep -E '".+"'
] delay=5s timeout=5s period=3s #success=1 #failure=2
Environment:
POD_IP: (v1:status.podIP)
NAMESPACE: default (v1:metadata.namespace)
Mounts:
/consul/config from config (rw)
/consul/data from data-default (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hashicorp-consul-server-token-hhdxc (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data-default:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-default-hashicorp-consul-server-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hashicorp-consul-server-config
Optional: false
hashicorp-consul-server-token-hhdxc:
Type: Secret (a volume populated by a Secret)
SecretName: hashicorp-consul-server-token-hhdxc
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 97s (x10686 over 14h) kubelet, kind-worker2 Readiness probe failed:
Name: hashicorp-consul-server-1
Namespace: default
Priority: 0
Node: kind-worker/172.17.0.3
Start Time: Tue, 14 Jan 2020 17:13:57 +0800
Labels: app=consul
chart=consul-helm
component=server
controller-revision-hash=hashicorp-consul-server-98f4fc994
hasDNS=true
release=hashicorp
statefulset.kubernetes.io/pod-name=hashicorp-consul-server-1
Annotations: consul.hashicorp.com/connect-inject: false
Status: Running
IP: 10.244.1.8
IPs:
IP: 10.244.1.8
Controlled By: StatefulSet/hashicorp-consul-server
Containers:
consul:
Container ID: containerd://c1f5a88e30e545c75e58a730be5003cee93c823c21ebb29b22b79cd151164a15
Image: consul:1.6.2
Image ID: docker.io/library/consul@sha256:a167e7222c84687c3e7f392f13b23d9f391cac80b6b839052e58617dab714805
Ports: 8500/TCP, 8301/TCP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/UDP
Command:
/bin/sh
-ec
CONSUL_FULLNAME="hashicorp-consul"
exec /bin/consul agent \
-advertise="${POD_IP}" \
-bind=0.0.0.0 \
-bootstrap-expect=3 \
-client=0.0.0.0 \
-config-dir=/consul/config \
-datacenter=dc1 \
-data-dir=/consul/data \
-domain=consul \
-hcl="connect { enabled = true }" \
-ui \
-retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-1.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-2.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-server
State: Running
Started: Tue, 14 Jan 2020 20:58:36 +0800
Ready: False
Restart Count: 0
Readiness: exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
grep -E '".+"'
] delay=5s timeout=5s period=3s #success=1 #failure=2
Environment:
POD_IP: (v1:status.podIP)
NAMESPACE: default (v1:metadata.namespace)
Mounts:
/consul/config from config (rw)
/consul/data from data-default (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hashicorp-consul-server-token-hhdxc (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data-default:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-default-hashicorp-consul-server-1
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hashicorp-consul-server-config
Optional: false
hashicorp-consul-server-token-hhdxc:
Type: Secret (a volume populated by a Secret)
SecretName: hashicorp-consul-server-token-hhdxc
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 95s (x10683 over 14h) kubelet, kind-worker Readiness probe failed:
Name: hashicorp-consul-server-2
Namespace: default
Priority: 0
Node: <none>
Labels: app=consul
chart=consul-helm
component=server
controller-revision-hash=hashicorp-consul-server-98f4fc994
hasDNS=true
release=hashicorp
statefulset.kubernetes.io/pod-name=hashicorp-consul-server-2
Annotations: consul.hashicorp.com/connect-inject: false
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/hashicorp-consul-server
Containers:
consul:
Image: consul:1.6.2
Ports: 8500/TCP, 8301/TCP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/UDP
Command:
/bin/sh
-ec
CONSUL_FULLNAME="hashicorp-consul"
exec /bin/consul agent \
-advertise="${POD_IP}" \
-bind=0.0.0.0 \
-bootstrap-expect=3 \
-client=0.0.0.0 \
-config-dir=/consul/config \
-datacenter=dc1 \
-data-dir=/consul/data \
-domain=consul \
-hcl="connect { enabled = true }" \
-ui \
-retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-1.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-2.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-server
Readiness: exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
grep -E '".+"'
] delay=5s timeout=5s period=3s #success=1 #failure=2
Environment:
POD_IP: (v1:status.podIP)
NAMESPACE: default (v1:metadata.namespace)
Mounts:
/consul/config from config (rw)
/consul/data from data-default (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hashicorp-consul-server-token-hhdxc (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
data-default:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-default-hashicorp-consul-server-2
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hashicorp-consul-server-config
Optional: false
hashicorp-consul-server-token-hhdxc:
Type: Secret (a volume populated by a Secret)
SecretName: hashicorp-consul-server-token-hhdxc
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 63s (x434 over 18h) default-scheduler 0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match pod affinity/anti-affinity.
Name: hashicorp-consul-vmsmt
Namespace: default
Priority: 0
Node: kind-worker/172.17.0.3
Start Time: Tue, 14 Jan 2020 17:13:57 +0800
Labels: app=consul
chart=consul-helm
component=client
controller-revision-hash=6bc54657b6
hasDNS=true
pod-template-generation=1
release=hashicorp
Annotations: consul.hashicorp.com/connect-inject: false
Status: Running
IP: 10.244.1.9
IPs:
IP: 10.244.1.9
Controlled By: DaemonSet/hashicorp-consul
Containers:
consul:
Container ID: containerd://d502870f3476ea074b059361bc52a2c68ced551f5743b8448926bdaa319aabb0
Image: consul:1.6.2
Image ID: docker.io/library/consul@sha256:a167e7222c84687c3e7f392f13b23d9f391cac80b6b839052e58617dab714805
Ports: 8500/TCP, 8502/TCP, 8301/TCP, 8301/UDP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
Host Ports: 8500/TCP, 8502/TCP, 0/TCP, 0/UDP, 0/TCP, 0/TCP, 0/TCP, 0/UDP
Command:
/bin/sh
-ec
CONSUL_FULLNAME="hashicorp-consul"
exec /bin/consul agent \
-node="${NODE}" \
-advertise="${ADVERTISE_IP}" \
-bind=0.0.0.0 \
-client=0.0.0.0 \
-node-meta=pod-name:${HOSTNAME} \
-hcl="ports { grpc = 8502 }" \
-config-dir=/consul/config \
-datacenter=dc1 \
-data-dir=/consul/data \
-retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-1.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-retry-join=${CONSUL_FULLNAME}-server-2.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
-domain=consul
State: Running
Started: Tue, 14 Jan 2020 20:58:35 +0800
Ready: False
Restart Count: 0
Readiness: exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
grep -E '".+"'
] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
ADVERTISE_IP: (v1:status.podIP)
NAMESPACE: default (v1:metadata.namespace)
NODE: (v1:spec.nodeName)
Mounts:
/consul/config from config (rw)
/consul/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hashicorp-consul-client-token-4r5tv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hashicorp-consul-client-config
Optional: false
hashicorp-consul-client-token-4r5tv:
Type: Secret (a volume populated by a Secret)
SecretName: hashicorp-consul-client-token-4r5tv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 88s (x3207 over 14h) kubelet, kind-worker Readiness probe failed:
</code></pre>
<p>For the sake of completeness here is my <code>kubelet</code> status:</p>
<pre><code> Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2020-01-15 10:59:06 +08; 1h 5min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 11910 (kubelet)
Tasks: 17
Memory: 50.3M
CPU: 1min 16.431s
CGroup: /system.slice/kubelet.service
└─11910 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
Jan 15 12:04:41 ubuntu kubelet[11910]: E0115 12:04:41.610779 11910 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message
Jan 15 12:04:42 ubuntu kubelet[11910]: W0115 12:04:42.370702 11910 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 15 12:04:46 ubuntu kubelet[11910]: E0115 12:04:46.612639 11910 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message
Jan 15 12:04:47 ubuntu kubelet[11910]: W0115 12:04:47.371621 11910 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 15 12:04:51 ubuntu kubelet[11910]: E0115 12:04:51.614925 11910 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message
Jan 15 12:04:52 ubuntu kubelet[11910]: W0115 12:04:52.372164 11910 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 15 12:04:56 ubuntu kubelet[11910]: E0115 12:04:56.616201 11910 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message
Jan 15 12:04:57 ubuntu kubelet[11910]: W0115 12:04:57.372364 11910 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 15 12:05:01 ubuntu kubelet[11910]: E0115 12:05:01.617916 11910 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message
Jan 15 12:05:02 ubuntu kubelet[11910]: W0115 12:05:02.372698 11910 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
</code></pre>
<p>Any help is much appreciated.</p>
| HemOdd | <p>I replicated your setup by creating a 3-node cluster (1 master and 2 workers) and deploying Consul with Helm, and I saw the same thing as you: all pods were running except one that stayed pending.</p>
<p>In the StatefulSet object you can see there is a <em>podAntiAffinity</em> rule which disallows scheduling 2 or more server pods on the same node. This is why you see one pod in the Pending state.</p>
<p>There are 4 ways I can think of to make it work.</p>
<ol>
<li><p>Master node has a taint: <code>node-role.kubernetes.io/master:NoSchedule</code> which disallows scheduling any pods on master node. You can delete this taint by running: <code>kubectl taint node kind-control-plane node-role.kubernetes.io/master:NoSchedule-</code> (notice minus sign, it tells k8s to delete the taint) so now scheduler will be able to schedule the one <em>consul-server</em> pod that's left to this node.</p></li>
<li><p>You can add one more worker node.</p></li>
<li><p>You can remove <em>podAntiAffinity</em> from <em>consul-server</em> statfulset object so scheduler won't care where pods get scheduled.</p></li>
<li><p>Change <code>requiredDuringSchedulingIgnoredDuringExecution</code> to <code>preferredDuringSchedulingIgnoredDuringExecution</code> so this affinity rule does not have to be fulfilled, it is only <em>preferred</em> (see the sketch after this list).</p></li>
</ol>
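<p>For option 4, here is a minimal sketch of what the <code>affinity</code> section of the <em>consul-server</em> StatefulSet could look like after the change. The labels are taken from the pod description above (app=consul, component=server); the weight value is only an assumption:</p>
<pre><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            app: consul
            component: server
</code></pre>
<p>With this, the scheduler still tries to spread the server pods across nodes, but it no longer refuses to place two of them on the same node when there is nowhere else to go.</p>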
<p>Let me know if it helped.</p>
| Matt |
<p>I have read about k8s resource management but things are still not very clear to me. Let's say we have 2 k8s nodes, each with 22mb of memory.
Let's say Pod A has a request of 10mb and a limit of 15mb (but its actual usage is 5mb), so this pod is scheduled on node 1. So node 1 has 22mb of memory, 5mb is used by Pod A, but another 17mb is available if more memory is needed by Pod A.
Pod B has a request of 10mb and a limit of 15mb (basically the same as Pod A), so this pod is scheduled on node 2.</p>
<p>So both nodes have 5mb of usage out of 22mb. If Pod C has a request of 5mb and a limit of 10mb, will this pod be scheduled on any of the nodes? If yes, what would happen if Pod C needs 10mb of memory and the other pod needs 15mb of memory?</p>
<p>What would happen if Pod C has a request of 13mb and a limit of 15mb? In this case 13(request of pod C) + 10(request of pod A) will be 23(more than 22)?</p>
<p>Does k8s try to make sure that requests of all pods < available memory && limits of all pods < available memory ?</p>
| Fatih Arslan | <p>Answering the questions from the post:</p>
<blockquote>
<p>Let's say Pod A has a request of 10mb and a limit of 15mb (but its actual usage is 5mb), so this pod is scheduled on node 1. So node 1 has 22mb of memory, 5mb is used by Pod A, but another 17mb is available if more memory is needed by Pod A. Pod B has a request of 10mb and a limit of 15mb (basically the same as Pod A), so this pod is scheduled on node 2.</p>
</blockquote>
<p>This is not really a question, but I think this part needs some perspective on how <code>Pods</code> are scheduled onto the nodes. The component responsible for telling Kubernetes where a <code>Pod</code> should be scheduled is the <code>kube-scheduler</code>. It <strong>could</strong> indeed come to the situation you describe:</p>
<ul>
<li><code>Pod A, req:10M, limit: 15M</code> -> <code>Node 1, mem: 22MB</code></li>
<li><code>Pod B req:10M, limit: 15M</code> -> <code>Node 2, mem: 22MB</code></li>
</ul>
<p>Citing the official documentation:</p>
<blockquote>
<h3>Node selection in kube-scheduler</h3>
<p><code>kube-scheduler</code> selects a node for the pod in a 2-step operation:</p>
<ul>
<li>Filtering</li>
<li>Scoring</li>
</ul>
<p>The filtering step finds the set of Nodes where it's feasible to schedule the Pod. For example, the PodFitsResources filter checks whether a candidate Node has enough available resource to meet a Pod's specific resource requests. After this step, the node list contains any suitable Nodes; often, there will be more than one. If the list is empty, that Pod isn't (yet) schedulable.</p>
<p>In the scoring step, the scheduler ranks the remaining nodes to choose the most suitable Pod placement. The scheduler assigns a score to each Node that survived filtering, basing this score on the active scoring rules.</p>
<p><strong>Finally, kube-scheduler assigns the Pod to the Node with the highest ranking.</strong> If there is more than one node with equal scores, kube-scheduler selects one of these at random.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Scheduling eviction: Kube sheduler: Implementation</a></em></p>
</blockquote>
<hr />
<blockquote>
<p>So both nodes have 5mb of usage out of 22mb. If Pod C has a request of 5mb and a limit of 10mb, will this pod be scheduled on any of the nodes? If yes, what would happen if Pod C needs 10mb of memory and the other pod needs 15mb of memory?</p>
</blockquote>
<p>In this particular example I would focus much more on the <strong>request</strong> part rather than on the actual usage. Assuming that there is no other factor that would deny the scheduling, <code>Pod C</code> should be spawned on one of the nodes (whichever one the <code>kube-scheduler</code> chooses). Why is that?</p>
<ul>
<li><code>resources.limits</code> will <strong>not deny</strong> the scheduling of the <code>Pod</code> (the limit can be higher than the node's allocatable memory)</li>
<li><code>resources.requests</code> will <strong>deny</strong> the scheduling of the <code>Pod</code> (the sum of requests cannot be higher than the node's allocatable memory; see the example manifest after this list)</li>
</ul>
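<p>To illustrate the two bullet points above, here is a minimal <code>Pod</code> manifest (the name and image are placeholders) using the numbers from the question. Only <code>requests.memory</code> is taken into account when the scheduler decides whether the <code>Pod</code> fits on a node; the limit is enforced later, at runtime:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-c           # placeholder name for "Pod C"
spec:
  containers:
  - name: app
    image: nginx        # placeholder image
    resources:
      requests:
        memory: "5Mi"   # checked by kube-scheduler against the node's allocatable memory
      limits:
        memory: "10Mi"  # enforced at runtime; exceeding it gets the container OOM-killed
</code></pre>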
<p>I encourage you to check following articles to get more reference:</p>
<ul>
<li><em><a href="https://sysdig.com/blog/kubernetes-limits-requests/" rel="nofollow noreferrer">Sysdig.com: Blog: Kubernetes limits requests</a></em></li>
<li><em><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">Cloud.google.com: Blog: Products: Containers Kubernetes: Kubernetes best practices: </a></em> (this is <code>GKE</code> blog but it should give the baseline idea, see the part on: "The lifecycle of a Kubernetes Pod" section)</li>
</ul>
<hr />
<blockquote>
<p>What would happen if Pod C has a request of 13mb and a limit of 15mb? In this case 13(request of pod C) + 10(request of pod A) will be 23(more than 22)?</p>
</blockquote>
<p>In that example the <code>Pod</code> <strong>will not</strong> be scheduled as the sum of requests > memory (assuming no <code>Pod</code> priority). The <code>Pod</code> will be in <code>Pending</code> state.</p>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#what-s-next" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Scheduling eviction: Kube sheduler: What's next</a></em></li>
<li><em><a href="https://www.youtube.com/watch?v=eDkE4WNWKUc" rel="nofollow noreferrer">Youtube.com: Deep Dive Into the Latest Kubernetes Scheduler Features
</a></em> - from <a href="https://www.cncf.io/" rel="nofollow noreferrer">CNCF [Cloud Native Computing Foundation]</a> conference</li>
</ul>
| Dawid Kruk |
<p>I wanted to expose my application on all the ports between 3000-14000. For this, I have changed the default nodePort range 30k - 32767 by updating kubeapi-server manifest file field "--service-node-port-range=3000-14000". For me, it is almost impossible to expose by specifying all the port numbers manually following the below template.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
type: nodePort
selector:
app: myapp
ports:
- targetPort: 3000
port: 3000
nodePort: 3000
...................
...................
- targetPort: 14000
port: 14000
nodePort: 14000
</code></pre>
<p>Do we have any other alternative solution/plugins to this, so as to open all ports between the range 3k - 14k.</p>
| vidyadhar reddy | <p>Unfortunately Kubernetes doesn't yet support <a href="https://github.com/kubernetes/kubernetes/issues/23864" rel="nofollow noreferrer">exposing a range of ports</a> (it's possible in <a href="https://stackoverflow.com/questions/28717464/docker-expose-all-ports-or-range-of-ports-from-7000-to-8000">Docker</a>).</p>
<p>As a workaround, use <a href="https://helm.sh/docs/topics/chart_best_practices/templates/" rel="nofollow noreferrer">Helm templates</a> to create a chart with a Service template and the ports defined in the values.yaml file, or create a script that automates the creation of a Service yaml exposing each port.</p>
| kool |
<p>Maybe I am missing something but am I right in thinking that if I want to update a webapi that is hosted in my kubernetes cluster in Azure I have to delete and recreate it?</p>
<p>The instructions I am seeing online seem to show deleting the cluster rather than upgrading the api</p>
<p>Paul</p>
| Paul | <p>If it's your web API application, then it's only a deployment image update; you don't need to delete and recreate the whole cluster.</p>
<hr />
<p>There is an example of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">updating a nginx deployment</a> with new image:</p>
<pre><code>kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
</code></pre>
<p>This way it will first create new pods with the newer version of the image and, once they are successfully deployed, it will terminate the old pods.</p>
<p>You can check status of the update by running</p>
<pre><code>kubectl rollout status deployment.v1.apps/nginx-deployment
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources</a></li>
<li><a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/</a></li>
<li><a href="https://www.bluematador.com/blog/kubernetes-deployments-rolling-update-configuration" rel="nofollow noreferrer">https://www.bluematador.com/blog/kubernetes-deployments-rolling-update-configuration</a></li>
</ul>
| Jakub |
<p>I have an application deployed to Kubernetes that depends on an outside application. Sometimes the connection between these 2 goes to an invalid state, and that can only be fixed by restarting my application.</p>
<p>To do automatic restarts, I have configured a liveness probe that will verify the connection.</p>
<p>This has been working great, however, I'm afraid that if that outside application goes down (such that the connection error isn't just due to an invalid pod state), all of my pods will immediately restart, and my application will become completely unavailable. I want it to remain running so that functionality not depending on the bad service can continue.</p>
<p>I'm wondering if a pod disruption budget would prevent this scenario, as it limits the # of pods down due to a "voluntary" disruption. However, the K8s docs don't state whether liveness probe failures are a voluntary disruption. Are they?</p>
| roim | <p>I would say, accordingly to the documentation:</p>
<blockquote>
<h3>Voluntary and involuntary disruptions</h3>
<p>Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error.</p>
<p>We call these unavoidable cases <em>involuntary disruptions</em> to an application. Examples are:</p>
<ul>
<li>a hardware failure of the physical machine backing the node</li>
<li>cluster administrator deletes VM (instance) by mistake</li>
<li>cloud provider or hypervisor failure makes VM disappear</li>
<li>a kernel panic</li>
<li>the node disappears from the cluster due to cluster network partition</li>
<li>eviction of a pod due to the node being <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">out-of-resources</a>.</li>
</ul>
<p>Except for the out-of-resources condition, all these conditions should be familiar to most users; they are not specific to Kubernetes.</p>
<p>We call other cases <em>voluntary disruptions</em>. These include both actions initiated by the application owner and those initiated by a Cluster Administrator. Typical application owner actions include:</p>
<ul>
<li>deleting the deployment or other controller that manages the pod</li>
<li>updating a deployment's pod template causing a restart</li>
<li>directly deleting a pod (e.g. by accident)</li>
</ul>
<p>Cluster administrator actions include:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">Draining a node</a> for repair or upgrade.</li>
<li>Draining a node from a cluster to scale the cluster down (learn about <a href="https://github.com/kubernetes/autoscaler/#readme" rel="nofollow noreferrer">Cluster Autoscaling</a> ).</li>
<li>Removing a pod from a node to permit something else to fit on that node.</li>
</ul>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Disruptions</a></em></p>
</blockquote>
<p><strong>So your example is quite different and, to my knowledge, it's neither a voluntary nor an involuntary disruption.</strong></p>
<hr />
<p>Also taking a look on another Kubernetes documentation:</p>
<blockquote>
<h3>Pod lifetime</h3>
<p>Like individual application containers, Pods are considered to be relatively ephemeral (rather than durable) entities. Pods are created, assigned a unique ID (<a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids" rel="nofollow noreferrer">UID</a>), and scheduled to nodes where they remain until termination (according to restart policy) or deletion. If a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node</a> dies, the Pods scheduled to that node are <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection" rel="nofollow noreferrer">scheduled for deletion</a> after a timeout period.</p>
<p>Pods do not, by themselves, self-heal. If a Pod is scheduled to a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">node</a> that then fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">controller</a>, that handles the work of managing the relatively disposable Pod instances.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Pod lifecycle: Pod lifetime</a></em></p>
</blockquote>
<blockquote>
<h3>Container probes</h3>
<p>The kubelet can optionally perform and react to three kinds of probes on running containers (focusing on a <code>livenessProbe</code>):</p>
<ul>
<li><code>livenessProbe</code>: Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restart policy</a>. If a Container does not provide a liveness probe, the default state is <code>Success</code>.</li>
</ul>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Pod lifecycle: Container probes</a></em></p>
<h3>When should you use a liveness probe?</h3>
<p>If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod's <code>restartPolicy</code>.</p>
<p>If you'd like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a <code>restartPolicy</code> of Always or OnFailure.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Pod lifecycle: When should you use a startup probe</a></em></p>
</blockquote>
<p>According to this information, it would be better to create a custom liveness probe which treats internal process health checks separately from the external dependency (liveness) health check. In the first scenario your container should stop/terminate the process, unlike in the second case with the external dependency.</p>
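<p>A minimal sketch of that idea (the port and endpoint paths are assumptions about your application, not something Kubernetes provides out of the box): keep the liveness probe limited to internal health, and surface the external-dependency status somewhere that does not trigger restarts, shown here as a readiness probe purely as an illustration:</p>
<pre><code>containers:
- name: app
  image: my-app:latest        # placeholder
  livenessProbe:              # restarts the container only on internal failures
    httpGet:
      path: /healthz/live     # assumed endpoint that ignores the external application
      port: 8080
    periodSeconds: 10
    failureThreshold: 3
  readinessProbe:             # only removes the Pod from Service endpoints, no restart
    httpGet:
      path: /healthz/ready    # assumed endpoint that may check the external application
      port: 8080
    periodSeconds: 10
</code></pre>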
<p>Answering following question:</p>
<blockquote>
<p>I'm wondering if a pod disruption budget would prevent this scenario.</p>
</blockquote>
<p><strong>In this particular scenario PDB will not help.</strong></p>
<hr />
<p>I reckon giving more visibility to the comment I've made, with additional resources on the matter, could prove useful to other community members:</p>
<ul>
<li><em><a href="https://blog.risingstack.com/designing-microservices-architecture-for-failure/" rel="nofollow noreferrer">Blog.risingstack.com: Designing microservices architecture for failure</a></em></li>
<li><em><a href="https://loft.sh/blog/kubernetes-readiness-probes-examples-common-pitfalls/#external-dependencies" rel="nofollow noreferrer">Loft.sh: Blog: Kubernetes readiness probles examples common pitfalls: External depenedencies</a></em></li>
<li><em><a href="https://cloud.google.com/architecture/scalable-and-resilient-apps#resilience_designing_to_withstand_failures" rel="nofollow noreferrer">Cloud.google.com: Archiecture: Scalable and resilient apps: Resilience designing to withstand failures</a></em></li>
</ul>
| Dawid Kruk |
<p>Can Kubernetes pod that uses the host network send requests directly to a Service resource by using the service name and service port (incl. utilization of CoreDNS)? Or do I have to expose the service via nodePort on the host network?</p>
| Richard | <p>If you want a host-network pod to send requests directly to a Service resource by name, you have to change the pod's <code>dnsPolicy</code> to <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer"><code>ClusterFirstWithHostNet</code></a>. It should be set explicitly for pods running with <code>hostNetwork: true</code>. This way the pod will use the cluster DNS while staying on the host network, so you don't need to expose the Service via a NodePort.</p>
<p>You can verify it in the pod's <code>/etc/resolv.conf</code> file. When a host-network pod uses <code>dnsPolicy: ClusterFirst</code>, this file is inherited from the host and points to the host's DNS resolver.
When you set <code>dnsPolicy: ClusterFirstWithHostNet</code>, the resolver is changed to the cluster's DNS.</p>
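<p>A minimal example of such a pod (the image and the target Service name are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: host-network-pod
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # needed so Service names resolve via cluster DNS
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "wget -qO- http://my-service.default.svc.cluster.local:80 && sleep 3600"]
</code></pre>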
| kool |
<p>I have a Windows 10 machine running a Hyper-V virtual machine with Ubuntu guest.
On Ubuntu, there is a Microk8s installation for my single node Kubernetes cluster.</p>
<p>I can't figure out how to set up my kubectl on the Win10 to allow deploying on the microk8s cluster in the vm.</p>
<p>Atm, from outside, I can ssh into the vm and I can reach the dashboard-proxy for microk8s (in this case locally https://ubuntuk8s:10443 ).</p>
<p>How to configure <strong>kubectl on my windows</strong> to deploy to <strong>microk8s inside the vm</strong>?</p>
| babbyldockr | <p>In short, you can copy the file that <code>kubectl</code> on the <code>microk8s</code> instance is using (the <code>kubeconfig</code>) to the Windows machine in order to connect to your <code>microk8s</code> instance.</p>
<hr />
<p>As you already have full connectivity between the Windows machine and the Ubuntu that <code>microk8s</code> is configured on, you will need to:</p>
<ul>
<li>Install <code>kubectl</code> on Windows machine</li>
<li>Copy and modify the <code>kubeconfig</code> file on your Windows machine</li>
<li>Test the connection</li>
</ul>
<hr />
<h3>Install <code>kubectl</code></h3>
<p>You can refer to the official documentation on that matter by following this link:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Tools: Install kubectl Windows</a></em></li>
</ul>
<hr />
<h3>Copy and modify the <code>kubeconfig</code> file on your Windows machine</h3>
<blockquote>
<p><strong>IMPORTANT!</strong></p>
<p>After a bit of research, I've found easier way for this step.</p>
<p>You will need to run <strong><code>microk8s.config</code></strong> command on your <code>microk8s</code> host. It will give you the necessary config to connect to your instance and you won't need to edit any fields. You will just need to use it with <code>kubectl</code> on Windows machine.</p>
<p>I've left below part to give one of the options on how to check where the config file is located</p>
</blockquote>
<p>This will depend on the tools you are using, but the main idea behind it is to log into your Ubuntu machine, check where the <code>kubeconfig</code> is located, copy it to the Windows machine and edit it to point to the IP address of your <code>microk8s</code> instance (not <code>127.0.0.1</code>).</p>
<p>If you can <code>SSH</code> to the <code>VM</code> you can run following command to check where the config file is located (I would consider this more of a workaround solution):</p>
<ul>
<li><code>microk8s kubectl get pods -v=6</code></li>
</ul>
<pre><code>I0506 09:10:14.285917 104183 loader.go:379] Config loaded from file: /var/snap/microk8s/2143/credentials/client.config
I0506 09:10:14.294968 104183 round_trippers.go:445] GET https://127.0.0.1:16443/api/v1/namespaces/default/pods?limit=500 200 OK in 5 milliseconds
No resources found in default namespace.
</code></pre>
<p>As you can see in this example, the file is in (it could be different in your setup):</p>
<ul>
<li><code>/var/snap/microk8s/2143/credentials/client.config</code></li>
</ul>
<p>You'll need to copy this file to your Windows machine either via <code>scp</code> or other means.</p>
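<p>For example, if OpenSSH is available on the Windows machine (the username and the destination path below are only assumptions), the copy could look like this:</p>
<pre><code>scp ubuntu@ubuntuk8s:/var/snap/microk8s/2143/credentials/client.config C:\Users\XYZ\Desktop\client.config
</code></pre>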
<p>After it's copied, you'll need to edit this file to change the field responsible for specifying where the <code>kubectl</code> should connect to:</p>
<ul>
<li>before:
<ul>
<li><code> server: https://127.0.0.1:16443</code></li>
</ul>
</li>
<li>after:
<ul>
<li><code> server: https://ubuntuk8s:16443</code></li>
</ul>
</li>
</ul>
<hr />
<h3>Test the connection</h3>
<p>One of the ways to check if you can connect from your Windows machine to <code>microk8s</code> instance is following:</p>
<pre><code>PS C:\Users\XYZ\Desktop> kubectl --kubeconfig=client.config get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu Ready <none> 101m v1.20.6-34+e4abae43f6acde 192.168.0.116 <none> Ubuntu 20.04.2 LTS 5.4.0-65-generic containerd://1.3.7
</code></pre>
<p>Where:</p>
<ul>
<li><code>--kubeconfig=client.config</code> is specifying the file that you've downloaded</li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Configuration: Organize cluster access kubeconfig</a></em></li>
<li><em><a href="https://github.com/ubuntu/microk8s/issues/421#issuecomment-481248192" rel="nofollow noreferrer">Github.com: Ubuntu: Microk8s: Issues: Cannot connect to microk8s from another machine</a></em> - this link pointed me to <code>microk8s.config</code></li>
</ul>
| Dawid Kruk |
<p>Is there a way to get the current kubernetes context at runtime to prevent calling <code>pulumi up</code> with the incorrect context? I'd specifically like to make sure my local kubernetes deployment logic doesn't end up in anything other than a local cluster.</p>
<p>I've gone through the examples and don't see anything that does this: <a href="https://github.com/pulumi/examples/search?l=TypeScript&p=2&q=context&type=" rel="nofollow noreferrer">https://github.com/pulumi/examples/search?l=TypeScript&p=2&q=context&type=</a> (maybe I'm thinking about my problem the wrong way).</p>
| Paymahn Moghadasian | <p>As explained <a href="https://www.pulumi.com/docs/intro/cloud-providers/kubernetes/setup/" rel="nofollow noreferrer">here</a>, first you have to create a context for your cluster that will be used, for example:</p>
<pre><code>kubectl config \
set-context <my-context> \
--cluster=<my-cluster> \
--user=<my-user>
</code></pre>
<p>Then run <code>pulumi stack init new-kube-stack</code>, where you will be asked to enter your access token, and finally run <code>pulumi config set kubernetes:context my-context</code> to work with the cluster defined in the previously created context.</p>
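<p>If you additionally want a guard at deploy time, one option (this is just a wrapper-script idea, not a Pulumi feature) is to compare the active kubectl context with the one you expect before running <code>pulumi up</code>:</p>
<pre><code>#!/bin/sh
# refuse to deploy when the active kubectl context is not the expected one
EXPECTED_CONTEXT="my-context"                        # the context created above
CURRENT_CONTEXT="$(kubectl config current-context)"
if [ "$CURRENT_CONTEXT" != "$EXPECTED_CONTEXT" ]; then
  echo "Refusing to deploy: current context is '$CURRENT_CONTEXT'" >&2
  exit 1
fi
pulumi up
</code></pre>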
| kool |
<p>Can I make an existing Istio open source installable compatible with the (Istioctl + Operator) ? I currently have Istio 1.4.3 installed via istioctl .. and need to make existing deployment Istio operator aware as well before I upgrade to Istio 1.5.6+ . Any specific steps to be followed here ?</p>
| Avi | <p>There shouldn't be any problem with that; I have tried it on my test cluster and everything worked just fine.</p>
<p>I had a problem with upgrading immediately from 1.4.3 to 1.5.6, so with the steps below you first upgrade from 1.4.3 to 1.5.0, and then from 1.5.0 to 1.5.6.</p>
<p>Take a look at below steps to follow.</p>
<hr />
<p>1.Follow istio <a href="https://istio.io/latest/docs/setup/getting-started/#download" rel="nofollow noreferrer">documentation</a> and install istioctl 1.4, 1.5 and 1.5.6 with:</p>
<pre><code>curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.0 sh -
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.6 sh -
</code></pre>
<p>2.Add the istioctl 1.4 to your path</p>
<pre><code>cd istio-1.4.0
export PATH=$PWD/bin:$PATH
</code></pre>
<p>3.Install istio 1.4</p>
<pre><code>istioctl manifest apply --set profile=demo
</code></pre>
<p>4.Check if everything works correctly.</p>
<pre><code>kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
</code></pre>
<p>5.Add the istioctl 1.5 to your path</p>
<pre><code>cd istio-1.5.0
export PATH=$PWD/bin:$PATH
</code></pre>
<p>6.Install <a href="https://istio.io/latest/blog/2019/introducing-istio-operator/" rel="nofollow noreferrer">istio operator</a> for future upgrade.</p>
<pre><code>istioctl operator init
</code></pre>
<p>7.Prepare IstioOperator.yaml</p>
<pre><code>nano IstioOperator.yaml
</code></pre>
<hr />
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: example-istiocontrolplane
spec:
profile: demo
tag: 1.5.0
</code></pre>
<p>8.Before the upgrade use below commands</p>
<pre><code>kubectl -n istio-system delete service/istio-galley deployment.apps/istio-galley
kubectl delete validatingwebhookconfiguration.admissionregistration.k8s.io/istio-galley
</code></pre>
<p>9.Upgrade from 1.4 to 1.5 with istioctl upgrade and prepared IstioOperator.yaml</p>
<pre><code>istioctl upgrade -f IstioOperator.yaml
</code></pre>
<p>10.After the upgrade use below commands</p>
<pre><code>kubectl -n istio-system delete deployment istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete service istio-citadel istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete horizontalpodautoscaler.autoscaling/istio-pilot horizontalpodautoscaler.autoscaling/istio-telemetry
kubectl -n istio-system delete pdb istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete deployment istiocoredns
kubectl -n istio-system delete service istiocoredns
</code></pre>
<p>11.Check if everything works correctly.</p>
<pre><code>kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
</code></pre>
<p>12.Change istio IstioOperator.yaml tag value</p>
<pre><code>nano IstioOperator.yaml
</code></pre>
<hr />
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: example-istiocontrolplane
spec:
profile: demo
tag: 1.5.6 <---
</code></pre>
<p>13.Upgrade from 1.5 to 1.5.6 with istioctl upgrade and prepared IstioOperator.yaml</p>
<pre><code>istioctl upgrade -f IstioOperator.yaml
</code></pre>
<p>14.Add the istioctl 1.5.6 to your path</p>
<pre><code>cd istio-1.5.6
export PATH=$PWD/bin:$PATH
</code></pre>
<p>15.I have deployed the bookinfo app to check if everything works correctly.</p>
<pre><code>kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
</code></pre>
<p>16.Results</p>
<pre><code>curl -v xx.xx.xxx.xxx/productpage | grep HTTP
HTTP/1.1 200 OK
istioctl version
client version: 1.5.6
control plane version: 1.5.6
data plane version: 1.5.6 (9 proxies)
</code></pre>
<hr />
<p>Let me know if have any more questions.</p>
| Jakub |
<p>Is there a recommended minimum size (or minimum number of inodes) for the <code>/tmp</code> file system (partition) on Kubernetes cluster nodes?</p>
<p>What I am experiencing (on a bare-metal Kubernetes 1.16.3 cluster) is that cluster nodes hit 100% inodes used (according to <code>df -i</code>). This has some negative effects (as one would expect; e.g. <code>kubectl exec ... bash</code> into pods on concerned nodes leads to "no space left on device") but <code>kubectl get nodes</code> (strangely) still reports these nodes as "Ready". The <code>/tmp</code> file systems involved are relatively small, i.e. 2G (121920 inodes) and 524M (35120 inodes) respectively.</p>
| rookie099 | <p>There is no recommended minimum size for Kubernetes. The defaults are good for most cases, but if you are creating e.g. a lot of small or empty files you might eventually run out of inodes. If you need more, you have to adjust their number manually when the filesystem is created.</p>
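<p>For example (the device name is a placeholder), you can check inode usage with <code>df -i</code>; for ext4 the inode count can only be chosen when the filesystem is created:</p>
<pre><code># check inode usage per filesystem
df -i

# ext4: choose the number of inodes at creation time (this wipes the filesystem!)
mkfs.ext4 -N 1000000 /dev/sdX1
</code></pre>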
| Matt |
<p>I'm running a private GKE cluster with Cloud NAT. Cloud NAT has multiple static ip addresses assigned. Is it possible for me to tell a pod to always use one of those ip addresses as source ip?</p>
| David | <p>Answering the question:</p>
<blockquote>
<p>Is it possible in GKE to assign a specific outbound IP Address to a Pod when using Cloud NAT with multiple static IP addresses?</p>
</blockquote>
<p><strong>In short, no.</strong> There is no option to modify the <code>Cloud NAT</code> (and <code>Cloud Router</code>) in a way that would make specific <code>Pods</code> have a specific source IP address when connecting to external destinations.</p>
<p>You could also consider creating a Feature Request for it at:</p>
<ul>
<li><em><a href="https://issuetracker.google.com/issues/new?component=187077&template=1162696" rel="nofollow noreferrer">Issuetracker.google.com: Issues: GKE</a></em></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://cloud.google.com/nat/docs/overview#NATwithGKE" rel="nofollow noreferrer">Cloud.google.com: NAT: Docs: Overview: NAT with GKE</a></em></li>
<li><em><a href="https://cloud.google.com/nat/docs/gke-example" rel="nofollow noreferrer">Cloud.google.com: NAT: Docs: GKE example</a></em></li>
</ul>
| Dawid Kruk |
<p>I try to run ReportPortal in my minikube:</p>
<pre><code># Delete stuff from last try
minikube delete
minikube start --driver=docker
minikube addons enable ingress
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
mv v5 reportportal/
helm dependency update
helm install . --generate-name
→ Error: failed pre-install: warning: Hook pre-install reportportal/templates/migrations-job.yaml failed: Job.batch "chart-1601647169-reportportal-migrations" is invalid: spec.template.spec.containers[0].env[4].valueFrom.secretKeyRef.name: Invalid value: "": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
</code></pre>
<p>Here is the chart: <a href="https://github.com/reportportal/kubernetes/tree/master/reportportal/v5" rel="nofollow noreferrer">https://github.com/reportportal/kubernetes/tree/master/reportportal/v5</a></p>
<p>What could be wrong?</p>
| guettli | <p>As mentioned <a href="https://github.com/reportportal/kubernetes/tree/master/reportportal/v5" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>Before you deploy ReportPortal you should have installed all its requirements. Their versions are described in <a href="https://github.com/reportportal/kubernetes/blob/master/reportportal/v5/requirements.yaml" rel="nofollow noreferrer">requirements.yaml</a>
You should also specify correct PostgreSQL and RabbitMQ addresses and ports in <a href="https://github.com/reportportal/kubernetes/blob/master/reportportal/v5/values.yaml" rel="nofollow noreferrer">values.yaml</a></p>
</blockquote>
<pre><code>rabbitmq:
SecretName: ""
installdep:
enable: false
endpoint:
address: <rabbitmq_chart_name>-rabbitmq-ha.default.svc.cluster.local
port: 5672
user: rabbitmq
apiport: 15672
apiuser: rabbitmq
postgresql:
SecretName: ""
installdep:
enable: false
endpoint:
cloudservice: false
address: <postgresql_chart_name>-postgresql.default.svc.cluster.local
port: 5432
user: rpuser
dbName: reportportal
password:
</code></pre>
<hr />
<p>I checked <a href="https://github.com/reportportal/kubernetes/blob/master/reportportal/v5/templates/migrations-job.yaml#L37-L40" rel="nofollow noreferrer">here</a> and it points to postgresql secret name in <a href="https://github.com/reportportal/kubernetes/blob/master/reportportal/v5/values.yaml#L133" rel="nofollow noreferrer">values.yaml</a>.</p>
<p>The solution here would be to change that from <code>""</code> to your <code>postgresql secret name</code> and install it again. You can change it in your values.yaml or with <a href="https://helm.sh/docs/intro/using_helm/" rel="nofollow noreferrer">--set</a>, which specifies overrides on the command line.</p>
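<p>For example (the secret name is a placeholder for whatever your PostgreSQL secret is actually called):</p>
<pre><code>helm install . --generate-name \
  --set postgresql.SecretName=<your-postgresql-secret-name>
</code></pre>
<p>The same applies to <code>rabbitmq.SecretName</code> if your RabbitMQ password is also stored in a secret.</p>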
| Jakub |
<p>I am currently using bitnami/kafka image(<a href="https://hub.docker.com/r/bitnami/kafka" rel="nofollow noreferrer">https://hub.docker.com/r/bitnami/kafka</a>) and deploying it on kubernetes. </p>
<ul>
<li>kubernetes master: 1</li>
<li>kubernetes workers: 3</li>
</ul>
<p>Within the cluster the other applications are able to find Kafka. The problem occurs when trying to access the Kafka container from outside the cluster. From a bit of reading I learned that we need to set the property "advertised.listeners=PLAINTEXT://hostname:port_number" for external Kafka clients.</p>
<p>I am currently referencing "<a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/kafka</a>". Inside my values.yaml file I have added </p>
<p><strong>values.yaml</strong></p>
<ul>
<li>advertisedListeners1: 10.21.0.191</li>
</ul>
<p>and <strong>statefulset.yaml</strong></p>
<pre><code> - name: KAFKA_CFG_ADVERTISED_LISTENERS
value: 'PLAINTEXT://{{ .Values.advertisedListeners }}:9092'
</code></pre>
<p><strong>For a single kafka instance it is working fine.</strong></p>
<p>But for 3 node kafka cluster, I changed some configuration like below:
<strong>values.yaml</strong></p>
<ul>
<li>advertisedListeners1: 10.21.0.191 </li>
<li>advertisedListeners2: 10.21.0.192</li>
<li>advertisedListeners3: 10.21.0.193</li>
</ul>
<p>and <strong>Statefulset.yaml</strong></p>
<pre><code> - name: KAFKA_CFG_ADVERTISED_LISTENERS
{{- if $MY_POD_NAME := "kafka-0" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
{{- else if $MY_POD_NAME := "kafka-1" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners2 }}:9092'
{{- else if $MY_POD_NAME := "kafka-2" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners3 }}:9092'
{{- end }}
</code></pre>
<p>The expected result is that all 3 kafka instances should get the advertised.listeners property set to the worker nodes' IP addresses.</p>
<p>example:</p>
<ul>
<li><p>kafka-0 --> "PLAINTEXT://10.21.0.191:9092"</p></li>
<li><p>kafka-1 --> "PLAINTEXT://10.21.0.192:9092"</p></li>
<li><p>kafka-3 --> "PLAINTEXT://10.21.0.193:9092"</p></li>
</ul>
<p>Currently only one kafka pod is up and running, and the other two are going into CrashLoopBackOff state.</p>
<p>and the other two pods are showing the following error:</p>
<pre><code>[2019-10-20 13:09:37,753] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-10-20 13:09:37,786] ERROR [KafkaServer id=1002] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: requirement failed: Configured end points 10.21.0.191:9092 in advertised listeners are already registered by broker 1001
    at scala.Predef$.require(Predef.scala:224)
    at kafka.server.KafkaServer$$anonfun$createBrokerInfo$2.apply(KafkaServer.scala:399)
    at kafka.server.KafkaServer$$anonfun$createBrokerInfo$2.apply(KafkaServer.scala:397)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:397)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:261)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:84)
    at kafka.Kafka.main(Kafka.scala)
</code></pre>
<p>That means the logic applied in statefulset.yaml is not working.
Can anyone help me resolve this?</p>
<p>Any help would be appreciated.</p>
<p>The output of <code>kubectl get statefulset kafka -o yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: "2019-10-29T07:04:12Z"
generation: 1
labels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-6.0.1
name: kafka
namespace: default
resourceVersion: "12189730"
selfLink: /apis/apps/v1/namespaces/default/statefulsets/kafka
uid: d40cfd5f-46a6-49d0-a9d3-e3a851356063
spec:
podManagementPolicy: Parallel
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/name: kafka
serviceName: kafka-headless
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: kafka
app.kubernetes.io/instance: kafka
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: kafka
helm.sh/chart: kafka-6.0.1
name: kafka
spec:
containers:
- env:
- name: MY_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: KAFKA_CFG_ZOOKEEPER_CONNECT
value: kafka-zookeeper
- name: KAFKA_PORT_NUMBER
value: "9092"
- name: KAFKA_CFG_LISTENERS
value: PLAINTEXT://:$(KAFKA_PORT_NUMBER)
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: PLAINTEXT://10.21.0.191:9092
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
- name: KAFKA_CFG_BROKER_ID
value: "-1"
- name: KAFKA_CFG_DELETE_TOPIC_ENABLE
value: "false"
- name: KAFKA_HEAP_OPTS
value: -Xmx1024m -Xms1024m
- name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MESSAGES
value: "10000"
- name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MS
value: "1000"
- name: KAFKA_CFG_LOG_RETENTION_BYTES
value: "1073741824"
- name: KAFKA_CFG_LOG_RETENTION_CHECK_INTERVALS_MS
value: "300000"
- name: KAFKA_CFG_LOG_RETENTION_HOURS
value: "168"
- name: KAFKA_CFG_LOG_MESSAGE_FORMAT_VERSION
- name: KAFKA_CFG_MESSAGE_MAX_BYTES
value: "1000012"
- name: KAFKA_CFG_LOG_SEGMENT_BYTES
value: "1073741824"
- name: KAFKA_CFG_LOG_DIRS
value: /bitnami/kafka/data
- name: KAFKA_CFG_DEFAULT_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
value: https
- name: KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR
value: "1"
- name: KAFKA_CFG_NUM_IO_THREADS
value: "8"
- name: KAFKA_CFG_NUM_NETWORK_THREADS
value: "3"
- name: KAFKA_CFG_NUM_PARTITIONS
value: "1"
- name: KAFKA_CFG_NUM_RECOVERY_THREADS_PER_DATA_DIR
value: "1"
- name: KAFKA_CFG_SOCKET_RECEIVE_BUFFER_BYTES
value: "102400"
- name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES
value: "104857600"
- name: KAFKA_CFG_SOCKET_SEND_BUFFER_BYTES
value: "102400"
- name: KAFKA_CFG_ZOOKEEPER_CONNECTION_TIMEOUT_MS
value: "6000"
image: docker.io/bitnami/kafka:2.3.0-debian-9-r88
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 2
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: kafka
timeoutSeconds: 5
name: kafka
ports:
- containerPort: 9092
name: kafka
protocol: TCP
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: kafka
timeoutSeconds: 5
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/kafka
name: data
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
runAsUser: 1001
terminationGracePeriodSeconds: 30
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
status:
collisionCount: 0
currentReplicas: 3
currentRevision: kafka-56ff499d74
observedGeneration: 1
readyReplicas: 1
replicas: 3
updateRevision: kafka-56ff499d74
updatedReplicas: 3
</code></pre>
| Nikhil | <p>I see you have some trouble with passing different environment variables to different pods in a StatefulSet.</p>
<p>You are trying to achieve this using helm templates:</p>
<pre><code>- name: KAFKA_CFG_ADVERTISED_LISTENERS
{{- if $MY_POD_NAME := "kafka-0" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
{{- else if $MY_POD_NAME := "kafka-1" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners2 }}:9092'
{{- else if $MY_POD_NAME := "kafka-2" }}
value: 'PLAINTEXT://{{ .Values.advertisedListeners3 }}:9092'
{{- end }}
</code></pre>
<p>In <a href="https://helm.sh/docs/chart_template_guide/" rel="nofollow noreferrer">helm template guide documentation</a> you can find this explaination:</p>
<blockquote>
<p>In Helm templates, a variable is a named reference to another object.
It follows the form $name. Variables are assigned with a special assignment operator: :=.</p>
</blockquote>
<p>Now let's look at your code:</p>
<pre><code>{{- if $MY_POD_NAME := "kafka-0" }}
</code></pre>
<p>This is a variable assignment, not a comparison. After this assignment, the
<code>if</code> statement evaluates the expression to <code>true</code>, and that's why in your
StatefulSet <code>yaml</code> manifest you see this as the output:</p>
<pre><code>- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: PLAINTEXT://10.21.0.191:9092
</code></pre>
<hr>
<p>To make it work as expected, you can't rely on Helm templating here: templates are rendered before the pods exist, so the pod name is simply not available at that point.</p>
<p>One way to do it would be to create a separate environment variable for every Kafka node
and pass all of these variables to all pods, like this:</p>
<pre><code>- env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: KAFKA_0
value: 10.21.0.191
- name: KAFKA_1
value: 10.21.0.192
- name: KAFKA_2
value: 10.21.0.193
# - name: KAFKA_CFG_ADVERTISED_LISTENERS
# value: PLAINTEXT://$MY_POD_NAME:9092
</code></pre>
<p>and also create your own Docker image with a modified startup script that will export the <code>KAFKA_CFG_ADVERTISED_LISTENERS</code> variable
with the appropriate value depending on <code>MY_POD_NAME</code>.</p>
<p>If you don't want to create your own image, you can create a <code>ConfigMap</code> with a modified <code>entrypoint.sh</code> and mount it
in place of the old <code>entrypoint.sh</code> (you can also use any other file, just take a look <a href="https://github.com/bitnami/bitnami-docker-kafka/tree/master/2/debian-9" rel="nofollow noreferrer">here</a>
for more information on how the Kafka image is built).</p>
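<p>For illustration, a minimal sketch of what such a script could look like (this is an assumption on my side, not the actual Bitnami entrypoint; the paths in the final <code>exec</code> line are placeholders for whatever the original image runs):</p>
<pre><code>#!/bin/bash
# Hypothetical wrapper: derive the broker ordinal from the pod name and
# export the matching advertised listener before starting Kafka.
ORDINAL="${MY_POD_NAME##*-}"        # e.g. kafka-0 -> 0
ADDR_VAR="KAFKA_${ORDINAL}"         # KAFKA_0, KAFKA_1, KAFKA_2 from the env vars above
export KAFKA_CFG_ADVERTISED_LISTENERS="PLAINTEXT://${!ADDR_VAR}:9092"
exec /entrypoint.sh /run.sh         # hand over to the original scripts (placeholder paths)
</code></pre>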
<p>Mounting <code>ConfigMap</code> looks like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: test-container
image: docker.io/bitnami/kafka:2.3.0-debian-9-r88
volumeMounts:
- name: config-volume
mountPath: /entrypoint.sh
subPath: entrypoint.sh
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: kafka-entrypoint-config
defaultMode: 0744 # remember to add proper (executable) permissions
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kafka-entrypoint-config
namespace: default
data:
entrypoint.sh: |
#!/bin/bash
# Here add modified entrypoint script
</code></pre>
<p>Please let me know if it helped.</p>
| Matt |
<p>Summary:</p>
<p>I have a docker container which is running kubectl port-forward, forwarding the port (5432) of a postgres service running as a k8s service to a local port (2223).
In the Dockerfile, I have exposed the relevant port 2223. Then I ran the container by publishing the said port (<code>-p 2223:2223</code>)</p>
<p>Now when I am trying to access the postgres through <code>psql -h localhost -p 2223</code>, I am getting the following error:</p>
<pre><code>psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
</code></pre>
<p>However, when I do <code>docker exec -ti</code> to the said container and run the above psql command, I am able to connect to postgres.</p>
<p>Dockerfile CMD:</p>
<pre><code>EXPOSE 2223
CMD ["bash", "-c", "kubectl -n namespace_test port-forward service/postgres-11-2 2223:5432"]
</code></pre>
<p>Docker Run command:</p>
<pre><code>docker run -it --name=k8s-conn-12 -p 2223:2223 my_image_name:latest
</code></pre>
<p>Output of the docker run command:</p>
<pre><code>Forwarding from 127.0.0.1:2223 -> 5432
</code></pre>
<p>So the port forwarding is successful, and I am able to connect to the postgres instance from inside the docker container. What I am not able to do is to connect from outside the container with the exposed and published port</p>
| Chayan Ghosh | <p>You are missing the following parameter in your <code>$ kubectl port-forward ...</code> command:</p>
<ul>
<li><code>--address 0.0.0.0</code></li>
</ul>
<p>I've reproduced the setup that you've tried to achieve and this was the reason the connection wasn't possible. I've included more explanation below.</p>
<hr />
<h3>Explanation</h3>
<ul>
<li><code>$ kubectl port-forward --help</code></li>
</ul>
<blockquote>
<p>Listen on port 8888 on all addresses, forwarding to 5000 in the pod</p>
<p><code>kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000</code></p>
<hr />
<h3>Options:</h3>
<p><code>--address=[localhost]</code>: Addresses to listen on (comma separated). Only accepts IP addresses or
localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1
and will fail if neither of these addresses are available to bind.</p>
</blockquote>
<p>By default, <code>$ kubectl port-forward</code> will bind to <code>localhost</code>, i.e. <code>127.0.0.1</code>. <strong>In this setup, <code>localhost</code> is internal to the container</strong> and will not be accessible from your host even with the <code>--publish</code> (<code>-p</code>) parameter.</p>
<p>To allow the connections that are not originating from <code>localhost</code> you will need to pass earlier mentioned: <code>--address 0.0.0.0</code>. This will make <code>kubectl</code> listen on all IP addresses and respond to the traffic accordingly.</p>
<p>Your <code>Dockerfile</code> <code>CMD</code> should look similar to:</p>
<pre><code>CMD ["bash", "-c", "kubectl -n namespace_test port-forward --address 0.0.0.0 service/postgres-11-2 2223:5432"]
</code></pre>
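<p>Assuming the image and container naming from the question (these names are just examples), a quick way to verify the fix could be:</p>
<pre><code># rebuild the image with the updated CMD and run it again
docker build -t my_image_name:latest .
docker run -it --name=k8s-conn-13 -p 2223:2223 my_image_name:latest

# in another terminal, from the host this time:
psql -h localhost -p 2223
</code></pre>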
<hr />
<p>Additional reference:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Generated: Kubectl commands</a></em></li>
</ul>
| Dawid Kruk |
<p>My cluster is in AKS with 5 Nodes of size Standard_D4s_v3 and with K8s version 1.14.8.</p>
<p>As soon as a pod is started/restarted it shows Running (kubectl get pods) and up until the pods are in Running state the CPU usage shows 150m or as much as they require.</p>
<p>But when I top it (kubectl top po) after a pod has moved to Running state, the specific pod shows only 1m CPU usage, but Memory usage is where they should be and the service is down as well.</p>
<p>Kubectl logs -f (pod_name) returns nothing but I can ssh into the pods(kubectl exec -it ....)</p>
| Shahnewaz Ul Islam Chowdhury | <p>It's totally normal behavior: a pod needs more CPU while it is being created/started, and once it's up and running it doesn't need that many resources anymore.</p>
<p>You can always set CPU/memory requests and limits; you can find more about it, with examples, <a href="https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-resource-management" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p><strong>Pod CPU/Memory requests</strong> define a set amount of CPU and memory that the pod needs on a regular basis.
When the Kubernetes scheduler tries to place a pod on a node, the pod requests are used to determine which node has sufficient resources available for scheduling.
Not setting a pod request will default it to the limit defined.
It is very important to monitor the performance of your application to adjust these requests. If insufficient requests are made, your application may receive degraded performance due to over scheduling a node. If requests are overestimated, your application may have increased difficulty getting scheduled.</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Pod CPU/Memory limits</strong> are the maximum amount of CPU and memory that a pod can use. These limits help define which pods should be killed in the event of node instability due to insufficient resources. Without proper limits set pods will be killed until resource pressure is lifted.
Pod limits help define when a pod has lost of control of resource consumption. When a limit is exceeded, the pod is prioritized for killing to maintain node health and minimize impact to pods sharing the node.
Not setting a pod limit defaults it to the highest available value on a given node.
Don't set a pod limit higher than your nodes can support. Each AKS node reserves a set amount of CPU and memory for the core Kubernetes components. Your application may try to consume too many resources on the node for other pods to successfully run.
Again, it is very important to monitor the performance of your application at different times during the day or week. Determine when the peak demand is, and align the pod limits to the resources required to meet the application's max needs.</p>
</blockquote>
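<p>As a minimal sketch (the values are just examples that you would tune for your own workload), requests and limits are set per container in the pod spec:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: nginx              # example image
    resources:
      requests:
        cpu: 100m             # what the scheduler reserves for this container
        memory: 128Mi
      limits:
        cpu: 500m             # hard cap on CPU usage
        memory: 256Mi         # exceeding this gets the container OOM-killed
</code></pre>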
| Jakub |
<p>Looking at <a href="https://stackoverflow.com/questions/53012798/kubernetes-configmap-size-limitation">Kubernetes ConfigMap size limitation</a> and <a href="https://github.com/kubernetes/kubernetes/issues/19781" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/19781</a>, the size limit for ConfigMap resources in Kubernetes seems to be 1 MB due to etcd restrictions. However, why do comments in the YAML for one's ConfigMap resources count towards the size and the 1 MB limit?</p>
<p>By comments, I mean the following in a sample ConfigMap yaml file:</p>
<pre><code>---
kind: ConfigMap
apiVersion: v1
metadata:
name: my-configmap
namespace: default
data:
# Comment lines here that do not impact the ConfigMap in terms of actual value
# or the resource but seem to count towards the 1 MB size limit
key: val
</code></pre>
| RochesterinNYC | <p>The 1 MB limit is not a limit on your data alone, but on the whole ConfigMap object that is stored in etcd, so everything that ends up in the stored object counts towards it.</p>
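<p>If you want a rough idea of how large a given ConfigMap object is, you can dump it and count the bytes (only an approximation, since the stored representation differs slightly from the YAML dump):</p>
<pre><code>kubectl get configmap my-configmap -n default -o yaml | wc -c
</code></pre>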
| Matt |
<p>I have set up an Arango instance on Kubernetes nodes, which were installed on a VM, as mentioned in the ArangoDB docs <a href="https://www.arangodb.com/docs/stable/tutorials-kubernetes.html" rel="nofollow noreferrer">ArangoDB on Kubernetes</a>. Keep in mind, I skipped the <code>ArangoLocalStorage</code> and <code>ArangoDeploymentReplication</code> step. I can see 3 pods each of agent, coordinators and dbservers in get pods.</p>
<p>The <code>arango-cluster-ea service</code>, however, shows the external IP as pending. I can use the master node's IP address and the service port to access the Web UI, connect to the DB and make changes. But I am not able to access either the Arango shell, nor am I able to use my Python code to connect to the DB. I am using the Master Node IP and the service port shown in <code>arango-cluster-ea</code> in services to try to make the Python code connect to DB. Similarly, for arangosh, I am trying the code:</p>
<pre><code>kubectl exec -it *arango-cluster-crdn-pod-name* -- arangosh --service.endpoint tcp://masternodeIP:8529
</code></pre>
<p>In case of Python, since the Connection class call is in a try block, it goes to except block. In case of Arangosh, it opens the Arango shell with the error:</p>
<pre><code>Cannot connect to tcp://masternodeIP:port
</code></pre>
<p>thus not connecting to the DB.</p>
<p>Any leads about this would be appreciated.</p>
| Arghya Dutta | <p>Posting this community wiki answer to point to the GitHub issue in which this question was resolved.</p>
<p>Feel free to edit/expand.</p>
<hr />
<p>Link to github:</p>
<ul>
<li><em><a href="https://github.com/arangodb/kube-arangodb/issues/734" rel="nofollow noreferrer">Github.com: Arangodb: Kube-arangodb: Issues: 734</a></em></li>
</ul>
<blockquote>
<p>Here's how my issue got resolved:</p>
<p>To connect to arangosh, what worked for me was to use ssl before using the localhost:8529 ip-port combination in the server.endpoint. Here's the command that worked:</p>
<ul>
<li><code>kubectl exec -it _arango_cluster_crdn_podname_ -- arangosh --server.endpoint ssl://localhost:8529</code></li>
</ul>
<p>For web browser, since my external access was based on NodePort type, I put in the master node's IP and the 30000-level port number that was generated (in my case, it was 31200).</p>
<p>For Python, in case of PyArango's Connection class, it worked when I used the arango-cluster-ea service. I put in the following line in the connection call:</p>
<ul>
<li><code>conn = Connection(arangoURL='https://arango-cluster-ea:8529', verify= False, username = 'root', password = 'XXXXX')</code>
The verify=False flag is important to ignore the SSL validity, else it will throw an error again.</li>
</ul>
<p>Hopefully this solves somebody else's issue, if they face the similar issue.</p>
</blockquote>
<hr />
<p>I've tested the following solution and managed to successfully connect to the database via:</p>
<ul>
<li><code>arangosh</code> from <code>localhost</code>:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>Connected to ArangoDB 'http+ssl://localhost:8529, version: 3.7.12 [SINGLE, server], database: '_system', username: 'root'
</code></pre>
<ul>
<li>Python code</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from pyArango.connection import *
conn = Connection(arangoURL='https://ABCD:8529', username="root", password="password",verify= False )
db = conn.createDatabase(name="school")
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://www.arangodb.com/tutorials/tutorial-python/" rel="nofollow noreferrer">Arangodb.com: Tutorials: Tutorial Python</a></em></li>
<li><em><a href="https://www.arangodb.com/docs/stable/tutorials-kubernetes.html" rel="nofollow noreferrer">Arangodb.com: Docs: Stable: Tutorials Kubernetes</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm trying to set up a reverse proxy using nginx-ingress,</p>
<p>but I cannot find a way to apply the reverse proxy to only certain paths.</p>
<p>For example, I want to reverse proxy <a href="http://myservice.com/about/" rel="nofollow noreferrer">http://myservice.com/about/</a>* to CDN static resources,</p>
<p>and have other paths served by my service (in the example, this means the 'my-service-web' service).</p>
<p>Maybe in terms of k8s, CDN means "public external service".</p>
<p>As a result,</p>
<ul>
<li><a href="http://myservice.com/about/" rel="nofollow noreferrer">http://myservice.com/about/</a>* -> reverse proxy from CDN (external service)</li>
<li><a href="http://myservice.com/" rel="nofollow noreferrer">http://myservice.com/</a>* -> my-service-web (internal service)</li>
</ul>
<p>here is my ingress.yaml file</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-service-web
namespace: my-service
annotations:
kubernetes.io/ingress.class: nginx-ingress
nginx.ingress.kubernetes.io/server-snippet: |
location ~ /about/(.*) {
proxy_pass https://CDN_URL/$1${is_args}${args};
......and other proxy settings
}
spec:
rules:
- host: myservice.com
http:
paths:
- path: /about
........how do i configuration this?
- path: /*
backend:
serviceName: my-service-web
servicePort: 80
</code></pre>
<p>How do I set the rules and annotations?</p>
| highalps | <p>You can create a service of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName</a> type that will point to your external service (CDN); it's well explained in this <a href="https://akomljen.com/kubernetes-tips-part-1/" rel="nofollow noreferrer">blog post</a>. For example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: my-service
spec:
type: ExternalName
externalName: FQDN
</code></pre>
<p>and then use it in your ingress rules by referring to the service name.</p>
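<p>As a minimal sketch (my assumption of how this could be wired up, not a tested config), a separate ingress just for the <code>/about</code> path could reference that ExternalName service; keeping it as a separate object means the CDN-specific annotations don't affect the rest of the site. The CDN host and ports are placeholders:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-cdn
  namespace: my-service
  annotations:
    kubernetes.io/ingress.class: nginx-ingress
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # if the CDN only serves HTTPS
    nginx.ingress.kubernetes.io/upstream-vhost: "CDN_FQDN" # so the CDN sees its own Host header
spec:
  rules:
  - host: myservice.com
    http:
      paths:
      - path: /about
        backend:
          serviceName: my-service   # the ExternalName service above
          servicePort: 443
</code></pre>
<p>The original ingress would then keep only the <code>/</code> path pointing at <code>my-service-web</code>.</p>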
| kool |
<p>I am trying to do a fairly simple red/green setup using Minikube where I want 1 pod running a red container and 1 pod running a green container and a service to hit each. To do this my k82 file is like...</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: main-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/Users/jackiegleason/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: main-volume-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
labels:
app: express-app
name: express-service-red
spec:
type: LoadBalancer
ports:
- port: 3000
selector:
app: express-app-red
---
apiVersion: v1
kind: Service
metadata:
labels:
app: express-app
name: express-service-green
spec:
type: LoadBalancer
ports:
- port: 3000
selector:
app: express-app-green
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: express-app-deployment-red
labels:
app: express-app-red
tier: app
spec:
replicas: 1
selector:
matchLabels:
tier: app
template:
metadata:
labels:
app: express-app-red
tier: app
spec:
volumes:
- name: express-app-storage
persistentVolumeClaim:
claimName: main-volume-claim
containers:
- name: express-app-container
image: partyk1d24/hello-express:latest
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: "/var/external"
name: express-app-storage
ports:
- containerPort: 3000
protocol: TCP
name: express-endpnt
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: express-app-deployment-green
labels:
app: express-app-green
tier: app
spec:
replicas: 1
selector:
matchLabels:
tier: app
template:
metadata:
labels:
app: express-app-green
tier: app
spec:
volumes:
- name: express-app-storage
persistentVolumeClaim:
claimName: main-volume-claim
containers:
- name: express-app-container
image: partyk1d24/hello-express:latest
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_TYPE
value: "Green"
volumeMounts:
- mountPath: "/var/external"
name: express-app-storage
ports:
- containerPort: 3000
protocol: TCP
name: express-endpnt
</code></pre>
<p>But when I run I get...</p>
<blockquote>
<p>0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.</p>
</blockquote>
<p>What am I missing? It worked fine without the second deployment so what am I missing?</p>
<p>Thank you!</p>
| Jackie | <p>You cannot use the same PV with <code>accessMode: ReadWriteOnce</code> multiple times like this.</p>
<p>To do this you would need to use a volume that supports ReadWriteMany access mode.
Check out k8s documentation for <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">the list of Volume Plugins</a> that support this feature.</p>
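<p>For illustration, a minimal sketch of an NFS-backed volume that supports <code>ReadWriteMany</code> (the NFS server address and export path are placeholders; you need an actual NFS export or a managed equivalent for this to work):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10          # placeholder NFS server address
    path: /exports/app-data    # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-volume-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind to the pre-created PV above, not a dynamic one
  resources:
    requests:
      storage: 1Gi
</code></pre>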
<p>Additionally, as David already mentioned, it's much better to log to STDOUT.</p>
<p>You can also check solutions like <a href="https://fluentbit.io/" rel="nofollow noreferrer">FluentBit</a>/<a href="https://www.fluentd.org/" rel="nofollow noreferrer">fluentd</a> or <a href="https://www.elastic.co/what-is/elk-stack" rel="nofollow noreferrer">ELK stack</a>.</p>
| Matt |
<p><a href="https://i.stack.imgur.com/iTlc3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iTlc3.png" alt="enter image description here"></a></p>
<p>What does the field mean?</p>
<p>Besides, if I will use it, what conditions my configuration need to meet?</p>
| zhangqichuan | <p>Template field determines how a pod will be created. In this field you specify <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#metadata" rel="nofollow noreferrer">metadata</a> and <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status" rel="nofollow noreferrer">spec</a> values. Pod Template is used in workloads such as <code>DaemonSets, Deployments, Jobs</code>. Based on this template it creates a pod to match desired state. </p>
<p><code>Metadata</code> field in <code>template</code> section is mostly used to specify labels that will be assigned to each pod created by the controller (Deployment, DaemonSet etc.) and is used for identification. Whenever you specify those labels, you have to also specify <code>selector</code> that matches pod template's labels.</p>
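<p>A minimal sketch of how the selector and the template labels have to line up in a Deployment (the names and image are just examples):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example          # must match the template labels below
  template:
    metadata:
      labels:
        app: example        # labels assigned to every pod created from this template
    spec:
      containers:
      - name: web
        image: nginx        # example image
        ports:
        - containerPort: 80
</code></pre>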
<p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#podspec-v1-core" rel="nofollow noreferrer">Spec</a> field is a specification of the desired behavior of the pod- configuration settings that will be used by a pod. Every replica of the pod will match this configuration.</p>
<p>Each workload controller react different to changes in pod template. For example, deployment will first create a new pod to match current template and then delete pod that doesn't match the template. </p>
| kool |
<p>I have a Kubernetes cluster with multiple tenants (in different namespaces). I'd like to deploy an independent Istio Gateway object into each tenant, which I seem to be able to do. However, setting up TLS requires a K8s secret that contains the TLS key/cert. The docs indicate that the "secret must be named istio-ingressgateway-certs in the istio-system namespace". This would seem to indicate that I can only have one TLS secret per cluster. Maybe I'm not reading this correctly. Is there a way to configure independent Istio Gateways in their own namespaces, with their own TLS secrets? How might I go about doing that?</p>
<p>Here is the doc that I'm referencing.<br>
<a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/</a></p>
<p>Any thoughts are much appreciated. </p>
| Joe J | <p>As provided on <a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/#configure-a-tls-ingress-gateway-for-multiple-hosts" rel="nofollow noreferrer">istio documentation</a> it's possible.</p>
<blockquote>
<p>In this section you will configure an ingress gateway for multiple hosts, httpbin.example.com and bookinfo.com.</p>
</blockquote>
<p>So You need to create private keys, in this example, for <a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/#create-a-server-certificate-and-private-key-for-bookinfo-com" rel="nofollow noreferrer">bookinfo</a> and <a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/#configure-a-tls-ingress-gateway-with-a-file-mount-based-approach" rel="nofollow noreferrer">httbin</a>, and update istio-ingressgateway.</p>
<p>I created them both and they exist.</p>
<p>bookinfo certs and gateway</p>
<pre><code>kubectl exec -it -n istio-system $(kubectl -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') -- ls -al /etc/istio/ingressgateway-bookinfo-certs
lrwxrwxrwx 1 root root 14 Jan 3 10:12 tls.crt -> ..data/tls.crt
lrwxrwxrwx 1 root root 14 Jan 3 10:12 tls.key -> ..data/tls.key
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default ingress gateway
servers:
- port:
number: 443
name: https-bookinfo
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-bookinfo-certs/tls.crt
privateKey: /etc/istio/ingressgateway-bookinfo-certs/tls.key
hosts:
- "bookinfo.com"
</code></pre>
<hr>
<p>httpbin certs and gateway</p>
<pre><code>kubectl exec -it -n istio-system $(kubectl -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') -- ls -al /etc/istio/ingressgateway-certs
lrwxrwxrwx 1 root root 14 Jan 3 10:07 tls.crt -> ..data/tls.crt
lrwxrwxrwx 1 root root 14 Jan 3 10:07 tls.key -> ..data/tls.key
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: httpbin-gateway
spec:
selector:
istio: ingressgateway # use istio default ingress gateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
hosts:
- "httpbin.example.com"
</code></pre>
<p>I haven't made a full reproduction to check that they both work, but if that doesn't work for you I will try to make one and update the answer.</p>
<p>This <a href="https://istiobyexample.dev/secure-ingress/" rel="nofollow noreferrer">link</a> might be helpful.</p>
| Jakub |
<p>I couldn't find an equivalent k8s cli command to do something like this, nor any ssh keys stored as k8s secrets. It also appears to do this in a cloud-agnostic fashion.</p>
<p>Is it just using a k8s pod with special privileges or something?</p>
<p>Edit: oops, it's open-source. I'll investigate and update this question accordingly</p>
| Avi Mosseri | <p>Posting this community wiki answer to give more visibility on the comment that was made at a github issue that addressed this question:</p>
<blockquote>
<p>Lens will create <code>nsenter</code> pod to the selected node</p>
<blockquote>
<pre><code>protected async createNodeShellPod(podId: string, nodeName: string) {
const kc = this.getKubeConfig();
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
const pod = {
metadata: {
name: podId,
namespace: "kube-system"
},
spec: {
restartPolicy: "Never",
terminationGracePeriodSeconds: 0,
hostPID: true,
hostIPC: true,
hostNetwork: true,
tolerations: [{
operator: "Exists"
}],
containers: [{
name: "shell",
image: "docker.io/alpine:3.9",
securityContext: {
privileged: true,
},
command: ["nsenter"],
args: ["-t", "1", "-m", "-u", "-i", "-n", "sleep", "14000"]
}],
nodeSelector: {
"kubernetes.io/hostname": nodeName
}
}
} as k8s.V1Pod;
</code></pre>
</blockquote>
<p>and exec into that container in lens terminal.</p>
<p>-- <em><a href="https://github.com/lensapp/lens/issues/824#issuecomment-688826431" rel="noreferrer">Github.com: Lensapp: Issues: How Lens accessing nodes in AKS/EKS without user and SSH key under ROOT?</a></em></p>
</blockquote>
<hr />
<p>I've checked this and as it can be seen below the <code>Pod</code> with <code>nsenter</code> is created in the <code>kube-system</code> (checked on <code>GKE</code>):</p>
<ul>
<li><code>$ kubectl get pods -n kube-system</code> (output redacted)</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>kube-system node-shell-09f6baaf-dc4a-4faa-969e-8016490eb8e0 1/1 Running 0 10m
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://github.com/lensapp/lens/issues/295" rel="noreferrer">Github.com: Lensapp: Lens: Issues: How does lens use terminal/ssh for worker nodes?</a></em></li>
<li><em><a href="https://man7.org/linux/man-pages/man1/nsenter.1.html" rel="noreferrer">Man7.org: Linux: Man pages: Nsenter</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm struggling with a very basic example of an Ingress service fronting an nginx pod. When ever I try to visit my example site I get this simple text output instead of the default nginx page:</p>
<pre><code>404 page not found
</code></pre>
<p>Here is the deployment I'm working with:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 4
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
namespace: default
spec:
rules:
- host: argo.corbe.net
http:
paths:
- backend:
serviceName: ningx
servicePort: 80
</code></pre>
<p>k3s kubectl get pods -o wide:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-d6dcb985-942cz 1/1 Running 0 8h 10.42.0.17 k3s-1 <none> <none>
nginx-deployment-d6dcb985-d7v69 1/1 Running 0 8h 10.42.0.18 k3s-1 <none> <none>
nginx-deployment-d6dcb985-dqbn9 1/1 Running 0 8h 10.42.1.26 k3s-2 <none> <none>
nginx-deployment-d6dcb985-vpf79 1/1 Running 0 8h 10.42.1.25 k3s-2 <none> <none>
</code></pre>
<p>k3s kubectl -o wide get services:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5d <none>
nginx-service ClusterIP 10.43.218.234 <none> 80/TCP 8h app=nginx
</code></pre>
<p>k3s kubectl -o wide get ingress:</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress <none> argo.corbe.net 207.148.25.119 80 8h
</code></pre>
<p>k3s kubectl describe deployment nginx-deployment:</p>
<pre><code>Name: nginx-deployment
Namespace: default
CreationTimestamp: Mon, 22 Feb 2021 15:19:07 +0000
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 2
Selector: app=nginx
Replicas: 4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-7848d4b86f (4/4 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m43s deployment-controller Scaled up replica set nginx-deployment-7848d4b86f to 1
Normal ScalingReplicaSet 2m43s deployment-controller Scaled down replica set nginx-deployment-d6dcb985 to 3
Normal ScalingReplicaSet 2m43s deployment-controller Scaled up replica set nginx-deployment-7848d4b86f to 2
Normal ScalingReplicaSet 2m40s deployment-controller Scaled down replica set nginx-deployment-d6dcb985 to 2
Normal ScalingReplicaSet 2m40s deployment-controller Scaled up replica set nginx-deployment-7848d4b86f to 3
Normal ScalingReplicaSet 2m40s deployment-controller Scaled down replica set nginx-deployment-d6dcb985 to 1
Normal ScalingReplicaSet 2m40s deployment-controller Scaled up replica set nginx-deployment-7848d4b86f to 4
Normal ScalingReplicaSet 2m38s deployment-controller Scaled down replica set nginx-deployment-d6dcb985 to 0
</code></pre>
| user3056541 | <p>The nginx image listens for connections on port 80 by default.</p>
<pre><code>$ kubectl run nginx --image nginx
$ kubectl exec -it nginx -- bash
root@nginx:/# apt update
**output hidden**
root@nginx:/# apt install iproute2
**output hidden**
root@nginx:/# ss -lunpt
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 0 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1,fd=7))
tcp LISTEN 0 0 *:80 *:* users:(("nginx",pid=1,fd=8))
</code></pre>
<p>Notice it's port 80 that is open, not port 8080.
This means that your service is misconfigured, because it forwards to port 8080.</p>
<p>You should set target port to 80 like following.:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80 # <- HERE
</code></pre>
<p>Also notice the service name:</p>
<pre><code>kind: Service
metadata:
name: nginx-service
</code></pre>
<p>And as a backend you put a service with a different name:</p>
<pre><code>- backend:
serviceName: ningx
</code></pre>
<p>Change it to the actual name of a service, like below:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
namespace: default
spec:
rules:
- host: argo.corbe.net
http:
paths:
- backend:
          serviceName: nginx-service
servicePort: 80
</code></pre>
<p>Apply the changes and it should work now.</p>
| Matt |
<p>In Kubernetes we can set the priority of a pod to <code>Guaranteed</code>, <code>Burstable</code> or <code>Best-Effort</code> base on requests and limits. Another method to assign priorities in Kubernetes is to define a <code>priorityClass</code> object and assign a <code>priorityClassName</code> to a pod. How are these methods different and when we have to choose one method over another? According to <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#interactions-of-pod-priority-and-qos" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#interactions-of-pod-priority-and-qos</a>:</p>
<blockquote>
<p>The scheduler’s preemption logic does not consider QoS when choosing preemption targets. Preemption considers Pod priority and attempts to choose a set of targets with the lowest priority.</p>
</blockquote>
<p>So if the Kubernetes has to choose between a pod with <code>Guaranteed</code> QoS which has a lower "priorityClass" Value than a <code>Burstable</code> pod, does it put the <code>Guaranteed</code> pod in Preempting state?</p>
| Mohammad Yosefpor | <p><code>Quality of Service</code> determines the scheduling and eviction priority of pods. When a pod is not given any resource requests/limits it is considered a <code>low-priority pod (best effort)</code>. If the node runs out of resources it is the first pod to be evicted.</p>
<p>A pod is <code>medium priority (burstable)</code> when it has any resource requests/limits specified (but does not meet the guaranteed class requirements).</p>
<p>A pod is <code>highest priority (guaranteed)</code> when it has requests and limits set to the same value (for both CPU and memory).</p>
<p><code>PriorityClass</code> is used to determine pod priority when it comes to eviction. Sometimes you may want one pod to be evicted before another pod. Priority is described by an integer in its <code>value</code> field, and the higher the value, the higher the priority.</p>
<p>When pod priority is enabled, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. If there are no nodes with enough resources to satisfy a high-priority pod, the scheduler will evict pods with lower priority. The highest priority is <code>system-node-critical</code>.</p>
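<p>A minimal sketch of a <code>PriorityClass</code> and a pod referencing it (the names and the value are just examples):</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000                  # higher value = higher priority
globalDefault: false
description: "Priority class for important workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx                # example image
</code></pre>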
<ul>
<li><code>QoS</code> is used to control and manage resources on the node among the
pods. QoS eviction happens when there are no available resources on
the node.</li>
<li>The scheduler considers the <code>PriorityClass</code> of the Pod before the QoS. It does not attempt to evict Pods unless higher-priority Pods need to be scheduled and the node does not have enough room for them.</li>
<li><code>PriorityClass</code>- when pods are preempted, <code>PriorityClass</code> respects graceful termination period but does not guarantee pod disruption budget to be honored. </li>
</ul>
| kool |
<p>I have the following scenario:
<a href="https://i.stack.imgur.com/PwLaE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PwLaE.png" alt="enter image description here"></a></p>
<ul>
<li>When the user <strong>A</strong> enter the address foo.example1.example.com in the
browser, then it should call the service <strong>FOO</strong> in the namespace
<strong>example1</strong>.</li>
<li>When the user <strong>B</strong> enter the address foo.example1.example.com in the
browser, then it should call the service <strong>FOO</strong> in the namespace
<strong>example2</strong>.</li>
</ul>
<p>I am using istio, the question is, how to configure the gateway, that is bind specific to a namespace:</p>
<p>Look at an example of istio gateway configuration:</p>
<pre><code> $ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: ns_example1
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "example1.example.com"
EOF
</code></pre>
<p>When I would deploy the gateway, then it will apply to current namespace but I would like to specify a namespace.</p>
<p>How to assign a gateway to specific namespace?</p>
| softshipper | <p>I think this <a href="https://istiobyexample.dev/secure-ingress/" rel="nofollow noreferrer">link</a> should answer your question.</p>
<p>There are many things you won't need, but it shows the idea you want to apply to your Istio cluster.</p>
<p>So you need 1 gateway and 2 virtual services.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: foocorp-gateway
namespace: default
spec:
selector:
istio: ingressgateway # use istio default ingress gateway
servers:
- port:
number: 80
name: http-example1
protocol: HTTP
hosts:
- "example1.example.com"
- port:
number: 80
name: http-example2
protocol: HTTP
hosts:
- "example2.example.com"
</code></pre>
<hr>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: example1
namespace: ex1
spec:
hosts:
- "example1.example.com"
gateways:
- foocorp-gateway
http:
- match:
- uri:
exact: /
route:
- destination:
host: example1.ex1.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: example2
namespace: ex2
spec:
hosts:
- "example2.example.com"
gateways:
- foocorp-gateway
http:
- match:
- uri:
exact: /
route:
- destination:
host: example2.ex2.svc.cluster.local
port:
number: 80
</code></pre>
<p><strong>EDIT</strong></p>
<p>You can create the gateway in namespace ex1 or ex2, then just change the gateway field in the virtual service and it should work.</p>
<p>Remember to add namespace/gateway, not only gateway name, like <a href="https://istio.io/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">there</a>.</p>
<pre><code>gateways:
- some-config-namespace/gateway-name
</code></pre>
<p>Let me know if that helps you.</p>
| Jakub |
<p>In my project I have to create a kubernetes cluster on my GCP with an External Load Balancer service for my django app. I create it with this <code>yaml</code> file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mydjango
namespace: test1
labels:
app: mydjango
spec:
ports:
- name: http
port: 8000
targetPort: 8000
selector:
app: mydjango
type: LoadBalancer
</code></pre>
<p>I apply it and all work is done on my cluster except for the fact that kubernetes create a Load balancer using <code>http</code>.</p>
<p>How can I modify my <code>yaml</code> to create the same Load Balancer using <code>https</code> instead <code>http</code> using my google managed certs?</p>
<p>So many thanks in advance
Manuel</p>
| Manuel Santi | <p>I wholeheartedly agree with the answer provided by @guillaume blaquiere.</p>
<p>You should use the following guide to get an <code>HTTPS</code> connection to your Django app.</p>
<p>I would also like to add some additional information/resources to the whole question.</p>
<hr />
<p>Addressing the following statement:</p>
<blockquote>
<p>I apply it and all work done on my cluster except for the fact that kubernetes create a Load balancer using http.</p>
</blockquote>
<p><strong>In fact you are creating a network load balancer</strong> (layer 4), (<strong><code>TCP</code></strong>/<code>UDP</code>):</p>
<blockquote>
<p>When you create a Service of type LoadBalancer, a Google Cloud controller wakes up and configures a network load balancer in your project. The load balancer has a stable IP address that is accessible from outside of your project.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service#services_of_type_loadbalancer" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Service: Service of type LoadBalancer</a></em></p>
</blockquote>
<p>This type of a load balancer will forward the packets to its destination but it won't be able to accomplish things like path based routing or SSL termination.</p>
<p>To have the ability to connect to your Django app with HTTPS you can:</p>
<ul>
<li>Use <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE Ingress for HTTP(S) Load Balancing</a> (as pointed by guillaume blaquiere)</li>
</ul>
<p>In the whole process you will be using an <code>Ingress</code> resource to forward the traffic to the specific backend. Your <code>Ingress</code> controller will also be responsible for handling <code>SSL</code>.</p>
<blockquote>
<p>A side note!</p>
<p>I'd reckon you could change the <code>Service</code> of type <code>LoadBalancer</code> to a <code>Service</code> of type <code>NodePort</code>.</p>
</blockquote>
<p>You final <code>Ingress</code> definition will look similar to the one below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: INGRESS_NAME
namespace: test1
annotations:
kubernetes.io/ingress.global-static-ip-name: ADDRESS_NAME
networking.gke.io/managed-certificates: CERTIFICATE_NAME
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: mydjango
port:
        number: 8000
</code></pre>
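<p>The <code>CERTIFICATE_NAME</code> referenced in the annotation above would be a Google-managed certificate object, which (as a sketch, with a placeholder domain) could look like this:</p>
<pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: CERTIFICATE_NAME
  namespace: test1
spec:
  domains:
    - your-domain.example.com   # placeholder domain pointing at the reserved static IP
</code></pre>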
<p><strong>Alternatively</strong> you can:</p>
<ul>
<li>Use a different <code>Ingress</code> controller like <code>nginx-ingress</code> and add a certificate to handle <code>HTTPS</code> by one of the following (this will not use a Google-managed certificate):
<ul>
<li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: User guide: TLS</a></li>
<li><a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">Cert-manager.io: Docs: Installation: Kubernetes</a></li>
</ul>
</li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
</ul>
<p>I'd reckon you could also take a look on this answer (on how the communication is happening with <code>nginx-ingress</code>):</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/64647258/how-nginx-ingress-controller-back-end-protocol-annotation-works-in-path-based-ro/64662822#64662822">Stackoverflow.com: Questions: How NGINX Ingress controller back-end protocol annotation works in path based routing?</a></em></li>
</ul>
| Dawid Kruk |
<p><strong>SHORT QUESTION:</strong></p>
<p>Is there a way for a Kubernetes pod to associate multiple IP addresses?</p>
<p>Even if only loopback ones?</p>
<p><strong>LONGER EXPLANATION:</strong></p>
<p>I have an application that needs to be deployed in a Kubernetes pod and makes use of Cassandra. Cassandra itself is located on premise behind the firewall that, for administrative reasons, cannot be opened for direct IP access from an external cloud where the K8S landscape is hosted. Instead, I have to develop a relay that goes through a custom tunnel.</p>
<p>Cassandra driver inside the application will be pointed not to the real Cassandra cluster, but to the relay, which then will tunnel a connection.</p>
<p>I'd very much prefer to run the relay inside the pod itself (even better right inside the app container), to minimize the number of data traversals, since data transmission rate will be quite high, and also to minimize the number of distinct failure points and components to manage, and also to provide a measure of co-scaling with app replicas (the app is autoscaled, potentially to a large number of replicas).</p>
<p>The problem however is that Cassandra driver makes connection to every node in Cassandra cluster by node IP address, e.g. if Cassandra cluster is three nodes, then the driver connects to node1:9042, node2:9042 and node3:9042. Port number is force-shared by all the connections. The driver does not allow to specify let us say node1:9042, node2:9043 and node3:9044. Thus, I cannot have the driver connect to thispod:9042, thispod:9043 and thispod:9044. If it were possible, I could have a relay to run inside the container, listen on three ports, and then forward the connections. However, because of Cassandra driver limitations, relay listening endpoints must have distinct IP addresses (and I'd rather avoid having to make a custom-modified version of the driver to lift this restriction).</p>
<p>Which brings us to a question: is it possible for a pod to associate additional IP addresses?</p>
<p>The type of address does not matter much, as long as it is possible within a container or pod to send data to this address and to receive from it. Communication is essentially loopback within the contaner, or pod. If it were a non-containerized environment but plain Linux VM, I could create additional loopback interfaces which would have solved the problem. But inside the container interfaces cannot be created.</p>
<p>Is there a way to make Kubernetes to associate additional IP addresses to the pod?</p>
| sergey_o | <p>To associate additional IP addresses with a pod you can use <a href="https://github.com/intel/multus-cni" rel="noreferrer">Multus CNI</a>. It allows you to attach multiple network interfaces to a pod. It requires a default CNI for pod-to-pod communication (e.g. Calico, Flannel).</p>
<p>How it works is that you create a <a href="https://github.com/intel/multus-cni/blob/master/doc/quickstart.md#storing-a-configuration-as-a-custom-resource" rel="noreferrer">NetworkAttachmentDefinition CRD</a>, which you then reference in an <a href="https://github.com/intel/multus-cni/blob/master/doc/quickstart.md#creating-a-pod-that-attaches-an-additional-interface" rel="noreferrer">annotation field</a> in the pod manifest. Besides that you can define <a href="https://github.com/intel/multus-cni/blob/master/doc/how-to-use.md#specifying-a-default-route-for-a-specific-attachment" rel="noreferrer">default routes</a> for the interfaces. Example usage:</p>
<p>CRD definition</p>
<pre><code>apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'
</code></pre>
<p>Pod manifest:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
containers:
- name: samplepod
command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
image: alpine
</code></pre>
<p>And when you exec into the pod when it's running you can see that an additional interface was created:</p>
<pre><code>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1440 qdisc noqueue state UP
link/ether 86:75:ed:87:a1:1a brd ff:ff:ff:ff:ff:ff
inet 192.168.171.65/32 scope global eth0
valid_lft forever preferred_lft forever
3: net1@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1460 qdisc noqueue state UP
link/ether 1a:58:6e:88:fb:f5 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.200/24 scope global net1
valid_lft forever preferred_lft forever
</code></pre>
| kool |
<p>We are on Kubernetes and use Istio Service Mesh. Currently, there is SSL Termination for HTTPS in Gateway. I see in the istio-proxy logs that the HTTP protocol is HTTP 1.1.</p>
<p>I want to upgrade HTTP 1.1 to HTTP2 due to its various advantages. Clients should call our services HTTP2 over SSL/TLS.</p>
<p>I am using this <a href="https://blog-tech.groupeonepoint.com/playing-with-istio-service-mesh-on-kubernetes/" rel="nofollow noreferrer">blog</a> for an internal demo on this topic. </p>
<p>These are the bottlenecks:</p>
<p>1) I want to propose a plan which will causes least amount of changes. I understand I need to update the Gateway from </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
serverCertificate: /etc/certs/server.pem
privateKey: /etc/certs/privatekey.pem
</code></pre>
<p>to </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http2
protocol: HTTP2
hosts:
- "*"
tls:
mode: SIMPLE
serverCertificate: /etc/certs/server.pem
privateKey: /etc/certs/privatekey.pem
</code></pre>
<p>based on the examples I see in the <a href="https://istio.io/docs/reference/config/networking/gateway/#Server" rel="nofollow noreferrer">Istio's Gateway documentation</a>.</p>
<p>I want to know: Will this allow HTTP2 over TLS connections from browsers (which support only this mode)? Can I provide tls details for HTTP2, like I did with HTTPS?</p>
<p>2) What are some of the other Istio configurations to update?</p>
<p>3) Will this change be break Microservices which are using http protocol currently? How can I mitigate this?</p>
<p>4) I was reading about DestinationRule and <a href="https://istio.io/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-HTTPSettings-H2UpgradePolicy" rel="nofollow noreferrer">upgrade policy</a>. Is this a good fit? </p>
| Anoop Hallimala | <p>Based on my knowledge, the <a href="https://istio.io/" rel="nofollow noreferrer">istio documentation</a> and the istio <a href="https://istio.io/about/feature-stages/" rel="nofollow noreferrer">feature stages</a> (http2 is in the stable phase):</p>
<blockquote>
<p>1) Will this allow HTTP2 over TLS connections from browsers (which support only this mode)? Can I provide tls details for HTTP2, like I did with HTTPS?</p>
</blockquote>
<p>Yes, it should allow http2.</p>
<hr>
<blockquote>
<p>2) What are some of the other Istio configurations to update?</p>
</blockquote>
<p>Places where you have the option to apply http2:</p>
<hr>
<ul>
<li><a href="https://istio.io/docs/reference/config/networking/gateway/#Server" rel="nofollow noreferrer"><strong>Gateway</strong></a></li>
</ul>
<hr>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: my-ingress
spec:
selector:
app: my-ingress-gateway
servers:
- port:
number: 80
name: **http2**
protocol: **HTTP2**
hosts:
- "*"
</code></pre>
<hr>
<ul>
<li><a href="https://istio.io/docs/ops/configuration/traffic-management/protocol-selection/" rel="nofollow noreferrer"><strong>Service protocol selection</strong></a> </li>
</ul>
<hr>
<p><strong>Manual protocol selection</strong></p>
<blockquote>
<p>Protocols can be specified manually by naming the Service port <code>name: &lt;protocol&gt;[-&lt;suffix&gt;]</code>. The following protocols are supported:</p>
</blockquote>
<ul>
<li>grpc</li>
<li>grpc-web</li>
<li>http </li>
<li><strong>http2</strong> </li>
<li>https </li>
<li>mongo </li>
<li>mysql* </li>
<li>redis* </li>
<li>tcp </li>
<li>tls </li>
<li>udp</li>
</ul>
<blockquote>
<p>*These protocols are disabled by default to avoid accidentally enabling experimental features. To enable them, configure the corresponding Pilot environment variables.</p>
</blockquote>
<hr>
<pre><code>kind: Service
metadata:
name: myservice
spec:
ports:
- number: 80
name: http2
</code></pre>
<hr>
<blockquote>
<p>3) Will this change be break Microservices which are using http protocol currently? How can I mitigate this?</p>
<p>4) I was reading about DestinationRule and upgrade policy. Is this a good fit?</p>
</blockquote>
<p>I think it should be a good fit. You would have to set the <a href="https://istio.io/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-HTTPSettings-H2UpgradePolicy" rel="nofollow noreferrer">h2UpgradePolicy</a> and change the services to http2.</p>
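<p>As a sketch (the host is a placeholder), a <code>DestinationRule</code> that upgrades connections to a service to HTTP/2 could look like this:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-h2
spec:
  host: my-service.my-namespace.svc.cluster.local   # placeholder service
  trafficPolicy:
    connectionPool:
      http:
        h2UpgradePolicy: UPGRADE   # upgrade HTTP/1.1 connections to this service to HTTP/2
</code></pre>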
<hr>
<p>I hope this helps you.</p>
| Jakub |
<p>Assuming the following JSON:</p>
<pre class="lang-json prettyprint-override"><code>{
"A":{
"A_KEY":"%PLACEHOLDER_1%",
"B_KEY":"%PLACEHOLDER_2%"
}
}
</code></pre>
<p>And, the following values.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>placeholders:
PLACEHOLDER_1: Hello
PLACEHOLDER_2: World
</code></pre>
<p>I would like to load this JSON using Configmap into my pod. But, replace the placeholders with the values under <code>values.yaml</code> <strong>automatically based on the key</strong>.
Thought about writing a simple regex which will search for the words between two <code>%</code> and use this word with <code>.Values.placeholders.$1</code>.</p>
<p>So far, I managed to replace single value using:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config
data:
config.json: |-
{{- regexReplaceAll "%PLACEHOLDER_1%" ( .Files.Get "config.json") .Values.placeholders.PLACEHOLDER_1 | nindent 4 }}
</code></pre>
<p>The final goal is to replace both PLACEHOLDER_1 and PLACEHOLDER_2 by single regex.</p>
<p>desired JSON:</p>
<pre class="lang-json prettyprint-override"><code>{
"A":{
"A_KEY":"Hello",
"B_KEY":"World"
}
}
</code></pre>
<p>Any help will be much appriciated.</p>
| Amit Baranes | <p>Here is what I have come up with:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config
data:
config.json: |-
{{- $file := .Files.Get "config.json" }}
{{- range $k, $v := .Values.placeholders }}
{{- $file = regexReplaceAll (printf "%%%s%%" $k) $file $v }}
{{- end }}
{{- print $file | nindent 4 }}
</code></pre>
<p>I load the config file content into a <code>$file</code> variable and then iterate over all the placeholder keys from the values.yaml file and replace them one by one, saving the output back to the same variable. At the end, the <code>$file</code> variable has all fields substituted, so I just print it.</p>
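<p>A quick way to verify the substitution before deploying is to render the chart locally; with the values above, the rendered ConfigMap should contain the JSON from the question with the placeholders replaced:</p>
<pre><code>$ helm template .
...
data:
  config.json: |-
    {
      "A":{
        "A_KEY":"Hello",
        "B_KEY":"World"
      }
    }
</code></pre>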
| Matt |
<p>I have a Kubernetes <code>Job</code>, <code>job.yaml</code> :</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
name: my-job
namespace: my-namespace
spec:
template:
spec:
containers:
- name: my-container
image: gcr.io/project-id/my-image:latest
command: ["sh", "run-vpn-script.sh", "/to/download/this"] # need to run this multiple times
securityContext:
privileged: true
allowPrivilegeEscalation: true
restartPolicy: Never
</code></pre>
<p>I need to run <code>command</code> for different parameters. I have like 30 parameters to run. I'm not sure what is the best solution here. I'm thinking to create container in a loop to run all parameters. How can I do this? I want to run the <code>commands</code> or containers all simultaneously.</p>
| user6308605 | <p>Some of the ways that you could do it, outside of the solutions proposed in other answers, are the following:</p>
<ul>
<li>With a templating tool like <code>Helm</code> where you would template the exact specification of your workload and then iterate over it with different values (see the example)</li>
<li>Use the Kubernetes official documentation on work queue topics:
<ul>
<li><a href="https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/" rel="nofollow noreferrer">Indexed Job for Parallel Processing with Static Work Assignment</a> - alpha</li>
<li><a href="https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/" rel="nofollow noreferrer">Parallel Processing using Expansions</a></li>
</ul>
</li>
</ul>
<hr />
<h3><code>Helm</code> example:</h3>
<p><code>Helm</code>, in short, is a templating tool that allows you to template your manifests (<code>YAML</code> files). With that you can have multiple instances of <code>Jobs</code>, each with a different name and a different command.</p>
<p>Assuming that you've installed <code>Helm</code> by following guide:</p>
<ul>
<li><em><a href="https://helm.sh/docs/intro/install/" rel="nofollow noreferrer">Helm.sh: Docs: Intro: Install</a></em></li>
</ul>
<p>You can create an example Chart that you will modify to run your <code>Jobs</code>:</p>
<ul>
<li><code>helm create chart-name</code></li>
</ul>
<p>You will need to delete everything that is in the <code>chart-name/templates/</code> and clear the <code>chart-name/values.yaml</code> file.</p>
<p>After that you can create your <code>values.yaml</code> file which you will iterate upon:</p>
<pre class="lang-yaml prettyprint-override"><code>jobs:
- name: job1
command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(3)"']
image: perl
- name: job2
command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(20)"']
image: perl
</code></pre>
<ul>
<li><code>templates/job.yaml</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>{{- range $jobs := .Values.jobs }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ $jobs.name }}
namespace: default # <-- FOR EXAMPLE PURPOSES ONLY!
spec:
template:
spec:
containers:
- name: my-container
image: {{ $jobs.image }}
command: {{ $jobs.command }}
securityContext:
privileged: true
allowPrivilegeEscalation: true
restartPolicy: Never
---
{{- end }}
</code></pre>
<p>If you have the above files created, you can run the following command to see beforehand what will be applied to the cluster:</p>
<ul>
<li><code>$ helm template .</code> (inside the <code>chart-name</code> folder)</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: job1
namespace: default
spec:
template:
spec:
containers:
- name: my-container
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(3)"]
securityContext:
privileged: true
allowPrivilegeEscalation: true
restartPolicy: Never
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: job2
namespace: default
spec:
template:
spec:
containers:
- name: my-container
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(20)"]
securityContext:
privileged: true
allowPrivilegeEscalation: true
restartPolicy: Never
</code></pre>
<blockquote>
<p>A side note #1!</p>
<p>This example will create <code>X</code> <code>Jobs</code>, each one separate from the others. Please refer to the documentation on data persistence if the downloaded files need to be stored persistently (example: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">GKE</a>).</p>
</blockquote>
<blockquote>
<p>A side note #2!</p>
<p>You can also add your <code>namespace</code> definition in the templates (<code>templates/namespace.yaml</code>) so it will be created before running your <code>Jobs</code>.</p>
</blockquote>
<p>You can also install the above Chart by:</p>
<ul>
<li><code>$ helm install chart-name .</code> (inside the <code>chart-name</code> folder)</li>
</ul>
<p>After that you should be seeing 2 <code>Jobs</code> that are completed:</p>
<ul>
<li><code>$ kubectl get pods</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE
job1-2dcw5 0/1 Completed 0 82s
job2-9cv9k 0/1 Completed 0 82s
</code></pre>
<p>And the output that they've created:</p>
<ul>
<li><code>$ echo "one:"; kubectl logs job1-2dcw5; echo "two:"; kubectl logs job2-9cv9k</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>one:
3.14
two:
3.1415926535897932385
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/68159868/kubernetes-creation-of-multiple-deployment-with-one-deployment-file/68251390#68251390">Stackoverflow.com: Questions: Kubernetes creation of multiple deployment with one deployment file</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm trying to set up a reverse proxy using an istio virtual service.</p>
<p>Is it possible to forward a request in a virtual service (like nginx's proxy_pass)?</p>
<p>in result, </p>
<ul>
<li><a href="http://myservice.com/about/" rel="nofollow noreferrer">http://myservice.com/about/</a>* -> forward request to CDN (external service outside k8s system - aws s3, etc....)</li>
<li><a href="http://myservice.com/" rel="nofollow noreferrer">http://myservice.com/</a>* -> my-service-web (internal service includes in istio mesh)</li>
</ul>
<p>I defined a serviceentry, but it just redirects; it does not forward the request.</p>
<p>here is my serviceentry.yaml and virtualservice.yaml</p>
<ol>
<li>serviceentry.yaml</li>
</ol>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: my-service-proxy
namespace: my-service
spec:
hosts:
- CDN_URL
location: MESH_EXTERNAL
ports:
- number: 80
name: http
protocol: HTTP
- number: 443
name: https
protocol: TLS
resolution: DNS
</code></pre>
<ol start="2">
<li>virtualservice.yaml</li>
</ol>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-service
namespace: my-service
spec:
hosts:
- myservice.com
gateways:
- myservice
http:
- match:
- uri:
prefix: /about
rewrite:
authority: CDN_URL
uri: /
route:
- destination:
host: CDN_URL
- route:
- destination:
host: my-service-web.svc.cluster.local
port:
number: 80
</code></pre>
<p>Can a virtualservice act like nginx-ingress?</p>
| highalps | <p>Based on this <a href="https://discuss.istio.io/t/istio-egress-breaking-apache-httpd-reverse-proxy/3542/9" rel="nofollow noreferrer">istio discuss thread</a>:</p>
<p>User @palic asked the same question there</p>
<blockquote>
<p>Shouldn’t it be possible to let ISTIO do the reverse proxy
thing, so that no one needs a webserver (httpd/nginx/
lighthttpd/…) to do the reverse proxy job?</p>
</blockquote>
<p>And the answer provided by @Daniel_Watrous</p>
<blockquote>
<p>The job of the Istio control plane is to configure a fleet of reverse proxies. The purpose of the webserver is to serve content, not reverse proxy. The reverse proxy technology at the heart of Istio is Envoy, and Envoy can be use as a replacement for HAProxy, nginx, Apache, F5, or any other component that is being used as a reverse proxy.</p>
</blockquote>
<hr>
<blockquote>
<p>it is possible forward request in virtual service</p>
</blockquote>
<p>Based on that, I would say it's not possible to do this in a virtual service; what you get is just a rewrite (redirect), which I assume is what you are observing.</p>
<hr>
<blockquote>
<p>When I need the function of a reverse proxy, do I then have to use the nginx ingress controller (or something else) instead of the istio ingress gateway? </p>
</blockquote>
<p>If we are talking about a reverse proxy, then yes, you need to use a technology other than istio itself.</p>
<p>As far as I can tell, you could use an nginx pod configured as a reverse proxy to the external service, and it would be the host for your virtual service.</p>
<p>So it would look like in below example.</p>
<p><strong>EXAMPLE</strong></p>
<p>ingress gateway -> Virtual Service -> nginx pod ( reverse proxy configured on nginx)<br>
Service entry -> accessibility of URLs outside of the cluster</p>
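<p>As a rough sketch of the nginx piece (the ConfigMap name and CDN host below are placeholders, not values from your environment), the reverse-proxy configuration could be shipped as a ConfigMap and mounted into the nginx pod at <code>/etc/nginx/conf.d/</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: cdn-proxy-conf          # hypothetical name
  namespace: my-service
data:
  default.conf: |
    server {
      listen 80;
      location /about/ {
        # proxy (forward), not redirect, to the external CDN
        proxy_pass https://cdn.example.com/;
        proxy_set_header Host cdn.example.com;
      }
    }
</code></pre>
<p>The <code>/about</code> route in your virtual service would then point at the Service in front of this nginx pod instead of the CDN host, while the ServiceEntry keeps the CDN reachable from the mesh.</p>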
<p>Let me know if you have any more questions.</p>
| Jakub |
<p>When curl is run inside the pod on port 80, the response is fine.
When curl is called from outside the container via the Kubernetes service on the machine's IP and port 30803, "Connection refused" sporadically appears.</p>
<p>nginx app config:</p>
<pre><code>server {
listen 80;
server_name 127.0.0.1;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
root /usr/share/nginx/html;
index index.html;
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
</code></pre>
<p>Kubernetes deployments and service manifest which is used:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
namespace: dev
labels:
environment: dev
spec:
selector:
matchLabels:
environment: dev
replicas: 1
template:
metadata:
labels:
environment: dev
spec:
containers:
- name: web-app
imagePullPolicy: Never
image: web-app:$BUILD_ID
ports:
- containerPort: 80
readinessProbe:
httpGet:
path: /
port: 80
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: web-app-dev-svc
namespace: dev
labels:
environment: dev
spec:
selector:
environment: dev
type: NodePort
ports:
- name: http
nodePort: 30803
port: 80
protocol: TCP
targetPort: 80
</code></pre>
<p><a href="https://i.stack.imgur.com/idMtK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/idMtK.png" alt="enter image description here" /></a></p>
| Mr.Ri | <p>The issue was that two services were using the same label value - 'environment: dev' - in their selectors, so traffic was being balanced between pods of different deployments, which caused the sporadic connection refusals. After fixing the label values, it works perfectly.</p>
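<p>For reference, a sketch of what such a fix could look like (the <code>app</code> label value here is only an illustration, not the actual manifest); each Service then selects only the pods of its own Deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web-app-dev-svc
  namespace: dev
spec:
  selector:
    app: web-app          # unique label, also set on the Deployment's pod template
    environment: dev
  type: NodePort
  ports:
  - name: http
    nodePort: 30803
    port: 80
    targetPort: 80
</code></pre>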
| Matt |
<p>i try to deploy <code>nginx-ingress-controller</code> on bare metal , I have</p>
<p><strong>4 Node</strong></p>
<ol>
<li>10.0.76.201 - Node 1</li>
<li>10.0.76.202 - Node 2</li>
<li>10.0.76.203 - Node 3</li>
<li>10.0.76.204 - Node 4</li>
</ol>
<p><strong>4 Worker</strong></p>
<ol start="5">
<li>10.0.76.205 - Worker 1</li>
<li>10.0.76.206 - Worker 2</li>
<li>10.0.76.207 - Worker 3</li>
<li>10.0.76.214 - Worker 4</li>
</ol>
<p><strong>2 LB</strong></p>
<ol start="9">
<li><p>10.0.76.208 - LB 1</p>
</li>
<li><p>10.0.76.209 - Virtual IP (keepalave)</p>
</li>
<li><p>10.0.76.210 - LB 10</p>
</li>
</ol>
<p>Everything is on <code>BareMetal</code> , Load balancer located outside Cluster .</p>
<p><strong>This is simple haproxy config , just check 80 port ( Worker ip )</strong></p>
<pre><code>frontend kubernetes-frontends
bind *:80
mode tcp
option tcplog
default_backend kube
backend kube
mode http
balance roundrobin
cookie lsn insert indirect nocache
option http-server-close
option forwardfor
server node-1 10.0.76.205:80 maxconn 1000 check
server node-2 10.0.76.206:80 maxconn 1000 check
server node-3 10.0.76.207:80 maxconn 1000 check
server node-4 10.0.76.214:80 maxconn 1000 check
</code></pre>
<p>I Install nginx-ingress-controller using Helm and everything work fine</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-xb5rw 0/1 Completed 0 18m
pod/ingress-nginx-admission-patch-skt7t 0/1 Completed 2 18m
pod/ingress-nginx-controller-6dc865cd86-htrhs 1/1 Running 0 18m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.106.233.186 <none> 80:30659/TCP,443:32160/TCP 18m
service/ingress-nginx-controller-admission ClusterIP 10.102.132.131 <none> 443/TCP 18m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 18m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-6dc865cd86 1 1 1 18m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 24s 18m
job.batch/ingress-nginx-admission-patch 1/1 34s 18m
</code></pre>
<p><strong>Deploy nginx simple way and works fine</strong></p>
<pre><code>kubectl create deploy nginx --image=nginx:1.18
kubectl scale deploy/nginx --replicas=6
kubectl expose deploy/nginx --type=NodePort --port=80
</code></pre>
<p>after , i decided to create <code>ingress.yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tektutor-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: "tektutor.training.org"
http:
paths:
- pathType: Prefix
path: "/nginx"
backend:
service:
name: nginx
port:
number: 80
</code></pre>
<p>works fine</p>
<pre><code>kubectl describe ingress tektutor-ingress
Name: tektutor-ingress
Labels: <none>
Namespace: default
Address: 10.0.76.214
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
tektutor.training.org
/nginx nginx:80 (192.168.133.241:80,192.168.226.104:80,192.168.226.105:80 + 3 more...)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 18m nginx-ingress-controller Configuration for default/tektutor-ingress was added or updated
Normal Sync 18m (x2 over 18m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>everything work fine , when i try curl any ip works <code>curl (192.168.133.241:80,192.168.226.104:80,192.168.226.105:80 + 3 more...)</code></p>
<p>now i try to add hosts</p>
<pre><code>10.0.76.201 tektutor.training.org
</code></pre>
<p>This is my master IP - is it correct to add the master IP here? When I try <code>curl tektutor.training.org</code> it does not work.</p>
<p>Can you please explain what the problem with this last step is?
Did I set the IP wrong, or is it something else? Thanks!</p>
<p>I hope I have written everything exhaustively</p>
<p>I used to this tutor <a href="https://medium.com/tektutor/using-nginx-ingress-controller-in-kubernetes-bare-metal-setup-890eb4e7772" rel="nofollow noreferrer">Medium Install nginx Ingress Controller </a></p>
| CaptainPy | <p><strong>TL;DR</strong></p>
<p>Put in your haproxy backend config values shown below instead of the ones you've provided:</p>
<ul>
<li><code>30659</code> instead of <code>80</code></li>
<li><code>32160</code> instead of <code>443</code> (if needed)</li>
</ul>
<hr />
<p>More explanation:</p>
<p><code>NodePort</code> works on certain set of ports (default: <code>30000</code>-<code>32767</code>) and in this scenario it allocated:</p>
<ul>
<li><code>30659</code> for your ingress-nginx-controller port <code>80</code>.</li>
<li><code>32160</code> for your ingress-nginx-controller port <code>443</code>.</li>
</ul>
<p>This means that every request trying to hit your cluster from <strong>outside</strong> will need to contact these ports (30...).</p>
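<p>Applied to the haproxy backend from your question, that would look roughly like this (trimmed to the relevant lines; only the server ports change):</p>
<pre><code>backend kube
    mode http
    balance roundrobin
    option http-server-close
    option forwardfor
    server node-1 10.0.76.205:30659 maxconn 1000 check
    server node-2 10.0.76.206:30659 maxconn 1000 check
    server node-3 10.0.76.207:30659 maxconn 1000 check
    server node-4 10.0.76.214:30659 maxconn 1000 check
</code></pre>
<p>Keep in mind that these NodePort values are allocated dynamically, so they can change if the ingress controller is reinstalled; in that case the haproxy backend (or a pinned nodePort in the chart's service configuration) would need to be updated accordingly.</p>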
<p>You can read more about it by following official documentation:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services</a></li>
</ul>
| Dawid Kruk |
<p>I have a service running in a k8s cluster, which I want to monitor using Prometheus Operator. The service has a <code>/metrics</code> endpoint, which returns simple data like:</p>
<pre><code>myapp_first_queue_length 12
myapp_first_queue_processing 2
myapp_first_queue_pending 10
myapp_second_queue_length 4
myapp_second_queue_processing 4
myapp_second_queue_pending 0
</code></pre>
<p>The API runs in multiple pods, behind a basic <code>Service</code> object:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-api
labels:
app: myapp-api
spec:
ports:
- port: 80
name: myapp-api
targetPort: 80
selector:
app: myapp-api
</code></pre>
<p>I've installed Prometheus using <code>kube-prometheus</code>, and added a <code>ServiceMonitor</code> object:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: myapp-api
labels:
app: myapp-api
spec:
selector:
matchLabels:
app: myapp-api
endpoints:
- port: myapp-api
path: /api/metrics
interval: 10s
</code></pre>
<p>Prometheus discovers all the pods running instances of the API, and I can query those metrics from the Prometheus graph. So far so good.</p>
<p>The issue is, those metrics are aggregate - each API instance/pod doesn't have its own queue, so there's no reason to collect those values from every instance. In fact it seems to invite confusion - if Prometheus collects the same value from 10 pods, it looks like the total value is 10x what it really is, unless you know to apply something like <code>avg</code>.</p>
<p>Is there a way to either tell Prometheus "this value is already aggregate and should always be presented as such" or better yet, tell Prometheus to just scrape the values once via the internal load balancer for that service, rather than hitting each pod?</p>
<p><strong>edit</strong></p>
<p>The actual API is just a simple <code>Deployment</code> object:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-api
labels:
app: myapp-api
spec:
replicas: 2
selector:
matchLabels:
app: myapp-api
template:
metadata:
labels:
app: myapp-api
spec:
imagePullSecrets:
- name: mysecret
containers:
- name: myapp-api
image: myregistry/myapp:2.0
ports:
- containerPort: 80
volumeMounts:
- name: config
mountPath: "app/config.yaml"
subPath: config.yaml
volumes:
- name: config
configMap:
name: myapp-api-config
</code></pre>
| superstator | <p>In your case, to avoid metrics aggregation you can use, as already mentioned in your post, the <code>avg()</code> operator, or a <a href="https://github.com/coreos/prometheus-operator/blob/fc6fde56c8daaa34aae2cdee77536a8437101be7/Documentation/design.md#podmonitor" rel="nofollow noreferrer">PodMonitor</a> instead of a <code>ServiceMonitor</code>.</p>
<blockquote>
<p>The <code>PodMonitor</code> custom resource definition (CRD) allows to
declaratively define how a dynamic set of pods should be monitored.
Which pods are selected to be monitored with the desired configuration
is defined using label selections.</p>
</blockquote>
<p>This way it will scrape the metrics from the specified pod only. </p>
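<p>A minimal <code>PodMonitor</code> sketch for your setup (labels assumed from your manifests; note that <code>port</code> here refers to a <em>named</em> container port, so the <code>containerPort: 80</code> in your Deployment would need a name such as <code>myapp-api</code>):</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  selector:
    matchLabels:
      app: myapp-api
  podMetricsEndpoints:
  - port: myapp-api        # name of the container port in the pod spec
    path: /api/metrics
    interval: 10s
</code></pre>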
| kool |
<p>I created the kops cluster with 1 master node & 2 worker nodes. Also I created the pod & it's running successfully. But how do I deploy this pod in the kops cluster nodes? Please help.</p>
<p>--EDIT--
The "kops get cluster" command returns that, the cluster is on. And the "kubectl get pod" returns that the pod is created and the status is "Running".</p>
<p>I tried creating the cluster with the --image tag. But since its a private repository, it caused error. If that's the way, can you please suggest a way to create the cluster with private repository image.</p>
| Rinsheed | <blockquote>
<p>how do I deploy this pod in the kops cluster nodes? </p>
</blockquote>
<p>When you deployed your pod, it was scheduled on one of the worker nodes; you can check which one with </p>
<p><code>kubectl get pods -o wide</code></p>
<p>Worth to take a look at kubernetes documentation about <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/" rel="nofollow noreferrer">Viewing Pods and Nodes</a></p>
<hr>
<blockquote>
<p>If that's the way, can you please suggest a way to create the cluster with private repository image.</p>
</blockquote>
<p>Regarding private repositories, take a look at this <a href="https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry" rel="nofollow noreferrer">kubernetes documentation</a>.</p>
<p>I would recommend using a <a href="https://docs.docker.com/registry/" rel="nofollow noreferrer">docker registry</a>; if you know <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a>, there is a <a href="https://github.com/helm/charts/tree/master/stable/docker-registry" rel="nofollow noreferrer">helm chart</a> for it.</p>
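<p>As a rough sketch (the registry address, credentials and names below are placeholders, not values from your setup), pulling from a private registry usually comes down to creating a pull secret and referencing it from the pod spec:</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-host> \
  --docker-username=<user> \
  --docker-password=<password>
</code></pre>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: <your-registry-host>/your-app:latest   # image from the private registry
  imagePullSecrets:
  - name: regcred
</code></pre>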
| Jakub |
<p>I wanted to test a very basic application for NATS-streaming on Kubernetes. To do so, <a href="https://docs.nats.io/nats-on-kubernetes/minimal-setup" rel="nofollow noreferrer">I followed the commands from the official NATS-docs</a>.</p>
<p>It basically comes down to running</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-server/single-server-nats.yml
kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-streaming-server/single-server-stan.yml
</code></pre>
<p>in a terminal with access to the cluster (it's a <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">kind</a>-cluster in my case).</p>
<p>I used <a href="https://github.com/nats-io/stan.go" rel="nofollow noreferrer"><code>stan.go</code></a> as the NATS-streaming-client. Here is the code I tried to connect to the NATS-streaming-server:</p>
<pre class="lang-golang prettyprint-override"><code>package main
import stan "github.com/nats-io/stan.go"
func main() {
sc, err := stan.Connect("stan", "test-client")
if err != nil {
panic(err)
}
if err := sc.Publish("test-subject", []byte("This is a test-message!")); err != nil {
panic(err)
}
}
</code></pre>
<p>and this is the error I'm getting:</p>
<pre><code>panic: nats: no servers available for connection
goroutine 1 [running]:
main.main()
/Users/thilt/tmp/main.go:9 +0x15d
exit status 2
</code></pre>
<p>so I think another name was used for the cluster or something. If I use the provided example with <code>nats-box</code> from the docs.nats-link above, it also doesn't work! Where did I go wrong here?</p>
<p>I will happily provide more information, if needed.</p>
| Tim Hilt | <p>There is a <a href="https://github.com/nats-io/stan.go/blob/master/examples/stan-pub/main.go#L82-L93" rel="nofollow noreferrer">great example in stan.go docs</a>:</p>
<pre><code>// Connect to NATS
nc, err := nats.Connect(URL, opts...)
if err != nil {
log.Fatal(err)
}
defer nc.Close()
sc, err := stan.Connect(clusterID, clientID, stan.NatsConn(nc))
if err != nil {
log.Fatalf("Can't connect: %v.\nMake sure a NATS Streaming Server is running at: %s", err, URL)
}
defer sc.Close()
</code></pre>
<p>Your error happens because by default stan connects to localhost address (<a href="https://github.com/nats-io/stan.go/blob/910e9bca44c8caf96e3e479874a589bf6c36e817/stan.go#L33" rel="nofollow noreferrer">source code</a>):</p>
<pre><code>// DefaultNatsURL is the default URL the client connects to
DefaultNatsURL = "nats://127.0.0.1:4222"
</code></pre>
<p>Notice that the example provided above overwrites this default connection.</p>
<p>The stan source code is short and easy to analyze. I really recommend that you try to analyze it and figure out what it does.</p>
<hr />
<p>Now let's put it all together; here is a working example:</p>
<pre><code>package main
import (
nats "github.com/nats-io/nats.go"
stan "github.com/nats-io/stan.go"
)
func main() {
// Create a NATS connection
nc, err := nats.Connect("nats://nats:4222")
if err != nil {
panic(err)
}
// Then pass it to the stan.Connect() call.
sc, err := stan.Connect("stan", "me", stan.NatsConn(nc))
if err != nil {
panic(err)
}
if err := sc.Publish("test-subject", []byte("This is a test-message!")); err != nil {
panic(err)
}
}
</code></pre>
| Matt |
<p>I have an app set up which can be contacted via the service-IP, but not using the Ingress Rule.</p>
<p>Consider the following Ingress Manifest:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myapp
namespace: default
annotations:
kubernetes.io/ingress.class: public
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /$0
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
tls:
- secretName: letsencrypt-prod
hosts:
- my.host.net
rules:
- host: my.host.net
http:
paths:
- path: /myapp(/|$)(.*)
pathType: Prefix
backend:
service:
name: myapp
port:
number: 1234
</code></pre>
<p>The relevant service is up and running. Doing a <code>curl 10.152.183.91/myapp/path/</code> with <code>10.152.183.91</code> being the service IP gives the desired result.</p>
<p>When I go through <code>curl my.host.net/myapp/path/</code> however, I get a <code>308 Permanent Redirect</code>. Other apps on the same cluster are running as expected, so the cluster itself as well as nginx-ingress and CoreDNS are doing their job.</p>
<p>Where did I go wrong? Is the <code>nginx.ingress.kubernetes.io/rewrite-target: /$0</code> wrong?</p>
| petwri | <p>First and foremost, you will need to change the:</p>
<ul>
<li>from: <code>nginx.ingress.kubernetes.io/rewrite-target: /$0</code></li>
<li>to: <code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code></li>
</ul>
<p>Explanation:</p>
<blockquote>
<h2><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">Rewrite Target</a></h2>
<h3>Note</h3>
<p><a href="https://www.regular-expressions.info/refcapture.html" rel="nofollow noreferrer">Captured groups</a> are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the <code>rewrite-target</code> annotation.</p>
<p>Create an Ingress rule with a rewrite annotation:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: rewrite
namespace: default
spec:
ingressClassName: nginx
rules:
- host: rewrite.bar.com
http:
paths:
- path: /something(/|$)(.*)
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
</code></pre>
<p>In this ingress definition, any characters captured by <code>(.*)</code> will be assigned to the placeholder <code>$2</code>, which is then used as a parameter in the <code>rewrite-target</code> annotation.</p>
<p>For example, the ingress definition above will result in the following rewrites:</p>
<ul>
<li><code>rewrite.bar.com/something</code> rewrites to <code>rewrite.bar.com/</code></li>
<li><code>rewrite.bar.com/something/</code> rewrites to <code>rewrite.bar.com/</code></li>
<li><code>rewrite.bar.com/something/new</code> rewrites to <code>rewrite.bar.com/new</code></li>
</ul>
<p>-- <em><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress-nginx: Examples: Rewrite</a></em></p>
</blockquote>
<hr />
<p>As for the <code>curl</code> part: by running exactly the same command as you did:</p>
<ul>
<li><code>curl my.host.net/myapp/path/</code></li>
</ul>
<p>You will receive a <code>308 Permanent Redirect</code>.</p>
<p>By default, <code>curl</code> does not follow the redirect. Try with <code>curl -L</code>.</p>
<p>Example with setup similar to yours:</p>
<pre class="lang-bash prettyprint-override"><code>❯ curl kruklabs.internal/myapp/
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>
❯ curl kruklabs.internal/myapp/ -L -k
Here is the nginx container!
</code></pre>
<blockquote>
<p>A side note!</p>
<p>You could also try to contact <code>Ingress</code> directly with <code>HTTPS</code>.</p>
</blockquote>
| Dawid Kruk |
<p>I am trying to set an environment variable with the IP address at which Kong is accessible that will be used to actually send requests into the Kubernetes cluster. Installed Kong Kubernetes Ingress Controller Without Database. But always getting this error. Can anyone please help? Thanks in advance.</p>
<p><a href="https://i.stack.imgur.com/nblAW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nblAW.png" alt="enter image description here"></a></p>
| DevSay | <p>It's because you're using Linux commands in Powershell.
To export PROXY_IP you can use:</p>
<p><code>$PROXY_IP=(minikube service -n kong kong-proxy --url | select -first 1)</code></p>
<p><code>select -first 1</code> behaves the same as <code>head -1</code>.</p>
<p>Then to set it as environment variable you can use:</p>
<p><code>[Environment]::SetEnvironmentVariable("PROXY_IP", "$PROXY_IP")</code></p>
| kool |
<p>I'm playing around with GitOps and ArgoCD in Redhat Openshift. My goal is to switch a worker node to an infra node.</p>
<p>I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)</p>
<p>In order to make the node an infra node, I want to add a label "infra" and remove the label "worker" from it. Before, the object looks like this (irrelevant labels omitted): </p>
<pre><code>apiVersion: v1
kind: Node
metadata:
labels:
node-role.kubernetes.io/infra: ""
name: node6.example.com
spec: {}
</code></pre>
<p>After applying a YAML File, it's supposed to look like that: </p>
<pre><code>apiVersion: v1
kind: Node
metadata:
labels:
node-role.kubernetes.io/worker: ""
name: node6.example.com
spec: {}
</code></pre>
<p>If I put the latter config in a file and do "kubectl apply -f ", the node has both the infra and worker labels. So adding a label or changing the value of a label is easy, but is there a way to remove a label from an object's metadata by applying a YAML file?</p>
| tvitt | <p>I would say it's not possible to do with <code>kubectl apply</code>; at least I tried and couldn't find any information about that.</p>
<p>As @Petr Kotas mentioned you can always use </p>
<pre><code>kubectl label node node6.example.com node-role.kubernetes.io/infra-
</code></pre>
<p>But I see you're looking for something else</p>
<blockquote>
<p>I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)</p>
</blockquote>
<hr>
<p>So maybe the answer could be to use API clients, for example <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">python</a>? I have found this example <a href="https://stackoverflow.com/a/54168783/11977760">here</a>, made by @Prafull Ladha</p>
<blockquote>
<p>As already mentioned, correct kubectl example to delete label, but there is no mention of removing labels using API clients. if you want to remove label using the API, then you need to provide a new body with the <code>labelname: None</code> and then patch that body to the node or pod. I am using the kubernetes python client API for example purpose</p>
</blockquote>
<pre><code>from pprint import pprint
from kubernetes import client, config
config.load_kube_config()
client.configuration.debug = True
api_instance = client.CoreV1Api()
body = {
"metadata": {
"labels": {
"label-name": None}
}
}
api_response = api_instance.patch_node("minikube", body)
print(api_response)
</code></pre>
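<p>If the API-client route feels too heavy, the same "set the label to null" trick should also work from the command line as a merge patch, for example:</p>
<pre><code>kubectl patch node node6.example.com -p '{"metadata":{"labels":{"node-role.kubernetes.io/infra":null}}}'
</code></pre>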
| Jakub |
<p>I have a working Ambassador and a working Istio and I use the default Jaeger tracer in Istio which works fine.</p>
<p>Now I would like to make Ambassador report trace data to Istio's Jaeger.</p>
<p>Ambassador documentation suggests that Jaeger is supported with the Zipkin driver, but gives example only for usage with Zipkin.</p>
<p><a href="https://www.getambassador.io/user-guide/with-istio/#tracing-integration" rel="nofollow noreferrer">https://www.getambassador.io/user-guide/with-istio/#tracing-integration</a></p>
<p>So I checked the ports of jaeger-collector service, and picked the http: jaeger-collector-http 14268/TCP</p>
<pre><code>kubectl describe svc jaeger-collector -n istio-system
</code></pre>
<p>And modified the TracingService shown in the Ambassador docs:</p>
<pre><code>apiVersion: getambassador.io/v2
kind: TracingService
metadata:
name: tracing
namespace: {{ .Values.namespace }}
spec:
#service: "zipkin.istio-system:9411"
service: "jaeger-collector.istio-system:14268"
driver: zipkin
ambassador_id: ambassador-{{ .Values.namespace }}
config: {}
</code></pre>
<p>But I cannot see trace data from Ambassador in Jaeger.</p>
<p>Does anyone have any experience on this topic?</p>
| Donato Szilagyi | <p>The answer here is to install istio with <code>--set values.global.tracer.zipkin.address</code> as provided in <a href="https://istio.io/docs/tasks/observability/distributed-tracing/jaeger/#before-you-begin" rel="nofollow noreferrer">istio documentation</a></p>
<pre><code>istioctl manifest apply --set values.global.tracer.zipkin.address=<jaeger-collector-service>.<jaeger-collector-namespace>:9411
</code></pre>
<hr>
<p><strong>And</strong></p>
<hr>
<p>Use the original TracingService setting <code>service: "zipkin.istio-system:9411"</code>, as Donato Szilagyi confirmed in the comments.</p>
<pre><code>apiVersion: getambassador.io/v2
kind: TracingService
metadata:
name: tracing
namespace: {{ .Values.namespace }}
spec:
service: "zipkin.istio-system:9411"
driver: zipkin
ambassador_id: ambassador-{{ .Values.namespace }}
config: {}
</code></pre>
<blockquote>
<p>Great! It works. And this time I used the original TracingService setting: service: "zipkin.istio-system:9411" – Donato Szilagyi</p>
</blockquote>
| Jakub |
<p>I am deploying my application in kubernetes using helm chart with 2 sub-charts <code>app</code> and <code>test</code>.</p>
<p>I have the pod of <code>app</code> chart properly running.
But <code>test</code> pod will be running only if it can properly authenticate to <code>app</code> container.</p>
<p>That means, i have to generate an <code>auth_token</code> using a curl request to <code>app</code> service and then add that token as Environment variable <code>AUTH_TOKEN</code> for <code>test</code> container.</p>
<p>I tried different ways to achieve this:</p>
<ul>
<li><p>Added an init-container <code>generate-token</code> for <code>test</code> pod, that will generate the token and will save it in a shared volume. And <code>test</code> container will have access to that volume. But the problem here is, the <code>test</code> container doesn't have a code to set env for the container by reading from the shared volume.</p>
</li>
<li><p>Added a sidecar-container <code>sidecar-generate-token</code> instead of an init-container for the same setup as mentioned above. Here also problem is, the <code>test</code> container doesn't have a code to set env for the container by reading from the shared volume. And also, the test pod got into a crashloopbackoff state. If you check the content of volume by getting into the container, there are multiple tokens in the volume file which are generated on each pod restart of crashloopbackoff.</p>
</li>
<li><p>Third plan was that an init-container <code>generate-token</code> should create a kubernetes secret in the cluster, after generating the auth_token. Then the main container <code>test</code> can set Environment variable from that secret. For that, the init container <code>generate-token</code> should have a kubectl setup in it first.</p>
</li>
</ul>
<p>If i am proceeding with the third plan, How can i setup and use kubectl from init-container to generate secret in the cluster?</p>
<p>Is there any other alternative plan to achieve this goal?</p>
<p><strong>EDIT:</strong></p>
<hr />
<p>This is the yaml part for the first option:</p>
<pre><code> initContainers:
- name: generate-service-token
image: app.mycr.io/alpine-network-troubleshooting:dev-latest
command:
- /bin/sh
- -c
- |
BEARER_TOKEN=$(curl -k -X POST -H "Content-Type:application/json" --data '{"user":"dynizer","password":"xxxx"}' "https://app:50051/api/v2/login" | jq -r '.jwt')
SERVICE_TOKEN=$(curl -k -X GET -H 'Accept: application/json' -H "Authorization: Bearer ${BEARER_TOKEN}" "https://app:50051/api/v2/servicetoken/issue" | jq -r '.token')
echo $SERVICE_TOKEN
mkdir -p /vol
touch /vol/token.txt
echo $SERVICE_TOKEN >> /vol/token.txt
volumeMounts:
- mountPath: /vol
name: token-vol
containers:
- name: nginx-container
image: nginx
volumeMounts:
- name: token-vol
mountPath: /vol
volumes:
- name: token-vol
emptyDir: {}
</code></pre>
| AnjK | <p>Trying to answer your question:</p>
<blockquote>
<p>But still the same problem of container not having the code to set env by reading from the shared volume, will be there.</p>
</blockquote>
<p>Let's try to read this env from the other container. Here is what I have come up with.</p>
<p>First you need to know what command your container is running. In case of nginx that is <code>/docker-entrypoint.sh nginx -g "daemon off;"</code> (<a href="https://github.com/nginxinc/docker-nginx/blob/464886ab21ebe4b036ceb36d7557bf491f6d9320/mainline/debian/Dockerfile#L110-L116" rel="nofollow noreferrer">source code</a>)</p>
<p>Then you use the <code>command</code> field, where you read the token value from the file, use <code>env</code> to set it, and run the actual application.</p>
<p>Example:</p>
<pre><code> initContainers:
- name: generate-service-token
image: app.mycr.io/alpine-network-troubleshooting:dev-latest
command:
- /bin/sh
- -c
- |
BEARER_TOKEN=$(curl -k -X POST -H "Content-Type:application/json" --data '{"user":"dynizer","password":"xxxx"}' "https://app:50051/api/v2/login" | jq -r '.jwt')
SERVICE_TOKEN=$(curl -k -X GET -H 'Accept: application/json' -H "Authorization: Bearer ${BEARER_TOKEN}" "https://app:50051/api/v2/servicetoken/issue" | jq -r '.token')
echo $SERVICE_TOKEN
mkdir -p /vol
touch /vol/token.txt
echo $SERVICE_TOKEN >> /vol/token.txt
volumeMounts:
- mountPath: /vol
name: token-vol
containers:
- name: nginx-container
image: nginx
command:
- sh
- -c
- exec env SERVICE_TOKEN=$(cat /vol/token.txt) /docker-entrypoint.sh nginx -g "daemon off;"
volumeMounts:
- name: token-vol
mountPath: /vol
volumes:
- name: token-vol
emptyDir: {}
</code></pre>
<p>More general example:</p>
<pre><code> command:
- sh
- -c
- exec env SERVICE_TOKEN=$(cat /vol/token.txt) <<any command>>
</code></pre>
<p>I am not sure if this is the best example, but I hope that it at least gives you an idea of how you can approach this problem.</p>
| Matt |
<p>I'm trying to get mTLS between two applications in two kubernetes clusters without the way Istio does it (with its ingress gateway), and I was wondering if the following would work (for Istio, for Likerd, for Consul...).</p>
<p>Let's say we have a k8s cluster A with an app A.A. and a cluster B with an app B.B. and I want them to communicate with mTLS.</p>
<ul>
<li>Cluster A has letsEncrypt cert for nginx ingress controller, and a mesh (whatever) for its application.</li>
<li>Cluster B has self signed cert from our root CA.</li>
<li>Cluster A and B service meshes have different certificates signed by our root CA.</li>
<li>Traffic goes from the internet to Cluster A ingress controller (HTTPS), from there to app A.A. </li>
<li>After traffic gets to app A.A., this app wants to talk to app B.B.</li>
<li>Apps A.A. and B.B. have endpoints exposed via ingress (using their ingress controllers).</li>
<li>The TLS certificates are ended in the endpoints and are wildcards.</li>
</ul>
<p>Do you think the mTLS will work in this situation?</p>
| JGG | <p>Basically this <a href="https://www.portshift.io/blog/secure-multi-cluster-connectivity/?fbclid=IwAR22fqPDRaEdNdj8m7G2hNl2Y7S8lpxJDf5G8eBwnDAj3hnh4S1tcp8qBQk" rel="nofollow noreferrer">blog</a> from portshift answers your question.</p>
<p>The answer depends on how your clusters are built, because </p>
<blockquote>
<p>Istio offers a few options to deploy a service mesh across multiple kubernetes clusters; more about it <a href="https://stackoverflow.com/a/60149783/11977760">here</a>.</p>
</blockquote>
<p>So, if you have Single Mesh deployment </p>
<p><img src="https://www.portshift.io/wp-content/uploads/2019/10/Istio-extension-post.png" alt="Istio extension post"></p>
<blockquote>
<p>You can deploy a single service mesh (control-plane) over a fully connected multi-cluster network, and all workloads can reach each other directly without an Istio gateway, regardless of the cluster on which they are running.</p>
</blockquote>
<hr>
<p><em>BUT</em></p>
<hr>
<p>If you have Multi Mesh Deployment</p>
<p><img src="https://www.portshift.io/wp-content/uploads/2019/10/Istio-diagram.png" alt="A multi service mesh deployment over multiple clusters"></p>
<blockquote>
<p>With a multi-mesh deployment you have a greater degree of isolation and availability, but it increases the set-up complexity. Meshes that are otherwise independent are loosely coupled together using ServiceEntries, Ingress Gateway and use a common root CA as a base for secure communication. From a networking standpoint, the only requirement is that the ingress gateways be reachable from one another. Every service in a given mesh that needs to be accessed a service in a different mesh requires a ServiceEntry configuration in the remote mesh.</p>
</blockquote>
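<p>To make the ServiceEntry part a bit more concrete, here is a hedged sketch only (the hostname, port and gateway address are placeholders; the exact values depend on how the remote mesh is exposed) of how cluster A could be told about a service living in cluster B:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: app-bb-in-cluster-b        # hypothetical name
spec:
  hosts:
  - b-b.cluster-b.global           # placeholder host that app A.A. would call
  location: MESH_INTERNAL
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  endpoints:
  - address: cluster-b-gateway.example.com   # reachable address of cluster B's edge
    ports:
      tls: 443
</code></pre>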
<hr>
<blockquote>
<p>In multi-mesh deployments security can become complicated as the environment grows and diversifies. There are security challenges in authenticating and authorizing services between the clusters. The local <a href="https://istio.io/docs/reference/config/policy-and-telemetry/" rel="nofollow noreferrer">Mixer</a> (services policies and telemetries) needs to be updated with the attributes of the services in the neighbouring clusters. Otherwise, it will not be able to authorize these services when they reaching its cluster. To achieve this, each Mixer needs to be aware of the workload identities, and their attributes, in neighbouring clusters Each <a href="https://istio.io/docs/concepts/security/" rel="nofollow noreferrer">Citadel</a> needs to be updated with the certificates of neighbouring clusters, to allow mTLS connections between clusters.</p>
<p>Federation of granular workloads identities (mTLS certificates) and service attributes across multi-mesh control-planes can be done in the following ways:</p>
<ul>
<li><em>Kubernetes Ingress:</em> exposing HTTP and HTTPS routes from outside the cluster to <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress can terminate SSL / TLS, and offer name based virtual hosting. Yet, it requires an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers" rel="nofollow noreferrer">Ingress controller</a> for fulfilling the Ingress rules</li>
<li><em>Service-mesh gateway:</em> The Istio service mesh offers a different configuration model, <a href="https://istio.io/docs/reference/config/networking/v1alpha3/gateway/" rel="nofollow noreferrer">Istio Gateway</a>. A gateway allows Istio features such as monitoring and route rules to be applied to traffic entering the cluster. An ingress g<a href="https://istio.io/docs/reference/config/networking/v1alpha3/gateway/" rel="nofollow noreferrer">ateway</a> describes a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports, protocols, etc. Traffic routing for ingress traffic is configured instead using Istio routing rules, exactly the same way as for internal service requests.</li>
</ul>
</blockquote>
<hr>
<blockquote>
<p>Do you think the mTLS will work in this situation?</p>
</blockquote>
<p>Based on the above information:</p>
<ul>
<li><p>If you have Single Mesh Deployment</p>
<p>It should be possible without any problems.</p></li>
<li><p>If you have Multi Mesh Deployment</p>
<p>It should work, but since you don't want to use the istio gateway, the only option left is <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes ingress</a>. </p></li>
</ul>
<p>I hope this answers your question. Let me know if you have any more questions.</p>
| Jakub |
<p>I am not able to install "Keda" with helm on AKS. I am getting the error below.</p>
<p>Any help is greatly appreciated. </p>
<pre><code>Error: unable to convert to CRD type: unable to convert unstructured object to apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition: cannot convert int64 to float64
</code></pre>
| Paramesh | <p>I reproduced your problem, and this is the solution.</p>
<p>You need to use </p>
<pre><code>helm fetch kedacore/keda-edge --devel
</code></pre>
<p>to download the keda files to your PC.</p>
<p>Unzip it</p>
<pre><code>tar -xvzf keda-edge-xxx.tgz
</code></pre>
<p>Then you need to change the <a href="https://github.com/helm/helm/blob/master/docs/charts_hooks.md#hooks" rel="nofollow noreferrer">hook</a> in scaledobject-crd.yaml:</p>
<pre><code>nano keda-edge/templates/scaledobject-crd.yaml
</code></pre>
<p><code>"helm.sh/hook": crd-install</code> needs to be changed to <code>"helm.sh/hook": pre-install</code>.</p>
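<p>If you prefer not to edit the file by hand, the same change can be made with a one-liner like this (path as unpacked above):</p>
<pre><code>sed -i 's|"helm.sh/hook": crd-install|"helm.sh/hook": pre-install|' keda-edge/templates/scaledobject-crd.yaml
</code></pre>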
<p>And install it with helm:</p>
<pre><code>helm install ./keda-edge --name keda
NAME: keda
LAST DEPLOYED: Mon Sep 30 12:13:14 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ClusterRoleBinding
NAME AGE
hpa-controller-custom-metrics 1s
keda-keda-edge 1s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
keda-keda-edge 0/1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
keda-keda-edge-6b55bf7674-j5kgc 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
keda-keda-edge ClusterIP 10.110.59.143 <none> 443/TCP,80/TCP 1s
==> v1/ServiceAccount
NAME SECRETS AGE
keda-serviceaccount 1 1s
==> v1beta1/APIService
NAME AGE
v1beta1.external.metrics.k8s.io 0s
</code></pre>
| Jakub |
<p>I'm deploying a device plugin for FPGAs on a local kubernetes cluster. Essentially it is just a daemon set, so each node in the cluster (barring master nodes) will have one pod of this deployment.</p>
<p>The pods need to access the device trees of the hosts (nodes), they also need to access the kubelet socket. So I mount two specific directories from the hosts to the containers, as follows:</p>
<pre><code> containers:
- image: uofthprc/fpga-k8s-deviceplugin
name: fpga-device-plugin-ctr
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
- name: device-info
mountPath: /sys/firmware/devicetree/base
readOnly: true
volumes:
- name: device-plugin
hostPath:
path: /var/lib/kubelet/device-plugins
- name: device-info
hostPath:
path: /sys/firmware/devicetree/base
</code></pre>
<p>For some reason, the <code>/var/lib/kubelet/device-plugins</code> mounts fine, and is fully accessible from within the containers, while the <code>/sys/firmware/devicetree/base</code> is not!. The following is the output of attaching to one of the containers <code>kubectl exec -it fpga-device-plugin-ds-hr6s5 -n device-plugins -- /bin/sh</code>:</p>
<pre><code>/work # ls /var/lib/kubelet/device-plugins
DEPRECATION kubelet.sock kubelet_internal_checkpoint
/work # ls /sys/firmware/devicetree/base
ls: /sys/firmware/devicetree/base: No such file or directory
/work # ls /sys/firmware/devicetree
ls: /sys/firmware/devicetree: No such file or directory
/work # ls /sys/firmware
/work # ls /sys
block bus class dev devices firmware fs kernel module power
/work #
</code></pre>
<p>I'm not sure why this happens, but I tested this with Read Only permissions, Read Write permissions, and without the mount at all. In all three cases, there's nothing inside the path <code>/sys/firmware</code> in the containers. On the hosts, I'm 100% sure that the path <code>/sys/firmware/devicetree/base/</code> exists and includes the files I want.</p>
<p>Here's the output of <code>describe pods</code> on one of the containers:</p>
<pre><code>Name: fpga-device-plugin-ds-hr6s5
Namespace: device-plugins
Priority: 2000001000
Priority Class Name: system-node-critical
Node: mpsoc2/10.84.31.12
Start Time: Wed, 20 May 2020 22:56:25 -0400
Labels: controller-revision-hash=cfbc8976f
name=fpga-device-plugin-ds
pod-template-generation=1
Annotations: cni.projectcalico.org/podIP: 10.84.32.223/32
cni.projectcalico.org/podIPs: 10.84.32.223/32
Status: Running
IP: 10.84.32.223
IPs:
IP: 10.84.32.223
Controlled By: DaemonSet/fpga-device-plugin-ds
Containers:
fpga-device-plugin-ctr:
Container ID: docker://629ab2fd7d05bc17e6f566912b127eec421f214123309c10674c40ed2839d1cf
Image: uofthprc/fpga-k8s-deviceplugin
Image ID: docker-pullable://uofthprc/fpga-k8s-deviceplugin@sha256:06f9e46470219d5cfb2e6233b1473e9f1a2d3b76c9fd2d7866f7a18685b60ea3
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 20 May 2020 22:56:29 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/sys/firmware/devicetree/base from device-info (ro)
/var/lib/kubelet/device-plugins from device-plugin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dwbsm (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
device-plugin:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/device-plugins
HostPathType:
device-info:
Type: HostPath (bare host directory volume)
Path: /sys/firmware/devicetree/base
HostPathType:
default-token-dwbsm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dwbsm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned device-plugins/fpga-device-plugin-ds-hr6s5 to mpsoc2
Normal Pulling 23s kubelet, mpsoc2 Pulling image "uofthprc/fpga-k8s-deviceplugin"
Normal Pulled 23s kubelet, mpsoc2 Successfully pulled image "uofthprc/fpga-k8s-deviceplugin"
Normal Created 23s kubelet, mpsoc2 Created container fpga-device-plugin-ctr
Normal Started 22s kubelet, mpsoc2 Started container fpga-device-plugin-ctr
</code></pre>
<p>As far as I can see, no problems in it.</p>
<p>I'm using kubernetes (kubeadm installed, not microk8s) version 1.18.2 for the client and server. The nodes at question are ARM64 nodes with Ubuntu 16.04, using a 4.14.0 kernel. The containers are all alpine:3.11 with a simple binary copied inside them. I have no idea why the mount is not working, any help would certainly be appreciated.</p>
<h2>Edit1:</h2>
<p>The permissions of <code>/sys/firmware/devicetree/base/</code> on the hosts are as follows:</p>
<pre><code>savi@mpsoc10:~$ ls -alh /sys/firmware/devicetree/base/
total 0
drwxr-xr-x 36 root root 0 May 20 21:32 .
drwxr-xr-x 3 root root 0 May 20 21:32 ..
-r--r--r-- 1 root root 4 May 20 21:32 #address-cells
drwxr-xr-x 2 root root 0 May 20 21:32 aliases
drwxr-xr-x 56 root root 0 May 20 21:32 amba
drwxr-xr-x 3 root root 0 May 20 21:32 amba_apu@0
drwxr-xr-x 2 root root 0 May 20 21:32 aux_ref_clk
-r--r--r-- 1 root root 15 May 20 21:32 board
drwxr-xr-x 2 root root 0 May 20 21:32 chosen
drwxr-xr-x 2 root root 0 May 20 21:32 clk
-r--r--r-- 1 root root 12 May 20 21:32 compatible
drwxr-xr-x 6 root root 0 May 20 21:32 cpu_opp_table
drwxr-xr-x 7 root root 0 May 20 21:32 cpus
drwxr-xr-x 2 root root 0 May 20 21:32 dcc
drwxr-xr-x 2 root root 0 May 20 21:32 dp_aclk
drwxr-xr-x 2 root root 0 May 20 21:32 edac
drwxr-xr-x 2 root root 0 May 20 21:32 fclk0
drwxr-xr-x 2 root root 0 May 20 21:32 fclk1
drwxr-xr-x 2 root root 0 May 20 21:32 fclk2
drwxr-xr-x 2 root root 0 May 20 21:32 fclk3
drwxr-xr-x 3 root root 0 May 20 21:32 firmware
drwxr-xr-x 2 root root 0 May 20 21:32 fpga-full
drwxr-xr-x 2 root root 0 May 20 21:32 gt_crx_ref_clk
drwxr-xr-x 2 root root 0 May 20 21:32 mailbox@ff990400
drwxr-xr-x 2 root root 0 May 20 21:32 memory
-r--r--r-- 1 root root 1 May 20 21:32 name
drwxr-xr-x 3 root root 0 May 20 21:32 nvmem_firmware
drwxr-xr-x 2 root root 0 May 20 21:32 pcap
drwxr-xr-x 2 root root 0 May 20 21:32 pmu
drwxr-xr-x 31 root root 0 May 20 21:32 power-domains
drwxr-xr-x 2 root root 0 May 20 21:32 psci
drwxr-xr-x 2 root root 0 May 20 21:32 pss_alt_ref_clk
drwxr-xr-x 2 root root 0 May 20 21:32 pss_ref_clk
drwxr-xr-x 2 root root 0 May 20 21:32 reset-controller
drwxr-xr-x 2 root root 0 May 20 21:32 sha384
-r--r--r-- 1 root root 4 May 20 21:32 #size-cells
drwxr-xr-x 2 root root 0 May 20 21:32 __symbols__
drwxr-xr-x 2 root root 0 May 20 21:32 timer
-r--r--r-- 1 root root 10 May 20 21:32 vendor
drwxr-xr-x 2 root root 0 May 20 21:32 video_clk
drwxr-xr-x 2 root root 0 May 20 21:32 zynqmp-power
drwxr-xr-x 2 root root 0 May 20 21:32 zynqmp_rsa
</code></pre>
<p>some of the files inside it are read only, which is what prompted me to use read only permissions for the volume mount in the first place.</p>
<p>The following is the permissions of <code>/sys</code> and <code>/sys/firmware</code> on the containers:</p>
<pre><code>/work # ls -alh /sys/
total 4K
dr-xr-xr-x 12 root root 0 May 21 02:56 .
drwxr-xr-x 1 root root 4.0K May 21 02:56 ..
drwxr-xr-x 2 root root 0 May 21 03:08 block
drwxr-xr-x 32 root root 0 May 21 03:08 bus
drwxr-xr-x 64 root root 0 May 21 03:08 class
drwxr-xr-x 4 root root 0 May 21 03:08 dev
drwxr-xr-x 9 root root 0 May 21 03:08 devices
drwxrwxrwt 2 root root 40 May 21 02:56 firmware
drwxr-xr-x 10 root root 0 May 21 02:56 fs
drwxr-xr-x 7 root root 0 May 21 02:56 kernel
drwxr-xr-x 156 root root 0 May 21 03:08 module
drwxr-xr-x 2 root root 0 May 21 03:08 power
/work # ls -alh /sys/firmware/
total 0
drwxrwxrwt 2 root root 40 May 21 02:56 .
dr-xr-xr-x 12 root root 0 May 21 02:56 ..
</code></pre>
<h2>EDIT2:</h2>
<p>The output of <code>mount | grep sysfs</code> on containers is:</p>
<pre><code>/work # mount | grep sysfs
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys/firmware/devicetree/base type sysfs (ro,relatime)
</code></pre>
| mewais | <p><code>sysfs</code> is <a href="https://github.com/moby/moby/issues/24000" rel="nofollow noreferrer">mounted as read-only</a>:</p>
<pre><code>mount | grep sysfs
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
</code></pre>
<p>And that's why the volume is not shown in the pod. You can change it to a writable volume by using an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> and running both containers as <strong>privileged</strong>. If run without <code>privileged: true</code>, the access will not be modified and the volume won't be mounted in the pod:</p>
<pre><code>initContainers:
- name: mount
image: nginx:alpine
command: ["/bin/sh", "-c", "mount -o remount,rw '/sys'"]
securityContext:
privileged: true
</code></pre>
<p>This way it will change <code>/sys</code> to writable:</p>
<pre><code>mount | grep sysfs
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys/firmware/ type sysfs (rw,relatime)
</code></pre>
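<p>For completeness, a hedged sketch of what the main container side could look like (same image and volume as in your manifest; the addition is the <code>securityContext</code>, since per the approach above both containers need to run privileged):</p>
<pre><code>  containers:
  - name: fpga-device-plugin-ctr
    image: uofthprc/fpga-k8s-deviceplugin
    securityContext:
      privileged: true
    volumeMounts:
    - name: device-info
      mountPath: /sys/firmware/devicetree/base
      readOnly: true
</code></pre>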
| kool |
<p>I deployed a kubernetes cluster on Google Cloud using VMs and <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/setting-up-your-first-cluster.md" rel="nofollow noreferrer">Kubespray</a>.</p>
<p>Right now, I am looking to expose a simple node app on an external IP using a <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#gcloud" rel="nofollow noreferrer">loadbalancer</a>, but assigning my external IP from gcloud to the service does not work. It stays in pending state when I query <code>kubectl get services</code>.</p>
<p>According to <a href="https://stackoverflow.com/questions/44110876/kubernetes-service-external-ip-pending">this</a>, kubespray does not have any loadbalancer mechanism included/integrated by default. How should I proceed?</p>
| bcan | <p>Let me start off by summarizing the problem we are trying to solve here.</p>
<p>The problem is that you have a self-hosted kubernetes cluster and you want to be able to create a service of type=LoadBalancer and have k8s create a LB for you with an external IP, in a fully automated way, just like it would if you used GKE (a kubernetes-as-a-service solution).</p>
<p>Additionally, I have to mention that I don't know much about kubespray, so I will only describe all the steps that need to be done to make it work, and leave the rest to you. So if you want to make changes in the kubespray code, it's on you.
I did all my tests on a kubeadm cluster, but it should not be very difficult to apply them to kubespray.</p>
<hr />
<p>I will start off by summarizing all that has to be done into 4 steps:</p>
<ol>
<li>tagging the instances</li>
<li>enabling cloud-provider functionality</li>
<li>IAM and service accounts</li>
<li>additional info</li>
</ol>
<hr />
<p><strong>Tagging the instances</strong>
All worker node instances on GCP have to be tagged with a unique tag that is the name of the instance; these tags are later used to create firewall rules and target lists for the LB. So let's say that you have an instance called <strong>worker-0</strong>; you need to tag that instance with the tag <code>worker-0</code>.</p>
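<p>A hedged example of adding such a tag with <code>gcloud</code> (the zone is a placeholder):</p>
<pre><code>gcloud compute instances add-tags worker-0 --tags=worker-0 --zone=us-central1-a
</code></pre>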
<p>Otherwise it will result in an error (that can be found in controller-manager logs):</p>
<pre><code>Error syncing load balancer: failed to ensure load balancer: no node tags supplied and also failed to parse the given lists of hosts for tags. Abort creating firewall rule
</code></pre>
<hr />
<p><strong>Enabling cloud-provider functionality</strong>
K8s has to be informed that it is running in the cloud and which cloud provider it is, so that it knows how to talk to the API.</p>
<p>Controller manager logs informing you that it won't create an LB:</p>
<pre><code>WARNING: no cloud provider provided, services of type LoadBalancer will fail
</code></pre>
<p>The Controller Manager is responsible for the creation of a LoadBalancer. It can be passed a flag <code>--cloud-provider</code>. You can manually add this flag to the controller manager pod manifest file; or, like in your case since you are running kubespray, you can add this flag somewhere in the kubespray code (maybe it's already automated and just requires you to set some env variable or similar, but you need to find that out yourself).</p>
<p>Here is what this file looks like with the flag:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
...
- --cloud-provider=gce # <----- HERE
</code></pre>
<p>As you can see, the value in our case is <code>gce</code>, which stands for Google Compute Engine. It informs k8s that it's running on GCE/GCP.</p>
<hr />
<p><strong>IAM and service accounts</strong>
Now that you have your provider enabled, and tags covered, I will talk about IAM and permissions.</p>
<p>For k8s to be able to create a LB in GCE, it needs to be allowed to do so. Every GCE instance has a default service account assigned. The Controller Manager uses the instance's service account, stored within the <a href="https://cloud.google.com/compute/docs/storing-retrieving-metadata" rel="nofollow noreferrer">instance metadata</a>, to access the GCP API.</p>
<p>For this to happen you need to set Access Scopes for the GCE instance (the master node; the one where the controller manager is running) so it can use the Compute Engine API.</p>
<blockquote>
<p>Access scopes -> Set access for each API -> compute engine=Read Write</p>
</blockquote>
<p>To do this the instance has to be stopped, so stop the instance now. It's better to set these scopes during instance creation so that you don't need to take any unnecessary steps.</p>
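<p>If the instance already exists, something along these lines should work once it is stopped (the instance name and zone are placeholders; <code>compute-rw</code> is the read-write Compute Engine scope):</p>
<pre><code>gcloud compute instances stop master-0 --zone=us-central1-a
gcloud compute instances set-service-account master-0 --zone=us-central1-a --scopes=compute-rw
gcloud compute instances start master-0 --zone=us-central1-a
</code></pre>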
<p>You also need to go to the IAM & Admin page in the GCP Console and add permissions so that the master instance's service account has the <code>Kubernetes Engine Service Agent</code> role assigned. This is a predefined role that has many more permissions than you probably need, but I have found that everything works with this role, so I decided to use it for demonstration purposes; you probably want to follow the <em>least privilege rule</em> instead.</p>
<hr />
<p><strong>Additional info</strong>
There is one more thing I need to mention. It does not impact you, but while testing I found out an interesting thing.</p>
<p>At first I created a one-node cluster (a single master node). Even though this is allowed from the k8s point of view, the controller manager would not allow me to create a LB and point it to the master node where my application was running. The conclusion is that one cannot use a LB with only a master node and has to create at least one worker node.</p>
<hr />
<p>PS
I had to figure it out the hard way; by looking at logs, changing things and looking at logs again to see if the issue got solved. I didn't find a single article/documentation page where it is documented in one place. If you manage to solve it for yourself, write the answer for others. Thank you.</p>
| Matt |
<p>I am trying to deploy service using helm chart on kubernetes cluster. It is throwing error as </p>
<blockquote>
<p>"Error: Non-absolute URLs should be in form of
repo_name/path_to_chart, got: guestbook"</p>
</blockquote>
<p>Here is the guestbook service that i am deploying <a href="https://github.com/phcollignon/helm/tree/master/lab5_helm_chart_version1/" rel="nofollow noreferrer">https://github.com/phcollignon/helm/tree/master/lab5_helm_chart_version1/</a></p>
<p>provider.helm v2.14.3</p>
<p>provider.kubernetes v1.16</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<pre><code>$ helm install guestbook
Error: failed to download "guestbook" (hint: running `helm repo update` may help)
</code></pre>
<pre><code>$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
</code></pre>
<pre><code>$ helm install guestbook --debug
[debug] Created tunnel using local port: '39069'
[debug] SERVER: "127.0.0.1:39069"
[debug] Original chart version: ""
Error: Non-absolute URLs should be in form of repo_name/path_to_chart, got: guestbook
</code></pre>
| Debiprasanna Mallia | <p>There are five different ways you can express the chart you want to install:</p>
<ol>
<li>By chart reference: <code>helm install stable/mariadb</code></li>
<li>By path to a packaged chart: <code>helm install ./nginx-1.2.3.tgz</code></li>
<li>By path to an unpacked chart directory: <code>helm install ./nginx</code></li>
<li>By absolute URL: <code>helm install https://example.com/charts/nginx-1.2.3.tgz</code></li>
<li>By chart reference and repo url: <code>helm install --repo https://example.com/charts/ nginx</code></li>
</ol>
<p>Here is an example using option number <strong>3</strong>:</p>
<p>Download github repository using this command:</p>
<pre><code>git clone https://github.com/phcollignon/helm
</code></pre>
<p>Then go to the <strong>lab5_helm_chart_version1</strong> directory:</p>
<pre><code>cd helm/lab5_helm_chart_version1
</code></pre>
<p>And simply use helm install to create guestbook</p>
<pre><code>helm install chart/guestbook/ --name guestbook
</code></pre>
| Jakub |
<p>I have <code>minikube</code> and <code>kubectl</code> installed:</p>
<pre><code>$ minikube version
minikube version: v1.4.0
commit: 7969c25a98a018b94ea87d949350f3271e9d64b6
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I have then followed the instructions from <a href="https://helm.sh/docs/using_helm/" rel="noreferrer">https://helm.sh/docs/using_helm/</a>:</p>
<ol>
<li><p>I have downloaded <a href="https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz" rel="noreferrer">https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz</a></p></li>
<li><p>I have run</p></li>
</ol>
<pre><code>$ tar -xzvf Downloads/helm-v2.13.1-linux-amd64.tar.gz linux-amd64/
linux-amd64/LICENSE
linux-amd64/tiller
linux-amd64/helm
linux-amd64/README.md
</code></pre>
<p>But now, if I check my <code>helm</code> version, I get this:</p>
<pre><code>$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find tiller
</code></pre>
<p>I have tried running <code>helm init</code>, but get the following:</p>
<pre><code>$ helm init
$HELM_HOME has been configured at /home/SERILOCAL/<my-username>/.helm.
Error: error installing: the server could not find the requested resource
</code></pre>
<p>How can I get <code>helm</code> to initialise correctly?</p>
| EuRBamarth | <p>The current helm version does not work with kubernetes version 1.16.0 </p>
<p>You can downgrade kubernetes to version 1.15.3 </p>
<pre><code>minikube start --kubernetes-version 1.15.3
helm init
</code></pre>
<p>or use my solution to fix it at version 1.16.0</p>
<p>You have to create tiller <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer"><strong>Service Account</strong></a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="noreferrer"><strong>ClusterRoleBinding</strong></a>.</p>
<p>You can simply do that by using those commands:</p>
<pre><code>kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
</code></pre>
<p>And simply create tiller </p>
<pre><code>helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
</code></pre>
| Jakub |
<p>Environment information:</p>
<pre><code>Computer detail: One master node and four slave nodes. All are CentOS Linux release 7.8.2003 (Core).
Kubernetes version: v1.18.0.
Zero to JupyterHub version: 0.9.0.
Helm version: v2.11.0
</code></pre>
<p>I recently try to deploy an online code environment(like Google Colab) in new lab servers via Zero to JupyterHub. Unfortunately, I failed to deploy Persistent Volume(PV) for JupyterHub and I got a failure message such below:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x27 over 35m) default-scheduler running "VolumeBinding" filter plugin for pod "hub-7b9cbbcf59-747jl": pod has unbound immediate PersistentVolumeClaims
</code></pre>
<p>I followed the installing process by the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub/setup-jupyterhub.html" rel="nofollow noreferrer">tutorial of JupyterHub</a>, and I was used Helm to install JupyterHub on k8s. That config file such below:</p>
<p><code>config.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>proxy:
secretToken: "2fdeb3679d666277bdb1c93102a08f5b894774ba796e60af7957cb5677f40706"
singleuser:
storage:
dynamic:
storageClass: local-storage
</code></pre>
<p>Here, I was config a <code>local-storage</code> for JupyterHub, the <code>local-storage</code> was observed k8s: <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">Link</a>. And its <code>yaml</code> file
such like that:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Then I use <code>kubectl get storageclass</code> to check it work, I got the message below:</p>
<pre><code>NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 64m
</code></pre>
<p>So, I thought I deployed a storage for JupyterHub, but I so naive. I am so disappointed about that because my other Pods(JupyterHub) are all running. And I have been search some solutions so long, but also failed.</p>
<p>So now, my problems are:</p>
<ol>
<li><p>What is the true way to solve the PV problems? (Better using local storage.)</p>
</li>
<li><p>Is the local storage way will using other nodes disk not only master?</p>
</li>
<li><p>In fact, my lab had a could storage service, so if Q2 answer is No, and how I using my lab could storage service to deploy PV?</p>
</li>
</ol>
<hr />
<p>I had been addressed above problem with @Arghya Sadhu's solution. But now, I got a new problem is the Pod <code>hub-db-dir</code> also pending, it result my service <code>proxy-public</code> pending.</p>
<p>The description of <code>hub-db-dir</code> such below:</p>
<pre><code>Name: hub-7b9cbbcf59-jv49z
Namespace: jhub
Priority: 0
Node: <none>
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=7b9cbbcf59
release=jhub
Annotations: checksum/config-map: c20a64c7c9475201046ac620b057f0fa65ad6928744f7d265bc8705c959bce2e
checksum/secret: 1beaebb110d06103988476ec8a3117eee58d97e7dbc70c115c20048ea04e79a4
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hub-7b9cbbcf59
Containers:
hub:
Image: jupyterhub/k8s-hub:0.9.0
Port: 8081/TCP
Host Port: 0/TCP
Command:
jupyterhub
--config
/etc/jupyterhub/jupyterhub_config.py
--upgrade-db
Requests:
cpu: 200m
memory: 512Mi
Readiness: http-get http://:hub/hub/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jhub
POD_NAMESPACE: jhub (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'proxy.token' in secret 'hub-secret'> Optional: false
Mounts:
/etc/jupyterhub/config/ from config (rw)
/etc/jupyterhub/cull_idle_servers.py from config (rw,path="cull_idle_servers.py")
/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
/etc/jupyterhub/secret/ from secret (rw)
/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
/srv/jupyterhub from hub-db-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hub-token-vlgwz (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub-config
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub-secret
Optional: false
hub-db-dir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
hub-token-vlgwz:
Type: Secret (a volume populated by a Secret)
SecretName: hub-token-vlgwz
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 61s (x43 over 56m) default-scheduler 0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't find available persistent volumes to bind.
</code></pre>
<p>The information with <code>kubectl get pv,pvc,sc</code>.</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/hub-db-dir Pending local-storage 162m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/local-storage (default) kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 8h
</code></pre>
<p>So, how to fix it?</p>
| Guanzhou Ke | <p>In addition to @Arghya Sadhu's answer, in order to make it work using <a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/#enabling-smarter-scheduling-and-volume-binding" rel="nofollow noreferrer">local storage</a> you have to create a <code>PersistentVolume</code> manually.</p>
<p>For example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: hub-db-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: <path_to_local_volume>
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <name_of_the_node>
</code></pre>
<p>Then you can deploy the chart:</p>
<pre><code>helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
</code></pre>
<p>The <code>config.yaml</code> file can be left as is:</p>
<pre><code>proxy:
secretToken: "<token>"
singleuser:
storage:
dynamic:
storageClass: local-storage
</code></pre>
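<p>Once the chart is deployed you can verify that the claim has bound to the manually created volume (resource names assumed from the example above):</p>
<pre><code>kubectl get pv,pvc -n $NAMESPACE
# persistentvolumeclaim/hub-db-dir should now show STATUS Bound, VOLUME hub-db-pv
</code></pre>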
| kool |
<p>I have a Pod configuration file very similar to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#use-configmap-defined-environment-variables-in-pod-commands" rel="nofollow noreferrer">this example in the docs</a> where I set a few env variables from a configMap file.<br />
Now I need to add another variable but I need to <code>base64</code> encode it. I can easily do it when I take data from <code>values</code> by applying the <code>b64enc</code> function, but I don't know how to do it when getting the value from a <code>configMap</code></p>
<p>This is what I can do</p>
<pre><code>env:
- name: PLAIN_VALUE
valueFrom:
configMapKeyRef:
name: myconfig
key: PLAIN_VALUE
- name: ENCODED_VALUE_FROM_VALUES
value: {{ .Values.myConfig.plainValue | b64enc | quote }}
</code></pre>
<p>I would like to do something like the following</p>
<pre><code>env:
- name: ENCODED_VALUE
valueFrom:
configMapKeyRef:
name: myconfig
key: PLAIN_VALUE
transformation: b64enc
</code></pre>
<p>How can I <code>b64enc</code> the <code>valueFrom: configMapKeyRef: myconfig/PLAIN_VALUE</code>?<br />
P.S. <code>configMapRef</code> would also work, I can make a separate config file for that value.</p>
| Naigel | <p>In this scenario you should use secrets. They encode their values in <code>base64</code>.</p>
<p>You can easily create a secret using <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret-directly-with-kubectl" rel="nofollow noreferrer">kubectl command</a>, for example:</p>
<pre><code>kubectl create secret generic test-secret --from-literal=value='your_value'
</code></pre>
<p>And it works similarly to configmap when it comes to <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data" rel="nofollow noreferrer">passing encoded values to pods</a>.</p>
<pre><code>env:
- name: ENCODED_VALUE
valueFrom:
secretKeyRef:
          name: test-secret
key: value
</code></pre>
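<p>To double-check that the stored value is base64-encoded, you can read the secret back (the key name <code>value</code> is assumed from the command above):</p>
<pre><code>kubectl get secret test-secret -o jsonpath='{.data.value}'
</code></pre>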
| kool |
<p>I have an EKS node group with 2 nodes for compute workloads. I use a taint on these nodes and tolerations in the deployment. I have a deployment with 2 replicas I want these two pods to be spread on these two nodes like one pod on each node.</p>
<p>I tried using:</p>
<pre><code>affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- appname
</code></pre>
<p>Each pod is put on each node but if I updated the deployment file like changing its image name, it fails to schedule a new pod.</p>
<p>I also tried:</p>
<pre><code> topologySpreadConstraints:
- maxSkew: 1
topologyKey: type
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
type: compute
</code></pre>
<p>but they aren't spread evenly like 2 pods on a node.</p>
| Icarus | <p>Try adding:</p>
<pre><code>spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
</code></pre>
<p>By default K8s is trying to scale the new replicaset up first before it starts downscaling the old replicas. Since it cannot schedule new replicas (because antiaffinity) they are stuck in pending state.</p>
<p>Once you set the deployment's maxSurge=0, you tell k8s that you don't want the deployment to scale up first during update, and thus in result it can only scale down making place for new replicas to be scheduled.</p>
<p>Setting maxUnavailable=1 tells k8s to replace only one pod at a time.</p>
| Matt |
<p>I look at this help requirement:</p>
<pre><code>dependencies:
- name: postgresql
version: 8.6.2
repository: https://kubernetes-charts.storage.googleapis.com/
condition: postgresql.installdep.enable
</code></pre>
<p>Source: <a href="https://github.com/reportportal/kubernetes/blob/master/reportportal/v5/requirements.yaml" rel="nofollow noreferrer">https://github.com/reportportal/kubernetes/blob/master/reportportal/v5/requirements.yaml</a></p>
<p>Postgres 8 is very very old. I guess this version is just the version of some package.</p>
<p>But how can I get more information about this package?</p>
<p>I look at <a href="https://kubernetes-charts.storage.googleapis.com/" rel="nofollow noreferrer">https://kubernetes-charts.storage.googleapis.com/</a>. But this URL seems made for robots, not for humans.</p>
<p>How can I (as a human) find more details about this dependency?</p>
| guettli | <p><code>Requirements.yaml</code> is used to list <a href="https://helm.sh/docs/topics/charts/#managing-dependencies-with-the-dependencies-field" rel="nofollow noreferrer">Chart dependencies</a>. Those dependencies can be built using <code>helm dependency build</code> from the <code>requirements.lock</code> file.</p>
<p><code>Version</code> describes the chart version, not the image itself.</p>
<p>All the necessary information about the chart is described in <code>values.yaml</code> - there you can find information about the images to be installed, their versions etc. In this case it's postgresql:11.7.0.</p>
<p>You can retrieve information about the chart by using <code>helm show values <chart_name></code> (chart doesn't have to be installed in the cluster) or it can be found on chart's <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="nofollow noreferrer">github</a>/ helm hub repository.</p>
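<p>For example, assuming the repository above is added under the name <code>stable</code> (with Helm 2 the equivalent command is <code>helm inspect</code>):</p>
<pre><code>helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm show chart stable/postgresql --version 8.6.2   # chart metadata, including appVersion
helm show values stable/postgresql --version 8.6.2  # default values, including the image tag
</code></pre>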
| kool |
<p>I want to run different test suites on my helm release in different (partly manual) CI jobs.</p>
<p>How do I best execute these test suites from a CI job?</p>
<hr />
<p>Details:</p>
<p>With a single test suite, <code>helm test</code> is very helpful. But how can I easily tell <code>helm test</code> which test suite to execute?</p>
<p>Currently, I have only two test suites <code>A</code> and <code>B</code> and an environment variable <code>SUITE</code> I inject via helm install. The test job decides based on the value of <code>SUITE</code> which test suite to execute. But this injection is complex and I would like to have the possibility to execute multiple test suites sequentially or concurrently.</p>
<p>Thus I created two helm charts <code>A.yaml</code> and <code>B.yaml</code>. Can I somehow call <code>helm test</code> with a specific helm chart, e.g. <code>helm test general/A.yaml</code> (see <a href="https://stackoverflow.com/q/60818255/750378">Can Helm test be used to run separate suites?</a>)?</p>
<p>If not, what is the best approach? Using <code>deployment-A</code> and <code>deployment-B</code> with instances 0 and scale a deployment to 1 when we want to execute it? How do I then communicate the (un-)successful test execution (and log output) back to CI (gitlab in my case)?</p>
| DaveFar | <p>It's possible to choose test cases using <code>helm test --filter name=value</code> - see the <a href="https://helm.sh/docs/helm/helm_test/" rel="nofollow noreferrer">helm docs</a>.</p>
<p>Note that <code>name</code> here refers to the the <code>metadata.name</code> in the helm chart. It's possible to use other attributes set in the helm chart.</p>
| chrispduck |
<p>I'm trying to setup a cluster of one machine for now. I know that I can get the API server running and listening to some ports.</p>
<p>I am looking to issue commands against the master machine from my laptop.</p>
<p><code>KUBECONFIG=/home/slackware/kubeconfig_of_master kubectl get nodes</code> should send a request to the master machine, hit the API server, and get a response of the running nodes.</p>
<p>However, I am hitting issues with permissions. One is similar to <code>x509: certificate is valid for 10.61.164.153, not 10.0.0.1</code>. Another is a 403 if I hit the <code>kubectl proxy --port=8080</code> that is running on the master machine. </p>
<p>I think two solutions are possible, with a preferable one (B):</p>
<p>A. Add my laptop's ip address to the list of accepted ip addresses that API server or certificates or certificate agents holds. How would I do that? Is that something I can set in <code>kubeadm init</code>?</p>
<p>B. Add <code>127.0.0.1</code> to the list of accepted ip addresses that API server or certificates or certificate agents holds. How would I do that? Is that something I can set in <code>kubeadm init</code>?</p>
<p>I think B would be better, because I could create an ssh tunnel from my laptop to the remote machine and allow my teammates (if I ever have any) to do similarly.</p>
<p>Thank you,</p>
<p>Slackware</p>
| Slackware | <p>You should add <code>--apiserver-cert-extra-sans 10.0.0.1</code> to your <code>kubeadm init</code> command.</p>
<p>Refer to <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options</a></p>
<p>You should also use a config file:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
apiServer:
certSANs:
- 10.0.0.1
</code></pre>
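<p>Assuming the snippet above is saved as <code>kubeadm-config.yaml</code>, it is passed to kubeadm like this:</p>
<pre><code>kubeadm init --config kubeadm-config.yaml
</code></pre>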
<p>You can find all relevant info here: <a href="https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2" rel="nofollow noreferrer">https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2</a></p>
| Stéphane Beuret |
<p>Our GKE Autopilot cluster was recently upgraded to version 1.21.6-gke.1503, which apparently causes the <code>cluster-autoscaler.kubernetes.io/safe-to-evict=false</code> annotation to be banned.</p>
<p>I totally get this for deployments, as Google doesn't want a deployment preventing scale-down, but for jobs I'd argue this annotation makes perfect sense in certain cases. We start complex jobs that start and monitor other jobs themselves, which makes it hard to make them restart-resistant given the sheer number of moving parts.</p>
<p><strong>Is there any way to make it as unlikely as possible for job pods to be restarted/moved around when using Autopilot?</strong> Prior to switching to Autopilot, we used to make sure our jobs filled a single node by requesting all of its available resources; combined with a Guaranteed QoS class, this made sure the only way for a pod to be evicted was if the node somehow failed, which almost never happened. Now all we seem to have left is the Guaranteed QoS class, but that doesn't prevent pods from being evicted.</p>
| PLPeeters | <p>At this point the only thing left is to ask to bring back this feature on <a href="https://issuetracker.google.com" rel="nofollow noreferrer">IssueTracker</a> - raise a new feature reqest and hope for the best.</p>
<p>Link to this thread also as it contains quite a lot of troubleshooting and may be useful.</p>
| Wojtek_B |
<p>I have a local custom cluster i'm trying to run a php application with a MySQL database. I have exposed the MySQL service and deployment with PersistentVolumes and can access them fine through a local PHP instance but when trying to deploy Apache to run the web server my browser keeps rejecting the connection. </p>
<p>Ive tried to expose different ports on the deployment.yaml in the phpmyadmin deployment, i've tried port 80 and 8080 but they wouldn't expose correctly. Once i tried port 8088 they did deploy correctly but now my browser rejects the connection.</p>
<p>Ive tried going into the individual pod and run lsof to see if apache is listening on 80 and it is so im really at a loss with this.</p>
<pre class="lang-sh prettyprint-override"><code>root@ras1:/home/pi/k3s# ./k3s kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.1.110:6443 16d
mysql-service 10.42.1.79:3306 51m
phpmyadmin-service 10.42.1.85:8088 2m45s
root@ras1:/home/pi/k3s# ./k3s kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 16d
mysql-service LoadBalancer 10.43.167.186 192.168.1.110,192.168.1.111 3306:31358/TCP 49m
phpmyadmin-service LoadBalancer 10.43.126.107 192.168.1.110,192.168.1.111 8088:31445/TCP 10s
</code></pre>
<p>The Cluster IP is 192.168.1.110 for the node1 and 192.168.1.111 for node2 (where the deployment is running)</p>
<p>Thanks for the help.</p>
| virshan | <p>Managed to find a solution for this. It turns out my own ingress controller was already using ports 80 and 8080 as "LoadBalancer", so I created an ingress.yaml and linked it to my phpmyadmin service, which I set to "ClusterIP" rather than "LoadBalancer"; now I can access my PHP app through port 80.</p>
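<p>For reference, a minimal sketch of such an ingress.yaml (the exact apiVersion and annotations depend on your ingress controller and Kubernetes version; the service name and port are taken from the question):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: phpmyadmin-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: phpmyadmin-service
            port:
              number: 8088
</code></pre>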
| virshan |
<p>Note: I am not running locally on Minikube or something, but GKE - but could be any provider.</p>
<p>I want to be able to create users/contexts in K8s with openssl:</p>
<pre><code>openssl x509 -req -in juan.csr -CA CA_LOCATION/ca.crt -CAKey CA_LOCATION/ca.key -CAcreateserial -out juan.crt -days 500
</code></pre>
<p>How do I get the K8s <code>ca.crt</code> and <code>ca.key</code>? - I found this for <code>ca.crt</code>, but is this the way and still missing the ca.key?</p>
<pre><code>kubectl get secret -o jsonpath="{.items[?(@.type==\"kubernetes.io/service-account-token\")].data['ca\.crt']}" | base64 --decode
</code></pre>
<p>And, other way than logging into master node <code>/etc/kubernetes/pki/.</code></p>
| Chris G. | <p>I would suggest viewing the following <a href="https://kubernetes.io/docs/concepts/cluster-administration/certificates/#openssl" rel="nofollow noreferrer">documentation</a> on how to generate a ca.key and ca.crt for your kubernetes cluster. Please keep in mind this is not an official google document, however this may help you achieve what you are looking for.</p>
<p>Here are the commands found in the document.</p>
<p>Generate ca.key: <code>openssl genrsa -out ca.key 2048</code></p>
<p>Generate ca.crt: <code>openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt</code></p>
<p>EDIT </p>
<p>I found 2 unsupported documents <a href="https://deliciousbrains.com/ssl-certificate-authority-for-local-https-development/" rel="nofollow noreferrer">[1]</a> <a href="https://jamielinux.com/docs/openssl-certificate-authority/sign-server-and-client-certificates.html#create-a-key" rel="nofollow noreferrer">[2]</a> on generating a certificate and key with openssl, it should be applicable with kubernetes. </p>
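<p>As a rough sketch of the user/context part of the question (the file names and certificate subject below are placeholders), once you have the cluster's ca.crt and ca.key you can sign a user certificate and register it with kubectl:</p>
<pre><code># create a key and CSR for the new user, then sign it with the cluster CA
openssl genrsa -out juan.key 2048
openssl req -new -key juan.key -out juan.csr -subj "/CN=juan/O=dev"
openssl x509 -req -in juan.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out juan.crt -days 500

# register the user and a context in your kubeconfig
kubectl config set-credentials juan --client-certificate=juan.crt --client-key=juan.key --embed-certs=true
kubectl config set-context juan-context --cluster=<your-cluster> --user=juan
</code></pre>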
| Gustavo |
<p>Do you know how to configure the default network route for Kubernetes to reach the internet?
My cluster nodes (Ubuntu 18.04 with netplan) have 2 IPs exposed to the internet. When I installed Kubernetes, the first IP was the default route; I then changed the default route to the second IP on the system. But pods in Kubernetes continue to use the first one to reach the internet. They don't use the system configuration.
I have Kubernetes 1.17.5 with Canal networking deployed with Rancher. I can't find whether I should change a configuration or edit the iptables of docker0 to tell Kubernetes which route to use.</p>
| Arzhr | <p>You can change default network interface by adding IP address to <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options" rel="nofollow noreferrer"><code>--apiserver-advertise-address</code></a> flag in <code>kubeadm init</code>.</p>
<blockquote>
<p>The IP address the API Server will advertise it's listening on. If not set the default network
interface will be used.</p>
</blockquote>
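<p>For example (the IP below stands for the address you want the control plane to use):</p>
<pre><code>kubeadm init --apiserver-advertise-address=<second-ip> [any additional flags]
</code></pre>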
<p>When you join nodes to your cluster make sure you add correct API server IP address</p>
<pre><code> kubeadm join --apiserver-advertise-address <ip-address-used-in-init> [any additional flags]
</code></pre>
| kool |
<p>Our EKS cluster is terraform managed and was specified with EC2 Launch Template as terraform resource. Our aws_eks_node_group includes Launch template section as shown below.</p>
<pre><code>resource "aws_eks_node_group" eks_node_group" {
.........................
..........................
launch_template {
id = aws_launch_template.eksnode_template.id
version = aws_launch_template.eksnode_template.default_version
name = aws_launch_template.eksnode_template.name
}
}
</code></pre>
<p>However, after a while, EKS self-deployed the new Launch Template and linked it to the relevant auto-scaling group.</p>
<ol>
<li>Why this has happened at the first place and how to avoid it in the future?</li>
<li>How can we link the customer managed Launch Template to EKS Autoscaling Group via terraform again? I tried changing the name or version of the launch template, but it is still using one created by EKS (self-managed)</li>
</ol>
| Viji | <ol>
<li></li>
</ol>
<blockquote>
<p>The Amazon EKS API creates this launch template either by <strong>copying</strong> one you provide or by creating one automatically with default values in your account.</p>
</blockquote>
<ol start="2">
<li></li>
</ol>
<blockquote>
<p>We don't recommend that you modify auto-generated launch templates.</p>
</blockquote>
<p>(<a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html" rel="nofollow noreferrer">source</a>)</p>
| jc36 |
<p>Building a Microservices app(Node/React) with docker and kubernetes and I keep getting the following error when I run the <code>skaffold dev</code> command.</p>
<pre class="lang-sh prettyprint-override"><code>- stderr: "error: unable to recognize ~/infra/k8s/ingress-srv.yaml\": no matches for kind \"Ingress\" in version \"extensions/v1beta1\"\n"
</code></pre>
<p>This is my ingress-srv.yaml file:</p>
<pre class="lang-js prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-serv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: ticketing.dev
http:
paths:
- pathType: Prefix
- path: "/api/users/?(.*)"
backend:
service:
name: auth-srv
port:
number: 3000
</code></pre>
<p>And this is my skaffold.yaml file</p>
<pre class="lang-js prettyprint-override"><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: mutuadocker/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: "src/**/*.ts"
dest: .
</code></pre>
| Joseph Mutua | <p>You should change <code>apiVersion: extensions/v1beta1</code> to <code>apiVersion: networking.k8s.io/v1</code>
so your file would look like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-serv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: ticketing.dev
http:
paths:
        - path: "/api/users/?(.*)"
          pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
</code></pre>
<p>Refer to <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#ingress-v1-networking-k8s-io" rel="nofollow noreferrer">this documentation</a>.</p>
| kool |
<p>I am using minikube to learn about docker, but I have come across a problem.</p>
<p>I am following along with the examples in Kubernetes in Action, and I am trying to get a pod that I have pulled from my docker hub account, but I cannot make this pod visible.</p>
<p>if I run</p>
<pre><code>kubectl get pod
</code></pre>
<p>I can see that the pod is present.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
kubia 1/1 Running 1 6d22h
</code></pre>
<p>However when I do the first step to create a service</p>
<pre><code>kubectl expose rc kubia --type=LoadBalancer --name kubia-http service "kubia-http" exposed
</code></pre>
<p>I am getting this error returned</p>
<pre><code>Error from server (NotFound): replicationcontrollers "kubia" not found
Error from server (NotFound): replicationcontrollers "service" not found
Error from server (NotFound): replicationcontrollers "kubia-http" not found
Error from server (NotFound): replicationcontrollers "exposed" not found
</code></pre>
<p>Any ideas why I am getting this error and what I need to do to correct it?</p>
<p>I am using minikube v1.13.1 on mac Mojave (v10.14.6), and I can't upgrade because I am using a company supplied machine, and all updates are controlled by HQ.</p>
| Darren Guy | <p>In this book, the command used is <code>kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1</code>, which used to create a <code>ReplicationController</code> back when the book was written; however, this object is now deprecated.</p>
<p>Now the <code>kubectl run</code> command creates a standalone pod without a ReplicationController. So to expose it you should run:</p>
<pre><code>kubectl expose pod kubia --type=LoadBalancer --name kubia-http
</code></pre>
<p>In order to create replicated pods it is recommended to use a <code>Deployment</code>. To create one using the CLI you can simply run:</p>
<pre><code>kubectl create deployment <name_of_deployment> --image=<image_to_be_used>
</code></pre>
<p>It will create a deployment and one pod. And then it can be exposed similarly to previous pod exposure:</p>
<pre><code>kubectl expose deployment kubia --type=LoadBalancer --name kubia-http
</code></pre>
| kool |
<p>Not sure why this is happening but we're seeing old replicasets with active pods running in our Kubernetes cluster despite the fact the deployments they are attached to have been long deleted (up to 82 days old). Our deployments have <code>spec.replicas</code> set to a max of 2, however we're seeing up to 6/8 active pods in these deployments. </p>
<p>We are currently running k8s version 1.14.6. Also below is a sample deployment</p>
<pre><code>{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "xxxxxxxxxxxxxxxx",
"namespace": "default",
"annotations": {
"deployment.kubernetes.io/revision": "15",
}
},
"spec": {
"replicas": 2,
"selector": {
"matchLabels": {
"app": "xxxxxxxx"
}
},
"template": {
"spec": {
"containers": [
{
"name": "xxxxxxxx",
"image": "xxxxxxxx",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
}
],
"resources": {},
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": 1,
"maxSurge": 1
}
},
"minReadySeconds": 10,
"revisionHistoryLimit": 2147483647,
"progressDeadlineSeconds": 2147483647
},
"status": {
"observedGeneration": 15,
"replicas": 2,
"updatedReplicas": 2,
"readyReplicas": 2,
"availableReplicas": 2,
"conditions": [
{
"type": "Available",
"status": "True",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
}
]
}
}
</code></pre>
| NealR | <p>Changes to label selectors make existing pods fall out of ReplicaSet's scope, so if you change labels and label selector the pods are no longer "controlled" by ReplicaSet.</p>
<p>If you run <code>kubectl get pods <pod_name> -o yaml</code> where <code><pod_name></code> is a pod created by ReplicaSet, you will see owner reference.
However if you change labels and run the same command, owner reference is no longer visible because it fell out of ReplicaSet scope.</p>
<p>Also, if you create bare pods and they happen to have the same labels as the ReplicaSet, they will be acquired by the ReplicaSet. This happens because an RS is not limited to pods created by its template - it can acquire pods matching its selectors and terminate them if the desired number of replicas specified in the RS manifest is exceeded.</p>
<p>If a bare pod is created before RS with the same labels, RS will count this pod and deploy only required number of pods to achieve desired number of replicas.</p>
<p>You can also remove ReplicaSet without affecting any of its Pods by using <code>kubectl delete</code> with <code>--cascade=false</code> option.</p>
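<p>For example (the pod and ReplicaSet names are placeholders):</p>
<pre><code># show which controller (if any) owns a pod; empty output means no owner
kubectl get pod <pod_name> -o jsonpath='{.metadata.ownerReferences}'

# delete a ReplicaSet but leave (orphan) its pods
kubectl delete rs <rs_name> --cascade=false
</code></pre>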
| kool |
<p>I'm trying to setup GCR with kubernetes</p>
<p>and getting Error: ErrImagePull
Failed to pull image "eu.gcr.io/xxx/nodejs": rpc error: code = Unknown desc = Error response from daemon: pull access denied for eu.gcr.io/xxx/nodejs, repository does not exist or may require 'docker login'</p>
<p>Although I have setup the secret correctly in the service account, and added image pull secrets in the deployment spec</p>
<p>deployment.yml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.18.0 (06a2e56)
creationTimestamp: null
labels:
io.kompose.service: nodejs
name: nodejs
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: nodejs
spec:
containers:
- env:
- name: MONGO_DB
valueFrom:
configMapKeyRef:
key: MONGO_DB
name: nodejs-env
- name: MONGO_HOSTNAME
value: db
- name: MONGO_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: MONGO_PASSWORD
- name: MONGO_PORT
valueFrom:
configMapKeyRef:
key: MONGO_PORT
name: nodejs-env
- name: MONGO_USERNAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: MONGO_USERNAME
image: "eu.gcr.io/xxx/nodejs"
name: nodejs
imagePullPolicy: Always
ports:
- containerPort: 8080
resources: {}
imagePullSecrets:
- name: gcr-json-key
initContainers:
- name: init-db
image: busybox
command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
restartPolicy: Always
status: {}
</code></pre>
<p>used this to add the secret, and it said created</p>
<pre><code>kubectl create secret docker-registry gcr-json-key --docker-server=eu.gcr.io --docker-username=_json_key --docker-password="$(cat mycreds.json)" [email protected]
</code></pre>
<p>How can I debug this, any ideas are welcome!</p>
| Omar | <p>It looks like the issue is caused by lack of permission on the related service account
[email protected] which is missing Editor role.</p>
<p>Also, we need to restrict the scope to assign permissions only to push and pull images from Google Kubernetes Engine; this account will need the storage admin view permission, which can be assigned by following the instructions mentioned in this article [1].</p>
<p>Additionally, to set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option to mention this scope "storage-rw"[2]. </p>
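<p>For example, the scope can be set when creating the cluster (the cluster name below is a placeholder):</p>
<pre><code>gcloud container clusters create CLUSTER_NAME --scopes=storage-rw
</code></pre>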
<p>[1] <a href="https://cloud.google.com/container-registry/docs/access-control" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/access-control</a>
[2] <a href="https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine</a></p>
| Shafiq I |
<p>I have the sample cm.yml for configMap with nested json like data.</p>
<pre><code>kind: ConfigMap
metadata:
name: sample-cm
data:
spring: |-
rabbitmq: |-
host: "sample.com"
datasource: |-
url: "jdbc:postgresql:sampleDb"
</code></pre>
<p>I have to set environment variables, spring-rabbitmq-host=sample.com and spring-datasource-url= jdbc:postgresql:sampleDb in the following pod.</p>
<pre><code>kind: Pod
metadata:
name: pod-sample
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: sping-rabbitmq-host
valueFrom:
configMapKeyRef:
name: sample-cm
key: <what should i specify here?>
- name: spring-datasource-url
valueFrom:
configMapKeyRef:
name: sample-cm
key: <what should i specify here?>
</code></pre>
| Sarabesh n.r | <p>Unfortunately it won't be possible to pass values from the configmap you created as separate environment variables because it is read as a single string. </p>
<p>You can check it using <code>kubectl describe cm sample-cm</code></p>
<pre><code>Name: sample-cm
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","data":{"spring":"rabbitmq: |-\n host: \"sample.com\"\ndatasource: |-\n url: \"jdbc:postgresql:sampleDb\""},"kind":"Con...
Data
====
spring:
----
rabbitmq: |-
host: "sample.com"
datasource: |-
url: "jdbc:postgresql:sampleDb"
Events: <none>
</code></pre>
<p>ConfigMap needs key-value pairs so you have to modify it to represent separate values. </p>
<p>Simplest approach would be:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: sample-cm
data:
host: "sample.com"
url: "jdbc:postgresql:sampleDb"
</code></pre>
<p>so the values will look like this:</p>
<pre><code>kubectl describe cm sample-cm
Name: sample-cm
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","data":{"host":"sample.com","url":"jdbc:postgresql:sampleDb"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"s...
Data
====
host:
----
sample.com
url:
----
jdbc:postgresql:sampleDb
Events: <none>
</code></pre>
<p>and pass it to a pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: sping-rabbitmq-host
valueFrom:
configMapKeyRef:
name: sample-cm
key: host
- name: spring-datasource-url
valueFrom:
configMapKeyRef:
name: sample-cm
key: url
</code></pre>
| kool |
<p>I'm attempting to upgrade cert-manager in my kubernetes cluster. Currently the version installed was pre crd name change and I'm trying to clean up the old CRDs.</p>
<pre><code>> kubectl get crd | grep certmanager.k8s.io
certificates.certmanager.k8s.io 2020-01-31T08:25:56Z
challenges.certmanager.k8s.io 2020-01-31T08:25:56Z
clusterissuers.certmanager.k8s.io 2020-01-31T08:25:58Z
issuers.certmanager.k8s.io 2020-01-31T08:25:03Z
orders.certmanager.k8s.io 2020-01-31T08:25:08Z
</code></pre>
<p>After identifying the crds I delete them:</p>
<pre><code>> kubectl delete customresourcedefinition certificates.certmanager.k8s.io challenges.certmanager.k8s.io clusterissuers.certmanager.k8s.io issuers.certmanager.k8s.io orders.certmanager.k8s.io
customresourcedefinition.apiextensions.k8s.io "certificates.certmanager.k8s.io" deleted
customresourcedefinition.apiextensions.k8s.io "challenges.certmanager.k8s.io" deleted
customresourcedefinition.apiextensions.k8s.io "clusterissuers.certmanager.k8s.io" deleted
customresourcedefinition.apiextensions.k8s.io "issuers.certmanager.k8s.io" deleted
customresourcedefinition.apiextensions.k8s.io "orders.certmanager.k8s.io" deleted
</code></pre>
<p>Following deletion <code>kubectl get crd | grep certmanager.k8s.io</code> shows no crds but after about 30 seconds they are regenerated. Where do I need to look to identify what's causing them to regenerate.</p>
<p>All other cert-manager resources have been deleted.</p>
<p>This has come about because I don't seem to be able to view/edit/delete resources in the new CRD but applying changes updates the (hidden) resource. </p>
| Nick | <p>The behavior you are experiencing is probably caused by the Istio addon. When Istio is enabled in the cluster, the following resources are created:</p>
<ul>
<li>certificates.certmanager.k8s.io</li>
<li>challenges.certmanager.k8s.io</li>
<li>clusterissuers.certmanager.k8s.io</li>
<li>issuers.certmanager.k8s.io</li>
<li>orders.certmanager.k8s.io</li>
</ul>
<p>Istio is in charge of reconciling them periodically. This means that if you delete them manually they will get automatically recreated.</p>
<p>You can verify this by creating a sample cluster with Istio enabled and then running the following commands:</p>
<pre><code>kubectl get apiservices
kubectl get customresourcedefinitions.apiextensions.k8s.io
kubectl describe customresourcedefinitions.apiextensions.k8s.io certificates.certmanager.k8s.io
</code></pre>
<p>There is an addon with the label “addonmanager.kubernetes.io/mode=Reconcile” which by definition periodically reconciles, and this is the reason why the CRDs (managed by Istio) keep getting recreated. For details you can refer to this <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager" rel="nofollow noreferrer">URL</a>. Please try disabling the addon before doing the deletion.</p>
| Mahtab |
<p>I have end-to-end tests written in pytest, running on a Kubernetes cluster in Namespace <code>foo</code>. Now I want to add simple chaos engineering to the tests to check the resilience of my services. For this, I only need to delete specific pods within <code>foo</code> -- since K8s restarts the corresponding service, this simulates the service being temporarily unreachable.</p>
<p>What is a simple way to delete a specific pod in the current namespace with Python?</p>
<hr />
<p>What I have tried so far:</p>
<p>Since I did not find a suitable example in <a href="https://github.com/kubernetes-client/python/tree/master/examples" rel="noreferrer">https://github.com/kubernetes-client/python/tree/master/examples</a>, but the ones using pods looked quite complex,
I looked into <code>kubetest</code>, which seems very simple and elegant.</p>
<p>I wanted to use the <code>kube</code> fixture and then do something like this:</p>
<pre><code>pods = kube.get_pods()
for pod in pods:
if can_be_temporarily_unreachable(pod):
kube.delete(pod)
</code></pre>
<p>I thought calling pytest with parameter <code>--in-cluster</code> would tell <code>kubetest</code> to use the current cluster setup and not create new K8s resources.
However, <code>kubetest</code> wants to create a new namespace for each test case that uses the <code>kube</code> fixture, which I do not want.
Is there a way to tell <code>kubetest</code> not to create new namespaces but do everything in the current namespace?</p>
<p>Though <code>kubetest</code> looks very simple and elegant otherwise, I am happy to use another solution, too.
A simple solution that requires little time and maintenance and does not complicate (reading) the tests would be awesome.</p>
| DaveFar | <p>you can use <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md" rel="noreferrer"><code>delete_namespaced_pod</code></a>(from <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md" rel="noreferrer">CoreV1Api</a>) to delete specific pods within a namespace.</p>
<p>Here is an example:</p>
<pre><code>from kubernetes import client, config
from kubernetes.client.rest import ApiException
config.load_incluster_config() # or config.load_kube_config()
configuration = client.Configuration()
with client.ApiClient(configuration) as api_client:
api_instance = client.CoreV1Api(api_client)
namespace = 'kube-system' # str | see @Max Lobur's answer on how to get this
name = 'kindnet-mpkvf' # str | Pod name, e.g. via api_instance.list_namespaced_pod(namespace)
try:
api_response = api_instance.delete_namespaced_pod(name, namespace)
print(api_response)
except ApiException as e:
print("Exception when calling CoreV1Api->delete_namespaced_pod: %s\n" % e)
</code></pre>
| Tibebes. M |
<p>No terminal is coming up in Lens. The Lens terminal just shows <strong>connecting...</strong></p>
| Neeraj Singh Negi | <p><strong>Root Cause</strong></p>
<p>By default Lens uses PowerShell, but it needs the WSL shell. This issue can be solved by changing the shell to WSL; we also have to add the path for WSL in the Lens application.</p>
<p>In the backend Lens calls the <strong>WSL</strong> shell, but Lens is unable to find it.</p>
<p><strong>Solution</strong></p>
<p>We can solve this issue by setting up the system environment variables.</p>
<ol>
<li>Go to the Preferences and set <strong>Terminal</strong> as <strong>wsl.exe</strong>.</li>
<li>Set the environment for <strong>wsl.exe</strong>: go to System Variables and add the PATH.</li>
</ol>
<p><a href="https://i.stack.imgur.com/71WHh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/71WHh.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/YJywz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YJywz.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/KZSlj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KZSlj.png" alt="enter image description here" /></a></p>
| Neeraj Singh Negi |
<p>How can I delete release older than, for example, the 1st of October?
I mean updated before the 1st of October.<br>
Alternatively, delete all releases with the app version lower than _.</p>
<p>Helm ls output: </p>
<blockquote>
<p>|NAME|REVISION|UPDATED |STATUS |CHART |APP VERSION|NAMESPACE|<br>
|myapp| 8 |Fri Sep 27 17:27:20 2019|DEPLOYED|myapp.85|85 |default|</p>
</blockquote>
<p>The following command deletes just one.</p>
<blockquote>
<p>helm delete relase_name</p>
</blockquote>
<p>The following is not a great solution as well </p>
<blockquote>
<p>helm delete relase_name1 relase_name2 relase_name3</p>
</blockquote>
<p><strong>Note1:</strong> I don't want to delete all. There is an explanation of how to do that over here <a href="https://stackoverflow.com/questions/47817818/helm-delete-all-releases">Helm delete all releases</a> and I don't want to do that. However, I assume I need to use bash for this task.</p>
<p><strong>Note2:</strong> I have already read documentation it is not that big. There is nothing over there about filtering.
<a href="https://helm.sh/docs/helm/#helm-delete" rel="noreferrer">https://helm.sh/docs/helm/#helm-delete</a></p>
<p><strong>Note3:</strong> I have already looked into helm sources, I am not 100% sure but It doesn't look like it is possible <a href="https://github.com/helm/helm/tree/master/pkg/helm" rel="noreferrer">https://github.com/helm/helm/tree/master/pkg/helm</a></p>
<p>Thank you in advance!</p>
| Sergii Zhuravskyi | <p><em>Assuming Helm version 3+</em></p>
<p>You can use <code>jq</code> and <code>xargs</code> to accomplish this:</p>
<p><strong>Question 1</strong>, to delete releases that were last updated before <code>$TIMESTAMP</code> (in seconds):</p>
<pre><code>helm ls -o json | jq -r --argjson timestamp $TIMESTAMP '.[] | select (.updated | sub("\\..*";"Z") | sub("\\s";"T") | fromdate < now - $timestamp).name' | xargs -L1 helm uninstall
</code></pre>
<blockquote>
<p><code>sub("\\..*";"Z") | sub("\\s";"T")</code> converts the date format
Helm uses in their output to ISO 8601.</p>
</blockquote>
<p><strong>Question 2</strong>, to delete releases with an app version older than <code>$VERSION</code>:</p>
<pre><code>helm ls -o json | jq -r --argjson version $VERSION '.[] | select(.app_version | sub("\\.[0-9]$";"") | tonumber | . < $version).name' | xargs -L1 helm uninstall
</code></pre>
<blockquote>
<p><code>$VERSION</code> should be major or major.minor only (i.e. <code>2</code> or
<code>2.1</code>). don't use a patch number.</p>
</blockquote>
<p><strong>Question 3</strong>, to delete releases by their initial deploy date, you'll have to parse the contents of the <code>helm history RELEASE</code> command for each release.</p>
<p>I won't solve this one, but it would look something like:</p>
<pre><code>loop over helm ls
get release name
get first entry of helm history for that release
pass to jq and process like the answer for question 1
</code></pre>
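<p>A rough, untested bash sketch of that outline (assumes Helm 3; the <code>updated</code> field from <code>helm history</code> may need the same date massaging as in question 1):</p>
<pre><code>helm ls -o json | jq -r '.[].name' | while read -r release; do
  first_deployed=$(helm history "$release" -o json | jq -r '.[0].updated')
  echo "$release was first deployed at $first_deployed"
  # compare $first_deployed against your cutoff here, then:
  # helm uninstall "$release"
done
</code></pre>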
<p>Relevant Links:</p>
<ul>
<li><a href="https://stedolan.github.io/jq/manual/#select(boolean_expression)" rel="noreferrer">https://stedolan.github.io/jq/manual/#select(boolean_expression)</a></li>
<li><a href="https://stedolan.github.io/jq/manual/#sub(regex;tostring)sub(regex;string;flags)" rel="noreferrer">https://stedolan.github.io/jq/manual/#sub(regex;tostring)sub(regex;string;flags)</a></li>
<li><a href="https://stedolan.github.io/jq/manual/#Dates" rel="noreferrer">https://stedolan.github.io/jq/manual/#Dates</a></li>
<li><a href="https://helm.sh/docs/helm/helm_history/" rel="noreferrer">https://helm.sh/docs/helm/helm_history/</a></li>
</ul>
| ericfossas |
<p>Im starting with K8s and I have problem with my DaemonSet resource. When I apply it, it goes to <code>Running</code> state, then <code>Completed</code> and then <code>CrashLoopBackOff</code> and then again to running etc. I would like to have all my pods to be in <code>Running</code> state. Here is my manifest:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ds-test
labels:
app: busybox-ds
spec:
selector:
matchLabels:
name: busybox-ds
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 3
template:
metadata:
labels:
name: busybox-ds
spec:
containers:
- name: busybox-ds
image: busybox
</code></pre>
<p>Could somebody tell me what am I doing wrong?</p>
| Frendom | <p>The <code>busybox</code> image just runs <code>sh</code> as its command. If it's not being interacted with, it immediately exits, and hence why you see your pod go to the <code>Completed</code> state.</p>
<p>You need to have that image run a command that will keep its process running.</p>
<p>Using <code>tail -f /dev/null</code> is a common way to do this. So your manifest would look like:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
containers:
- name: busybox-ds
image: busybox
command:
- tail
- -f
- /dev/null
</code></pre>
| ericfossas |
<p>I am getting ErrImagePull when trying to create a deployment from an image hosted on my private helm docker registry. Based on the "server gave HTTP response to HTTPS client" error, I tried adding the --insecure-registry option in systemd drop-in as well as the daemon.json file on all worker nodes and master but it's still not working. What else should I try to troubleshoot?</p>
<p>edit: Perhaps it's because docker is using containerd?</p>
<pre class="lang-sh prettyprint-override"><code> $ sudo systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─docker.conf
Active: active (running) since Fri 2020-02-07 10:00:25 UTC; 4min 44s ago
Docs: https://docs.docker.com
Main PID: 27700 (dockerd)
Tasks: 14
Memory: 41.8M
CGroup: /system.slice/docker.service
└─27700 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
</code></pre>
<pre class="lang-sh prettyprint-override"><code> $ cat /etc/systemd/system/docker.service.d/docker.conf
DOCKER_OPTS="--insecure-registry 10.10.30.200:5000"
</code></pre>
<pre class="lang-sh prettyprint-override"><code> $ cat /etc/docker/daemon.json
</code></pre>
<pre class="lang-sh prettyprint-override"><code> {
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"insecure-registries": ["10.10.30.200:5000"]
}
</code></pre>
<pre class="lang-sh prettyprint-override"><code> $ curl 10.10.30.200:5000/v2/mybuild/tags/list
{"name":"mybuild","tags":["v1"]}
</code></pre>
<pre class="lang-sh prettyprint-override"><code> $ kubectl describe pod myweb-769d57d99-lz6xs
...
Normal Pulling 1s (x2 over 13s) kubelet, k8s-node2 Pulling image "10.10.30.200:5000/mybuild:v1"
Warning Failed 1s (x2 over 13s) kubelet, k8s-node2 Failed to pull image "10.10.30.200:5000/mybuild:v1": rpc error: code = Unknown desc = failed to resolve image "10.10.30.200:5000/mybuild:v1": no available registry endpoint: failed to do request: Head https://10.10.30.200:5000/v2/mybuild/manifests/v1: http: server gave HTTP response to HTTPS client
Warning Failed 1s (x2 over 13s) kubelet, k8s-node2 Error: ErrImagePull
</code></pre>
<pre class="lang-sh prettyprint-override"><code> $ cat deployment.yaml
</code></pre>
<pre class="lang-sh prettyprint-override"><code> ---
apiVersion: apps/v1
kind: Deployment metadata:
labels:
app: myweb
name: myweb
spec:
replicas: 1
selector:
matchLabels:
app: myweb
template:
metadata:
labels:
app: myweb
spec:
containers:
- image: 10.10.30.200:5000/mybuild:v1
imagePullPolicy: Always
name: myweb
---
apiVersion: v1
kind: Service
metadata:
labels:
app: myweb
name: myweb
spec:
ports:
- nodePort: 32223
port: 80
protocol: TCP
targetPort: 80
selector:
app: myweb
type: NodePort
</code></pre>
| bayman | <p>I reproduced your issue and the solution is to add </p>
<pre><code>{
"insecure-registry" : ["10.10.30.200:5000"]
}
</code></pre>
<p>on <strong>every node (workers and master)</strong> in <code>/etc/docker/daemon.json</code> in the cluster and restarting docker <code>sudo systemctl restart docker</code> to load new configuration.</p>
<p>You can follow <a href="https://guide.opencord.org/cord-6.0/prereqs/docker-registry.html" rel="nofollow noreferrer">this guide</a> on how to set up insecure private docker registry using helm. </p>
<p>Once the chart is successfully installed, add the entry to <code>daemon.json</code> on every node and restart docker. After that you can tag and push your image to the registry.</p>
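<p>For example, tagging and pushing could look like this (a minimal sketch that assumes the registry address from the question and a placeholder image name):</p>
<pre><code># tag a locally built image for the insecure registry and push it
docker tag mybuild:v1 10.10.30.200:5000/mybuild:v1
docker push 10.10.30.200:5000/mybuild:v1
</code></pre>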
<p>To check if the image was successfully pushed to your registry you can run:</p>
<p><code>curl -X GET http://10.10.30.200:5000/v2/_catalog</code> </p>
<p>and the output should be similar to this: </p>
<p><code>{"repositories":["ubuntu/myimage"]}</code></p>
| kool |
<p>I've been trying to create a deployment of docker image to Kubernetes cluster without luck, my deployment.yaml looks like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: application-deployment
labels:
app: application
spec:
serviceAccountName: gitlab
automountServiceAccountToken: false
containers:
- name: application
image: example.org:port1/foo/bar:latest
ports:
- containerPort: port2
volumes:
- name: foo
secret:
secretName: regcred
</code></pre>
<p>But it fails to get the image.</p>
<blockquote>
<p>Failed to pull image "example.org:port1/foo/bar:latest": rpc error: code = Unknown desc = Error response from daemon: Get <a href="https://example.org:port1/v2/foo/bar/manifests/latest" rel="noreferrer">https://example.org:port1/v2/foo/bar/manifests/latest</a>: denied: access forbidden</p>
</blockquote>
<p>The secret used in <code>deployment.yaml</code>, was created like this:</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=${CI_REGISTRY} --docker-username=${CI_REGISTRY_USER} --docker-password=${CI_REGISTRY_PASSWORD} --docker-email=${GITLAB_USER_EMAIL}
</code></pre>
<p><strong>Attempt #1: adding imagePullSecrets</strong></p>
<pre><code>...
imagePullSecrets:
- name: regcred
</code></pre>
<p>results in:</p>
<blockquote>
<p>Failed to pull image "example.org:port1/foo/bar:latest": rpc error: code = Unknown desc = Error response from daemon: Get <a href="https://example.org:port1/v2/foo/bar/manifests/latest" rel="noreferrer">https://example.org:port1/v2/foo/bar/manifests/latest</a>: unauthorized: HTTP Basic: Access denied</p>
</blockquote>
<p><strong>Solution:</strong></p>
<p>I've created deploy token under <em>Settings > Repository > Deploy Tokens > (created one with read_registry scope)</em></p>
<p>And added given values to environment variables and an appropriate line now looks like:</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=${CI_REGISTRY} --docker-username=${CI_DEPLOY_USER} --docker-password=${CI_DEPLOY_PASSWORD}
</code></pre>
<p>I've got the problematic line from tutorials & Gitlab docs, where they've described deploy tokens but further used problematic line in examples.</p>
| Penguin74 | <p>I reproduced your issue and the problem is with password you used while creating a repository's secret. When creating a secret for gitlab repository you have to use personal token created in gitlab instead of a password. </p>
<p>You can create a token by going to <code>Settings -> Access Tokens</code>. Then you have to pick a name for your token, expiration date and token's scope. </p>
<p>Then create a secret as previously by running </p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=$docker_server --docker-username=$docker_username --docker-password=$personal_token
</code></pre>
<p>While creating a pod you have to include </p>
<pre><code> imagePullSecrets:
- name: regcred
</code></pre>
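<p>For reference, here is a minimal sketch of where <code>imagePullSecrets</code> sits in the pod spec (the image and names are placeholders taken from the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: application-deployment
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: application
    image: example.org:port1/foo/bar:latest
</code></pre>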
| kool |
<p>I need to copy a file (.txt) to a PersistentVolume?</p>
<pre><code>kubectl cp <file-spec-src> <file-spec-dest>
</code></pre>
<p>I need to know the <code><file-spec-dest></code> for PersistentVolume.</p>
<p>Background: I have a single-node Kubernetes cluster<a href="https://i.stack.imgur.com/OdyjI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OdyjI.png" alt="enter image description here" /></a> (docker-desktop) running locally on my Mac. I am trying to copy a .txt file to a PersistentVolume (PV). I have created the PV and the PersistentVolumeClaim (PVC).</p>
<p>Note: I have been asked whether it would make more sense to use a pod instead of a PersistentVolume. The aim is that an image running as a Kubernetes Job will use the data in the .txt file.</p>
<p>PersistentVolume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
</code></pre>
<p>PersistentVolumeClaim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p>Here is what I get with <code>kubectl get pvc -o yaml</code>:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
creationTimestamp: "2021-02-18T15:06:19Z"
finalizers:
- kubernetes.io/pvc-protection
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/bind-completed: {}
f:pv.kubernetes.io/bound-by-controller: {}
f:volume.beta.kubernetes.io/storage-provisioner: {}
f:spec:
f:volumeName: {}
f:status:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2021-02-18T15:06:19Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:volumeMode: {}
manager: kubectl-create
operation: Update
time: "2021-02-18T15:06:19Z"
name: task-pv-claim
namespace: default
resourceVersion: "113659"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/task-pv-claim
uid: 5b825c41-cf4f-4c08-b90e-47e3fca557a1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
storageClassName: hostpath
volumeMode: Filesystem
volumeName: pvc-5b825c41-cf4f-4c08-b90e-47e3fca557a1
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 3Gi
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
| QBits | <p>The destination directory is the one that you use in pod/job manifest as <code>mountPath</code>. So if you choose to mount it in <code>/mnt/data</code> it will be your destination directory. For example:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
template:
spec:
volumes:
- name: task-pv-claim
hostPath:
path: /mnt/data
type: Directory
containers:
- name: pi
image: nginx
command: ["some_job"]
volumeMounts:
- name: task-pv-claim
mountPath: /mnt/data
restartPolicy: Never
</code></pre>
<p>So if you want to copy a file from the host into the directory you have mounted:</p>
<pre><code>kubectl cp <some_host_file> <pod_name>:/mnt/data
</code></pre>
<p>Besides that, since you are using <code>hostPath</code>, the specified host directory is shared with the pod, so you can also just place your file in <code>/mnt/data</code> on the node and it will be visible inside the pod.</p>
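<p>Alternatively, since you already created a PersistentVolumeClaim, the job could reference it directly instead of <code>hostPath</code>. A minimal sketch, assuming the claim name <code>task-pv-claim</code> from the question:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: task-pv-claim
      containers:
      - name: pi
        image: nginx
        command: ["some_job"]
        volumeMounts:
        - name: task-pv-storage
          mountPath: /mnt/data
      restartPolicy: Never
</code></pre>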
| kool |
<p>I use wscat and have successfully established a connection with the pod container (/bin/bash), but when I send a command to it, I receive no response. Could anyone tell me why?
<a href="https://i.stack.imgur.com/2HhQz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2HhQz.png" alt="enter image description here"></a> </p>
| user3398604 | <p>Posting @user3398604 solution as community wiki for better visibility:</p>
<blockquote>
<p>The K8s API server uses a websocket sub-protocol to exchange data. For input (stdin), the protocol requires each payload to start with a '\0' byte, i.e. the ASCII character with value zero, not the digit zero. wscat is therefore limited when interacting with a k8s pod, because it cannot send this invisible character.</p>
</blockquote>
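<p>As an illustration of the framing described above (a sketch only, not a wscat fix): the leading byte of a stdin payload is the byte value 0, not the character "0" (0x30). In bash:</p>
<pre><code># print a stdin-style payload and show its bytes: the first byte is 0x00
printf '\x00ls -la\n' | xxd
</code></pre>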
| kool |
<p>I'm using <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus" rel="nofollow noreferrer">this prometheus chart</a>. In the documentation it says</p>
<blockquote>
<p>In order to get prometheus to scrape pods, you must add annotations to the the pods as below:</p>
<pre><code>metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "8080"
</code></pre>
</blockquote>
<p>So I have created a service like this</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nodejs-client-service
labels:
app: nodejs-client-app
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "5000"
spec:
type: LoadBalancer
selector:
app: nodejs-client-app
ports:
- protocol: TCP
name: http
port: 80
targetPort: 5000
</code></pre>
<p>But my Service won't show up in the prometheus targets. What am I missing?</p>
| Jonas | <p>I ran into the same problem with the <code>stable/prometheus-operator</code> chart. I tried adding the annotations above both to the pods and to the service and neither worked.</p>
<p>For me, the solution was to add a <a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md" rel="nofollow noreferrer">ServiceMonitor</a> object. Once added, Prometheus dynamically discovered my service:</p>
<p><a href="https://i.stack.imgur.com/shgGp.png" rel="nofollow noreferrer">Fig 1: target list</a></p>
<p><a href="https://i.stack.imgur.com/sLRzq.png" rel="nofollow noreferrer">Fig 2: dynamically added scrape_config</a></p>
<h2>Solution Example</h2>
<p>This single command fixed the problem: <code>kubectl apply -f service-monitor.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code># service-monitor.yml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
release: prom
name: eztype
namespace: default
spec:
endpoints:
- path: /actuator/prometheus
port: management
namespaceSelector:
matchNames:
- default
selector:
matchLabels:
app.kubernetes.io/name: eztype
</code></pre>
<p>Here, my pods and service are labeled with the name <code>eztype</code> (matching the selector above) and expose metrics on port 8282 under the indicated path. For completeness, here's the relevant part of my service definition:</p>
<pre class="lang-yaml prettyprint-override"><code># service definition (partial)
spec:
clusterIP: 10.128.156.246
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: management
port: 8282
protocol: TCP
targetPort: 8282
</code></pre>
<p>It's worth noting that ServiceMonitor objects are used in the Prometheus chart itself:</p>
<pre><code>$ kubectl get servicemonitors -n monitor
NAME AGE
prom-prometheus-operator-alertmanager 14d
prom-prometheus-operator-apiserver 14d
prom-prometheus-operator-coredns 14d
prom-prometheus-operator-grafana 14d
prom-prometheus-operator-kube-controller-manager 14d
prom-prometheus-operator-kube-etcd 14d
prom-prometheus-operator-kube-proxy 14d
prom-prometheus-operator-kube-scheduler 14d
prom-prometheus-operator-kube-state-metrics 14d
prom-prometheus-operator-kubelet 14d
prom-prometheus-operator-node-exporter 14d
prom-prometheus-operator-operator 14d
prom-prometheus-operator-prometheus 14d
</code></pre>
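<p>One thing to watch for (an assumption based on the chart's defaults, so verify it in your cluster): the operator only picks up ServiceMonitors whose labels match the <code>serviceMonitorSelector</code> of the Prometheus resource, which is why the <code>release: prom</code> label is set above. You can check the selector with:</p>
<pre><code>kubectl get prometheus -n monitor -o jsonpath='{.items[0].spec.serviceMonitorSelector}'
</code></pre>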
| Asa |
<p>I successfully use cri-o to run pod and container, following the <a href="https://github.com/cri-o/cri-o/blob/master/tutorials/setup.md" rel="nofollow noreferrer">guide</a> and <a href="https://github.com/cri-o/cri-o/blob/master/tutorials/crictl.md" rel="nofollow noreferrer">tutorial</a>, whose default <code>cgroup_manager</code> is <code>cgroupfs</code>.</p>
<p>when I tried to set <code>cgroup_manager = "systemd"</code> in <code>/etc/crio/crio.conf</code> and restart <code>crio</code> service.</p>
<p>then, I tried the same steps in <a href="https://github.com/cri-o/cri-o/blob/master/tutorials/crictl.md" rel="nofollow noreferrer">tutorial</a></p>
<pre><code>POD_ID=$(sudo crictl runp test/testdata/sandbox_config.json)
</code></pre>
<p>got the error below:</p>
<pre><code>FATA[0000] run pod sandbox failed: rpc error: code = Unknown desc = cri-o configured with systemd cgroup manager, but did not receive slice as parent: /Burstable/pod_123-456
</code></pre>
<p>the <code>sandbox_config.json</code> is same as <a href="https://github.com/cri-o/cri-o/blob/master/test/testdata/sandbox_config.json" rel="nofollow noreferrer">sandbox_config.json</a></p>
<p>How to use cri-o to start pod and container when <code>cgroup_manager=systemd</code>? Is there a sample?</p>
| Jiaheng Tao | <p>When you switch the cgroup manager to systemd in /etc/crio/crio.conf, you have to modify the pod yaml/json to give the cgroup_parent a slice instead.
So in your sandbox_config.json change</p>
<pre><code>"linux": {
"cgroup_parent": "/Burstable/pod_123-456",
</code></pre>
<p>to something like this</p>
<pre><code>"linux": {
"cgroup_parent": "podabc.slice",
</code></pre>
<p>Try re-creating your pod and it should start up fine now.</p>
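<p>To double-check, a sketch that assumes the same file paths as in the question:</p>
<pre><code># confirm which cgroup manager crio is configured with
grep -i cgroup_manager /etc/crio/crio.conf

# restart crio and re-create the sandbox with the updated config
sudo systemctl restart crio
POD_ID=$(sudo crictl runp test/testdata/sandbox_config.json)
sudo crictl inspectp "$POD_ID"
</code></pre>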
| Urvashi |